# Eric has a treehouse 14ft above the ground. How long would it take a water balloon dropped from the treehouse to fall to the ground?

##### 1 Answer
Feb 27, 2016

$14 f t = 14 \times 0.3048 = 4.27 m$

The distance that a body falls vertically in time $t$ is $\frac{g {t}^{2}}{2}$, so

$\frac{9.81 {t}^{2}}{2} = 4.27$

${t}^{2} = 4.27 \times \frac{2}{9.81} = 0.871$

$t = \sqrt{0.871} = 0.933 s$
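A quick numerical check of this answer (a minimal sketch using the same foot-to-metre conversion and $g = 9.81\ m/s^2$ as above):

```python
import math

def fall_time(height_ft, g=9.81, ft_to_m=0.3048):
    """Time for an object to fall `height_ft` feet from rest, ignoring air resistance."""
    height_m = height_ft * ft_to_m         # convert feet to metres
    return math.sqrt(2.0 * height_m / g)   # from h = g*t^2/2

print(round(fall_time(14), 3))  # ~0.933 seconds
```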
## anonymous one year ago
Use synthetic division to determine whether the number k is an upper or lower bound (as specified) for the real zeros of the function f. k = 2; f(x) = 2x^3 + 4x^2 + 2x - 4; Lower bound? I believe that it is a lower bound, is that correct? How would I solve this?

1. Mertsj Possible zeros: $\pm\frac{4}{1},\pm\frac{4}{2},\pm\frac{2}{1},\pm\frac{2}{2}, \pm\frac{1}{1},\pm\frac{1}{2}$ or: $\pm4, \pm2, \pm1, \pm\frac{1}{2}$
2. Mertsj $f(x)=2(x^3+2x^2+x-2)=0$ $x^3+2x^2+x-2=0$
3. Mertsj (drawing: synthetic division of $x^3+2x^2+x-2$ by 2)
4. Mertsj (drawing: synthetic division of $x^3+2x^2+x-2$ by 1/2)
5. Mertsj So we see there is a zero between 1/2 and 2, since f(1/2) is negative and f(2) is positive. Synthetic division by 2 leaves no negative numbers in the bottom row (1, 4, 9, 16), so 2 is an upper bound, not a lower bound.
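A small script (the helper function name here is just for illustration) that carries out the synthetic division and applies the upper-bound test, namely that every entry in the bottom row is non-negative:

```python
def synthetic_division(coeffs, k):
    """Divide a polynomial (coefficients in descending order) by (x - k).
    Returns the bottom row: quotient coefficients followed by the remainder."""
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + k * row[-1])
    return row

coeffs = [2, 4, 2, -4]           # f(x) = 2x^3 + 4x^2 + 2x - 4
row = synthetic_division(coeffs, 2)
print(row)                       # [2, 8, 18, 32]
# Every entry is non-negative, so k = 2 is an upper bound for the real zeros of f.
print(all(v >= 0 for v in row))  # True
```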
# Bayesian survival analysis in R

Survival analysis studies the distribution of the time to an event: it is the branch of statistics concerned with the expected duration of time until one or more events happen, such as death in biological organisms or failure in mechanical systems. It is at the core of epidemiological data analysis, and its applications span many fields across medicine, biology, engineering, and social science; the same topic is called reliability analysis in engineering, duration analysis in economics, and event history analysis in sociology. Survival analyses are particularly common in health and medical research, where a classic example of a survival outcome is the time from diagnosis of a disease until the occurrence of death. Considering $T$ as the random variable that measures the time to event, the survival function $S(t)$ is defined as the probability that $T$ is greater than a given time $t$, i.e. $S(t) = P(T > t)$.

Survival analysis is normally carried out using parametric, semi-parametric, or non-parametric models to estimate the survival rate in clinical research. Over the last few years there has been increased interest in Bayesian approaches, due to their ability to handle design and analysis issues in clinical research. Although Bayesian approaches to the analysis of survival data can provide a number of benefits, they are less widely used than classical (e.g. likelihood-based) approaches; this may be in part due to a relative absence of user-friendly implementations of Bayesian survival models.

## R packages

R is one of the main tools for this sort of analysis, thanks to the survival package, the cornerstone of the entire R survival-analysis edifice: the object created by its `Surv()` function, which contains failure time and censoring information, is the basic survival-analysis data structure in R (Dr. Terry Therneau, the package author, began working on it in 1986). The `survfit` function computes the Kaplan-Meier estimator for truncated and/or censored data, and the km.ci package provides confidence intervals and bands for that estimator. For Bayesian modelling, several options exist:

* **spBayesSurv** (Bayesian Modeling and Analysis of Spatially Correlated Survival Data): its function `indeptCoxph()` fits a Bayesian proportional hazards (Cox) model (Zhou, Hanson and Zhang, 2018) for non-spatial right-censored time-to-event data, `spCopulaCoxph()` fits spatial copula models, and the newer `survregbayes()` fits three popular semiparametric models (proportional hazards, accelerated failure time, and proportional odds) with non-, iid-, CAR-, or GRF-frailties.
* **rstanarm** ("Bayesian Survival Analysis Using the rstanarm R Package", Brilleman et al., 2020) facilitates Bayesian regression modelling by providing a user-friendly interface (users specify their model using customary R formula syntax and data frames) and using the Stan software (a C++ library for Bayesian inference) for the back-end estimation. Its survival functionality includes standard parametric (exponential, Weibull, Gompertz) and flexible parametric (spline-based) hazard models, as well as standard parametric accelerated failure time (AFT) models; all types of censoring (left, right, interval) are allowed, as are delayed entry (left truncation), time-varying covariates, time-varying effects, and frailty effects. The authors anticipate these implementations will increase the uptake of Bayesian survival analysis in applied research.
* Other packages include **BMA** (Bayesian model averaging and variable selection for linear models, generalized linear models and survival models, i.e. Cox regression), **relsurv** for relative survival, **splinesurv** for nonparametric Bayesian survival analysis, and **BACCO**, a bundle for Bayesian analysis of random functions.

A standard reference is Ibrahim, Chen, and Sinha, *Bayesian Survival Analysis*: "Many books have been published concerning survival analysis or Bayesian methods; Bayesian Survival Analysis is the first comprehensive treatment that combines these two important areas of statistics. ... Ibrahim, Chen, and Sinha have made an admirable accomplishment on the subject in a well-organized and easily accessible fashion." (Journal of the American Statistical Association). General Bayesian textbooks such as Kruschke's *Doing Bayesian Data Analysis* (2nd ed.) and McElreath's *Statistical Rethinking* typically do not cover survival analysis, while Moore (2016) provides a nice introduction to (non-Bayesian) survival analysis with R. Tutorials by Austin Rochford show how to fit and analyze Bayesian survival models in Python using PyMC3, including a semiparametric Cox model via data augmentation and an analysis of a mastectomy data set from R's HSAUR package.

## Question: input parameters of indeptCoxph() in spBayesSurv

I am going through R's function `indeptCoxph()` in the spBayesSurv package, which fits a Bayesian Cox model, and I am confused by some of the input parameters to this function. In the package's example, the authors include a vector "s" which was used to initially simulate the survival-time data as well as the predictors; I'm not sure what this "s" is. What is the role of the "prediction" input parameter? Should it not contain only the predictor covariates? Given that my data is just a set of survival times between 0 and 100, along with censored (yes/no) information, how would I use this function and how should I handle the input "s"? (I have also posted this on Stack Overflow, but I am asking here too since I would like to understand the theory behind this model.)

## Answer

Briefly speaking, you just need to ignore the `spred=s0` in the prediction settings; that is, `prediction=list(xpred=xpred)` is sufficient. The "s" in the package example belongs to the spatial part of the package: the example is conducted under the framework of spatial copula models (i.e. the function `spCopulaCoxph()`). Alternatively, the newly developed function `survregbayes()` (https://rdrr.io/cran/spBayesSurv/man/survregbayes.html) is more user-friendly to use; it fits the three popular semiparametric survival models mentioned above. To be more clear, a new example is attached at the end.
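To make the censoring mechanics concrete (the question's data is just times plus a censoring indicator), here is a minimal from-scratch sketch of Bayesian inference for right-censored survival times under an exponential model, using a grid approximation to the posterior of the rate parameter. This is an illustration of the idea only, not the spBayesSurv or rstanarm API; the data and the Gamma prior are made up.

```python
import numpy as np

# Toy right-censored data: observed times and an event indicator
# (1 = event observed, 0 = right-censored).
times = np.array([5.0, 12.0, 33.0, 47.0, 60.0, 81.0, 95.0, 100.0])
event = np.array([1, 1, 1, 0, 1, 0, 1, 0])

def log_lik(lam):
    # Exponential model: events contribute log f(t) = log(lam) - lam*t,
    # censored observations contribute log S(t) = -lam*t.
    return np.sum(event * np.log(lam) - lam * times)

# Grid approximation to the posterior with an (assumed) Gamma(a, b) prior on the rate.
a, b = 1.0, 100.0
grid = np.linspace(1e-4, 0.1, 2000)
log_post = np.array([log_lik(l) for l in grid]) + (a - 1.0) * np.log(grid) - b * grid
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, grid)            # normalise the posterior density

mean_rate = np.trapz(grid * post, grid)
print("posterior mean hazard rate:", round(mean_rate, 4))
print("median survival at that rate:", round(np.log(2) / mean_rate, 1), "time units")
```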
# Carnot's theorem (thermodynamics)

Carnot's theorem, developed in 1824 by Nicolas Léonard Sadi Carnot, also called Carnot's rule, is a principle that specifies limits on the maximum efficiency that any heat engine can obtain. Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs can't have efficiencies greater than a reversible heat engine operating between the same reservoirs. A corollary of this theorem is that every reversible heat engine operating between a pair of heat reservoirs is equally efficient, regardless of the working substance employed or the operation details. Since a Carnot heat engine is also a reversible engine, the efficiency of all reversible heat engines equals the efficiency of the Carnot heat engine, which depends solely on the temperatures of its hot and cold reservoirs.

The maximum efficiency (i.e., the Carnot heat engine efficiency) of a heat engine operating between hot and cold reservoirs, denoted as ${\displaystyle H}$ and ${\displaystyle C}$ respectively, is the ratio of the temperature difference between the reservoirs to the hot reservoir temperature, expressed in the equation ${\displaystyle \eta _{\text{max}}={\frac {T_{\mathrm {H} }-T_{\mathrm {C} }}{T_{\mathrm {H} }}}}$, where ${\displaystyle T_{\mathrm {H} }}$ and ${\displaystyle T_{\mathrm {C} }}$ are the absolute temperatures of the hot and cold reservoirs, respectively, the efficiency ${\displaystyle \eta }$ is the ratio of the work done by the engine (to the surroundings) to the heat drawn out of the hot reservoir (to the engine), and the subscript ${\displaystyle {\text{max}}}$ stands for maximum, so ${\displaystyle \eta _{\text{max}}}$ represents the maximum efficiency. ${\displaystyle \eta _{\text{max}}}$ is greater than zero if and only if there is a temperature difference between the two thermal reservoirs. Since ${\displaystyle \eta _{\text{max}}}$ is the upper limit of all reversible and irreversible heat engine efficiencies, it is concluded that work from a heat engine can be produced if and only if there is a temperature difference between the two thermal reservoirs connected to the engine.

Carnot's theorem is a consequence of the second law of thermodynamics. Historically, it was based on contemporary caloric theory, and preceded the establishment of the second law.[1]

## Proof

An impossible situation: A heat engine cannot drive a less efficient (reversible) heat engine without violating the second law of thermodynamics. Quantities in this figure are the absolute values of energy transfers (heat and work).

The proof of the Carnot theorem is a proof by contradiction, or reductio ad absurdum (a method to prove a statement by assuming its falsity and logically deriving a false or contradictory statement from this assumption), based on a situation like the right figure where two heat engines with different efficiencies are operating between two thermal reservoirs at different temperatures. The relatively hotter reservoir is called the hot reservoir and the other reservoir is called the cold reservoir. A (not necessarily reversible) heat engine ${\displaystyle M}$ with a greater efficiency ${\displaystyle \eta _{_{M}}}$ is driving a reversible heat engine ${\displaystyle L}$ with a lesser efficiency ${\displaystyle \eta _{_{L}}}$, causing the latter to act as a heat pump.
The requirement for the engine ${\displaystyle L}$ to be reversible is necessary to explain work ${\displaystyle W}$ and heat ${\displaystyle Q}$ associated with it by using its known efficiency. However, since ${\displaystyle \eta _{_{M}}>\eta _{_{L}}}$, the net heat flow would be backwards, i.e., into the hot reservoir: ${\displaystyle Q_{\text{h}}^{\text{out}}=Q<{\frac {\eta _{_{M}}}{\eta _{_{L}}}}Q=Q_{\text{h}}^{\text{in}},}$ where ${\displaystyle Q}$ represents heat, ${\displaystyle {\text{in}}}$ for input to an object denoted by the subscript, ${\displaystyle {\text{out}}}$ for output from an object denoted by the subscript, and ${\displaystyle h}$ for the hot thermal reservoir. If heat ${\displaystyle Q_{\text{h}}^{\text{out}}}$ flows from the hot reservoir then it has the sign of + while if ${\displaystyle Q_{\text{h}}^{\text{in}}}$ flows to the hot reservoir then it has the sign of +. This expression can be easily derived by using the definition of the efficiency of a heat engine, ${\displaystyle \eta =W/Q_{h}^{out}}$, where work and heat in this expression are net quantities per engine cycle, and the conservation of energy for each engine as shown below. The sign convention of work ${\displaystyle W}$, with which the sign of + for work done by an engine to its surroundings, is employed. The above expression means that heat into the hot reservoir from the engine pair (can be considered as a single engine) is greater than heat into the engine pair from the hot reservoir (i.e., the hot reservoir continuously gets energy). A reversible heat engine with a low efficiency delivers more heat (energy) to the hot reservoir for a given amount of work (energy) to this engine when it is being driven as a heat pump. All these mean that heat can transfer from cold to hot places without external work, and such a heat transfer is impossible by the second law of thermodynamics. • It may seem odd that a hypothetical reversible heat pump with a low efficiency is used to violate the second law of thermodynamics, but the figure of merit for refrigerator units is not the efficiency, ${\displaystyle W/Q_{h}^{out}}$, but the coefficient of performance (COP),[2] which is ${\displaystyle Q_{c}^{out}/W}$ where this ${\displaystyle W}$ has the sign opposite to the above (+ for work done to the engine). Let's find the values of work ${\displaystyle W}$and heat ${\displaystyle Q}$ depicted in the right figure in which a reversible heat engine ${\displaystyle L}$ with a less efficiency ${\displaystyle \eta _{_{L}}}$ is driven as a heat pump by a heat engine ${\displaystyle M}$ with a more efficiency ${\displaystyle \eta _{_{M}}}$. The definition of the efficiency is ${\displaystyle \eta =W/Q_{h}^{out}}$ for each engine and the following expressions can be made: ${\displaystyle \eta _{M}={\frac {W_{M}}{Q_{h}^{out,M}}}={\frac {\eta _{M}Q}{Q}}=\eta _{M},}$ ${\displaystyle \eta _{L}={\frac {W_{L}}{Q_{h}^{out,L}}}={\frac {-\eta _{M}Q}{-{\frac {\eta _{M}}{\eta _{L}}}Q}}=\eta _{L}.}$ The denominator of the second expression, ${\displaystyle Q_{h}^{out,L}=-{\frac {\eta _{M}}{\eta _{L}}}Q}$, is made to make the expression to be consistent, and it helps to fill the values of work and heat for the engine ${\displaystyle L}$. For each engine, the absolute value of the energy entering the engine, ${\displaystyle E_{abs}^{\text{in}}}$, must be equal to the absolute value of the energy leaving from the engine, ${\displaystyle E_{abs}^{\text{out}}}$. 
Otherwise, energy is continuously accumulated in an engine, or the conservation of energy is violated by taking more energy out of an engine than is input to it:

${\displaystyle E_{\text{M,abs}}^{in}=Q=(1-\eta _{M})Q+\eta _{M}Q=E_{\text{M,abs}}^{out},}$

${\displaystyle E_{\text{L,abs}}^{in}=\eta _{M}Q+\eta _{M}Q\left({\frac {1}{\eta _{L}}}-1\right)={\frac {\eta _{M}}{\eta _{L}}}Q=E_{\text{L,abs}}^{out}.}$

In the second expression, ${\displaystyle |Q_{h}^{out,L}|=|-{\frac {\eta _{M}}{\eta _{L}}}Q|}$ is used to find the term ${\displaystyle \eta _{M}Q\left({\frac {1}{\eta _{L}}}-1\right)}$ describing the amount of heat taken from the cold reservoir, completing the absolute value expressions of work and heat in the right figure. Having established that the right figure values are correct, Carnot's theorem may be proven for irreversible and reversible heat engines as shown below.[3]

### Reversible engines

To see that every reversible engine operating between reservoirs at temperatures ${\displaystyle T_{1}}$ and ${\displaystyle T_{2}}$ must have the same efficiency, assume that two reversible heat engines have different efficiencies, and let the relatively more efficient engine ${\displaystyle M}$ drive the relatively less efficient engine ${\displaystyle L}$ as a heat pump. As the right figure shows, this will cause heat to flow from the cold to the hot reservoir without external work, which violates the second law of thermodynamics. Therefore, both (reversible) heat engines have the same efficiency, and we conclude that:

All reversible heat engines that operate between the same two thermal (heat) reservoirs have the same efficiency.

The reversible heat engine efficiency can be determined by analyzing a Carnot heat engine as one example of a reversible heat engine.

This conclusion is an important result because it helps establish the Clausius theorem, which implies that the change in entropy ${\displaystyle S}$ is unique for all reversible processes:[4]

${\displaystyle \Delta S=\int _{a}^{b}{\frac {dQ_{\text{rev}}}{T}}}$

since the entropy change made during a transition from a thermodynamic equilibrium state ${\displaystyle a}$ to a state ${\displaystyle b}$ in a V-T (Volume-Temperature) space is the same over all reversible process paths between these two states. If this integral were not path independent, then entropy would not be a state variable.[5]

### Irreversible engines

Consider two engines: ${\displaystyle M}$, a relatively more efficient irreversible engine, and ${\displaystyle L}$, a relatively less efficient reversible engine, and construct the machine described in the right figure (${\displaystyle M}$ drives ${\displaystyle L}$ as a heat pump). This machine then violates the second law of thermodynamics. Since a Carnot heat engine is a reversible heat engine, with the conclusion in the discussion about two reversible heat engines above, we have the first part of Carnot's theorem:

No irreversible heat engine is more efficient than a Carnot heat engine operating between the same two thermal reservoirs.
## Definition of thermodynamic temperature

The efficiency of a heat engine is the work done by the engine divided by the heat introduced to the engine per engine cycle, or

${\displaystyle \eta ={\frac {w_{\text{cy}}}{q_{H}}}={\frac {q_{H}-q_{C}}{q_{H}}}=1-{\frac {q_{C}}{q_{H}}}}$ (1)

where ${\displaystyle w_{cy}}$ is the work done by the engine, ${\displaystyle q_{C}}$ is the heat to the cold reservoir from the engine, and ${\displaystyle q_{H}}$ is the heat to the engine from the hot reservoir, per cycle. Thus, the efficiency depends only on ${\displaystyle {\frac {q_{C}}{q_{H}}}}$.[6]

Because all reversible heat engines operating between temperatures ${\displaystyle T_{1}}$ and ${\displaystyle T_{2}}$ must have the same efficiency, the efficiency of a reversible heat engine is a function of only the two reservoir temperatures:

${\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{H},T_{C})}$. (2)

In addition, a reversible heat engine operating between temperatures ${\displaystyle T_{1}}$ and ${\displaystyle T_{3}}$ must have the same efficiency as one consisting of two cycles, one between ${\displaystyle T_{1}}$ and another (intermediate) temperature ${\displaystyle T_{2}}$, and the second between ${\displaystyle T_{2}}$ and ${\displaystyle T_{3}}$ (${\displaystyle T_{1}<T_{2}<T_{3}}$). This can only be the case if

${\displaystyle f(T_{1},T_{3})={\frac {q_{3}}{q_{1}}}={\frac {q_{2}q_{3}}{q_{1}q_{2}}}=f(T_{1},T_{2})f(T_{2},T_{3})}$. (3)

Specialize to the case where ${\displaystyle T_{1}}$ is a fixed reference temperature: the temperature of the triple point of water, assigned the value 273.16. (Of course any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.) Then for any ${\displaystyle T_{2}}$ and ${\displaystyle T_{3}}$,

${\displaystyle f(T_{2},T_{3})={\frac {f(T_{1},T_{3})}{f(T_{1},T_{2})}}={\frac {273.16\cdot f(T_{1},T_{3})}{273.16\cdot f(T_{1},T_{2})}}.}$

Therefore, if thermodynamic temperature is defined by

${\displaystyle T'=273.16\cdot f(T_{1},T),}$

then the function ${\displaystyle f}$, viewed as a function of thermodynamic temperature, is

${\displaystyle f(T_{2},T_{3})={\frac {T_{3}'}{T_{2}'}}.}$

It follows immediately that

${\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{H},T_{C})={\frac {T_{C}'}{T_{H}'}}}$. (4)

Substituting this equation back into the above equation ${\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{H},T_{C})}$ gives a relationship for the efficiency in terms of thermodynamic temperatures:

${\displaystyle \eta =1-{\frac {q_{C}}{q_{H}}}=1-{\frac {T_{C}'}{T_{H}'}}}$. (5)

## Applicability to fuel cells and batteries

Since fuel cells and batteries can generate useful power when all components of the system are at the same temperature (${\displaystyle T=T_{H}=T_{C}}$), they are clearly not limited by Carnot's theorem, which states that no power can be generated when ${\displaystyle T_{H}=T_{C}}$. This is because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells and batteries instead convert chemical energy to work.[7] Nevertheless, the second law of thermodynamics still provides restrictions on fuel cell and battery energy conversion.[8]

A Carnot battery is a type of energy storage system that stores electricity in heat storage and converts the stored heat back to electricity through thermodynamic cycles.[9]
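As a quick numerical illustration of the efficiency bound in equation (5) (the reservoir temperatures below are arbitrary example values):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum (Carnot) efficiency of a heat engine between two reservoirs,
    temperatures in kelvin."""
    if t_hot <= t_cold:
        return 0.0  # no work can be extracted without a temperature difference
    return 1.0 - t_cold / t_hot

# Example: 800 K hot reservoir, 300 K cold reservoir.
print(carnot_efficiency(800.0, 300.0))  # 0.625
# Equal temperatures: no temperature difference, so no work.
print(carnot_efficiency(300.0, 300.0))  # 0.0
```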
Question: The domain of $\cos ^{-1}\left(x^{2}-4\right)$ is

(a) $[3,5]$
(b) $[-1,1]$
(c) $[-\sqrt{5},-\sqrt{3}] \cup[\sqrt{3}, \sqrt{5}]$
(d) $[-\sqrt{5},-\sqrt{3}] \cap[\sqrt{3}, \sqrt{5}]$

Solution: The domain of $\cos ^{-1}(x)$ is $[-1,1]$.

$\therefore-1 \leq x^{2}-4 \leq 1$

$\Rightarrow-1+4 \leq x^{2}-4+4 \leq 1+4$

$\Rightarrow 3 \leq x^{2} \leq 5$

$\Rightarrow \sqrt{3} \leq|x| \leq \sqrt{5}$

$\Rightarrow x \in[-\sqrt{5},-\sqrt{3}] \cup[\sqrt{3}, \sqrt{5}]$

Hence, the correct answer is option (c).
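A quick numerical check of this result (the sample points are chosen to straddle the interval endpoints):

```python
def in_domain(x):
    # arccos(u) is defined only for -1 <= u <= 1
    return -1.0 <= x**2 - 4.0 <= 1.0

for x in (0.0, 1.5, 1.8, 2.0, 2.2, 2.4, -2.0):
    print(f"x = {x:+.2f}  in domain: {in_domain(x)}")
# Only points with sqrt(3) <= |x| <= sqrt(5), i.e. 1.732... <= |x| <= 2.236...,
# pass the test, matching option (c).
```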
Birational morphisms of regular schemes

Sun, Xiaoto. Birational morphisms of regular schemes. Compositio Mathematica, Volume 91 (1994) no. 3, pp. 325-339. http://archive.numdam.org/item/CM_1994__91_3_325_0/

@article{CM_1994__91_3_325_0,
  author = {Sun, Xiaoto},
  title = {Birational morphisms of regular schemes},
  journal = {Compositio Mathematica},
  pages = {325--339},
  volume = {91},
  number = {3},
  year = {1994},
  zbl = {0816.14007},
  mrnumber = {1273655},
  language = {en},
  url = {http://archive.numdam.org/item/CM_1994__91_3_325_0/}
}

References:
[1] Abhyanker, S.S.: Resolution of Singularities of Embedded Algebraic Surface. Academic Press, New York (1966). | MR | Zbl
[2] Crauder, B.: Birational morphisms of smooth threefolds collapsing three surfaces to a point. Duke Math. J. 48(3) (1981) 589-632. | MR | Zbl
[3] Danilov, V.I.: The decomposition of certain birational morphisms. Izv. Akad. Nauk. SSSR. Ser. Math. 44 (1980). | MR | Zbl
[4] Luo, T. and Luo, Z.: Factorization of birational morphisms with fiber dimension bounded by 1. Math. Ann. 282 (1988) 529-534. | MR | Zbl
[5] Luo, Z.: Ramification divisor of regular schemes. Preprint. | MR
[6] Schaps, M.: Birational morphisms factorizable by two monoidal transformations. Math. Ann. 222 (1976) 223-228. | MR | Zbl
[7] Schaps, M.: Birational morphisms of smooth threefolds collapsing three surfaces to a curve. Duke Math. J. 48(2) (1981) 401-420. | MR | Zbl
[8] Sancho De Salas, T.: Theorems of factorizations for birational morphisms. Compositio Math. 70 (1989). | Numdam | MR | Zbl
[9] Sun, X.: On birational morphisms of regular schemes. Preprint.
[10] Teicher, M.: Factorization of a birational morphism between 4-folds. Math. Ann. 256 (1981) 391-399. | MR | Zbl
# 3-Bus System (Nagata's Book Test Case) I. Introduction: $$\bullet$$ This simple 3 Bus Test Case is taken from Prof. Nagata's book [1]. $$\bullet$$ The system input data is available as a text file in III, and is given as HTML format as follows: System Input Data BS 1 1.0 0.0 0.0 0.0 0.0 0.0 0.05 BQ 2 1.0 0.0 0.0 -0.2 0.2 0.0 0.17 BV 3 1.1 -0.35 0.0 0.0 0.0 0.0 0.40 <> L 1 2 0.3 0.4 0.0 0.0 1.0 L 2 3 0.0 0.4167 0.0 0.0 1.0 II. Single-Line Diagram: $$\bullet$$ The single-line diagram of the proposed system - as taken from the book - is shown below: III. Files: $$\bullet$$ System Input Data Format (txt Format) [Download] IV. References: [1] Ueda, "WebFlow: Web-Based AC Power Flow Calculation Using PHP," [Accessed Mar. 16, 2015]. [Online]. Available: http://sys.elec.kitami-it.ac.jp/ueda/demo/WebPF/testdata.html
# Locality and the renormalization of branched zeta values

#### 05.05.2017, 11 a.m. – Haus 9, Raum 2.22

Analysis working group seminar (Arbeitsgruppenseminar Analysis)

Pierre Clavier

Localised structures are given by some structures together with an independence relation. They are meant to encode the notion of "locality" in physics. Taking locality into account allows one to define a (multivariate) regularisation of branched zeta values. Branched zeta values are a generalization of the usual zeta values in the same way that decorated rooted trees are a generalization of words. The construction of the branched zeta values uses a universal property of rooted trees decorated by a localised set.
# Feature model In software development, a feature model is a compact representation of all the products of the Software Product Line (SPL) in terms of "features". Feature models are visually represented by means of feature diagrams. Feature models are widely used during the whole product line development process and are commonly used as input to produce other assets such as documents, architecture definition, or pieces of code.[citation needed] A SPL is a family of related programs. When the units of program construction are features—increments in program functionality or development—every program in an SPL is identified by a unique and legal combination of features, and vice versa. Feature models were first introduced in the Feature-Oriented Domain Analysis (FODA) method by Kang in 1990.[1] Since then, feature modeling has been widely adopted by the software product line community and a number of extensions have been proposed. ## Background A "feature" is defined as a "prominent or distinctive user-visible aspect, quality, or characteristic of a software system or system".[1] The focus of SPL development is on the systematic and efficient creation of similar programs. FODA is an analysis devoted to identification of features in a domain to be covered by a particular SPL.[1] ### Model A feature model is a model that defines features and their dependencies, typically in the form of a feature diagram + left-over (a.k.a. cross-tree) constraints. But also it could be as a table of possible combinations.[citation needed] ### Diagram A feature diagram is a visual notation of a feature model, which is basically an and-or tree. Other extensions exist: cardinalities, feature cloning, feature attributes, discussed below. ### Configuration A feature configuration is a set of features which describes a member of an SPL: the member contains a feature if and only if the feature is in its configuration. A feature configuration is permitted by a feature model if and only if it does not violate constraints imposed by the model. ### Feature Tree A Feature Tree (sometimes also known as a Feature Model or Feature Diagram) is a hierarchical diagram that visually depicts the features of a solution in groups of increasing levels of detail. Feature Trees are great ways to summarize the features that will be included in a solution and how they are related in a simple visual manner. [2] ## Feature modeling notations Current feature modeling notations may be divided into three main groups, namely: • Basic feature models • Cardinality-based feature models • Extended feature models ### Basic feature models Relationships between a parent feature and its child features (or subfeatures) are categorized as: • Mandatory – child feature is required. • Optional – child feature is optional. • Or – at least one of the sub-features must be selected. • Alternative (xor) – one of the sub-features must be selected In addition to the parental relationships between features, cross-tree constraints are allowed. The most common are: • A requires B – The selection of A in a product implies the selection of B. • A excludes B – A and B cannot be part of the same product. As an example, the figure below illustrates how feature models can be used to specify and build configurable on-line shopping systems. The software of each application is determined by the features that it provides. The root feature (i.e. E-Shop) identifies the SPL. 
Every shopping system implements a catalogue, payment modules, security policies and, optionally, a search tool. E-shops must implement a high or standard security policy (choose one), and can provide different payment modules: bank transfer, credit card or both of them. Additionally, a cross-tree constraint forces shopping systems that include the credit card payment module to implement a high security policy.

A feature diagram representing a configurable e-shop system.

### Cardinality-based feature models

Some authors propose extending basic feature models with UML-like multiplicities of the form [n,m], with n being the lower bound and m the upper bound. These are used to limit the number of sub-features that can be part of a product whenever the parent is selected.[3] If the upper bound is m the feature can be cloned as many times as we want (as long as the other constraints are respected). This notation is useful for products extensible with an arbitrary number of components.

### Extended feature models

Others suggest adding extra-functional information to the features using "attributes". These are mainly composed of a name, a domain, and a value.[4]

## Semantics

The semantics of a feature model is the set of feature configurations that the feature model permits. The most common approach is to use mathematical logic to capture the semantics of a feature diagram.[5] Each feature corresponds to a boolean variable and the semantics is captured as a propositional formula. The satisfying valuations of this formula correspond to the feature configurations permitted by the feature diagram. For instance, if ${\displaystyle f_{1}}$ is a mandatory sub-feature of ${\displaystyle f_{2}}$, the formula will contain the constraint ${\displaystyle f_{1}\Leftrightarrow f_{2}}$.[6]

The following table provides a translation of the basic primitives. The semantics of a diagram is a conjunct of the translations of the elements contained in the diagram. We assume that the diagram is a rooted tree. (A configuration-checking sketch based on this translation is given after the Tools section below.)

| Feature diagram primitive | Semantics |
|---|---|
| ${\displaystyle r}$ is the root feature | ${\displaystyle r}$ |
| ${\displaystyle f_{1}}$ optional sub-feature of ${\displaystyle f}$ | ${\displaystyle f_{1}\Rightarrow f}$ |
| ${\displaystyle f_{1}}$ mandatory sub-feature of ${\displaystyle f}$ | ${\displaystyle f_{1}\Leftrightarrow f}$ |
| ${\displaystyle f_{1},\dots ,f_{n}}$ alternative sub-features of ${\displaystyle f}$ | ${\displaystyle \left(f_{1}\lor \dots \lor f_{n}\Leftrightarrow f\right)\land \bigwedge _{i<j}\lnot (f_{i}\land f_{j})}$ |
| ${\displaystyle f_{1},\dots ,f_{n}}$ or sub-features of ${\displaystyle f}$ | ${\displaystyle f_{1}\lor \dots \lor f_{n}\Leftrightarrow f}$ |
| ${\displaystyle f_{1}}$ excludes ${\displaystyle f_{2}}$ | ${\displaystyle \lnot (f_{1}\land f_{2})}$ |
| ${\displaystyle f_{1}}$ requires ${\displaystyle f_{2}}$ | ${\displaystyle f_{1}\Rightarrow f_{2}}$ |

## Configuring products

A product of the SPL is declaratively specified by selecting or deselecting features according to the user's preferences. Such decisions must respect the constraints imposed by the feature model. A "configurator" is a tool that assists the user during a configuration process, for instance by automatically selecting or deselecting features that must or must not, respectively, be selected for the configuration to be completed successfully. Current approaches use unit propagation[7] and CSP solvers.[4]

## Properties and analyses

An analysis of a feature model targets certain properties of the model which are important for marketing strategies or technical decisions.
A number of analyses are identified in the literature.[8][9] Typical analyses determine whether a feature model is void (represents no products), whether it contains dead features (features that cannot be part of any product), or how many products the software product line represented by the model contains. Other analyses focus on comparing several feature models (e.g. to check whether a model is a specialization, refactoring, or generalization of another).[10]

## Tools

A number of tools support the editing and/or analysis of feature models.
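As an illustration of the propositional semantics given above, the following brute-force sketch encodes the e-shop diagram (one possible reading of it; the feature names are ours) and enumerates the configurations it permits:

```python
from itertools import product

FEATURES = ["eshop", "catalogue", "payment", "security", "search",
            "bank_transfer", "credit_card", "high", "standard"]

def valid(c):
    """Check one truth assignment against the propositional semantics of the diagram."""
    ok = c["eshop"]                                         # root feature must be selected
    ok &= c["catalogue"] == c["eshop"]                      # mandatory sub-feature
    ok &= c["payment"] == c["eshop"]                        # mandatory sub-feature
    ok &= c["security"] == c["eshop"]                       # mandatory sub-feature
    ok &= (not c["search"]) or c["eshop"]                   # optional sub-feature
    # "or" group: payment <=> bank_transfer or credit_card
    ok &= c["payment"] == (c["bank_transfer"] or c["credit_card"])
    # "alternative" (xor) group: security <=> exactly one of high, standard
    ok &= c["security"] == (c["high"] or c["standard"])
    ok &= not (c["high"] and c["standard"])
    # cross-tree constraint: credit_card requires high security
    ok &= (not c["credit_card"]) or c["high"]
    return ok

configs = [dict(zip(FEATURES, bits)) for bits in product([False, True], repeat=len(FEATURES))]
valid_configs = [c for c in configs if valid(c)]
print(len(valid_configs))  # number of products of the SPL (8 for this encoding)
for c in valid_configs[:3]:
    print(sorted(f for f, on in c.items() if on))
```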
# How to save modification to an environment?

I have a style which I'm building up as I learn how to use TeXmacs. I like to have my environments (like theorem, definition, etc.) framed with a background color. This is something I can do from the GUI. However, I'd like to be able to do this as a default for all documents that use my style (also because I don't always want to remember the hex codes of the background colors I am using). I was hoping that after making my modifications from the GUI and choosing "edit macro" from the context menu, I would get the modified code. All I get is the default code (without any modifications). I tried surrounding a `wide-framed-colored` around the environment as a workaround, but this has two problems: (1) the background color extends outside the border, (2) I cannot insert a table and center it inside said environment. So, my question is, how do I save the modifications I've made into my style file so that I don't have to repeat the procedure every time?

Assuming I understood what you want: the modifications are hidden. Try opening the file with a text editor and looking in the "collection" tag at the end; you should find there the modifications to the environments you made through the GUI, each introduced with an "associate" tag. I never tried to put these "associate" tags in a style file. Also, I am under the impression that you cannot control environments individually using "associate"; if you e.g. change padding through the GUI and correspondingly obtain

```
<associate|padding-above|2fn>
```

all environments that use padding will take a `padding-above` of 2fn. Perhaps it is better to collect the settings from the "collection" tag (I think the settings coincide with environment variables), wrap the environments you want to modify inside a `with` construct, and put the modified environments in the style file.

Thanks as usual. Adding the "ornament-color" property worked for environments like theorem and definition. However, it did not work for the example environment. Is there some workaround for that?

I investigated a bit and found out that the package `framed-theorems` (inside `packages\customize\theorem`) defines `render-remark` (on which the example environment is based) differently from `render-enunciation` (on which the theorem environment is based). I copied the definition of `render-enunciation` onto `render-remark`, with adaptations, obtaining

```
<assign|render-remark|<\macro|which|body>
  <render-enunciation|<remark-name|<arg|which><remark-sep>>|<arg|body>>
</macro>>
```

And now it is possible to color the background of examples. It is possible that there are deficiencies in my implementation of `render-remark`, because I do not understand the logic of the original implementation (see the `framed-theorems` package), but it is what I am able to do.
× SAT Fractions and Decimals In a parking lot there are $$17$$ black cars, $$26$$ gray cars, $$31$$ red cars, and $$5$$ purple cars. What fraction of the cars are gray? (A) $$0$$ (B) $$\frac{5}{79}$$ (C) $$\frac{26}{79}$$ (D) $$\frac{26}{17}$$ (E) $$26$$ For how many integers $$k$$ between $$25$$ and $$45,$$ inclusive, is it true that $$\frac{16}{k}, \frac{7}{k}$$, and $$\frac{11}{k}$$ are all in lowest terms? (A) $$\ \ 0$$ (B) $$\ \ 1$$ (C) $$\ \ 9$$ (D) $$\ \ 10$$ (E) $$\ \ 11$$ Which of the following is (are) smaller than $$x$$, if $$x=\frac{3}{17}$$? $$\begin{array}{r r l} &\mbox{I.}&\frac{1}{x}\\ &\mbox{II.}&\frac{x+1}{x}\\ &\mbox{III.}&\frac{x+3}{x-3}\\ \end{array}$$ (A) I only (B) III only (C) I and II only (D) II and III only (E) I, II, and III For the final step in a calculation, Steven accidentally multiplied by $$1000$$ instead of dividing by $$10000$$. What should he do to his answer to correct it? (A) Divide it by $$10000000$$. (B) Divide it by $$1000$$. (C) Multiply it by $$1000$$. (D) Multiply it by $$10000$$. (E) Multiply it by $$10000000$$. Wendy ate $$\frac{1}{8}$$ of a pizza and Mark ate $$\frac{1}{7}$$ of what was left. What fraction of the pizza is still uneaten? (A) $$\ \ -\frac{6}{7}$$ (B) $$\ \ 0$$ (C) $$\ \ \frac{41}{56}$$ (D) $$\ \ \frac{3}{4}$$ (E) $$\ \ \frac{7}{8}$$ ×
# Movement of point 1. Aug 21, 2004 ### TSN79 I have a structure that looks something like this; a steel-pole is vertical for 2m and then horizontal for 1m, like an uppside-down L-shape. At the end of the horizontal part, a force acts downwards (10kN). How do I go about to find both the horizontal and vertical movement of the point where the force acts? 2. Aug 21, 2004 ### Clausius2 You have to use the Navier-Bresse equations. If you don't know them, say you don't and I will explain you the calculus. The shape you describes is very simple, so you will not spend much effort in solving it. First of all, you must work out the flector's distribution of the structure. When you have this distribution, insert it in N-B equations. There are two theorems, first and second Mohr theorems that surely will help you much. If you don't know what flector's distribution means (maybe this word does not exist in english), put it across next thread, please. 3. Aug 21, 2004 ### enigma Staff Emeritus Is the structure free to rotate, or is it fixed somewhere and you're measuring deflection? 4. Aug 23, 2004 ### TSN79 Is this the Navier-Bresse equation you mentioned? $$\delta=\frac{FL}{AE}$$ Because I have a pretty good idea that this is used in the solution, where F is a force, L is length, A is area, and E is a constant for steel. Delta is the movement I think. Flector's distribution I have not heard of, perhaps we call it something else in norwegian. By the way, the structure is fixed to the ground at the end of the vertical part. Explain please...? 5. Aug 23, 2004 ### Tom Mattson Staff Emeritus In every problem of this type, your solution must contain the following 3 ingredients: 1. Equilibrium You must write down the equilibrium equations, using Newton's second law. 2. Force-Deformation Relations That's the relationship between F and &delta; that you just wrote down in your last post. Now those two are easy. The tricky part is the third ingredient. 3. Compatibility These deforming members are fighting over a fixed amount of space. If one bar pushes &epsilon; units to the right, the other bar must give by moving &epsilon; units to the left. You should let the displacement of each member be a vector such as ui+vj, where u and v are independent variables, and use right triangle trigonometry on the deformed member. Last edited: Aug 23, 2004 6. Aug 24, 2004 ### Clausius2 See this structure: _______C | B | | | A A=point ground-clamped (ground fixed without posibility of rotation); B=welding point between two girders; C=force exerted point. A force F is exerted downwards in this point. Lenghts: AB=L1; BC=L2; All right, I'm going to solve this problem: i) first of all we are going to calculate the bending moment distribution M(in spanish it is said "momento flector"). Force Reactions: VA=vertical reaction in A (pointing upwards). HA=horizontal reaction (pointing rightwards) in A; MA=moment reaction in A (turning anticlockwise); VA=F; HA=0; MA=F*L2; ok? So that bending moments are M=MA in point A, M=MA in point B; and M=0 in point C. You should see bending moment is constant along AB, and linear along BC. ii) Navier-Bresse equations: Horizontal movement in C: $$\overline{u_{c}}=\int{\frac{M}{EI}sds}$$ where E=Young modulus; I=section's inertia moment; s=doesn't matter. You can employ 2nd Mohr theorem in order to solve this integral. Pay attention: Take the bending distribution along AB. It's rectangular shaped isn't it?. Take the centroid of this distribution, namely G. 
It's trivial to see it's located at the middle point of AB. Proyect it over the girder AB. And then, join together points G and C with a straight line. The segment normal to this last line will be the tangent of the trayectory of point C due to ONLY AB bending moment distribution. You can draw a vector over this last line (it will point to right down side) to see spatially the path of point C. The horizontal component of this vector will be Uc. How is it calculated?. By handling the last equation: $$\overline{v}=\sum(\frac{A_{i}}{EI}(d(G_{i}U)\overline{e_{x}}+d(G_{i}V)\overline{e_{y}}))$$ This equation is all what you need. The sum sweeps i=1,2 because of two bending moment distributions. A= area of each bending moment distribution (one is rectangular and the other one is triangular). d(GV)=distance from each centroid calculated as stated before and V, a line which goes trough C point in vertical direction (y direction) d(GU)=distance from each centroid and U, a line which goes trough C in horizontal direction (x direction). e=unitary vector. v=movement vector. My solution is: $$\overline{v}=\frac{F L_{1} L_{2}}{EI}(0.5L_{1}\overline{e_{x}}-L_{2}\overline{e_{y}})+\frac{F L_{2}^2}{3EI}(-L_{2}\overline{e_{y}})$$; Anyway, you are solving an elastic body. If you don't have any knowledge about elastic theory or structural engineering, or you never have heard about N-B equations, then you are endangered fighting against this problem. I advice you to consult any structures book.
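For readers who just want to sanity-check the closed-form result numerically, here is a small Python sketch. The load and the two lengths come from the original post; the steel modulus and the second moment of area are assumed purely for illustration.

    # Quick numeric check of the closed-form deflection result above.
    F  = 10e3        # N   (10 kN acting downwards at point C)
    L1 = 2.0         # m   (vertical leg AB)
    L2 = 1.0         # m   (horizontal leg BC)
    E  = 210e9       # Pa  (typical structural steel, assumed)
    I  = 1.0e-5      # m^4 (assumed section property, for illustration only)

    u_x = 0.5 * F * L1**2 * L2 / (E * I)                           # horizontal movement of C
    u_y = -(F * L1 * L2**2 / (E * I) + F * L2**3 / (3 * E * I))    # vertical movement (downwards)

    print(f"u_x = {u_x*1000:.2f} mm, u_y = {u_y*1000:.2f} mm")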
# Determining camera orientation (possibly using calibration images). I need to generate a camera calibration pattern. Cameras are expected to be placed at an average height of 15 to 30 feet above ground pointing downwards at roughly 30 degrees. These cameras are deployed in open spaced monitoring different types of movement. During installation, operators often make mistakes when entering camera height and angle information into our system. The objective is to get a sense of the ground plane and to be able to calculate how close or far objects are relative to their size. The only automated option I could think of was to have operators use a camera calibration pattern that have been printed on a large paper. The pattern image will be a simple black and white checkerboard with a constant number of rows and columns (let's say 10). The problem is, I will not know in advance what size the printed image is. The variables are: • Height of the camera. • Angle of the camera to the ground. • Distance to pattern image. • Size of the pattern image. Question: How can I determine the size of the pattern image while analyzing (which will allow me to scale other objects in a relative way). Is there a better way to approach this problem? • You may wish to tag this under trigonometry. – Ron Gordon Jan 20 '13 at 13:43 I apologize for not having a picture. Let the height of the camera be $h$, the angle of the camera to the ground be $\theta$, the distance to the pattern image be $d$, and the size of the pattern be $s$. Consider the simple case where $\theta = 0$. Similar triangles reveals that the image size $t(0) = s h/d$. It should be noted that the light emanates from the camera at a half-angle $\phi = \arctan{[s/(2 d)]}$. Now consider the tilted case. The system may be modeled with a triangle with angles $\theta$, $\pi/2 + \phi$, and $\psi = \pi/2 - (\theta + \phi)$. The side opposite the angle $\psi$ has length $t(0)$. We may then use the law of sines to find the image size $t(\theta)$: $$\frac{t(\theta)}{\cos(\phi)} = \frac{t(0)}{\cos(\theta+\phi)}$$ $$t(\theta) = \frac{2 s h}{2 d \cos(\theta) - s \sin(\theta)}$$
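A small numeric sketch of the relation derived above, $t(\theta) = \frac{2sh}{2d\cos\theta - s\sin\theta}$, and of its inverse, which recovers the physical pattern size from a measured image size. All numbers are illustrative; in practice the checkerboard corners would be detected in the image first.

    import math

    def image_size(s, h, d, theta):
        """Apparent image size t(theta) from the law-of-sines result above."""
        return 2 * s * h / (2 * d * math.cos(theta) - s * math.sin(theta))

    def pattern_size(t, h, d, theta):
        """Invert the same relation to recover the physical pattern size s."""
        return 2 * t * d * math.cos(theta) / (2 * h + t * math.sin(theta))

    # Illustrative numbers only: 20 ft camera height, pattern 30 ft away, 30 degree tilt.
    h, d, theta = 20.0, 30.0, math.radians(30)
    s = 3.0                                   # assumed true pattern size (ft)
    t = image_size(s, h, d, theta)
    print(t, pattern_size(t, h, d, theta))    # round-trips back to s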
July 02, 2022 In this post, I’ll be discussing how the different light types in Blender were incorporated into the many lights sampling algorithm. Some of these lights could be immediately plugged in to the existing work, but others needed some redesigning of the logic. ## Different Light Types Blender has a variety of light types that all need to be supported by the algorithm. This includes: • Point, Spot, and Area Lights • Emissive Triangles • Distant and Background Lights Each of these bullet points have slightly different places in the code, which I’ll elaborate on below. ### Point, Spot, and Area Lights Once we have point lights working, it’s really immediate to also incorporate spot and area lights. All we have to do is update our light tree construction to account for their different bounding information (actually, I had to update some of the traversal code as well, but the next section will explain why I don’t discuss it here). The first thing to get out of the way is that the energy calculations for these light types are exactly the same. Next, this is the different bounding cone information for each light type: Light Type Axis $\theta_o$ $\theta_e$ Point Arbitrary $\pi$ $\pi / 2$ Spot Spotlight Direction 0 Spotlight Angle Area Normal Axis 0 $\pi / 2$ This makes sense because point lights don’t have a defined orientation direction while spot lights and area lights do. Implementing this in the code is also very straightforward using a bunch of conditionals. The bounding box information for these lights is only slightly more interesting, and there are only a few things to note. The point lights and spot lights have an associated size. What this means is that they can actually emit light from a radius of that size, so our bounding box needs to account for that radius. On the other hand, area lights can either be a disk or a rectangle, and thus have 2 dimensions to them. They have the members axisu and axisv which correspond to the orientation of those dimensions, as well as a sizeu and sizev which dictate how far along the axis it goes. Knowing this, the code is also relatively straightforward: // intern\cycles\scene\light_tree.cpp Light *lamp = scene->lights[lamp_id]; LightType type = lamp->get_light_type(); const float3 center = lamp->get_co(); const float size = lamp->get_size(); if (type == LIGHT_POINT || type == LIGHT_SPOT) { /* Point and spot lights can emit light from any point within its radius. */ } else if (type == LIGHT_AREA) { /* For an area light, sizeu and sizev determine the 2 dimensions of the area light, * while axisu and axisv determine the orientation of the 2 dimensions. * We want to add all 4 corners to our bounding box. */ const float3 half_extentu = 0.5 * lamp->get_sizeu() * lamp->get_axisu() * size; const float3 half_extentv = 0.5 * lamp->get_sizev() * lamp->get_axisv() * size; bbox.grow(center + half_extentu + half_extentv); bbox.grow(center + half_extentu - half_extentv); bbox.grow(center - half_extentu + half_extentv); bbox.grow(center - half_extentu - half_extentv); } In other parts of the Cycles code, the area light’s size is also scaled by the size member. I’ve always seen this factor equal to 1.0 in my own debugging, but I’ve left it here just to be safe. ### Emissive Triangles Even though I have a separate section for emissive triangles, I’m not really going to talk about the bounding information calculation (most of it was based off the past GSoC work anyways). 
Instead, this is going to be about how emissive triangles made me realize that some of the traversal logic needed to be reconsidered. Originally, I didn’t think there would be anything too special about triangle lights besides using the prim_id to distinguish them during light tree construction. However, when I got to traversal, I encountered a slight issue: although I could differentiate between a normal light source and an emissive triangle using the light_distribution array, I still didn’t have enough information to calculate the importance. It’s still possible to get the triangle’s vertices to manually calculate the bounding box min/max and also take the cross product to find the orientation axis. But then there’s also the issue of finding a proper energy estimate. In any case, doing all of this work during traversal seems like a huge performance issue. Additionally, this information is stuff that we can calculate at construction time and then store for the future. So using the same idea as the device_vector<KernelLightTreeNode> light_tree_nodes, there’s an array on the device containing the bounding information for each emitter: // intern\cycles\kernel\types.h typedef struct KernelLightTreeEmitter { /* Bounding box. */ float bounding_box_min[3]; float bounding_box_max[3]; /* Bounding cone. */ float bounding_cone_axis[3]; float theta_o; float theta_e; /* Energy. */ float energy; /* prim_id denotes the location in the lights or triangles array. */ int prim_id; union { struct { int object_id; } mesh_light; struct { float size; } lamp; }; } KernelLightTreeEmitter; static_assert_align(KernelLightTreeEmitter, 16); The information under prim_id is the same as the information from the light distribution. However, by keeping it inside of this struct, we can remove our light tree kernel’s dependency on the light distribution. There still is a lot of overlap between this struct and the KernelLightTreeNode struct, but it works for now. Now after our construction has sorted all of the primitives in order, we can fill out the corresponding bounding information: // intern\cycles\scene\light.cpp KernelLightTreeEmitter *light_tree_emitters = dscene->light_tree_emitters.alloc(num_distribution); for (int index = 0; index < num_distribution; index++) { LightTreePrimitive &prim = light_prims[index]; BoundBox bbox = prim.calculate_bbox(scene); OrientationBounds bcone = prim.calculate_bcone(scene); float energy = prim.calculate_energy(scene); light_tree_emitters[index].energy = energy; for (int i = 0; i < 3; i++) { light_tree_emitters[index].bounding_box_min[i] = bbox.min[i]; light_tree_emitters[index].bounding_box_max[i] = bbox.max[i]; light_tree_emitters[index].bounding_cone_axis[i] = bcone.axis[i]; } light_tree_emitters[index].theta_o = bcone.theta_o; light_tree_emitters[index].theta_e = bcone.theta_e; if (prim.prim_id >= 0) { light_tree_emitters[index].mesh_light.object_id = prim.object_id; // query shader flags (same as light distirbution) } else { Light *lamp = scene->lights[prim.lamp_id]; light_tree_emitters[index].lamp.size = lamp->size; } } dscene->light_tree_emitters.copy_to_device(); The advantage of this approach is that all of the decision making is happening at construction time. During traversal, we don’t need any conditionals to handle a different construction for each type of light. We just trust that all the information has been calculated correctly beforehand and then directly plug it into our formula. The last thing to do is to adjust the triangle sampling PDF. 
If you recall in my last post, I mentioned that the light distribution will pre-calculate some of the PDF. For example, for light sources, it knows that it’ll be sampling uniformly over light samples, so it sets // intern\cycles\scene\light.cpp kintegrator->pdf_lights = 1.0f / num_lights; On the other hand, the light distribution samples triangles relative to their total area. This is something that varies per-triangle, so the best that can be done is to pre-compute kintegrator->pdf_triangles as 1.0f / trianglearea. Then the contributing PDf is calculated during triangle_light_sample(): // intern\cycles\kernel\light\light.h const float pdf = area * kernel_data.integrator.pdf_triangles; We’ll also be using triangle_light_sample(), but that’s not going to be the PDF of our sampling method. Instead, we set kintegrator->pdf_triangles and then divide ls->pdf by the triangle’s area to counteract the multiplication done inside of the function. This essentially converts the pre-computed PDF to 1.0f, so now we’re free to control the PDF appropriately. ### Distant and Background Lights The reason why distant lights and background lights need to be handled separately is because the light tree is inherently location-based. Since these lights can be considered infinitely far away, we can’t really construct a bounding box or anything to make them part of the light tree. The original method we wanted to implement was to first pick a light from a light tree and another light from the distant/background lights, and then choose one of the two after weighing their importances. The idea would be that having 2 specific lights would be more specific. However, halfway through implementing this, I discovered that this would actually be pretty complicated. This is because we not only need to calculate the probability of selecting the light in order to scale our PDF accordingly. Now suppose we select one object from $A = \{A_1, A_2\}$ and one object from $B = \{B_1, B_2\}$. Then we put our two selected objects into a new group $C$ and select one out of the two. For the sake of shorter notation, let $O_{i_N}$ denote the probability of selecting object $O_i$ from group $N$. Now the probability of ending up with $A_1$ as our final selection would be: $\mathbb{P}(A_{1_C}) = \mathbb{P}(A_{1_C} | A_{1_A} \cap B_{1_B}) \cdot \mathbb{P}(A_{1_A} \cap B_{1_B}) + \mathbb{P}(A_{1_C} | A_{1_A} \cap B_{2_B}) \cdot \mathbb{P}(A_{1_A} \cap B_{2_B})$ Technically there are a few more terms (cases where $A_2$ is selected from group $A$) but we can ignore them because they’re all equal to $0$. The general idea is that to find the actual probability, we’d have to partition the probabilities into cases. So in this case, the true probability of selecting $A_1$ is the probability of selecting it when $C = \{A_1, B_1\}$ plus the probability of selecting it when $C = \{A_1, B_2\}$. In code, we’d be able to naturally find the value of a single one of these terms, but we’d have to do a lot of extra computation to find the others. The next best thing we can do is first decide whether we want to sample from the light tree or to sample from the distant lights. For now, the easiest way to do this is by examining their relative energies. The advantage to this approach is that we can pre-compute both of these during construction time, but in the future, we may want to introduce an appropriate importance heuristic to decide between the two. 
Here, pdf_light_tree is calculated as the relative energy of the light tree compared to the total energy involved: // intern\cycles\kernel\light\light_tree.h float tree_u = path_state_rng_1D(kg, rng_state, 1); if (tree_u < kernel_data.integrator.pdf_light_tree) { pdf_factor *= kernel_data.integrator.pdf_light_tree; ret = light_tree_sample<false>( kg, rng_state, randu, randv, time, N, P, bounce, path_flag, ls, &pdf_factor); } else { pdf_factor *= (1 - kernel_data.integrator.pdf_light_tree); ret = light_tree_sample_distant_lights<false>( kg, rng_state, randu, randv, time, N, P, bounce, path_flag, ls, &pdf_factor); } The downside to this approach is that we’ll have to perform a linear scan if we want to sample from the distant lights group. Realistically though, or at least from my perpective, most scenes shouldn’t have that many distant lights. Furthermore, we can also compute importance heuristics if we choose to sample from the distant light group, so we can make more informed decisions about which light to sample. For now, light_tree_distant_light_importance() only returns the energy of the given distant light: // intern\cycles\kernel\light\light_tree.h const int num_distant_lights = kernel_data.integrator.num_distant_lights; float total_importance = 0.0f; for (int i = 0; i < num_distant_lights; i++) { total_importance += light_tree_distant_light_importance(kg, P, N, i); } const float inv_total_importance = 1 / total_importance; float light_cdf = 0.0f; float distant_u = path_state_rng_1D(kg, rng_state, 1); for (int i = 0; i < num_distant_lights; i++) { const float light_pdf = light_tree_distant_light_importance(kg, P, N, i) * inv_total_importance; light_cdf += light_pdf; if (distant_u < light_cdf) { *pdf_factor *= light_pdf; ccl_global const KernelLightTreeDistantEmitter *kdistant = &kernel_data_fetch( light_tree_distant_group, i); const int lamp = -kdistant->prim_id - 1; if (UNLIKELY(light_select_reached_max_bounces(kg, lamp, bounce))) { return false; } return light_sample<in_volume_segment>(kg, lamp, randu, randv, P, path_flag, ls); } } This is bound to change as we come up with better heuristics in the future. ## Closing Thoughts Thanks to the heavy debugging from the work with point lights, most of the math was pretty much working from the get-go. However, there’s still a lot of optimizations to the heuristics that can (and will) be made. My main concern at the moment is that these heuristics don’t take visibility into consideration, which can really hurt the sampling in extreme cases. For example in one case, we could be placing high importance on one group of lights and dedicating a lot of samples towards them, without realizing that they’re actually all occluded! We’ll have to have another discussion for this in the future, but one solution that comes to mind is to also randomly select between using the light tree sampling and using the default light distribution sampling. Secondly, I also realized that there are 3 additional functions to update, which are used when Cycles performs indirect light samples (I’ll be making a separate post about this). These functions are basically used when Cycles is sampling based off of the BSDF and the sample intersects a light source, so we need to calculate what the direct lighting’s PDF would be in order to weight the multiple importance sampling. 
The functions are: • background_light_pdf() • triangle_light_pdf() • light_sample_from_intersection() These functions are pretty self-explanatory, but it’ll be a little tricky to incorporate the light tree into them. More on that in the next post! Written by Jeffrey Liu who is a second-year Math & CS undergraduate at the University of Illinois Urbana-Champaign.
# Find Fxx of the following equation? ## f(x, y) = 5x arctan(x/y) I found Fx of this but I have trouble finding Fxx May 24, 2018 $\frac{10 {y}^{3}}{{x}^{2} + {y}^{2}} ^ 2$ #### Explanation: $f \left(x , y\right) = 5 x {\tan}^{-} 1 \left(\frac{x}{y}\right) \implies$ ${f}_{x} \left(x , y\right) = 5 {\tan}^{-} 1 \left(\frac{x}{y}\right) + 5 x \times \frac{1}{1 + {\left(\frac{x}{y}\right)}^{2}} \times \frac{1}{y}$ $q \quad = 5 {\tan}^{-} 1 \left(\frac{x}{y}\right) + 5 \frac{x y}{{x}^{2} + {y}^{2}} \implies$ ${f}_{x x} \left(x , y\right) = 5 \times \frac{1}{1 + {\left(\frac{x}{y}\right)}^{2}} \times \frac{1}{y}$ $q \quad q \quad q \quad q \quad + 5 \frac{\left({x}^{2} + {y}^{2}\right) \times y - x y \times 2 x}{{x}^{2} + {y}^{2}} ^ 2$ $q \quad q \quad = \frac{5 y}{{x}^{2} + {y}^{2}} + \frac{5 y \left({y}^{2} - {x}^{2}\right)}{{x}^{2} + {y}^{2}} ^ 2$ $q \quad q \quad = \frac{5 y \left\{\left({x}^{2} + {y}^{2}\right) + \left({y}^{2} - {x}^{2}\right)\right\}}{{x}^{2} + {y}^{2}} ^ 2$ $q \quad q \quad = \frac{10 {y}^{3}}{{x}^{2} + {y}^{2}} ^ 2$
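A quick computer-algebra check of the same result, assuming SymPy is available:

    from sympy import symbols, atan, diff, simplify

    x, y = symbols('x y', positive=True)
    f = 5*x*atan(x/y)
    fxx = simplify(diff(f, x, 2))
    print(fxx)   # 10*y**3/(x**2 + y**2)**2, possibly printed in an equivalent form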
# Homework Help: How to calculate 1st Overtone Frquency 1. Jun 4, 2015 ### Ben James How to I calculate the overtone frequency and the wavelength when I'm given the values to calculate the fundamental frequency of a string? I've got equations such as: L=lambda/2 * n, v = f * lambda (Maybe I'm missing one?) I don't know how to use them in this event. Any hints? 2. Jun 4, 2015 ### stevendaryl Staff Emeritus Do you know what the meaning of the variable $n$ is in the expression relating $L$ and $\lambda$? 3. Jun 4, 2015 ### Ben James Just been looking over it again. I believe it's the harmonic number. Do I get the wavelength by Lamda = L And I get the frequency by f = n/2L Squareroot(T/mu)? 4. Jun 4, 2015 ### stevendaryl Staff Emeritus Some people might use the phrase "harmonic number", but there is another common word that starts with "o". As for your answer, you have two equations involving L and v, and neither one mentions mu or T, so those don't need to appear in your answer.
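For concreteness, a short numeric sketch with made-up values (substitute the ones from your problem). For a string fixed at both ends the first overtone is the n = 2 mode, so its frequency is twice the fundamental and its wavelength is half the fundamental's.

    # Illustrative numbers only.
    L  = 0.65      # m, vibrating length of the string
    f1 = 200.0     # Hz, fundamental frequency already computed

    v = f1 * (2 * L)      # wave speed, from v = f * lambda with lambda_1 = 2L
    n = 2                 # first overtone = second harmonic for a string fixed at both ends
    lam_n = 2 * L / n     # wavelength of the n-th mode
    f_n = v / lam_n       # its frequency; equals n * f1
    print(f_n, lam_n)     # 400.0 Hz, 0.65 m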
# Classical presentation of fundamental group of surface with boundary It is well known fact about fundamental group of orientable compact surface: Letting $g$ be the genus and $b$ the number of boundary components of surface $M$. There is a generating set $S=\{\alpha_1, \beta_1, ... , \alpha_g, \beta_g, x_1, ..., z_b\}$ for $\pi_1(M)$ such that $$\pi_1(M, *) = \langle a_1, b_1, ... , a_g, b_g, x_1, ... , x_b \mid [a_1, b_1]\cdots [a_g, b_g]= x_1\cdots x_b \rangle .$$ but I don't know where find a book with proof of it. I was looking in classical positions as: Hatcher, Massey, May, Greenberg without success. The best what I found is calculation of fundamental group of surface without boundary, but boundary is important for me. Could anybody help me? OK, my bad, Fulton's Algebraic topology: A First Course only deals with the closed case. I'll suppose that you know this case quite well. Let's do the bounded case by hand. First case: one boundary component Keep in mind the classical decomposition of the closed surface $F_{g,0}$ of genus $g$ : you have 1 vertex, $2g$ edges, and that $2$-cell whose boundary gives the complicated $[a_1,b_1]\cdots[a_g,b_g] = 1$ relation. Now, take a needle, and pierce a hole in the middle of the 2-cell. You get $F_{g,0} \setminus \textrm{a point}$. Deformation retract the pierced 2-cell on its boundary: that creates a movie whose opening scene is this pierced surface, and whose closing scene is the $1$-skeleton, which is a wedge of $2g$ circles (the $a_i$'s and the $b_i$'s). What happens in the middle of the movie? Well, you have a surface with a disc-shaped hole which expands with time. Topologically, it's exactly the surface $F_{g,1}$ of genus $g$ with 1 boundary component. So we have learned two things: • Piercing a surface (i.e. taking a point out) or making a true hole in it (i.e. take an open disc out) gives the same result up to homotopy equivalence [that's quite irrelevant for our discussion, but it's good to know nevertheless. Of course it works for many other spaces: they only have to be locally not too complicated]. • A pierced surface has the homotopy type of a graph. This is quite important for the study of surfaces. In particular, it gives the wanted presentation: $$\pi_1(F_{g,1}) = \left\langle a_1, \ldots, a_g, b_1, \ldots, b_g\right\rangle.$$ Of course, because the boundary of the surface is associated to the word $[a_1, b_1]\ldots[a_g, b_g]$, you can choose to write this group $$\pi_1(F_{g,1}) = \left\langle a_1, \ldots, a_g, b_1, \ldots, b_g,x \middle| x = [a_1,b_1]\cdots [a_g, b_g]\right\rangle$$ but this quite obfuscates the fact that this group is free. Second case: the sphere with holes Take now $F_{0,b+1}$, the sphere with $b+1 > 0$ boundary components. You can see it as the disc with $b$ boundary components. This amounts to choosing one of the boundary components and declaring it the "outer" one. It's quite easy to retract that on a wedge of $b$ circles, so that $$\pi_1(F_{0, b}) = \left\langle z_1, z_2, \ldots, z_{b}\right\rangle.$$ In this presentation, the (carefully oriented) bouter boundary component is simply the product $z_1\cdots z_b$. The general case $F_{g,b}$ You can write the surface $F_{g,b}$ of genus $g$ as the union of $F_{g,1}$ and $F_{0,b+1}$, gluing the boundary of the former with the outer boundary of the latter. 
Since we have computed the fundamental groups of the two pieces and that we know the expression of the gluing curve in both of them ($[a_1, b_1]\cdots[a_g,b_g]$ and $z_1\cdots z_b$, respectively), the Van Kampen theorem gives us the answer $$\pi_1(F_{g,b}) = \frac{\left\langle a_1, \ldots, a_g, b_1, \ldots, b_g\right\rangle * \left\langle z_1, \ldots, z_b \right\rangle}{\langle\langle [a_1, b_1]\cdots[a_g,b_g] \cdot (z_1\cdots z_b)^{-1}\rangle\rangle} = \left\langle a_1, \ldots, a_g, b_1, \ldots, b_g, z_1, \ldots, z_b \middle| [a_1, b_1]\cdots[a_g,b_g] =z_1\cdots z_b \right\rangle.$$ It is probably worth noting that you can rewrite the relation so that it expresses $z_b$ (say) as a word in the other generators. You can then eliminate it and notice that this is also a free group (again, as long as $b > 0$, $F_{g,b}$ deformation retracts to a graph). • ok, i didn't find it in this book. Of course I found proof/calculation of fundamental group for surface without boundary. Are you sure you found this? – Filip Parker Aug 27 '14 at 14:52 • You are absolutely right. Hope the edit helps. – PseudoNeo Aug 28 '14 at 13:40
# Monogame working with Krypton I am trying to make a simple 2D game engine using Monogame and the Krypton 2.0 lighting engine. So far I have succeeded in rendering a light but I am unable to generate any shadows from any hull's. Does anyone know if this can be done using Krypton with Monogame? If this cannot be done does anyone know of any other lighting engines that work with Monogame. I just need to draw one light from the center of the view port. As a side note I was able to get proof that the hull is in the correct position because I can see a grey outline of the hull at the correct position. My creation code for the light and the hull is the following: var texture = LightTextureBuilder.CreatePointLight(this.GraphicsDevice, 512); Light2D light = new Light2D() { Texture = texture, Range = 700, Color = Color.White, Intensity = 0.5f, Angle = MathHelper.TwoPi * 1f, X = 0, Y = 0, }; hull.Scale = new Vector2(50, 50); hull.Position = new Vector2(75,0); As for drawing krypton I use the following code: _krypton.Matrix = ActiveRoom.Camera.GetTransformation(); _krypton.SpriteBatchCompatablityEnabled = true; _krypton.CullMode = CullMode.None; _krypton.Bluriness = 1; _krypton.LightMapPrepare(); _spriteBatch.Begin(/*camera transform and blending. Nothing special*/); //draw code is here _spriteBatch.End(); _krypton.Draw(gameTime); The only changes that I made to krypton itself was to replace the XNA references with Monogame and I had to comment out all of the ColorWriteEnable lines from the KryptonEffect.fx file so that the Monogame content pipeline would compile the shader. Is there something that I have missed or will I have to find a new lighting engine?
# There are exactly 116 different groups P where $7\mathbf{Z}^{3} \subset P \subset \mathbf{Z}^{3}$ There are exactly 116 different groups P where $7\mathbf{Z}^{3} \subset P \subset \mathbf{Z}^{3}$ I don't know how to prove this. Is it provable at all? How? - Count the number of subgroups of the quotient $\mathbb Z^3/7\mathbb Z^3$. –  Mariano Suárez-Alvarez Dec 8 '11 at 2:23 By the Fourth (or Lattice, or whatever numbering you use) Isomorphism Theorem, the subgroups of $G$ that contain a normal subgroup $N$ are in one-to-one correspondence with the subgroups of $G/N$. Here, $G=\mathbf{Z}^3$ is abelian, so $7\mathbb{Z}^3$ is a normal subgroup. Thus, asking for subgroups $P$ that contain $N=7\mathbb{Z}^3$ is equivalent to asking for subgroups of $(\mathbb{Z}^3)/(7\mathbb{Z}^3) \cong (\mathbb{Z}/7\mathbb{Z})^3$. The latter is a 3-dimensional vector space over $\mathbb{Z}/7\mathbb{Z}$, the field with $7$ elements; the subgroups are the subspaces. Count the subspaces. - The number of $k$-dimensional subspaces of a vector space of dimension $n$ over a finite field of $q = p^{m}$ elements is the product \begin{align} \binom{n}{k}_{q} = \frac{(q^{n} - 1) \cdots (q^{n} - q^{k-1})}{(q^{k} - 1) \cdots (q^{k} - q^{k-1})}. \end{align} To prove this consider the following. A $k$-dimensional subspace is specified by $k$ linearly independent vectors, say, $\{ v_1, \dots, v_k \}$. There are $q^{n}-1$ ways to choose $v_1$, $q^{n} - q$ ways to choose $v_2$ (so as not to lie in a subspace spanned by $v_1$), and so on. Continuing in this manner, there are $q^{n} - q^{j}$ ways to choose $v_{j+1}$ (so as not to lie in the subspace spanned by any preceding vectors). The number of $k$ linearly independent vectors of an $n$-dimensional space is therefore the product \begin{align} (q^{n} - 1) \cdots (q^{n} - q^{k-1}). \end{align} Setting $n = k$ gives the number of possible bases of each $k$-dimensional subspace. Therefore, we normalize the former by the latter and this gives the rational function which counts the number of $k$-dimensional subspaces. The total number of subspaces of an $n$-dimensional vector space over a finite field of $q$ elements is therefore the sum \begin{align} \sum _{k = 0}^{n} \binom{n}{k} _{q}. \end{align} For your example, as my astute colleagues Mariano and Arturo suggest, $n = 3$ and $q = 7$, and the total number of said subspaces is the sum \begin{align} \binom{3}{0}_7 + \binom{3}{1}_7 + \binom{3}{2}_7 + \binom{3}{3}_7 = 1 + 57 + 57 + 1 = 116. \end{align} Thus, there are $116$ groups $P$ such that $7 \mathbb{Z}^{3} \subset P \subset \mathbb{Z}^{3}$.
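The same count can be checked mechanically. A short Python sketch of the Gaussian binomial coefficient defined above:

    def gaussian_binomial(n, k, q):
        """Number of k-dimensional subspaces of an n-dimensional space over F_q."""
        num = den = 1
        for j in range(k):
            num *= q**n - q**j
            den *= q**k - q**j
        return num // den

    counts = [gaussian_binomial(3, k, 7) for k in range(4)]
    print(counts, sum(counts))   # [1, 57, 57, 1] 116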
## Question

This question was previously asked in CGPSC Civil Official Paper 3 (Held on Feb 2018 - Shift 2).

1. 2.2
2. 1.1
3. 1.6
4. 3.2
5. 5.7

Correct answer: Option 3 (1.6)

## Detailed Solution

Concept: See clause 6.2.5.1 of IS 456:2000. The strain that develops under constant sustained loading is called creep strain, and at an early age of concrete the creep strain is higher than at a later age. As per IS 456:2000, Clause 6.2.5.1:

| Age at loading | Value of creep coefficient (θ) |
| --- | --- |
| 7 days | 2.2 |
| 28 days | 1.6 |
| 1 year | 1.1 |

The effective modulus of elasticity is given by:

$${E_{effective}} = \frac{{{E_c}}}{{1 + θ }}$$

where Ec = modulus of elasticity of concrete. However, elastic strain remains constant throughout, so the creep coefficient (θ) $$= \frac{{Creep\;strain}}{{Elastic\;strain}}$$ decreases with time.
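A small numeric illustration of the effective-modulus formula above. The concrete grade and the short-term modulus expression E_c = 5000√fck (MPa), commonly used with IS 456, are assumptions for the example, not part of the question.

    import math

    fck   = 25.0                       # MPa, characteristic cube strength (assumed M25)
    E_c   = 5000 * math.sqrt(fck)      # MPa, short-term modulus of elasticity
    theta = 1.6                        # creep coefficient for loading at 28 days

    E_eff = E_c / (1 + theta)          # effective (long-term) modulus
    print(E_c, E_eff)                  # 25000.0 MPa and roughly 9615 MPa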
Remark 20.24.3. Let $X = \bigcup _{i \in I} U_ i$ be a locally finite open covering. Denote $j_ i : U_ i \to X$ the inclusion map. Suppose that for each $i$ we are given an abelian sheaf $\mathcal{F}_ i$ on $U_ i$. Consider the abelian sheaf $\mathcal{G} = \bigoplus _{i \in I} (j_ i)_*\mathcal{F}_ i$. Then for $V \subset X$ open we actually have $\Gamma (V, \mathcal{G}) = \prod \nolimits _{i \in I} \mathcal{F}_ i(V \cap U_ i).$ In other words we have $\bigoplus \nolimits _{i \in I} (j_ i)_*\mathcal{F}_ i = \prod \nolimits _{i \in I} (j_ i)_*\mathcal{F}_ i$ This seems strange until you realize that the direct sum of a collection of sheaves is the sheafification of what you think it should be. See discussion in Modules, Section 17.3. Thus we conclude that in this case the complex of Lemma 20.24.1 has terms ${\mathfrak C}^ p(\mathcal{U}, \mathcal{F}) = \bigoplus \nolimits _{i_0 \ldots i_ p} (j_{i_0 \ldots i_ p})_* \mathcal{F}_{i_0 \ldots i_ p}$ which is sometimes useful.
Tag Info Bayesian inference is a method of statistical inference that relies on treating the model parameters as random variables and applying Bayes' theorem to deduce subjective probability statements about the parameters or hypotheses, conditional on the observed dataset. Overview Bayesian inference is a method of statistical inference that treats model parameters as if they were random variables in order to rely on probability calculus and produces complete and unified probabilistic statements about these parameters. This approach starts with choosing a reference or prior probability distribution on the parameters and then applies Bayes' Theorem to deduce probability statements about parameters or hypotheses, conditional on the data, treating the likelihood function as a conditional density of the data given the (random) parameter. Bayes' Theorem asserts that the conditional density of the parameter $\theta$ given the data, $P(\theta|d)$, can be expressed in terms of the density of the data given $\theta$ as $$P(\theta|d) = \dfrac{P(d|\theta)P(\theta)}{P(d)}.$$ $P(\theta|d)$ is called the posterior probability. $P(d|\theta)$ is often called the likelihood function and denoted $L(\theta|d)$. The distribution of $\theta$ itself, given by $P(\theta)$, is called the prior or the reference measure. It encodes previous or prior beliefs about $\theta$ within a model appropriate for the data. There is necessarily a part of arbitrariness or subjectivity in the choice of that prior, which means that the resulting inference is impacted by this choice (or conditional to it). This also means that two different choices of priors lead to two different posterior distributions, which are not directly comparable. The marginal distribution of the data, $P(d)$ (which appears as a normalization factor), is also called the evidence, as it is directly used for Bayesian model comparison through the notions of Bayes factors and model posterior probabilities. The comparison of two models (including two opposed hypotheses about the parameters) in the Bayesian framework indeed proceeds by taking the ratio of the evidences for these two models under comparisons, $$B_{12} = P_1(d)\big/P_2(d)\,.$$ This is called the Bayes factor and it is usually compared to $1$.
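A minimal worked example of the prior-to-posterior update and the evidence described above, using a conjugate Beta prior for a binomial likelihood. The data and the prior are invented for illustration only.

    from math import lgamma, exp

    def log_beta(a, b):
        """log of the Beta function B(a, b)."""
        return lgamma(a) + lgamma(b) - lgamma(a + b)

    # Data: k successes in n Bernoulli trials; prior Beta(a, b) on the success probability.
    n, k = 20, 14
    a, b = 1.0, 1.0                      # uniform prior (an arbitrary choice)

    # Posterior is Beta(a + k, b + n - k) by conjugacy.
    post_a, post_b = a + k, b + n - k
    post_mean = post_a / (post_a + post_b)

    # Marginal likelihood ("evidence") of this model, up to the binomial coefficient.
    log_evidence = log_beta(post_a, post_b) - log_beta(a, b)

    print(post_mean, exp(log_evidence))

The Bayes factor between two competing priors or models is then simply the ratio of their evidences computed in this way.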
# Frequency response 1st order #### Montop Joined Oct 29, 2015 3 I am having a bit of trouble trying to figure this out. The equation I come up with keeps canceling Omega out. Attempt: Converting the circuit to the frequency domain the capacitor becomes $$\frac{4}{j \omega}$$. I then used a current divider to find $$\underline Y(j \omega) = \underline u(j \omega) * \frac{(2+(\frac{4}{j \omega}))}{6+(2+(\frac{4}{j \omega}))}$$ Simplifying this I get the frequency response to be $$\frac{1}{2}$$. I am fairly certain that his is not correct. #### RBR1317 Joined Nov 13, 2010 691 I am fairly certain that his is not correct. I would tend to agree since Y=U*Z, where Z is the parallel combination of the R & RC branches.
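One way to see that the posted attempt cannot be right is that the expression, exactly as written, does not reduce to a constant. A quick symbolic sketch (this only checks the attempt's algebra, not the corrected divider suggested in the reply):

    from sympy import symbols, I, simplify

    w = symbols('omega', real=True, positive=True)

    # The current-divider expression from the original attempt, as written.
    H = (2 + 4/(I*w)) / (6 + (2 + 4/(I*w)))

    print(simplify(H))                                   # still a function of omega, not 1/2
    print(H.subs(w, 1).simplify(), H.subs(w, 100).simplify())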
Definitions # galvanometer [gal-vuh-nom-i-ter] galvanometer, instrument used to determine the presence, direction, and strength of an electric current in a conductor. All galvanometers are based upon the discovery by Hans C. Oersted that a magnetic needle is deflected by the presence of an electric current in a nearby conductor. When an electric current is passing through the conductor, the magnetic needle tends to turn at right angles to the conductor so that its direction is parallel to the lines of induction around the conductor and its north pole points in the direction in which these lines of induction flow. In general, the extent to which the needle turns is dependent upon the strength of the current. In the first galvanometers, a freely turning magnetic needle was hung in a coil of wire; in later versions the magnet was fixed and the coil made movable. Modern galvanometers are of this movable-coil type and are called d'Arsonval galvanometers (after Arsène d'Arsonval, a French physicist). If a pointer is attached to the moving coil so that it passes over a suitably calibrated scale, the galvanometer can be used to measure quantitatively the current passing through it. Such calibrated galvanometers are used in many electrical measuring devices. The DC ammeter, an instrument for measuring direct current, often consists of a calibrated galvanometer through which the current to be measured is made to pass. Since heavy currents would damage the galvanometer, a bypass, or shunt, is provided so that only a certain known percentage of the current passes through the galvanometer. By measuring the known percentage of the current, one arrives at the total current. The DC voltmeter, which can measure direct voltage, consists of a calibrated galvanometer connected in series (see electric circuit) with a high resistance. To measure the voltage between two points, one connects the voltmeter between them. The current through the galvanometer (and hence the pointer reading) is then proportional to the voltage (see Ohm's law). Instrument for measuring small electric currents by deflection of a moving coil. A common galvanometer consists of a light coil of wire suspended from a metallic ribbon between the poles of a permanent magnet. As current passes through the coil, the magnetic field it produces reacts with the magnetic field of the permanent magnet, producing a torque. The torque causes the coil to rotate, moving an attached needle or mirror. The angle of rotation, which provides a measure of the current flowing in the coil, is measured by the movement of the needle or by the deflection of a beam of light reflected from the mirror. A galvanometer is a type of ammeter; an instrument for detecting and measuring electric current. It is an analog electromechanical transducer that produces a rotary deflection, through a limited arc, in response to electric current flowing through its coil. The term has expanded to include uses of the same mechanism in recording, positioning, and servomechanism equipment. ## History Deflection of a magnetic compass needle by current in a wire was first described by Hans Oersted in 1820. The phenomenon was studied both for its own sake and as a means of measuring electrical current. The earliest galvanometer was reported by Johann (Johan) Schweigger of Nuremberg at the University of Halle on 16 September 1820. André-Marie Ampère also contributed to its development. 
Early designs increased the effect of the magnetic field due to the current by using multiple turns of wire; the instruments were at first called "multipliers" due to this common design feature. The term "galvanometer", in common use by 1836, derives from the surname of Italian electricity researcher Luigi Galvani, who discovered that electric current could make a frog's leg jerk. Originally the instruments relied on the Earth's magnetic field to provide the restoring force for the compass needle; these were called "tangent" galvanometers and had to be oriented before use. Later instruments of the "astatic" type used opposing magnets to become independent of the Earth's field and would operate in any orientation. The most sensitive form, the Thompson or mirror galvanometer, was invented by William Thomson (Lord Kelvin). Instead of a compass needle, it used tiny magnets attached to a small lightweight mirror, suspended by a thread; the deflection of a beam of light greatly magnified the deflection due to small currents. Alternatively the deflection of the suspended magnets could be observed directly through a microscope. The ability to quantitatively measure voltage and current allowed Georg Ohm to formulate Ohm's Law, which states that the voltage across an element is directly proportional to the current through it. The early moving-magnet form of galvanometer had the disadvantage that it was affected by any magnets or iron masses near it, and its deflection was not linearly proportional to the current. In 1882 Jacques-Arsène d'Arsonval developed a form with a stationary permanent magnet and a moving coil of wire, suspended by coiled hair springs. The concentrated magnetic field and delicate suspension made these instruments sensitive and they could be mounted in any position. By 1888 Edward Weston had brought out a commercial form of this instrument, which became a standard component in electrical equipment. This design is almost universally used in moving-vane meters today. ## Operation The most familiar use is as an analog measuring instrument, often called a meter. It is used to measure the direct current (flow of electric charges) through an electric circuit. The D'Arsonval/Weston form used today is constructed with a small pivoting coil of wire in the field of a permanent magnet. The coil is attached to a thin pointer that traverses a calibrated scale. A tiny torsion spring pulls the coil and pointer to the zero position. When a direct current (DC) flows through the coil, the coil generates a magnetic field. This field acts against the permanent magnet. The coil twists, pushing against the spring, and moves the pointer. The hand points at a scale indicating the electric current. Careful design of the pole pieces ensures that the magnetic field is uniform, so that the angular deflection of the pointer is proportional to the current. A useful meter generally contains provision for damping the mechanical resonance of the moving coil and pointer, so that the pointer settles quickly to its position without oscillation. The basic sensitivity of a meter might be, for instance, 100 microamperes full scale (with a voltage drop of, say, 50 millivolts at full current). Such meters are often calibrated to read some other quantity that can be converted to a current of that magnitude. The use of current dividers, often called shunts, allows a meter to be calibrated to measure larger currents. 
A meter can be calibrated as a DC voltmeter if the resistance of the coil is known by calculating the voltage required to generate a full scale current. A meter can be configured to read other voltages by putting it in a voltage divider circuit. This is generally done by placing a resistor in series with the meter coil. A meter can be used to read resistance by placing it in series with a known voltage (a battery) and an adjustable resistor. In a preparatory step, the circuit is completed and the resistor adjusted to produce full scale deflection. When an unknown resistor is placed in series in the circuit the current will be less than full scale and an appropriately calibrated scale can display the value of the previously-unknown resistor. Because the pointer of the meter is usually a small distance above the scale of the meter, parallax error can occur when the operator attempts to read the scale line that "lines up" with the pointer. To counter this, some meters include a mirror along the markings of the principal scale. The accuracy of the reading from a mirrored scale is improved by positioning one's head while reading the scale so that the pointer and the reflection of the pointer are aligned; at this point, the operator's eye must be directly above the pointer and any parallax error has been minimized. ## Types Extremely sensitive measuring equipment once used mirror galvanometers that substituted a mirror for the pointer. A beam of light reflected from the mirror acted as a long, massless pointer. Such instruments were used as receivers for early trans-Atlantic telegraph systems, for instance. The moving beam of light could also be used to make a record on a moving photographic film, producing a graph of current versus time, in a device called an oscillograph. Galvanometer mechanisms are used to position the pens of analog chart recorders such as used for making an electrocardiogram. Strip chart recorders with galvanometer driven pens might have a full scale frequency response of 100 Hz and several centimeters deflection. In some cases (the classical polygraph of movies or the electroencephalograph), the galvanometer is strong enough to move the pen while it remains in contact with the paper; the writing mechanism may be a heated tip on the needle writing on heat-sensitive paper or a fluid-fed pen. In other cases (the Rustrak recorders), the needle is only intermittently pressed against the writing medium; at that moment, an impression is made and then the pressure is removed, allowing the needle to move to a new position and the cycle repeats. In this case, the galvanometer need not be especially strong. ### Tangent galvanometer A tangent galvanometer is an early measuring instrument used for the measurement of electric current. It works by using a compass needle to compare a magnetic field generated by the unknown current to the magnetic field of the Earth. It gets its name from its operating principle, the tangent law of magnetism, which states that the tangent of the angle a compass needle makes is proportional to the ratio of the strengths of the two perpendicular magnetic fields. It was first described by Claude Servais Mathias Pouillet in 1837. A tangent galvanometer consists of a coil of insulated copper wire wound on a circular non-magnetic frame. The frame is mounted vertically on a horizontal base provided with levelling screws. The coil can be rotated on a vertical axis passing through its centre. A compass box is mounted horizontally at the centre of a circular scale. 
It consists of a tiny, powerful magnetic needle pivoted at the centre of the coil. The magnetic needle is free to rotate in the horizontal plane. The circular scale is divided into four quadrants. Each quadrant is graduated from 0° to 90°. A long thin aluminium pointer is attached to the needle at its centre and at right angle to it. To avoid errors due to parallax a plane mirror is mounted below the compass needle. In operation, the instrument is first rotated until the magnetic field of the Earth, indicated by the compass needle, is parallel with the plane of the coil. Then the unknown current is applied to the coil. This creates a second magnetic field on the axis of the coil, perpendicular to the Earth's magnetic field. The compass needle responds to the vector sum of the two fields, and deflects to an angle whose tangent is the ratio of the two fields. From the angle read from the compass's scale, the current could be found from a table. The current supply wires have to be wound in a small helix, like a pig's tail, otherwise the field due to the wire will affect the compass needle and an incorrect reading will be obtained.

#### Theory

When current is passed through the tangent galvanometer a magnetic field is created at its centre given by $B=\frac{\mu_0 n I}{2r}$ where I is the current in ampere, n is the number of turns of the coil and r is the radius of the coil. If the galvanometer is set such that the plane of the coil is along the magnetic meridian, i.e., B is perpendicular to $B_H$ ($B_H$ is the horizontal component of the Earth's magnetic field), the needle rests along the resultant. From the tangent law, $B = B_H \tan\theta$, i.e. $\frac{\mu_0 n I}{2r} = B_H \tan\theta$ or $I=\left(\frac{2 r B_H}{\mu_0 n}\right)\tan\theta$ or $I=K \tan\theta$, where K is called the Reduction Factor of the tangent galvanometer. The value of $\theta$ is taken at 45 degrees for maximum accuracy.

#### Geomagnetic field measurement

A tangent galvanometer can also be used to measure the magnitude of the horizontal component of the geomagnetic field. When used in this way, a low-voltage power source, such as a battery, is connected in series with a rheostat, the galvanometer, and an ammeter. The galvanometer is first aligned so that the coil is parallel to the geomagnetic field, whose direction is indicated by the compass when there is no current through the coils. The battery is then connected and the rheostat is adjusted until the compass needle deflects 45 degrees from the geomagnetic field, indicating that the magnitude of the magnetic field at the center of the coil is the same as that of the horizontal component of the geomagnetic field. This field strength can be calculated from the current as measured by the ammeter, the number of turns of the coil, and the radius of the coils.

## Uses

A major early use for galvanometers was for finding faults in telecommunications cables. They were superseded in this application late in the 20th century by time-domain reflectometers. Since the 1980s, galvanometer-type analog meter movements may be displaced by analog to digital converters (ADCs) for some uses. A digital panel meter (DPM) contains an analog to digital converter and numeric display. The advantages of a digital instrument are higher precision and accuracy, but factors such as power consumption or cost may still favor application of analog meter movements. Most modern uses for the galvanometer mechanism are in positioning and control systems.
These are used in laser marking and projection, and in imaging application such as Optical Coherence Tomography (OCT) retinal scanning. Mirror galvanometer systems are used as beam positioning elements in laser optical systems. These are typically high power galvanometer mechanisms used with closed loop servo control systems. The newest generation of galvanometers designed for beam steering applications can have frequency responses over 10 kHz with appropriate servo technology. Examples of manufacturers of such systems are Cambridge Technology Inc. (www.camtech.com) and General Scanning (www.gsig.com). A galvanometer appeared in an episode of the television medical drama House to function as an electrocardiogram for a patient whose severe and extensive burns prevented use of the normal electrodes.
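As a small numeric illustration of the tangent-law relation $I = K\tan\theta$ derived earlier, here is a short sketch; the coil geometry and the horizontal field strength are assumed values chosen only to give plausible magnitudes.

    import math

    MU0 = 4 * math.pi * 1e-7      # T*m/A, permeability of free space

    # Illustrative values only: 50-turn coil, 10 cm radius, ~20 microtesla horizontal field.
    n, r, B_H = 50, 0.10, 20e-6

    K = 2 * r * B_H / (MU0 * n)   # reduction factor in I = K * tan(theta)
    theta = math.radians(45)      # deflection at which the instrument is most accurate
    I = K * math.tan(theta)
    print(K, I)                   # current (A) giving a 45-degree deflection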
# Lambda Pattern: Hopper

## Reusable patterns for Lambda

Hopper:

1. A container for a loose bulk material such as grain, rock, or rubbish, typically one that tapers downward and is able to discharge its contents at the bottom.
2. A person or thing that hops.

A simple pattern I’ve been using lately when working with serverless architecture is what I’ve been calling a hopper, i.e. a Lambda function that takes as argument a path to some semi-structured data that can be iterated over then passed onto another Lambda function with the purpose of performing some well defined and isolated task. The result can then be passed onto some other medium for display, etc.

The main reasons behind this pattern are:

• Breaking down a Lambda function that hits the upper bound of five minutes runtime,
• Getting around query rate limits on S3: if you’re making queries that involve a lot of objects, break the queries into chunks,
• Promoting the creation of simpler, modular code with well defined purpose,
• Having defined patterns when working with code-based infrastructure means you’re not reinventing the wheel.

The great thing about this pattern is that it’s pretty easy to set up if you have used serverless architecture before. If you’re looking for a good first project with Lambda you could check out my prior blog post about managing the tag configuration of your AWS instances. The hopper pattern is also a good first step if you need to do some bulk processing but want to start small: the hopper can execute as many other Lambda functions as it can in a five minute window, running each in a parallel manner (remembering Lambda has a concurrent function limit of 100). This could be a good first step before setting something up on EC2 or EMR.

In this article I’ll go through this reusable pattern, using S3 as the holding place for data sets, a Lambda function with a Python handler as the hopper, related roles and permissions and a CloudWatch Event Rule to trigger runs on the Lambda function to allow it to be run on a regular schedule. I’ll also include a script to automate the creation/destruction of the infrastructure. To begin with we will look at the Python code for the hopper.

## Python Handler Code

The handler code is very straightforward: I check the event payload and confirm the bucket containing the configuration is present, as is the name of the Lambda function. We then simply iterate over the dataset and send a request through Boto to invoke the passed function, forwarding the event (with the current dataset prefix added) as the payload, so far so good.

    import boto3
    import json

    S3 = boto3.client('s3')
    lambda_client = boto3.client('lambda')

    def handler(event, context):
        print 'Event:', event
        if 'config_bucket' not in event:
            raise Exception('Missing config_bucket')
        if 'lambda_function' not in event:
            raise Exception('Missing lambda_function')
        s3 = boto3.resource('s3')
        bucket = s3.Bucket(event['config_bucket'])
        result = bucket.meta.client.list_objects(
            Bucket=bucket.name
        )
        print result
        if not result.get('Contents'):
            raise Exception('Missing S3 content')
        for dataset_prefixes in result.get('Contents'):
            event['dataset_prefix'] = dataset_prefixes['Key']
            response = lambda_client.invoke(
                FunctionName=event['lambda_function'],
                InvocationType='Event',
                # Forward the event (including dataset_prefix) to the target function.
                Payload=json.dumps(event),
            )
            print response

Now that we have the handler we need to put it into a compressed format and send it onto S3 for access, but before this I’ll define the infrastructure that needs to exist for the Lambda function to operate. To do this I will use Cloudformation templating language, specifying our infrastructure as a code artifact.
## Hopper CloudFormation Template

For the hopper to work the following AWS infrastructure resources are required:

• An IAM role with S3 and Lambda access,
• A Lambda permission to allow this Lambda function to invoke others,
• The Lambda function in question,
• The CloudWatch Event Rule to allow the hopper to be triggered on a regular basis, in this case every fifteen minutes.

```yaml
---
Description: 'Lambda function Hopper, looks for available datasets and runs a Lambda function.'
Parameters:
  ConfigBucketName:
    Description: The name of the S3 bucket containing configuration.
    Type: String
  LambdaFunctionName:
    Description: The name of the Lambda function being executed.
    Type: String
Resources:
  HopperLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
  HopperLambdaPermission:
    Type: "AWS::Lambda::Permission"
    Properties:
      Principal: "events.amazonaws.com"
      Action: lambda:InvokeFunction
      FunctionName: "hopper"
      SourceArn: !GetAtt HopperEventRule.Arn
  HopperLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "hopper"
      Handler: handler.handler
      Role: !GetAtt HopperLambdaRole.Arn
      Code:
        S3Bucket: !Ref ConfigBucketName
        S3Key: hopper.zip
      Runtime: python2.7
      MemorySize: "512"
      Timeout: "240"
  HopperEventRule:
    Type: AWS::Events::Rule
    Properties:
      Description: CloudWatch Event Rule to initiate the hopper which initiates the target Lambda function on the datasets in a round robin fashion.
      Name: "hopper"
      ScheduleExpression: "rate(15 minutes)"
      State: ENABLED
      Targets:
        - Arn: !GetAtt HopperLambdaFunction.Arn
          Id: 'Hopper'
          Input: !Sub '{ "lambda_function": "${LambdaFunctionName}", "config_bucket": "${ConfigBucketName}" }'
```

Save this as Hopper.yml. Now we will define some scripts that will orchestrate the compression and uploading of the handler script, then create a stack with the required infrastructure.

## Infrastructure Provisioning Script

Here I’ve written a couple of scripts in Bash that deploy and bring down our Lambda function when no longer needed. Be sure to run chmod +x deploy_hopper.sh && chmod +x cleanup_hopper.sh before executing to make sure the scripts can be executed. These can be run using the commands:

./deploy_hopper.sh <s3_config_bucket> <lambda_function_name>

and

./cleanup_hopper.sh <s3_config_bucket>

This project does not include the code of the Lambda function you wish to run or the S3 bucket containing your configuration to be iterated over; it’s assumed these already exist and can be referenced.

#### Deploy script: deploy_hopper.sh

Here’s the deploy script. It compresses the handler function and sends it to your configuration bucket to be referenced in the CloudFormation. It then creates the CloudFormation parameters and creates a stack based off the created file. Finally it cleans up after itself, removing the parameter file.

```bash
#!/bin/bash -e
AWS_REGION="--region ap-southeast-2"
s3_config_path=${1}
lambda_function_name=${2}

zip -r9 hopper.zip handler.py
aws ${AWS_REGION} s3 cp hopper.zip s3://${s3_config_path}/

hopper_stack_name=Hopper

cat << EOF > params.json
[
  {"ParameterKey":"ConfigBucketName","ParameterValue":"${s3_config_path}"},
  {"ParameterKey":"LambdaFunctionName","ParameterValue":"${lambda_function_name}"}
]
EOF

echo $(date -u +"%Y-%m-%dT%H:%M:%S") Creating ${hopper_stack_name}.
aws ${AWS_REGION} cloudformation create-stack \
  --capabilities CAPABILITY_IAM \
  --stack-name ${hopper_stack_name} \
  --template-body file://Hopper.yml \
  --parameters file://params.json

aws ${AWS_REGION} cloudformation wait stack-create-complete --stack-name ${hopper_stack_name}

rm params.json
rm hopper.zip
```

#### Decommission script: cleanup_hopper.sh

This script deletes the stack previously created and deletes the uploaded Lambda package sent to S3. The final command also includes a wait to confirm the delete is completed, in case the script is chained to operate with other scripts. The wait can safely be removed if not necessary.

```bash
#!/bin/bash -e
AWS_REGION="--region ap-southeast-2"
hopper_stack_name=Hopper
s3_config_path=${1}

echo $(date -u +"%Y-%m-%dT%H:%M:%S") Deleting ${hopper_stack_name}.

aws cloudformation ${AWS_REGION} delete-stack --stack-name ${hopper_stack_name}
aws s3 rm ${AWS_REGION} s3://${s3_config_path}/hopper.zip
aws cloudformation ${AWS_REGION} wait stack-delete-complete --stack-name ${hopper_stack_name}
```

Once created, put these scripts in the same directory as handler.py and Hopper.yml and run the deployment script. You now have a fully automated building block to add to your infrastructure arsenal!

## Summary

As can be seen, this is a pattern that can be used to extend your Lambda functions beyond the usual AWS limits. It’s also easily extensible: variables can be added to the code to take into account build numbers or environments.

Post me your results or errata in the comments, I’m really interested to see how people go with this. I initially got caught out with the Event Rule getting invocation errors; the Lambda permission specified above saved the day. Big thanks to Rowan Udell for his article on CloudWatch Event Rules!

Finally, as with most AWS resources, running and creating these infrastructure resources has a cost associated with it. Be careful to remember that whatever you’re processing with the hopper and storing in S3 will have a price associated with it, especially given that the hopper is attached to a repetitive event rule. Deprovision this stack once done to save your credit card.

Update (25/02/2017): I’ve added the code to a Github repository to set up a projects pattern for future examples. The code can be found at https://github.com/galactose/ashinycloud-projects.
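One way to smoke-test the deployed hopper without waiting for the fifteen-minute schedule is to invoke it directly with the same payload shape the event rule sends. This snippet is my own addition rather than part of the original project, and the region, bucket and function names are placeholders.

```python
import json

import boto3

# Manually kick off the hopper with the same payload the event rule would send.
lambda_client = boto3.client('lambda', region_name='ap-southeast-2')

response = lambda_client.invoke(
    FunctionName='hopper',
    InvocationType='Event',
    Payload=json.dumps({
        'config_bucket': '<s3_config_bucket>',
        'lambda_function': '<lambda_function_name>',
    }),
)

# A 202 status code means the asynchronous invocation was accepted.
print(response['StatusCode'])
```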
# Game development on: Linux or Windows

### #1 arnsa, posted 08 February 2013 - 12:06 PM

Hello, folks! I've got a dilemma here. I've started to learn game development, more specifically -- OpenGL. My GPU supports OpenGL only up to version 3.1, so drivers on Linux aren't the problem here. The problem is that most of the companies create games or game engines only for Windows or OS X, so I thought an experience in working with Windows would be good. But... I'm a total Linux lover and after installing Windows... I just hate it and I very much miss Linux. I've been thinking a lot, but if I stay with Windows, I'll have more pros. I look forward to any opinion on what I should do, what the advantages are of staying with Windows, or working on Linux etc.

--Arnas

Edited by arnsa, 08 February 2013 - 12:28 PM.

### #2 L. Spiro, posted 08 February 2013 - 12:29 PM

Anything regarding the future of gaming on Linux as a result of Valve is just speculation at this point, so I won’t consider it in my answer. There are several issues to consider and you will have to consider them for yourself.

Firstly, if this is just as a hobby, stick with what you love. If you plan to go commercial, which is unlikely for a long time, Windows may be a better choice. Sometimes making money means making sacrifices.

Secondly, if you do plan to go commercial, as an indie it would be more practical to target iOS (target Android only if you are masochistic). The level of quality a game needs to be marketable on Windows is above what most individuals can achieve. This would be true even if you stayed on Linux, but with the added hurdle of a smaller potential customer base. iOS is OpenGL ES 2.0 anyway, which is really better to learn than OpenGL since it doesn’t have all the deprecated/excess cruft OpenGL has. I see people even today learning OpenGL who somehow managed to do so via immediate mode. OpenGL ES removed all the crap that should never have been there in the first place, so you are fairly safer in your learning process by starting with it.

Thirdly, if you did decide to go with Windows, it wouldn’t make sense to use OpenGL. You would want to avoid a lot of headache and use either Direct3D 9 or Direct3D 11. If you are fully decided that OpenGL will be your API-of-choice, Windows is still an option, but a less-sensible one. Not because OpenGL on Windows specifically has problems, but because OpenGL itself has problems with the large variety of vendor implementations etc. It is easy to see questions here constantly about how it works on nVidia but not on ATI. My own engine has the opposite problem, as I just discovered after buying an nVidia card (whereas it functions identically in Direct3D 9 and Direct3D 11). This is a problem with the vendors and the multitude of implementations out there, which means it is not just Windows, but Linux too. So if you do go to Windows, Direct3D * would be a better choice.

L. Spiro

Edited by L. Spiro, 30 November 2013 - 09:34 PM.
### #3 arnsa, posted 08 February 2013 - 12:42 PM

> Anything regarding the future of gaming on Linux as a result of Valve is just speculation at this point, so I won’t consider it in my answer. [rest of post #2 quoted above]

Actually, I'm not planning to create mobile games at all, because I don't like them. Will I go commercial? I don't know. Currently I'm only in 11th grade, so I will still be studying at school for 1+ years, plus a minimum of 4 years at university. After that, I might go commercial, because at the moment I want a game development-related job, and yes, game development is my hobby now.

What about Direct3D... I heard its API is really crappy, and so is the documentation. So, it could be hard to learn it, true?

### #4 phantom, posted 08 February 2013 - 01:14 PM

> What about Direct3D... I heard its API is really crappy, and so is the documentation. So, it could be hard to learn it, true?

Judging by your opening post about loving Linux it isn't overly surprising you've heard this, however it couldn't be further from the truth. Of the two APIs D3D11 is the better of the two; it is well documented and all together saner. OpenGL, while feature-wise on a level with D3D11, remains tied to the broken bind-to-edit model which makes working with it pants-on-head retarded.

That said, working with OpenGL won't hurt you initially, so you don't feel you have to swap over to Windows in order to progress; all the basic knowledge of 3D rendering is transferable between the two, you just have to learn a different way of doing things.
You'll probably want to pick up D3D at some point however right now you can focus on expanding your knowledge with OpenGL on Linux. ### #5PaloDeQueso Members - Reputation: 333 Like 0Likes Like Posted 08 February 2013 - 01:17 PM I encourage you to spend time and look at a few different operating systems, APIs and development environments in general. I found love with OpenGL and KDevelop on KDE in Linux (Kubuntu). I haven't looked back in quite a while. I keep my engine/games cross platform so they'll run on Windows, OSX and Linux though, just for good form. Douglas Eugene Reisinger II Projects/Profile Site ### #6Mike.Popoloski Crossbones+ - Reputation: 3188 Like 3Likes Like Posted 08 February 2013 - 01:18 PM What about Direct3D... I heard it's API is really crappy, so is the documentation. So, it could be hard to learn it, true? False. It's actually the opposite; OpenGL's API hasn't changed much since the 80's and the documentation is practically nonexistent (in fact, if you go to the OpenGL website, it says "The two essential documentation resources that every developer should have are the latest releases of:" and then gives two links to Amazon books you need to buy). It's much harder to learn to use effectively, since the API no longer reflects much of what happens in modern hardware, and there are multiple paths to accomplish most tasks, the right one being non-obvious or even changing depending on which hardware vendor or driver version you're using. That said, if you're targeting any platform other than Windows, you don't have much choice but to suck it up and tough it out anyway. Edited by Mike.Popoloski, 08 February 2013 - 01:18 PM. Mike Popoloski | Journal | SlimDX ### #7larspensjo Members - Reputation: 1561 Like 2Likes Like Posted 08 February 2013 - 02:08 PM It shouldn't be too hard to port the Linux application to Windows, whereas it is not realistic to port a D3D Windows application to Linux. If you use a library like glfw (portable context and input device management), then you are already half way. In Windows, you set up MinGW, to get a similar environment. There may be problems depending on what Linux libraries you used, it depends. In my experience, OpenGL is not one of the problems (except for Intel graphics). Current project: Ephenation. Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/ ### #8Vilem Otte Crossbones+ - Reputation: 2089 Like 2Likes Like Posted 08 February 2013 - 03:10 PM Basically it's up to you. Linux applications are common to be portable (unlike for Windows application). Although it's all about your will (do you want your software Windows-only, or portable). It's possible to write easily portable software on Windows, and it's also possible to write non-portable software on Linux. For example we're sticking in all our applications to standard libraries (basically *just* bare standard of libc, (sometimes libstdc++ - depends on language we use)) and portable libraries (OpenGL, OpenAL, DevIL, GTK, ODE, etc.) - most of them are also open source by the way. E.g. we're basically trying to make our software system independent. We're writing & testing all our software under both systems. So far we haven't met any system-specific problem! #larspensjo - I object to port of D3D applications. If you rewrite your D3D functionality to WineD3D (this actually makes your software portable - and use OpenGL instead of D3D in the end), although I don't actually know how much work it is, because I haven't worked with WineD3D. 
Edited by Vilem Otte, 08 February 2013 - 03:12 PM. My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com ### #9larspensjo Members - Reputation: 1561 Like 0Likes Like Posted 08 February 2013 - 03:24 PM If you rewrite your D3D functionality to WineD3D... I didn't know of WineD3D. On the other hand, it hasn't changed for 3 years. Doesn't have to be a problem, but usually a sign of a package in decline. Current project: Ephenation. Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/ ### #10L. Spiro Crossbones+ - Reputation: 21316 Like 9Likes Like Posted 08 February 2013 - 07:32 PM I want game development-related job This is just another thing to consider, but since it is your future career it is probably the most important thing to consider. I can personally vouch for Nintendo DS, Nintendo Wii, Nintendo 3DS, Xbox 360, PlayStation 3, PlayStation Portable, PlayStation Vita, and Android all requiring Windows for development, and iOS requiring Mac OS X (I have been corrected on Android—it has kits for Linux, but it is the only one in this list that does). The only time I have ever even seen a Linux machine was the one day I worked at Morgan Stanley, which is clearly unrelated to video games. Not only that, excluding mobiles (since you stated you don’t want to work with them), all current consoles, as well as development for Windows, are much closer to (or exactly) DirectX 11. It is no secret that the next Microsoft console will use DirectX 11, and PlayStation 4 will be as well (or rather extremely similar). These will likely still be the relevant consoles when you graduate. Ultimately, as was said, you can learn OpenGL as a means of learning general rendering concepts, but you can do the same with Direct3D 11 and avoid having to relearn an API in the future. In other words you can learn the concepts and then struggle with the relevant API later, or you can just learn the concepts and the relevant API up-front. Besides, as was also stated, OpenGL’s bind-to-edit mechanism is a headache and OpenGL is simply refuses to evolve, sticking to the mistakes of a somewhat naïve upbringing for the sake of compatibility, whereas Direct3D 11 is a recently fully overhauled API designed to match the way modern graphics hardware works, where backwards compatibility has been sacrificed in order to rid itself of design flaws of the past. By now the choice should be fairly clear. And speaking from personal experience when I had to work on Mac OS X after having avoided it like the plague for decades: You get used to it. L. Spiro Edited by L. Spiro, 11 February 2013 - 05:39 PM. ### #11Butabee Members - Reputation: 270 Like 2Likes Like Posted 08 February 2013 - 07:55 PM Use Unity 3D, you can build for PC, Mac, and Linux with the change of an option ### #12Xanather Members - Reputation: 753 Like 1Likes Like Posted 08 February 2013 - 09:26 PM If your going to use something like C# you should look at MonoGame. You would be able to develop your game on Linux and very easily port it over to windows (In most circumstances I think MonoGame has over 95% code compatibility). ### #13Lightness1024 Members - Reputation: 860 Like 3Likes Like Posted 09 February 2013 - 08:53 AM linux is still great to fiddle with for multiple reasons. It is the most used operating system in the high performance computing world. 
And the most important system to know as administrator of those HPC, administrator of data-center, websites hosting services, administrator of universities IT, some domains/companies with Unix history, hospitals IT service.. etc you name it. Linux is handy. Next thing, it is great as a student because in the future linux takes a lot of time and has few commercial uses, and even at home it gets tiring. get a wife and children and linux is almost forgotten the first time you see yourself opening a .conf file. BUT if feels so good to have another engine running the whole damn thing under the hood. it feels neat. windows has very old histerical raisins and some stuff are just so bloated and slow. whereas linux has refactored 100 times to get where it is now. But let's not go down the slope of troll-land. What Spiro said can not be more true, except Android has its SDK available on windows, linux and OSX. Seems pretty logical, Android is a linux distribution. Google has linux expertise, they are a web company... A last word about University, if you go to a respectable curiculum with a bit of history and not some new age private school that will give you 100 certifications from Sun, Oracle, SAS, Microsoft and other ridiculous papers that are pure management bullshit, you'll learn the unix way, because computer science originates from there, and widnows has just been re-inventing the wheel in a square shape. What I mean by that, is that knowing how a linux works (but use debian for that, not ubuntu because you'll see nothing, just cute GUIs..), you'll have a step back and get a more canonical approach to computer science which I believe is great to have when you come back to the windows world. Not only that, but for university work, you'll have to work with ssh to log on the university servers, and the correct way is from linux. (ssh -X, zsh, csh...) However, for pure graphics, linux is a PITA if you don't have THE graphic cards for which you COULD have "nice" drivers if you happen to have a distribution that lets you have them. (hint: get an nVidia and don't be afraid of having to have to rebuild your kernel (if you have debian) or just use ubuntu its easier... at least at first.) There are difficulties to get correct acceleration, and the multiplicity of systems in place in the community doesn't help. (Gallium, DRM, DRI, Mesa, Xorg, Compiz and the driver hell, nouveau, renouveau, fglrx, nv, nvidia and i'm not even talking of the issues with dual screen... that makes me cry) But its a lot of fun also the compilation process if just so simpler than on windows. "apt-get install build-essentials" is the only thing you need before you can code. I don't know how many URLs you have to browse before you can download a compiler on windows, and how difficult it will be to setup all the libraries you need to link correctly with your project. on linux its often all prepared. you have autotools and cmake , everything is tightly organized in the distribution and libraries install all in the same place, so cmake package finders never loose whereas on windows.... for example, boost library, the most useful libraries set for C++ ever, you guessed it, one apt-get install only before you can use it in your code, in windows you are usually HOURS away. you need to build it, configure the projects .. aaaargh the pain, i can still feel it. fortunately there is a guy who does a binary package for windows but it matches your compiler only if you're lucky anyway. another thing : Emacs. 
of course there are Vi people who will want to argue otherwise. But really knowing Emacs (or say Vi) will give you some edge in code-text edition power over the people only knowing IDE like visual studio. (poor guys they don't even know what they loose) Downside, learning emacs is looooong, and difficult, and almost impossible alone. Also it requires knowledge of lisp to edit the unavoidable .emacs config file. But you probably already know that just for an ending word, many companies run some servers. these servers are greating running linux for remote administration comfort. So knowing linux.. again a plus. for middle sized companies without admins where anybody can do a bit of admin job from time to time, if you're the only one knowing how to configure iptables and to a little ./configure make make install, you'll have a serious edge in the eyes of the management. particularly when you install an apache server running some django magic with a buildbot along a gitosis service ... or whatever other stuff that are needed in companies. (mediawiki, mailservers, NFS, backups...) If you go work at some famous big ones out there later in your life: - google, already mentioned, they notably contribute to webkit - intel, very active in linux development, because it is easily recompilable they can test lots of their CPU features there. c.f. powertop utility, intel c compiler... - every single researcher out there. may it be in forest and nature (my sister in law did her thesis on a kind of forest growing model with a demonstration app using C made on unix environment), or computer scientists, biologists, doctors.. c.f vizualisation toolkits like VTK (Kitware Inc.)... - nvidia, the fermi strategy has lead them on linux because aforementioned HPC reasons. - IBM, Red Hat, Novell for the most famous. excellent report that shows that: http://go.linuxfoundation.org/who-writes-linux-2012 ### #14brycethorup Members - Reputation: 101 Like 0Likes Like Posted 09 February 2013 - 09:44 AM I think the real question here is how deep into code do you want to get. If you want to get into the nitty-gritty of every aspect of your code you are going to end up having to lock-in to a specific platform (i.e. Window, linux, mac, etc.). It is true that it is possible to maintain cross compatibility, but the more complex your game becomes the more difficult it is. Frankly, unless you are planning on someone's platform specific engine, or building your own from scratch I would avoid low-level APIs entirely. Let me throw out a few options to look into that are both very simple to code for, and provide cross-compatability: ShiVa 3D, Unity, Blender. For someone starting out I highly recommend Blender. It is fully integreated, meaning you don't have to use one utility to make objects another to make textures another to manage code and then pull it all together somehow, it's all in one place. It uses Python for it's game language. Lastly, it is completely open-source, and free to download. It isn't meant for crazy complicated games, but for a place to start in game dev it is the only one I can recommend to absolute beginners. They have a few tutorial resources on their site (blender.org), but remember Google and Youtube are your friends. ### #15Karsten_ Members - Reputation: 1807 Like 2Likes Like Posted 09 February 2013 - 12:14 PM If you prefer to use Linux/UNIX to develop your game (i.e because you use it as your day to day OS) then you might have some success with wine-g++ and DirectX. 
You probably wont be able to use closed source engines with this solution though because I doubt they would be able to link with GCC objects. Engines I have used that work pretty well on both platforms include include Ogre3D and Irrlicht I have not had great experience with Unity's Linux support. It only really works on the very latest distributions. RedHat Enterprise 6 couldnt run it due to incompatible glibc versions. This isn't really Unity's fault but is a symptom of using a closed source engine. Linux doesn't really maintain backwards binary compatibility in the way Windows does. (Which is why NVIDIA and AMD's drivers tend to be problematic). OpenGL works on every platform I have ever used so I always strongly recommend this, even some people tell you that it isn't quite as "good" as DirectX. As an indie developer, it probably wont even make a difference to you. Personally, I find it much easier to get started with. Edited by Karsten_, 09 February 2013 - 12:19 PM. Mutiny - Open-source C++ Unity re-implementation. Defile of Eden 2 - FreeBSD and OpenBSD binaries of our latest game. ### #16BGB Crossbones+ - Reputation: 1558 Like 0Likes Like Posted 11 February 2013 - 03:56 PM yes, personally I found OpenGL to be a little more accessible than D3D, and the portability is a bit of a plus-point (partly as I develop for both Windows and Linux, and was also considering possibilities like NativeClient and Android as well). not to say that everything about it is necessarily good, but it works. I had generally been using the full-featured OpenGL, and also using a fair amount of "legacy" functionality, but trying to migrate things to be able to work with OpenGL-ES is also a little bit of a challenge, mostly as lots of functionality that was taken for granted no longer exists (not all of it for entirely clear reasons), resulting in a fair chunk of the renderer recently being migrated to wrappers (for the most-part, the "fixed function pipeline" is now largely faked via wrappers, errm, partly as wrappers were the path-of-least-effort, and it was admittedly a little easier to move what bits of code that were still using glBegin/glEnd over to wrappers, than decide whether or not to prune them, or rework them to use VAs or VBOs or similar, and likewise went for faking the transformation matrices, ...). but, in general, it isn't all that bad. my exposure to D3D has generally been as a bunch of awkwardness involving DX version specific COM objects, PITA getting things linked correctly (since the Windows SDK and DirectX SDK are separate, and it essentially amounts to hard-coding the DX SDK install path into the build-files), and all-around a fair bit of general inconvenience doing pretty much anything, and all this with the code being largely platform-specific anyways, doesn't seem like a good tradeoff. most of what functionality I have needed, can be found either in OpenGL or in the Win32 API (most of which is wrapped over anyways, via OS-specific code), making these generally a lot more convenient. advanced rendering features and high-level API design issues aren't really such a big deal in this case. EDIT, did find this: http://en.wikipedia.org/wiki/Comparison_of_OpenGL_and_Direct3D Edited by cr88192, 11 February 2013 - 04:11 PM. ### #17SimonForsman Crossbones+ - Reputation: 7010 Like 0Likes Like Posted 11 February 2013 - 05:47 PM Karsten_, on 09 Feb 2013 - 19:13, said: Linux doesn't really maintain backwards binary compatibility in the way Windows does. 
(Which is why NVIDIA and AMD's drivers tend to be problematic). Actually Linux distributions are binary compatible with eachother and the LSB mandates that they support the old ABIs(Which means they are always backwards compatible), drivers are a different matter since the kernel interfaces change frequently, but that goes for any OS, (Microsoft have changed their kernel interface with almost every single kernel release they've made and it breaks driver compatibility almost every time. If you want to use proprietary drivers in Linux and avoid problems the only thing you have to do is use the kernel that ships with the OS. (And use an OS that doesn't push out new kernel versions as part of their normal update routine or atleast one that installs new versions of any proprietary driver you're using at the same time) The fact that Unity3D doesn't work well with RHEL6 has nothing to do with backwards compatibility, glibc is backwards compatible these days(it wasn't back in the 90s, but this isn't the 90s) but old versions of glibc does not magically support software that requires newer glibc versions. (Just like you can't run a game that requires D3D11 on Windows XP). If you want to run modern software on Linux, do not use RHEL, it is ancient before it gets released(RHEL7 should be able to run games made with the current version of Unity3D). Its great if you need stability, but if Microsoft did like Redhat the latest Windows release would be Windows 2000, SP13 and the only feature updates we'd get would be for things like Hyper-V or MSSQL. Personally i wouldn't use Unity3D to target Linux today though, if i sell a linux game i have to support it, and offering a unsupported linux client to those who buy the game for Windows or Mac is pretty pointless. Seeing how much problems Unity3D has had to get Android support working reasonably well i'd prefer to wait until others have run over and exposed most of the pitfalls, once that is sorted i might consider supporting Ubuntu and possibly Mint. Edited by SimonForsman, 11 February 2013 - 06:03 PM. I don't suffer from insanity, I'm enjoying every minute of it. The voices in my head may not be real, but they have some good ideas! ### #18L. Spiro Crossbones+ - Reputation: 21316 Like 1Likes Like Posted 11 February 2013 - 05:54 PM cr88192, on 12 Feb 2013 - 06:47, said: PITA getting things linked correctly (since the Windows SDK and DirectX SDK are separate, and it essentially amounts to hard-coding the DX SDK install path into the build-files) Huh? Adding linker/header paths to the IDE search directories is fairly standard practice and if you have to hard-code anything you’re doing it wrong. If you are not using IDE’s and using makefiles directly, firstly, that’s just pain you are bringing onto yourself and you have no one else to blame. Secondly you can still use environment variables ($(DXSDK_DIR) would be a good one!) to avoid hard-coding paths. If the fact that the Windows SDK and the DirectX SDK are separate (as they very-well should be) caused you even the slightest inconvenience, I think you need to gain a bit more experience in general programming, because linking to libraries is a fact of life in the world of programming. The concept of “search paths” exists whether you are using an IDE or raw makefiles, and every programmer should know about this at an early age. Speaking for myself, my first word as a child was “Mamma”. My second was “Chocolate cake”. My third was “Search paths”. L. Spiro Edited by L. Spiro, 11 February 2013 - 05:57 PM. 
### #19 phantom, posted 11 February 2013 - 06:09 PM

> If the fact that the Windows SDK and the DirectX SDK are separate (as they very-well should be)

Although they aren't any more; June 2010 was the last DX SDK update for DX11. With Windows 8 DX/D3D is now part of the platform SDK and will be updated (or not) as that is updated.

### #20 BGB, posted 11 February 2013 - 10:26 PM

> cr88192, on 12 Feb 2013 - 06:47, said: PITA getting things linked correctly (since the Windows SDK and DirectX SDK are separate, and it essentially amounts to hard-coding the DX SDK install path into the build-files)

> Huh? Adding linker/header paths to the IDE search directories is fairly standard practice and if you have to hard-code anything you’re doing it wrong. [rest of post #18 quoted above]

Many people still build from the command line using GNU Make, FWIW... this has the advantage that many core files for the project (much of the Makefile tree) can be shared between OS's (vs say a Visual Studio project, which is only really useful to Visual Studio...).

But, yes, although a person can get it linked, it is less convenient given it isn't kept along with all the other OS libraries. Like, in the top-level Makefile, a person may need something like:

export DXSDK="C:\\Program Files..."

as well as passing this back to CL as part of their CFLAGS and similar. Not that it can't be done, but a person can just as easily not do so, and instead just use core libraries (those provided by the Windows SDK for Windows builds, which happens to include OpenGL and the Win32 API, but not typically DirectX).

But the bigger question is, how worthwhile is it to have a big dependency for something that is largely Windows-specific anyways (and can't really be used to any real non-trivial degree without putting portability at risk)?...

Edited by cr88192, 11 February 2013 - 10:26 PM.
# Bug#617296: Any Progress with RStudio? Dear Dirk and others, On May 14 2014, Dirk Eddelbuettel wrote: > On 14 May 2014 at 17:08, Rogério Brito wrote: > | Anyway, I can push the *super* embrionary packaging that I have so far. I > | would like some help with the maintainance of this package since I have > | barely any time left with the amount of packages that I maintain. > > I am semi-regularly IM'ing or emailing with the RStudio founder whom I'll > meet tomorrow. I also have pretty good contacts with a number of other > RStudio developers and engineers. That's great. I would love to know what to do about RStudio to convince it to (while building) to use some off-the-tree packages like hunspell, mathjax and possibly others. > You want to look at the current dev packages, eg (in binary) > which, inter alia, contain a very cooked-up local build of pandoc to be able > to get the very, very latest pandoc binary without any depends. Thanks. Somehow I missed that directory. > I am not sure how ready this is even for Debian unstable, and they _do_ You probably meant experimental here? I installed and started using rstudio and I have never been so impressed with an IDE like this in ages. There are so many goodies with it that it would be a real pity to not have it in Debian. That being said, I don't think that the FTP masters would let us upload something that duplicates a lot of stuff, but that shouldn't prevent us (or the interested parties) from working on the package and start solving the small problems (like those that I mentioned before), detecting unpackaged dependencies (e.g., knitr and possibly many others) etc. > I can ask tomorrow, but RStudio is still a pretty fast moving target. Thanks. It would be nice to know if they are moving from Qt4 to Qt5 in the short time or not. Also, if they would like to see RStudio packaged independently from them. And there are probably other smaller issues like linking rstudio with
# Unit conversions in chemistry ### Unit conversions in chemistry #### Lessons In this lesson, we will learn: • The units of measurements commonly used in chemistry. • How to use the unit conversion method and the reason it is valuable. • Practical examples of using the unit conversion method to do calculations in chemistry. Notes: • In any problem where information you have has different units to the information you're being asked for, you'll need to do a unit conversion. • Chemistry calculations involve units like number of moles (units: mol), the mass of a substance (units: g), the volume of a gas, liquid or solution (units: L) and others. • Calculations in chemistry can be solved by breaking down questions into segments: • An unknown quantity to be found - the answer to the question. • An initial quantity to be converted into the units of the unknown quantity. • A conversion factor(s) linking the unknown quantity and the initial quantity. • A conversion factor is an expression as a fraction that equates one unit to another. For example: $\frac{1\;min}{60\;s}$ and $\frac{60\;s}{1\;min}$ • Because the value of both terms in the unit conversion are equal (60 seconds is equal to 1 minute), when multiplying by a unit conversion the value of the expression doesn't change. • This also means you can arrange either term (seconds or mins) on the top or the bottom; arrange it so that your original units cancel and you convert to the new units. This is why it is known as a conversion factor. • CONVERSION FACTORS WILL CHANGE THE UNITS WITHOUT CHANGING THE VALUE! • To solve calculations using the unit conversion method, the following steps should be done in order: • Identify the unknown quantity to be found – this should be written with units and put one side of an equation. • Identify the initial quantity the question has given you – this starts, with units, on the other side of the equation. • Apply the unit conversion(s) by multiplying it with the initial quantity you were given. • This works even if multiple unit conversions are necessary – this method also encourages you to display your working clearly so any mistakes are usually easy to spot! • For example: If there are 6 eggs in a box, how many eggs would be in 4.5 boxes? • For example (part 2): If an egg costs $2 each, how much does 3 dozen eggs cost? • This method can be used beyond chemistry to solve any problem involving a known quantity that can be converted into another unknown quantity. • Introduction Introduction to unit conversions a) Units and calculations in chemistry. b) What is a unit conversion? c) Unit conversion method: Walkthrough • 1. Apply the conversion factor method to simple calculations. Use the unit conversion method to answer the following problems. a) If a car can travel 75 kilometres in 1 hour, how far can it travel in 4.5 hours? b) An electronics store has an offer that sells two TVs for$430. How much would 8 TVs cost using this deal? c) At a market, a man traded 8 apples for 14 oranges. How many oranges can he get if he traded 22 apples at this rate? • 2. Apply the conversion factor method to chemistry-related calculations. Use the unit conversion method to answer the following problems. a) If a container holds 3.5 dozen oxygen atoms, how many oxygen atoms in total will 3 full containers have in them? b) If one mole of oxygen gas has a mass of 32 g, what is the mass of 3.5 moles of oxygen gas? 
c) If one molecule of white phosphorus has 4 atoms of phosphorus in it, how many molecules of white phosphorus would be needed to have 72 atoms of phosphorus? d) If one mole of hydrogen gas fills up 22.4 L in a gas canister, how many moles of hydrogen gas would fill up a gas canister 180 L in size? • 3. Apply the conversion factor method to chemistry-related calculations with SI units. Use the unit conversion method to answer the following problems. a) 4.7 moles of carbon dioxide gas has a mass of 206.8 g. What is the mass of 1 mole of carbon dioxide gas? b) Gold has a density of 19.3 grams per millilitre (g/mL), what would the volume be of 62 grams of gold? c) If an acid has a concentration of 3 moles per litre, how many litres of acid would I need to have 1.8 moles of acid?
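As a worked illustration of the conversion factor method, using the conversion factors already given in the notes (the eggs-per-box example and the seconds-per-minute factor):

$4.5 \;\text{boxes} \times \frac{6 \;\text{eggs}}{1 \;\text{box}} = 27 \;\text{eggs}$

$3 \;\text{min} \times \frac{60 \;\text{s}}{1 \;\text{min}} = 180 \;\text{s}$

In both cases the original unit cancels and only the new unit remains, while the underlying value is unchanged.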
# Hausdorff measure of rectifiable curve equal to its length Let $(\mathbb{R}^n,d)$ be a metric space. A continuous, injective mapping $\gamma: [0,1]\to \mathbb{R}^n$ is a curve and denote its image $\overline{\gamma}:=\gamma([0,1])$. I wish to prove that its Hausdorff measure, $H^1(\overline{\gamma})$, is equal to the length of the curve $L$. In particular I am having trouble showing that $$H^1(\overline{\gamma})\leq L.$$ Any ideas? The length of the curve is defined by $$L = \sup\left\{\sum\limits_{i=1}^md(\gamma(t_{i-1}),\gamma(t_i))\,\bigg|\, 0 = t_0 < t_1 < \dots < t_m = 1 \right\}.$$ We have that $$H^1_\delta(E) = \inf\left\{\sum\limits_{i=1}^\infty\text{diam}(A_i)\,\bigg|\,\bigcup\limits_{i=1}^\infty A_i \supseteq E,\,\text{diam}(A_i)\leq \delta\right\}.$$ That is, the infimum is taken over all possible countable coverings $(A_i)_{i=1}^\infty$ of $E$, where the sets $A_i$ are "small enough." We then define the Hausdorff measure as $$H^1(E) = \lim\limits_{\delta\to 0^+}H^1_\delta(E).$$ My idea is that I want to show that for all $\varepsilon>0$ there exists $\delta > 0$ such that $$H_\delta^1(\overline{\gamma})\leq L +\varepsilon$$ where $\delta$ is proportional to $\varepsilon$ such that letting $\varepsilon\to 0^+$ also forces $\delta \to 0^+$, and we get $$H^1(\overline{\gamma})\leq L,$$ however I couldn't succeed in showing this. To prove $H^1(\bar\gamma)\le L$, begin by picking a partition $t_0,\dots, t_m$ such that $$\sum\limits_{i=1}^md(\gamma(t_{i-1}),\gamma(t_i)) > L-\epsilon \tag{1}$$ and $d(\gamma(t_{i-1}),\gamma(t_i))<\epsilon$ for each $i$. Let $A_i = \gamma([t_{i-1},t_i])$. Suppose $\operatorname{diam} A_i>2\epsilon$ for some $i$. Then there are $t',t''\in (t_{i-1},t_i)$ such that $d(\gamma(t'),\gamma(t''))>2\epsilon$. So, after these numbers are inserted into the partition, the sum of differences $d(\gamma(t_{i-1}),\gamma(t_i))$ increases by more than $\epsilon$, contradicting $(1)$. Conclusion: $\operatorname{diam} A_i\le 2\epsilon$ for all $i$. Suppose $\sum_i\operatorname{diam} A_i>L+ \epsilon$. For each $i$ there are $t_i',t_i''\in (t_{i-1},t_i)$ such that $d(\gamma(t'),\gamma(t''))>\operatorname{diam} A_i - \epsilon/m$. So, after all these numbers are inserted into the partition, the sum of differences $d(\gamma(t_{i-1}),\gamma(t_i))$ will be strictly greater than $L+\epsilon - \epsilon = L$, which is again a contradiction. Thus, the sets $A_i$ provide a cover such that $\operatorname{diam} A_i\le 2\epsilon$ for all $i$ and $\sum_i\operatorname{diam} A_i\le L+ \epsilon$. Since $\epsilon$ was arbitrarily small, $H^1(\bar\gamma)\le L$. For completeness: the opposite direction follows from the inequality $$H^1(E)\ge \operatorname{diam} E\tag{2}$$ which holds for any connected set $E$. To prove it, fix a point $a\in E$ and observe that the image of $E$ under the $1$-Lipschitz map $x\mapsto d(x,a)$ is an interval of length close to $\operatorname{diam} E$ provided that $a$ was suitably chosen. Then apply $(2)$ to each $\gamma([t_{i-1},t_i])$ separately. • Thanks a lot for the answer. I actually found another way to prove the first part, however, I'm interested in your proof of the second inequality. Applying the inequality you wrote I get $$H^1(\gamma([t_{i-1},t_i]))\geq \text{diam}(\gamma([t_{i-1},t_i]))\geq d(\gamma(t_{i-1}),\gamma(t_i))$$ Summing these up I seem to get something that resembles $L$, but it seems to me that we get inequalities pointing in the incorrect direction. Can you expand a bit on it? 
– Eff Mar 14 '15 at 10:28 • For example, how can one justify that $$H^1(\overline{\gamma}) = \sum\limits_{i=1}^m H^1(\gamma([t_{i-1},t_i]))$$ if it indeed is that which should be used? – Eff Mar 14 '15 at 11:57 • $H^1$ is a Borel measure, so it is additive over disjoint Borel sets (and these subarcs are disjoint except for the endpoints, which have measure zero). Summing up, you get $H^1>L-\epsilon$, which is good enough. – user147263 Mar 14 '15 at 15:38 • @Meta How do you get a contradiction when proving $\text{diam} A_i \le 2\varepsilon$? – Alan Watts Apr 26 '16 at 12:48 Other approach (it is not completely clear to me that, as the other answer, we can choose such $\epsilon$ in this way) that construct by recursion partitions of $[0,1]$ is as follows $$a_{k+1}:=\inf\{x\in[a_k,1]:|\gamma(a_k)-\gamma(x)|=\epsilon\}\cup\{1\}\tag1$$ where we set $a_0:=0$. Then we have a partition of $[0,1]$ defined by $\mathfrak Z:=\{a_0,a_1,\ldots,a_m\}$ with the property that \begin{align*}|\gamma(a_k)-\gamma(a_{k+1})|&=\operatorname{diam}\big(\gamma([a_k,a_{k+1}])\big),\quad\forall k\in\{0,\ldots,m-2\}\\ |\gamma(a_{m-1})-\gamma(a_m)|&\le\operatorname{diam}\big(\gamma([a_{m-1},a_m])\big)\le\epsilon\end{align*}\tag2 (note that by construction $a_m=1$). Then we find that $$\mathcal H_\epsilon^1(\bar\gamma)\le\sum_{k=0}^{m-2}\operatorname{diam}\big(\gamma([a_k,a_{k+1}]\big)+\operatorname{diam}\big(\gamma([a_{m-1},a_m])\big)\\ \le\sum_{k=0}^{m-2}|\gamma(a_k)-\gamma(a_{k+1})|+\epsilon\le L(\bar\gamma)+\epsilon\tag3$$ Then taking limits above as $\epsilon\to 0^+$ we find that $\mathcal H^1(\bar\gamma)\le L(\bar\gamma)$, as desired.
• Author: Kei Ieki
• Publisher: Springer Verlag, Japan
• Series: Springer Theses
• Language: English
• Published: 2018-03
• Number of pages: 199
• Edition: Softcover reprint of the original 1st ed. 2016; weight in grams: 338
• ISBN10: 4431567070
• ISBN13: 9784431567073

# Observation of νμ→νe Oscillation in the T2K Experiment

by Kei Ieki

##### Description:

In this thesis the author contributes to the analysis of neutrino beam data collected between 2010 and 2013 to identify νe events at the Super-Kamiokande detector. In particular, the author improves the pion-nucleus interaction uncertainty, which is one of the dominant systematic error sources in the T2K neutrino oscillation measurement. In the thesis, the measurement of νμ→νe oscillation in the T2K (Tokai to Kamioka) experiment is presented and a new constraint on δCP is obtained. This measurement and the analysis establish, at greater than 5σ significance, the observation of νμ→νe oscillation for the first time in the world. Combining the T2K νμ→νe oscillation measurement with the latest findings on oscillation parameters, including the world average value of θ13 from reactor experiments, the constraint on the value of δCP at the 90% confidence level is obtained. This constraint on δCP is an important step towards the discovery of CP violation in the lepton sector.
Hardcover | $55.00 Short | £37.95 | ISBN: 9780262013277 | 432 pp. | 7 x 9 in | 85 b&w illus., 3 tables| September 2009 Ebook |$39.00 Short | ISBN: 9780262259507 | 432 pp. | 85 b&w illus., 3 tables| September 2009 ## Overview This book offers an introduction to current methods in computational modeling in neuroscience. The book describes realistic modeling methods at levels of complexity ranging from molecular interactions to large neural networks. A “how to” book rather than an analytical account, it focuses on the presentation of methodological approaches, including the selection of the appropriate method and its potential pitfalls. It is intended for experimental neuroscientists and graduate students who have little formal training in mathematical methods, but it will also be useful for scientists with theoretical backgrounds who want to start using data-driven modeling methods. The mathematics needed are kept to an introductory level; the first chapter explains the mathematical methods the reader needs to master to understand the rest of the book. The chapters are written by scientists who have successfully integrated data-driven modeling with experimental work, so all of the material is accessible to experimentalists. The chapters offer comprehensive coverage with little overlap and extensive cross-references, moving from basic building blocks to more complex applications. Contributors Pablo Achard, Haroon Anwar, Upinder S. Bhalla, Michiel Berends, Nicolas Brunel, Ronald L. Calabrese, Brenda Claiborne, Hugo Cornelis, Erik De Schutter, Alain Destexhe, Bard Ermentrout, Kristen Harris, Sean Hill, John R. Huguenard, William R. Holmes, Gwen Jacobs, Gwendal LeMasson, Henry Markram, Reinoud Maex, Astrid A. Prinz, Imad Riachi, John Rinzel, Arnd Roth, Felix Schürmann, Werner Van Geit, Mark C. W. van Rossum, Stefan Wils
Answer the following question based on the information given below.

There are three bottles of water, A, B, C, whose capacities are $5$ litres, $3$ litres, and $2$ litres respectively. For transferring water from one bottle to another and to drain out the bottles, there exists a piping system. The flow through these pipes is computer controlled. The computer that controls the flow through these pipes can be fed with three types of instructions, as explained below:

• FILL (X, Y): Fill bottle labelled X from the water in bottle labelled Y, where the remaining capacity of X is less than or equal to the amount of water in Y.
• EMPTY (X, Y): Empty out the water in bottle labelled X into bottle labelled Y, where the amount of water in X is less than or equal to the remaining capacity of Y.
• DRAIN (X): Drain out all the water contained in bottle labelled X.

Initially A is full with water; and B and C are empty.

Consider the same sequence of three instructions and the same initial state mentioned above. Three more instructions are added at the end of the above sequence to have A contain $4$ litres of water. In this total sequence of six instructions, the fourth one is DRAIN (A). This is the only DRAIN instruction in the entire sequence. At the end of the execution of the above sequence, how much water (in litres) is contained in C?

1. One
2. Two
3. Zero
4. None of these
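To make the instruction semantics concrete, here is a small Python sketch (my own illustration, not part of the original question) that simulates the three instruction types so candidate sequences can be checked:

```python
CAPACITY = {'A': 5, 'B': 3, 'C': 2}


def fill(state, x, y):
    """FILL (X, Y): top up X from Y; only legal if Y holds enough water."""
    need = CAPACITY[x] - state[x]
    assert state[y] >= need, 'remaining capacity of X must not exceed water in Y'
    state[x] += need
    state[y] -= need


def empty(state, x, y):
    """EMPTY (X, Y): pour all of X into Y; only legal if Y has enough room."""
    assert state[x] <= CAPACITY[y] - state[y], 'water in X must fit into Y'
    state[y] += state[x]
    state[x] = 0


def drain(state, x):
    """DRAIN (X): discard all water in X."""
    state[x] = 0


# Example usage with the stated initial state (A full, B and C empty).
# The two instructions below are only an illustration, not the answer.
state = {'A': 5, 'B': 0, 'C': 0}
fill(state, 'B', 'A')   # A=2, B=3, C=0
fill(state, 'C', 'A')   # A=0, B=3, C=2
print(state)
```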
## Introduction

Perovskite halides, as a type of emerging semiconducting material, exhibit outstanding optoelectronic properties, such as easily tunable optical bandgaps, high charge carrier mobility and long carrier diffusion length1,2,3,4,5. Benefiting from these characteristics, perovskite-based light-emitting diodes (PeLEDs) are considered an alternative medium for high-efficiency solid-state lighting and panel displays. However, the PLQYs of blue perovskite emitters, especially for pure blue emission, lag far behind those of their green and red counterparts, whose corresponding LEDs have both exceeded EQEs of 20%6,7. To achieve high-efficiency and high-luminance blue LEDs, devices with 3-dimensional (3D), 2D, and quasi-2D perovskite films of mixed-Cl/Br halides have been developed8,9,10. These films improve the stability of excitons and enhance the energy transfer through multiple-quantum-well and multi-cation-doped structures. The best EQEs reported so far are 11.7% with an emission peak at 488 nm11 and 13.8% at 496 nm12. However, instead of thin films, perovskite QDs as emitters also show great potential in blue LEDs because of their high photoluminescence quantum yield (PLQY), strong quantum confinement effect, and high monochromaticity. Consequently, the development of blue QD emitters is still a key approach to enhance the performance of blue PeLEDs.

In 2014, the first QD-based PeLED was reported, and blue-emitting devices with mixed Br/Cl QDs were achieved with an EQE of 0.07%13. Since then, various approaches have been employed to modify blue perovskite QDs. Ion doping has proven to be a valid approach, as it alters the energy structure of perovskite QDs. In general, bivalent Mn2+, Sn2+, Cd2+, Zn2+ and Cu2+ and trivalent lanthanide metal ions have often been employed as B-site dopants in blue perovskite QDs14,15,16,17. For example, a blue-emitting LED with Ni2+-doped CsPbX3 emitting at 470 nm exhibited an EQE of 2.4%18. In addition, a multiple-cation doping strategy, i.e., simultaneous doping of the A and B sites by inorganic cations in CsPb(BrxCl3-x), achieved high PLQY and an EQE of 2.14% for blue QD LEDs19. Apart from those, acid etching produced small-sized QDs with low vacancy defect density, and a maximum EQE value of 4.7% was realized through quantum-confined all-bromide perovskite QDs20. Instead of inorganic cation doping, organic cation doping is another effective strategy for producing blue QD emitters. Organic doping could improve the thermal, moisture, and chemical stability of QDs21,22,23. Compared with all-inorganic Cs-based perovskite QDs, partial organic cation doping may form a more stable crystal structure. For example, FA cations, a common dopant in perovskite solar cells and LEDs, can tune the perovskite tolerance factor close to 1, which improves the structural stability and suppresses ion migration. However, excellent blue QDs with FA cation doping still lack in-depth study, especially for room-temperature synthesis, which is currently the most promising route for large-scale synthesis and commercial application of perovskite QDs.

Herein, we comprehensively study the mechanism of FA cation doping in blue QDs and achieve high-efficiency pure blue QD LEDs. Formamidine acetate (FAAc) was added as a precursor for the emitters. It strongly improves the quality of the QDs, reducing the defect density and nonradiative recombination. Furthermore, FA cations affect the band-edge structure and enhance the interaction between organic cations and the Pb-Br octahedral framework.
The PLQY of pure blue perovskite QDs is improved from 10% (undoped) to 65% (FA doping). The substitution manipulates the crystal growth process, grain size and carrier injection barrier, and reduces defects in the perovskite QDs. Finally, we realize blue perovskite QD LEDs with a strong EL emission peak at 474 nm, corresponding to color coordinates of (0.113, 0.101). The optimized LEDs obtained maximum brightness and EQE values of 1452 cd m−2 and 5.01%, respectively. The LEDs exhibit a T50 lifetime of 1056 s with an initial brightness of 100 cd m–2. Transient absorption spectroscopy clarifies that FA cation doping increases hot-carrier relaxation and decreases nonradiative recombination. Density functional theory (DFT) calculations also elucidate that FA cations influence the density of states of electrons in the valence band (VB) as well as the band structure, which eventually improves carrier injection.

## Results

### Structure characterization

Here, the microstructure of blue perovskite QDs synthesized via a room-temperature ligand-assisted reprecipitation method (the details are shown in the Experimental Section) is shown in Fig. 1. The transmission electron microscopy (TEM) images exhibit cubic QDs of 11 nm for all undoped and FA-doped CsPb(Cl0.5Br0.5)3 QDs (Fig. 1a–e). The insets of grain size distribution statistics further demonstrate the narrower distribution and better uniformity of the cubic phase as more FA cations are added. This is mainly attributed to the FA cations adjusting the crystal framework. Clear lattice fringes were observed (Figs. 1f and S1), and the interplanar spacing of the (200) plane expands noticeably from 2.60 to 2.71 Å as the added FA cation concentration increases from 0 to 0.2 M, which indicates that FA cations were doped into the lattice. The schematic crystalline structure of CsPb(Cl0.5Br0.5)3 QDs is illustrated in Fig. 1g, where FA and Cs cations occupy the same sites.

X-ray diffraction (XRD) and TEM measurements were conducted to explore the effect of FA cations on the structural properties of the QDs. All samples show obvious diffraction peaks around 2θ = 15.7°, 22.3°, 31.6°, 35.2°, 38.8°, and 45.2°, corresponding to the (100), (110), (200), (210), (211), and (220) crystal planes of the cubic CsPb(Br/Cl)3 phase, respectively (Fig. 1h). No extra diffraction peak can be observed in the FA cation doped samples, suggesting that FA+ was incorporated into the perovskite lattice. When the added FA cation fraction reached 100%, the (100) diffraction peak decreases to 14.8° (Fig. S2). Also, the shift of the diffraction peaks toward lower angles suggests that the FA cations cause lattice expansion, which is mainly due to the substitution of the smaller Cs+ (1.81 Å) by the larger FA+ (2.79 Å)24. The (200) plane was extracted as an example, in which a 0.36° shift toward a lower angle was observed with increasing FA+. In addition, Cs+ ions cause harmful shrinkage deformation of the coordination octahedra ([PbX6]4−), which can be corrected by the doping of larger FA cations. However, excessive FA+ ions can cause the angle between two adjacent coordinating octahedra to increase (> 180°). Here, the mechanism of defect healing by FA doping could be ascribed to lattice modulation of the distortion of [PbX6]4−. Figure 1h also shows that the crystal growth tendency is distinctly affected by FA cation addition. FA+-doped QDs realize the manipulation of crystal orientation along the (100) crystal plane, which benefits the light emission25.
### Photoluminescence studies and compositional analysis

Figure 2 shows the optical properties of the pristine and FA+-doped CsPb(Cl0.5Br0.5)3 QDs. Compared with the pristine QDs, the absorption spectra of the FA+-doped samples (Fig. 2a) exhibit a clear low-energy shift of the excitonic peak from 440 nm to 458 nm, and the shift finally reaches 478 nm for the FA+-only emitters (Fig. S3a), indicating a decrease of the QD optical bandgap. To investigate the origin of the bandgap change, we first consider the quantum size confinement of the QDs; the influence of size on the bandgap can be expressed by the equation26,27:
$$\Delta E = \frac{\hbar ^2\pi ^2}{2m_rR^2} - \frac{1.786e^2}{4\pi \varepsilon _0\varepsilon R}$$ (1)
in which mr, R and ε represent the effective mass of the exciton, the particle radius, and the relative dielectric constant of the material, respectively28,29. The calculation shows that the estimated maximum bandgap shift is around 16 meV when the particle size changes from 10 ± 0.3 nm in the pristine CsPb(Cl0.5Br0.5)3 QDs to 12 ± 0.4 nm in the 0.2 M FA+ doped CsPb(Cl0.5Br0.5)3 QDs, which is much smaller than the experimentally observed change of 110 meV. Therefore, we can deduce that the doped FA cations also contribute to the change of the band structure. The PL characteristics of the pristine and FA+ doped CsPb(Cl0.5Br0.5)3 QDs were further explored, as shown in Fig. 2b and Fig. S3. As the FA+/Cs+ ratio increases, the PL peak red-shifts from 456 to 473 nm, finally reaching 498 nm for FAPb(Cl0.5Br0.5)3 (Fig. S3b). More importantly, the PL intensity of the 0.2 M doped CsPb(Cl0.5Br0.5)3 QDs is obviously enhanced compared with that of the pristine QDs. The absolute PLQY is illustrated in Fig. 2c and Fig. S3c: the PLQY of the FA+ doped CsPb(Cl0.5Br0.5)3 QDs gradually increases and approaches 65%, about six times that of the undoped QDs. The PLQY values are listed in Table S1. The increase in PLQY primarily stems from the reduced defect density in the crystal structure upon FA doping. To further explore the dynamic origin of the PLQY change upon FA doping, the time-resolved PL (TRPL) spectra of all samples were measured (Fig. 2d and Table S1), and the decay curves were fitted with a biexponential function. The fluorescence lifetimes are about 137.8, 154.6, 183.4, 201.0, and 214.4 ns for FA+ feeding ratios of 0, 0.05, 0.1, 0.15, and 0.2 M, respectively. The prolonged average lifetime indicates that nonradiative decay channels and defects are suppressed in the doped samples, which improves the radiative recombination of electrons and holes and thus increases the PLQY. As a result, FA doping enhances the exciton binding energy, making excitonic emission dominant in the perovskite QDs, as shown by the steady-state and time-resolved photoluminescence spectra. When the FAAc content is increased beyond 0.2 M (Fig. S3), the PLQY reaches a peak value and then decreases, which is attributed to excessive FA creating new defects. Simultaneously, the excess acid in the precursor solution results in agglomeration of the QDs30. To elucidate the FA cation doping, Fourier transform infrared spectroscopy (FTIR) was conducted on the pristine and treated QD samples. In Fig. 3a, both samples exhibit CH2 and CH3 symmetric and asymmetric stretching vibrations between 2840 and 2950 cm−1, and a CH2 bending vibration at 1466 cm−1, which are the representative absorption peaks of hydrocarbon groups2,31.
For the FA-doped perovskites, a strong peak at 1716 cm−1 (red area) emerges, which represents the C=N stretching vibration of the FA cation32. In addition, the FTIR curve of the FA-doped perovskites exhibits a broad stretching mode around 3300-3500 cm−1 (pink area), which arises from the N-H stretching vibration. These vibrational peaks are clearly enhanced with increasing FA cation content (see Fig. S4 in the Supporting Information). The above data confirm that FA cations are indeed incorporated into the QDs. We also studied the surface composition of the QDs via XPS. The survey spectra of the QDs confirm the existence of N, Cs, Pb, Br, and Cl (Fig. 3b–f and Fig. S5). Figure 3b and Fig. S5a show the high-resolution N 1s spectra. The peak at 399.8 eV relates to amine groups and originates from the FA cation. The samples also exhibit a weak N 1s peak at 401.8 eV, which is attributed to a small amount of DDA+ ions from didodecyldimethylammonium bromide adsorbed on the QD surface. From Fig. 3c and Fig. S5b, the intensities of the Cs 3d peaks are significantly weakened after FA cation doping. These results further demonstrate that FA cations partially substitute for Cs cations in the perovskite QDs. Furthermore, the Pb 4f spectra (Fig. 3d) show that the binding energies of Pb 4f5/2 and Pb 4f7/2 in the FA-doped sample are higher than those of the untreated QDs. The Pb 4f peak position moves toward higher binding energy by 0.3 eV, which is attributed to stronger binding between Pb and the halides due to the decreased octahedral volume. This also benefits the stability of the crystal structure. For the Cs 6p, Cl 2p, and Br 3d core levels, no noticeable change was observed between the two samples in the high-resolution spectra. In addition, femtosecond transient absorption spectroscopy (TAS) was conducted to study the carrier dynamics and the nonradiative recombination process. The transient absorption spectra of the samples were characterized under 400 nm excitation (Fig. 4 and Fig. S6). The negative signals represent photoinduced bleaching (PB) originating from the ground-state absorption, and they approximately coincide with the excitonic peaks in the absorption spectra. This is associated with the state filling of band-edge excitons (electrons and holes). The positive photoinduced absorption (PA) profiles can be attributed to hot-charge-carrier absorption33. Compared with Fig. 4a, Fig. 4b shows slower recovery and a stable PB peak. Regarding the PA of the two samples in Fig. 4a, b, the TAS shows no PA contribution in the 0.2 M sample, indicating that the doped sample undergoes fast hot-carrier relaxation34,35. In addition, the shift of the difference between PB and PA from 100 meV to 50 meV (Figs. 4a, b, and S6a–c) is attributed to renormalization of the bandgap. Next, the bleaching-recovery kinetics of the two samples are depicted in Fig. 4c. The short, ultrafast component is attributed to various trap-assisted nonradiative decay processes (carrier-phonon scattering, exciton quenching, and Auger recombination), while the lengthened slow component is attributed to the excitonic recombination process36. To clearly describe the kinetic processes in the FA cation doped CsPb(Cl0.5Br0.5)3 QDs, the carrier recombination mechanism is shown in Fig. 4d. Electrons in the ground state are excited by photons into high-energy excited states. The high-energy carriers then rapidly cool down, with reduced electron-electron and electron-phonon scattering.
From the above investigation, the excellent blue-emitting performance is attributed to the following mechanisms: (i) fast hot-carrier relaxation and efficient radiative recombination reduce the energy loss; (ii) the decreased defect density suppresses nonradiative decay channels.

### Band structure

Furthermore, we explore the influence of FA cation doping on the band structure. First, we estimated the optical band gap (Eg) from Tauc plots extracted from the absorption spectra, as shown in Fig. 5a and Fig. S7. The Eg value for CsPb(Cl0.5Br0.5)3 is 2.72 eV, which matches well with values reported elsewhere37. The band gap of the FA cation doped CsPb(Cl0.5Br0.5)3 is 2.60 eV, more than 0.1 eV smaller than that of the undoped QDs. Furthermore, ultraviolet photoelectron spectroscopy (UPS) was conducted to explore the valence band (VB) edge positions of the samples. FA cations lead to a change of the VB edge of the QDs, as shown in Fig. 5a. The UPS data for all samples are shown in Fig. S8. The VB maximum energy (EVB), relative to the vacuum level, was calculated to be −5.81, −5.74, −5.67, −5.62, and −5.42 eV for the 0, 0.05, 0.1, 0.15, and 0.2 M FA cation doped samples, respectively. The conduction band (CB) minimum energies (ECB) can subsequently be calculated from the Eg and EVB values. Thus, we derive CB values for the 0 and 0.2 M FA+ doped QDs of −3.09 and −2.81 eV, respectively. In addition, Fig. 5c, d show the electronic band structures and the density of states (DOS) of the pristine and treated perovskite QDs, calculated from first principles. The results show that the bandgaps of CsPb(Cl0.5Br0.5)3 and FA-doped CsPb(Cl0.5Br0.5)3 are very close to each other, because both the CB and the VB are dominated by the Pb and halogen ions. In the density of states, the VB is mainly composed of Br 4p, Cl 3p and Pb 6s electronic states, while the CB mainly contains Pb 6p states. The contribution of the Cs and FA cations to the CB and VB is negligible. However, the DOS of the doped blue QDs (Fig. 5c, d right) shows that the FA cations mainly have an indirect influence on the energy band structure by manipulating the halogen bonding orbitals with Pb. In addition, the projected density of states (PDOS) on the C, N, H, Pb, Cs, Cl and Br atoms of both samples is computed in Fig. S9. The PDOS illustrates that the individual electronic states of the FA cation mainly reside at deep levels of the VB. The FA cations widen the DOS bands, which promotes carrier delocalization and reduces the energy-loss pathways in the carrier relaxation process. These results give a more thorough understanding of the influence of FA cations on the band structure and the related carrier injection process in our devices.

### Device performance

The excellent photophysical properties of the FA cation doped QDs (0.2 M sample) offer exciting prospects for their exploitation in optoelectronic devices. Consequently, we fabricated pure blue LED devices with the perovskite QDs acting as the light-emitting layer. The schematic device energy alignment is depicted in Fig. 6a. The band alignment demonstrates that the FA cation doped CsPb(Cl0.5Br0.5)3 QD-based blue LED has a smaller hole-injection barrier than the undoped device, which facilitates hole injection into the emitting layer. Figure 6b shows the voltage-dependent luminance and current density for the two pure blue QD LEDs.
The turn-on voltage (Von), usually defined in the literature as the applied voltage at which the luminance reaches 1 cd m−2, is 2.8 V for the FA-doped device, slightly smaller than that of the undoped device (2.9 V). The current densities of the FA+ doped devices are substantially lower in the low-voltage region. The peak luminance is 1452 cd m−2 and 522 cd m−2 for the devices with and without FA+ cation doping, respectively. Figure 6c shows the normalized EL spectra of the two LEDs. Both EL spectra were measured at 5 V, and the emission wavelengths are 457 and 474 nm, respectively. Coupling high luminance with low current density, the current efficiency of the FA-doped CsPb(Cl0.5Br0.5)3 QD LEDs shows a peak value of 7.4 cd A−1, which is much higher than that of pure CsPb(Cl0.5Br0.5)3 (0.31 cd A−1) (Fig. 6d). Notably, the peak EQE is as high as 5.01% (Fig. 6e), which surpasses all previously reported values for pure blue CsPb(Cl0.5Br0.5)3 QD LEDs. The efficiencies of the LEDs are dramatically improved by the introduction of FA cations (see Fig. S12 in the Supporting Information). The top performances of blue perovskite LEDs reported in the literature are summarized in Fig. 7 and Table S3; our device sets a record value (5.01%) among pure blue perovskite QD LEDs. The maximum EQE is more than 10 times higher than that of the undoped CsPb(Cl0.5Br0.5)3 LED. Here, the FA cation doped QDs may possess a more favorable band structure that improves the charge balance. The maximum-EQE statistics are summarized in Fig. 6f, further illustrating the reliability of the device performance. The high shelf stability of the FA-Cs-based QD LEDs can be attributed to the excellent nanocrystals with low defect density and high PLQY. We use the half-lifetime (T50), defined as the time needed for the device luminance to decrease to 50% of its initial value (L0), to judge the operational stability. The T50 of the FA-based LED is about 1056 s (Fig. 6g), which is much longer than that of the pristine device (150 s). In addition, the EL spectral stability of the FA-based LED was measured at different voltages (Fig. 6h), and the results reveal that the QD device keeps a stable emission peak at 474 nm. All detailed parameters are shown in Table 1. Figure 6i shows a photograph of a device operating at 5 V with bright pure blue emission. To further investigate the reason for the high performance of the LEDs, the surface roughness of the perovskite QD films was characterized by atomic force microscopy (AFM), as shown in Fig. S13. Flat and compact surfaces were confirmed for the pristine and all doped samples. Good film morphology can reduce the leakage current, which is also an important factor for high-performance blue perovskite LEDs. Additionally, the carrier transport properties of the QD LEDs were studied with "electron-only" and "hole-only" devices, for which current density−voltage (J−V) curves were measured (Fig. S14a, b). The device structures are as follows:

Hole-only: ITO/PEDOT:PSS/poly-TPD/QDs/MoO3/Al

Electron-only: ITO/PEI/QDs/TPBi/LiF/Al

The carrier mobility of the QDs was evaluated by fitting the space-charge-limited current (SCLC) region with the Mott−Gurney law38:
$$J_{\rm{SCLC}} = \frac{9}{8}\varepsilon _0\varepsilon _r\mu \frac{V^2}{L^3}$$ (2)
in which ε0 is the vacuum dielectric constant, εr is the relative dielectric constant, μ is the mobility, V is the applied voltage and L is the thickness of the active material. The hole mobilities of the undoped and 0.2 M FA cation doped QD films are 6.24 × 10−6 and 1.32 × 10−4 cm2 V−1 s−1, respectively (a brief numerical sketch of this extraction is given below).
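To illustrate how Eq. (2) is inverted to obtain such mobilities, the minimal sketch below evaluates μ = 8JL³/(9ε0εrV²) for a single point in the SCLC region. The 40 nm thickness follows the QD layer thickness given in the Methods, but the current density and relative permittivity used here are hypothetical placeholders, not values reported in the paper.

```python
# Minimal sketch of the SCLC mobility extraction implied by Eq. (2):
# J = (9/8) * eps0 * eps_r * mu * V^2 / L^3   =>   mu = 8 J L^3 / (9 eps0 eps_r V^2).
# L = 40 nm follows the QD layer thickness in the Methods; J and eps_r below are
# assumed placeholder values, used here only to show the unit handling.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sclc_mobility(J_A_per_m2: float, V: float, L_m: float, eps_r: float) -> float:
    """Mobility (m^2 V^-1 s^-1) from one point in the J ~ V^2 (SCLC) region."""
    return 8.0 * J_A_per_m2 * L_m**3 / (9.0 * EPS0 * eps_r * V**2)

mu = sclc_mobility(J_A_per_m2=3.7e4, V=2.0, L_m=40e-9, eps_r=6.0)  # hypothetical point
print(f"mu = {mu:.2e} m^2/(V s) = {mu * 1e4:.2e} cm^2/(V s)")
```

In practice the mobility comes from the slope of J versus V² over the whole quadratic region rather than from a single point, but the unit handling is the same; with the assumed inputs above the sketch returns a value on the order of 10−4 cm2 V−1 s−1, the same order as the doped-film hole mobility quoted here.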
The electron mobilities of the CsPb(Cl0.5Br0.5)3 and FA cation doped CsPb(Cl0.5Br0.5)3 QD films are 3.78 × 10−4 and 8.20 × 10−5 cm2 V−1 s−1, respectively. With the addition of FA cations, the electron mobility decreases while the hole mobility increases, which indicates that the mobilities of the two carrier species become more balanced upon QD modification, in agreement with the aforementioned UPS data. In addition, balanced carrier mobilities can reduce emission quenching.

## Discussion

We successfully realized FA cation doped pure blue CsPb(Cl0.5Br0.5)3 QDs at room temperature. The FA cation doping manipulates the morphology and light emission of the QDs. It boosts the PLQY from 10% to 65% by decreasing nonradiative recombination. The fluorescence lifetime increases to 1.6 times that of the undoped QDs. TAS further shows that the excellent QD emission originates from fast carrier relaxation and a low defect density, which reduce the energy-loss channels of the QDs. Simultaneously, first-principles calculations demonstrate that the electronic density of states of the valence band is modified, lowering the carrier-injection barriers. Ultimately, a champion device was obtained with a maximum luminance of 1452 cd m−2 and a peak EQE of 5.01% at 474 nm. This work offers a practical approach to developing room-temperature-synthesized, mixed Cl/Br, pure blue-emitting perovskite QDs.

## Materials and methods

### Materials

Cs2CO3 (99.9%), didodecyldimethylammonium bromide (DDAB, 98%), toluene (ACS grade, Fisher), octanoic acid (OTAc, 98%), formamidine acetate (FAAc, 99%), tetraoctylammonium bromide (TOAB, 98%), methyl acetate (98%), and aluminum (Al) were purchased from Sigma-Aldrich. PbBr2 (99.9%), PbCl2 (99.9%), poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) dry re-dispersible pellets (PEDOT:PSS (4083)), poly[N,N′-bis(4-butylphenyl)-N,N′-bis(phenyl)-benzidine] (Poly-TPD), 1,3,5-tris(1-phenyl-1H-benzimidazol-2-yl)benzene (TPBi), and LiF were purchased from Xi'an Polymer Light Technology Corp.

### Synthesis and purification of FAAc-doped CsPb(Cl0.5Br0.5)3 QDs

The CsPb(Cl0.5Br0.5)3 QDs were synthesized following the double-ligand-assisted reprecipitation method39 with some modifications. First, the cesium precursor was prepared by loading 0.5 mmol of Cs2CO3 and 5 mL of OTAc into a 20 mL bottle; then 0.25, 0.5, 0.75, or 1 mmol of FAAc was added (for the different doping levels) and the mixture was stirred for 20 min at room temperature. A 1 mmol mixture of PbBr2 and PbCl2 was added to a 50 mL flask; then 2 mmol of TOAB and 10 mL of toluene were added to form the lead-halide precursor solution. For the synthesis of pure CsPb(Cl0.5Br0.5)3 QDs, 1.0 mL of the Cs+ precursor solution was swiftly added into 9 mL of the PbX2 toluene solution. The solution was magnetically stirred for 10 min at room temperature in open air. Subsequently, 3 mL of DDAB solution (10 mg mL−1 in toluene) was added. After 1 min, ethyl acetate was added to the crude solution at a 2:1 volume ratio; the precipitates were collected by centrifugation and dispersed in toluene. Additional ethyl acetate was then added to the dispersion, and the precipitates were collected and re-dispersed in 2 mL of toluene. For the FA-doped QDs, different amounts of FAAc were added to the Cs+ precursor to form mixed A-site cations, and the other steps were the same.

### LED fabrication

Pre-patterned indium tin oxide (ITO) glasses with a sheet resistance of 8 Ω/square were used as the substrates for the blue QD LEDs.
Deionized water, acetone, and isopropanol were used to sequentially clean the ITO substrates. The substrates were then exposed to UV–ozone for 5 min at 50 W before the coating steps. The active emitting area of the QD LEDs was 2 × 2 mm2. The detailed device structure was ITO/PEDOT:PSS/Poly-TPD/QDs/TPBi (50 nm)/LiF (1 nm)/Al (100 nm), as reported elsewhere40. PEDOT:PSS and Poly-TPD served as the hole transport layers. PEDOT:PSS was spin-coated and then annealed in air at 120 °C for 20 min to form a 30 nm layer. Next, the Poly-TPD film was spin-coated and baked at 120 °C for 15 min in a glove box to form a 20 nm layer. Then the CsPb(Cl0.5Br0.5)3 and FA cation doped QD solutions were spin-coated on the smooth Poly-TPD film at 2000 rpm for 60 s and baked at 50 °C for 10 min to form a 40 nm layer. The remaining layers (TPBi, LiF, Al) were deposited in a thermal evaporator at a pressure of 5 × 10−4 Pa, with deposition rates of 0.2, 0.01, and 1 Å s−1, respectively. Film thickness and evaporation rate were monitored with a quartz-crystal sensor.

### First-principles calculations

The electronic structures of the pristine and FA cation doped CsPb(Cl0.5Br0.5)3 were calculated using the Vienna Ab initio Simulation Package (VASP) code41,42,43. The projector augmented wave (PAW) approach and the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE) were used to describe the ion-electron interactions and the exchange-correlation functional44,45. For all calculations, a plane-wave energy cutoff of 520 eV was used with k-point meshes of spacing 2π × 0.03 Å−1. All structures were fully optimized until the total energy converged to within 10 eV and the residual force on each atom was smaller than 10 eV Å−1.

### Characterization techniques

A transmission electron microscope (TEM, FEI Tecnai F20) was used to study the lattice and sizes of the perovskite QD samples. X-ray diffraction (XRD) patterns of the films were collected on a Bruker D8 diffractometer with Cu Kα radiation (λ = 1.54178 Å). PL spectra of the QD emitters were recorded on a Cary Eclipse spectrofluorometer, and absorption curves were measured on a PerkinElmer Lambda 3600 UV–vis–NIR spectrometer. Time-resolved PL (TRPL) data were recorded using an Edinburgh FLS980 spectrofluorometer with a 405 nm laser. The PLQY was also measured on the same fluorescence spectrometer equipped with an integrating sphere. Fourier transform infrared (FTIR) spectra were acquired on a Nicolet 6700 FT-IR spectrometer. X-ray photoelectron spectroscopy (XPS) data were collected on an ESCALAB 250 X-ray photoelectron spectrometer. Current-voltage-luminance characteristics were measured with a Keithley 2612 source meter connected to a Newport 818-UV Si photodiode. EL spectra were recorded on a NOVA spectrometer.
# Re: Error in Show Log feature of Working Copy dialog From: Stefan Küng <tortoisesvn_at_gmail.com> Date: Tue, 05 Aug 2008 20:25:10 +0200 kainhart wrote: > My working copy root is "D:\dev\la4prototype". I tested this in the C: > drive "C:\temp\workingcopy" as well just to make sure it's not a > problem with the root and it seems to reproduce no matter where my > working copy exists. If you can't get this to reproduce I'm curious > why at our shop all of the developers so far are consistently > reproducing this bug. Could it be a conflict with another application > or configuration related? What's the URL of your repository? Stefan ```-- ___ oo // \\ "De Chelonian Mobile" (_,\/ \_/ \ TortoiseSVN \ \_/_\_/> The coolest Interface to (Sub)Version Control /_/ \_\ http://tortoisesvn.net ``` Received on 2008-08-05 20:25:35 CEST This is an archived mail posted to the TortoiseSVN Users mailing list.
# Flow in a T-Junction with pipes of different diameters I have been trying to model a complex pipe network recently and have come across something I can't find any information on. In my model, I have been assigning equivalent lengths of pipe to various fittings, but I commonly encounter fittings such as this: Inlet to T: 10mm diameter pipe t-through: 10mm diameter pipe t-branch: 1mm diameter pipe Essentially I have a T junction where the branch is also a sudden contraction, but the dominant flow direction is perpendicular to the branch. Does anybody know a method of calculating equivalent length of pipe in a case like this? I don't think a regular vena contracta, as described by normal theory for sudden contractions, would occur in the 1mm branch, since it is perpendicular to the main flow and not coaxial with the main 10mm pipe. But I have various instances where this occurs, and the amount of the contraction varies (e.g. the branch pipes can be 1,2 or 5mm) but the normal method of equivalent pipe length for a t-junction does not account for the changing area. I need something that will account for it. • Bernoulli? This is basically the venturi used in early carburettors to "mix" fuel with air... Jan 30 '18 at 14:20 • If I knew the flow conditions, which I don't. Ideally, I need to represent the interface with an equivalent length of pipe that represents the 'resistance' to flow travelling down the branch. Typically for a T junction in steel pipes the coefficients are 20 for the thru flow and 60 for the branch... but the theory doesn't account for area change. I think my best chance is looking at flow through a hole drilled in a pipe. Jan 30 '18 at 14:23 • How can you make any decision for an equivalent length if you don't know the conditions for the situation? Are you assuming the fluid flows into the smaller pipe or from the smaller pipe? Jan 30 '18 at 14:25 • @SolarMike - This doesn't seem to be a venturi, since there is no reduction in cross-section of the main pipe. Jan 30 '18 at 15:22 • @JonathanRSwift so no change in diameter between pipe and the Tee then? ie those pipes that connect to the head of the Tee... Jan 30 '18 at 16:39
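In the absence of a published K-factor for this exact geometry, one rough first-pass screening approach (an assumption, not a standard correlation) is to stack the quoted tee-branch equivalent length on top of a sudden-contraction loss referenced to the small pipe, then convert the combined K back into an equivalent length of the 1 mm line. The sketch below does that with an assumed friction factor; it deliberately ignores the question of which diameter the tee's L/D = 60 should reference, which is part of why it is only a screening estimate.

```python
# Rough screening estimate only: tee-branch equivalent length (L/D ~ 60, as quoted
# in the question) plus a sharp sudden-contraction loss K_c ~ 0.5 * (1 - beta^2),
# both referenced to the small branch pipe. The friction factor is an assumed value.
D_main = 0.010       # main run diameter, m
d_branch = 0.001     # branch diameter, m
f = 0.03             # assumed Darcy friction factor in the branch (flow-dependent)

beta = d_branch / D_main
k_tee = f * 60.0                        # quoted branch-tee L/D converted to a K value
k_contraction = 0.5 * (1.0 - beta**2)   # common textbook sharp-contraction estimate

k_total = k_tee + k_contraction
L_equiv = k_total * d_branch / f        # equivalent length of the 1 mm branch pipe
print(f"beta={beta:.2f}  K_total={k_total:.2f}  L_equiv={L_equiv*1000:.0f} mm of branch pipe")
```

For a firmer number, the idea raised in the comments of treating the branch as flow through a hole drilled in the pipe wall (i.e. an orifice with a discharge coefficient) is probably the better model, since at β = 0.1 the branch behaves much more like an orifice than like a pipe fitting.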
The Internet places the whole world at our fingertips. We can be miles apart and yet connected with the help of social media. The Internet has spread its roots so deep that we cannot go through even a single day without it. A high percentage of the human population completely depends on the Internet for its information requirements. With everything available online, there are more chances of copying and stealing content for various purposes. The extensive demand for information thus leads to more instances of Plagiarism, which is illegal. Consequently, concern for avoiding plagiarism-related issues becomes apparent. The content available online is termed Intellectual Property and is legally protected under copyright, trademark, franchise, etc. It demonstrates the scholarly and academic integrity of the respective owner. Copying or stealing content is thus deemed an unlawful and punishable offense. The field of education has also witnessed a paradigm shift in teaching-learning methodologies that make use of technology. Teachers can devise interactive lessons that help the learners to understand the concepts better. And learners get access to numerous courses on open learning platforms. However, the information available on the Internet cannot always be used as it is, without duly acknowledging the creator. Therefore, writing and publishing something on the Internet requires a special emphasis on the practices that help avoid plagiarism. Plagiarism can be of several types. Even using the same content repeatedly is termed Self-Plagiarism, which falls under the umbrella term of Plagiarism. In this article, we recommend certain practices for avoiding plagiarism in various kinds of content.

#### Practices for Avoiding Plagiarism

Producing plagiarism-free content can be challenging at times. Regardless of how difficult it seems, prioritize producing only original content, keeping in mind the serious legal repercussions of plagiarizing content.

##### I. Understanding Plagiarism

A little knowledge is a dangerous thing. To produce great, plagiarism-free content, it is extremely important to understand the meaning of plagiarism, its types, and its consequences. (Refer to our Plagiarism article for these.)

##### II. Paraphrasing

Paraphrasing refers to presenting the information in your own words without altering its actual meaning. This includes the use of synonyms or other forms of words that express the same meaning as the source. While it may seem easy, paraphrasing may sometimes slip into plagiarism if not practiced appropriately.

##### III. Providing Citations

Providing citations is a way of giving due credit to the owner of the original content. It is a good practice to include citations in your content, as it is useful for avoiding plagiarism.

##### IV. Quoting

Using short quotations is preferable for avoiding plagiarism. It should be noted here that quotations should be used sparingly, as excessive quotations make the text difficult to read. Additionally, they should not be too long.

##### V. Presenting Original Ideas

Instead of copy-pasting exact words from the source content, come up with creative ways of describing an idea originally, adding a new perspective to it.

##### VI. Avoiding Self-Plagiarism

Presenting original ideas may sometimes turn into self-plagiarism. This happens when the same words and phrases are used for specific content on multiple platforms. This has the same repercussions as plagiarism and must be avoided at all costs.
##### VII. Checking for Fair Use

While using small parts of original content for transformative purposes, make sure that the source author identifies their content as falling under 'Fair Use'. If this is not clear, ask for permission from the copyright holder before using the content, to avoid plagiarism-related issues.

##### VIII. Developing a Unique Writing Style

The best way of avoiding plagiarism is to develop a unique writing style. Put your imagination to work by producing content with a varied vocabulary. A unique writing style will develop over time.

##### IX. Checking for Possible Plagiarism

It is always a good idea to check the content for possible and unintended plagiarism. Many software tools available on the Internet can be used to detect plagiarized content for free. These provide a detailed report on the content, identifying the plagiarized parts and the information about the source content. Grammarly Premium, Plagiarism Checker X, Copyscape, and PlagTracker are a few tools that can be utilized for this.

### Conclusion

Creating any sort of content is an art not all can master. The quality of the content reflects the knowledge, understanding, and skills of the creator, gained and developed with repeated practice. Viewed from a wider perspective, writing can be seen as a form of expression that ultimately impacts all aspects of the human personality. Every person has an exclusive way of expressing opinions, thoughts, and feelings. Thus, plagiarizing is never a good idea in any situation. It is an unethical act that equals stealing and must be avoided at all costs. One must have a sound understanding of plagiarism and should adhere to the guidelines mentioned in this article to produce original content that is free of plagiarism.

Sources referred to for research: Ediqo Image: Freepik For more information, visit our blog. Create. Engage. Inspire.
TY - JOUR T1 - Boundedness of Commutators for Marcinkiewicz Integrals on Weighted Herz-Type Hardy Spaces JO - Analysis in Theory and Applications VL - 4 SP - 365 EP - 376 PY - 2011 DA - 2011/11 SN - 27 DO - http://doi.org/10.1007/s10049-011-0365-z UR - https://global-sci.org/intro/article_detail/ata/4608.html KW - Marcinkiewicz integral, commutator, weighted Herz space, Hardy space. AB - In this paper, the authors study the boundedness of the operator $\mu^b_\Omega$, the commutator generated by a function $b \in \text{Lip}_\beta (\mathbf{R}^n)$ $(0<\beta<1)$ and the Marcinkiewicz integral $\mu_\Omega$ on weighted Herz-type Hardy spaces.
# Is there a better formatting option for an alternated enumerated/itemized list? I'm making a questions/answers page where answer follows the question. The questions are enumerated while the answers are not. Questions are using one color while answers using another one. In order to pause numeration, I've used the solution provided to this question. As a result, my code looks like this in general: \documentclass[english]{article} ... \newcounter{savedenum} \newcommand*{\saveenum}{\setcounter{savedenum}{\theenumi}} \newcommand*{\resume}{\setcounter{enumi}{\thesavedenum}} ... \usepackage{parskip} \usepackage{color} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \usepackage{enumitem} ... \begin{document} ... \begin{enumerate} \item \color{black}{Question text goes here.} \saveenum \end{enumerate} \begin{itemize} \item[] \color{NavyBlue}{Answer text goes here.} \end{itemize} \begin{enumerate} \resume \item \color{black}{Question text goes here.} \saveenum \end{enumerate} \begin{itemize} \item[] \color{NavyBlue}{Answer text goes here.} \end{itemize} ... \end{document} Since there is a lot of code reuse, I'm seeking for a better solution (maybe a loop of some kind?), if possible, that will substitute the copy-and-paste of the \begin{...} ... \end{...} blocks. - You can use \item[\textbullet] for introducing the answer, staying in the same enumerate environment. – egreg Dec 29 '12 at 10:05 Did you try the exercise package? – TeXtnik Dec 29 '12 at 10:54 I'd probably look in the direction of the theorem/ntheorem/amsthm packages to define an automatically-numbered question environment. – Ulrich Schwarz Dec 29 '12 at 11:05 ## 2 Answers I suggest a different approach: \documentclass{article} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \newif\ifsolutions \solutionstrue \newenvironment{exercises} {\begin{enumerate}} {\end{enumerate}} \newenvironment{question} {\item} {} \ifsolutions \newenvironment{solution} {\par\nopagebreak\begingroup\color{NavyBlue}} {\endgroup} \else \usepackage{comment} \excludecomment{solution} \fi \begin{document} \begin{exercises} \begin{question} Question one text goes here. \end{question} \begin{solution} Answer one text goes here. \end{solution} \begin{question} Question two text goes here. \end{question} \begin{solution} Answer two text goes here. \end{solution} \end{exercises} \end{document} The markup might seem excessive, but it allows for greater flexibility: you can customize exercises using enumitem features, but also question and solution. I've added a possibility: if you comment the \solutionstrue line, the solutions will not be printed at all. ### Solutions suppressed - Since you're already loading the enumitem package, here's a solution using the resume feature: \documentclass{article} \usepackage{enumitem} \usepackage{parskip} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \newenvironment{question}{\enumerate[resume]\item}{\endenumerate} \newenvironment{solution}{\itemize\item[]\begingroup\color{NavyBlue}}{\endgroup\enditemize} \begin{document} \begin{question} Question text goes here. \end{question} \begin{solution} Answer text goes here. \end{solution} \begin{question} Question text goes here. \end{question} \begin{solution} Answer text goes here. 
\end{solution} \end{document} If you'd prefer to have your own list (perhaps you're using the enumerate environment outside of this task) then you could use a newlist \newlist{myenumerate}{enumerate}{5} \setlist[myenumerate]{label=\arabic*.,resume} \newenvironment{question}{\myenumerate\item}{\endmyenumerate} Finally, if you'd like to suppress the solution environment, or perhaps output it to a separate file, I'd recommend looking at the answers package; you can switch the answers on and off in the main part of the document by using % solutions written to file (NOT to main part of document) \usepackage{answers} % solutions NOT written to file (written to main part of document) \usepackage[nosolutionfiles]{answers} Complete MWE \documentclass{article} \usepackage{enumitem} \usepackage{parskip} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \usepackage{answers} %\usepackage[nosolutionfiles]{answers} \newlist{myenumerate}{enumerate}{5} \setlist[myenumerate]{label=\arabic*.,resume} \newenvironment{question}{\myenumerate\item}{\endmyenumerate} % open the answer file \Opensolutionfile{shortsolutions} \Newassociation{solution}{ShortSoln}{shortsolutions} \begin{document} \begin{question} Question text goes here. \begin{solution} Answer text goes here. \end{solution} \end{question} \begin{question} Question text goes here. \begin{solution} Answer text goes here. \end{solution} \end{question} % close the solutions files \Closesolutionfile{shortsolutions} \clearpage % this just makes the displayed solutions use the itemize % environment- makes the dispaly better \renewenvironment{ShortSoln}[1]{% \itemize\item[{\bfseries(#1)}]% }% {\enditemize} % input the answers file \section*{Answers} \IfFileExists{shortsolutions.tex}{\input{shortsolutions.tex}}{} \end{document} -
# What are the standard guidelines for calling the function that does whatever the post says?

like for languages that need a starting point, like c++, c#, java and kotlin. I always post my answers in the function form instead of also calling it from the starting point. If we need to actually call it from the starting point, then that's a huge waste of bytes. what are the standard guidelines for that? or say, one could define that function as an extension function rather than a normal function that takes in the input as parameters if it saves bytes; what's the consensus on this as well?

• Hi there! The general consensus for including "boilerplate" (that's the starting point you're talking about) is that you don't need to count it in your score if you are submitting a function. You only need boilerplate if you are submitting a full program. Hopefully that helps :) Jan 28 at 11:34
• Does this answer your question? When do I have to include things like Java's public static void main Jan 28 at 11:35
• @lyxal yep it exactly does! but what about the extension function part? Jan 28 at 11:35
• Do you have an example of what you mean by extension function? Jan 28 at 11:39
• fun Int.returnOneMore(): Int { return this + 1 } and it would be called by intVariable.returnOneMore() or 1.returnOneMore() Jan 28 at 11:40
• huh, I can't find any existing consensus on extension functions. Guess we'll see Jan 28 at 11:43

# Extension Functions should be treated as helper functions

That is to say, they shouldn't be the main submission function, but they shouldn't need boilerplate. For example:

fun Int.returnOneMore(): Int { return this + 1 }

on its own wouldn't be a valid submission, but:

fun Int.returnOneMore(): Int { return this + 1 }
fun f() { /* something that calls the defined extension function */ }

would be.

• is there a consensus on this? Jan 28 at 12:27
• I don't see why they should be treated as helpers, they're just functions with different syntax. (I'll post an answer to this myself once I'm on my laptop) – user Jan 28 at 12:27
# Separation Into Differential Equations

1. Apr 6, 2008

I would use the entire template except this question is very simple and does not require all of it.

1. The problem statement, all variables and given/known data
How do I separate $$\frac{X''(x)}{X(x)}+\frac{Y''(y)}{Y(y)}=\sigma$$ into ordinary differential equations when $$\sigma$$ is a constant?

2. Apr 6, 2008

### Pere Callahan

You already seem to have separated the differential equation, because the x-dependence is clearly separate from the y-dependence. In your equation you have two terms, one depending on x only, the other depending on y only. Their sum should not depend on either x or y but should be constant. Can you conclude HOW the x-dependent term must depend on x in order for the sum of this first term and the only(!) y-dependent second term not to depend on x? The same for the y-dependence of the second term. Can you figure out HOW it depends on y given the fact that if you add the y-independent first term the result must be y-independent?
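A minimal completion of the hint above, assuming the usual separation-of-variables argument: since the first term depends only on x and the second only on y, each must individually be constant, say
$$\frac{X''(x)}{X(x)} = \lambda, \qquad \frac{Y''(y)}{Y(y)} = \sigma - \lambda,$$
which gives the two ordinary differential equations
$$X''(x) - \lambda X(x) = 0, \qquad Y''(y) - (\sigma - \lambda)\,Y(y) = 0,$$
where the separation constant $$\lambda$$ is fixed later by the boundary conditions (its sign decides whether the solutions are trigonometric or exponential).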
## 1 Purpose

f16fbc broadcasts a scalar into a real vector.

## 2 Specification

#include

void f16fbc (Integer n, double alpha, double x[], Integer incx, NagError *fail)

The function may be called by the names: f16fbc, nag_blast_dload or nag_dload.

## 3 Description

f16fbc performs the operation
$x \leftarrow \left(\alpha ,\alpha ,\dots ,\alpha \right)^{\mathrm{T}},$
where $x$ is an $n$-element real vector and $\alpha$ is a real scalar.

## 4 References

Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001) Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard University of Tennessee, Knoxville, Tennessee https://www.netlib.org/blas/blast-forum/blas-report.pdf

## 5 Arguments

1: $\mathbf{n}$ Integer Input
On entry: $n$, the number of elements in $x$.
Constraint: ${\mathbf{n}}\ge 0$.

2: $\mathbf{alpha}$ double Input
On entry: the scalar $\alpha$.

3: $\mathbf{x}\left[\mathit{dim}\right]$ double Output
Note: the dimension, dim, of the array x must be at least $\mathrm{max}\left(1,1+\left({\mathbf{n}}-1\right)\left|{\mathbf{incx}}\right|\right)$.
On exit: the scalar $\alpha$ is scattered with a stride of incx in x. Intermediate elements of x are unchanged.

4: $\mathbf{incx}$ Integer Input
On entry: the increment in the subscripts of x between successive elements of $x$.
Constraint: ${\mathbf{incx}}\ne 0$.

5: $\mathbf{fail}$ NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).

## 6 Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed. See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.

NE_BAD_PARAM
On entry, argument $\langle\mathit{value}\rangle$ had an illegal value.

NE_INT
On entry, ${\mathbf{incx}}=\langle\mathit{value}\rangle$. Constraint: ${\mathbf{incx}}\ne 0$.
On entry, ${\mathbf{n}}=\langle\mathit{value}\rangle$. Constraint: ${\mathbf{n}}\ge 0$.

NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly. See Section 8 in the Introduction to the NAG Library CL Interface for further information.

## 7 Accuracy

The BLAS standard requires accurate implementations which avoid unnecessary over/underflow (see Section 2.7 of Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001)).

## 8 Parallelism and Performance

f16fbc is not threaded in any implementation.

## 9 Further Comments

None.

## 10 Example

This example initializes four elements of a real vector, $x$, with increment $2$, with the value $\alpha = 0.3$.

### 10.1 Program Text

Program Text (f16fbce.c)

### 10.2 Program Data

Program Data (f16fbce.d)

### 10.3 Program Results

Program Results (f16fbce.r)
# A Closer Look at Coinciding Lines

In the previous post, we asked a question about coinciding lines. We observed that the lines with equations $3x + 8y = 12$ and $6x + 16y = 24$ coincide. It is not difficult to see that $6x + 16y = 24$ is just $3x + 8y = 12$ multiplied by 2. The question now is: if one equation is a multiple of the other, do their graphs coincide? We answer this question below.

Consider a point with coordinates (2,3). What happens if we multiply the coordinates by 2, 3, and 4? If we do this, the coordinates become (4,6), (6,9), and (8,12). Now, what is so special about these points? As we can see in the graph below, they lie on the same line. Can you explain why?

If we create a point at (0,0) and connect it to (2,3) with a segment, then the segment has slope 3/2. If we connect the other points to (0,0), the slopes will be 6/4, 9/6, and 12/8, all of which are equal to 3/2. Since the segments share a common point, the origin, and have the same slope, the points all lie on the same line (recall the slope-intercept form).

This can be generalized to any point not at the origin with coordinates $(x,y)$. The segment connecting it to the origin has slope $y/x$. Now, if we multiply the coordinates by any real number $m$ which is not equal to 0, then the coordinates become $(mx, my)$. The slope of the segment connecting this new point to the origin is $(my)/(mx)$, which is still equal to $y/x$. So scaling a point keeps it on the same line through the origin.

The coinciding graphs, however, follow from an even simpler observation: if a point $(x,y)$ satisfies $3x + 8y = 12$, then multiplying both sides of the equation by 2 shows that it also satisfies $6x + 16y = 24$, and dividing by 2 gives the converse. Multiplying an equation by a nonzero number therefore does not change its set of solutions, so the two equations describe exactly the same set of points, and their graphs coincide.
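A tiny numeric sketch of the two observations above, nothing more:

```python
# Quick sanity check of the two observations in the post.
# 1) Every point on 3x + 8y = 12 is also on 6x + 16y = 24 (the doubled equation).
for x in range(-5, 6):
    y = (12 - 3 * x) / 8                     # solve the first equation for y
    assert abs(6 * x + 16 * y - 24) < 1e-9   # the same point satisfies the second

# 2) Scaling a point keeps it on a line through the origin, e.g. y = (3/2) x.
for m in (2, 3, 4):
    x, y = 2 * m, 3 * m                      # scaled copies of (2, 3)
    assert abs(y - 1.5 * x) < 1e-9

print("both checks pass")
```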
# Book on inequalities to help prepare for the Putnam exam

ehrenfest

I want to get a book on inequalities to help prepare for the Putnam exam. It's common for me to spend about half an hour getting frustrated on a practice problem and then find that the only way to do the problem is with an inequality that I have never heard of. Does anyone have any recommendations? Here are some I found on amazon:

https://www.amazon.com/dp/0883856034/?tag=pfamazon01-20
https://www.amazon.com/dp/052154677X/?tag=pfamazon01-20
https://www.amazon.com/dp/0521358809/?tag=pfamazon01-20

mathematicsma

I am an undergraduate student, and I just started a course called "Mathematics of Compound Interest." Most of the students in the class are taking actuarial exams (I myself have not decided what I'm doing), so the course is geared toward that kind of study (it's not proof oriented, etc.). The suggested text is Theory of Interest by Kellison, but the professor told me that given my background (Calc I, currently taking Calc II), it's too advanced for me, and we won't be going that deep in class anyway. So does anyone have a suggestion of a text that follows similar material on a simpler level that I would understand? Thanks a lot. I really like having a textbook in addition to lecture notes.

jhaber

I'm reading Zee's QFT as self-study and have had trouble with the applications of group theory in Section 2. I'd love a book recommendation to fill in my gaps. I took algebra in the math department, but there groups were structures to distinguish from rings and such by definition, with no regard to matrix representation. I've read the group theory chapter in Atkins's "Molecular Quantum Mechanics," but it didn't get me far enough in understanding the offhand references here to, say, the appropriate cyclic permutations of a 4 x 4 matrix. Obviously I don't mean a comprehensive, difficult, separate book on quantum theory from a group-theoretical point of view such as Weyl's. Thank you.

fa2209

I am going to start my third year of a theoretical physics degree but have always had an interest in pure mathematics, so I am currently teaching myself real analysis from the book "Real Analysis" by Howie. I've done some basic introductory set theory including cardinality, countability and Russell's paradox, but what can I read to help me go from this level of understanding to the incompleteness theorems? Would I have to read more about set theory or go straight into formal logic? Any textbook recommendations would be much appreciated.

bennyska

so it turns out i only need 2 classes next semester before i wrap up my undergrad degree in statistics, but i have a scholarship that requires me to take 4 classes. i was thinking about doing an independent study through the college to help make up one of the ones I'm missing. 2 things i'd be interested in learning: bayesian statistics, and linear algebra specifically for statistics. i don't really know anything about bayesian stuff, other than the very basic problems we did in intro stats, and i already have a bit of experience with linear, although not from a stats perspective (i.e. stat applications). i just really like linear, and i'd like to develop it more. i have taken both an intro and proof based linear, but i'd like more application. does anyone have any suggestions? they'll have to be books that my teachers can approve so i can get credit for them.
thanks Code: [LIST] [*] Preface [*] Real Numbers and Algebraic Expressions [LIST] [*] Success in Mathematics [*] Sets and Classifications of Numbers [*] Operations on Signed Numbers; Properties of Real Numbers [*] Order of Operations [*] Algebraic Expressions [/LIST] [*] Linear Equations and Inequalities [LIST] [*] Linear Equations and Inequalities in One Variable [LIST] [*] Linear Equations in One Variable [*] An Introduction to Problem Solving [*] Using Formulas to Solve Problems [*] Linear Inequalities in One Variable [/LIST] [*] Linear Equations and Inequalities in Two Variables [LIST] [*] Rectangular Coordinates and Graphs of Equations [*] Linear Equations in Two Variables [*] Parallel and Perpendicular Lines [*] Linear Inequalities in Two Variables [/LIST] [/LIST] [*] Relations, Functions, and More Inequalities [LIST] [*] Relations [*] An Introduction to Functions [*] Functions and Their Graphs [*] Linear Functions and Models [*] Compound Inequalities [*] Absolute Value Equations and Inequalities [*] Variation [/LIST] [*] Systems of Linear Equations and Inequalities [LIST] [*] Systems of Linear Equations in Two Variables [*] Problem Solving: System of Two Linear Equations Containing Two Unknowns [*] Systems of Linear Equations in Three Variables [*] Using Matrices to Solve Systems [*] Determinants and Cramer's Rule [*] System of Linear Inequalities [/LIST] [*] Polynomials and Polynomial Functions [LIST] [*] Multiplying Polynomials [*] Dividing Polynomials; Synthetic Division [*] Greatest Common Factor; Factoring by Grouping [*] Factoring Trinomials [*] Factoring Special Products [*] Factoring: A General Strategy [*] Polynomial Equations [/LIST] [*] Rational Expressions and Rational Functions [LIST] [*] Multiplying and Dividing Rational Expressions [*] Adding and Subtracting Rational Expressions [*] Complex Rational Expressions [*] Rational Equations [*] Rational Inequalities [*] Models Involving Rational Expressions [/LIST] [LIST] [*] $n$th Roots and Rational Exponents [*] Simplify Expressions Using the Laws of Exponents [*] Radical Equations and Their Applications [*] The Complex Number System [/LIST] [LIST] [*] Solving Quadratic Equations by Completing the Square [*] Solving Equations Quadratic in Form [*] Graphing Quadratic Functions Using Transformations [*] Graphing Quadratic Functions Using Properties [/LIST] [*] Exponential and Logarithmic Functions [LIST] [*] Composite Functions and Inverse Functions [*] Exponential Functions [*] Logarithmic Functions [*] Properties of Logarithms [*] Exponential and Logarithmic Equations [/LIST] [*] Conics [LIST] [*] Distance and Midpoint Formulas [*] Circles [*] Parabolas [*] Ellipses [*] Hyperbolas [*] Systems of Nonlinear Equations [/LIST] [*] Sequences, Series, and the Binomial Theorem [LIST] [*] Sequences [*] Arithmetic Sequences [*] Geometric Sequences and Series [*] The Binomial Theorem [/LIST] [*] Applications Index [*] Subject Index [*] Photo Credits [/LIST] Last edited: Code: [LIST] [*] Divisibility [LIST] [*] Divisors [*] Bezout's identity [*] Least common multiples [*] Linear Diophantine equations [*] Supplementary exercises [/LIST] [*] Prime Numbers [LIST] [*] Prime numbers and prime-power factorisations [*] Distribution of primes [*] Fermat and Mersenne primes [*] Primality-testing and factorisation [*] Supplementary exercises [/LIST] [*] Congruences [LIST] [*] Modular arithmetic [*] Linear congruences [*] Simultaneous linear congruences [*] Simultaneous non-linear congruences [*] An extension of the Chinese Remainder Theorem 
[*] Supplementary exercises [/LIST] [*] Congruences with a Prime-power Modulus [LIST] [*] The arithmetic of $\mathbb{Z}_p$ [*] Pseudoprimes and Carmichael numbers [*] Solving congruences mod $(p^e)$ [*] Supplementary exercises [/LIST] [*] Euler's Function [LIST] [*] Units [*] Euler's function [*] Applications of Euler's function [*] Supplementary exercises [/LIST] [*] The Group of Units [LIST] [*] The group $U_n$ [*] Primitive roots [*] The group $U_{p^e}$, where $p$ is an odd prime [*] The group $U_{2^e}$ [*] The existence of primitive roots [*] Applications of primitive roots [*] The algebraic structure of $U_n$ [*] The universal exponent [*] Supplementary exercises [/LIST] [LIST] [*] The group of quadratic residues [*] The Legendre symbol [*] Quadratic residues for prime-power moduli [*] Quadratic residues for arbitrary moduli [*] Supplementary exercises [/LIST] [*] Arithmetic Functions [LIST] [*] Definition and examples [*] Perfect numbers [*] The Mobius Inversion Formula [*] An application of the Mobius Inversion Formula [*] Properties of the Mobius function [*] The Dirichlet product [*] Supplementary exercises [/LIST] [*] The Riemann Zeta Function [LIST] [*] Historical background [*] Convergence [*] Applications to prime numbers [*] Random integers [*] Evaluating $\zeta(2)$ [*] Evaluating $\zeta(2k)$ [*] Dirichlet series [*] Euler products [*] Complex variables [*] Supplementary exercises [/LIST] [*] Sums of Squares [LIST] [*] Sums of two squares [*] The Gaussian integers [*] Sums of three squares [*] Sums of four squares [*] Digression on quaternions [*] Minkowski's Theorem [*] Supplementary exercises [/LIST] [*] Fermat's Last Theorem [LIST] [*] The problem [*] Pythagoras's Theorem [*] Pythagorean triples [*] Isosceles triangles and irrationality [*] The classification of Pythagorean triples [*] Fermat [*] The case $n = 4$ [*] Odd prime exponents [*] Lame and Kummer [*] Modern developments [/LIST] [*] Appendix: Induction and Well-ordering [*] Appendix: Groups, Rings and Fields [*] Appendix: Convergence [*] Appendix: Table of Primes $p < 1000$ [*] Solutions to Exercises [*] Bibliography [*] Index of symbols [*] Index of names [*] Index [/LIST] Last edited:
LRC circuit

1. May 6, 2015

toothpaste666

2. Relevant equations
XL = ωL
XC = 1/ωC
Z = sqrt(R^2+(XL-XC)^2)
∅ = tan^-1((XL-XC)/R)

3. The attempt at a solution
A) a) Irms = Vrms/R = 100 V/400 Ω = .25 A
b) 1) V = Vrms = 100 V
2) V = IrmsXL = IrmsωL = (.25)(1000)(.9) = 225 V
3) V = IrmsXC = Irms/ωC = (.25)/((1000)(2E-6)) = 125 V
4) this part I am not sure how to do.
5) V = IrmsZ = Irms sqrt(R^2+(XL-XC)^2) = (.25)sqrt(400^2 + (900 - 500)^2) = 141 V
c) ∅ = tan^-1((XL-XC)/R) = tan^-1(400/400) = 45° it is positive so voltage leads

B) a) ω = 1/sqrt(LC) = 1/sqrt(.9(2E-6)) = 745 rad/sec
b) 1) still 100 V
2) V = IrmsXL = IrmsωL = (.25)(745)(.9) = 168 V
3) V = IrmsXC = Irms/ωC = (.25)/((745)(2E-6)) = 168 V
4) ???
5) V = IrmsZ = IrmsR = .25(400) = 100 V

I am not entirely confident I did all of these right. feedback would be greatly appreciated

2. May 6, 2015

donpacino

your part a is wrong. I=V/Z, with Z being the impedance of the circuit. Since it is an AC waveform, the inductor and capacitor will have some impedance

3. May 6, 2015

toothpaste666

so part a) would be I = V/Z = V/sqrt(R^2 + (XL-XC)^2) = 100/sqrt(400^2 +(1000(.9-2E-6))^2) = .1 A ??? also would I be able to do part 4 using the formula V = IZ where the R in the formula for Z is set to 0?

4. May 6, 2015

donpacino

No. What have you learned about AC circuits and inductors and capacitors? Have you learned about the laplace transform yet?

5. May 6, 2015

toothpaste666

I haven't heard of the laplace transform. Both of the things I said are wrong? I am still wrong about part a) ?

6. May 6, 2015

donpacino

the resistance at any given frequency for these purposes can be seen below
inductor: w*L
capacitor: 1/(w*C)
now the inductor, capacitor, and resistor.... are they in series or parallel?

7. May 6, 2015

8. May 6, 2015

toothpaste666

they are in series

9. May 6, 2015

donpacino

yup, so to find the total impedance, you add them together

10. May 6, 2015

toothpaste666

I = V/Z = V/sqrt(R^2 + (XL+XC)^2) ??? so when they are in parallel it is 1/XL + 1/XC ?? My book says XL-XC where does this come from?

11. May 6, 2015

donpacino

somehow I missed your equations page. oops
I forgot you haven't really learned that much about AC so they gave you the equations.
http://en.wikipedia.org/wiki/Complex_plane
There are two ways to express complex numbers, polar and rectangular notation. sqrt(R^2 + (XL-XC)^2) essentially converts the rectangular notation to the magnitude of polar notation and ∅ = tan^-1((XL-XC)/R) converts it to the angle of polar notation

12. May 6, 2015

donpacino

in that case, the second answer you gave is correct

13. May 6, 2015

toothpaste666

the .1 A is correct for part a) ?
For part 4) is this a case where the Voltage oscillates?

14. May 6, 2015

donpacino

yes
do you mean finding the phase angle?? if yes then look at your equation for theta

15. May 7, 2015

toothpaste666

I mean to find the voltage across the LC part of the circuit (If I am understanding the question correctly)
Originally I was thinking of using the equation for Z with R = 0 or Z = sqrt((XL-XC)^2) and then using V = IZ

16. May 7, 2015

toothpaste666

I am still trying to figure this out. Is this one of the cases where I have to use the formula for oscillating voltage? V = v0 cos wt ?

17. May 11, 2015

donpacino

recall each part has an impedance. you know what the impedance is
V=I*Z
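For reference, here is a quick numeric pass over the formulas listed under "Relevant equations", using the values quoted in the attempt (R = 400 Ω, L = 0.9 H, C = 2 µF, Vrms = 100 V), both at the driven frequency ω = 1000 rad/s and at resonance. It only evaluates the series-RLC formulas as written, so compare it against the discussion above.

```python
# Numeric evaluation of the series-RLC formulas quoted in the problem statement,
# with R = 400 ohm, L = 0.9 H, C = 2 uF, Vrms = 100 V (values from the attempt).
from math import sqrt, atan, degrees

R, L, C, VRMS = 400.0, 0.9, 2e-6, 100.0

def series_rlc(omega):
    xl = omega * L
    xc = 1.0 / (omega * C)
    z = sqrt(R**2 + (xl - xc)**2)
    i = VRMS / z
    phi = degrees(atan((xl - xc) / R))
    return xl, xc, z, i, phi

for label, w in (("driven", 1000.0), ("resonance", 1.0 / sqrt(L * C))):
    xl, xc, z, i, phi = series_rlc(w)
    print(f"{label:9s} w={w:7.1f} rad/s  XL={xl:6.1f}  XC={xc:6.1f}  Z={z:6.1f} ohm  "
          f"Irms={i:.3f} A  phase={phi:5.1f} deg  "
          f"V_R={i*R:6.1f}  V_L={i*xl:6.1f}  V_C={i*xc:6.1f}  V_LC={i*abs(xl-xc):6.1f} V")
```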
# Turnablock

##### Stage: 4 Challenge Level

Turnablock is a very simple game for two players invented by John Conway. It is played on a 3 by 3 square board with 9 counters that are black on one side and white on the other. The counters are placed on the board at random, one per square, except that the majority must be black uppermost. Players take turns to reverse all the pieces in a block (e.g. $1 \times 1$, $2 \times 3$ or even $3 \times 3$). The object is to have all pieces white uppermost on completion of your move.
# Simulating a simple RC circuit using an arduino and the circuit's equivalent discrete-time transfer function Suppose we have an RC circuit with the continuous transfer function: $G(s)=\frac{1}{RCs+1}$ with R=1.6k, and C=2uF. Using the bilinear transformation(or better known as Tustin method) $s=\frac{2}{T_s}\frac{z-1}{z+1}$ with the sampling period being $T_s=0.5ms$, I arrive to the following discrete transfer function: $G(z)=\frac{z+1}{13.8z-11.8}=\frac{X(z)}{U(z)}$ In terms of a difference equation(recurrence relation) the discrete system is thus described as $u_n+u_{n-1}=13.8x_n-11.8x_{n-1}$, where $x_n$ is the current output variable, $x_{n-1}$ is the previous output. Similarly $u_n$ is the current input variable, and $u_{n-1}$ is the previous input. The output of this system will obviously be discrete, unlike the real response of the RC circuit. The code, I've written for the arduino (Uno model) is here: #include "Wire.h" #define PCF8591 (0x90 >> 1) void AnalogOut(uint8_t value) { sei(); //enable interrupts Wire.beginTransmission(PCF8591); // turn on the PCF8591 Wire.write(0x40); // control byte Wire.write(value); Wire.endTransmission(); } void setup() { pinMode(A0, INPUT); Wire.begin(); // timer2 initialization noInterrupts(); TCCR2A = 0; TCCR2B = 0; TCCR2B = (0<<CS22) | (1<<CS20)| (1<<CS21); //and the prescaler for a 0.5ms sampling time TIMSK2 |= (1 << TOIE2); interrupts(); } uint8_t x_n_1=0; //x[n-1]=0; initial value of the output uint8_t u_n; //u[n] current value of the input uint8_t x_n; //x[n] current value for the ouptut uint8_t u_n_1; //u[n-1] previous value for the input //interrupt routine ISR(TIMER2_OVF_vect) { x_n=uint8_t(0.07246*u_n+0.07246*u_n_1+0.855*x_n_1); //x[n]=1/13.8(u[n]+u[n-1]+11.8x[n], also converting it to a 8bite integer AnalogOut(x_n); //writing to the PC8591 d/a converter x_n_1=x_n ; //x[n-1] for the next sampling interval becomes the current x[n] u_n_1=u_n; //u[n-1] for the next sampling interval becomes the current u[n] } void loop() { u_n_1=analogRead(A0); //getting an initial value for u[n-1] while(1); } Note that I have used an interrupt routine for generating a 0.5ms delay. Also, the part of the code in the loop() function is better explained next: TCNT2=preload; //the timer starts ticking at its inital value u_n_1=analogRead(A0); //we are doing this a/d conversion before even one interrupt has happened. Here is the circuit I put together in proteus: In proteus, I am sending $y=3+\sin(314t) volts$ to the A0 analog input. Also, I set the refference voltage for the a/d covnerter to 5V (in our actual laboratory we can set Vref using a potentiometer). Here is what I got on the oscilloscope in proteus when running the simulation: I am not sure whether this would work on the actual device since we'll be doing that in our lab in a couple of days as a part of a Digital control systems course. Is the crappy signal only a result of poor simulation or is there another error? I have tried generating a (discrete) sawtooth wave using this same circuit(but without no inputs or any transfer function, just plain signal output) and it worked well. • You got something wrong in your calculations. I get: $u_n=0.135 x_n+0.865 u_{n-1}$ Dec 28 '16 at 19:59 • Looks like an arbitrary waveform with 64 levels to me and 40 time quantums per sine. Dec 28 '16 at 20:13 • @VladimirCravero I must have done the calculation a hundred times and I always arrive to the same result. Are you sure you aren't using another discretization method(zero order hold, IIR...)? 
Dec 28 '16 at 20:18 • Well I have done the calculations in another way, and yes I get your numbers. What I find odd is that for an RC filter I would not expect the output to depend on the old input, but only on the current input and the current and old output. Dec 28 '16 at 20:20 • What range of values do you expect for $u_n$? – Chu Dec 28 '16 at 20:55 I think it's problem of scaling the coefficients from your first-order filter equation. So, you can manipulate the values with proper precision, instead of simply converting to uint8_t the result of the sums / multiplications, as you are doing. Since at the end you will restrict the output of the filter to 8 bits (DAC) and also to avoid using 32 bit variables, I suggest to define a scale factor of 256. In this way, it will be necessary to use only 16-bit variables (uint16_t) for MAC operations. The analogRead() function returns a number from 0 to 1023 (10 bit ADC). For reasons similar to those already discussed, I will limit the result from digital conversion to 8 bits (division by 4, or right shift by 2 bits). So: uint16_t u_n; The original equation: $$x_n=0.855072x_{n-1}+0.072464\left ( u_n+u_{n-1} \right )$$ Will be converted to: $$256x_n= 219x_{n-1}+19\left ( u_n+u_{n-1} \right )$$ and implemented in code, with rounding, as: x_n = (219*x_n_1 + 19*(u_n + u_n_1) + 128) >> 8; In the resulting plot ($T_s=0.5 ms$) it's possible to notice that the system, when excited approximately at the frequency of the pole (50 Hz), exhibits a response with magnitude 0.707 (-3 dB) of the input and phase lag of 45 degrees. It's worth remembering that, for example, a 4V voltage is equivalent to a digital value of 4/5 * 256 (aprox. 204). UPDATE: You may notice in the previous plot the effect of discretization with insufficient precision, for example the presence of a DC offset error. Thus, by changing the scaling factor to 1024 and using 32-bit variables, it's possible to minimize the discretization error: uint32_t u_n; $$1024x_n= 876x_{n-1}+74\left ( u_n+u_{n-1} \right )$$ x_n = (876*x_n_1 + 74*(u_n + u_n_1) + 512) >> 10;
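As an off-target cross-check of the numbers in the question and answer above, the difference equation can be simulated on a PC before flashing the Arduino. The sketch below (Python with NumPy, not part of the Arduino code) runs the Tustin-discretised filter on a DC input and on a sine near the pole frequency 1/(2πRC) ≈ 49.7 Hz, and should show unity DC gain and roughly 0.707 gain at the pole, matching the plot in the answer.

```python
import numpy as np

R, C, Ts = 1.6e3, 2e-6, 0.5e-3          # 1.6 kOhm, 2 uF, 0.5 ms sampling period
a = 2 * R * C / Ts                       # = 12.8, Tustin substitution factor

def filt(u):
    """x[n] = (u[n] + u[n-1] + (a-1)*x[n-1]) / (a+1)  ->  G(z) = (z+1)/(13.8z-11.8)."""
    x = np.zeros_like(u, dtype=float)
    for n in range(1, len(u)):
        x[n] = (u[n] + u[n - 1] + (a - 1) * x[n - 1]) / (a + 1)
    return x

t = np.arange(0, 0.2, Ts)

# DC input: steady-state output should equal the input (gain 1 at 0 Hz).
dc = filt(np.ones_like(t))
print("DC gain ~", dc[-1])

# Sine at the pole frequency ~49.7 Hz: expect amplitude roughly 0.707 of the input.
f = 1 / (2 * np.pi * R * C)
y = filt(np.sin(2 * np.pi * f * t))
print("gain near pole ~", y[len(y) // 2:].max())
```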
### Show Posts This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to. ### Messages - Hyun Woo Lee Pages: [1] 1 ##### Term Test 1 / Re: TT1 Problem 4 (morning) « on: October 19, 2018, 11:35:25 AM » This is a piece wise curve. Let’s denote the first curve as $r_1$ And the second curve, line from $3+3i$ to $3$ as $r_2$ Parametrize both curve. $$r_1(t) = 3e^{it} + 3i, 0 \geq t \geq -\frac{\pi}{2}$$ $$r_2(t) = -3ti + 3 + 3i, 0 \leq t \leq 1$$ Then, $$\int_{L}^{} (z+\bar{z}) dz = \int_{r_1}{} (z+\bar{z})dz + \int_{r_2}{} (z+\bar{z})dz$$ So, $$\int_{r_1} (z+\bar{z})dz =\int_{-\frac{\pi}{2}}^{0} [r(t) + \overline{r(t)}]r’(t) dt = \int_{-\frac{\pi}{2}}^{0} (3e^{it} + 3i + 3e^{-it} -3i)(3ie^{it}) dt$$ Then, $$\int_{-\frac{\pi}{2}}^{0} \Bigl(9ie^{2it} + 9i \Bigr)dt = \frac{9}{2}e^{2it} + 9it \Big|_\frac{-\pi}{2}^{0}$$ This calculates to $$9 + \frac{9}{2}i\pi$$ Now we have to compute for $r_2$ $$\int_{r_2}{} (z+\bar{z})dz = \int_{0}^{1} [(3+i(3-3t))(3-i(3-3t))](-3i) dt$$ This is $$-3i\int{0}^{1} [1+i(1-t)][1-i(1-t)] dt = -3i\int_{0}^{1} 1 + (1-t)^2 dt = -3i\int_{0}^{1} t^2 - 2t + 2$$ Then, $$-3i(\frac{t^3}{3} - t^2 + 2t) \Big|_{0}^{1} = -3i(\frac{1}{3} - 1 +2) = -4i$$ Adding the two integrals you get $$9 + \frac{9}{2}i\pi - 4i$$ 2 ##### Thanksgiving bonus / Re: Thanksgiving bonus 1 « on: October 07, 2018, 03:08:47 PM » Let $$f = u + iv$$ Then, $$u(x, y) = \frac{x}{x^2+y^2}, v(x, y) = \frac{y}{x^2+y^2}$$ Hence, $$\frac{\partial u}{\partial x} = \frac{-(x^2-y^2)}{(x^2+y^2)^2}, \frac{\partial u}{\partial y} = \frac{-2xy}{(x^2+y^2)2}$$ And $$\frac{\partial v}{\partial x} = \frac{-2xy}{(x^2+y^2)^2}, \frac{\partial v}{\partial y} = \frac{(x^2 - y^2)}{(x^2+y^2)^2}$$ And, $$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} = 0$$ This means that our function is locally sourceless and irrotational flux on domain D that does not include the origin (where our function is not defined). Now note that our function can be written as $$f(z) = \frac{1}{\overline{z}}$$ Since $$\frac{1}{\overline{z}} = \frac{x}{x^2+y^2} + i\frac{y}{x^2+y^2}$$ Now, lets take a look on the circle $$|z| = 1$$ The normal component of f is $$f\cdot n = \cos{\theta}\cos{\theta} + \sin{\theta}sin{\theta}$$ Since $$f(z) = \frac {\cos{\theta} +i\sin{\theta}}{r}, r = 1, \theta = arg(\frac{1}{\overline{z}})$$ And $$n = \cos{\theta} - i\sin{\theta}$$ Then, $$\int_{|z| =1} f\cdot n ds = 1\int_{|z| = 1} ds = 2\pi$$ Hence, our function is locally sourceless and irrotational flow but is not globally sourceless. Pages: [1]
NAG Library Function Document nag_prob_non_central_students_t (g01gbc)

1  Purpose

nag_prob_non_central_students_t (g01gbc) returns the lower tail probability for the noncentral Student's $t$-distribution.

2  Specification

#include #include double nag_prob_non_central_students_t (double t, double df, double delta, double tol, Integer max_iter, NagError *fail)

3  Description

The lower tail probability of the noncentral Student's $t$-distribution with $\nu$ degrees of freedom and noncentrality parameter $\delta$, $P\left(T\le t:\nu ;\delta \right)$, is defined by
$$P(T\le t:\nu;\delta)=C_{\nu}\int_{0}^{\infty}\left(\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\alpha u-\delta}e^{-x^{2}/2}\,dx\right)u^{\nu-1}e^{-u^{2}/2}\,du,\quad \nu>0.0$$
with
$$C_{\nu}=\frac{1}{\Gamma\left(\tfrac{1}{2}\nu\right)\,2^{(\nu-2)/2}},\qquad \alpha=\frac{t}{\sqrt{\nu}}.$$
The probability is computed in one of two ways.
(i) When $t=0.0$, the relationship to the normal is used:
$$P(T\le t:\nu;\delta)=\frac{1}{\sqrt{2\pi}}\int_{\delta}^{\infty}e^{-u^{2}/2}\,du.$$
(ii) Otherwise the series expansion described in Equation 9 of Amos (1964) is used. This involves the sums of confluent hypergeometric functions, the terms of which are computed using recurrence relationships.

4  References

Amos D E (1964) Representations of the central and non-central $t$-distributions Biometrika 51 451–458

5  Arguments

1: t - double Input On entry: $t$, the deviate from the Student's $t$-distribution with $\nu$ degrees of freedom.
2: df - double Input On entry: $\nu$, the degrees of freedom of the Student's $t$-distribution. Constraint: ${\mathbf{df}}\ge 1.0$.
3: delta - double Input On entry: $\delta$, the noncentrality argument of the Student's $t$-distribution.
4: tol - double Input On entry: the absolute accuracy required by you in the results. If nag_prob_non_central_students_t (g01gbc) is entered with tol greater than or equal to $1.0$ or less than machine precision (see nag_machine_precision (X02AJC)), then the value of machine precision is used instead.
5: max_iter - Integer Input On entry: the maximum number of terms that are used in each of the summations. Suggested value: $100$. See Section 8 for further comments. Constraint: ${\mathbf{max_iter}}\ge 1$.
6: fail - NagError * Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction).

6  Error Indicators and Warnings

NE_ALLOC_FAIL Dynamic memory allocation failed.
NE_INT_ARG_LT On entry, ${\mathbf{max_iter}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{max_iter}}\ge 1$.
NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_PROB_LIMIT The probability is too close to $0$ or $1$.
NE_PROBABILITY The probability is too small to calculate accurately.
NE_REAL_ARG_LT On entry, ${\mathbf{df}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{df}}\ge 1.0$.
NE_SERIES One of the series has failed to converge with ${\mathbf{max_iter}}=〈\mathit{\text{value}}〉$ and ${\mathbf{tol}}=〈\mathit{\text{value}}〉$. Reconsider the requested tolerance and/or the maximum number of iterations.

7  Accuracy

The series described in Amos (1964) are summed until an estimated upper bound on the contribution of future terms to the probability is less than tol. There may also be some loss of accuracy due to calculation of gamma functions. The rate of convergence of the series depends, in part, on the quantity ${t}^{2}/\left({t}^{2}+\nu \right)$. The smaller this quantity the faster the convergence. Thus for large $t$ and small $\nu$ the convergence may be slow. If $\nu$ is an integer then one of the series to be summed is of finite length.
8  Further Comments

If two tail probabilities are required then the relationship of the $t$-distribution to the $F$-distribution can be used:
$$F=T^{2},\quad \lambda=\delta^{2},\quad \nu_{1}=1\ \text{and}\ \nu_{2}=\nu,$$
and a call made to nag_prob_non_central_f_dist (g01gdc). Note that nag_prob_non_central_students_t (g01gbc) only allows degrees of freedom greater than or equal to $1$ although values between $0$ and $1$ are theoretically possible.

9  Example

This example reads deviates from, degrees of freedom for, and noncentrality arguments of noncentral Student's $t$-distributions, calculates the lower tail probabilities and prints all these values until the end of data is reached.

9.1  Program Text Program Text (g01gbce.c)
9.2  Program Data Program Data (g01gbce.d)
9.3  Program Results Program Results (g01gbce.r)
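For readers without access to the NAG library, the same lower tail probability can be cross-checked with SciPy's noncentral $t$ distribution. This is an independent implementation and the values below are illustrative inputs, not the NAG example data.

```python
from scipy.stats import nct

# P(T <= t : nu; delta) for a few illustrative (t, df, delta) triples
for t, df, delta in [(-1.5, 20.0, 2.0), (1.96, 10.0, 0.0), (3.0, 5.0, 1.5)]:
    print(f"t={t:5.2f}  df={df:4.1f}  delta={delta:4.1f}  "
          f"P = {nct.cdf(t, df, delta):.6f}")
```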
### Tomek Korbak PhD student, University of Sussex

# Where syntax ends and semantics begins and why should we care

The relation between syntax (how words are structured in a sentence) and semantics (how words contribute to the meaning of a sentence) is a long-standing open question in linguistics. It happens, however, to have practical consequences for NLP. In this blog post, I review recent work on disentangling the syntactic and the semantic information when training sentence autoencoders. These models are variational autoencoders with two latent variables and auxiliary loss functions specific for semantic and for syntactic representations. For instance, they may require the syntactic representation of a sentence to be predictive of word order and the semantic representation to be predictive of an (unordered) set of words in the sentence. I then go on to argue that sentence embeddings separating syntax from semantics can have a variety of uses in conditional text generation and may provide robust features for multiple downstream NLP tasks.

## Introduction

The ability of word2vec embeddings1 to capture semantic and syntactic properties of words in terms of geometrical relations between their vector representations is almost public knowledge now. For instance, the word embedding for "king" minus the word embedding for "man" plus the word embedding for "woman" will lie close to "queen" in the embedding space. Similarly, trained word embeddings can do syntactic analogy tasks such as "quickly" - "quick" + "slow" = "slowly." But from a purely statistical point of view, the difference between syntax and semantics is arbitrary. Word embeddings themselves do not distinguish between the two: the word embedding for "quick" will be in the vicinity of both "quickly" (adverb) and "fast" (synonym). This is because word embeddings (this applies to word2vec but also to more powerful contextual word embeddings, such as those produced by BERT2) are optimized to predict words based on their context (or vice versa). Context can be semantic (the meaning of neighbouring words) as well as syntactic (the syntactic function of neighbouring words). But from the point of view of a neural language model, learning that a verb must agree with person ("do" is unlikely when preceded by "she") is not fundamentally different from learning that it must maintain coherence with the rest of the sentence ("rubble" is unlikely when preceded by "I ate"). It seems that we need a more fine-grained training objective to force a neural model to distinguish between syntax and semantics. This is what motivates some recent approaches to learning two separate sentence embeddings for a sentence: one focusing on syntax and the other on semantics.

## Training sentence autoencoders to disentangle syntax and semantics

Variational autoencoder (VAE) is a popular architectural choice for unsupervised learning of meaningful representations.3 VAE's training objective is simply to encode an object $$x$$ into a vector representation (more precisely, a probability distribution over vector representations) such that it is possible to reconstruct $$x$$ based on this vector (or on a sample from the distribution over these vectors).
Although VAE research focuses on images, it can also be applied in NLP, where our $$x$$ is a sentence.4 In such a setting, VAE encodes a sentence $$x$$ into a probabilistic latent space $$q(z\vert x)$$ and then tries to maximize the likelihood of its reconstruction $$p(x\vert z)$$ given a sample from the latent space $$z \sim q(z\vert x)$$. $$p(x\vert z)$$ and $$q(z\vert x)$$, usually implemented as recurrent neural networks, can be seen as a decoder and an encoder. The model is trained to minimize the following loss function:

$\mathcal{L}_{\text{VAE}}(x) := -\mathbb{E}_{z \sim q(\cdot\vert x)} [\log p(x\vert z)] + \text{KL}(q(z\vert x) \parallel p(z))$

where $$p(z)$$ is assumed to be a Gaussian prior and the Kullback-Leibler divergence $$\text{KL}$$ between $$q(z\vert x)$$ and $$p(z)$$ is a regularization term.

Recently, two extensions of the VAE framework have been independently proposed: VG–VAE (von Mises–Fisher Gaussian Variational Autoencoder)5 and DSS–VAE (disentangled syntactic and semantic spaces of VAE).6 These extensions replace $$z$$ with two separate latent variables encoding the meaning of a sentence ($$z_{sem} \sim q_{sem}(\cdot\vert x)$$) and its syntactic structure ($$z_{syn} \sim q_{syn}(\cdot\vert x)$$). I will jointly refer to these models as sentence autoencoders disentangling semantics and syntax (SADSS).

Disentanglement in SADSS is achieved via a multi-task objective. Auxiliary loss functions $$\mathcal{L}_{sem}$$ and $$\mathcal{L}_{syn}$$, separate for semantic and syntactic representations, are added to the VAE loss function with two latent variables (a minimal numerical sketch of how these terms are assembled is given at the end of this section):

$\mathcal{L}_{\text{SADSS}}(x) := \mathbb{E}_{z_{sem} \sim q_{sem}(\cdot\vert x)} \mathbb{E}_{z_{syn} \sim q_{syn}(\cdot\vert x)} [-\log p(x\vert z_{sem}, z_{syn}) + \mathcal{L}_{sem}(x, z_{sem}) + \mathcal{L}_{syn}(x, z_{syn})] \\ + \text{KL}(q(z_{sem}\vert x) \parallel p(z_{sem})) + \text{KL}(q(z_{syn}\vert x) \parallel p(z_{syn}))$

There are several choices for auxiliary loss functions $$\mathcal{L}_{sem}$$ and $$\mathcal{L}_{syn}$$. $$\mathcal{L}_{sem}$$ might require the semantic representation $$z_{sem}$$ to predict the bag of words contained in $$x$$ (DSS–VAE) or to discriminate between a sentence $$x^+$$ paraphrasing $$x$$ and a dissimilar sentence $$x^-$$ (VG–VAE). $$\mathcal{L}_{syn}$$ might require the syntactic representation to predict a linearized parse tree of $$x$$ (DSS–VAE) or to predict a position $$i$$ for each word $$x_i$$ in $$x$$ (VG–VAE). DSS–VAE also uses adversarial losses, ensuring that (i) $$z_{syn}$$ does not capture semantic information, (ii) $$z_{sem}$$ does not capture syntactic information, and that (iii) neither $$z_{syn}$$ nor $$z_{sem}$$ alone is sufficient to reconstruct $$x$$. Crucially, both auxiliary losses $$\mathcal{L}_{sem}$$ and $$\mathcal{L}_{syn}$$ are motivated by the assumption that syntax pertains to the ordering of words, while semantics deals with their lexical meanings.

## What is syntax–semantics disentanglement for?

SADSS allow a number of applications in conditional text generation, including unsupervised paraphrase generation7 and textual style transfer8. Generating a paraphrase $$x'$$ of $$x$$ can be seen as generating a sentence that shares the meaning of $$x$$ but expresses it with different syntax. Paraphrases can be sampled by greedily decoding $$x' = p(\cdot\vert z_{sem}, z_{syn})$$ where $$z_{sem} = \text{argmax}_{ z_{sem}} q_{sem}(z_{sem}\vert x)$$ and $$z_{syn} \sim q_{syn}(z_{syn}\vert x)$$.
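Before moving on to applications, here is the promised sketch of how the SADSS objective defined above is assembled for one sentence. This is a minimal illustration in plain NumPy (not the authors' code), assuming diagonal-Gaussian posteriors and precomputed reconstruction and auxiliary losses; the encoder, decoder and auxiliary heads themselves are left abstract.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def sadss_loss(recon_nll, sem_aux, syn_aux, mu_sem, logvar_sem, mu_syn, logvar_syn):
    """Single-sample estimate of the SADSS objective.

    recon_nll : -log p(x | z_sem, z_syn) from the decoder
    sem_aux   : L_sem(x, z_sem), e.g. a bag-of-words prediction loss
    syn_aux   : L_syn(x, z_syn), e.g. a word-position or parse prediction loss
    mu_*, logvar_* : parameters of q(z_* | x) for the two latent variables
    """
    return (recon_nll + sem_aux + syn_aux
            + gaussian_kl(mu_sem, logvar_sem)
            + gaussian_kl(mu_syn, logvar_syn))

# Toy numbers only, to show the shape of the computation.
rng = np.random.default_rng(0)
mu_sem, logvar_sem = rng.normal(size=16), 0.1 * rng.normal(size=16)
mu_syn, logvar_syn = rng.normal(size=16), 0.1 * rng.normal(size=16)
print(sadss_loss(42.0, 3.1, 2.7, mu_sem, logvar_sem, mu_syn, logvar_syn))
```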
Examples of sentences generated by VG-VAE that either capture only semantics (and marginalize out syntax) or only syntax (and marginalize out semantics) of a target sentence.5

Similarly, one can pose textual style transfer as the problem of producing a new sentence $$x_{new}$$ that captures the meaning of some sentence $$x_{sem}$$ but borrows the syntax of another sentence $$x_{syn}$$.

Examples of sentences generated by DSS–VAE that transfer the syntax of one sentence onto the meaning of another. (Vanilla VAE output serves as a baseline.)6

There is one further application of unsupervised paraphrase generation: data augmentation. Data augmentation means generating synthetic training data by applying label–preserving transformations to available training data. Data augmentation is far less popular in NLP than in computer vision and other applications, partly due to the difficulty of finding task-agnostic transformations of sentences that preserve their meaning. Indeed, Sebastian Ruder lists task-independent data augmentation for NLP as one of the core open research problems in machine learning today. Unsupervised paraphrase generation might be a viable alternative to methods such as backtranslation. Backtranslation produces a synthetic sentence $$x'$$ capturing the meaning of an original $$x$$ by first machine translating $$x$$ into some other language (e.g. French) and then translating the result back to English.9 A more principled approach would be to use SADSS and generate synthetic sentences by conditioning on the meaning of $$x$$ captured in $$z_{sem}$$ but sampling $$z_{syn}$$ from a prior distribution to ensure syntactic diversity.

## Beyond conditional text generation

While most research has focused on applying SADSS to natural language generation, representation learning applications remain relatively underexplored. One can imagine, however, using SADSS for producing task-agnostic sentence representations10 that can be used as features in various downstream applications, including document classification and question answering. Syntax–semantics disentanglement seems to bring some additional benefits to the table that even more powerful models, such as BERT, might lack.

First, representations produced by SADSS may be more robust to distribution shift. Assuming that stylistic variation will be mostly captured by $$z_{syn}$$, we can expect SADSS to exhibit increased generalization across stylistically diverse documents. For instance, we can expect a SADSS model trained on the Wall Street Journal collection of Penn treebank to outperform a baseline model on generalizing to Twitter data.

Second, SADSS might be more fair. Raw text is known to be predictive of some demographic attributes of its author, such as gender, race or ethnicity.11 Most approaches to removing information about sensitive attributes from a representation, such as adversarial training,12 require access to these attributes at training time. However, disentanglement of a representation has been observed to correlate consistently with increased fairness across several downstream tasks13 without the need to know the protected attribute in advance. This fact raises the question of whether disentangling semantics from syntax also improves fairness, understood here as blindness to demographic attributes. Assuming that most demographic information is captured by syntax, one can conjecture that a disentangled semantic representation would be fairer in this sense.
Finally, learning disentangled representations for language is sometimes conjectured to be part of a larger endeavor of building AI capable of symbolic reasoning. Consider syntactic attention, an architecture separating the flow of semantic and syntactic information inspired by models of language comprehension in computational neuroscience. It was shown to offer improved compositional generalization.14 The authors further argue the results are due to a decomposition of a difficult out-of-domain (o.o.d.) generalization problem into two separate i.i.d. generalization problems: learning the meanings of words and learning to compose words. Disentangling the two allows the model to refer to particular words indirectly (abstracting away from their meaning), which is a step towards emulating symbol manipulation in a differentiable architecture, a research direction laid down by Yoshua Bengio in his NeurIPS 2019 keynote From System 1 Deep Learning to System 2 Deep Learning.

## Wrap up

Isn't it naïve to assume that syntax boils down to word order, and the meaning of a sentence is nothing more than a bag of words used in a sentence? Surely, it is. The assumptions embodied in $$\mathcal{L}_{sem}$$ and $$\mathcal{L}_{syn}$$ are highly questionable from a linguistic point of view. There are a number of linguistic phenomena that seem to escape these loss functions or occur at the syntax–semantics interface. These include the predicate-argument structure (especially considering the dependence of subject and object roles on context and syntax) or function words (e.g. prepositions). Moreover, what $$\mathcal{L}_{sem}$$ and $$\mathcal{L}_{syn}$$ capture may be quite specific to how the grammar of English works. While English indeed encodes the grammatical function of constituents primarily through word order, other languages (such as Polish) manifest much looser word order and mark grammatical function via case inflection, by relying on an array of orthographically different word forms. Interpreting $$z_{sem}$$ and $$z_{syn}$$ as semantic and syntactic is therefore somewhat hand-wavy and seems to provide little insight into the nature of language. Nevertheless, SADSS demonstrate impressive results in paraphrase generation and textual style transfer and show promise for several applications, including data augmentation as well as robust representation learning. They may deserve interest in their own right, despite being a crooked image of how language works.

This blog post was originally published on Sigmoidal blog.

1. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems 2. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Annual Conference of the North American Chapter of the Association for Computational Linguistics 3. Kingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. International Conference on Learning Representations 4. Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., & Bengio, S. (2016). Generating sentences from a continuous space. Proceedings of The 20th Conference on Computational Natural Language Learning 5. Chen, M., Tang, Q., Wiseman, S., & Gimpel, K. (2019). A Multi-Task Approach for Disentangling Syntax and Semantics in Sentence Representations.
Annual Conference of the North American Chapter of the Association for Computational Linguistics 2 6. Bao, Y., Zhou, H., Huang, S., Li, L., Mou, L., Vechtomova, O., Dai, X., & Chen, J. (2019). Generating sentences from disentangled syntactic and semantic spaces. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2 7. Gupta, A., Agarwal, A., Singh, P., & Rai, P. (2018). A deep generative framework for paraphrase generation. Thirty-Second AAAI Conference on Artificial Intelligence 8. Hu, Z., Yang, Z., Liang, X., Salakhutdinov, R., & Xing, E. P. (2017). Toward controlled generation of text. Proceedings of the 34th International Conference on Machine Learning, Volume 70, 1587–1596. 9. Sennrich, R., Haddow, B., & Birch, A. (2016). Improving Neural Machine Translation Models with Monolingual Data. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics 10. Conneau, A., & Kiela, D. (2018). Senteval: An evaluation toolkit for universal sentence representations. Proceedings of the Eleventh International Conference on Language Resources and Evaluation 11. Pardo, F. M. R., Rosso, P., Verhoeven, B., Daelemans, W., Potthast, M., & Stein, B. (2016). Overview of the 4th Author Profiling Task at PAN 2016: Cross-Genre Evaluations. CLEF 12. Elazar, Y., & Goldberg, Y. (2018). Adversarial Removal of Demographic Attributes from Text Data. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing 13. Locatello, F., Abbati, G., Rainforth, T., Bauer, S., Schölkopf, B., & Bachem, O. (2019). On the Fairness of Disentangled Representations. Advances in Neural Information Processing Systems 14. Russin, J., Jo, J., & O’Reilly, R. C. (2019). Compositional generalization in a deep seq2seq model by separating syntax and semantics. ArXiv Preprint ArXiv:1904.09708
# CAT Quant Practice Problems Question: Answer the questions on the basis of the information given below. ${f_1}\left( x \right) = \left\{ {\begin{array}{*{20}{c}}{x\;;\,{\rm{0}} \le x \le 1}\\{1;x \ge 1}\\{0;\;Otherwise}\end{array}} \right.{\rm{ }}$ $\begin{array}{l}{f_2}\left( x \right) = {f_1}\left( { - x} \right){\rm{ for all }}x\\{f_3}\left( x \right) = - {f_2}\left( x \right){\rm{ for all }}x\\{f_4}\left( x \right) = {f_3}\left( { - x} \right){\rm{ for all }}x\end{array}$ How many of the following products are necessarily zero for every x. ${f_1}\left( x \right){f_2}\left( x \right),\;{f_2}\left( x \right){f_3}\left( x \right),\;{f_2}\left( x \right){f_4}\left( x \right)$ ? 0 1 2 3
# Tag Info ## Hot answers tagged probabilistic-number-theory Accepted ### Freeman Dyson's approach to string theory Dyson's A walk through Ramanujan's garden gives the background of this comment: He explains that the "seeds from Ramanujan's garden have been blowing on the wind and have been sprouting all over ... ### Number of distinct factors Let me supplement the answer of i707107 for $c<1$, i.e. when we count integers with very few prime factors. Writing $$\pi_k(n):=|\{m\in\Bbb N:\mbox{ }m\leq n,\mbox{ }\omega(m)=k\}|,$$ the ... ### Freeman Dyson's approach to string theory I don't think it would have convinced Feynman because he didn't like the rabbit hole that string theory seemed to be going down. That instead of trying to explain some phenomenon, that they were ...
# Energy-band diagram of forward-biased pn junction When a p-n junction is forward biased then its energy-band diagram looks like this: What would happen if $V_a>V_{bi}$? ($V_{bi}$ is the built-in potential and $V_a$ is the externally applied voltage). Apparently, the bands would "reverse", in the sense that the conduction band limit in the p-side will be lower that the conduction band limite in the n-side; the same would happen with the valence band. What's the meaning of this? Can this happen?
# Question regarding Catalan Number 91 views I have a question regarding Catalan Number. The question is as follows, Find the number of binary strings w of length 2n with an equal number of 1’s and 0’s and the property that every prefix of w has at least as many as 0’s as 1’s. Now i know the answer for this question is 2nCn/(n+1). I wanted to know how this question relates to Catalan number? +1 Check this if it helps -https://gateoverflow.in/214618/counting?show=214964#a214964 Just replace open parenthesis or right steps with $0's$ and closed parenthesis or left steps with $1's$ in the given answer. 0 Hello Thank you for commenting......I know this interpretation of lattice paths, But my question is how is this generalization can be used for this question? To find the ways without touching the diagonal is = 2nCn - 2nCn-1 where 2nCn-1 is the violating paths right?? so to find the answer for the sequence question we have to take all the length 2n strings with an equal number of 1’s and 0’s which is = 2nCn. But the answer is = 2nCn - 2nCn-1. we know for the lattice path problem that (2nCn-1) is the no of violating paths. but what is this (2nCn-1) for this particular question?? 0 Here $\binom{2n}{n-1}$ includes those strings which have more no. of $1's$ than $0's$ in any of their prefix. For ex. - 111000, 100110 etc.
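One way to convince yourself of the correspondence is to enumerate the strings directly and compare the count with $C_n = \binom{2n}{n}/(n+1)$. The brute-force sketch below (Python, illustrative only, feasible for small n) does exactly that.

```python
from itertools import product
from math import comb

def valid(w):
    """Equal numbers of 0s and 1s, and every prefix has at least as many 0s as 1s."""
    bal = 0
    for c in w:
        bal += 1 if c == '0' else -1
        if bal < 0:
            return False
    return bal == 0

for n in range(1, 7):
    count = sum(valid(w) for w in product('01', repeat=2 * n))
    catalan = comb(2 * n, n) // (n + 1)
    print(n, count, catalan)   # the two columns agree: 1 1, 2 2, 5 5, 14 14, ...
```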
# Sound loop without hiccups I want to make a seamless sound loop that can play forever, but the consecutive EmitSound commands do not merge as I would like them to; that is, the sound seems to hiccup at the point where the loop restarts. For instance, two consecutive 1-second plays of the same note do not sound the same as a 2-second play of that note. Is there a solution to this problem? • Can you share the code you tried? – Vitaliy Kaurov Sep 20 '18 at 16:06 • Are you talking about a click because the waveforms have a discontinuous phase at the point where the old waveform ends and the new one starts, or a short pause while mathematica does whatever it has to do to load the 'new' sound? If you post the code it may be obvious. – N.J.Evans Sep 20 '18 at 16:07 • I had written code that didn't work using a loop and EmitSound. Since I realised that he proposed solution below works, I withdrew the comment with that code. I accepted the answer of kjosborne. – user447648 Sep 20 '18 at 17:44 The AudioStream functions provide a way to do this in 11.3: snd = Sound[SoundNote["C"]]; aud = Audio[snd]; strm = AudioStream[aud]; AudioPlay[strm, AudioLooping -> True] starts the playing. When it becomes annoying, you'll want to use AudioStop to stop all playing streams. • I had originally written an objection, but I withdraw it; this actually works. The reason it does not work with sine waves is because of a discrepancy at the point of merge. Thank you! :) – user447648 Sep 20 '18 at 17:42 • Glad this worked for you. You might want to unaccept the answer anyway, though. It is typical policy across SE to wait 24 hrs before accepting an answer so that people in all timezones can submit answers. It also increases the possibility someone will write another, better answer. – kjosborne Sep 20 '18 at 18:46
# Functions defined by integrals 2 videos 1 skill Let's explore functions defined by definite integrals. It will hopefully give you a deeper appreciation of what a definite integral represents. ### Evaluating a function defined by an integral VIDEO 7:25 minutes ### When an integral defined function is 0 VIDEO 9:45 minutes ### Functions defined by integrals PRACTICE PROBLEMS
# IF and If Help ranges Hi, Simple question: How I create if ranges? Eg: ``````if (something >1 && <10){ //do something here } `````` Its wrong and I have tried many combinations to get it right. ``````if ( (something > 1) && (something < 10) ){ //do something here } `````` ah thank you never thought of double brackets It is more about having a (set of) full statement(s). E.g. IF something is more than one AND IF something is less than ten AND IF something is even number THEN do_something You cannot have something like IF something is more than zero, but less than 10 THEN it should be IF something is more than zero AND something is less than 10 THEN
Apparently, the description shows the admin bot can access /flag. The site's acronym makes me certain that the bot will visit an XSS payload and send our flag somewhere. Let's start with the source code given to us. We can see that the site uses the replace method in JS to filter out common XSS tags and replace them with their HTML entity counterparts, such as <, ', etc. However, it doesn't use replaceAll, just replace. This means it will only replace the first occurrence. We can then test something such as this:

<<svg onload=alert(1)//>>

It does output an alert! It would be processed like this by the sanitizer (only the first < is escaped):

&lt;<svg onload=alert(1)//>>

My XSS payload involved first creating a GitHub Gist, then fetching and evaluating its content. This was the code I created:

To break it down, first, we need to make sure that our first replacement of < is used up, and that we can now create the payload with it. This will lead to the first two characters being evaluated as << - ok, now, I just used an svg tag because I recently watched a LiveOverflow video where he did it, and thought it would be a good idea. I then used onload to make this execute JS. I know I could have put this in quotes, but it would require unnecessary work to get over the XSS parser sanitizing the first instance of it - looking back, I could've just put all the characters that it sanitizes at the start of my sardine's name then make a simpler payload, but whatever. Anyways, the JavaScript will fetch my https://gist.githubusercontent.com/hKQwHW/56f7e2b3ace5c941971456588ce11e36/raw/969cb463ef37dc107fa9fedae36dd56ebf2d0275/ddd.js URL, then get the text content of it and evaluate it.

However, the next issue will be the CORS of the sardines website. It doesn't allow sending cookies or other data to URLs outside of its domain. However, I found a simple solution to this, and that's to use query strings to put data inside. My full payload ended up following the steps of: 1) fetching the contents of /flag, then 2) sending it to a requestbin with the /flag contents as a query string. The Gist code ended up looking like this:

fetch("https://xtra-salty-sardines.web.actf.co/flag").then(function(a) {
  a.text().then(owo => {
    fetch(`https://requestbin.net/r/13rz3f9a?${owo}`, { "mode": "no-cors" })
  })
})

Once I sent my payload to the admin bot, I received the flag inside my requestbin! It's actf{those_sardines_are_yummy_yummy_in_my_tummy}

Original writeup (https://pipuninstallpip.github.io/writeups/angstrom-xtra-salty-sardines).
# Leadscrew Efficiency

The Leadscrew Efficiency is the efficiency of a leadscrew, which is a screw used as a linkage in a machine, to translate turning motion into linear motion. Because of the large area of sliding contact between their male and female members, screw threads have larger frictional energy losses compared to other linkages. They are not typically used to carry high power, but more for intermittent use in low power actuator and positioner mechanisms. The following equation uses the torque equation to calculate efficiency:

$\text{Efficiency}=\frac{\tan\lambda}{\tan(\phi+\lambda)}$, where:

• $\lambda$ = lead angle
• $\phi$ = angle of friction
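As a quick numerical illustration of the equation (the dimensions and friction coefficient below are made-up example values, not data from the calculator page), a screw with a 5 mm lead on a 20 mm pitch diameter and a friction coefficient of 0.15 comes out at roughly 34% efficient.

```python
import math

lead = 5.0       # mm, screw lead (assumed example value)
d_pitch = 20.0   # mm, pitch diameter (assumed example value)
mu = 0.15        # coefficient of friction (assumed example value)

lam = math.atan(lead / (math.pi * d_pitch))   # lead angle
phi = math.atan(mu)                           # friction angle
eff = math.tan(lam) / math.tan(phi + lam)     # Efficiency = tan(lambda) / tan(phi + lambda)

print(f"lead angle     = {math.degrees(lam):.2f} deg")
print(f"friction angle = {math.degrees(phi):.2f} deg")
print(f"efficiency     = {eff:.2%}")          # about 34% for these numbers
```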
Flattening the curve Measures such as hand washing, social distancing and face masks reduce and delay the peak of active cases, allowing more time for healthcare capacity to increase and better cope with patient load.[1] Time gained through thus flattening the curve can be used to raise the line of healthcare capacity to better meet surging demand.[2] Without pandemic containment measures—such as social distancing, vaccination, and use of face masks—pathogens can spread exponentially.[3] This graphic illustrates how early adoption of containment measures tends to protect wider swaths of the population, thus reducing and delaying the peak of active cases. SIR model showing the impact of reducing the infection rate (${\textstyle \beta }$) by 76% Flattening the curve is a public health strategy to slow down the spread of the SARS-CoV-2 virus during the COVID-19 pandemic. The curve being flattened is the epidemic curve, a visual representation of the number of infected people needing health care over time. During an epidemic, a health care system can break down when the number of people infected exceeds the capability of the health care system's ability to take care of them. Flattening the curve means slowing the spread of the epidemic so that the peak number of people requiring care at a time is reduced, and the health care system does not exceed its capacity. Flattening the curve relies on mitigation techniques such as hand washing, use of face masks and social distancing. A complementary measure is to increase health care capacity, to "raise the line".[4] As described in an article in The Nation, "preventing a health care system from being overwhelmed requires a society to do two things: 'flatten the curve'—that is, slow the rate of infection so there aren't too many cases that need hospitalization at one time—and 'raise the line'—that is, boost the hospital system's capacity to treat large numbers of patients."[5] As of April 2020, in the case of the COVID-19 pandemic, two key measures are to increase the numbers of available ICU beds and ventilators, which are in systemic shortage.[2][needs update] Background Warnings about the risk of pandemics were repeatedly made throughout the 2000s and the 2010s by major international organisations including the World Health Organization (WHO) and the World Bank, especially after the 2002–2004 SARS outbreak.[6] Governments, including those in the United States and France, both prior to the 2009 swine flu pandemic, and during the decade following the pandemic, both strengthened their health care capacities and then weakened them.[7][8] At the time of the COVID-19 pandemic, health care systems in many countries were functioning near their maximum capacities.[4][better source needed] In a situation like this, when a sizable new epidemic emerges, a portion of infected and symptomatic patients create an increase in the demand for health care that has only been predicted statistically, without the start date of the epidemic nor the infectivity and lethality known in advance.[4] If the demand surpasses the capacity line in the infections per day curve, then the existing health facilities cannot fully handle the patients, resulting in higher death rates than if preparations had been made.[4] An influential UK study showed that an unmitigated COVID-19 response in the UK could have required up to 46 times the number of available ICU beds.[9] One major public health management challenge is to keep the epidemic wave of incoming patients needing material and 
human health care resources supplied in a sufficient amount that is considered medically justified.[4] Flattening the curve Queue markers at a shopping mall in Bangkok as a social distancing practicing Non-pharmaceutical interventions such as hand washing, social distancing, isolation and disinfection[4] reduce the daily infections, therefore flattening the epidemic curve. A successfully flattened curve spreads health care needs over time and the peak of hospitalizations under the health care capacity line.[2] Doing so, resources, be it material or human, are not exhausted and lacking. In hospitals, it for medical staff to use the proper protective equipment and procedures, but also to separate contaminated patients and exposed workers from other populations to avoid intra-hospital spread.[4] Raising the line Along with the efforts to flatten the curve is the need for a parallel effort to "raise the line", to increase the capacity of the health care system.[2] Healthcare capacity can be raised by raising equipment, staff, providing telemedicine, home care and health education to the public.[4] Elective procedures can be cancelled to free equipment and staffs.[4] Raising the line aims to provide adequate medical equipment and supplies for more patients.[10] During the COVID-19 pandemic Simulations comparing rate of spread of infection, and number of deaths due to overrun of hospital capacity, when social interactions are "normal" (left, 200 people moving freely) and "distanced" (right, 25 people moving freely). Green = Healthy, uninfected individuals Red = Infected individuals Blue = Recovered individual Black = Dead individuals [11] The concept was popular during the early months of the COVID-19 pandemic.[12] According to Vox, in order to move away from social distancing and return to normal, the US needs to flatten the curve by isolation and mass testing, and to raise the line.[13] Vox encourages building up health care capability including mass testing, software and infrastructures to trace and quarantine infected people, and scaling up cares including by resolving shortages in personal protection equipment, face masks.[13] According to The Nation, territories with weak finances and health care capacity such as Puerto Rico face an uphill battle to raise the line, and therefore a higher imperative pressure to flatten the curve.[5] In March 2020, UC Berkeley Economics and Law professor Aaron Edlin commented that ongoing massive efforts to flatten the curve supported by trillions dollars emergency package should be matched by equal efforts to raise the line and increase health care capacity.[14] Edlin called for an activation of the Defense Production Act to order manufacturing companies to produce the needed sanitizers, personal protective equipment, ventilators, and set up hundreds thousands to millions required hospital beds.[14] Standing in March 2020 estimates, Edlin called for the construction of 100-300 emergency hospitals to face what he described as "the largest health catastrophe in 100 years" and to adapt health care legislation preventing emergency practices needed in time of pandemics.[14] Edlin pointed out proposed stimulus package as oriented toward financial panics, while not providing sufficient funding for the core issue of a pandemic: health care capability.[14] In early May, the senior contributor on healthcare from Forbes posted, "Tenet Healthcare said its more than 60 hospitals are 'not being overwhelmed' by patients sickened by the Coronavirus strain COVID-19, the 
latest sign the U.S. healthcare system may be effectively coping with the pandemic," suggesting that the goal of flattening the curve to a point below health care capacity had met with initial success.[15] References 1. ^ Wiles, Siouxsie (9 March 2020). "The three phases of Covid-19—and how we can make it manageable". The Spinoff. Morningside, Auckland, New Zealand. Archived from the original on 27 March 2020. Retrieved 9 March 2020. 2. ^ a b c d Barclay, Eliza (7 April 2020). "Chart: The US doesn't just need to flatten the curve. It needs to "raise the line."". Vox. Archived from the original on 7 April 2020. Retrieved 7 April 2020. 3. ^ Maier, Benjamin F.; Brockmann, Dirk (15 May 2020). "Effective containment explains subexponential growth in recent confirmed COVID-19 cases in China". Science. 368 (6492): 742–746. Bibcode:2020Sci...368..742M. doi:10.1126/science.abb4557. PMC 7164388. PMID 32269067. ("...initial exponential growth expected for an unconstrained outbreak.") 4. Beating Coronavirus: Flattening the Curve, Raising the Line (YouTube video). Retrieved 12 April 2020. 5. ^ a b Gelardi, Chris (9 April 2020). "Colonialism Made Puerto Rico Vulnerable to Coronavirus Catastrophe". The Nation. ISSN 0027-8378. Archived from the original on 12 April 2020. Retrieved 12 April 2020. 6. ^ "Wanted: world leaders to answer the coronavirus pandemic alarm". South China Morning Post. 31 March 2020. Archived from the original on 9 April 2020. Retrieved 6 April 2020. CS1 maint: discouraged parameter (link) 7. ^ Manjoo, Farhad (25 March 2020). "Opinion | How the World's Richest Country Ran Out of a 75-Cent Face Mask". The New York Times. ISSN 0362-4331. Archived from the original on 25 March 2020. Retrieved 25 March 2020. 8. ^ "Pénurie de masques : une responsabilité partagée par les gouvernements" [Lack of masks: a responsibility shared by governments]. Public Senat (in French). 23 March 2020. Archived from the original on 9 April 2020. Retrieved 6 April 2020. CS1 maint: discouraged parameter (link) 9. ^ Imperial College COVID-19 Response Team (16 March 2020). "Impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand" (PDF). Archived (PDF) from the original on 16 March 2020. Retrieved 23 March 2020 – via imperial.ac.uk. 10. ^ Dudley, Joshua. "Q&A: Dr. Rishi Desai Talks To Medical Professionals About What We Can Learn From COVID-19". Forbes. Archived from the original on 12 June 2020. Retrieved 18 June 2020. 11. ^ Stevens, Harry (14 March 2020). "These simulations show how to flatten the coronavirus growth curve". The Washington Post. Archived from the original on 30 March 2020. Retrieved 29 March 2020. 12. ^ Roberts, Siobhan (27 March 2020). "Flattening the Coronavirus Curve". The New York Times. ISSN 0362-4331. Archived from the original on 11 April 2020. Retrieved 12 April 2020. 13. ^ a b Lopez, German (10 April 2020). "Why America is still failing on coronavirus testing". Vox.com. Archived from the original on 20 December 2020. Retrieved 12 April 2020. 14. ^ a b c d Edlin, Aaron (March 2020). "Don't just flatten the curve: Raise the line" (PDF). p. 2. Archived (PDF) from the original on 18 April 2020. Retrieved 12 April 2020 – via berkeley.edu. 15. ^ Japsen, Bruce (4 May 2020). "Hospital Operator Tenet Healthcare 'Not Overwhelmed' with Coronavirus Cases". Forbes. Archived from the original on 11 May 2020. Retrieved 10 May 2020.
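The flattening effect described in the article can be reproduced with the simple SIR model referred to in the figure captions. The sketch below (Python, with illustrative parameters chosen for this sketch rather than taken from any cited study) integrates the SIR equations with and without a reduced infection rate and reports the peak fraction of simultaneously infected people, which is the quantity that must stay below healthcare capacity.

```python
def sir_peak(beta, gamma=0.1, days=400, dt=0.1):
    """Euler integration of the SIR model; returns the peak infected fraction."""
    s, i = 0.999, 0.001          # initially almost everyone susceptible
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions in this time step
        new_rec = gamma * i * dt      # I -> R transitions in this time step
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

base = sir_peak(beta=0.3)              # unmitigated contact rate
mitigated = sir_peak(beta=0.3 * 0.5)   # contact rate halved by distancing, masks, etc.
print(f"peak infected, unmitigated: {base:.1%}")
print(f"peak infected, mitigated:   {mitigated:.1%}")
```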
My Math Forum - The coming of convoy (Calculus Math Forum)

June 4th, 2013, 02:22 PM  #1  Newbie  Joined: Jun 2013  Posts: 1  Thanks: 0

The coming of convoy

(i) Merchant ships sailing independently take 75% of the time to complete voyages compared to ships sailing in a convoy but lose 14% of their number to submarines on each voyage, whilst convoyed ships lose 5% per voyage. We start with a given fleet of merchant ships and must decide whether to use convoy for all of them or let them all sail independently. We can produce ships quickly enough to replace all those lost in convoy. Show that in the time it takes to make three convoy voyages, an independently sailing fleet will have made more voyages than a convoyed one but the position will be reversed for the time it takes to make six. What will happen over a long time?

(ii) Suppose we can produce merchant ships at a rate $\mu$ (ships per unit time) and that we lose merchant ships at a rate $\lambda$ (ships per ship afloat per unit time). Explain briefly why, with this model, the size x(t) of our fleet is governed by the differential equation $\dot{x}= -\lambda x + \mu$, and deduce that $x(t)= \frac{\mu}{\lambda}+ \left(X - \frac{\mu}{\lambda}\right)e^{-\lambda t}$, where X is the size of our fleet when t = 0. What happens to the size of our fleet when t is large?

Any help on the above questions would be much appreciated. The question comes from T Korner's 'Pleasures of Counting' (exercise 2.2.1).

June 4th, 2013, 02:56 PM  #2  Math Team  Joined: Sep 2007  Posts: 2,409  Thanks: 6

Re: The coming of convoy

Quote: Originally Posted by calvinnesbitt (i) Merchant ships sailing independently take 75% of the time to complete voyages compared to ships sailing in a convoy ... the position will be reversed for the time it takes to make six.

I don't understand this question. It appears to ask nothing about the number of ships that are lost. Obviously, if it takes ships sailing independently only 75% of the time of convoys, in the time it takes the convoys to make three trips, the independently sailing ships will make (4/3)(3) = 4 trips. But that is certainly NOT "reversed for the time it takes to make six". In the time the convoys make 6 trips, the independently sailing ships will make (4/3)(6) = 8 trips.

Quote: (ii) Suppose we can produce merchant ships at a rate $\mu$ (ships per unit time) and that we lose merchant ships at a rate $\lambda$ (ships per ship afloat per unit time). Explain briefly why, with this model, the size x(t) of our fleet is governed by the differential equation $\dot{x}= -\lambda x + \mu$.

$\dot{x}$ is the rate of change of x. During "unit time", x changes in two ways: it can decrease because of losing ships. The rate is $\lambda$ "per ship afloat per unit time", so since there are x ships afloat, it will decrease by $\lambda x$. It can increase because of new ships. The rate is $\mu$ new "ships per unit time". Putting those together, we have $\dot{x}= -\lambda x + \mu$.

Quote: and deduce that $x(t)= \frac{\mu}{\lambda}+ \left(X - \frac{\mu}{\lambda}\right)e^{-\lambda t}$, where X is the size of our fleet when t = 0.

The given equation is $\frac{dx}{dt}= -\lambda x+ \mu$ which we can write as $\frac{dx}{-\lambda x+ \mu}= dt$. Integrate that.

Quote: What happens to the size of our fleet when t is large?

Once you have integrated, take the limit as t goes to infinity.

August 17th, 2014, 01:35 AM  #3  Newbie  Joined: Aug 2014  From: London  Posts: 1  Thanks: 0

I've also found this question difficult to understand (part (i) at least), and, as this is the only place where it is discussed, excuse me for answering an old thread. I think the key to understanding part (i) comes from the sentence "We can produce ships quickly enough to replace all those lost in convoy." It is never claimed that ships lost sailing independently can be replaced (completely).
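The closed form quoted above can also be checked numerically. The sketch below (Python, with arbitrary example values for λ, μ and X, not values from the book) integrates ẋ = -λx + μ with small Euler steps, compares the result with x(t) = μ/λ + (X - μ/λ)e^(-λt), and shows the fleet size approaching μ/λ for large t.

```python
import math

lam, mu, X = 0.05, 3.0, 20.0      # example loss rate, build rate, initial fleet size

def closed_form(t):
    return mu / lam + (X - mu / lam) * math.exp(-lam * t)

# Euler integration of dx/dt = -lam*x + mu
x, dt = X, 0.01
for step in range(1, int(100 / dt) + 1):
    x += (-lam * x + mu) * dt
    t = step * dt
    if step % int(25 / dt) == 0:
        print(f"t={t:5.1f}  numeric={x:8.3f}  closed form={closed_form(t):8.3f}")

print("long-run fleet size mu/lam =", mu / lam)   # the limit as t -> infinity
```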
08 Aug The need for heat sinks with increased cooling capacity has been driven by the ever increasing heat generation of electronic devices. This has led to the increased manufacture of pin fin heat sinks as shown in figure 1 that vendors claim has vastly superior performance to the traditional plate fin heat sink with continuous parallel fins as shown in figure 2. Is this claim true? Like most engineering answers….it depends. Figure 1. Pin fin heat sink The goal of a heat sink is to efficiently remove heat from the source(s) it is attached to so as to minimize the temperature of that source(s). The performance of the heat sink is governed by this simple heat transfer equation: $Latex formula$1 where: $Latex formula$is the heat from the source(s) being cooled $Latex formula$is the convection coefficient of the heat sink $Latex formula$is the total surface area of the heat sink $Latex formula$is the temperature of the source being cooled $Latex formula$is the ambient temperature Any heat sink with a larger value of h x A will be able to produce a lower source temperature. Figure 2. Plate fin heat sink attached to a printed circuit board A plate fin would almost always for most practical heat sinks have a larger surface area than a pin heat sink that has the same over all dimensions. Only for very closely spaced pin fin heat sinks  does the pin fin surface area exceed that of the plate fin heat sink. Don’t believe me? Let’s do the math. Figure 3 shows the dimensions for a pin fin and plate fin heat sinks with the same external dimensions. To make the math easier a heat sink with a square base will be used for the comparison. The equations for the surface areas of the heat sinks are as follows: $Latex formula$2 $Latex formula$3 Nplate is the number of plate fins and Npin is the number of pin fin along each edge. Figure 3. Plate and pin fin heat sink dimensions The graph in figure 4 shows a plot of the spacing between the fins s versus the surface areas of plate fin and pin fin heat sinks with the following dimensions. t = 3mm H = 50mm L = 50mm W=50mm b = 5mm Figure 4. Comparison of the variation of surface area of a pin fin and plate fin heat sink with fin spacing Only at a fin spacing of approximately  3mm does the Ahs of the pin fin heat sink begin to exceed that of the plate fin heat sink. This fin spacing is quite small for most heat sinks. If pin fin heat sinks are to out perform plate fin heat sinks then its convection coefficient, h must be sufficiently large to compensate for the lower surface area. The growth of the thermal boundary layer along the direction of air flow is limited by the discontinuous surface formed by the pin fins. As air flows past the fins a thermal boundary layer builds along the surface of the pin. Once the flow of air has reached the gap between pin fins the thermal boundary layer is totally or partially destroyed and reformed when it encounters the next pin fin along the flow path. This destruction of the boundary layer can result in better heat transfer.    (Refer to the article  Top 3 mistakes made when selecting a heat sink point number 2: “Selecting a heat sink based solely on surface area” for a more detail explanation of the thermal boundary layer. ) The discontinuous geometry of a pin fin heat sink also increases the pressure drop across the heat sink. 
This increased pressure drop then results in a reduced flow rate particularly in natural convection and in some fan cooled applications where the fan is not directly attached to the heat sink and flow bypass may occur. There have been a few studies comparing the performance of plate and pin fin heat sinks in natural convection[1], [2]. The results show that for optimized plate and pin fin heat sinks that have the same external dimensions plate fin heat sinks have superior performance in most situations. Pin fin heat sinks are advantageous over plate fin heat sinks in situations where the heat sink may be oriented in multiple orientations. The performance of the pin fin heat sink does not vary significantly in different orientations. Plate fin heat sinks perform poorly when the fins are oriented perpendicular to the direction of air flow. If the installation of the heat sink is limited to 90° increments then orienting the fins at 45° as shown in figure 5 would create a design that has good performance in all orientations. Note that the performance of a heat sink with 45° angled fins with a length to width ratio of 1.5 is superior to a heat sink with with vertical fins. This is because the cooled ambient air can enter the channels formed by the 45° angled fins along the entire length of the heat sink. The performance of 45° angled fin heat sinks with a length to width ratio below 1 tend to perform slightly worse than a vertical fin heat sink. Figure 5. Heat sink with fins oriented 45° to the vertical for better natural convection performance in multiple orientations Pin fin heat sinks used in for forced convection applications tend to outperform plate fin heat sinks in most situations even considering the increased pressure drop across the pin fins. Here are some guidelines to help you select the best type of heat sink to use for your application: 1. If the orientation of a natural convection cooled heat sink may vary then use a pin fin heat sink. 2. If the direction of air flow is unknown  for a forced convection cooled heat sink then use a pin fin heat sink since it’s less sensitive to flow direction. 3. If substantial air flow rates can be achieved even with a large pressure drop across the heat sink then use a pin fin heat sink. 4. For all other situations use a plate fin heat sink. They are less costly, much simpler to manufacture and are readily available in a large range of sizes and configurations. [1] Younghwan Joo, Sung Jin Kim, “Comparison of thermal performance between plate-fin and pin-fin heat sinks in natural convection,” in: International Journal of Heat and Mass Transfer, Vol. 83, 1995, pp. 345-356 [2] Akshendra Soni, “Study of Thermal Performance between Plate-fin, Pin-fin and Elliptical Fin Heat Sinks in Closed Enclosure under Natural Convection” in: International Advanced Research Journal in Science, Engineering and Technology, Vol. 3 Issue 11, 2016, pp.133-139
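The surface-area comparison behind figure 4 is easy to reproduce approximately. The sketch below (Python) uses common textbook expressions for the exposed areas of plate and pin fin arrays with the article's dimensions; equations 2 and 3 from the article are not reproduced exactly here, so the crossover spacing this sketch finds should only be read as being in the neighbourhood of the article's roughly 3 mm figure, not as a re-derivation of it.

```python
import numpy as np

t, H, L, W = 3.0, 50.0, 50.0, 50.0      # mm, same overall dimensions as the article

def areas(s):
    """Approximate exposed areas (mm^2) for fin/pin spacing s (mm)."""
    n = int((W + s) // (t + s))          # fins (or pins per edge) that fit across W
    # plate fins: two faces + tip + ends per fin, plus the exposed base between fins
    plate = n * (2 * H * L + 2 * H * t + L * t) + (W * L - n * t * L)
    # square pins: four faces + tip per pin, plus the exposed base between pins
    pin = n * n * (4 * t * H + t * t) + (W * L - n * n * t * t)
    return plate, pin

for s in np.arange(1.0, 8.1, 0.5):
    plate, pin = areas(s)
    marker = "<- pin area exceeds plate area" if pin > plate else ""
    print(f"s = {s:3.1f} mm   plate = {plate:8.0f} mm^2   pin = {pin:8.0f} mm^2   {marker}")
```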
What is the appropriate sealing cutter and heat chamber temperature? Update Time: 2017-10-09 What is the appropriate sealing cutter temperature? Normally it is around 125-130℃, but it could be adjusted as per the actual situation. What is the appropriate heat chamber temperature? Normally it is around 180-200℃, but it could be adjusted as per the actual situation.
# Max flow algorithm for floating-point weights and E~=10*V Could you, please, suggest a maximum flow algorithm for a graph with floating-point weights and the number of edges approximately equal to the number of vertices? I.e. O(V^3) algorithms take too much time, but O(E^2) algorithms are much more preferable. More specifically, you can assume V~=1M and E~=10M where M stands for millions. Orlin's algorithm can solve max flow in sparse graphs in $$O(|V| |E|)$$ time. See • Orlin's algorithm seems more like a theoretical curiosity than a practical algorithm. Sleator & Tarjan's implementation of Dinic's algorithm works in $O(|V| |E| \log |V|)$ time and was published in 1982. – Laakeri Dec 26 '19 at 23:49
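For a quick practical route (rather than implementing Orlin's or Dinic's algorithm from scratch), a library implementation already handles floating-point capacities on sparse graphs. The sketch below uses NetworkX as one example, with a tiny made-up graph standing in for the ~1M-node, ~10M-edge instance; for graphs of that size the specialised functions in networkx.algorithms.flow (e.g. the Dinitz implementation) can be passed via flow_func, and a compiled solver may still be preferable for speed.

```python
import networkx as nx

# Tiny example graph with floating-point capacities.
G = nx.DiGraph()
edges = [("s", "a", 2.5), ("s", "b", 1.75), ("a", "b", 0.5),
         ("a", "t", 1.25), ("b", "t", 2.0)]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print("max flow:", flow_value)   # 3.25 for this toy graph
print(flow_dict)                 # edge-by-edge flow assignment
```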
## Files in this item

9314917.pdf (9MB, PDF; no description provided)

## Description

Title: Synthesis and characterization of novel polymer-ceramic nanocomposites: Organoceramics
Author(s): Messersmith, Phillip Byron
Doctoral Committee Chair(s): Stupp, Samuel I.
Department / Program: Materials Science and Engineering
Discipline: Materials Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Chemistry, Polymer Engineering, Materials Science

Abstract: This manuscript describes the synthesis and characterization of novel polymer-ceramic nanocomposites (organoceramics) based on various water-soluble polymers and calcium aluminate hydrates. Synthesis of these materials involves the aqueous precipitation of the inorganic crystals in the presence of polymer. The presence of polymer during crystallization often leads to changes in particle morphology, and in some cases intercalation. The organoceramics based on poly(vinyl alcohol) (PVA) and $\rm CaO \cdot Al_2O_3 \cdot 10H_2O$ (CAH$_{10}$) were found to exhibit retarded phase transformation kinetics and unique particle morphologies. Intercalation of PVA between layers of $\rm [Ca_2Al(OH)_6]^+[(OH) \cdot 3H_2O]^-$ occurred during crystal growth, resulting in an organoceramic containing up to 40% polymer by weight. Polymer intercalation resulted in an expansion of the interlayer by approximately 10 Å, consistent with the formation of a double layer of PVA chains across each interlayer. Thermal degradation of the inorganic and polymeric components of the organoceramic occurred at higher temperatures than the individual materials. Compressive strength of PVA organoceramic powder compacts was significantly higher than that of $\rm [Ca_2Al(OH)_6]^+[(OH) \cdot 3H_2O]^-$ compacts, possibly reflecting substantial differences in particle morphologies.

Issue Date: 1993
Type: Text
Language: English
URI: http://hdl.handle.net/2142/21638
Rights Information: Copyright 1993 Messersmith, Phillip Byron
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9314917
OCLC Identifier: (UMI)AAI9314917
# Heinz Hopf

Heinz Hopf (on the right) in Oberwolfach, together with Hellmuth Kneser

- Born: 19 November 1894, Gräbschen (near Breslau, Imperial Germany; now Wrocław, Poland)
- Died: 3 June 1971 (aged 76)
- Nationality: German
- Alma mater: University of Berlin
- Known for: Hopf algebra, Hopf bundle, Hopf conjecture, H-space, Hopf–Rinow theorem
- Fields: Mathematics
- Institutions: ETH Zürich
- Doctoral advisor: Erhard Schmidt
- Doctoral students: Beno Eckmann, Hans Freudenthal, Werner Gysin, Friedrich Hirzebruch, Heinz Huber, Michel Kervaire, Willi Rinow, Hans Samelson, Ernst Specker, Eduard Stiefel, James J. Stoker

Heinz Hopf (19 November 1894 – 3 June 1971) was a German mathematician who worked in the fields of topology and geometry.[1]

## Early life and education

Hopf was born in Gräbschen, Germany (now Grabiszyn (pl), part of Wrocław, Poland), the son of Elizabeth (née Kirchner) and Wilhelm Hopf. His father was born Jewish and converted to Protestantism a year after Heinz was born; his mother was from a Protestant family.[2][3] Hopf attended Dr. Karl Mittelhaus' higher boys' school from 1901 to 1904, and then entered the König-Wilhelm-Gymnasium in Breslau. He showed mathematical talent from an early age. In 1913 he entered the Silesian Friedrich Wilhelm University where he attended lectures by Ernst Steinitz, Kneser, Max Dehn, Erhard Schmidt, and Rudolf Sturm. When World War I broke out in 1914, Hopf eagerly enlisted. He was wounded twice and received the iron cross (first class) in 1918. In 1920, Hopf moved to Berlin to continue his mathematical education. He studied under Ludwig Bieberbach, receiving his doctorate in 1925.

## Career

In his dissertation, Connections between topology and metric of manifolds (German Über Zusammenhänge zwischen Topologie und Metrik von Mannigfaltigkeiten), he proved that any simply connected complete Riemannian 3-manifold of constant sectional curvature is globally isometric to Euclidean, spherical, or hyperbolic space. He also studied the indices of zeros of vector fields on hypersurfaces, and connected their sum to curvature. Some six months later he gave a new proof that the sum of the indices of the zeros of a vector field on a manifold is independent of the choice of vector field and equal to the Euler characteristic of the manifold. This theorem is now called the Poincaré–Hopf theorem. Hopf spent the year after his doctorate at the University of Göttingen, where David Hilbert, Richard Courant, Carl Runge, and Emmy Noether were working. While there he met Paul Alexandrov and began a lifelong friendship. In 1926 Hopf moved back to Berlin, where he gave a course in combinatorial topology. He spent the academic year 1927/28 at Princeton University on a Rockefeller fellowship with Alexandrov. Solomon Lefschetz, Oswald Veblen and J. W. Alexander were all at Princeton at the time. At this time Hopf discovered the Hopf invariant of maps ${\displaystyle S^{3}\to S^{2}}$ and proved that the Hopf fibration has invariant 1. In the summer of 1928 Hopf returned to Berlin and began working with Alexandrov, at the suggestion of Courant, on a book on topology. Three volumes were planned, but only one was finished. It was published in 1935. In 1929, he declined a job offer from Princeton University. In 1931 Hopf took Hermann Weyl's position at ETH, in Zürich. Hopf received another invitation to Princeton in 1940, but he declined it.
Two years later, however, he was forced to file for Swiss citizenship after his property was confiscated by Nazis, his father's conversion to Christianity having failed to convince German authorities that he was an "Aryan." In 1946/47 and 1955/56 Hopf visited the United States, staying at Princeton and giving lectures at New York University and Stanford University. He served as president of the International Mathematical Union from 1955 to 1958.[4] ## Personal life In October 1928 Hopf married Anja von Mickwitz (1891–1967). ## Honors and awards He received honorary doctorates from Princeton University, the University of Freiburg, the University of Manchester, the University of Paris, the Free University of Brussels, and the University of Lausanne. He was an Invited Speaker at the International Congress of Mathematicians (ICM) in Zürich in 1932 and a Plenary Speaker at the ICM in Cambridge, Massachusetts in 1950.[5] In memory of Hopf, ETH Zürich awards the Heinz Hopf Prize for outstanding scientific work in the field of pure mathematics.
# Talk:Probability space ## Simple events Should we add into the article that simple events are independent? 134.71.66.237 (talk) 20:19, 24 September 2009 (UTC) Intuitively, elementary events are independent, since the “nature” picks one and only one elementary event to become an outcome of the experiment. However from technical point of view we cannot say that {ω1} is independent from {ω2}, since these events are not necessarily in the σ-algebra of the probability space, and therefore they can be non-measurable.  … stpasha »  20:26, 24 September 2009 (UTC) It is a common error, to confuse "independent' and "disjoint" ("mutually exclusive"). Elementary events are disjoint (well, provided that they are measurable...) and not at all independent. The probability of their intersection is equal to zero, not at all to the product of their probabilities. (Well, if one or both are of zero probability, then of course...) Boris Tsirelson (talk) 04:19, 25 September 2009 (UTC) Ugh, shame on me :( such a noobish mistake. Of course they aren’t independent  … stpasha »  04:26, 25 September 2009 (UTC) Why specify "elementary" event. Are there also non-elementary events? If so, how does an elementary event differ from a non-elementary event? Is "elementary event" a synonym for "outcome"? How does a simple event differ from an elementary event? —Preceding unsigned comment added by 80.133.111.96 (talk) 15:58, 9 March 2010 (UTC) Maybe this terminology is somewhat archaic, but it exists outside Wikipedia, and we are not authorized to change it. The article contains a link to Elementary event; all your question are answered there. Boris Tsirelson (talk) 19:02, 9 March 2010 (UTC) ## Relate How does this relate to the concept of elementary event? - Patrick 10:20 Jan 13, 2003 (UTC) ## Issues A couple of issues: • The article tries to explain the difference between Ω and S with some examples, but these do not really get to the heart of the matter. What is the difference, in general? Is it that the elements of S must be (tuples of) measurable quantities? • The article notes that not all subsets of a probability space are events, but does not give an example or explain why this is so. It would be nice if someone it it upon him- or herself to address these. Dbtfz 04:23, 19 January 2006 (UTC) Dbtfz 04:23, 19 January 2006 (UTC) • Example 2 is so verbose that its difficult to follow. I wonder if this could be replaced with easy to understand example. Example 1 is excellent but a bit trivial Example 2 is quite easy to understand if you know all the terms (what a partition is, what a sigma-algebra is etc...). I do not think that it should be replaced by an easier example. Agreed - this example, though non-trivial, is excellent. The second part of example 2 is good, but it is not straight forward to see that the partitions are combined from the number of tails, which is the essence. The first part I cannot understand. In fact, this is the problem that I have encountered in Wikipedia over and over again: make things easier so other people can understand. This can't happen because in that case you tend to make things less formal and mathematics should always be as formal as possible. 
10:00, 25 August 2008 (UTC) The statement events typically are intervals like "between 60 and 65 meters" and unions of such intervals, but not "irrational numbers between 60 and 65 meters" is WRONG, because by definition of a σ-algebra, if ${\displaystyle {\mathcal {F}}}$ contains, for example, all closed real intervals in ${\displaystyle [0,1]}$, then ${\displaystyle {\mathcal {F}}}$ also contains, for example, all irrational intervals in ${\displaystyle [0,1]}$. On a side-note, these irrational intervals are Lebesgue-measurable, so the above statement doesn't look good after the statement of some of the subsets are simply not of interest, others cannot be “measured” The (first noted) statement should be replaced with a correct reason for why ${\displaystyle {\mathcal {F}}}$ is not always chosen to be ${\displaystyle {\mathcal {P}}(S)}$. -- anonymous12345678910111213141516 @ 2012-07-22 03:20:27 CEST — Preceding unsigned comment added by 78.92.204.127 (talk) ## Symbol Pr Speaking foundationally, the notation Pr() is not more precise, as it obscures the fact that P is just a function like any other, and a situation using P for Probability and some other function is being inconsistent. Jfr26 11:18, 16 April 2006 (UTC) P is here defined as a measure, with support on some appropriate sigma-algebra, while in my experience Pr(A) is literally shorthand for "the probability that A". Precise use of P—rather than Pr—is the goal, and speaking personally, I like Pr because it handily ties together a string of notational conventions, at the cost of one extra keystroke. More to the point, it's commonly used for the above-described shorthand purpose by mathematicians and others and so deserves some additional explanation. Ben Cairns 14:03, 27 April 2006 (UTC) ## Merge If someone can check that all of the relevant information has been moved to probability theory, we might be able to delete this page and replace it with a redirect? MisterSheik 17:12, 28 February 2007 (UTC) Don't merge. There are too many articles which link specifically to Probability space for the specific material here, not for that material buried in a more general treatment. Jheald 18:36, 3 March 2007 (UTC) Is it a more general treatment though? Unless I'm missing something, the whole article on probability theory is just the definition of probability space. The probability axioms are really part of the definition of probability space since they're restrictions on one of the components of a "probablity space". What do you think? MisterSheik 18:56, 3 March 2007 (UTC) That may be part of the problem. Probability theory should be the top-level article for the whole of the mathematics associated with probability, as distinct from Probability broadly treating the question, "what is probability?". Now the whole of the mathematics associated with probability theory is a much bigger subject than the definition of probability space, as any number of textbooks with the title "Theory of probability" indicate. Jheald 19:36, 3 March 2007 (UTC) Regarding the probability axioms, I would treat them first, before introducing the full detail on probability spaces. The laws of probability apply perfectly well to probabilities of finite numbers of discrete events, and are most easily conveyed in that setting first, in terms of elementary events. Only having treated the finite case first is it useful to generalise to the full works of measure theory and countably infinite sets - which may be beyond what some readers ever need to use. 
Jheald 22:23, 3 March 2007 (UTC) ## Re-redirect probability measure ? Suggest changing the redirect of probability measure to measure (mathematics), rather than here. What do people think ? Jheald 21:29, 3 March 2007 (UTC) For minor things like that, it is often better to just make the change and then discuss if someone cares enough to revert it (see WP:BRD). It saves a lot of discussion for things that are truly insignificant. CMummert · talk 00:58, 4 March 2007 (UTC) I think that the redirect should be to here since a probability measure is defined on a probability space and not on an arbitrary measure space. Topology Expert (talk) 10:02, 25 August 2008 (UTC) ## Why start class? Why is this article start class? It seems almost done. MisterSheik 23:58, 17 June 2007 (UTC) The content is fine, but it could be more accessible. This does not mean it everything in the article should be understandable to everyman, but that each topic should be made as accessible as it can be. Geometry guy 00:08, 18 June 2007 (UTC) I'm all for accessibility as long as it doesn't make things difficult for people that want to use the encyclopedia as a reference (as a opposed to a tutorial). That's why the examples are separated from the text. I think maybe some better examples, and an extra paragraph to the lead would do it? MisterSheik 01:41, 18 June 2007 (UTC) Sounds good. WP:LEAD (and more generally WP:MoS) contains lots of helpful advice (in case you haven't seen it). Geometry guy 11:11, 18 June 2007 (UTC) ## First paragraph Hi, I changed this: In probability theory, the definition of the probability space is the foundation of probability theory. to this: The definition of the probability space is the foundation of probability theory. which seems to make more sense. It seems like there needs to be an informal (as accessible as possible and depending on as few specialized terms as necessary) definition of a Probability Space in that first paragraph though... I don't know much about probability theory, and the fact that probability space is its foundation is an interesting fact, nevertheless it doesn't tell me what probability space is. —Preceding unsigned comment added by 157.193.108.159 (talk) 12:40, 26 October 2007 (UTC) ## Usually? It says "Usually, the events are the Lebesgue-measurable or Borel-measurable sets of real numbers." Shouldn't that be "If Omega is the set of real numbers (or R², R³ etc), then F is taken to be the Lebesgue-measurable or Borel-measurable sets". There are lots of applications of probability spaces which are based e.g. on a finite Omega. Giese (talk) 09:45, 15 January 2008 (UTC) Well, if the underlying space is finite, then every set is Borel, so it seems we're good either way. --Trovatore (talk) 22:25, 9 February 2009 (UTC) ## Subset symbol After the edit by 128.2.182.120, two notations are intermixed; see Subset#The symbols ⊂ and ⊃. Compare ${\displaystyle A\subseteq \Omega }$ and ${\displaystyle {\mathcal {F}}\subset 2^{\Omega }}$; equality is permitted in both cases. Boris Tsirelson (talk) 21:08, 9 February 2009 (UTC) Best would be to change all instances to ${\displaystyle \subseteq }$, unless (which I doubt, though I haven't checked) there is some place where it is necessary to specify that the inclusion is proper. In that unlikely case, it would be well to use ${\displaystyle \subsetneq }$; thus we avoid all ambiguity. --Trovatore (talk) 22:27, 9 February 2009 (UTC) OK, I did so. Boris Tsirelson (talk) 07:14, 10 February 2009 (UTC) ## What is wrong with the existing lead? 
(An answer is expected first of all from User:Melcombe. Boris Tsirelson (talk) 20:23, 17 September 2009 (UTC)) So I made another lead, hopefully this time better than the previous one (or since according to somebody there was NO previous one, the new one must definitely be better than nothing :) ... stpasha » talk » 04:57, 18 September 2009 (UTC) Now I see: it was an introduction rather than a lead. Boris Tsirelson (talk) 09:10, 18 September 2009 (UTC) My interpretation of WP:LEAD, in this context, is that mathematics should be avoided where possible, and certainly that unexplained maths symbols should be avoided. I used the "missing" tag as what was there was clearly more appropriate, as it stood, to being an introduction section. What was there, and is now the "introduction", was far too heavily mathematical to be a lead. What is there now is much, much better, but some might still think is has too much maths. However, I suggest waiting to see if there are other complaints. My main concern now, in the lead and elsewhere, is the choice of font for "F" being used, at least I think it is meant to be an F. To me, there seems no reason to use particularly exotic fonts, especially where these turm out to be nearly illegible. Melcombe (talk) 09:14, 18 September 2009 (UTC) However, use of just "F" (or rather F) here would contradict the tradition (in textbooks, monographs and papers). Boris Tsirelson (talk) 09:18, 18 September 2009 (UTC) But can't we find an F more like the curly F I have seen and which I take as being the tradition. What I am presently seeing is a tiny set of black rectangles with a white line through them, which on a third or fourth attempt to work out what is one might just conclude is an F, in the absence of anything else it might be. Unfortunately my curent setup doesn't allow be to see what is available using the standard editting tools. Melcombe (talk) 09:29, 18 September 2009 (UTC) Maybe, try "View->Zoom in" on your browser. Boris Tsirelson (talk) 09:33, 18 September 2009 (UTC) I don't have that option, but when I change the font size to "larger" in IE7 I see that the symbol is an especially curly F. But for some reason "larger" is too large and I get little content on a screen, so I use "medium" which doesn't sound as if I am using an unusually small font. I did get the editing tools working again, but I don't see an acceptable F there, nor even any thing recognisable as the one presently being used. So how is it being added, and are there other choices for a replacement. I see that there some discussion of typography in Sigma-algebra. Melcombe (talk) 09:58, 18 September 2009 (UTC) In order to change zoom in a browser you can try pressing Ctrl+<mouse wheel up>. As for the problem you describing, there seems to be a bug with Internet Explorer browser where it does not apply font smoothing correctly to certain Unicode characters. Another browser on the same system (that is, with the same (default) fonts installed) displays the curly “F” quite close to how TeX does it. See the screenshot (from Google Chrome browser): The symbol being used is a Unicode character SCRIPT CAPITAL F (U+2131), and it can be typed as &#x2131; (or simply copy-pasted) ... stpasha » talk » 16:37, 18 September 2009 (UTC) (unindenting) Well yes, but MOS:MATH says "Although the symbols that correspond to named entities are very likely to be displayed correctly, a significant number of viewers will have problems seeing all the characters listed at Unicode Mathematical Operators. 
One way to guarantee that an uncommon symbol is rendered correctly for all readers is to force the symbol to display as an image, using the math environment." It is not just the F that I have trouble seeing, there also the R in R12 later on. It seems unwise/unhelpful to use characters not all can see. Melcombe (talk) 09:45, 21 September 2009 (UTC) ## What is the problem with elementary sets? In Example 4, elementary sets disappear; in Example 5 their occurrence is questioned by "clarification needed". Why? It is written in the "Non-atomic case" section: "Initially the probabilities are ascribed to some “elementary” sets (see the examples). Then a limiting procedure allows to ascribe probabilities to sets that are limits of sequences of elementary sets, or limits of limits, and so on. All these sets are the σ-algebra ℱ." Any problem here? In more technical words, elementary sets are a collection that generates the sigma-field, and are such that their probabilities are defined naturally. Boris Tsirelson (talk) 09:16, 18 September 2009 (UTC) I never seen the term “elementary set” before, in our textbooks it was called “generator” set since it generates the σ-algebra. It appears to me that the use of word “elementary” here is to certain extent confusing: since we already defined elementary events as elements of Ω, and later on stated that the word “event” essentially means a subset of Ω. As such terms “elementary set” and “elementary event” appear to be synonyms, whereas in fact they aren’t. And this is the “problem with elementary sets” :) ... stpasha » talk » 17:09, 18 September 2009 (UTC) You are right: it is my neologism, and poorly chosen. "Generator set"? Maybe. In which textbooks did you see it? Really, the whole collection of sets is generating; individual sets are not. But if it is already used in textbooks then it is the best choice. (Another option could be "simple sets".) Boris Tsirelson (talk) 16:06, 19 September 2009 (UTC) In fact, "simple sets" is already in use, see Jordan measure. Boris Tsirelson (talk) 17:08, 17 November 2009 (UTC) ## Comment on introduction The new intro section starts "The probability space presents a model ...". My immediate thought was "the probability space for what?" ... what thing or what type of thing? Melcombe (talk) 09:20, 18 September 2009 (UTC) Would you also ask "the linear space for what?", "the topological space for what?" etc.? It is not "space for something", it is a space serving as a model for something. (See also space (mathematics).) Boris Tsirelson (talk) 09:29, 18 September 2009 (UTC) Yes I would. It should be either "the probaility space for a given scenario is ..." (i.e. "the" is specific (definite article), even if the topic is general) or "a probability space is ...." (i.e "a" is non-specific (indefinite article)). Recall that this is a general encyclopedia and should not descend into the misuse of grammar prevalent in much published literature. In addition the article has said anything useful about what a probability space is used for (a "general situation" is far too vague), so it really isn't just a case of switching "the" to "a". I suppose it might be "the probability space for a general situation is a only model ... It would be better to have something first that says what aspects of "a general situation" are being modelled. Melcombe (talk) 09:12, 21 September 2009 (UTC) Hello, I am an "idiot" who just edited the introduction without even looking at the discussion page. 
I was therefore unaware that the page was being edited very actively. Now I think I was a bit rude, like not seeing others in the room. Please do not be shy to edit what I wrote. Right now, I agree that the first words should be "A probability space is...". Cacadril (talk) 08:23, 27 September 2009 (UTC) It’s ok, feel free to improve the article; we are currently distracted by “Normal distribution” anyways :)  … stpasha »  20:19, 27 September 2009 (UTC) ## The lede I am trying to figure out what exactly needs to be in the lede. What are the points to make, the ideas to conview, the misunderstandings to prevent? The issue of how exactly to word each point is secondary to this. The lede should be as concise as possible while still helping readers that do not know the particular perspective, or way of thinking, that is needed to make sense of the concepts. More detailed expositions belong in the sections below the lede. Still some details may be a good idea to include in the lede becase they help the unprepared reader to make sense of it. As the lede stands now, I feel it may be a bit too wordy, and some of it could be moved to the introduction. As suggested by another contributor above, the lede should state what the subject is (a combination of three things), not just what it does (models situations) or what it is useful for. Still some such information is helpful for the unprepared reader because it defines the perspective. I am considering: In probability theory, a probability space or a probability triple is a mathematical construct that models how the laws of probability apply in situations where there are multiple things that can happen next. A probability space is constructed with a specific kind of situation or experiment in mind. One imagines that each time a situation of that kind arises, the set of possible outcomes is the same and the probability levels are also the same. A probability space consists of three parts: A set of distinct possible outcomes; a set of groups of outcomes, called "events" to which specific probability levels are assigned; and the assignment of probabilities to these groups, i.e. a function from events to probability levels. Cacadril (talk) 10:39, 27 September 2009 (UTC) ## nature makes its move Once the probability space is established, it is assumed that “nature” makes its move and selects a single outcome, ω, from the sample space Ω. Then we say that all events from ${\displaystyle \scriptstyle {\mathcal {F}}}$ containing the selected outcome ω (recall that each event is a subset of Ω) “have occurred”. The selection performed by nature is done in such a way that if we were to repeat the experiment an infinite number of times, the relative frequencies of occurrence of each of the events would have coincided with the probabilities prescribed by the function P. This seems very muddled--is there a more formal description of what happens here? For example, if I repeatedly flip an unbiased coin (infinite Bernoulli process) and "nature" selects the sequence H,T,H,T,H,T... then the frequencies of occurrence coincide with the expected 50% heads, but I wouldn't call it random. Thanks. 66.127.52.47 (talk) 01:29, 14 April 2010 (UTC) ## Standard probability space It would be good to have a few sentences to link to Standard probability space, rather than just having this under "see also". It seems a good thing to give a mention to in this article, given the overlap of names. Melcombe (talk) 11:56, 9 September 2010 (UTC) I did; please look now. 
Boris Tsirelson (talk) 13:40, 9 September 2010 (UTC) Is that all that can reasonably be said? I guess the name suggests that the idea is more important than you have made it sound. Melcombe (talk) 15:15, 9 September 2010 (UTC) I like that notion, but I do not want to exaggerate. When we only consider random variables, their distributions, operations (sum etc), all probability spaces are equally good. This is why elementary textbooks never mention standard prob. spaces. Their advantage appears only when we start dealing with regular conditional probabilities, and/or measure preserving transformations. However, these topics are more advanced than the "prob. space" article, devoted mostly to the discrete case. The name suggests? I did not choose the name "standard"; it is chosen by others, outside Wikipedia. It means only what it means. Boris Tsirelson (talk) 15:42, 9 September 2010 (UTC) That's why I leave it to those who have a proper understanding to decide these things. The immediate question was just, in the spirit of being helpful to readers, what pointers to other articles should this one contain. After all, part of the point of wikipedia is the interlinking between articles: "if you're interested in this, then you might be interested in that". Melcombe (talk) 09:03, 10 September 2010 (UTC) ## Weird sentence in introduction Can this sentence be rewritten? "If the outcome is the element of the elementary event of two pips on the first die and five on the second, then both of the events of "7 pips" and "odd number of pips" have also happened." The concepts "element" and "elementary event" weren't introduced before in the text. Wisapi (talk) 00:29, 13 September 2010 (UTC) ## Zero/One probability "A probability is a real number between zero (the event cannot happen in any trial) and one (the event must happen in every trial)." Is it true that if the probability is zero (one) the event can never (must always) happen? If X is drawn from a uniform distribution in [0,1], then P(X = 1/2) = 0 and P(X != 1/2) = 1, but X can be 1/2. Am I wrong? If I am, I think I might not be the only one. Could we explain it a bit. 71.232.61.24 (talk) 02:03, 6 May 2011 (UTC) Good catch. The text as written is simply wrong. If no one else gets there first I'll fix it (the right fix might need some thought). --Trovatore (talk) 02:06, 6 May 2011 (UTC) ## Conditional Probability Given The Empty Set The article states that conditional probabilities can only be defined using conditions that have non-zero probabilities. If A,B,C are sets in a probability space and B and C are disjoint, the value of P(A| B intersection C) is thus undefined. I think there are many practical situations where P(A| empty set) could be defined to be zero, as long as we are dealing with the equation P(A | W) P(W) = P(A intersection W) and not an equation involving division by W. If the set W is defined by scalar variables, I suppose this would be a case of filling-in the discontinuity of a function rather than extending the theory of probability spaces. However, this topic deserves attention in some article on probability theory and this articles seems as good a place as any. 
Tashiro (talk) 16:25, 10 June 2011 (UTC) ## Definition of the σ-algebra I don't think that the property "F contains the empty set: ∅∈F" is necessary for defining F, because: • From the complement rule: for any A∈F, we also have (Ω∖A)∈F • From the union rule: (A∪(Ω∖A))∈F so Ω∈F • From the complement rule again: Ω∈F so ((Ω∖Ω)=∅)∈F The fact that both ∅ and Ω belong to F could be additional corollaries of the two other properties. — Preceding unsigned comment added by 137.132.250.14 (talk) 03:56, 16 January 2012 (UTC) Yes, but then you need F to be not empty. Is it better? Boris Tsirelson (talk) 06:28, 16 January 2012 (UTC) ## Congratulations Oh..my..God; a statistics article on wikipedia which is understandable. This Friday is getting better and better. I'm going to indulge myself now, if you don't mind. — Preceding unsigned comment added by 145.18.213.196 (talk) 09:37, 1 March 2013 (UTC)
# \colorbox with Hebrew When using a \colorbox (or derivates such as framed's shadedbox) with Hebrew and pdflatex, the color stack seems to get confused: pdfTeX warning: pdflatex: pop empty color page stack 0 and the output is wrong (black background color). Is there a workaround for this issue? (I am aware that XeTeX/bidi works, but I want to know whether there is also some way in [pdf]latex) \documentclass{article} \usepackage{color} \usepackage[hebrew]{babel} \begin{document} \end{document} • Can you please explain how you are using Latex or Lyx In hebrew? I tried to fix the problem after I moved from Lyx 2.2 to lyx 2.3, but I could not find a solution. what font packages are you using? – Jneven yesterday Answering my own question: it seems that wrapping into \beginL...\endL seems to work: \documentclass{article} \usepackage{color}
# Why do the upper-atmosphere clouds of Venus appear to have that V shape?

If I understand correctly, the atmosphere moves in the same direction as the rotational spin, but about 60 times faster. It is driven from the hot side of Venus to the cold side (the difference being great due to the planet's slow rotation allowing time for it to heat up and cool down). Is my understanding correct? But why does the visible flow in the cloud tops describe a sideways V shape, with the flow appearing to move diagonally from the equator toward the poles? Is that also because the equator is hotter than the poles, driving the flow in that hotter-to-colder direction?

• Duplicate of question in Physics SE, physics.stackexchange.com/questions/522810/… – Bob516 Jan 3 at 13:04
• Yes, it was suggested by someone there that this question might better be directed to this group. – Bruce Jan 3 at 15:34
• If I remember correctly a user should have only one example of a question posted throughout the Stack Exchange. You might consider deleting the one in Physics. – Bob516 Jan 3 at 19:12
• To improve this question it might be helpful to post an image of the V-shape clouds you are referring to. – Bob516 Jan 3 at 19:13
• I left a message there also. Since you can't delete this copy because there's an answer there, you should probably delete the copy in Physics. The problem is answer fragmentation. So far there are no good ways to link up answers spread across different SE sites. Thanks! – uhoh Jan 4 at 2:57

This is supplementary to antispinward's excellent answer and provides additional sources and a visualization from the JAXA spacecraft Akatsuki. It has been shamelessly borrowed from Would it be possible to "ride the wave" on Venus? The recently published paper in Nature Geoscience Atmospheric mountain wave generation on Venus and its influence on the solid planet’s rotation rate has open-access links in Science News, Motherboard, and Science. I haven't found a preprint in arXiv yet. There is a stationary gravity wave in Venus's very dense atmosphere. It is intermittent, but has been detected several times. See the NYTimes article Venus Smiled, With a Mysterious Wave Across Its Atmosphere for a discussion of recent observations by JAXA. See Akatsuki and Happy Birthday, Akatsuki!, celebrating its first Venusian year at Venus.

above: "A sequence of images showing the stationary nature of the bow-shape wave above Venus when it was observed in December 2015. Planet-C" from NYTimes. credit: Planet-C/JAXA

above: "An illustration of how gravity waves travel up mountains and into Venus’s atmosphere. Credit ESA" From How Mountains Obscured by Venus’s Clouds Reveal Themselves.

• Thank you much! – Bruce Jan 31 at 10:28
• @Bruce any time! :-) – uhoh Jan 31 at 10:54
They compare the atmospheric wave on Venus to Kelvin waves on Earth: Even though the equatorial wave hereby deduced keeps similarities with the terrestrial Kelvin waves (it propagates along the west‐east direction and is equatorially trapped, its wave amplitude is maximum at the equator and decreases away from it), it also presents distinct properties that arise from the absence of the Coriolis factor on Venus and clearly set its different nature. In particular, the mechanism trapping the wave near the equator is different: for Kelvin waves on Earth the Coriolis force is responsible, but on a slowly-rotating planet like Venus this is not sufficient. While on the Earth, what traps atmospheric waves along the equator is the meridional variation of the Coriolis parameter $$f \approx \beta \cdot y$$ (with $$f = 2\Omega \cdot \sin \phi$$, $$\beta = df/dy$$, with $$\Omega$$ being Earth's angular rotation velocity and $$y$$ the meridional coordinate) [Sánchez‐Lavega, 2011], we find that on Venus this role is played by the centrifugal force through a centrifugal frequency [Peralta et al., 2014a, 2014b] $$\Psi = (u_0 \cdot \tan \phi)/a$$ (where $$\phi$$ is the latitude, $$u_0$$ is Venus background zonal wind, and $$a$$ is the planetary radius of Venus). The next question is why the feature appears in the ultraviolet. They suggest this is due to an ultraviolet absorber being drawn upwards from an undetermined depth in the atmosphere. This evident correlation supports previous works' interpretation of dark features as the result of upwelling of the ultraviolet absorber by vertical wind perturbations over a half cycle of the wave, while bright features are the result of downwelling of absorber‐depleted air over the other half cycle [Belton et al., 1976; del Genio and Rossow, 1990; Kouyama et al., 2012]. They go on to suggest that similar waves may exist on other slowly-rotating bodies like Titan, I'm not sure whether this has yet been determined. • Thank you. good answer. – Bruce Jan 4 at 20:54
# Normal Space is Regular Space ## Theorem Let $\struct {S, \tau}$ be a normal space. Then $\struct {S, \tau}$ is also a regular space. ## Proof Let $T = \struct {S, \tau}$ be a normal space. From Normal Space is $T_3$ Space, we have that $T$ is a $T_3$ space. We also have by definition of normal space that $T$ is a $T_1$ (Fréchet) space. From $T_1$ Space is $T_0$ Space we have that $T$ is a $T_0$ (Kolmogorov) space So $T$ is both a $T_3$ space and a $T_0$ (Kolmogorov) space. Hence $T$ is a regular space by definition. $\blacksquare$
# Inaccuracy in the speed of light

Tags:

1. Jul 9, 2014

### RyanXXVI

Imagine a system with a laser and a receiver with the ability to detect when light from the laser reaches it. There is also a console equidistant from both the receiver and the laser which sends a signal to each instrument, making the laser turn on and the receiver start a timer. The distance between the receiver and the laser is known and everything is stationary. When the receiver receives the light, the timer stops, then does a calculation to discover the speed of light. In that situation, the result would be completely accurate. However, now imagine a situation where the whole system was moving in one direction at a speed. This would skew the results. The true speed of light would be the calculated speed plus the speed of the system. Of course, this would be un-calculable if the speed of the system was unknown. Also, to any observer in this system, the system would be stationary. My point is, everyone on this planet moves at the speed at which the Earth moves, as well as at our own individual speeds. Would this not mean that our measurement of the speed of light is inaccurate? Unless one knew the true velocity of the Earth, it would be. Could someone please explain to me how I am wrong. I imagine I am because physicists much more intelligent than me have determined the speed of light. Ryan P.S. The alleged inaccuracy extends farther than just Earth and includes any measurements taken in our galaxy for the same reason.

2. Jul 9, 2014

### ModusPwnd

If you have the light return the way it came that problem would not exist. Note there is no "true velocity of the earth". Speed and velocity are relative. To us on the earth the speed of the earth is zero; to a craft whizzing by it's non-zero. In each case the speed of light would be observed to be the same speed. That is one of the wondrous behaviors of the universe.

3. Jul 9, 2014

### phinds

Light speed does not add to other speeds the way you are assuming it does. If I'm in a plane going 1000 mph and I fire a bullet at you at 100 mph, you see the bullet going 1,100 mph if you are in front of the plane and 900 mph if you are behind the plane. If I'm in the same plane and shoot a light beam at you at c, you see the beam arrive at c and that's true whether you are in front of the plane or behind it (or off to the side or whatever). That is one of the fundamental postulates of SR. Light travels at c in all inertial frames of reference. Last edited: Jul 9, 2014

4. Jul 9, 2014

### Staff: Mentor

For the general rule for "adding" velocities, see http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/einvel.html In the formula given on that page, let u' = c (for the velocity of the projectile e.g. a light pulse with respect to the "moving observer" B), and you will always get u = c (for the velocity of the projectile relative to the "stationary observer" A), regardless of the relative velocity v of A and B.

5. Jul 9, 2014

### parkner

It adds normally: c - v. For example: we send a signal to the space probe at distance d, and moving away with a speed v. What is a time of the signal journey to reach the probe? Of course, this is not d/c, but just d/(c-v) Thus the signal speed wrt the probe is c-v exactly, not c, what is very easy to verify.

6. Jul 9, 2014

### pervect

Staff Emeritus

This is true, but it's not related to the velocity addition formula. The velocities you are talking about are not actual velocities, but rates of closure in a particular coordinate system.
To relate rates of closure to actual velocities, if you have two objects, the relative velocity between them is equal to the rate of closure between the two objects as measured in a coordinate system where one of the objects is at rest. If you specify the relative velocity between A and B as u, and the relative velocity between B and C as v, the relative velocity between A and C is given by the velocity addition formula. Let A be a light beam, and B and C be two observers. Note that in this case A doesn't have a frame of reference in relativity, but B and C do. Then the relative velocity between A and B is c (we measure this in a frame where B is at rest). The relative velocity between B and C is v (we can measure this either in a frame where B is at rest or C is at rest. The two results will be equal except for the necessary sign inversion). Finally, the relative velocity between A and C is still equal to c (we measure this in a frame where C is at rest) because the velocity addition formula in special relativity is not linear. The exact nature of the velocity addition formula in relativity is a consequence of the Lorentz transform. The Lorentz transform describes the mathematical details of how one changes frames of reference in relativity (i.e. from B to C in this example). It is different from the pre-relativity Galilean transform. The exact mathematical details can be found in textbooks and on the web; at this point I am only trying to say that relativity is different from pre-relativity in this respect (in how one changes frames), and that the OP has made assumptions that are not true in relativity, presumably due to unfamiliarity with the theory. The solution is to become familiar with the theory. Because "changing frames of reference" is an abstract concept, I have chosen to focus on velocity addition as a less-abstract consequence; hopefully it's easier to understand the issue this way. Last edited: Jul 9, 2014

7. Jul 9, 2014

8. Jul 9, 2014

### sophiecentaur

You would need to specify who is measuring this time interval, in what frame.

9. Jul 10, 2014

### parkner

I can't agree with this interpretation. The relative velocity is just the 'rate of closure': v = dx/dt, i.e. it's a relative distance change. And in a general case: v = v_r + v_t, where: v_t = wr = df/dt r so there is possibly some additional tangential displacement: an angular position change x distance. You are talking rather about something that can be called a 'relativistic velocity', i.e. some model-dependent abstract term, not an 'actual velocity'.

10. Jul 10, 2014

### Staff: Mentor

No, pervect's terminology is the common one. Relative velocity is specifically the velocity of one object in the rest frame of the other. http://en.wikipedia.org/wiki/Relative_velocity

11. Jul 10, 2014

### phinds

It is not an interpretation, it is a fundamental fact of Special Relativity and has been demonstrated empirically.

12. Jul 10, 2014

### Maxila

First to be clear of my meaning below, I understand this, empirical evidence supports this, and I fully agree with it. With that said, we know B has a relative perspective of length and time compared to C, therefore the light beam (A) and the constant c must also be proportional and relative to each view of x/t (where x is length and t is time). This is not questioning c as a constant for each frame, rather it is an observation of the constant being relative to each view of space and time, just as their space (Euclidean) and time are relative to each other.
In other words, as views of space and time change so must c exactly, in order to be constant for the change in space and time. For example this problem done by GSU physics dept. to explain the Muon experiment and relativity http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/muon.html the comparison of frames shows what we already know, x does not equal x' nor does t = t'. We also know x/t = c = x'/t', but common to both is a ratio of their x/t as a constant, not the specific and relative values. Again to be clear, this is not an argument against any evidence or facts shown, merely an observation that in order for the constant c to remain constant for all, it is also relative to all frames' view of space and time (x and t).

13. Jul 10, 2014

### pervect

Staff Emeritus

I won't argue overmuch with any approach that gets identical answers, but I'll suggest that philosophically it's simpler to think of space and time as not changing, but that the description of them via coordinates changes when one chooses a frame of reference. This goes along with the idea that coordinates are labels without direct physical significance that I've mentioned in other recent threads. Aside from being (IMO) much simpler, this is usually the manner in which relativity is explained. With this viewpoint, space and time as abstract entities don't change when you change observers. The coordinates DO change, but the underlying entities don't. The manner in which the coordinates change in relativity is given by the Lorentz transform, which relates the coordinates for one observer to the coordinates for the second.

14. Jul 10, 2014

### Maxila

Personally I'd modify that a bit in saying coordinates are labels that only have direct physical significance to the frame they are assigned. The thought I brought up came from working with and trying to see if Lorentz transformations and Lorentz covariance (ds^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2) relate with the empirical existence and evidence of time (it always describes a change in distance indistinguishable from speed, if one looks to where the direct reference to its change is assigned).

15. Jul 10, 2014

### parkner

I wish to remind only: relative velocity is a coordinate independent thing. This is just a relation of the two bodies - a distance derivative. In SR this is also preserved: there is the same v in both frames: in the stationary and in the moving. In a one dimensional case: v > 0 means a distance grows, and v < 0 it decreases - exactly the same on both sides, i.e. never v in one frame and -v in the other!

16. Jul 10, 2014

### A.T.

relative = coordinate dependent

17. Jul 10, 2014

### Staff: Mentor

You are thinking of relative speed, not relative velocity.

18. Jul 11, 2014

### parkner

No, because a speed is just |v|, which can't be a negative number.

19. Jul 11, 2014

### Staff: Mentor

No, there isn't. If the velocity of frame B relative to frame A is $v$, then the velocity of frame A relative to frame B is $- v$. Look at the math for Lorentz transformations. The transformation that goes from frame B to frame A is the inverse of the transformation that goes from frame A to frame B, and the transformation with $- v$ is the inverse of the transformation with $v$. (We're talking about what you call the "one dimensional case" here--pure boosts along a single direction, with no spatial rotation.) Not when you're doing Lorentz transformations. See above.

20. Jul 11, 2014

### Staff: Mentor

If you were not thinking of relative speed then your claims were wrong.
That is not how relative velocity transforms.
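As a small numerical supplement (not part of the thread), the relativistic velocity-addition formula referenced in post #4, u = (u' + v) / (1 + u'v/c^2), can be checked directly; units with c = 1 are assumed for simplicity.

c = 1.0

def add_velocities(u_prime, v):
    """Combine a velocity u' measured in the moving frame with the frame velocity v."""
    return (u_prime + v) / (1.0 + u_prime * v / c**2)

print(add_velocities(0.5 * c, 0.5 * c))  # 0.8c, not the Galilean 1.0c
print(add_velocities(c, 0.9 * c))        # exactly c: light speed is the same in every inertial frame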
# Definition:Cofactor

## Definition

Let $R$ be a commutative ring with unity.

Let $\mathbf A \in R^{n \times n}$ be a square matrix of order $n$.

Let:

$D = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{vmatrix}$

be a determinant of order $n$.

### Cofactor of an Element

Let $a_{rs}$ be an element of $D$.

Let $D_{rs}$ be the determinant of order $n-1$ obtained from $D$ by deleting row $r$ and column $s$.

Then the cofactor $A_{rs}$ of the element $a_{rs}$ is defined as:

$A_{rs} := \paren {-1}^{r + s} D_{rs}$

### Cofactor of a Minor

Let $D \left({r_1, r_2, \ldots, r_k \mid s_1, s_2, \ldots, s_k}\right)$ be an order-$k$ minor of $D$.

Then the cofactor of $D \left({r_1, r_2, \ldots, r_k \mid s_1, s_2, \ldots, s_k}\right)$ can be denoted:

$\tilde D \left({r_1, r_2, \ldots, r_k \mid s_1, s_2, \ldots, s_k}\right)$

and is defined as:

$\tilde D \left({r_1, r_2, \ldots, r_k \mid s_1, s_2, \ldots, s_k}\right) = \left({-1}\right)^t D \left({r_{k+1}, r_{k+2}, \ldots, r_n \mid s_{k+1}, s_{k+2}, \ldots, s_n}\right)$

where:

$t = r_1 + r_2 + \ldots + r_k + s_1 + s_2 + \ldots + s_k$

$r_{k+1}, r_{k+2}, \ldots, r_n$ are the numbers in $1, 2, \ldots, n$ not in $\left\{{r_1, r_2, \ldots, r_k}\right\}$

$s_{k+1}, s_{k+2}, \ldots, s_n$ are the numbers in $1, 2, \ldots, n$ not in $\left\{{s_1, s_2, \ldots, s_k}\right\}$

That is, the cofactor of a minor is the determinant formed from the rows and columns not in that minor, multiplied by the appropriate sign.

When $k = 1$, this reduces to the cofactor of an element (as above). When $k = n$, the "minor" is in fact the whole determinant. For convenience its cofactor is defined as being $1$.

Note that the cofactor of the cofactor of a minor is the minor itself (multiplied by the appropriate sign).

## Examples

Let $D$ be the determinant defined as:

$D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{vmatrix}$

Then the cofactor of $a_{2 1}$ is defined as:

$\begin{aligned} A_{21} &= \left({-1}\right)^3 D_{21} \\ &= \left({-1}\right)^3 \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} \\ &= -1 \left({a_{12} a_{33} - a_{13} a_{32} }\right) \\ &= a_{13} a_{32} - a_{12} a_{33} \end{aligned}$

Let:

$D = \begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \\ \end{vmatrix}$

Let $D \left({2, 3 \mid 2, 4}\right)$ be an order-$2$ minor of $D$.

Then the cofactor of $D \left({2, 3 \mid 2, 4}\right)$ is given by:

$\begin{aligned} \tilde D \left({2, 3 \mid 2, 4}\right) &= \left({-1}\right)^{2 + 3 + 2 + 4} D \left({1, 4 \mid 1, 3}\right) \\ &= \left({-1}\right)^{11} \begin{vmatrix} a_{11} & a_{13} \\ a_{41} & a_{43} \\ \end{vmatrix} \\ &= - \left({a_{11} a_{43} - a_{41} a_{13} }\right) \\ &= a_{41} a_{13} - a_{11} a_{43} \end{aligned}$
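As a numerical illustration (not part of the definition above), the cofactor of an element can be computed directly with NumPy by deleting the relevant row and column, taking the determinant of what remains, and applying the sign; the matrix below is an arbitrary example.

import numpy as np

def cofactor(A, r, s):
    """Cofactor A_rs of element a_rs, with 0-based indices r and s."""
    minor = np.delete(np.delete(A, r, axis=0), s, axis=1)
    return (-1) ** (r + s) * np.linalg.det(minor)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# 0-based (1, 0) corresponds to the element a_21 in the 1-based notation above;
# the result matches the pattern a_13 a_32 - a_12 a_33 = 3*8 - 2*10 = 4.
print(cofactor(A, 1, 0))  # 4.0 (up to floating-point rounding)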
## Cocompact lattices of minimal covolume in rank 2 Kac-Moody groups, Part II ### Inna (Korchagina) Capdeboscq and Anne Thomas #### Abstract Let $G$ be a topological Kac-Moody group of rank 2 with symmetric Cartan matrix, defined over a finite field $F_q$. An example is $G = \mathrm{SL}(2,F_q((t^{-1})))$. We determine a positive lower bound on the covolumes of cocompact lattices in $G$, and construct a cocompact lattice $\Gamma_0 < G$ which realises this minimum. This completes the work begun in Part I, which considered the cases when $G$ admits an edge-transitive lattice. This paper is available as a pdf (364kB) file. Wednesday, September 22, 2010
# Twisty's Tesseract from Twisty's Mind come Twisted Products ## Math Limerick - Posted in Math by A dozen, a gross, and a score plus three times the square root of four divided by seven plus five times eleven is nine squared and not a bit more $$\frac{12+144+20+3\sqrt{4}}{7}+5\times11=9^2+0$$ ~
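And, just for fun, a quick check of the limerick's arithmetic (not part of the original post):

from math import sqrt

left  = (12 + 144 + 20 + 3 * sqrt(4)) / 7 + 5 * 11
right = 9 ** 2 + 0
print(left, right)   # 81.0 81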
# How to properly calculate the average across multiple correlations?

I'm trying to obtain an average across 3 correlations. Using Python, I obtain these correlations with:

corr = df.apply(lambda s: df.corrwith(s))

which outputs:

          A         B         C
A  1.000000  0.057896 -0.159932
B  0.057896  1.000000  0.581226
C -0.159932  0.581226  1.000000

The lower triangle (and diagonal) of the array is then masked, leaving only the upper-triangle correlations:

corr.values[np.tril_indices(len(corr))] = np.nan

Now here is where I'd need your help. I'm aware that an arithmetic mean of corr would be the incorrect approach. From this post, there seems to be some preference for "transform each correlation coefficient using Fisher's Z, calculate the mean of the z values, then back-transform to the correlation coefficient". I'm doing this as follows:

mean_z = np.nanmean(np.arctanh(corr).values)
mean_corr = np.tanh(mean_z)

Is this approach something you agree with and is it correctly implemented? The goal is to obtain an average correlation across a portfolio.

• Your calculation seems correct, I don't see any obvious errors. If your sample size is large I would not even care about that transformation, to be honest. I understand you want to eliminate the bias in the correlation estimator? Please note that the sample correlation estimator is biased downward, but the Fisher transformation biases the estimator upwards. I would suggest using another method to correct the bias; the Olkin and Pratt method is superior to Fisher's. link Bear in mind these methods are only valid for a normal pdf! – emot Aug 2 at 13:53
• Thanks so much. Yes, one would need to assume a normal distribution, which is complicated in a financial timeseries. I've had "divide by 0" errors on occasion when doing Fisher, so may need to revert to the arithmetic average or other methods. The sample sizes for all ts correlations are all equal and n=90. Happy to hear further thoughts, and pls feel free to post your comment as an answer. Aug 2 at 14:34

The problem with the sample correlation estimator defined as:

$$r_{sample} =\frac{\sum\left(X_i - \bar{X}\right)\left(Y_i - \bar{Y}\right)}{\sqrt{\sum\left(X_i-\bar{X}\right)^2\sum\left(Y_i-\bar{Y}\right)^2}}$$

is that it is biased. The bias is in fact downward, i.e. $$r_{sample}$$ tends to be lower than the population $$\rho$$. Therefore when we average biased estimators we keep the bias. Olkin and Pratt (1958) suggested an unbiased estimator for the correlation coefficient:

$$r_{corrected}=r_{sample}\left(1+\frac{1-r_{sample}^2}{2(n-3)}\right)$$

which is very accurate and superior to Fisher's (which biases the estimator upwards), according to link. For sample size $$n=90$$ we see that the correction is really small and you can safely ignore the bias and average the correlations without correction.

Some people claim that you should not calculate a mean correlation across different pairs of assets. I tend to disagree with that. Below I present two reasonings.

Average correlation for the portfolio

If you want to calculate the average correlation for the portfolio then you should take into account the portfolio weights.
Tierens and Anadu (2004) link propose a method to calculate the average correlation for a portfolio: $$p_{av}=\frac{2\sum_{i=1}^{N}\sum_{j>i}^{N}w_i w_j p_{i,j}}{1-\sum_{i=1}^{N}w_i^2}$$

This average correlation has a really nice interpretation. If we have two linear portfolios,

• one with identical asset variances and identical correlation between all pairs $$i, j$$ of assets equal to $$p_{av}$$,

• a second with identical asset variances but different correlations between pairs $$i, j$$ of assets equal to $$p_{i,j}$$,

then the variances of both portfolios are equal and their VaRs are equal as well. From this it is immediate that when the average correlation decreases, the portfolio variance/risk decreases as well. Therefore average correlation provides useful information.

Measure of similarity of two correlation matrices

We can calculate the distance between two correlation matrices and compare how similar they are link. The distance metric is: $$d = 1 - \frac{\text{tr}(R_1 \cdot R_2)}{\|R_1\| \cdot \|R_2\|},$$ where $$R_1$$ and $$R_2$$ are two correlation matrices and the norm is the Frobenius norm. This metric takes values from 0 (identical matrices) to 1. We can compare any correlation matrices with that metric. But it turns out that if we constrain ourselves to a single scalar, then the simple mean of all correlations minimizes the distance $$d$$! That is, the $$R_2$$ with off-diagonal entries equal to $$p_{av-equal}$$ is most similar to the original matrix $$R_1$$, where $$p_{av-equal}=\frac{\sum_{i=1}^{N}\sum_{j>i}^{N}p_{i,j}}{N(N-1)/2}$$ is the simple mean of the off-diagonal entries.

• Beautifully explained, thanks a million Aug 3 at 14:37
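To make the two averaging recipes discussed in this thread concrete, here is a minimal Python sketch (mine, not from the original posts). It averages the off-diagonal correlations three ways: plain arithmetic mean, Fisher-z back-transformed mean, and Olkin–Pratt-corrected mean, with the sample size n = 90 mentioned in the comments assumed for every pair.

```python
import numpy as np

# Example correlation matrix from the question
corr = np.array([
    [ 1.000000,  0.057896, -0.159932],
    [ 0.057896,  1.000000,  0.581226],
    [-0.159932,  0.581226,  1.000000],
])

n = 90                                   # assumed sample size behind each pairwise correlation
iu = np.triu_indices_from(corr, k=1)     # off-diagonal (upper-triangle) entries
r = corr[iu]

# 1) plain arithmetic mean of the correlations
mean_plain = r.mean()

# 2) Fisher-z average: z = arctanh(r), average the z values, back-transform with tanh
mean_fisher = np.tanh(np.arctanh(r).mean())

# 3) Olkin-Pratt approximate correction applied to each r before averaging
r_op = r * (1 + (1 - r**2) / (2 * (n - 3)))
mean_olkin_pratt = r_op.mean()

print(f"arithmetic  : {mean_plain:.4f}")
print(f"Fisher-z    : {mean_fisher:.4f}")
print(f"Olkin-Pratt : {mean_olkin_pratt:.4f}")
```

With n = 90 the Olkin–Pratt correction barely moves the arithmetic mean, which is the point made in the answer; the Fisher-z average comes out a couple of hundredths higher, consistent with the remark that the Fisher transformation pushes the estimate upwards.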
Q: Compare the 95% and 99% confidence intervals for the hours of sleep a student gets. Explain the difference between these intervals and why this difference occurs. The comparison is to be made for the mean hours of sleep a student gets.

Confidence intervals are used as estimation intervals for the true population mean. A sample of values is used to arrive at the mean number of sleeping hours, which is called the point estimate. This point estimate is then used to calculate the confidence interval, given by:

Lower Limit: $$\bar{x} - \frac{Z\sigma}{\sqrt{n}}$$

Upper Limit: $$\bar{x} + \frac{Z\sigma}{\sqrt{n}}$$

where $$\bar{x}$$ is the point estimate, $$n$$ is the sample size and $$\sigma$$ is the standard deviation. The only quantity that changes between the two intervals is the critical value $$Z$$: about 1.96 for 95% confidence and about 2.576 for 99% confidence, so the 99% interval is wider than the 95% interval. The extra width is the price paid for the higher confidence that the interval covers the true mean.
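A minimal Python illustration of the point above; the sample mean, standard deviation, and sample size for hours of sleep are made-up numbers used only for the example:

```python
import numpy as np
from scipy import stats

xbar, sigma, n = 7.1, 1.3, 50        # hypothetical sample mean, SD, and sample size

for conf in (0.95, 0.99):
    z = stats.norm.ppf(1 - (1 - conf) / 2)   # about 1.96 for 95%, 2.576 for 99%
    half_width = z * sigma / np.sqrt(n)
    print(f"{conf:.0%} CI: {xbar - half_width:.2f} to {xbar + half_width:.2f} "
          f"(width {2 * half_width:.2f})")
```

The 99% interval comes out wider than the 95% one purely because of the larger critical value.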
# VertexLabels with Graph Properties Suppose I have a graph like this Graph[ {1 <-> 2, 2 <-> 3, Labeled[3 <-> 1, "hello"]}, VertexLabels -> Placed["Name",StatusArea] ] Now I want to add more properties to all the nodes. For instance, I want to replace name of node 1 by number 3700, node 2 by 3701, node3 by 3703 and those should be displayed only in the status area. Along with replacing the node names, I also want some more properties associated with nodes. For instance, I'd like 3700, "h1" to be displayed in the status area when I place my mouse pointer at node 1; at node 2, it should display 3700, "h2" etc. (not exactly those but some other display stuff). How can I do it? Set a graph with properties: g = Graph[{Property[1, "Custom" -> {3700, "h1"}], Property[2, "Custom" -> {3700, "h2"}], Property[3, "Custom" -> {3700, "h2"}]}, {1 <-> 2, 2 <-> 3, 3 <-> 1}]; Define labels: labels = # -> Placed[ToString[PropertyValue[{g, #}, "Custom"], InputForm], StatusArea] & /@ VertexList[g]; Draw graph: SetProperty[g, VertexLabels -> labels] • Hi @halmir, thanks for the answer. But I have a graph which has a very large number of nodes. So I cannot really write the property for every vertex. Is there any way I can generalize the code for all of them? – no-one Jan 26 '16 at 20:21 • Yes, you can. You can just map properties over you vertices. What do you have? – halmir Jan 27 '16 at 0:24
I've noticed that I'm close to my 163rd toot, so I'll toot something very cool about this number. $$163$$ is the largest of the nine Heegner numbers, and I'll explain what those are as I understand it. When Gauss studied sums of two squares, he factorised them using complex numbers: $$a^2+b^2=(a+ib)(a-ib)$$. This leads to number systems where you adjoin $$\sqrt{-d}$$ to the whole numbers, and the question is whether they keep the property of unique factorisation, meaning that any number can be written as a product of primes in essentially one way. It works for $$\sqrt{-1}$$ (the Gaussian integers), but it can fail: for example $$6=2\cdot 3$$ is also $$6=(1+\sqrt{-5})(1-\sqrt{-5})$$, so with $$5$$ you lose unique factorisation. The Heegner numbers are the nine values of $$d$$ — 1, 2, 3, 7, 11, 19, 43, 67, 163 — for which unique factorisation does survive (more precisely, for which the field $$\mathbb{Q}(\sqrt{-d})$$ has class number 1), and $$163$$ is proven to be the largest of them. Also $$163$$ gives an almost-whole number in $$e^{\pi\sqrt{163}}=262537412640768743.99999999999925 ...$$, the so-called Ramanujan constant. It has some more properties, like being one of the "lucky" and "fortunate" numbers in maths, and it gives good approximations of $$e$$ and $$\pi$$. Pretty stacked-up number, if you ask me.
# Example: Bijective undo/redo operations

Many user interfaces can be thought of as a collection of functions that transform a document into another document. For example, consider the set of all plain text files. Pressing the 'x' key in a text editor causes an 'x' to be inserted; you can think of this as applying a function that takes the document without the 'x' and outputs the document with the 'x'. If you want to provide an undo/redo capability, then the functions should be bijections. If they are, then their two-sided inverses give the undo operations. Sometimes you can make a function into a bijection by adding an "undo log": by expanding the set you can keep enough history to implement an undo function.
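As a rough illustration of the idea (not part of the original example), here is a minimal Python sketch in which each edit is a pair of mutually inverse functions, and undo/redo simply replays the inverses from a history stack:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Edit:
    do:   Callable[[str], str]   # forward function on documents
    undo: Callable[[str], str]   # its two-sided inverse

def insert_at(pos: int, ch: str) -> Edit:
    """Inserting a character maps a document without it to the document with it."""
    return Edit(
        do=lambda doc: doc[:pos] + ch + doc[pos:],
        undo=lambda doc: doc[:pos] + doc[pos + 1:],
    )

class Editor:
    def __init__(self, doc: str = ""):
        self.doc, self.done, self.undone = doc, [], []   # the history is the "undo log"

    def apply(self, edit: Edit):
        self.doc = edit.do(self.doc)
        self.done.append(edit)
        self.undone.clear()

    def undo(self):
        if self.done:
            edit = self.done.pop()
            self.doc = edit.undo(self.doc)
            self.undone.append(edit)

    def redo(self):
        if self.undone:
            edit = self.undone.pop()
            self.doc = edit.do(self.doc)
            self.done.append(edit)

ed = Editor("helo")
ed.apply(insert_at(3, "l"))   # "hello"
ed.undo()                     # back to "helo"
ed.redo()                     # "hello" again
print(ed.doc)
```

Keeping the stack of applied edits is exactly the "undo log" trick mentioned above: the editor state is expanded to (document, history), which is what makes replaying the inverses possible.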
# NAG Library Routine Document: D03NEF

Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1  Purpose

D03NEF computes average values of a continuous function of time over the remaining life of an option. It is used together with D03NDF to value options with time-dependent parameters.

## 2  Specification

SUBROUTINE D03NEF ( T0, TMAT, NTD, TD, PHID, PHIAV, WORK, LWORK, IFAIL)
INTEGER NTD, LWORK, IFAIL
REAL (KIND=nag_wp) T0, TMAT, TD(NTD), PHID(NTD), PHIAV(3), WORK(LWORK)

## 3  Description

D03NEF computes the quantities
$$\varphi(t_0), \qquad \hat{\varphi}=\frac{1}{T-t_0}\int_{t_0}^{T}\varphi(\zeta)\,d\zeta, \qquad \bar{\varphi}=\left[\frac{1}{T-t_0}\int_{t_0}^{T}\varphi^{2}(\zeta)\,d\zeta\right]^{1/2}$$
from a given set of values PHID of a continuous time-dependent function $\varphi\left(t\right)$ at a set of discrete points TD, where $t_0$ is the current time and $T$ is the maturity time. Thus $\hat{\varphi}$ and $\bar{\varphi}$ are first and second order averages of $\varphi$ over the remaining life of an option.

The routine may be used in conjunction with D03NDF in order to value an option in the case where the risk-free interest rate $r$, the continuous dividend $q$, or the stock volatility $\sigma$ is time-dependent and is described by values at a set of discrete times (see Section 8.2). This is illustrated in Section 9.

## 4  References

None.

## 5  Parameters

1:     T0 – REAL (KIND=nag_wp)  Input
On entry: the current time ${t}_{0}$.
Constraint: ${\mathbf{TD}}\left(1\right)\le {\mathbf{T0}}\le {\mathbf{TD}}\left({\mathbf{NTD}}\right)$.

2:     TMAT – REAL (KIND=nag_wp)  Input
On entry: the maturity time $T$.
Constraint: ${\mathbf{TD}}\left(1\right)\le {\mathbf{TMAT}}\le {\mathbf{TD}}\left({\mathbf{NTD}}\right)$.

3:     NTD – INTEGER  Input
On entry: the number of discrete times at which $\varphi$ is given.
Constraint: ${\mathbf{NTD}}\ge 2$.

4:     TD(NTD) – REAL (KIND=nag_wp) array  Input
On entry: the discrete times at which $\varphi$ is specified.
Constraint: ${\mathbf{TD}}\left(1\right)<{\mathbf{TD}}\left(2\right)<\cdots <{\mathbf{TD}}\left({\mathbf{NTD}}\right)$.

5:     PHID(NTD) – REAL (KIND=nag_wp) array  Input
On entry: ${\mathbf{PHID}}\left(i\right)$ must contain the value of $\varphi$ at time ${\mathbf{TD}}\left(i\right)$, for $i=1,2,\dots ,{\mathbf{NTD}}$.

6:     PHIAV($3$) – REAL (KIND=nag_wp) array  Output
On exit: ${\mathbf{PHIAV}}\left(1\right)$ contains the value of $\varphi$ interpolated to $t_0$, ${\mathbf{PHIAV}}\left(2\right)$ contains the first-order average $\hat{\varphi}$ and ${\mathbf{PHIAV}}\left(3\right)$ contains the second-order average $\bar{\varphi}$, where:
$$\hat{\varphi}=\frac{1}{T-t_0}\int_{t_0}^{T}\varphi(\zeta)\,d\zeta, \qquad \bar{\varphi}=\left[\frac{1}{T-t_0}\int_{t_0}^{T}\varphi^{2}(\zeta)\,d\zeta\right]^{1/2}.$$

7:     WORK(LWORK) – REAL (KIND=nag_wp) array  Workspace

8:     LWORK – INTEGER  Input
On entry: the dimension of the array WORK as declared in the (sub)program from which D03NEF is called.
Constraint: ${\mathbf{LWORK}}\ge 9\times {\mathbf{NTD}}+24$.

9:     IFAIL – INTEGER  Input/Output
On entry: IFAIL must be set to $0$, $-1$ or $1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$.
When the value $-1$ or $1$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6  Error Indicators and Warnings

If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF).

Errors or warnings detected by the routine:

${\mathbf{IFAIL}}=1$
On entry, T0 lies outside the range [${\mathbf{TD}}\left(1\right),{\mathbf{TD}}\left({\mathbf{NTD}}\right)$], or TMAT lies outside the range [${\mathbf{TD}}\left(1\right),{\mathbf{TD}}\left({\mathbf{NTD}}\right)$], or ${\mathbf{NTD}}<2$, or TD badly ordered, or ${\mathbf{LWORK}}<9\times {\mathbf{NTD}}+24$.

${\mathbf{IFAIL}}=2$
Unexpected failure in internal call to E01BAF or E02BBF.

## 7  Accuracy

If $\varphi \in {C}^{4}\left[{t}_{0},T\right]$ then the error in the approximation of $\varphi\left({t}_{0}\right)$ and $\hat{\varphi}$ is $O\left({H}^{4}\right)$, where $H=\max_{i}\left({\mathbf{TD}}\left(i+1\right)-{\mathbf{TD}}\left(i\right)\right)$, for $i=1,2,\dots ,{\mathbf{NTD}}-1$. The approximation is exact for polynomials of degree up to $3$. The third quantity $\bar{\varphi}$ is $O\left({H}^{2}\right)$, and exact for linear functions.

## 8  Further Comments

### 8.1  Timing

The time taken is proportional to NTD.

### 8.2  Use with D03NDF

Suppose you wish to evaluate the analytic solution of the Black–Scholes equation in the case when the risk-free interest rate $r$ is a known function of time, and is represented as a set of values at discrete times. A call to D03NEF providing these values in PHID produces an output array PHIAV suitable for use as the argument R in a subsequent call to D03NDF. Time-dependent values of the continuous dividend $q$ and the volatility $\sigma$ may be handled in the same way.

### 8.3  Algorithmic Details

The NTD data points are fitted with a cubic B-spline using the routine E01BAF. Evaluation is then performed using E02BBF, and the definite integrals are computed using direct integration of the cubic splines in each interval. The special case of $T=t_0$ is handled by interpolating $\varphi$ at that point.

## 9  Example

This example demonstrates the use of the routine in conjunction with D03NDF to solve the Black–Scholes equation for valuation of a $5$-month American call option on a non-dividend-paying stock with an exercise price of \$50. The risk-free interest rate varies linearly with time and the stock volatility has a quadratic variation. Since these functions are integrated exactly by D03NEF the solution of the Black–Scholes equation by D03NDF is also exact. The option is valued at a range of times and stock prices.

### 9.1  Program Text

Program Text (d03nefe.f90)

### 9.2  Program Data

Program Data (d03nefe.d)

### 9.3  Program Results

Program Results (d03nefe.r)
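For readers without access to the NAG Library, the quantities above are easy to approximate with SciPy. The following sketch is not the NAG implementation (it uses an interpolating cubic spline and numerical quadrature rather than the exact B-spline integration described in Section 8.3), but it computes the same three outputs, $\varphi(t_0)$, $\hat{\varphi}$ and $\bar{\varphi}$, from tabulated values:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def phi_averages(t0, tmat, td, phid):
    """Return (phi(t0), first-order average, second-order average) over [t0, tmat]."""
    spline = CubicSpline(td, phid)            # cubic spline through the tabulated values
    phi_t0 = float(spline(t0))
    if tmat == t0:                            # degenerate case: just interpolate
        return phi_t0, phi_t0, abs(phi_t0)
    width = tmat - t0
    phi_hat = spline.integrate(t0, tmat) / width
    # integrate phi^2 numerically on a fine grid for the second-order average
    grid = np.linspace(t0, tmat, 2001)
    phi_bar = np.sqrt(np.trapz(spline(grid) ** 2, grid) / width)
    return phi_t0, phi_hat, phi_bar

# example: an interest rate varying linearly with time, tabulated at 11 points
td = np.linspace(0.0, 1.0, 11)
phid = 0.05 + 0.02 * td
print(phi_averages(t0=0.25, tmat=1.0, td=td, phid=phid))
```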
# A farmer has a field that measures 1000 ft wide by 2000 ft long

**Manager (ruturaj):** A farmer has a field that measures 1000 ft wide by 2000 ft long. There is an untillable strip 20 ft wide on the inside edge of the field, and a 30 ft wide untillable strip bisects the field into two squares (approximate). Approximately what percentage of the field is tillable?

A. 98%
B. 93%
C. 91%
D. 90%
E. 88%

**CEO (walker):** Actually you can solve the problem pretty fast by using the following approach:

1. One shorter inside strip with width of 20 ft takes 20/2000 = 1% of the field.
2. There are 2 short strips, 2 long strips (each twice as long as a short one) and one short but wider strip that equals 30/20 = 1.5 short strips.
3. Approximately we have 2 + 2*2 + 1.5 = 7.5 short strips --> ~7.5% untillable, or ~92.5% tillable.
4. As we didn't take into account the overlaps between strips, the true figure will be slightly higher than 92.5%.

Or you can use calculations, but I think it will take more time:
$$\frac{2\times(1000-2\times 20)\times(1000-20-\frac{30}{2})}{1000\times 2000} = 0.9264$$

**Manager:** Good method... fast and quick.

**Manager (ruturaj):** Can you please explain the same with the help of a diagram?

**CEO (walker):** Attachment: 114310.png

**Manager:** 960*965*2/(1000*2000) = 93.04%. Answer B.

**Current Student:** okai .. its C
rectangle of 1000*2000
1000 - 2*20 = 960
(2000 - 2*20 - 30)/2 = 965
2 squares = 2*965*960/(1000*2000) = 93% approx.

**Math Forum Moderator (fluke):**
Total Area = 1000*2000
Tillable square's side horizontally = (2000-20-30-20)/2 = 1930/2 = 965
Tillable square's side vertically = (1000-20-20) = 960

Consider it as 960:
$$\frac{2\times 960\times 960}{1000\times 2000}\times 100=\frac{2\times 0.96\times 0.96}{2}\times 100=(0.96)^2\times 100=92.16 \approx 93\%$$

Why approximated to 93 and not 91? Because we shortened one side from 965 to 960; thus, in reality the squares are bigger.

Ans: "B"

By the way, I looked up tillable after solving. tillable: arable, cultivable, cultivatable
Attachment: tillable_field.PNG

**Manager (WarLocK):** This was quite smart, fluke. Kudos from me.

**Manager:** Guys, seriously... If I got this question on the actual exam I would seriously start crying or something. It took me 30 minutes just to understand what the question is about.

**Manager:** I just summed the differences: 20*1000 + 20*1980 + 20*980 + 20*1960 will be the frame. The bisector will be 30*960 (which can be treated as 20*960 for approximation, remembering that the rounding can cost only about 1/2%). Then 20*(1980+1960+1000+980+2*960)/(2000*1000) — the rest is simple, and I got about 7%, which must be subtracted from 100. Must admit it took me 2+ minutes to solve.

**Manager:** Shalom! Please tell me that this type of question is not in the 600 to 700 level question range on the GMAT.

**Manager:** Lol, I was REALLY thinking the same thing! I'm afraid it is.

**Manager:** Shalom! I have a question. If I can expect to see this type of question in the 600 to 700 range, then how do I prepare to calculate the answer without the use of a calculator?
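As a quick sanity check of the approximations traded back and forth above, the exact tillable fraction can be computed directly (a throwaway Python snippet, not from the thread):

```python
# exact tillable area: two 960 x 965 rectangles inside the 1000 x 2000 field
field = 1000 * 2000
tillable = 2 * (1000 - 2 * 20) * ((2000 - 2 * 20 - 30) / 2)
print(f"{100 * tillable / field:.2f}%")   # 92.64%, so the closest answer choice is 93%
```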
# Limits of square root [closed] $$\lim_{x\to\infty}\left(\sqrt{x+\sqrt{x+\sqrt{x + \sqrt x} }}-\sqrt x\right)$$ (original screenshot) - ## closed as off-topic by Stefan Smith, Brian Rushton, Shuchang, Alex Wertheim, Dan RustDec 24 '13 at 0:25 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Stefan Smith, Brian Rushton, Shuchang, Alex Wertheim, Dan Rust If this question can be reworded to fit the rules in the help center, please edit the question. Questions regarding homework assignments are more than welcome, provided that they: Briefly explain the problem you are trying to solve—do not post your entire assignment verbatim. Explain what you tried and where you're stuck (showing your work is a good idea). Don't ask for complete solutions to the problem—we're not here to do your homework for you. – Fly by Night Dec 23 '13 at 21:45 Here's a comparatively clean way to do it: $$\sqrt{x+\sqrt x}-\sqrt x\le\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x}}}}-\sqrt{x}\le\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}-\sqrt{x}$$ Now, let $u=\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$. Then $$u^2=x+u\implies u=\frac{1+\sqrt{1+4x}}{2}=\frac12+\sqrt{\frac14+x}$$ (Note that $u$ is strictly positive). Now, \begin{align}\sqrt{x+\sqrt{x}}-\sqrt{x}&=\frac{x+\sqrt{x}-x}{\sqrt{x+\sqrt{x}}+\sqrt{x}}\\ &=\frac{\sqrt{x}}{\sqrt{x+\sqrt{x}}+\sqrt{x}}\\ &=\frac{1}{\sqrt{1+\frac1{\sqrt x}}+1}\end{align} Thus we have \begin{align} \lim_{x\to\infty}\frac{1}{\sqrt{1+\frac1{\sqrt x}}+1}\le\lim_{x\to\infty}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x}}}}-\sqrt{x}&\le\lim_{x\to\infty}\frac12+\sqrt{\frac14+x}-\sqrt x\\ \frac12\le\lim_{x\to\infty}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x}}}}-\sqrt{x}&\le\frac12\\ \lim_{x\to\infty}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x}}}}-\sqrt{x}&=\frac12 \end{align} - Hint for a simpler one: $$\lim_{x \to \infty} \sqrt{x+\sqrt x}-\sqrt x=\lim_{x \to \infty}\sqrt x\left(\sqrt{1+\frac 1{\sqrt x}}-1\right)$$ - Hint: Try multiplying and dividing by the conjugate to get started, simplify the numerator, then factor $\sqrt x$ out of the new numerator and denominator. - \begin{align} \lim_{x\to\infty}\sqrt{x+\sqrt{x+\sqrt{x+\dots}}}-\sqrt{x} &=\lim_{x\to\infty}\frac{\sqrt{x+\sqrt{x+\sqrt{x+\dots}}}}{\sqrt{x+\sqrt{x+\sqrt{x+\dots}}}+\sqrt{x}}\\ &=\lim_{x\to\infty}\frac{\sqrt{1+\frac1x\sqrt{x+\sqrt{x+\dots}}}}{\sqrt{1+\frac1x\sqrt{x+\sqrt{x+\dots}}}+1}\\ &=\frac12 \end{align} To show that $\lim\limits_{x\to\infty}\frac1x\sqrt{x+\sqrt{x+\dots}}=0$, show inductively that $$\sqrt{x+\sqrt{x+\sqrt{x+\dots}}}\le\frac{1+\sqrt{1+4x}}{2}$$ using $\sqrt{x}\le\frac{1+\sqrt{1+4x}}{2}$ and $$\left(\frac{1+\sqrt{1+4x}}{2}\right)^2=x+\frac{1+\sqrt{1+4x}}{2}$$ - I got 1/2 for the limit. Let $y=\sqrt{x+\sqrt{x+\sqrt{x}}}$. $\frac{y}{\sqrt{x}} \rightarrow 1$ and $\frac{y}{x} \rightarrow 0$ as $x \rightarrow \infty$. And $L=\frac{\frac{y}{\sqrt{x}}}{\sqrt{1+\frac{y}{x}}+1} \rightarrow \frac{1}{2}$. - Multiply the numerator and denominator by the conjugate expression. Divide the numerator and denominator by the greatest degree of $x$ - There is no numerator and no denominator. – MJD Dec 23 '13 at 21:53 @MJD: There is a denominator of $1$ that can be used for this purpose. 
– Ross Millikan Dec 24 '13 at 14:55
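A quick numerical check of the value derived above (added here, not part of the original thread), evaluating the expression at increasingly large $x$:

```python
import math

def f(x):
    # sqrt(x + sqrt(x + sqrt(x + sqrt(x)))) - sqrt(x)
    return math.sqrt(x + math.sqrt(x + math.sqrt(x + math.sqrt(x)))) - math.sqrt(x)

for x in (1e2, 1e4, 1e6, 1e8):
    print(f"x = {x:.0e}:  {f(x):.6f}")   # values approach 0.5 as x grows
```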
## College Physics (4th Edition)

We can rank the intensities of light transmitted through the second polarizer, from greatest to smallest: $b \gt a = e \gt d \gt c$

Since the light is randomly polarized initially, the intensity of the light after passing through the first polarizer is $\frac{I_0}{2}$. We can use the law of Malus to determine an expression for the intensity after passing through the second polarizer: $I = \frac{I_0}{2}~\cos^2(\vert \theta_2-\theta_1\vert)$

For each situation, we can find an expression for the intensity of the light after passing through the two polarizers.

(a) $I = \frac{I_0}{2}~\cos^2(30^{\circ}-0^{\circ}) = \frac{3}{8}\times I_0$
(b) $I = \frac{I_0}{2}~\cos^2(30^{\circ}-30^{\circ}) = \frac{1}{2}\times I_0$
(c) $I = \frac{I_0}{2}~\cos^2(90^{\circ}-0^{\circ}) = 0\times I_0$
(d) $I = \frac{I_0}{2}~\cos^2(60^{\circ}-0^{\circ}) = \frac{1}{8}\times I_0$
(e) $I = \frac{I_0}{2}~\cos^2(60^{\circ}-30^{\circ}) = \frac{3}{8}\times I_0$

Since $\frac{1}{2} \gt \frac{3}{8} \gt \frac{1}{8} \gt 0$, we can rank the intensities of light transmitted through the second polarizer, from greatest to smallest: $b \gt a = e \gt d \gt c$
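A small Python check of the five cases above (my own snippet, not from the textbook); each pair gives the angles of the first and second polarizer:

```python
import numpy as np

I0 = 1.0
cases = {"a": (0, 30), "b": (30, 30), "c": (0, 90), "d": (0, 60), "e": (30, 60)}

for name, (theta1, theta2) in cases.items():
    # unpolarized light: I0/2 after the first polarizer, then Malus's law for the second
    I = I0 / 2 * np.cos(np.radians(theta2 - theta1)) ** 2
    print(name, round(I, 3))   # a 0.375, b 0.5, c 0.0, d 0.125, e 0.375
```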
# Equivalence of categories of $D$-modules on a singular $X$

Is $$D^b(Mod_{qc}(D_X)) \to D^b_{qc}(D_X)$$ an equivalence of categories for singular $$X$$? Here $$Mod_{qc}(D_X)$$ is the category of quasi-coherent modules over $$D_X$$ and $$D^b_{qc}(D_X)$$ is the category consisting of objects $$M$$ such that $$H^j(M)$$ is quasi-coherent for all $$j$$. This is known to be an equivalence for $$X$$ smooth, by Hotta, Takeuchi and Tanisaki, D-Modules, Perverse Sheaves, and Representation Theory, Theorem 1.5.7. Thanks in advance.

• Maybe this will be helpful: the LHS has enough injectives even when $X$ is singular. – FunctionOfX Feb 12 at 16:20
# Finding the limit of a quotient I am trying to find the limit of $(x^2-6x+5)/(x-5)$ as it approaches $5$. I assume that I just plug in $5$ for $x$ and for that I get $0/0$ but my book says $4$. I try and factor and I end up with $(25-30+5)/(5-5)$ which doesnt seem quite right to me but I know that if I factor out $5$ and get rid of the $5-5$ (although that would make it $1-1$ wouldn't it?) that leaves me with $5-6+5$ which is $4$. What do I need to do in this problem? - Hint: x=5 is a root of x^2-6x+5 hence x^2-6x+5 is x-5 times... something which you might want to compute. –  Did Aug 27 '11 at 22:46 The limit of $(x^2-6x+5)/(x-5)$ as it approaches $5$ is $5$. Presumably you're trying to find the limit of $(x^2-6x+5)/(x-5)$ as $x$ approaches $5$? –  joriki Aug 27 '11 at 22:47 Perform Polynomial Long Division. –  Bill Dubuque Aug 27 '11 at 22:53 @Jordan: Really, if you're interested in a tutor and you don't know of a local resource, I've tutored over Skype in the past. If you're interested, shoot me an email. My email is very easy to find on my blog (but not something I post on this forum). –  mixedmath Jun 13 '12 at 12:23 Let $P(x)=x^{2}-6x+5$ and $Q(x)=x-5$. Since $P(x)$ and $Q(x)$ are continuous and $P(5)=Q(5)=0$, $\frac{P(5)}{Q(5)}$ is undetermined. You have two alternatives: 1. manipulate algebraically $\frac{P(x)}{Q(x)}=\frac{x^{2}-6x+5}{x-5}$. 2. use L'Hôpital's rule $$\lim_{x\rightarrow 5}\frac{P(x)}{Q(x)}=\lim_{x\rightarrow 5}\frac{P^{\prime }(x)}{Q^{\prime }(x)}=\lim_{x\rightarrow 5}\frac{2x-6}{1}=2\cdot 5-6=4.$$ In option 1, since $P(5)=0$, you know that you can factor $P(x)$ as $$P(x)=x^{2}-6x+5=(x-5)(x-c).$$ You can compute $c=1$, by solving the equation $$x^{2}-6x+5=0.$$ Instead you can perform a long division, as suggested by Bill Dubuque, to evaluate $P(x)/Q(x)=x-1$. So, $$P(x)=x^{2}-6x+5=(x-5)(x-1)$$ and $$\lim_{x\rightarrow 5}% \frac{P(x)}{Q(x)}=\lim_{x\rightarrow 5}\frac{(x-5)(x-1)}{x-5}% =\lim_{x\rightarrow 5}(x-1)=5-1=4.$$ You are allowed to divide $P(x)$ and $Q(x)$ by $x-5$, because you perform a limiting process, and you actually never make $x=5$, which means $x-5$ is never equal to $0$. - $(x^2-6x+5)$ = $(x-1)(x-5)$ cancel out the $x-5$ and you don't have to worry about dividing by zero. $x-1$ is the end result. Plug in $5$ for $x$: $5-1 = 4$ - How do I know to factor it to that instead of something else? –  user138246 Aug 27 '11 at 22:46 It's called factoring a quadratic, and the result is unique, up to constants. –  The Chaz 2.0 Aug 27 '11 at 22:47 Oh is that some stuff I need to memorize? –  user138246 Aug 27 '11 at 22:49 @Jordan: It is called factoring. In this case quadratic factoring: purplemath.com/modules/factquad.htm –  Dair Aug 27 '11 at 22:54 Oh okay, I was just wondering if that was the quadratic formula that was used. –  user138246 Aug 27 '11 at 22:55 Anyway, you can also do it from first principles without any factoring tricks: Let $x = 5 + \epsilon$. Then, when $x \ne 5$, and thus $\epsilon \ne 0$, \begin{aligned} \frac{x^2 - 6x + 5}{x-5} &= \frac{(5 + \epsilon)^2 - 6(5 + \epsilon) + 5}{5 + \epsilon - 5} \\ &= \frac{(25 + 10\epsilon + \epsilon^2) - (30 + 6\epsilon) + 5}{\epsilon} \\ &= \frac{4\epsilon + \epsilon^2}{\epsilon} = 4 + \epsilon. \end{aligned} We thus see that $$\lim_{x \to 5} \frac{x^2 - 6x + 5}{x-5} = \lim_{\epsilon \to 0}\, 4 + \epsilon = 4.$$ (Addendum: The reason for choosing that particular substitution is simple: we want to know what happens when $x$ gets close to $5$; $\epsilon = x - 5$ tells how close $x$ is to $5$. 
In particular, if the limit as $x \to 5$ is well defined, then after the substitution and simplification we should end up with the limit plus some terms that vanish as $\epsilon \to 0$, as we indeed do. If, instead, we ended up with some terms like $1/\epsilon$ that diverge as $\epsilon \to 0$, then we'd know that the limit was not well defined.) - Why can x = 5+ e? –  user138246 Aug 27 '11 at 23:23 @Jordan Carlyon: Why can't it? More seriously, for any real number $x$, we can write it in the form $x = 5 + \epsilon$, where $\epsilon = x - 5$. It's also then easy to show that $\epsilon = 0$ if and only if $x = 5$. –  Ilmari Karonen Aug 27 '11 at 23:27 What is e? Or is that just a variable like x or y? –  user138246 Aug 27 '11 at 23:29 @Jordan Carlyon: Yes, it's just a variable. (Mathematicians often like to use the Greek letter epsilon ($\epsilon$) for "very small" quantities — especially ones which tend to zero in some limit we're interested in. But that's just a matter of tradition; any other symbol would work just as well.) –  Ilmari Karonen Aug 27 '11 at 23:34 But I don't understand why or how that is being used and what the purpose of it is. I have never seen anything like that before, why would I want to do that? Why can I just change a variable to a limit plus a different variable? What allows me to do that and how do I know that it if beneficial to solving the problem without guessing? –  user138246 Aug 27 '11 at 23:38
# Clarification of meaning of dx in an integral [duplicate] I would like to have some clarification on the physical meaning of $dx$. I already know the following in the context of the area under the curve: ## $\lim_{\Delta x \rightarrow 0} \sum f(x) \Delta x \approx \int f(x) dx$ $dx$ is still an interval on x axis. Makes perfect sense. Let's say I have the following curve $(x,f(x))$ like this: curve and I have some function $g(x,y)$ that I want to measure its total sum along my curve. Can I formulate it is as? ## $\int_{a}^b g(x,f(x)) dx$ If so, what is the physical meaning of $dx$ here? Aren't we multiplying some extra values ($dx$) into $g(x,f(x))$ and getting a wrong result? • You appear to be talking about a path integral. You need to give some physical context to get a physical context. Mathematics is mathematics--you can certainly set up the integral you proposed but it can be difficult to interpret the results (such as interpreting the "area underneath the curve"). And it's very important to realize that an integral does not represent the area underneath a curve--rather the area underneath a curve can be an interpretation of an integral (but the area is not necessarily the correct interpretation). – Jared May 26 '16 at 1:13 • When you say $g(x, y): \mathbb{R}^2 \mapsto \mathbb{R}$ you are talking about a 3D surface (a 2D, curved, surface in 3D space). When you say there is a function $y = f(x)$ you are talking about a path on the 2D plane that traverses your 3D surface. Then when you say $\int g(x, f(x))dx$ you are talking about something that makes very little physical sense (but mathematically is perfectly valid). – Jared May 26 '16 at 1:25 • @Jared Thanks! Your first comment and reading up on line integral cleared up a lot! – Sep Jun 3 '16 at 17:10
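To make the distinction raised in the comments concrete, here is a small numerical sketch (my own, with an arbitrarily chosen $g$ and $f$) comparing $\int_a^b g(x,f(x))\,dx$ with the corresponding line integral along the curve with respect to arc length, $\int_a^b g(x,f(x))\sqrt{1+f'(x)^2}\,dx$:

```python
import numpy as np

g = lambda x, y: x * y          # arbitrary example integrand g(x, y)
f = lambda x: x**2              # the curve y = f(x)
df = lambda x: 2 * x            # its derivative f'(x)

a, b = 0.0, 1.0
x = np.linspace(a, b, 100001)

plain = np.trapz(g(x, f(x)), x)                              # integral against dx
line  = np.trapz(g(x, f(x)) * np.sqrt(1 + df(x) ** 2), x)    # integral against arc length ds

print(plain, line)   # about 0.25 vs about 0.474 -- weighting by dx and by ds differ
```

Both are perfectly valid integrals; they simply measure different things, which is why the "extra" $dx$ is not wrong, just a particular choice of weighting along the curve.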
# Generic Host Error and Loss of Sound and Drivers

Reader comments:

It worked……. I also have a similar problem now…

kanhaiyalalsongra — October 4, 2009: I have a Win32 generic host error; when I start my internet it can start only for …

kamran — November 4, 2008: Hi, Brother, thank you so very much.

mixglorioso — July 11, 2008: Dude, this fix works! More power to you man!

I found this forum topic: http://forums.microsoft.com/WindowsOneCare/ShowPost.aspx?PostID=2086259&SiteID=2 and wondered if peer-to-peer networking might be the source of the problem (this discussion also has a link to a Microsoft article that explains the …). Combined with this I get flashes of old Windows backgrounds and a distinct lack of sound. It started to turn all my trays into Windows 98 for a second and then it'd switch back! Connecting to the internet seems to be fine even when the error is displayed; I can still connect to the internet…

aniketh — January 13, 2008: Thanks DJ. Thanks again! I'm really thankful... my head almost burst thinking about how to rectify this pesky problem. Now can you tell me how to disable the registry…?

The system is XP SP1 because I have no access to the Internet for updating my PC. See also: https://answers.microsoft.com/en-us/windows/forum/windows_xp-performance/generic-host-32-error/a3b3726d-dba7-4922-9dcc-ca9d7e1edf4a

reader Brendan said... DID NOT HELP AT ALL.

The solution you gave me is fantastic!!!!

Dj Flush — June 19, 2007: nalini, there is no such thing as "disable the registry" unless you want the limited account to not be able to access the … and restart your PC… Inshallah you never see this error again….

Ultimecia — November 16, 2008: This solution worked for me. Of all the solutions around the net, this is the most helpful. But if you have encountered this error message since you installed Windows XP SP2, the Microsoft page will almost certainly apply to you.

Does anyone here (except Microsoft, who cannot help me at this moment) have any permanent solution for it? I have the EXACT problem rayk334 has. This is a major pain in the axx. Anti-virus scans come up empty, and all the other "fixes" for this problem seem to be from 3 years ago and have something to do with SP2.

Paul — August 24, 2007: Really great job done by you, boss.

reader joe said... You can get an automatic fix such as Svchost Fix Wizard from Security Stronghold, which tries to address all of the Generic Host Process error causes at once. I am working on it.

reader rayk334 said... Then you must navigate to the following registry key: HKEY_LOCAL_MACHINE\Software\Microsoft\OLE. All hopes that a Microsoft patch would be able to heal svchost have been frustrated.

Anushka — November 10, 2008: Hey… thanks Dj Flush… it worked… thanks a lot again.

Ritesh — November 12, 2008: Thanks for your great help. But when I'm offline, there is no generic error. The message says "If you were in the middle of something, the information you were working on might be lost." When I log on to my Yahoo ID I can see my messages and can see what people are saying to me 🙁 please, I need help! (I already tried to …)

chiranjeeb — February 16, 2007: Thanks, it really works……….. Now try to edit the registry and you will see this error: "Registry Editing has been disabled by your Administrator". I have been struggling for the past 1 week to fix this problem.

"… needs to close" blah blah blah…. 🙁 I have updated my system recently but it does not help me… My system config … The subdirectory normally has "update" in it, and you should find and run an EXE file that connects you to the Internet and updates all the drivers automatically. Cheers guys.
# Problem 5-57 Translational Equil.- B A sign outside a hair stylist's shop is suspended by two wires. The force of gravity on the sign has a magnitude of $55.7\; N.$ If the angles between the wires and the horizontal are as shown in the figure, determine the magnitude of the tensions in the two wires. [Ans. $T_1 = 49.9\; N;$ $T_2 = 40.8 \;N$] Accumulated Solution Correct! We must now solve the problem in some coordinate system. Which do you think is best? (A) (B) (C) (D)
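The figure with the wire angles is not reproduced above; working backwards from the stated answers, the angles appear to be about $45^{\circ}$ for wire 1 and $30^{\circ}$ for wire 2, measured from the horizontal. Under that assumption, here is a short Python check of the two equilibrium equations (my own sketch, not part of the tutorial):

```python
import numpy as np

W = 55.7                                   # weight of the sign (N)
a1, a2 = np.radians(45), np.radians(30)    # assumed wire angles above the horizontal

# Equilibrium of the sign:
#   horizontal: T1*cos(a1) - T2*cos(a2) = 0
#   vertical:   T1*sin(a1) + T2*sin(a2) = W
A = np.array([[np.cos(a1), -np.cos(a2)],
              [np.sin(a1),  np.sin(a2)]])
b = np.array([0.0, W])
T1, T2 = np.linalg.solve(A, b)
print(round(T1, 1), round(T2, 1))   # about 49.9 and 40.8, matching the stated answers
```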
Published in last 1 year |  In last 2 years |  In last 3 years |  All All Select Change and influencing factors of China’s cross-regional investment network structure JIAO Jingjuan, ZHANG Qilin, WU Yuyong, JIANG Runze, WANG Jiao'e PROGRESS IN GEOGRAPHY    2021, 40 (8): 1257-1268.   DOI: 10.18306/dlkxjz.2021.08.001 Abstract (992)   HTML (3)    PDF (3942KB)(222)       With the increasing economic interaction between cities, capital flow across regions has gradually become a key factor affecting the regional economic disparities. Cross-regional enterprise investment is regarded as the micro embodiment of capital flows. It is of great significance to explore the characteristics of cross-regional enterprise investment for reducing regional economic disparities. Thus, this study examined the cross-regional investment network using the cross-regional investment data of Chinese listed companies in 1998-2018, and analyzed the characteristics of the spatial evolution of China's cross-regional investment network and its influencing factors at the national and regional levels. The results show that: the spatial agglomeration trend of node centrality in China's cross-regional investment network at the national and regional levels is obvious and the cities with high node centrality are mainly concentrated in the five major urban agglomerations. There are obvious hierarchical structure, spatial heterogeneity, and path dependence of the cross-regional investment network; the net investment inflows and outflows are mainly in the eastern region, and the investment activities tend to develop toward the central and western regions; the influence of city economic development level, industrial structure, and financial environment varies across regions and types of cities with different population scales. Select Spatial match between residents’ daily life circle and public service facilities using big data analytics: A case of Beijing ZHAO Pengjun, LUO Jia, HU Haoyu PROGRESS IN GEOGRAPHY    2021, 40 (4): 541-553.   DOI: 10.18306/dlkxjz.2021.04.001 Abstract (831)   HTML (77)    PDF (20777KB)(356)       Residents' daily life circle is one of the key issues in relation to the national spatial planning in the "new era". Supply of public service facilities is the primary condition for plan-making of this type of circle. Spatial match between residents' daily life circle and public service facilities reveals the human-environment relationship at the community level. There exist many studies on spatial match between residents' daily life circle and public service facilities. However, the existing findings are mainly based on survey data, which have disadvantages such as insufficient samples, small geography coverage, and so on. This study investigated the spatial match between residents' daily life circle and public service facilities in large cities by taking Beijing as an example. Using mobile phone data and point-of-interest (POI) data collected in 2018, this study measured the spatial range of residents' daily life circle and accessibility of public service facilities, and analyzed the relationship between the spatial range of residents' daily life circle and accessibility of public service facilities by the bivariate spatial autocorrelation method. It also analyzed the geographical variations in the relationship. The results of analysis show that residents' daily life circle has a multi-centric structure at the city level. The length of radius of the circle increases from the central areas to the periphery. 
Accessibility of public service facilities is featured with a zonal structure but its level decreases with the distance away from the centers. The level of accessibility is negatively related with radius of the circle, which means that the higher level of accessibility, the smaller radius of the circle. There are geographical variations in the relationship. The relationship is "high-low" in the city center and new town centers, but dominated by "low-low" and "low-high" pattern in the fringe of the city center and new town centers. There are also variations in the relationship between different types of public service facilities. For the cultural and leisure facilities, the degree of spatial match between residents' daily life circle and public service facilities is obvious lower than other facilities. The conclusion of this research provides new evidence for residents' daily life circle study, and has policy implications for residents' daily life circle planning. Select Spatial distribution of population decline areas in China and underlying causes from a multi-periodical perspective LIU Zhen, QI Wei, QI Honggang, LIU Shenghe PROGRESS IN GEOGRAPHY    2021, 40 (3): 357-369.   DOI: 10.18306/dlkxjz.2021.03.001 Abstract (593)   HTML (280)    PDF (7612KB)(163)       Regional population decline has gradually become a new phenomenon in recent years, which has attracted extensive attention from scholars and the government. Using the national census data and 1% population sampling survey data, this study identified the population decline areas at the county level from 1990 to 2015 from a multi-periodical perspective. Based on the theoretical analysis of the driving factors of population decline, a cluster analysis has been conducted to reveal the spatial differences of the driving factors of population decline, which resulted in four typical cases of causes. The findings are as follows: First, the population decline areas have very different trajectories: while about 24% of them are characterized by fluctuating but overall decline, about 13% of them have experienced continuous decline, and about 5% of them have only experienced recent decline. Second, the fluctuating but overall decline county units were mainly distributed in the middle reaches of the Yangtze River and Gansu, Shaanxi, Jiangsu, and Fujian Provinces, and the continuous decline county units were mainly concentrated in Sichuan, Guizhou, Chongqing, and the Northeast region, and the recent decline county units were mainly concentrated in the Northeast region, Henan, and Xinjiang. Third, there are obvious regional differences in the driving factors of population decline: the county units driven by lagged economy accounted for the highest percentage, and these units were mainly distributed in the central and western regions; the county units in the Northeast region were mainly driven by the slowed economic development and the low natural growth level; in contrast, the percentage of county units only driven by the low natural growth level is relatively low, and these units were mainly distributed in the eastern region. Based on these findings, we argue that it is necessary to pay more attention to the phenomenon of population decline at the regional scale, and take targeted measures by fully considering the trend of change and driving factors of population decline in different regions. 
Select Spatial differentiation and influencing factors of fan economy in China: Taking TikTok livestreaming commerce host as an example PENG Jue, HE Jinliao PROGRESS IN GEOGRAPHY    2021, 40 (7): 1098-1112.   DOI: 10.18306/dlkxjz.2021.07.003 Abstract (478)   HTML (16)    PDF (6060KB)(122)       Fan economy is a rapidly emerging business in the Internet era. However, the existing literature lacks research on fan economy from a geographical perspective. Based on the theory of network space, and taking TikTok livestreaming commerce host as an example, combined with the influencing factors of e-commerce and urban amenity theory, this study constructed an index system of influencing factors affecting the spatial distribution of Chinese livestreaming commerce host. Using location quotient, global Moran's I, and cold-hot spot spatial analysis methods, we analyzed the spatial agglomeration characteristics of Chinese livestreaming commerce host, and the geographic factors that affect livestreaming commerce host distribution through spatial regression. The results indicate that: 1) China's fan economy shows a significant spatial agglomeration, and it is highly concentrated in the eastern coastal areas, with Guangzhou and Hangzhou as the most prominent. 2) The digital economy represented by livestreaming is reshaping China's original city tier systems. Cities with entertainment media, e-commerce, and characteristic tourism (such as Changsha, Jinhua, and Lijiang), are very attractive to livestreaming commerce hosts, even more than some first-tier cities (such as Beijing and Shanghai). 3) Through spatial regression analysis, it is found that the environment for e-commerce startups and cultural tourism have a strong explanatory power for the spatial distribution of livestreaming commerce hosts. The convenience of living and the natural environment also have an important impact, and the impact of human capital is small. At the same time, the number of patents has a significant crowding out effect on livestreaming commerce hosts, and livestreaming commerce has a strong grassroots nature. This research provides detailed empirical cases for in-depth understanding of the spatial process of fan economy and its influence mechanism and provides a reference for local governments to promote the development of digital economy and formulate talent introduction policies. Select Change of spatial structure of manufacturing industry in the Beijing-Tianjin-Hebei region and its driving factors JIANG Haibing, LI Yejin PROGRESS IN GEOGRAPHY    2021, 40 (5): 721-735.   DOI: 10.18306/dlkxjz.2021.05.001 Abstract (462)   HTML (62)    PDF (15093KB)(314)       The development strategy of industrial transfer and upgrading, coordinated development, and in-depth integration of advanced manufacturing in the Beijing-Tianjin-Hebei region put forward higher requirements for the spatial layout of manufacturing industrial clusters. Research on the change of manufacturing industry spatial pattern can provide a reference for the optimization of urban agglomerations' advanced manufacturing industries. Based on the micro-level data of industrial enterprises above designated size in the Beijing-Tianjin-Hebei region from 2000 to 2013, this study used kernal density analysis and panel data regression models to explore the characteristics and driving factors of the change of the manufacturing industry spatial pattern in the region. 
The results of this empirical research show that: 1) The overall spatial pattern of all manufacturing industries in the Beijing-Tianjin-Hebei region is relatively stable, and high-value areas are concentrated in the Beijing-Tianjin-Tangshan area. The regional linked development of capital-intensive industries is gaining momentum; technology-intensive industries are increasingly concentrated in a few districts and counties, and the degree of spatial autocorrelation with surrounding districts and counties has weakened as a whole; spatial expansion into nearby districts and counties and spatial transfer of labor-intensive industries appeared alternately; and the regional linked development promotes the balanced growth of manufacturing industries in various regions and narrows the development gap. 2) The manufacturing industry in the Beijing-Tianjin-Hebei region shows a clear trend of specialization and regional division of labor, and labor-intensive industries are increasingly spreading to the periphery of the central cities and the counties in the central and southern areas of the region. Capital-intensive industries are concentrated in the industrial belt on the west coast of the Bohai Sea, the industrial output value of the peripheral areas of the region has increased significantly, and technology-intensive industries are gathered in the Beijing-Tianjin high-tech industrial belt. 3) The key driving factors of the three types of manufacturing industries are different. Labor-intensive industries are affected by investment and transportation accessibility. Capital-intensive industries are highly dependent on local market size and investment, and are insensitive to transportation accessibility. Technology-intensive industries are mainly constrained by transportation accessibility and wage levels. The three types of manufacturing industries are obviously affected by local fiscal expenditures. Select Simulation of city network accessibility and its influence on regional development pattern in China based on integrated land transport system CHEN Zhuo, LIANG Yi, JIN Fengjun PROGRESS IN GEOGRAPHY    2021, 40 (2): 183-193.   DOI: 10.18306/dlkxjz.2021.02.001 Abstract (399)   HTML (47)    PDF (13495KB)(452)       With the increasing emphasis on coordinated regional development, transport and socioeconomic developments in China have taken a new turn in recent years. Based on the present and future integrated land transport network, the trend of city network accessibility and its impact on the change of regional development patterns in China were analyzed in this study by focusing on the construction of travel circles and regional balance. The results show that the completion of the existing planning can greatly improve the accessibility of China's city network and can largely support the construction of travel circles according to the shortest travel time. By promoting the development of hub-spoke organization mode and spatial cascading order, the existing planning can guide the multi-center and networking development of spatial structure and provide a basis for the coordinated and balanced development between regions. In the future, China's transport development should continue to optimize the supply structure of transportation services and improve the ability of the integrated transport system to serve the needs of people's daily lives and production. 
Select Land consolidation and rural vitalization: A perspective of land use multifunctionality JIANG Yanfeng, LONG Hualou, TANG Yuting PROGRESS IN GEOGRAPHY    2021, 40 (3): 487-497.   DOI: 10.18306/dlkxjz.2021.03.012 Abstract (383)   HTML (17)    PDF (3626KB)(287)       The long-term supply-demand imbalance of rural land use functions (RLUFs) is one of the main reasons for rural issues in China. Based on the multifunctionality theory, this study explained the mutual relationship between rural land consolidation (RLC) and rural vitalization with a focus on supply-demand and element-structure-function relationships, and then discussed how to realize the supply-demand balance of RLUFs through RLC so as to promote sustainable rural development. The results show that: 1) Comprehensive rural land consolidation is a multifunctional land use method and an important means to solve rural issues for promoting rural vitalization. In essence, it is the transition from productivism that focuses on economic benefits to non-productivism that takes social, economic, and environmental benefits as a whole. 2) RLUFs include production, living, ecological, and cultural functions, corresponding to the economic, social, environmental, and cultural demands of rural vitalization. The production functions are divided into agricultural, commercial, and industrial functions, and living functions include residential, employment, and public service functions. 3) Along the path of integrating land use elements, restructuring land use structures, and optimizing land use functions, RLC promotes the supply-demand balance of RLUFs from the supply side according to local conditions. 4) In future research, the mechanisms and modes of RLC impact on rural vitalization at different spatial scales, as well as quantitative analysis of the functional supply of land use and the functional demand of rural vitalization under the influence of RLC should be given more attention, thus laying a scientific foundation for the formulation and implementation of land use and rural vitalization planning. Select Spatial patterns and controlling factors of settlement distribution in ethnic minority settlements of southwest China: A case study of Hani terraced fields LIU Zhilin, DING Yinping, JIAO Yuanmei, WANG Jinliang, LIU Chengjing, YANG Yuliang, WEI Junfeng PROGRESS IN GEOGRAPHY    2021, 40 (2): 257-271.   DOI: 10.18306/dlkxjz.2021.02.007 Abstract (362)   HTML (15)    PDF (15380KB)(339)       Settlement pattern, an important part of the human-nature system, is the foundation of rural geography, and it has become a hotspot in geographic research. Scientific analysis and characterization of settlement patterns are significant for promoting the development of urbanization, ethnic unity, and well-off society in rural minority areas. However, there is still a lack of research on the settlement patterns of ethnic minority areas, especially in those multi-ethnic group gathered areas. This study depicted the settlement patterns of seven ethnic minority groups (including Hani, Yi, Zhuang, Han, Miao, Yao, and Dai) in the Hani Rice Terraces World Heritage area, which is a typical multi-ethnic group gathered area in the southwest of China. The results show that: 1) In terms of spatial locations, 68% of the settlements in the Hani terraced fields area are located in the west and central parts of the territory, mainly in the areas of Han, Yi, and Zhuang.
2) The ethnic settlement pattern in the Hani terraced fields is characterized by the mix of Hani-Yi, accompanied by the mix of other ethnic groups. 3) In terms of location and the environment, settlements of the seven ethnic groups have significant differences in locational and environmental characteristics such as altitude, slope, temperature, precipitation, distance to river, settlement scale, cultivated land area, distance to administrative center, and grain yields. 4) The main controlling factors of the distribution of Zhuang, Miao, and Yao settlements are economic and administrative and distance to tourism centers (86.4%, 75.3%, and 92.8%); the main controlling factor of the distribution of Yi settlements are air temperature (52.0%); and the main controlling factors of the distribution of Han, Hani, and Dai settlements are precipitation (98.7%, 52.2%, and 97.0%). 5) On the whole, the settlements of Hani terraced fields formed a three-dimensional pattern of multi-ethnic symbiosis vertically, and a multi-ethnic mosaic pattern horizontally. This research can provide a reference for the construction of new rural areas in minority regions, the optimization of settlement patterns, targeted poverty alleviation, and the construction of a well-off society in an all-round way. Select Impact of the COVID-19 pandemic on population heat map in leisure areas in Beijing on holidays ZHAO Ziyu, ZHAO Shiyao, HAN Zhonghui, XU Yunxiao, JIN Jie, WANG Shijun PROGRESS IN GEOGRAPHY    2021, 40 (7): 1073-1085.   DOI: 10.18306/dlkxjz.2021.07.001 Abstract (345)   HTML (21)    PDF (6322KB)(116)       The Chinese government has curbed the outbreak of COVID-19 through a population flow control rarely seen in history. The COVID-19 pandemic has greatly impacted the recreation industry. Using mobile location data, this study quantitatively analyzed the impact of the COVID-19 pandemic on population heat map in the leisure areas within the Third Ring Road of Beijing City on the Qingming Festival and Labor Day. The results showed that: 1) The COVID-19 pandemic significantly impacted population heat map in leisure areas in Beijing on holidays, and the population heat map values of the three types of leisure areas investigated in this study declined by 54.2% and 53.0% on the Qingming Festival and Labor Day in 2020 as compared to the 2019 values, respectively. To be specific, the population heat map values of famous scenery, shopping services, and hotel accommodation decreased by 53.6%, 57.5%, and 52.9% on the Qingming Festival, and by 48.5%, 52.0%, and 55.6% on Labor Day, respectively. 2) There were differences in the degree of the impact on population heat map in different types of areas in famous scenery. The impact on the three major segments of famous scenery can be ranked in ascending order as follows: temples and churches (41.7%, 50.3%), parks and squares (53.1%, 47.1%), and scenic spots (61.1%, 51.2%). Wilcoxon rank sum test showed that the hourly variation of population heat map in temples and churches was smaller, and the overall demand can be ranked in ascending order as follows: sightseeing, daily leisure, and religious activities. 3) The 2020 population heat map of the leisure areas within the Third Ring Road of Beijing City was significantly negatively and positively correlated with the population heat map before the pandemic and area of these leisure areas, respectively. 
This can be attributed to the risk perception of the leisure crowds and the spatial and environmental factors of the disease prevention and control measures. This study provides a scientific basis for assessing the impact of the COVID-19 pandemic on leisure forms in big cities of China. Select System dynamics model-based simulation of energy consumption pattern on the two sides of the Huhuanyong Line in China ZHAO Sha, HU Zui, ZHENG Wenwu PROGRESS IN GEOGRAPHY    2021, 40 (8): 1269-1283.   DOI: 10.18306/dlkxjz.2021.08.002 Abstract (344)   HTML (3)    PDF (8100KB)(57)       The Huhuanyong Line is a real portrayal of the spatial pattern of population, economic, and social development in China. It perfectly describes key characteristics of energy production and consumption. Quantitatively simulating the spatial pattern of energy consumption on the two sides of the line can provide a reference to achieve regional coordinated development. This study employed data from the China Energy Statistical Yearbook (2005-2014). We first constructed the System Dynamics Model Based on the Huhuanyong Line Energy Consumption Simulation Model (HLECSM-SD) using the $GM ( 1,1 )$ model and System Dynamics (SD) model. Then, we simulated the pattern of various energy consumptions on the two sides of the line from 2020 to 2025. Finally, this study analyzed energy consumption of China under three scenarios. The results indicate that: 1) The HLECSM-SD model fits the data well. 2) Energy consumption presents the spatial pattern of more in the east and less in the west in China. 3) The change trend of energy consumption growth rate is consistent across the two regions. The east side has a lower growth rate than the west side. 4) On the east side of the line, coal consumption has the characteristics of more in the north and less in the south. This is consistent with the spatial distribution of China's coal resources. The consumptions of petroleum, natural gas, and electricity all have the characteristics of more in the east and less in the central region. This is determined by many factors, such as resource endowment, economic development, population scale, and industrial structure of each province. 5) The influencing factors have different degrees of impact on energy consumption under different scenarios. Our findings can provide some reference for the macro decision making in the energy field. Select Geography of sustainability transitions: A sympathetic critique and research agenda YU Zhen, GONG Huiwen, HU Xiaohui PROGRESS IN GEOGRAPHY    2021, 40 (3): 498-510.   DOI: 10.18306/dlkxjz.2021.03.013 Abstract (335)   HTML (16)    PDF (1297KB)(122)       Sustainability transitions focus on the fundamental transformation of the existing socio-technical system towards a more sustainable mode of production and consumption. Emerged in Europe two decades ago, this new research field has already exerted impacts on the green transition policy practices of many countries and regions. In recent years, transition studies have increasingly taken geography into account, resulting in a new paradigm of geography of sustainability transitions. This emerging paradigm focuses on the role of spatial embeddedness and multi-scalar interactions in explaining where transitions take place. 
This article provides a critical overview of the development in the geography of sustainability transitions research, and suggests five promising avenues for future transition research in the Chinese context: 1) to develop concepts and theorize from the Chinese context; 2) to link sustainability transitions with latecomer regions' industry catch-up; 3) to compare the sustainability transitions in cities with different leading industries; 4) to pay more attention to the role of local agency through the lens of multi-scalar interactions; and 5) to explore the impact of digitalization and artificial intelligence on sustainability transitions. Select Toward rural-urban co-governance: An interpretation of the change of rural-urban relationship since the reform and opening up ZHANG Wenbin, ZHANG Zhibin, DONG Jianhong, ZHANG Huailin, GONG Weimin PROGRESS IN GEOGRAPHY    2021, 40 (5): 883-869.   DOI: 10.18306/dlkxjz.2021.05.014 Abstract (332)   HTML (9)    PDF (3427KB)(68)       The relationship between urban and rural areas in China has been an important relationship for economic and social development and a major concern of the party and the government. In order to explore the relationship between urban and rural areas and its governance logic, the CiteSpace software was used to analyze the research hotspots of rural-urban relationship since the reform and opening up in the 1970s and to interpret its change based on the historical background, and then reveal the contextual characteristics of rural-urban relationship and the internal logic of governance reform. The research shows that since the reform and opening up, rural-urban relationship has gone through four stages—from an improving urban-rural relationship, to rural-urban re-separation, rural-urban relationship adjustment, and integrated rural-urban development. The process reflects the governance logic of breaking the rural-urban division, favoring the urban field, balancing rural-urban development, and promoting rural-urban integration. Since the 19th National Congress of the Communist Party of China, the relationship between urban and rural areas has developed in the direction of rural-urban integration. Rural-urban co-governance is the internal demand and governance trend of integrated rural-urban development in the new era. Finally, the article discussed the prospect of integrated rural-urban development and rural-urban co-governance from the aspects of abolishing the rural-urban dual system and establishing new supporting systems and mechanisms, breaking disciplinary boundaries and integrating interdisciplinary knowledge and cross-application of practice, and organically combining the two strategies of new urbanization and rural revitalization. Select Patterns and determinants of location choice in residential mobility: A case study of Shanghai CUI Can, MU Xueying, CHANG Heying, LI Jiayi, WANG Fenglong PROGRESS IN GEOGRAPHY    2021, 40 (3): 422-432.   DOI: 10.18306/dlkxjz.2021.03.006 Abstract (323)   HTML (10)    PDF (4759KB)(134)       Since the marketization of China's housing system, urban residents' housing adjustment through making residential moves has become relatively frequent. Residential mobility, as the micro-mechanism of urban space differentiation and restructuring, has been extensively studied in urban geography and housing studies. However, the existing literature mainly focuses on the motivation underlying residential mobility and its impacts on individuals/families and urban space. 
Comparatively, the location changes before and after residential moves have received scant attention in previous studies. This study adopted the perspective of life course and time geography to depict the residential trajectories of Shanghai residents and explore the influencing factors of location choice in residential mobility. The data used for the empirical analysis were drawn from the 2018 "Shanghai Resident Housing and Living Space Survey", which adopted the stratified and multi-stage probability proportion to size sampling. A retrospective survey was conducted, allowing us to obtain information on the respondents' sociodemographic information and their residential trajectories. The results reveal that the dominant type of location change is outward move across the ring roads. Nevertheless, the variations in location choice between cohorts, local population and migrants, and renters and owners of properties are evident. Compared with the older cohorts, younger cohorts generally make residential moves at earlier ages, and many of them move from the central areas to the suburbs. Different from the local population, migrants' residential mobility is more constrained in terms of the timing of making residential moves and their location choice. Furthermore, this study shows that age, location of workplace, and housing tenure all significantly affect location choice in making residential moves. Specifically, the older cohorts concentrate in the central areas before as well as after a residential move. Commuting distance plays a major role in affecting people's choice of residential location, and owning an automobile has insignificant influence. A transition into homeownership is often associated with a change to an advantageous location. Select Spatio-temporal patterns of urban-rural transformation and optimal decision-making in China GUO Yuanzhi, WANG Jieyong PROGRESS IN GEOGRAPHY    2021, 40 (11): 1799-1811.   DOI: 10.18306/dlkxjz.2021.11.001 Abstract (314)   HTML (0)    PDF (3314KB)(0)       Urban-rural transformation (URT) is a comprehensive process with the characteristics of multi-domains and multi-levels. A scientific understanding of the concept and connotation of URT and a systematic discussion of the patterns and mechanism of URT are of great significance to solving the problems of unbalanced urban-rural development and insufficient rural development. Based on the theoretical cognition of URT, this study comprehensively analyzed the urban-rural development level and its spatial-temporal patterns in China, revealed the patterns of URT according to the coupling coordination degree of urban-rural development level, and discussed the key of urban-rural integrated development in different types of URT areas. The results show that URT is the result of the interaction between the change of urban regional system and the change of rural regional system, and its external representation is the coupling coordination state of the two different but closely related processes. From 2000 to 2018, the level of urban and rural development in all provinces of China's mainland has risen rapidly, and the coupling coordination degree of urban and rural development level has changed from being on the verge of imbalance to intermediate coordination. Spatially, the provincial coupling coordination degree of the central and western regions is significantly lower than that of the northeast and eastern regions. 
Accordingly, URT in China has realized the transformation from low-level urban-rural coordination to medium-level urban-rural integration, showing a spatial characteristic that provincial URT in the central and western regions lags behind the eastern areas, especially Beijing and the provinces in the Yangtze River Delta, where urban-rural development has entered or will soon enter the stage of high-level urban-rural integration. According to the features of URT in each province, URT in China can be divided into four types, that is, high-level urban-rural integrated area, medium-level urban-rural integrated area, low-level urban-rural integrated area Ⅰ, and low-level urban-rural integrated area Ⅱ. To continuously promote the development of new-type urbanization and the implementation of rural revitalization strategy, it is urgent to establish and improve the system and mechanism of urban-rural integrated development through measures such as deepening the reform, innovating the mechanism, and making up for the shortcomings. Select Technology-introduction pattern of cities in China and its mechanism of change based on technology relatedness and complexity JIN Zerun, ZHU Shengjun PROGRESS IN GEOGRAPHY    2021, 40 (6): 897-910.   DOI: 10.18306/dlkxjz.2021.06.001 Abstract (310)   HTML (35)    PDF (7934KB)(85)       "Development driven by innovation" is an important strategy of the Chinese government. This study used data including inter-city patent transfer from China Intellectual Property Office for 2017 and 2018 to explore the technology-introduction pattern of cities in China from the perspective of technology relatedness and complexity, using Gephi, ArcGIS, and Stata. This study hypothesized that: 1) cities tend to introduce technologies highly related to local knowledge structure; 2) the more complex a technology is, the less opportunity that cities will introduce it; and 3) the relatedness of a technology will mitigate the effect of its complexity on technology transfer. Based on the average relatedness and average complexity of technologies introduced in each city, this study identified four technology-introduction patterns, which are "high relatedness and high complexity", "low relatedness and high complexity", "low relatedness and low complexity", and "high relatedness and low complexity". Furthermore, unique mechanisms of change exist for different technology-introduction patterns. This study found that the complexity of introduced technologies increases with the economic development stage of the city, while the relatedness of that displays an inverse U-shaped mode. Hence, we divided technology introduction into three stages according to the level of urban development: 1) the learning stage dominated by low relatedness, 2) the reinforcing stage dominated by the increase in relatedness, and 3) the leaping stage dominated by diversification into unfamiliar technology fields. The empirical results show that in general, the increase in technological relatedness and the decrease in complexity of a technology will promote cities to introduce the technology, and the increase in relatedness will encourage cities to introduce more complex technology in that field. 
Additionally, the mechanism of change was tested through regression by groups—cities were sorted into four groups by their GDP per capita and population density, then we performed regression on technological relatedness and complexity respectively, which shows that the coefficient of relatedness lost significance in the most developed 25% cities, while it remained robust in the other three groups. The coefficient of complexity similarly lost significance in the most developed 50% cities. These results jointly verify the hypothesis of three technology-introduction stages. This study analyzed the pattern of technology-introduction empirically, stressing on the importance of relatedness and complexity in innovation research, which offers a grounded reference for guiding the innovation development path of cities. Select Mechanism of interaction between urban land expansion and ecological environment effects in the Yangtze River Delta YANG Qingke, DUAN Xuejun, WANG Lei, WANG Yazhu PROGRESS IN GEOGRAPHY    2021, 40 (2): 220-231.   DOI: 10.18306/dlkxjz.2021.02.004 Abstract (307)   HTML (10)    PDF (31825KB)(238)       Taking the Yangtze River Delta as the research object, this study established the correlation model and coupling degree model for evaluating the mechanism of interaction between urban land expansion and ecological environment effects by using grey correlation analysis method. It explored the pattern of temporal and spatial variation and coupling degree characteristics of urban land expansion and ecological environment effects and change, and analyzed the interactions between the two systems. The results show that: 1) The index of urban land expansion in the Yangtze River Delta has been increasing, and the socioeconomic development and land use development have played a significant positive role. Socioeconomic development imposes a demand for greater urban production and living space and higher environmental quality. The increase of construction land area, the high-intensity expansion and the decrease of population density are all important reasons for the increase of urban land expansion index. 2) The overall performance of the regional ecological environment quality is stable, and the ecological environment effect is reflected in its spatial differentiation, with obvious characteristics of spatial and temporal change. The ecological environment quality of the cities in Zhejiang Province is significantly higher than that of Shanghai Municipality and Jiangsu Province, which is closely related to the regional environmental carrying capacity, the construction of pollution control facilities, and the propagation of ecological protection concepts. 3) Most cities have low coupling degree between urban land expansion and ecological environment effect, and the relationship between the two systems is in a state of imbalance. In the process of urbanization, land expansion tends to be low-density and decentralized, which strongly threatens the ecological security and environmental quality and lead to the increase of spatial disparity between urban land development and ecological environment protection. 4) There is a strong interaction between the elements of urban land expansion system and ecological environment system in the Yangtze River Delta, and the forces of each element are slightly different. The stressing effect of urban land expansion on ecological environment is gradually increasing, while the restraining effect of ecological environment on urban land expansion is decreasing. 
Select Spatial expansion mode of manufacturing firms in big cities and its impact on firm efficiency: A case study of Beijing listed firms ZHANG Keyun, PEI Xiangye PROGRESS IN GEOGRAPHY    2021, 40 (10): 1613-1625.   DOI: 10.18306/dlkxjz.2021.10.001 Abstract (305)   HTML (5)    PDF (4699KB)(33)       Firms spatial expansion is of great significance to enterprise efficiency and regional coordinated development. Based on the data of listed manufacturing firms in Beijing and their subsidiaries from 2009 to 2018, this study examined the enterprise spatial expansion model through the changes of spatial distribution of subsidiaries, and analyzed the change of the distance between headquarters and subsidiaries brought by expansion. Furthermore, the dynamic panel measurement method was used to empirically test the impact of the change of geographical distance and economic distance between headquarters and subsidiaries on the efficiency of manufacturing enterprises with different expansion modes. The study found that: First, during the study period, the scale of expansion of the sample listed manufacturing firms in Beijing was relatively large, and the spatial expansion mode has changed from hierarchical diffusion to a combination of hierarchical diffusion and contagious diffusion, with contagious diffusion as the dominant mode. The geographical distance between headquarters and subsidiaries showed an upward trend, and the economic distance first decreased and then increased. Among these firms, technology-intensive firms and non-state-owned firms tend to experience hierarchical diffusion, while non-technology-intensive firms and state-owned firms tend to undergo contagious diffusion. Second, for the firms with contagious diffusion as the main expansion mode, geographical distance between headquarters and subsidiaries was negatively correlated with firm efficiency, but the efficiency of firms that did not take contagious diffusion as the main mode of expansion was not affected by geographical distance. Third, regardless of firm expansion mode, economic distance between headquarters and subsidiaries was positively correlated with firm efficiency. Therefore, different types of manufacturing firms should choose different expansion strategies. Select Measurement of rural poverty alleviation sustainability and return-to-poverty risk identification in Qinling-Bashan Mountains:A case study of Chengkou County, Chongqing Municipality GUO Qian, LIAO Heping, WANG Ziyi, LIU Yuanli, LI Tao PROGRESS IN GEOGRAPHY    2021, 40 (2): 232-244.   DOI: 10.18306/dlkxjz.2021.02.005 Abstract (269)   HTML (4)    PDF (10335KB)(179)       Achieving sustainable poverty alleviation and establishing a prevention and control mechanism for return-to-poverty in extreme poverty rural areas is a realistic requirement in the post-2020 era. It is also a key link between precision poverty alleviation and rural revitalization. Taking Chengkou County of Chongqing Municipality—an area of strong ecological fragility and concentrated continuous poverty—as the research area, and based on the poverty alleviation sustainability measurement model, obstacle degree model, and minimum variance model, this study explored the spatial differentiation of multidimensional poverty alleviation sustainability and the return-to-poverty risk models for 60 villages and 1950 farming households in the area. 
The study found that: 1) The sample villages' poverty alleviation sustainability distribution generally showed a "gourd-like" structure where the front end is narrow and the middle part protrudes. The multidimensional poverty alleviation sustainability in the area is generally low and of different degrees. 2) The return-to-poverty risk in Chengkou County can be divided into four models and 11 types, dominated by diversified integration of various resistance factors. Human capital, development opportunities, and other factors related to sustainable income growth, dynamic anti-risk capability, and endogenous drives of farmers have gradually become the focus of poverty reduction and control of return-to-poverty at this stage. 3) Local governments should give equal priority to alleviating poverty, improving the sustainability of poverty alleviation, and preventing return-to-poverty. At the same time, improve people's ability to resist risks and develop a network for preventing return-to-poverty of vulnerable groups with specific policy in each village. Select Do urban public service facilities match population demand? Assessment based on community life circle CHANG Fei, WANG Lucang, MA Yue, YAN Cuixia, LIU Haiyang PROGRESS IN GEOGRAPHY    2021, 40 (4): 607-619.   DOI: 10.18306/dlkxjz.2021.04.006 Abstract (259)   HTML (16)    PDF (14340KB)(236)       Public service facilities (PSF) are the basic guarantee for urban production and living. Whether the distribution of public service facilities is equitable is related to the healthy development of cities and the society. At present, due to the lack of urban micro-scale population distribution data, there are few studies that consider both the supply side (PSF) and the demand side (population). In view of this, using the Internet maps application programming interface (API), this study established the 5-minute, 10-minute, and 15-minute community life circle of Lanzhou City, and then used Worldpop grid data, population census data, and Baidu heat map data to simulate the population distribution at high spatial resolution and with high accuracy. We evaluated the matching relationships between population and public service facilities in Lanzhou City. The study found that: 1) The matching relationships between different types of PSF and population are very different. However, they show a common phenomenon that the matching degree close to district administrative centers is often better than that of urban fringe. 2) In Lanzhou City, the matching relationships between PSFs and population are highly polarized, that is, there are more highly matched and mismatched life circles, and the number of moderately matched and relatively poorly matched life circles is fewer. 3) Based on the coverage of moderately and highly matched life circles, the coverage of all levels of travel, medical (except community health service centers corresponding to 10-minute life circle), dining, and entertainment facilities is the widest. The allocation of elderly care facilities at all levels and grass-roots cultural facilities is seriously inadequate, and other facilities are between the two types. The study concludes that the problems that have been identified need to be addressed. It suggests that urban planning should focus on the allocation of various PSF in the urban fringe, and improve the coverage of all levels of elderly care facilities. Select Review on the urban network externalities CHENG Yuhong, SU Xiaomin PROGRESS IN GEOGRAPHY    2021, 40 (4): 713-720.   
DOI: 10.18306/dlkxjz.2021.04.015 Abstract (255)   HTML (6)    PDF (722KB)(73)       Urban network research has become the frontier academic field of international urban research and has gradually become a hot spot. At present, the related literature on "urban network" mostly focuses on conceptual discussion, dimension analysis, and network structure analysis. Research on the influence of network on regional economic development is relatively weak. Externality, as an essential attribute of urban network, is of great significance to the evolution of urban network and the development of cities and regions. This article starts from a comparison of agglomeration externalities with urban network externalities, focusing on the review and evaluation of the formation mechanism, utility, and measurement methods of urban network externalities. The synergy effect, integration effect, and borrowing size are considered important reasons for the formation of urban network externalities. The research on the effectiveness of urban network externalities focuses on two aspects. The first is the role of factor flow in promoting knowledge diffusion and innovation, and the second is the impact of urban network on competitiveness and economic growth. Based on the existing literature, the research on the measurement of urban network externalities mainly involves identification and estimation, including three common methods: correlation analysis, regression analysis, and spatial econometric analysis. The existing empirical research on externalities is still mostly based on static analysis and lacks dynamic consideration. To a large extent, the existing research has insufficient theoretical framing and insufficient explanatory power, often resulting in the discovery of conditional associations, but not causal relationships. The Western research on urban network externalities is relatively early and mainly focuses on the global and regional dimensions, while Chinese scholars focus on the national and regional dimensions. In terms of empirical methods and objects, Chinese scholars have also made some innovations based on the study of world city network. The issues that need further attention in the future include theoretical understanding of urban network externalities, externality measurement methods, and empirical research.
In this article, we define and give some exercises with answers on hyperplane linear subspaces. For instance, a hyperplane is a vector subspace of dimension equal to the dimension of the whole space minus one.

## What is the hyperplane of a vector space?

Definition: Let $E$ be a vector space of finite dimension equal to $n$. A hyperplane linear subspace is a subspace $H$ that coincides with the kernel of a nonzero linear form on $E$. That is, there exists a nonzero linear map $f: E\to\mathbb{K}$ such that $H=\ker(f)$. According to the rank theorem, the rank of a nonzero linear form is equal to one, so $H$ is a hyperplane linear subspace if and only if $\dim(H)=n-1$.

## The equation of a hyperplane

The hyperplanes of a vector space $E$ of dimension $n$ over a field $\mathbb{K}$ are the solution sets of equations of the form $$a_1x_1+\cdots+a_n x_n=0$$ with $(a_1,\cdots,a_n)\in \mathbb{K}^n$ and $(a_1,\cdots,a_n)\neq (0,\cdots,0)$. Remark that the hyperplane is normal to the vector $(a_1,\cdots,a_n)$.

## Exercises on the hyperplane linear subspace

Exercise: Let $V$ be a complex vector space of finite dimension $n\in\mathbb{N}$, let $H$ be a hyperplane of $V$, and let $v\in V$ be a vector. Under what condition are the subspaces $H$ and ${\rm span}(v):=\{\lambda v:\lambda\in\mathbb{C}\}$ complementary in $V$, i.e. $H\oplus{\rm span}(v)=V$?

Solution: We shall discuss two cases.

First case: $v\in H$. Then for any $\lambda\in\mathbb{C},$ $\lambda v\in H$. Thus ${\rm span}(v)\subset H,$ so that $H$ and ${\rm span}(v)$ are not complementary.

Second case: $v\notin H$. First of all, we have $\dim({\rm span}(v))=1$ (note that $v\neq 0$ since $0\in H$). As $H$ is a hyperplane of $V,$ it follows that $\dim(H)=n-1$. Hence \begin{align*} \dim(H)+\dim({\rm span}(v))=n=\dim(V). \end{align*} Now let $x\in H\cap {\rm span}(v)$. Then $x\in H$ and there exists $\lambda\in\mathbb{C}$ such that $x=\lambda v$. We necessarily have $\lambda=0,$ because if not, then $v=\lambda^{-1} x\in H,$ which is absurd. Hence $x=0_V$, and then $H\cap {\rm span}(v)=\{0_V\}$. This shows that $H+{\rm span}(v)=V$ and that the sum is direct. In conclusion, $H$ and ${\rm span}(v)$ are complementary if and only if $v\notin H$.

Exercise: Let $\psi: \mathbb{R}^n\to \mathbb{R}$ be a nonzero linear form and $\Phi$ be an endomorphism of $\mathbb{R}^n$. Prove that the kernel of $\psi$ is stable under $\Phi$, i.e. $\Phi(\ker(\psi))\subset \ker(\psi)$, if and only if there exists a real number $\lambda\in\mathbb{R}$ such that $\psi\circ\Phi=\lambda\psi$.

Solution: Assume that there exists $\lambda\in\mathbb{R}$ such that $\psi\circ\Phi=\lambda\psi$. Let $x\in \ker(\psi)$; we prove that $\Phi(x)\in \ker(\psi)$. In fact, we have $\psi(x)=0$, and on the other hand \begin{align*} \psi(\Phi(x))=\lambda \psi(x)=0. \end{align*} This implies that $\Phi(\ker(\psi))\subset \ker(\psi)$.

Conversely, assume that $\ker(\psi)$ is stable under $\Phi$. Observe that if $x\in \ker(\psi)$ then $\Phi(x)\in\ker(\psi)$, hence $\psi(\Phi(x))=0,$ so that \begin{align*} \psi(\Phi(x))=\lambda \psi(x),\quad \forall x\in \ker(\psi),\;\forall \lambda\in\mathbb{R}. \end{align*} It suffices then to find a real $\lambda$ and a complementary subspace $K$ of $\ker(\psi)$ such that $\psi\circ\Phi=\lambda\psi$ on $K$. According to the rank theorem, $\ker(\psi)$ is a hyperplane, so $\dim(\ker(\psi))=n-1$. Thus any complementary subspace $K$ of $\ker(\psi)$ satisfies $\dim(K)=1$. Take $a\in \mathbb{R}^n$ such that $\psi(a)\neq 0$; such an $a$ exists because $\psi$ is a nonzero form, and then $a\notin \ker(\psi)$. This implies that ${\rm span}(a)\cap \ker(\psi)=\{0\}$. As $\dim({\rm span}(a))=1$ and $\dim(\ker(\psi))+\dim({\rm span}(a))=n$, we get \begin{align*} \ker(\psi)\oplus {\rm span}(a)=\mathbb{R}^n. \end{align*} Then it suffices to find a real $\lambda$ such that $\psi\circ\Phi=\lambda\psi$ on ${\rm span}(a)$; in particular $\psi(\Phi(a))=\lambda \psi(a)$. We choose \begin{align*} \lambda=\frac{\psi(\Phi(a))}{\psi(a)}. \end{align*} With this choice, $\psi\circ\Phi=\lambda\psi$ holds on $\ker(\psi)$ and on ${\rm span}(a)$, hence by linearity on all of $\mathbb{R}^n$, which completes the proof.
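To make the definition and the first exercise concrete, here is a small worked example (the particular numbers are chosen arbitrarily for illustration): in $\mathbb{R}^{3}$, take the linear form $f(x_1,x_2,x_3)=x_1+2x_2-3x_3$. Then $$H=\ker(f)=\{(x_1,x_2,x_3)\in\mathbb{R}^{3}: x_1+2x_2-3x_3=0\}$$ is a hyperplane: $\dim(H)=3-1=2$, and $H$ is normal to the vector $(1,2,-3)$. Since $f(v)=1\neq 0$ for $v=(1,0,0)$, we have $v\notin H$, and therefore $H\oplus{\rm span}(v)=\mathbb{R}^{3}$, exactly as in the first exercise.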
Copied to clipboard ## G = Dic3.D14order 336 = 24·3·7 ### 6th non-split extension by Dic3 of D14 acting via D14/D7=C2 Series: Derived Chief Lower central Upper central Derived series C1 — C42 — Dic3.D14 Chief series C1 — C7 — C21 — C42 — C3×Dic7 — S3×Dic7 — Dic3.D14 Lower central C21 — C42 — Dic3.D14 Upper central C1 — C2 — C22 Generators and relations for Dic3.D14 G = < a,b,c,d | a42=c2=d2=1, b2=a21, bab-1=a13, cac=a29, ad=da, bc=cb, bd=db, dcd=a21c > Subgroups: 428 in 80 conjugacy classes, 32 normal (all characteristic) C1, C2, C2, C3, C4, C22, C22, S3, C6, C6, C7, C2×C4, D4, Q8, Dic3, Dic3, C12, D6, D6, C2×C6, D7, C14, C14, C4○D4, C21, Dic6, C4×S3, D12, C3⋊D4, C3⋊D4, C2×C12, Dic7, Dic7, C28, D14, C2×C14, C2×C14, S3×C7, D21, C42, C42, C4○D12, Dic14, C4×D7, C2×Dic7, C2×Dic7, C7⋊D4, C7×D4, C7×Dic3, C3×Dic7, Dic21, S3×C14, D42, C2×C42, D42D7, S3×Dic7, D21⋊C4, C7⋊D12, C21⋊Q8, C6×Dic7, C7×C3⋊D4, C217D4, Dic3.D14 Quotients: C1, C2, C22, S3, C23, D6, D7, C4○D4, C22×S3, D14, C4○D12, C22×D7, S3×D7, D42D7, C2×S3×D7, Dic3.D14 Smallest permutation representation of Dic3.D14 On 168 points Generators in S168 (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42)(43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84)(85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126)(127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168) (1 134 22 155)(2 147 23 168)(3 160 24 139)(4 131 25 152)(5 144 26 165)(6 157 27 136)(7 128 28 149)(8 141 29 162)(9 154 30 133)(10 167 31 146)(11 138 32 159)(12 151 33 130)(13 164 34 143)(14 135 35 156)(15 148 36 127)(16 161 37 140)(17 132 38 153)(18 145 39 166)(19 158 40 137)(20 129 41 150)(21 142 42 163)(43 99 64 120)(44 112 65 91)(45 125 66 104)(46 96 67 117)(47 109 68 88)(48 122 69 101)(49 93 70 114)(50 106 71 85)(51 119 72 98)(52 90 73 111)(53 103 74 124)(54 116 75 95)(55 87 76 108)(56 100 77 121)(57 113 78 92)(58 126 79 105)(59 97 80 118)(60 110 81 89)(61 123 82 102)(62 94 83 115)(63 107 84 86) (1 99)(2 86)(3 115)(4 102)(5 89)(6 118)(7 105)(8 92)(9 121)(10 108)(11 95)(12 124)(13 111)(14 98)(15 85)(16 114)(17 101)(18 88)(19 117)(20 104)(21 91)(22 120)(23 107)(24 94)(25 123)(26 110)(27 97)(28 126)(29 113)(30 100)(31 87)(32 116)(33 103)(34 90)(35 119)(36 106)(37 93)(38 122)(39 109)(40 96)(41 125)(42 112)(43 155)(44 142)(45 129)(46 158)(47 145)(48 132)(49 161)(50 148)(51 135)(52 164)(53 151)(54 138)(55 167)(56 154)(57 141)(58 128)(59 157)(60 144)(61 131)(62 160)(63 147)(64 134)(65 163)(66 150)(67 137)(68 166)(69 153)(70 140)(71 127)(72 156)(73 143)(74 130)(75 159)(76 146)(77 133)(78 162)(79 149)(80 136)(81 165)(82 152)(83 139)(84 168) (1 64)(2 65)(3 66)(4 67)(5 68)(6 69)(7 70)(8 71)(9 72)(10 73)(11 74)(12 75)(13 76)(14 77)(15 78)(16 79)(17 80)(18 81)(19 82)(20 83)(21 84)(22 43)(23 44)(24 45)(25 46)(26 47)(27 48)(28 49)(29 50)(30 51)(31 52)(32 53)(33 54)(34 55)(35 56)(36 57)(37 58)(38 59)(39 60)(40 61)(41 62)(42 63)(85 141)(86 142)(87 143)(88 144)(89 145)(90 146)(91 147)(92 148)(93 149)(94 150)(95 151)(96 152)(97 153)(98 154)(99 155)(100 156)(101 157)(102 158)(103 159)(104 160)(105 161)(106 162)(107 163)(108 164)(109 165)(110 166)(111 167)(112 168)(113 127)(114 128)(115 129)(116 130)(117 131)(118 132)(119 133)(120 134)(121 135)(122 
136)(123 137)(124 138)(125 139)(126 140) G:=sub<Sym(168)| (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42)(43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126)(127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168), (1,134,22,155)(2,147,23,168)(3,160,24,139)(4,131,25,152)(5,144,26,165)(6,157,27,136)(7,128,28,149)(8,141,29,162)(9,154,30,133)(10,167,31,146)(11,138,32,159)(12,151,33,130)(13,164,34,143)(14,135,35,156)(15,148,36,127)(16,161,37,140)(17,132,38,153)(18,145,39,166)(19,158,40,137)(20,129,41,150)(21,142,42,163)(43,99,64,120)(44,112,65,91)(45,125,66,104)(46,96,67,117)(47,109,68,88)(48,122,69,101)(49,93,70,114)(50,106,71,85)(51,119,72,98)(52,90,73,111)(53,103,74,124)(54,116,75,95)(55,87,76,108)(56,100,77,121)(57,113,78,92)(58,126,79,105)(59,97,80,118)(60,110,81,89)(61,123,82,102)(62,94,83,115)(63,107,84,86), (1,99)(2,86)(3,115)(4,102)(5,89)(6,118)(7,105)(8,92)(9,121)(10,108)(11,95)(12,124)(13,111)(14,98)(15,85)(16,114)(17,101)(18,88)(19,117)(20,104)(21,91)(22,120)(23,107)(24,94)(25,123)(26,110)(27,97)(28,126)(29,113)(30,100)(31,87)(32,116)(33,103)(34,90)(35,119)(36,106)(37,93)(38,122)(39,109)(40,96)(41,125)(42,112)(43,155)(44,142)(45,129)(46,158)(47,145)(48,132)(49,161)(50,148)(51,135)(52,164)(53,151)(54,138)(55,167)(56,154)(57,141)(58,128)(59,157)(60,144)(61,131)(62,160)(63,147)(64,134)(65,163)(66,150)(67,137)(68,166)(69,153)(70,140)(71,127)(72,156)(73,143)(74,130)(75,159)(76,146)(77,133)(78,162)(79,149)(80,136)(81,165)(82,152)(83,139)(84,168), (1,64)(2,65)(3,66)(4,67)(5,68)(6,69)(7,70)(8,71)(9,72)(10,73)(11,74)(12,75)(13,76)(14,77)(15,78)(16,79)(17,80)(18,81)(19,82)(20,83)(21,84)(22,43)(23,44)(24,45)(25,46)(26,47)(27,48)(28,49)(29,50)(30,51)(31,52)(32,53)(33,54)(34,55)(35,56)(36,57)(37,58)(38,59)(39,60)(40,61)(41,62)(42,63)(85,141)(86,142)(87,143)(88,144)(89,145)(90,146)(91,147)(92,148)(93,149)(94,150)(95,151)(96,152)(97,153)(98,154)(99,155)(100,156)(101,157)(102,158)(103,159)(104,160)(105,161)(106,162)(107,163)(108,164)(109,165)(110,166)(111,167)(112,168)(113,127)(114,128)(115,129)(116,130)(117,131)(118,132)(119,133)(120,134)(121,135)(122,136)(123,137)(124,138)(125,139)(126,140)>; G:=Group( (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42)(43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126)(127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168), 
(1,134,22,155)(2,147,23,168)(3,160,24,139)(4,131,25,152)(5,144,26,165)(6,157,27,136)(7,128,28,149)(8,141,29,162)(9,154,30,133)(10,167,31,146)(11,138,32,159)(12,151,33,130)(13,164,34,143)(14,135,35,156)(15,148,36,127)(16,161,37,140)(17,132,38,153)(18,145,39,166)(19,158,40,137)(20,129,41,150)(21,142,42,163)(43,99,64,120)(44,112,65,91)(45,125,66,104)(46,96,67,117)(47,109,68,88)(48,122,69,101)(49,93,70,114)(50,106,71,85)(51,119,72,98)(52,90,73,111)(53,103,74,124)(54,116,75,95)(55,87,76,108)(56,100,77,121)(57,113,78,92)(58,126,79,105)(59,97,80,118)(60,110,81,89)(61,123,82,102)(62,94,83,115)(63,107,84,86), (1,99)(2,86)(3,115)(4,102)(5,89)(6,118)(7,105)(8,92)(9,121)(10,108)(11,95)(12,124)(13,111)(14,98)(15,85)(16,114)(17,101)(18,88)(19,117)(20,104)(21,91)(22,120)(23,107)(24,94)(25,123)(26,110)(27,97)(28,126)(29,113)(30,100)(31,87)(32,116)(33,103)(34,90)(35,119)(36,106)(37,93)(38,122)(39,109)(40,96)(41,125)(42,112)(43,155)(44,142)(45,129)(46,158)(47,145)(48,132)(49,161)(50,148)(51,135)(52,164)(53,151)(54,138)(55,167)(56,154)(57,141)(58,128)(59,157)(60,144)(61,131)(62,160)(63,147)(64,134)(65,163)(66,150)(67,137)(68,166)(69,153)(70,140)(71,127)(72,156)(73,143)(74,130)(75,159)(76,146)(77,133)(78,162)(79,149)(80,136)(81,165)(82,152)(83,139)(84,168), (1,64)(2,65)(3,66)(4,67)(5,68)(6,69)(7,70)(8,71)(9,72)(10,73)(11,74)(12,75)(13,76)(14,77)(15,78)(16,79)(17,80)(18,81)(19,82)(20,83)(21,84)(22,43)(23,44)(24,45)(25,46)(26,47)(27,48)(28,49)(29,50)(30,51)(31,52)(32,53)(33,54)(34,55)(35,56)(36,57)(37,58)(38,59)(39,60)(40,61)(41,62)(42,63)(85,141)(86,142)(87,143)(88,144)(89,145)(90,146)(91,147)(92,148)(93,149)(94,150)(95,151)(96,152)(97,153)(98,154)(99,155)(100,156)(101,157)(102,158)(103,159)(104,160)(105,161)(106,162)(107,163)(108,164)(109,165)(110,166)(111,167)(112,168)(113,127)(114,128)(115,129)(116,130)(117,131)(118,132)(119,133)(120,134)(121,135)(122,136)(123,137)(124,138)(125,139)(126,140) ); G=PermutationGroup([[(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42),(43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84),(85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126),(127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168)], [(1,134,22,155),(2,147,23,168),(3,160,24,139),(4,131,25,152),(5,144,26,165),(6,157,27,136),(7,128,28,149),(8,141,29,162),(9,154,30,133),(10,167,31,146),(11,138,32,159),(12,151,33,130),(13,164,34,143),(14,135,35,156),(15,148,36,127),(16,161,37,140),(17,132,38,153),(18,145,39,166),(19,158,40,137),(20,129,41,150),(21,142,42,163),(43,99,64,120),(44,112,65,91),(45,125,66,104),(46,96,67,117),(47,109,68,88),(48,122,69,101),(49,93,70,114),(50,106,71,85),(51,119,72,98),(52,90,73,111),(53,103,74,124),(54,116,75,95),(55,87,76,108),(56,100,77,121),(57,113,78,92),(58,126,79,105),(59,97,80,118),(60,110,81,89),(61,123,82,102),(62,94,83,115),(63,107,84,86)], 
[(1,99),(2,86),(3,115),(4,102),(5,89),(6,118),(7,105),(8,92),(9,121),(10,108),(11,95),(12,124),(13,111),(14,98),(15,85),(16,114),(17,101),(18,88),(19,117),(20,104),(21,91),(22,120),(23,107),(24,94),(25,123),(26,110),(27,97),(28,126),(29,113),(30,100),(31,87),(32,116),(33,103),(34,90),(35,119),(36,106),(37,93),(38,122),(39,109),(40,96),(41,125),(42,112),(43,155),(44,142),(45,129),(46,158),(47,145),(48,132),(49,161),(50,148),(51,135),(52,164),(53,151),(54,138),(55,167),(56,154),(57,141),(58,128),(59,157),(60,144),(61,131),(62,160),(63,147),(64,134),(65,163),(66,150),(67,137),(68,166),(69,153),(70,140),(71,127),(72,156),(73,143),(74,130),(75,159),(76,146),(77,133),(78,162),(79,149),(80,136),(81,165),(82,152),(83,139),(84,168)], [(1,64),(2,65),(3,66),(4,67),(5,68),(6,69),(7,70),(8,71),(9,72),(10,73),(11,74),(12,75),(13,76),(14,77),(15,78),(16,79),(17,80),(18,81),(19,82),(20,83),(21,84),(22,43),(23,44),(24,45),(25,46),(26,47),(27,48),(28,49),(29,50),(30,51),(31,52),(32,53),(33,54),(34,55),(35,56),(36,57),(37,58),(38,59),(39,60),(40,61),(41,62),(42,63),(85,141),(86,142),(87,143),(88,144),(89,145),(90,146),(91,147),(92,148),(93,149),(94,150),(95,151),(96,152),(97,153),(98,154),(99,155),(100,156),(101,157),(102,158),(103,159),(104,160),(105,161),(106,162),(107,163),(108,164),(109,165),(110,166),(111,167),(112,168),(113,127),(114,128),(115,129),(116,130),(117,131),(118,132),(119,133),(120,134),(121,135),(122,136),(123,137),(124,138),(125,139),(126,140)]]) 45 conjugacy classes class 1 2A 2B 2C 2D 3 4A 4B 4C 4D 4E 6A 6B 6C 7A 7B 7C 12A 12B 12C 12D 14A 14B 14C 14D 14E 14F 14G 14H 14I 21A 21B 21C 28A 28B 28C 42A ··· 42I order 1 2 2 2 2 3 4 4 4 4 4 6 6 6 7 7 7 12 12 12 12 14 14 14 14 14 14 14 14 14 21 21 21 28 28 28 42 ··· 42 size 1 1 2 6 42 2 6 7 7 14 42 2 2 2 2 2 2 14 14 14 14 2 2 2 4 4 4 12 12 12 4 4 4 12 12 12 4 ··· 4 45 irreducible representations dim 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 4 4 4 4 type + + + + + + + + + + + + + + + + - + image C1 C2 C2 C2 C2 C2 C2 C2 S3 D6 D6 D7 C4○D4 D14 D14 D14 C4○D12 S3×D7 D4⋊2D7 C2×S3×D7 Dic3.D14 kernel Dic3.D14 S3×Dic7 D21⋊C4 C7⋊D12 C21⋊Q8 C6×Dic7 C7×C3⋊D4 C21⋊7D4 C2×Dic7 Dic7 C2×C14 C3⋊D4 C21 Dic3 D6 C2×C6 C7 C22 C3 C2 C1 # reps 1 1 1 1 1 1 1 1 1 2 1 3 2 3 3 3 4 3 3 3 6 Matrix representation of Dic3.D14 in GL4(𝔽337) generated by 0 1 0 0 336 1 0 0 0 0 228 336 0 0 229 336 , 189 0 0 0 0 189 0 0 0 0 228 336 0 0 85 109 , 322 322 0 0 307 15 0 0 0 0 1 0 0 0 0 1 , 198 278 0 0 59 139 0 0 0 0 336 0 0 0 0 336 G:=sub<GL(4,GF(337))| [0,336,0,0,1,1,0,0,0,0,228,229,0,0,336,336],[189,0,0,0,0,189,0,0,0,0,228,85,0,0,336,109],[322,307,0,0,322,15,0,0,0,0,1,0,0,0,0,1],[198,59,0,0,278,139,0,0,0,0,336,0,0,0,0,336] >; Dic3.D14 in GAP, Magma, Sage, TeX {\rm Dic}_3.D_{14} % in TeX G:=Group("Dic3.D14"); // GroupNames label G:=SmallGroup(336,155); // by ID G=gap.SmallGroup(336,155); # by ID G:=PCGroup([6,-2,-2,-2,-2,-3,-7,48,116,490,10373]); // Polycyclic G:=Group<a,b,c,d|a^42=c^2=d^2=1,b^2=a^21,b*a*b^-1=a^13,c*a*c=a^29,a*d=d*a,b*c=c*b,b*d=d*b,d*c*d=a^21*c>; // generators/relations ׿ × 𝔽
# Compute Capability

For example, for a device of compute capability 5.3:

- 5: the SM major version number, which indicates the Maxwell architecture.
- 3: the SM minor version number, which carries some incremental optimizations within that architecture.

## Compute Capability

The compute capability of a device is represented by a version number, also sometimes called its "SM version". This version number identifies the features supported by the GPU hardware and is used by applications at runtime to determine which hardware features and/or instructions are available on the present GPU.

The compute capability comprises a major revision number X and a minor revision number Y and is denoted by X.Y. Devices with the same major revision number are of the same core architecture. The major revision number is 7 for devices based on the Volta architecture, 6 for devices based on the Pascal architecture, 5 for devices based on the Maxwell architecture, 3 for devices based on the Kepler architecture, 2 for devices based on the Fermi architecture, and 1 for devices based on the Tesla architecture. The minor revision number corresponds to an incremental improvement to the core architecture, possibly including new features.

CUDA-Enabled GPUs lists all CUDA-enabled devices along with their compute capability. Compute Capabilities gives the technical specifications of each compute capability.

Note: The compute capability version of a particular GPU should not be confused with the CUDA version (e.g., CUDA 7.5, CUDA 8, CUDA 9), which is the version of the CUDA software platform. The CUDA platform is used by application developers to create applications that run on many generations of GPU architectures, including future GPU architectures yet to be invented. While new versions of the CUDA platform often add native support for a new GPU architecture by supporting the compute capability version of that architecture, new versions of the CUDA platform typically also include software features that are independent of hardware generation. The Tesla and Fermi architectures are no longer supported starting with CUDA 7.0 and CUDA 9.0, respectively.

# Examples of Typical Versions

## Compute Capability 3.x

A multiprocessor consists of:

- 192 CUDA cores for arithmetic operations (see Arithmetic Instructions for throughputs of arithmetic operations),
- 32 special function units for single-precision floating-point transcendental functions,
- 4 warp schedulers.

When a multiprocessor is given warps to execute, it first distributes them among the four schedulers. Then, at every instruction issue time, each scheduler issues two independent instructions for one of its assigned warps that is ready to execute, if any.

A multiprocessor has a read-only constant cache that is shared by all functional units and speeds up reads from the constant memory space, which resides in device memory. There is an L1 cache for each multiprocessor and an L2 cache shared by all multiprocessors. The L1 cache is used to cache accesses to local memory, including temporary register spills. The L2 cache is used to cache accesses to local and global memory. The cache behavior (e.g., whether reads are cached in both L1 and L2 or in L2 only) can be partially configured on a per-access basis using modifiers to the load or store instruction. Some devices of compute capability 3.5 and devices of compute capability 3.7 allow opt-in to caching of global memory in both L1 and L2 via compiler options.
The same on-chip memory is used for both L1 and shared memory: It can be configured as 48 KB of shared memory and 16 KB of L1 cache or as 16 KB of shared memory and 48 KB of L1 cache or as 32 KB of shared memory and 32 KB of L1 cache, using cudaFuncSetCacheConfig()/cuFuncSetCacheConfig():

```cpp
// Device code
__global__ void MyKernel()
{
    ...
}

// Host code (Runtime API)
// cudaFuncCachePreferShared: shared memory is 48 KB
// cudaFuncCachePreferEqual:  shared memory is 32 KB
// cudaFuncCachePreferL1:     shared memory is 16 KB
// cudaFuncCachePreferNone:   no preference
cudaFuncSetCacheConfig(MyKernel, cudaFuncCachePreferShared);
```

The L1 cache is private to each multiprocessor and serves as either shared memory or first-level cache, whereas the L2 cache is shared by all multiprocessors and acts as the second-level cache for local and global memory. The L1 cache is configurable: the split between shared memory and L1 cache can be adjusted.

## Compute Capability 5.x

A multiprocessor consists of:

- 128 CUDA cores for arithmetic operations (see Arithmetic Instructions for throughputs of arithmetic operations),
- 32 special function units for single-precision floating-point transcendental functions,
- 4 warp schedulers.
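The documentation quoted above notes that applications query the compute capability at runtime to decide which hardware features to use. As a minimal sketch of how a host program might do that (added here for illustration; it is not from the original page), the standard CUDA runtime API exposes the major and minor revision numbers through cudaGetDeviceProperties:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major and prop.minor together form the compute capability "X.Y"
        std::printf("Device %d (%s): compute capability %d.%d\n",
                    dev, prop.name, prop.major, prop.minor);
        if (prop.major == 5) {
            std::printf("  Maxwell-class device\n");
        }
    }
    return 0;
}
```

Compiled with nvcc, this prints one line per device; on a Maxwell-class card the reported major version would be 5, matching the example at the top of this section.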
# How is the Moon visible while the Sun is still shining? Simple explanation please! [duplicate] Sometimes I have observed that the Moon is in the sky at noon while the Sun is also shining. How is this possible? I have also observed a dull Moon still in the sky between 7:00 AM and 8:00 AM while the Sun is rising. My local time is UTC+5. If possible, please explain it in simple words without using advanced astronomy terminology. Thanks! • @SteveLinton I disagree! The OP has specifically asked "please explain it in simple words without using advanced astronomy terminology". An answer that starts with "The visual geometric albedo of the full moon is 12.5%, but much less at other phases." and continues with "and since the intensity of light falls off as $\propto \frac{1}{r^{2}}$ (inverse-square law)" doesn't really fulfill that, so it's not a good duplicate because it doesn't really answer the OP's question as asked. – uhoh May 9 '19 at 13:15 • Are you wondering why the Moon is sometimes above the horizon at the same time as the Sun, or why it's bright enough to see? – Steve Linton May 9 '19 at 13:18 • @SteveLinton More about the technical definition of geometric albedo in this answer. Have a look at my answer; if you feel that it better answers the OP's question than your linked duplicate, you can consider withdrawing the close vote. There are several users here that might be configured to auto-close anything that has one plausible close vote. In this case I don't think it helps. Update: too late, the insta-close has begun. – uhoh May 9 '19 at 13:19 • @SteveLinton if you are trying to clarify the meaning of the question, how can you simultaneously vote to close as duplicate? That's putting the cart before the horse. – uhoh May 9 '19 at 13:29 • This is like asking why you can see your book while you have a lamp on over your desk. Think about it for a bit... – Carl Witthoft May 9 '19 at 17:36 If you turn your back to the sun and look at a building, the sun shines on the building and you can see it. If the building is very far away but very big, you can still see it because the sun is still shining on it during the day. If you think of the moon as a very large object very far away in the same direction as that building, you can see that the sun lights it up. The only time the sun doesn't light it up is when the moon is on the side of the earth towards the sun. Otherwise, the sun will always light up the moon. You can see it because it's bright enough that the sun doesn't wash it out like it does the stars. Let's look at what the difference is between day and night. In the daytime, the air that's between you and the Moon is in the sunlight, but at night, that air is in the dark. Day or night though, the air is still transparent (you can see the Moon through it), and the light from the moon is mostly unaffected. When it's in sunlight, the air scatters some of the sunlight in your direction. For the parts of the Moon that are bright, they are so bright that they look white. But the dark parts of the moon don't look black to you on Earth because there's still the fainter blue-sky light scattered by the air. [Image: the Moon in a daytime blue sky (source linked in the original answer)] ## Moon at sunset with faint blue-sky light mixed with Earthshine [Image (source linked in the original answer)] Let's look at our solar system from above: The night side of Earth is the hemisphere facing away from the Sun. The Moon orbits around the Earth, on a path that takes about 28 days. So for half of that orbit (14 days, from Last Quarter to New Moon to First Quarter), the Moon is visible from the day side of the Earth.
• Although your graphic shows the moon in the night sky... :-) – Alexis Wilke May 9 '19 at 17:24 • not any more ;) – Hobbes May 9 '19 at 17:33 • @Hobbes I've proposed that the question be reopened on the basis that the question asks for a simple, non-technical explanation and that can't be found at the other question. I see you've also taken the time to write a simple explanation, perhaps you could consider a re-open vote as well? – uhoh May 9 '19 at 23:58 • IMO it's pretty well covered by the accepted answer in the linked question. Mine just duplicates that. – Hobbes May 10 '19 at 8:21
## Heisenberg interaction Hamiltonian for square lattice

Hi, I just started self-studying solid state physics and I'm having trouble figuring out what the Hamiltonian for a square lattice would be when considering the Heisenberg interaction. I reformulated the dot product as $\frac{1}{2}\left(S_i^+ S_{i+\delta}^- + S_i^- S_{i+\delta}^+\right) + S_i^z S_{i+\delta}^z$ and used $S_i^z = S - a_i^\dagger a_i$, $S_i^+ = \sqrt{2S}\,a_i$, ..., $S_{i+\delta}^z = -S + a_{i+\delta}^\dagger a_{i+\delta}$, etc. But among the terms of the Hamiltonian I am getting $a_i a_{i+\delta} + a_{i+\delta}^\dagger a_i^\dagger$, .... Don't these terms violate momentum conservation? What is the correct Heisenberg interaction Hamiltonian for the square lattice?

Firstly, let's correct your terminology a little bit. The Heisenberg interaction is just: $$\mathcal{H}=\mathcal{J}\sum_{i,\delta} \mathbf{S}_i \cdot \mathbf{S}_{i+\delta}$$ You have rewritten it in terms of $S^z, S^+$ and $S^-$ operators, which is fine. Your next step is to write it with respect to bosonic operators $a, a^\dagger$ in the Holstein-Primakoff representation, in which case the bosonic operators create and destroy spin waves. It appears you have taken $\mathcal{J}$ to be positive, in which case you have the antiferromagnetic model where spins on neighbouring sites prefer to be antiparallel. This is implicit in your choice of $S$ and $-S$ in the H-P representation. So far your bosonic operators are in the position representation. When you work all this out, you get terms with $a^\dagger_i a^\dagger_{i+\delta}$. These do not violate momentum conservation because they are still in the position representation; if you Fourier transform them you'll see there is no problem. You are SUPPOSED to get them. This is what makes a ferromagnet (J<0) different from an antiferromagnet (J>0). In order to diagonalize the Hamiltonian, you must do two steps. 1. Fourier transform it. 2. Use a Bogoliubov transformation to get rid of the $a^\dagger_i a^\dagger_{i+\delta}$ terms. Google this if you don't know what it is.
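For concreteness, here is a schematic of what those two steps produce for this model (a sketch at leading order in $1/S$ in the one-boson treatment; prefactors depend on convention). After the Fourier transform the Hamiltonian takes the form
$$H \approx E_0 + \mathcal{J}Sz\sum_{\mathbf{k}}\Big[a^\dagger_{\mathbf{k}}a_{\mathbf{k}} + \tfrac{1}{2}\gamma_{\mathbf{k}}\big(a^\dagger_{\mathbf{k}}a^\dagger_{-\mathbf{k}} + a_{\mathbf{k}}a_{-\mathbf{k}}\big)\Big], \qquad \gamma_{\mathbf{k}}=\frac{1}{z}\sum_{\boldsymbol{\delta}}e^{i\mathbf{k}\cdot\boldsymbol{\delta}},$$
with $z=4$ nearest neighbours on the square lattice. The anomalous terms create or destroy a pair of spin waves with momenta $\mathbf{k}$ and $-\mathbf{k}$, so total momentum is conserved. A Bogoliubov transformation then brings this to the form $\sum_{\mathbf{k}}\omega_{\mathbf{k}}\,\alpha^\dagger_{\mathbf{k}}\alpha_{\mathbf{k}}$ (plus a constant) with $\omega_{\mathbf{k}}=\mathcal{J}Sz\sqrt{1-\gamma_{\mathbf{k}}^2}$.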
Question

# 20 mL of a mixture of $$CO$$ and $${ C }_{ 2 }{ H }_{ 4 }$$ was exploded with 50 mL of $${ O }_{ 2 }$$. The volume after the explosion was 45 mL; on shaking with $$NaOH$$ solution, only 15 mL of $${ O }_{ 2 }$$ was left behind. What were the volumes of $$CO$$ and $${ C }_{ 2 }{ H }_{ 4 }$$ in the mixture?

A $$CO=5\ mL,\ { C }_{ 2 }{ H }_{ 4 }=15\ mL$$
B $$CO=10\ mL,\ { C }_{ 2 }{ H }_{ 4 }=10\ mL$$
C $$CO=8\ mL,\ { C }_{ 2 }{ H }_{ 4 }=12\ mL$$
D $$CO=15\ mL,\ { C }_{ 2 }{ H }_{ 4 }=5\ mL$$

Solution

## The correct option is B $$CO=10\ mL,\ { C }_{ 2 }{ H }_{ 4 }=10\ mL$$

Solution:- Let the mixture contain $$x \; mL$$ of $${C}_{2}{H}_{4}$$ and $$y \; mL$$ of $$CO$$.
$$\underset{x}{{C}_{2}{H}_{4}} + 3 {O}_{2} \longrightarrow 2 \underset{2x}{C{O}_{2}} + 2 {H}_{2}O$$
$$\underset{y}{CO} + \cfrac{1}{2} {O}_{2} \longrightarrow \underset{y}{C{O}_{2}}$$
$$NaOH$$ absorbs the $$C{O}_{2}$$ produced in the reaction.
Total amount of $$C{O}_{2}$$ produced $$= 2x + y = \left( 45 - 15 \right)$$
$$\Rightarrow 2x + y = 30 ..... \left( 1 \right)$$
Given that:-
$$x + y = 20 ..... \left( 2 \right)$$
Subtracting $${eq}^{n} \left( 2 \right)$$ from $${eq}^{n} \left( 1 \right)$$, we have
$$x = 10 \; mL$$
Substituting the value of $$x$$ in $${eq}^{n} \left( 2 \right)$$, we have
$$10 + y = 20$$
$$\Rightarrow y = 10 \; mL$$
Check: the $${O}_{2}$$ consumed is $$3x + \cfrac{y}{2} = 30 + 5 = 35 \; mL$$, consistent with $$50 - 15 = 35 \; mL$$.
Hence the volume of $$CO$$ and $${C}_{2}{H}_{4}$$ in the mixture is $$10 \; mL$$ and $$10 \; mL$$ respectively.
# Falling through Earth

Imagine a vertical tunnel passing through the centre of the Earth, providing a direct link between opposite points on the planet's surface. If you jumped in, how long would it take you to reach the other end of the tunnel?

To work out the equation of motion governing your descent (and subsequent ascent!), we need to know the internal gravitational field of the Earth. An analytic solution can be found by making some simplifying assumptions:

• the Earth is spherical
• the density of the Earth is uniform

The first assumption is reasonable, since the Earth is remarkably spherical for such a vast object. It is only 43 kilometres broader across its equator than between its poles, a distance which constitutes only 0.3% of the Earth's diameter. The second assumption requires a greater stretch of the imagination. As you might expect, the Earth's density varies with depth; the Earth's core is thought to be about 6 times denser than its crust. This is because deeper layers of the Earth must support, and are therefore compressed by, the weight of the layers above them. Despite these simplifications, our answer should be the right order of magnitude.

Let $\bold{g}$ be the internal gravitational field of Earth. The gravitational field satisfies a form of Gauss' law, a first-order differential equation which says that

$\nabla\cdot\bold{g}(\bold{r})=-4\pi G\rho(\bold{r})$

where $G$ is the gravitational constant and $\rho$ is the local mass density. The symbol $\nabla\cdot$ represents the divergence operator, whose form is very simple after we impose the two assumptions above.

Since the Earth is assumed to be spherical, the gravitational field must be spherically symmetric. It is therefore not a function of our latitude or longitude, but only of our distance from Earth's centre, which we will call $r$:

$\bold{g}=\bold{g}(r)$

This spherical symmetry also means the gravitational field must point purely in the radial direction; that is, directly towards Earth's centre. These two assumptions reduce the divergence operator to

$\displaystyle \nabla\cdot\equiv\frac{1}{r^2}\frac{d}{dr}\left(r^2 \cdot\right)$

We also impose our dodgy assumption that the density of Earth $\rho$ is constant, reducing Gauss' law to

$\displaystyle \frac{1}{r^2}\frac{d}{dr}\left(r^2 g\right)=-4\pi G\rho$

Let's guess a solution of the form $g=\alpha r$ where $\alpha$ is a real constant. Substituting this ansatz in gives

$\displaystyle \frac{1}{r^2}\frac{d}{dr}\left(\alpha r^3\right)=-4\pi G\rho$

$3\alpha=-4\pi G\rho$

Hence $\alpha=-\frac{4}{3}\pi G\rho$ and our solution is

$g(r)=-\frac{4}{3}\pi G\rho r$

(The general solution also contains a term proportional to $1/r^2$, but this would blow up at the Earth's centre, so its coefficient must be zero.)

Therefore inside a spherical planet of uniform density, the strength of the gravitational field varies linearly with depth. You will move as if attached to a Hookean spring anchored at Earth's centre.

Given the gravitational field, we can work out the force you experience as you fall:

$F=mg$

Using Newton's second law, we can form the equation of motion:

$m\ddot{r}=-\frac{4}{3}m\pi G\rho r$

$\ddot{r}=-\frac{4}{3}\pi G\rho r$

Here we can group the constants together by defining a new quantity $\omega_0$, given by $\omega_0^2=\frac{4}{3}\pi G\rho$, reducing the differential equation to

$\ddot{r}+\omega_0^2 r=0$

The most general solution to this equation is one of the form

$r(t)=A\cos\omega_0 t+B\sin\omega_0 t$

where $A$ and $B$ are real constants.
Stepping into the tunnel from the Earth’s surface sets $r(0)=r_0$ $\dot{r}(0)=0$ where $r_0$ is Earth’s radius. This sets $A=r_0$ and $B=0$, hence $r(t)=r_0\cos\omega_0 t$ With this equation we can answer the question: how long does it take to traverse the Earth? The time taken to reach the end of the tunnel $\tau$ is half the time taken to complete one full oscillation, hence $\displaystyle \tau=\frac{1}{2}\frac{2\pi}{\omega_0}$ $\displaystyle \tau=\frac{1}{2}2\pi\sqrt{\frac{3}{4\pi G\rho}}$ $\displaystyle \tau=\sqrt{\frac{3\pi}{4G\rho}}$ Let’s plug in some numbers. Using the mean density of the Earth, we are given the answer $\tau=42\,\text{minutes}$ to two significant figures. Pretty speedy! You could get from Spain to New Zealand in less than an hour. Why you need to make such a quick escape is your own business. Here’s another question we can answer easily with the expression for $r$ above: how fast are you travelling as you pass through the Earth’s centre? Differentiating $r$ with respect to time gives your velocity: $\dot{r}=-r_0\omega_0\sin\omega_0 t$ Hence your top speed is $\displaystyle r_0\omega_0=\sqrt{\frac{4\pi G\rho r_0^2}{3}}\approx 7 920\,\text{ms}^{-1}$ This is slightly faster than the speed at which the International Space Station orbits Earth. But don’t worry! According to our (incorrect) equation, you will pop out the other side of the planet with minimal speed. Get digging folks!
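As a quick numerical check of the two results above, here is a short Python sketch (an addition of mine; it simply plugs standard values of $G$, the Earth's mean density and mean radius into the formulas derived in this post):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
rho = 5515.0     # mean density of the Earth, kg/m^3
r0 = 6.371e6     # mean radius of the Earth, m

# Half-period of the oscillation: the time to fall through the tunnel
tau = math.sqrt(3 * math.pi / (4 * G * rho))

# Angular frequency and top speed at the centre
omega0 = math.sqrt(4 * math.pi * G * rho / 3)
v_max = r0 * omega0

print(f"tau   = {tau / 60:.1f} minutes")   # about 42 minutes
print(f"v_max = {v_max:.0f} m/s")          # about 7.9 km/s
```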
# 23.6. Exercises

1. Write equivalent code using map instead of the manual accumulation below and assign it to the variable test.
2. Use manual accumulation to define the lengths function below (a sketch of these three patterns appears after this list).
3. Now define lengths using map instead.
4. Now define lengths using a list comprehension instead.
5. Write a function called positives_Acc that receives a list of numbers as the input (like [3, -1, 5, 7]) and returns a list of only the positive numbers, [3, 5, 7], via manual accumulation.
6. Write a function called positives_Fil that receives a list of things as the input and returns a list of only the positive things, [3, 5, 7], using the filter function.
7. Write a function called positives_Li_Com that receives a list of things as the input and returns a list of only the positive things, [3, 5, 7], using a list comprehension.
8. Define longwords using manual accumulation.
9. Define longwords using filter.
10. Define longwords using a list comprehension.
11. Write a function called longlengths that returns the lengths of those strings that have at least 4 characters. Try it with a list comprehension.
12. Write a function called longlengths that returns the lengths of those strings that have at least 4 characters. Try it using map and filter.
13. Write a function that takes a list of numbers and returns the sum of the squares of all the numbers. Try it using an accumulator pattern.
14. Write a function that takes a list of numbers and returns the sum of the squares of all the numbers. Try it using map and sum.
15. Use the zip function to take the lists below and turn them into a list of tuples, with all the first items in the first tuple, etc.
16. Use zip and map or a list comprehension to make a list consisting of the maximum value for each position. For L1, L2, and L3, you would end up with a list [4, 5, 3, 5].
17. Write code to assign to the variable compri_sample all the values of the key name in the dictionary tester if they are Juniors. Do this using list comprehension.
18. Challenge: The nested for loop given takes in a list of lists and combines the elements into a single list. Do the same thing using a list comprehension for the list L. Assign it to the variable result2.
19. Challenge: Write code to assign to the variable class_sched all the values of the key important classes. Do this using list comprehension.
20. Challenge: Below, we have provided a list of lists that contain numbers. Using list comprehension, create a new list threes that contains all the numbers from the original list that are divisible by 3. This can be accomplished in one line of code.
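For reference, here is one possible sketch (mine, not the book's solution code) of the three patterns that exercises 2–4 practise, applied to a toy list of strings:

```python
words = ["alpha", "be", "gamma", "pi"]

# 1. Manual accumulation
def lengths_acc(items):
    result = []
    for item in items:
        result.append(len(item))
    return result

# 2. Using map
def lengths_map(items):
    return list(map(len, items))

# 3. Using a list comprehension
def lengths_comp(items):
    return [len(item) for item in items]

print(lengths_acc(words))   # [5, 2, 5, 2]
print(lengths_map(words))   # [5, 2, 5, 2]
print(lengths_comp(words))  # [5, 2, 5, 2]
```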
two limit questions related to $\sin$ I stuck with these 2 limits, can you help me please? $1.\displaystyle\quad \lim_{n \to \infty }\frac{\sin1+2\sin\frac{1}{2}+3\sin\frac{1}{3}+\cdots+n\sin\frac{1}{n}}{n}$ $2.\displaystyle\quad \lim_{n \to \infty }\frac{n}{\frac{1}{\sin1}+\frac{1/2}{\sin1/2}+\frac{1/3}{\sin1/3}+\cdots+\frac{1/n}{\sin1/n}}$ - try to prove: if $\lim_{n\to\infty}a_n=1$ then $\lim(a_1+\dots+a_n)/n=1$ –  user8268 Nov 6 '12 at 8:53 @user8268 This is exactly the point where i stuck, how can I to prove that $\sum_{i=1}^{n}n\sin\frac{1}{n}=1$? –  Tina Nov 6 '12 at 9:35 @Tina : It's not true that $\displaystyle\sum_{i=1}^n n\sin\frac1n=1$. But it is true that $\displaystyle\lim_{n\to\infty}n\sin\frac1n=1$. That's what user8268 was suggesting you use. –  Michael Hardy Nov 6 '12 at 10:29 The first one: math.stackexchange.com/questions/390115/… –  Martin Sleziak Sep 3 '14 at 14:45 We will use unnecessarily explicit inequalities to prove the result. In the first limit, the general term on top can be rewritten as $\dfrac{\sin(1/k)}{1/k}$. This reminds us of the $\frac{\sin x}{x}$ whose limit as $x\to 0$ we needed in beginning calculus. Note that for $0\lt x\le 1$, the power series $$x-\frac{x^3}{3!}+\frac{x^5}{5!} -\frac{x^7}{7!}+\cdots$$ for $\sin x$ is an alternating series. It follows that for $0\lt x\le 1$, $$x-\frac{x^3}{6}\lt \sin x\lt x.$$ and therefore $$1-\frac{x^2}{6}\lt \frac{\sin x}{x}\lt 1.$$ Put $x=1/k$. We get $$1-\frac{1}{6k^2}\lt \frac{\sin(1/k)}{1/k} \lt 1.\tag{1}$$ Add up, $k=1$ to $k=n$, and divide by $n$ Recall that $$\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\cdots =\frac{\pi^2}{6}.$$ We find that $$1-\frac{\pi^2}{36n}\lt \frac{\sin1+2\sin\frac{1}{2}+3\sin\frac{1}{3}+\cdots+n\sin\frac{1}{n}}{n}\lt 1.$$ From this, it follows immediately that our limit is $1$. A very similar argument works for the second limit that was asked about. It is convenient to consider instead the reciprocal, and calculate $$\lim_{n \to \infty }\frac{\frac{1}{\sin1}+\frac{1/2}{\sin1/2}+\frac{1/3}{\sin1/3}+\cdots+\frac{1/n}{\sin1/n}}{n}.$$ We can then use the inequality $$1\lt \frac{1/k}{\sin(1/k)} \lt \frac{1}{1-\frac{1}{6k^2}},$$ which is simple to obtain from the Inequalities $(1)$. Having the $1-\frac{1}{6k^2}$ in the denominator is inconvenient, so we can for example use the inequality $\dfrac{1}{1-\frac{1}{6k^2}}\lt 1+\dfrac{1}{k^2}$ to push through almost the same proof as the first one. - As $n \to \infty , \sin(1/n)\to 1/n , n\sin(1/n)\to 1$. Now, by Cesaro mean $\lim\limits_{n \to \infty} \sum_{1}^{n}n\sin(1/n)\to n$. Distributing the it over numerator and denominator $\lim\limits_{n \to \infty} \frac{\sum_{1}^{n}n\sin(1/n)}{n}= \frac{\lim\limits_{n \to \infty }n\sin(1/n)}{n}=1$ So, the answer to the first part is 1 . Same argument holds for the second part too. - $\sum_{1}^{n}n\sin(1/n)$ is not a very good notation, since you use $n$ both in range and as a variable. I think that $\sum_{k=1}^{n}k\sin(1/k)$ would be better. –  Martin Sleziak Nov 6 '12 at 10:43 Thanks for the edit Martin. This was my first answer at math stackexchange. Will keep it in mind. –  dexter04 Nov 7 '12 at 9:24 the first one , you can use the stolz theorem directly. or use the result: if $\lim\limits_{n\to \infty}a_n=a$, then $\lim\limits_{n\to \infty}\frac{a_1+a_2+.....a_n}{n}=a$, you can use the $\epsilon-N$ to illustrate it... the second is the same - This is exactly the point where i stuck, how can I to prove that $\sum_{i=1}^{n}n\sin\frac{1}{n}=1$?Thanks. 
–  Tina Nov 6 '12 at 9:37 Your guess above is wrong, because $\lim\limits_{n\to\infty} n\sin\frac{1}{n} = 1$; therefore the series cannot be convergent.... –  Tao Nov 6 '12 at 10:44
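For the record, a minimal sketch of the Stolz–Cesàro route suggested in the last answer, for the first limit: with $a_n=\sin 1+2\sin\frac12+\cdots+n\sin\frac1n$ and $b_n=n$ (strictly increasing and unbounded),
$$\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\lim_{n\to\infty}(n+1)\sin\frac{1}{n+1}=1,$$
since $\frac{\sin x}{x}\to 1$ as $x\to 0$. The second limit is handled the same way after passing to the reciprocal, whose general term is $\frac{1/k}{\sin(1/k)}\to 1$.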
## 1. Bold, italics and underlining

Simple text formatting is commonly used in documents to highlight special pieces of text, like important concepts or definitions, making the document more readable. These styles can also be combined to produce bold italic text, underlined bold text, and so on. But in that case, we have to be sure that the combination exists for the font used. For instance, the combination of bold and italics is not available for the default font, but it can be used if you change the font encoding to T1 or load the lmodern package to use the Latin Modern font family. The following example shows how to get all these combinations:

```latex
% Bold, Italics and underline text in Beamer
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% change the font encoding
\usepackage[T1]{fontenc}
% or alternatively load the Latin Modern font family
% \usepackage{lmodern}

\begin{document}

\begin{frame}{Bold, Italics and Underlining}
    This is how \textbf{bold}, \textit{italized} and \underline{underlined} text looks.

    You can also combine them, like \textbf{\textit{bold italized}}, \underline{\textbf{bold underlined}} and \textit{\underline{italized underlined}}.

    Finally, you can put \textbf{\textit{\underline{everything together}}}.
\end{frame}

\end{document}
```

which yields the following output:

## 2. Slanted and emphasized text

### 2.1 Slanted text

Besides the bold and italic font shapes and the underlined decoration, beamer offers other font shapes which are not that common. The \textsl command creates slanted text, which has a similar slant to the right as an italic font, but keeps the same lettering as the normal font family, while \textsc produces small caps. As with bold and italized text, these font shapes can all be combined to produce all sorts of different outputs, but only if the selected font supports them. In the following example, we use the KP Sans-Serif font to produce some combinations of the different shapes:

```latex
% Slanted and Small Cap text
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% select the KP Sans-Serif font
\usepackage[sfmath]{kpfonts}

\begin{document}

\begin{frame}{Slanted and small caps text}
    This is \textsc{small caps text} and this is \textsl{slanted text}.\\~\\
    You can combine them, to produce \textsl{\textsc{small caps slanted text}} but also \textsc{\textbf{bold small caps}} or \textsl{\underline{underlined slanted text}}.
\end{frame}

\end{document}
```

Compiling this code yields:

### 2.2 Emphasized text

Finally, LaTeX offers the command \emph, which can come in very handy to emphasize text. This command will produce the correct shape to emphasize text in whichever context we use it. This means that \emph will produce italic text when writing in the usual upright font, and it will produce upright text when used inside an italized environment. Check the following example:

```latex
% Emphasized text
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

\begin{document}

\begin{frame}{Emphasized text}
    \emph{This} is emphasized and \textit{\emph{this} is also emphasized, although in a different way.}
\end{frame}

\end{document}
```

which produces the output:

## 3. Bold math in beamer

To typeset bold mathematical symbols, we load the bm package and use its \bm command inside math mode. The following example shows how to do it:

```latex
% Bold math in beamer
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% Required package
\usepackage{bm}

\begin{document}

\begin{frame}{Bold math example}
    Let $\bm{u}$, $\bm{v}$ be vectors and $\bm{A}$ be a matrix such that $\bm{Au}=\bm{v}$.
    This is a bold integral: $\bm{\int_{-\infty}^{\infty} e^{-x^2}\,dx=\sqrt{\pi} }$
\end{frame}

\end{document}
```

and you can see the result of it in the following illustration:

## 4. Text decorations in Beamer

We already saw how to underline text with the \underline command that comes with LaTeX. Here we want to go one step further and learn how to apply all kinds of “decorations” to our text. To do so, we will be using the ulem package. Its \uline command underlines text that is allowed to break across lines, and the package provides another six ways to decorate text. The following example shows how they work:

```latex
% Text decorations in Beamer
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% Required package
\usepackage{ulem}

\begin{document}

\begin{frame}{Text decorations provided by the \texttt{ulem} package}
    \uline{Underlined text that breaks at the end of lines if they are too long because the author won’t stop writing.} \\~\\
    \uuline{Double-Underlined text} \\~\\
    \uwave{Wavy-Underlined text} \\~\\
    \sout{Strikethrough text} \\~\\
    \xout{Struck with Hatching text} \\~\\
    \dashuline{Dashed Underline text} \\~\\
    \dotuline{Dotted Underline text}
\end{frame}

\end{document}
```

Here is the output of this code:

## 5. Changing the font size in beamer

This part has been extensively discussed in the following posts:

## 6. Change text color

We are going to see how to change the text color in beamer to improve the appearance of our presentation and guide the attention of our audience. The simplest way to use colors in any LaTeX document is the xcolor package. This package provides a set of commands for color manipulation. The easiest of these to use is the \color command, which lets us set, by name, the color of an environment. The command accepts most of the usual color names. Here is a minimal working example of how to use it:

```latex
% Text color in Beamer
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

\begin{document}

\begin{frame}{Colors in beamer}
    {\color{blue} This is a blue block.}

    {\color{red} And this is a red one.}

    \begin{itemize}
        \color{green}
        \item This is a list.
        \item And it is colored!
    \end{itemize}
\end{frame}

\end{document}
```

which yields the following:

Here is a list of predefined colors:

The actual list of color names is determined by the driver used. For instance, you can use the dvipsnames package option to make the color names for the dvips driver available. For color names, check this interesting tutorial!

### 6.1 Highlight text

In the following example, we show the usage of the useful commands \textcolor, to easily change the color of the text, and \colorbox, to highlight text:

```latex
% Highlight Text in Beamer
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

\begin{document}

\begin{frame}{Highlight text in beamer}
    \textcolor{red}{This is some colored text}.\\~\\
    Here I want to \colorbox{yellow}{highlight some important text in yellow} while leaving the rest untouched.
\end{frame}

\end{document}
```

We obtain the following result:

### 6.2 Define custom colors

As was mentioned before, you can also define your own custom colors, if you find that the ones defined by your driver are not enough. The manner in which the color is defined depends on the color models supported by your driver. In general, the command to define a color is:

\definecolor{name}{model}{color definition}

where name is the name which the color will take, model is the model used to define it, and color definition is the definition according to the selected model.
The most common color models and their definition syntax are the following:

• rgb: Specified as three comma-separated values between 0 and 1 that represent the amount of red, green and blue (in this order) that the color has.
• RGB: The same as before but with the values going between 0 and 255.
• cmyk: Specified as four comma-separated values between 0 and 1 that determine the amount of cyan, magenta, yellow and black (in this order) to be combined in the subtractive model used by most printers.
• gray: A value on the grey scale between 0 and 1.
• HTML: Consists of 6 hexadecimal digits that represent the color in HTML code.
• wave: This is a fun one that may be useful when writing documents related to the field of Optics. It is a single number between 363 and 814 that specifies the wavelength of the color in nanometres.

This is the more general and flexible way to create a new color, but maybe it is not the easiest. The most practical is using the command:

\colorlet{name}{combination}

where name is the name of the new color and combination is a combination of preexisting colors. The percentage of every color in the combination is written between ! signs. For instance:

\colorlet{ochre}{blue!20!yellow!80!}

In the following example, we put these new techniques to use:

```latex
% Define colors in Beamer
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% Custom colors
\definecolor{cyanish}{RGB}{10,250,250}
\definecolor{lightgreen}{HTML}{CCFF99}
\definecolor{orangish}{wave}{620}
\colorlet{ochre}{blue!30!yellow!70!}

\begin{document}

\begin{frame}{Custom colors in beamer}
    \textcolor{cyanish}{\textbf{This is some cyan text}}

    \textcolor{lightgreen}{\textbf{This is some lightgreen text}}

    \textcolor{orangish}{\textbf{This is some orangish text}}

    \textcolor{ochre}{\textbf{This is some ochre text}}
\end{frame}

\end{document}
```

and the result of it is shown below:

## 7. Text alignment in beamer

LaTeX provides the flushleft, flushright and center environments to change the alignment of a block of text. However, there is no built-in environment in LaTeX for fully justified text; and in beamer, by default, the text is left justified. This means that there is no straightforward way of making text fully justified in beamer. This is solved by the ragged2e package, which provides the \justifying command. This command, used inside the frame environment, or any other, will produce justified text inside that environment. The following example shows how to use the different text alignments:

```latex
% Text alignment in beamer
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% to generate dummy text
\usepackage{lipsum}

% provides the \justifying command
\usepackage{ragged2e}

\begin{document}

% Default alignment
\begin{frame}{Default beamer alignment}
    \lipsum[1]
\end{frame}

% Flushed right alignment
\begin{frame}{Flushed right}
    \begin{flushright}
        \lipsum[2]
    \end{flushright}
\end{frame}

% Centered alignment
\begin{frame}{Centered}
    \begin{center}
        \lipsum[3]
    \end{center}
\end{frame}

% Fully justified alignment
\begin{frame}{Fully justified}
    \justifying
    \lipsum[4]
\end{frame}

\end{document}
```

which yields the following:

## 8. Line spacing in beamer

If you want to use larger interline spacing in your beamer presentation, you can change its value by using the command:

\linespread{factor}

in the preamble of your document. The usage of factor is far from intuitive. Because of TeX internal dynamics, \linespread{1.3} will stand for one and a half spacing and \linespread{1.6} will stand for double spacing.
Check the following example:

```latex
% Change line spacing
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% to generate dummy text
\usepackage{lipsum}

% Change line spacing
\linespread{1.3}

\begin{document}

\begin{frame}{Line spacing, linespread with factor 1.3}
    \lipsum[2]
\end{frame}

\end{document}
```

Compiling this code yields:

You can compare it with the default one:
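If you find the \linespread factors unintuitive, the setspace package offers named commands (\setstretch, \onehalfspacing, \doublespacing). Here is a minimal sketch of how it might be used inside a frame (an assumption of mine, not part of the original tutorial; interaction with some themes may vary):

```latex
% Line spacing via the setspace package
\documentclass{beamer}

% Theme choice
\usetheme{CambridgeUS}

% to generate dummy text
\usepackage{lipsum}

% provides \setstretch, \onehalfspacing, \doublespacing
\usepackage{setspace}

\begin{document}

\begin{frame}{Line spacing with setspace}
    \setstretch{1.5} % applies from here to the end of the frame
    \lipsum[2]
\end{frame}

\end{document}
```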
# Calculating a determinant using Jacobi's second theorem Prove using Jacobi's second theorem on determinants that $$\begin{vmatrix} 0 & a & b & c \\ -a & 0 & d & e \\ -b & -d & 0 & f \\ -c & -e & -f & 0 \\ \end{vmatrix} = (af-be+cd)^2$$ I can easily prove it using Laplace expansion for determinants but have no idea how to prove it using Jacobi's second theorem. A corollary of the theorem tells that the determinant would be a perfect square of a polynomial of the elements. But nothing further. Any help will be appreciated. For the theorem, have a look here: Jacobi's Second Theorem on Determinants
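This is not the Jacobi route the problem asks for, but as a quick consistency check on the claimed identity: the determinant of a skew-symmetric matrix of even order equals the square of its Pfaffian, and here the Pfaffian is exactly the bracketed polynomial,
$$\operatorname{Pf}\begin{pmatrix} 0 & a & b & c \\ -a & 0 & d & e \\ -b & -d & 0 & f \\ -c & -e & -f & 0 \\ \end{pmatrix} = af-be+cd, \qquad \det = \operatorname{Pf}^2 = (af-be+cd)^2 .$$
This also identifies the polynomial whose square the corollary mentioned in the question predicts.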