url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|---|
http://bolnica-meljine.me/nex785t/article.php?3c1892=r-lm-coefficients
|
that the default differs for lm() and For "maov" objects (produced by aov) it will be a matrix. ... Coefficients. a, b1, b2, and bn are coefficients; and x1, x2, and xn are predictor variables. # 1 5.1 3.5 1.4 0.2 setosa In R, you can run the following command to standardize all the variables in the data frame: # Suppose that raw_data is the name of the original data frame # which contains the variables X1, X2 and Y standardized_data = data.frame(scale(raw_data)) # Running the linear regression model on standardized_data # will output the standardized coefficients model = lm(Y ~ X1 + X2, data = … also in case of an over-determined system where some coefficients 1. (1992) coef is a generic function which extracts model coefficients (Note that the method is for coef and not coefficients.). We again use the Stat 100 Survey 2, Fall 2015 (combined) data we have been working on for demonstration. In R we demonstrate the use of the lm.beta () function in the QuantPsyc package (due to Thomas D. Fletcher of State Farm ). - c(2,1,3,2,5,3.3,1); >y - c(4,2,6,3,8,6,2.2); . # Sepal.Width 0.4958889 0.08606992 5.761466 4.867516e-08 The naive model is the restricted model, since the coefficients of all potential explanatory variables are restricted to equal zero. Required fields are marked *, © Copyright Data Hacks – Legal Notice & Data Protection, You need to agree with the terms to proceed, # Sepal.Length Sepal.Width Petal.Length Petal.Width Species, # 1 5.1 3.5 1.4 0.2 setosa, # 2 4.9 3.0 1.4 0.2 setosa, # 3 4.7 3.2 1.3 0.2 setosa, # 4 4.6 3.1 1.5 0.2 setosa, # 5 5.0 3.6 1.4 0.2 setosa, # 6 5.4 3.9 1.7 0.4 setosa, # Estimate Std. R Extract Rows where Data Frame Column Partially Matches Character String (Example Code), How to Write Nested for-Loops in R (Example Code), How to for-Loop Over List Elements in R (Example Code), Error in R – Object of Type Closure is not Subsettable (Example Code), How to Modify ggplot2 Plot Area Margins in R Programming (Example Code), R Identify Elements in One Vector that are not Contained in Another (2 Examples), Order Vector According to Other Vector in R (Example), How to Apply the format() Function in R (2 Examples), Extract Rows from Data Frame According to Vector in R (Example Code). alias) by default where complete = FALSE. lm() variance covariance matrix of coefficients. Interpreting the “coefficient” output of the lm function in R. Ask Question Asked 6 years, 6 months ago. - coef(lm(y~x)) >c (Intercept) x 0.5487805 1.5975610 complete: for the default (used for lm, etc) and aov methods: logical indicating if the full coefficient vector should be returned also in case of an over-determined system where some coefficients will be set to NA, see also alias.Note that the default differs for lm() and aov() results. a, b1, b2, and bn are coefficients; and x1, x2, and xn are predictor variables. We discuss interpretation of the residual quantiles and summary statistics, the standard errors and t statistics , along with the p-values of the latter, the residual standard error, and the F-test. Examples of Multiple Linear Regression in R. The lm() method can be used when constructing a prototype with more than two predictors. # Petal.Width -0.3151552 0.15119575 -2.084418 3.888826e-02 Theoretically, in simple linear regression, the coefficients are two unknown constants that represent the intercept and slope terms in the linear model. Linear models are a very simple statistical techniques and is often (if not always) a useful start for more complex analysis. Examples of Multiple Linear Regression in R. The lm() method can be used when constructing a prototype with more than two predictors. The naive model is the restricted model, since the coefficients of all potential explanatory variables are … Note complete: for the default (used for lm, etc) and aov methods: logical indicating if the full coefficient vector should be returned also in case of an over-determined system where some coefficients will be set to NA, see also alias.Note that the default differs for lm() and aov() results. Coefficients. R - Linear Regression - Regression analysis is a very widely used statistical tool to establish a relationship model between two variables. an object for which the extraction of model coefficients is meaningful. # 3 4.7 3.2 1.3 0.2 setosa coef() function extracts model coefficients from objects returned by modeling functions. # Sepal.Length Sepal.Width Petal.Length Petal.Width Species From: r-help-bounces at stat.math.ethz.ch [mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Pablo Gonzalez Sent: Thursday, September 15, 2005 4:09 PM To: r-help at stat.math.ethz.ch Subject: [R] Coefficients from LM Hi everyone, Can anyone tell me if its possibility to extract the coefficients from the lm() command? coefficients is The complete argument also exists for compatibility with Output for R’s lm Function showing the formula used, the summary statistics for the residuals, the coefficients (or weights) of the predictor variable, and finally the performance measures including RMSE, R-squared, and the F-Statistic. In Linear Regression, the Null Hypothesis is that the coefficients associated with the variables is equal to zero. >x . 5.2 Confidence Intervals for Regression Coefficients. If we are not only fishing for stars (ie only interested if a coefficient is different for 0 or not) we can get much … Save my name, email, and website in this browser for the next time I comment. In R, the lm summary produces the standard deviation of the error with a slight twist. "Beta 0" or our intercept has a value of -87.52, which in simple words means that if other variables have a value of zero, Y will be equal to -87.52. Standard deviation is the square root of variance. R Extract Matrix Containing Regression Coefficients of lm (Example Code) This page explains how to return the regression coefficients of a linear model estimation in the R programming language. logical indicating if the full coefficient vector should be returned # (Intercept) 2.1712663 0.27979415 7.760227 1.429502e-12 Methods (by class) lm: Standardized coefficients for a linear model. As we already know, estimates of the regression coefficients $$\beta_0$$ and $$\beta_1$$ are subject to sampling uncertainty, see Chapter 4.Therefore, we will never exactly estimate the true value of these parameters from sample data in an empirical application. # Speciesvirginica -1.0234978 0.33372630 -3.066878 2.584344e-03, Your email address will not be published. The exact form of the values returned depends on the class of regression model used. the weighted residuals, the usual residuals rescaled by the square root of the weights specified in the call to lm. One of my most used R functions is the humble lm, which fits a linear regression model.The mathematics behind fitting a linear regression is relatively simple, some standard linear algebra with a touch of calculus. However, when you’re getting started, that brevity can be a bit of a curse. complete settings and the default. Wadsworth & Brooks/Cole. Output for R’s lm Function showing the formula used, the summary statistics for the residuals, the coefficients (or weights) of the predictor variable, and finally the performance measures including RMSE, R-squared, and the F-Statistic. Arguments object. Returns the summary of a regression model, with the output showing the standardized coefficients, standard error, t-values, and p-values for each predictor. Error t value Pr (>|t|) # … r, regression, r-squared, lm. Let’s prepare a dataset, to perform and understand regression in-depth now. an object for which the extraction of model coefficients is The "aov" method does not report aliased coefficients (see So let’s see how it can be performed in R and how its output values can be interpreted. aov methods: Aliased coefficients are omitted. Using lm(Y~., data = data) I get a NA as the coefficient for Q3, and a # 2 4.9 3.0 1.4 0.2 setosa lm() Function. Your email address will not be published. Interpreting linear regression coefficients in R From the screenshot of the output above, what we will focus on first is our coefficients (betas). Chambers, J. M. and Hastie, T. J. # 5 5.0 3.6 1.4 0.2 setosa Create a relationship model using the lm() functions in R. Find the coefficients from the model created and create the mathematical equation using these. I’m going to explain some of the key components to the summary() function in R for linear regression models. Basic analysis of regression results in R. Now let's get into the analytics part of the linear regression … for the default (used for lm, etc) and The packages used in this chapter include: • psych • PerformanceAnalytics • ggplot2 • rcompanion The following commands will install these packages if theyare not already installed: if(!require(psych)){install.packages("psych")} if(!require(PerformanceAnalytics)){install.packages("PerformanceAnalytics")} if(!require(ggplot2)){install.packages("ggplot2")} if(!require(rcompanion)){install.packages("rcompanion")} As the p-value is much less than 0.05, we reject the null hypothesis that β = 0.Hence there is a significant relationship between the variables in the linear regression model of the data set faithful.. Note The function is short and sweet, and takes a linear model object as argument: In this post we describe how to interpret the summary of a linear regression model in R given by summary(lm). # Speciesversicolor -0.7235620 0.24016894 -3.012721 3.059634e-03 head(iris) coefficients: a p x 4 matrix with columns for the estimated coefficient, its standard error, t-statistic and corresponding (two-sided) p-value. meaningful. Error t value Pr(>|t|) The only difference is that instead of dividing by n-1, you subtract n minus 1 + # of variables involved. Hi, I am running a simple linear model with (say) 5 independent variables. Standard Error is very similar. R Extract Matrix Containing Regression Coefficients of lm (Example Code) This page explains how to return the regression coefficients of a linear model estimation in the R programming language. y = m1.x1 + m2.x2 + m3.x3 + ... + c. If you standardize the coefficients (using standard deviation of response and predictor) you can compare coefficients against one another, as … # Petal.Length 0.8292439 0.06852765 12.100867 1.073592e-23 Next we can predict the value of the response variable for a given set of predictor variables using these coefficients. R’s lm() function is fast, easy, and succinct. there exists a relationship between the independent variable in question and the dependent variable). >>> print r.lm(r("y ~ x"), data = r.data_frame(x=my_x, y=my_y))['coefficients'] {'x': 5.3935773611970212, '(Intercept)': -16.281127993087839} Plotting the Regression line from R's Linear Model. Coefficients The second thing printed by the linear regression summary call is information about the coefficients. If we wanted to predict the Distance required for a car to stop given its speed, we would get a training set and produce estimates of the coefficients … behavior in sync. What is the adjusted R-squared formula in lm in R and how should it be interpreted? We create the regression model using the lm() function in R. The model determines the value of the coefficients using the input data.
|
2022-08-19 04:06:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44997909665107727, "perplexity": 1353.010497754521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00316.warc.gz"}
|
https://physicsoverflow.org/user/Greg+Graviton/history?start=20
|
# Recent history for Greg Graviton
5
years
ago
received upvote on answer Global symmetries corresponding to the Altland-Zirnbauer symmetry classes
5
years
ago
posted an answer Global symmetries corresponding to the Altland-Zirnbauer symmetry classes
5
years
ago
received upvote on answer Is my simple model for fermi liquid that forms cooper pairs correct?
5
years
ago
answer commented on Is my simple model for fermi liquid that forms cooper pairs correct?
5
years
ago
edited an answer Is my simple model for fermi liquid that forms cooper pairs correct?
5
years
ago
posted an answer Is my simple model for fermi liquid that forms cooper pairs correct?
5
years
ago
posted an answer How the BCS superconductors violate the Gell-Mann-Low's Theorem?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
posted a comment Is the classical action for fermions grassman valued or real valued?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
answer commented on Is the classical action for fermions grassman valued or real valued?
5
years
ago
edited an answer Is the classical action for fermions grassman valued or real valued?
5
years
ago
edited a comment Is the classical action for fermions grassman valued or real valued?
5
years
ago
edited a comment Is the classical action for fermions grassman valued or real valued?
5
years
ago
posted a comment Is the classical action for fermions grassman valued or real valued?
|
2023-03-21 08:34:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110556840896606, "perplexity": 3729.74932791246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943637.3/warc/CC-MAIN-20230321064400-20230321094400-00175.warc.gz"}
|
https://zbmath.org/authors/?q=ai%3Asimonoff.jeffrey-s
|
## Simonoff, Jeffrey S.
Compute Distance To:
Author ID: simonoff.jeffrey-s Published as: Simonoff, Jeffrey S.; Simonoff, Jeffrey; Simonoff, J. S. more...less External Links: MGP · ORCID
Documents Indexed: 40 Publications since 1983, including 5 Books 7 Contributions as Editor Co-Authors: 32 Co-Authors with 35 Joint Publications 666 Co-Co-Authors
all top 5
### Co-Authors
10 single-authored 7 Marx, Brian D. 5 Hurvich, Clifford M. 4 Komárek, Arnošt 4 Tsai, Chihling 3 Chatterjee, Samprit 3 Friedl, Herwig 3 Giloni, Avi 2 Flynn, Cheryl J. 2 Sengupta, Bhaskar 1 Aerts, Marc 1 Ding, Yufeng 1 Dong, Jianping 1 El Barmi, Hammou 1 Frydman, Halina 1 Fu, Wei 1 Handcock, Mark S. 1 Hawkins, Douglas M. 1 Hens, Niel 1 Hochberg, Yosef 1 Larocque, Denis 1 Li, Lexin 1 Moradian, Hoora 1 Perlich, Claudia 1 Provost, Foster 1 Reiser, Benjamin 1 Sela, Rebecca J. 1 Simon, Gary A. 1 Stromberg, Arnold J. 1 Tutz, Gerhard E. 1 Udina, Frederic 1 Yao, Weichi 1 Zeger, Scott L.
all top 5
### Serials
9 Statistical Modelling 3 Journal of the American Statistical Association 3 Computational Statistics and Data Analysis 2 The Annals of Statistics 2 Journal of the Royal Statistical Society. Series C 2 Statistics & Probability Letters 2 Computational Statistics 2 Journal of Statistical Computation and Simulation 2 Journal of Nonparametric Statistics 2 Journal of Machine Learning Research (JMLR) 1 The Canadian Journal of Statistics 1 The Australian Journal of Statistics 1 Biometrics 1 International Statistical Review 1 Journal of Statistical Planning and Inference 1 Naval Research Logistics 1 Technometrics 1 Statistical Science 1 Machine Learning 1 Communications in Statistics. Simulation and Computation 1 Journal of the Royal Statistical Society. Series B. Statistical Methodology 1 Journal of Applied Statistics 1 Springer Series in Statistics 1 Springer Texts in Statistics 1 Wiley Series in Probability and Statistics
### Fields
39 Statistics (62-XX) 8 Numerical analysis (65-XX) 7 General and overarching topics; collections (00-XX) 3 Computer science (68-XX) 1 Operations research, mathematical programming (90-XX)
### Citations contained in zbMATH Open
35 Publications have been cited 705 times in 636 Documents Cited by Year
Smoothing methods in statistics. Zbl 0859.62035
Simonoff, Jeffrey S.
1996
Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. Zbl 0909.62039
Hurvich, Clifford M.; Simonoff, Jeffrey S.
1998
Use of modified profile likelihood for improved tests of constancy of variance in regression. Zbl 0825.62585
Simonoff, J. S.; Tsai, C.-L.
1994
A penalty function approach to smoothing large sparse contingency tables. Zbl 0527.62043
Simonoff, Jeffrey S.
1983
Analyzing categorical data. Zbl 1028.62003
Simonoff, Jeffrey S.
2003
RE-EM trees: a data mining approach for longitudinal and clustered data. Zbl 1238.68131
Sela, Rebecca J.; Simonoff, Jeffrey S.
2012
Alternative estimation procedures for $$\Pr (X<Y)$$ in categorized data. Zbl 0613.62125
Simonoff, Jeffrey S.; Hochberg, Yosef; Reiser, Benjamin
1986
Robust weighted LAD regression. Zbl 1445.62163
Giloni, Avi; Simonoff, Jeffrey S.; Sengupta, Bhaskar
2006
Transformation-based density estimation for weighted distributions. Zbl 0971.62016
El Barmi, Hammou; Simonoff, Jeffrey S.
2000
Smoothing categorical data. Zbl 0832.62053
Simonoff, Jeffrey S.
1995
An investigation of missing data methods for classification trees applied to binary response data. Zbl 1242.62052
Ding, Yufeng; Simonoff, Jeffrey S.
2010
A geometric combination estimator for $$d$$-dimensional ordinal sparse contingency tables. Zbl 0838.62046
Dong, Jianping; Simonoff, Jeffrey S.
1995
Tree induction vs. logistic regression: a learning-curve analysis. Zbl 1093.68088
Perlich, Claudia; Provost, Foster; Simonoff, Jeffrey S.
2004
Three sides of smoothing: Categorical data smoothing, nonparametric regression, and density estimation. Zbl 0911.62034
Simonoff, Jeffrey S.
1998
Probability estimation via smoothing in sparse contingency tables with ordered categories. Zbl 0603.62065
Simonoff, Jeffrey S.
1987
Efficiency for regularization parameter selection in penalized likelihood estimation of misspecified models. Zbl 06224985
Flynn, Cheryl J.; Hurvich, Clifford M.; Simonoff, Jeffrey S.
2013
Smoothing methods for discrete data. Zbl 0980.62030
Simonoff, Jeffrey S.; Tutz, Gerhard
2000
A casebook for a first course in statistics and data analysis. Incl. 1 disk. Zbl 0833.62001
Chatterjee, Samprit; Handcock, Mark S.; Simonoff, Jeffrey S.
1995
Measuring the stability of histogram appearance when the anchor position is changed. Zbl 0875.62158
Simonoff, Jeffrey S.; Udina, Frederic
1997
A mathematical programming approach for improving the robustness of least sum of absolute deviations regression. Zbl 1127.62060
Giloni, Avi; Sengupta, Bhaskar; Simonoff, Jeffrey S.
2006
Distributing a computationally intensive estimator: the case of exact LMS regression. Zbl 0938.62070
Hawkins, Douglas M.; Simonoff, Jeffrey S.; Stromberg, Arnold J.
1994
The anchor position of histograms and frequency polygons: Quantitative and qualitative smoothing. Zbl 0850.62332
Simonoff, J. S.
1995
Jackknife-based estimators and confidence regions in nonlinear regression. Zbl 0588.62106
Simonoff, Jeffrey S.; Tsai, Chih-Ling
1986
The conditional breakdown properties of least absolute value loal polynomial estimators. Zbl 1055.62041
Giloni, Avi; Simonoff, Jeffrey S.
2005
Jackknifing and bootstrapping goodness-of-fit statistics in sparse multinomials. Zbl 0656.62047
Simonoff, Jeffrey S.
1986
Outlier detection and robust estimation of scale. Zbl 0603.62044
Simonoff, Jeffrey S.
1987
Assessing the influence of individual observations on a goodness-of-fit test based on nonparametric regression. Zbl 0746.62046
Simonoff, Jeffrey S.; Tsai, Chih-Ling
1991
Variance estimation for sample autocovariances: direct and resampling approaches. Zbl 1130.62314
Hurvich, Clifford M.; Simonoff, Jeffrey S.; Zeger, Scott L.
1991
Unbiased regression trees for longitudinal and clustered data. Zbl 1468.62058
Fu, Wei; Simonoff, Jeffrey S.
2015
Higher order effects in log-linear and log-nonlinear models for contingency tables with ordered categories. Zbl 0825.62501
Simonoff, J. S.; Tsai, C.-L.
1991
A study of the effectiveness of simple density estimation methods. Zbl 0936.62041
Simonoff, Jeffrey S.; Hurvich, Clifford M.
1993
Jackknifing and bootstrapping quasi-likelihood estimators. Zbl 0726.62128
Simonoff, Jeffrey S.; Tsai, Chih-Ling
1988
Handbook of regression analysis. Zbl 1357.62002
Chatterjee, Samprit; Simonoff, Jeffrey S.
2013
Diagnostic plots for missing data in least squares regression. Zbl 0594.62076
Simon, Gary A.; Simonoff, Jeffrey S.
1986
Model selection in regression based on pre-smoothing. Zbl 07252522
Aerts, Marc; Hens, Niel; Simonoff, Jeffrey S.
2010
Unbiased regression trees for longitudinal and clustered data. Zbl 1468.62058
Fu, Wei; Simonoff, Jeffrey S.
2015
Efficiency for regularization parameter selection in penalized likelihood estimation of misspecified models. Zbl 06224985
Flynn, Cheryl J.; Hurvich, Clifford M.; Simonoff, Jeffrey S.
2013
Handbook of regression analysis. Zbl 1357.62002
Chatterjee, Samprit; Simonoff, Jeffrey S.
2013
RE-EM trees: a data mining approach for longitudinal and clustered data. Zbl 1238.68131
Sela, Rebecca J.; Simonoff, Jeffrey S.
2012
An investigation of missing data methods for classification trees applied to binary response data. Zbl 1242.62052
Ding, Yufeng; Simonoff, Jeffrey S.
2010
Model selection in regression based on pre-smoothing. Zbl 07252522
Aerts, Marc; Hens, Niel; Simonoff, Jeffrey S.
2010
Robust weighted LAD regression. Zbl 1445.62163
Giloni, Avi; Simonoff, Jeffrey S.; Sengupta, Bhaskar
2006
A mathematical programming approach for improving the robustness of least sum of absolute deviations regression. Zbl 1127.62060
Giloni, Avi; Sengupta, Bhaskar; Simonoff, Jeffrey S.
2006
The conditional breakdown properties of least absolute value loal polynomial estimators. Zbl 1055.62041
Giloni, Avi; Simonoff, Jeffrey S.
2005
Tree induction vs. logistic regression: a learning-curve analysis. Zbl 1093.68088
Perlich, Claudia; Provost, Foster; Simonoff, Jeffrey S.
2004
Analyzing categorical data. Zbl 1028.62003
Simonoff, Jeffrey S.
2003
Transformation-based density estimation for weighted distributions. Zbl 0971.62016
El Barmi, Hammou; Simonoff, Jeffrey S.
2000
Smoothing methods for discrete data. Zbl 0980.62030
Simonoff, Jeffrey S.; Tutz, Gerhard
2000
Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. Zbl 0909.62039
Hurvich, Clifford M.; Simonoff, Jeffrey S.
1998
Three sides of smoothing: Categorical data smoothing, nonparametric regression, and density estimation. Zbl 0911.62034
Simonoff, Jeffrey S.
1998
Measuring the stability of histogram appearance when the anchor position is changed. Zbl 0875.62158
Simonoff, Jeffrey S.; Udina, Frederic
1997
Smoothing methods in statistics. Zbl 0859.62035
Simonoff, Jeffrey S.
1996
Smoothing categorical data. Zbl 0832.62053
Simonoff, Jeffrey S.
1995
A geometric combination estimator for $$d$$-dimensional ordinal sparse contingency tables. Zbl 0838.62046
Dong, Jianping; Simonoff, Jeffrey S.
1995
A casebook for a first course in statistics and data analysis. Incl. 1 disk. Zbl 0833.62001
Chatterjee, Samprit; Handcock, Mark S.; Simonoff, Jeffrey S.
1995
The anchor position of histograms and frequency polygons: Quantitative and qualitative smoothing. Zbl 0850.62332
Simonoff, J. S.
1995
Use of modified profile likelihood for improved tests of constancy of variance in regression. Zbl 0825.62585
Simonoff, J. S.; Tsai, C.-L.
1994
Distributing a computationally intensive estimator: the case of exact LMS regression. Zbl 0938.62070
Hawkins, Douglas M.; Simonoff, Jeffrey S.; Stromberg, Arnold J.
1994
A study of the effectiveness of simple density estimation methods. Zbl 0936.62041
Simonoff, Jeffrey S.; Hurvich, Clifford M.
1993
Assessing the influence of individual observations on a goodness-of-fit test based on nonparametric regression. Zbl 0746.62046
Simonoff, Jeffrey S.; Tsai, Chih-Ling
1991
Variance estimation for sample autocovariances: direct and resampling approaches. Zbl 1130.62314
Hurvich, Clifford M.; Simonoff, Jeffrey S.; Zeger, Scott L.
1991
Higher order effects in log-linear and log-nonlinear models for contingency tables with ordered categories. Zbl 0825.62501
Simonoff, J. S.; Tsai, C.-L.
1991
Jackknifing and bootstrapping quasi-likelihood estimators. Zbl 0726.62128
Simonoff, Jeffrey S.; Tsai, Chih-Ling
1988
Probability estimation via smoothing in sparse contingency tables with ordered categories. Zbl 0603.62065
Simonoff, Jeffrey S.
1987
Outlier detection and robust estimation of scale. Zbl 0603.62044
Simonoff, Jeffrey S.
1987
Alternative estimation procedures for $$\Pr (X<Y)$$ in categorized data. Zbl 0613.62125
Simonoff, Jeffrey S.; Hochberg, Yosef; Reiser, Benjamin
1986
Jackknife-based estimators and confidence regions in nonlinear regression. Zbl 0588.62106
Simonoff, Jeffrey S.; Tsai, Chih-Ling
1986
Jackknifing and bootstrapping goodness-of-fit statistics in sparse multinomials. Zbl 0656.62047
Simonoff, Jeffrey S.
1986
Diagnostic plots for missing data in least squares regression. Zbl 0594.62076
Simon, Gary A.; Simonoff, Jeffrey S.
1986
A penalty function approach to smoothing large sparse contingency tables. Zbl 0527.62043
Simonoff, Jeffrey S.
1983
all top 5
### Cited by 1,074 Authors
16 Lin, Jinguan 16 Simonoff, Jeffrey S. 10 Wei, Bocheng 9 Tsai, Chihling 9 Tutz, Gerhard E. 9 Zhu, Lixing 8 Li, Qi 8 Xie, Feng-Chang 7 Aerts, Marc 7 Duong, Tarn 7 Janssen, Paul 6 Deng, Wen-Shuenn 6 Hazelton, Martin L. 6 Huang, Li-Shan 6 Naito, Kanta 5 Agresti, Alan 5 Aydin, Dursun 5 Eilers, Paul H. C. 5 Hall, Peter Gavin 5 Jones, Michael Chris 5 Konishi, Sadanori 5 Turlach, Berwin A. 5 Yanagihara, Hirokazu 5 Zou, Guohua 4 Adjabi, Smail 4 Augustyns, Ilse 4 Cai, Zongwu 4 Cao, Chunzheng 4 Cao, Ricardo 4 Carriere, Jacques F. 4 Chu, Chih-Kang 4 Cribari-Neto, Francisco 4 Giloni, Avi 4 Guo, Xu 4 He, Hua 4 Hothorn, Torsten 4 Iannario, Maria 4 Kauermann, Goran 4 Kokonendji, Célestin Clotaire 4 Larocque, Denis 4 Lin, Lu 4 Racine, Jeffrey Scott 4 Strauss, Olivier 4 Wand, Matthew P. 4 Zougab, Nabil 3 Aneiros-Pérez, Germán 3 Bellavance, François 3 Botev, Zdravko I. 3 Bowman, Adrian W. 3 Chacón, José E. 3 Christoffersson, Jan 3 Delicado, Pedro F. 3 Dong, Jianping 3 Fan, Jianqing 3 Filzmoser, Peter 3 Guerrero, Victor M. 3 Helton, Jon C. 3 Holcapek, Michal 3 Horová, Ivana 3 Hwang, Ruey-Ching 3 Hyndman, Rob J. 3 Kiessé, Tristan Senga 3 Kim, YoungJu 3 Kneib, Thomas 3 Koláček, Jan 3 Kroese, Dirk P. 3 Liao, Jun 3 Luati, Alessandra 3 Marron, James Stephen 3 Quintela-Del-Río, Alejandro 3 Rochani, Haresh D. 3 Samawi, Hani M. 3 Shih, Yu-Shan 3 Tang, Wan 3 Vellaisamy, Palaniappan 3 Veraverbeke, Noël 3 Vogel, Robert L. 3 Wang, Xiaofeng 3 Yilmaz, Ersin 3 Yin, Jingjing 3 Yu, Keming 2 Abaffy, Jozsef 2 Alcalá, José T. 2 Arslan, Olcay 2 Bagkavos, Dimitrios 2 Beh, Eric J. 2 Belaid, Nawal 2 Berger, Moritz 2 Bertocchi, Marida 2 Billor, Nedret 2 Birch, Jeffrey B. 2 Braun, W. John 2 Bühlmann, Peter 2 Burman, Prabir 2 Camerlenghi, Federico 2 Chang, Yuan-chin Ivan 2 Chen, Min 2 Cheng, Ming-Yen 2 Chesneau, Christophe 2 Čížek, Pavel ...and 974 more Authors
all top 5
### Cited in 131 Serials
77 Computational Statistics and Data Analysis 45 Journal of Statistical Planning and Inference 32 Communications in Statistics. Theory and Methods 27 Statistics & Probability Letters 25 Journal of Statistical Computation and Simulation 24 Journal of Nonparametric Statistics 21 Communications in Statistics. Simulation and Computation 19 Journal of Multivariate Analysis 17 Computational Statistics 15 The Annals of Statistics 15 Statistical Modelling 12 Machine Learning 11 Annals of the Institute of Statistical Mathematics 11 Journal of Econometrics 11 Statistical Science 10 Biometrics 10 Test 9 Statistics 8 Econometric Reviews 8 European Journal of Operational Research 8 Journal of Applied Statistics 8 Statistics and Computing 7 Australian & New Zealand Journal of Statistics 6 The Canadian Journal of Statistics 6 Fuzzy Sets and Systems 6 Statistical Papers 6 Statistical Methods and Applications 6 Journal of the Korean Statistical Society 6 Electronic Journal of Statistics 6 The Annals of Applied Statistics 5 Scandinavian Journal of Statistics 5 Journal of the American Statistical Association 5 Bernoulli 4 International Journal of Approximate Reasoning 4 Mathematical Problems in Engineering 4 Statistical Methodology 4 Journal of Statistical Theory and Practice 4 Journal of Computational and Graphical Statistics 3 Metrika 3 Journal of Computational and Applied Mathematics 3 Insurance Mathematics & Economics 3 Acta Mathematicae Applicatae Sinica. English Series 3 Mathematical and Computer Modelling 3 Advances in Data Analysis and Classification. ADAC 3 AStA. Advances in Statistical Analysis 2 Computer Methods in Applied Mechanics and Engineering 2 Mathematical Biosciences 2 Psychometrika 2 Applied Mathematics and Computation 2 Automatica 2 Biometrical Journal 2 International Statistical Review 2 Kybernetika 2 Statistica 2 American Journal of Mathematical and Management Sciences 2 Journal of Economic Dynamics & Control 2 Abstract and Applied Analysis 2 PAA. Pattern Analysis and Applications 2 Data Mining and Knowledge Discovery 2 Journal of the Royal Statistical Society. Series B. Statistical Methodology 2 Methodology and Computing in Applied Probability 2 North American Actuarial Journal 2 Computational & Mathematical Methods in Medicine 2 Mathematical Geosciences 2 Journal of Theoretical Biology 1 Artificial Intelligence 1 International Journal of General Systems 1 Journal of the Franklin Institute 1 Information Sciences 1 International Journal of Mathematics and Mathematical Sciences 1 Journal of Mathematical Economics 1 Metron 1 Naval Research Logistics 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Results in Mathematics 1 Journal of Classification 1 Optimization 1 Applied Mathematics Letters 1 Stochastic Hydrology and Hydraulics 1 Science in China. Series A 1 Annals of Operations Research 1 Neural Computation 1 Economics Letters 1 Automation and Remote Control 1 International Journal of Computer Mathematics 1 Proceedings of the National Academy of Sciences of the United States of America 1 Stochastic Processes and their Applications 1 Journal of Mathematical Imaging and Vision 1 SIAM Journal on Scientific Computing 1 International Journal of Computer Vision 1 Computational and Applied Mathematics 1 Statistica Sinica 1 Lifetime Data Analysis 1 INFORMS Journal on Computing 1 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 1 Soft Computing 1 Revista Matemática Complutense 1 Mathematical & Computational Applications 1 Extremes 1 Statistical Inference for Stochastic Processes ...and 31 more Serials
all top 5
### Cited in 25 Fields
588 Statistics (62-XX) 134 Numerical analysis (65-XX) 33 Computer science (68-XX) 30 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 21 Probability theory and stochastic processes (60-XX) 17 Operations research, mathematical programming (90-XX) 15 Biology and other natural sciences (92-XX) 14 Information and communication theory, circuits (94-XX) 9 Geophysics (86-XX) 8 Systems theory; control (93-XX) 5 Integral transforms, operational calculus (44-XX) 4 Harmonic analysis on Euclidean spaces (42-XX) 3 General and overarching topics; collections (00-XX) 2 Combinatorics (05-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Measure and integration (28-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Fluid mechanics (76-XX) 1 Mathematical logic and foundations (03-XX) 1 Real functions (26-XX) 1 Ordinary differential equations (34-XX) 1 Partial differential equations (35-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Astronomy and astrophysics (85-XX)
|
2022-12-08 18:48:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5182024240493774, "perplexity": 12275.107204239544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711360.27/warc/CC-MAIN-20221208183130-20221208213130-00668.warc.gz"}
|
https://meetings3.sis-statistica.org/index.php/sis2017/sis2017/paper/view/496
|
## Open Conference Systems, STATISTICS AND DATA SCIENCE: NEW CHALLENGES, NEW GENERATIONS
Font Size:
Sparse Indirect Inference
Paola Stolfi, Mauro Bernardi, Lea Petrella
In this paper we propose a sparse indirect inference estimator. In order to achieve sparse estimation of the parameters, we add the Smoothly clipped absolute deviation $\ell_1$--penalty of Fan and Li (2001) into the indirect inference objective function introduced by Gouri{\'e}roux et al (1993). We extend the asymptotic theory and we show that the sparse--Indirect Inference estimator enjoys the oracle properties under mild regularity conditions. The method is applied to estimate the parameters of large dimensional Seemingly unrelated non-Gaussian regression models
|
2021-06-24 20:44:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3288213610649109, "perplexity": 1952.9243696086512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488559139.95/warc/CC-MAIN-20210624202437-20210624232437-00046.warc.gz"}
|
https://socratic.org/questions/which-sample-has-the-greatest-mass-multiple-choice
|
# Which sample has the greatest mass? (multiple choice)
## A. 1 mol of Na2S B. 3.0 x ${10}^{24}$ molecule of O2 C. 8.8 x 10 g of NaCl D. 1.2 x ${10}^{24}$ atoms of K The answer is B. 3.0 x ${10}^{24}$ molecule of O2, why?
Feb 17, 2016
#### Explanation:
A
To calculate the mass of 1 mol of $\text{Na"_2"S}$ we need just to multiply it by it's molar mass: 78,0452 g/mol. Therefore the result will be 78,0452 g.
B
To calculate the mass of ${\text{O}}_{2}$, we first need to divide the result by the Avogadro constant.
3.0 × 10^24color(white)(l) "molecules O"_2 × "1 mol O"_2/(6.022 × 10^23color(white)(l) "molecules O"_2) = "4.98 mol O"_2
This gives us the moles of ${\text{O}}_{2}$.
Then we multiply by the molar mass.
${\text{4.98 mol O"_2 × "32.00 g O"_2/"1 mol O"_2 = "160 g O}}_{2}$
C
For C we only have 88 g.
D
And for D the molar mass of $\text{K}$ is 39,0983 g/mol.
This means
1.2 × 10^24color(white)(l) "atoms" × "1 mol"/(6.022 × 10^23color(white)(l) "atoms") × "39.10 g"/"1 mol" = "80 g"
|
2019-10-21 00:50:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6398957967758179, "perplexity": 8171.925416778175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987750110.78/warc/CC-MAIN-20191020233245-20191021020745-00362.warc.gz"}
|
http://ieeexplore.ieee.org/ieee_pilot/articles/06/ttg2009061001/article.html
|
• Abstract
# Scattering Points in Parallel Coordinates
In this paper, we present a novel parallel coordinates design integrated with points (Scattering Points in Parallel Coordinates, SPPC), by taking advantage of both parallel coordinates and scatterplots. Different from most multiple views visualization frameworks involving parallel coordinates where each visualization type occupies an individual window, we convert two selected neighboring coordinate axes into a scatterplot directly. Multidimensional scaling is adopted to allow converting multiple axes into a single subplot. The transition between two visual types is designed in a seamless way. In our work, a series of interaction tools has been developed. Uniform brushing functionality is implemented to allow the user to perform data selection on both points and parallel coordinate polylines without explicitly switching tools. A GPU accelerated Dimensional Incremental Multidimensional Scaling (DIMDS) has been developed to significantly improve the system performance. Our case study shows that our scheme is more efficient than traditional multi-view methods in performing visual analysis tasks.
SECTION 1
## Introduction
Recent advances in computing science and technology have witnessed an accelerating information explosion. Data with unprecedentedly large size and high dimensionality poses a major challenge to visualization researchers, demanding the provision of effective algorithms and tools. Many techniques have been proposed for exploratory visualization of multidimensional data. Parallel coordinates scheme, introduced by Inselberg and Dimsdale [24], [25], represents an N-dimensional data tuple as one polyline crossing parallel axes. For a large multidimensional data set, parallel coordinates can turn the tuples into a compact two-dimensional visual representation. However, data cluttering in parallel coordinates is almost unavoidable due to line overdrawing on limited screen space. Compared with point representation, each data item in parallel coordinates is drawn as one poly-line crossing all dimensional axes, which occupies many more pixels. Due to the cluttering effect and interference with crossing lines, operation of data selection or clustering on parallel coordinates is not trivial when the data density is high.
Scatterplot matrix [12] is another frequently applied multidimensional visualization method. In a scatterplot, two datums of an individual in a data set are used to plot a point in two dimensional space, usually in a Cartesian coordinate system defined by two perpendicular axes, resulting in a scattering of points. The positions of the data points represent the corresponding dimension values. Scatter-plots are useful for visually determining the correlation between two selected variables of a multidimensional data set, or finding distinct clusters of individuals in the data set. One single scatterplot can only depict the correlation between two dimensions. Additional limited dimensions can be mapped to the color, size or shape of the plotting points. For visualizing multidimensional data, a matrix consisting of N2 scatterplots arranged in N rows and N columns is developed.
Compared with the scenario of parallel coordinates, points in scatterplots, instead of polylines in parallel coordinates, are the visual representation of the data. Point clouds consume fewer pixels compared with the same number of lines and it is easier for the user to detect clusters and perform selection by brushing. However, in contrast from parallel coordinates which can visualize all dimensions simultaneously, scatterplots can only display a very limited number of dimensions reliably. For higher dimensional tasks, multiple plots have to be drawn in the arrangement of a scatterplot matrix [12] with the data dimensions on the rows and columns. While a scatterplot matrix can give an overview of the structure of the whole data set, individual scatter-plots in the matrix only appear as small image sets that are difficult to explore. Multidimensional scaling [40] can project high dimensional points directly into 2D at the expense of losing individual dimensional information and at high computational cost.
Besides the clutter problem, the interpretation of parallel coordinates requires expert knowledge. Due to the data entries being represented as line segments, visually clustered data has to be interpreted based on both the slope and intercept simultaneously, which is not trivial even to an experienced analyst. When it is required to check the data correlation between multiple dimensions, the task is even more challenging. In the contrast, when visually checking the spatial distribution of the points on scatterplots or multidimensional scaling plots, clusters of points are intuitively observed. As shown in Figure 1, data can have dual forms of representation in lines (left images) or points (right images) with different comprehensibility. While the data in Figure 1(a) can be easily understood, data correlation shown in the parallel coordinates form (Figure 1(b)) can not be trivially comprehended. In the scattered point form of Figure 1(b), the information of four clusters and their distribution patterns is visually clear. It would be difficult if not impossible to discern the few data points away from the four major clusters. Figure 1(c) is another example of data with similar characteristics.
Based on the above observation, a combination of parallel coordinates and scatterplots/multidimensional scaling could utilize the advantages of both. We note that efforts have been taken already to integrate multiple views of different visualization methods into a single system to facilitate data compressibility and exploration. In the majority of existing systems, the integration is done by linking the visual effects of different visual representations, while each representation takes its own window. For example, a 2D visualization system developed by Wong et al. [41] contains parallel coordinates and scatterplots, each presented in their own regions of the window. With linking and merging, interactive changes in one representation can be reflected in the others for better support during data exploration. In their system, users need to switch back and forth from one window to another and use their mental memory for data exploration and analysis.
In this work, we present a more radical design, Scattering Points in Parallel Coordinates (SPPC), that seamlessly integrates point representation into parallel coordinates. One example visualization on analyzing DNA microarray data with SPPC is shown in Figure 9. In the design we propose, two or more selected coordinate dimensions are converted into point plots through multidimensional scaling as shown in Fig 2 (b). Scatterplots can be considered as a special case of multidimensional scaling, in which the protection domain and data domain are identical.
To avoid the context jump between polyline and point regions, curves connecting two neighboring polyline regions are drawn in the middle scattered point region. The curved lines are displayed in a consistent color scheme with neighboring polyline segments when the data trend through all dimension is examined. Further equipped with a specially designed uniform brushing tool, the user can freely explore and cluster high dimensional data without tool or context switching.
The specificbenefits of SPPC and the contribution of this research are as follows:
• Unified Line/Point Representation: Any two or more dimensions in the parallel coordinate plots can be conveniently converted to scattered points through multidimensional scaling, or converted back in reverse.
• Uniform Brushing Tool: A brushing tool allows the user to make selection on both points and line segments conveniently.
• Dimensional Incremental Multidimensional Scaling (DIMDS): A GPU accelerated multidimensional scaling algorithm.
The remainder of this paper is organized as follows. Section 2 provides a review of related works. An overview of our proposed visualization system is presented in Section 3, followed by system details in Section 4. After implementation details are revealed in Section 5 and case experiments described in Section 6, conclusions and future work are presented in Section 7.
Fig. 1. Data displayed in the top row: line representation (parallel coordinates); bottom row: point representation (Scatterplots).
SECTION 2
## Related Works
Information visualization systems with multiple coordinated views have been considered to be effective for exploratory visualization of complex multidimensional data sets. Visualization techniques in combination can complement each other and help solve challenging problems. A set of guidelines on when and how multiple view systems should be used has been provided by Baldonado et al. [5]. In our work we have developed a system that integrates the point (e.g. multidimensional scaling [41] and scatterplots [12]) and line representations (e.g. parallel coordinates [24], [25]) of multidimensional data sets.
Parallel Coordinates In the design of parallel coordinates, a direct manipulation method developed by Siirtola [34] can dynamically summarize a set of polylines through averaging and visualize correlation coefficients between subsets. Wong and Bergeron [40] used wavelet approximation to create a brushing tool which displays the brushed and non-brushed data at different resolutions. Angular brushing [20] is effective in selecting data subsets which exhibit correlation along two axes. EdgeLens [39] can interactively curve graph edges away from the focus center while keeping the nodes intact. Zhou et al. [44] adjusted the shape of edges based on visual clustering. Animation can also be applied to reduce the clutter [43]. Theisel [36] replaced line segments with free-form curves to encode extra information. Curves can also be employed to enable crossed axis tracing [19] in parallel coordinates. By modifying the axes or the line segment representations, fuzzy data and categorical data can also be visualized by parallel coordinates [7], [8], [28]. Parallel coordinates can further be extended into 3D through extrusion to visualize trajectories of higher dimensional dynamical systems [37], or novel axes arrangement can be used to allow the simultaneous examination of the relationships of a single dimension with many others in the data [26]. Artistic rendering techniques can augment parallel coordinates to increase comprehensibility for non-experts [29].
Fig. 2. (a) Traditional Parallel Coordinates; (b) Our proposed Scattering Points in Parallel Coordinates, (c) SPPC with background curves faded.
Clutter Reduction In parallel coordinates, patterns are very often difficult to detect due to the visual clutter caused by too many drawn lines. Many efforts have been proposed to reduce the clutter and facilitate user exploration. Dimension reordering based on similarity helps visual clutter minimization [1], [32], [42]. Clustering is another type of approach to reduce clutter. Multiresolutional view of the data through hierarchical clustering [18] assisted with proximity-based coloring has been developed to show aggregation information. Visual abstraction, in the form of texture stripes with various opacity, has been used to distinguish different clusters [30]. Transfer functions, either pre-defined or customized, are provided to highlight different aspects of the cluster data characteristics [27]. Artero et al. [2] filtered out information by constructing frequency and density plots from the parallel coordinate plots. Clutter reduction can also be performed in a focus+context manner [15], [31].
Coordinated Views As one of the most widely used multidimensional data visualization techniques, parallel coordinates have been extensively studied as far as how to integrate with other visualization methods to overcome shortcomings and improve efficiency [9], [13], [17], [33], [35], [41]. SpringView [9] integrates parallel coordinates with Radviz [21] to handle multidimensional datasets. Siirtola [35] combined parallel coordinates with the Reorderable Matrix. In Parallel Glyphs [17], dimension axes of a parallel coordinate plot are extended into star glyphs in 3D by unfolding them around a pivot axis to facilitate data comparison and provide capabilities for interactive exploration.
Scatterplots As an alternative to parallel coordinates, scatter-plots [12], more frequently in a form of a scatterplot matrix, depict discrete data values with two data variables as a collection of discrete points. This form can support better interactive navigation in multidimensional spaces [16] through displaying and considering transitions between different scatterplots as animated rotations in 3D space. Continuous scatterplots [4] have also been developed to visualize large scientific data. In the direction of coordinated view visualization, Schmid and Hinterberger [33] combined scatterplot matrix, parallel coordinates plot, permutation matrix, and Addrew's curve view together. A 2D visualization system developed by Wong et al. [41] contains parallel coordinates and scatterplots. Craig and Kennedy [13] studied the combination of a traditional time series graph representation with a complementary scatterplot representation. In the above efforts, each visualization metaphor is presented in their own region and the visualization exploration is performed through linking and merging so that the interactive changes of one representation can be reflected in the other.
SECTION 3
## Overview
The design of our proposed Scattering Points in Parallel Coordinates (SPPC) integrates the point representation into parallel coordinates closely.
Let X be the a set of M N-dimensional objects, i.e, TeX Source $${\bf{X}} = \{ {\bf{x}}_m = (x_{m,\,1\,x,\;2}, \cdots, x_{m,\;N})^T {\rm{|}}1 \le m \le M\}$$ where M is the number of data items, and N is the dimension of the data.
In parallel coordinates, data xm is drawn as a series of line segments: TeX Source $${\bf{l}}_m = \{ l_{m(0,\;1)}, l_m _{,\,\,(1,\;2)}, \cdots, l_{m,(N - 1,\;N)} \}$$ Each line segment lm,(n, n + 1) connects points xm, n and xm, n+ 1 on axes of An and An+ 1 respectively, as showninFigure3(a). The slope of line segment lm,(n, n+1) is defined as km,(n,n+1). Each two neighboring axes An and An+ 1 define a region R(n, n + 1). In traditional parallel coordinates, M line segments, lm,(n,n + 1), |1 ≤ mM are displayed in R(n, n + 1 Figure 2 (a) is an example of traditional parallel coordinates. Figure 2 (b) is the result of the same data set visualized with SPPC. For the simplicity of our description, we assume no dimension reordering occurs in our following discussion unless explained explicitly.
In our design of SPPC, line segments of two or more selected coordinate dimensions can be converted into point plots in the form of scatterplots or a multidimensional scaling plot. In the simplest representation of two dimensional data, data on dimension n and n + 1, e.g. line segments in region R(n,n + 1), are converted to M scattered points {p{m,(n,n + 1) |1 ≤ m} ≤ M. In our default setting, the dimension axis n + 1 is rotated 90 degrees, forming a Cartesian region with dimension axis n. Note in our implementation, it is not necessary for the two dimensions of a point plot to be consecutive in the original dimension order. In a more general case, k dimensions, where k > 1, can be converted through multidimensional scaling, generating a point cloud distributed on a 2D plane, consisting of points {pm,(n, n + 1, …, n + k − 1)|1 ≤ mM}.
To avoid the context jumps between polyline and point regions, curves connecting two neighboring axes are drawn in the point region, illustrated in Figure 3 as red lines. The curves can be displayed in a consistent color scheme with neighboring polyline segments so that the data trend can be continuously tracked throughout all dimensions. Further equipped with our proposed uniform brushing tool, the user can freely explore and cluster high dimensional data without switching between different tools and contexts. In the following sections, more details on the SPPC will be discussed.
Fig. 3. Illustrations of Scattering Points in Parallel Coordinates. (a) Lines in R(n, n + 1) are converted to points; (b) Lines in two consecutive regions R(n, n + 1) and R(n + 1 , n + 2) are converted.
SECTION 4
## Scattering Points in Parallel Coordinates
The Scattering Points in Parallel Coordinates (SPPC) is very flexible. In the following subsections, more details on our work will be given.
### 4.1 Converting Parallel Coordinates Segments to Point Plots
The simplest case of our work is to select a parallel coordinates region R(n, n + 1) and convert the line segments between {lm,(n, n + 1),|1 ≤ m ≤ M} into points {pm,(n, n + 1),|1 ≤ m ≤ M}. A naive approach of such conversion can result in the original parallel coordinates being transformed into two parallel coordinates plots with a single scatterplot in between.
It is also possible to represent multiple dimensions in a single scattered point plot using multidimensional scaling [41].
The first step is to determine the dissimilarities between all pairs of data items. Euclidean distance in k-dimensional space is the most commonly used metric, although other metrics including weighted Euclidean, Minkowski, and Manhattan (a.k.a. city blocks) can be used to measure data dissimilarity of quantitative datasets. Using Euclidean distance, the dissimilarity, δpm, pm between data m and m′ on the dimension of (n, n + 1, … n + k − 1) is given by TeX Source $$\delta _{p_{m,\;(n,\;n + 1,\, \ldots n + k - 1)}, } p_{m',\;(n,\,n + 1,\, \ldots \,n + k - 1)} = \sqrt {\sum\limits_{i = 0}^{k - 1} {(x_{m,n + i} - x_{m',\;n + i})^2 } }$$ A dataset with N records generates an N × N real symmetric dissimilarity matrix Dpm, pm). Each element of this matrix contains the dissimilarity, δpm, (n, n + 1, …, n + k − 1), pm′, (n, n + 1, …,n + k − 1), between data item m and m′ of the original data.
In our initial configuration, points are chosen as in the scatter plot of neighboring dimensions. A dissimilarity matrix, D′(δ ′pm, pm), of the initial configuration will be constructed for the evaluation of the configuration. The difference of the dissimilarity between the original data set and the configuration is defined as a difference matrix: TeX Source $$\Delta (\Delta _{p_m, \;p_{m'} }) = D(\delta _{p_m, p_{m'} }) - D'(\delta '_{p_{m,} p_{m'} })$$ For a fast construction of a configuration with sufficient confidence of accuracy, a spring model [11] is adopted which searches for an answer by iteratively updating the positions of points in the configuration. In the spring model, a spring with a relaxed length of δpm, pm connects point p and p′. All points in the configuration form a multi-body system, which reaches an equilibrium after iterating for a sufficiently long period. A simple stress function is used to determine the terminating condition: TeX Source $$Stress = \sqrt {(\sum\limits_{m = 1}^M {(\delta '_{p_m, \,p_{m'} }^2 - \delta _{p_m, p_{m'} }^2)})/(\sum\limits_{m = 1}^M {\delta _{p_m, \,p_{m'} }^2 })}$$ Our method of constructing a 2D configuration to represent k-dimensional data directly applies the spring model to selected dimensions of the original data.
### 4.2 Dimensional Incremental Multidimensional Scaling
However, the above spring model based MDS is inefficient for large datasets owing to its O(n3) computational complexity. To alleviate the computation burden and enable dynamic adding or subtracting of dimensions in the k-dimensional sub dataset for configuration construction, Dimensional Incremental Multidimensional Scaling (DIMDS) method is introduced.
To reduce the computation load of MDS for large datasets, Basalaj [6] proposed an incremental algorithm. Large datasets are divided into several parts and added into the MDS configuration step by step. In other words, Basalaj's method focuses on the increment of the size of the dataset.
The approach we propose in this paper is adding or subtracting dimensions gradually into MDS configuration, given an MDS configuration with limited difference among the dimensions included. The DIMDS method is introduced for cases where an MDS configuration already exists, some dimensions are then eliminated or added to the current configuration. Based on our observation, in the system we constructed the user is more likely to add or remove dimensions from the current multidimensional projection of the point region, e.g. in an incremental way. Our proposed DIMDS takes advantage of this and updates the MDS configuration accordingly to obtain a performance gain.
Our approach shows great flexibility of creating MDS configurations as well as effectively reducing computation demands. Moreover, this method provides an overview of the full dataset at any time during the construction of an MDS configuration since all data points are taken into consideration, while the method proposed by Basalaj and Ingram et al. shows only part of the whole dataset before finishing the construction. Since the configuration may change dramatically when new dimensions are added, users may get confused by points moving swiftly. By properly choosing the initial condition, we are able to have good continuity between MDS configurations.
DIMDS constructs new configurations by adopting the existing configuration as the initial configuration and incrementally updating the dissimilarity matrix Dpm, pm) with data from specified dimensions. Assuming that all data is normalized, in the case of changing one more dimension, b = (b1, b2, …, bN), a new dissimilarity matrix Dnew can be constructed as: TeX Source $$\displaylines{ D_{new} = D + \delta D_{\bf{b}} \cr \delta D_{\bf{b}} = \pm (bd_{ij}) \cr}$$ where δ dij = (bibj)2 is the element of matrix Dnew and the sign of q D b is positive for additive cases and negative for subtractive cases. A normalization operation may be needed after adding dimensions. After the construction of a new dissimilarity matrix, the same iterative method is applied to build the new configuration.
In this incremental approach, the existing configuration serves as the initial configuration and the effect of dimensional modification is loaded into the configuration gradually in the iterative process. This guarantees a smooth transition from the previous configuration. Different from 2D scatter plots, theoretically, MDS projections into two dimensions are free of any particular orientation and may not even converge to a unique solution. DIMDS with different incremental orders may produce different final projections. However, according to our observation, in most cases the differences are small and do not affect the data exploration. As shown in Figure 4, DIMDS generates very similar point distribution patterns as MDS does. In addition, in SPPC, MDS is employed for clustering. As long as the generated graph gives similar clustering results, differences in orientation or distribution are not fatally critical.
Fig. 4. Projection results of data with 7 dimensions by (a) MDS and (b), (c) DIMDS with two different dimension incremental orders.
### 4.3 Background Curves in Point Regions
Merely aligning the two visual forms of point and line regions together breaks the integration of the whole multidimensional dataset representation.
In the point region, in addition to the conversion of line segments between two neighboring axes to scattered points, curved lines are drawn to connect ployline segments in the neighboring regions and pass the plotted points in the middle. By introducing curved lines into the point region, our visualization unites two different visual forms into a single organic one.
We first consider the scatterplot case. As illustrated in Figure 3(a), each curved line lm,(n, n + 1) corresponds to one point pm , (n, n + 1).The following constrains are used for determining the shape of the curved line lm, (n, n + 1) :
1. line lm, (n, n + 1) passes through the point pm,(n, n + 1);
2. line lm,(n, n + 1) connects points xm, n and xm, n+ 1 on two neighboring coordinate axes correspondingly;
3. line lm,(n, n + 1) connects smoothly with the line segments lm,(n – 1 , n) and lm, (n + 1 , n + 2), e.g. lm, (n − 1 , n) and lm, (n + 1 , n + 2) are the tangent lines to the curve lm,(n, n + 1) at point xm, n and xm, n+ 1 respectively.
A Catmull-Rom spline [10] is frequently used in computer graphics for curves or smooth interpolation. The spline passes through all of the control points. It has C1 and G1 continuity but not C2 continuity. The second derivative is linearly interpolated within each segment so that the curvature varies linearly over the length of the segment. With the above desirable characteristics, a Catmull-Rom spline fits our requirement well. For each spline lm,(n, n + 1), 5 control points {Pi, i ∊ [0, 4]} are defined: P0 = xm, n −1 + pm,(n,n+1) − xm, n, P1 = xm, n, P2 = pm,(n,n+1), P3 = xm,n+1 and P4 = xm,n+2 − pm,(n,n+1) + xm,n+1.
Fig. 5. Animating Points and Curves in SPPC. From (a) to (f), points are moving from the left axis to reach their final destination. The curves are animated accompanying the movement of the scattered points.
The setting of P0 and P4 are chosen to meet the tangent requirement. The tangent Tk of the constructed curve at each control point P1 to P3 is equal to (Pk+1Pk−1)/2, where k = 1 , 2 , 3. Note that for a Catmull-Rom spline, points on a line segment may lie outside the original parallel coordinate region. In our practise, only a few lines exhibit such a problem. It does not harm the visualization. The Catmull-Rom spline also works in the situation where two point regions are next to each other as shown in Figure 3(b).
TABLE 1 Performance comparison of our GPU accelerated DIMDS with previous methods. The timing is measured for direct CPU MDS, DIMDS without and with GPU acceleration respectively. DIMDS+ refers to the time for computing one dimensional increment using DIMDS. The acceleration rate over CPU MDS is indicated in the parenthesis.
The visualization we propose also has a smooth transition during the process of converting a line region into a point region by employing an animation. During the transition animation, each point pm,(n,n+1) moves from the left axis at position xm, n to its final location. At the beginning stage of the animation, P1 and P2 are very close to each other, resulting in a self-crossing spline. To remedy this problem, we use a Cardinal spline, which is a general form of a Catmull-Rom spline. The tangent mk is defined as (1 − c)(Pk+1 + Pk–1)/2. In our animation, the parameter c is linearly interpolated from 1 to 0 as the point moves from the left axis to its destination. When the point reaches its destination, c = 0, the Cardinal spline degenerates into a Catmull-Rom spline. Figure 5 shows a sequence of snapshots depicting the animation described above. Drawing too many background curves can cause the same clutter problem as the parallel coordinate lines do. Curves may also interfere the interpolation of the scattered points. To solve the above mentioned problems, we implemented one function that the user can optionally choose automatic fading out of the background curves when the mouse hovers on one point region as shown in Figure 2(c). The fading function is only active when the user performs operations on the scattered point regions. When the point regions shows only two dimensions, e.g. the 2D scatter plot case, our system can also display a horizonal x-axis with scale of the corresponding dimension at the same time when the fading function is effective.
SECTION 5
## Implementation Details
We implemented all visualization algorithms for our experiments on a Dell Precision T3400 desktop with Intel Core 2 Duo E7400 CPUs and 1GB Memory. The graphics card used is an NVIDIA Quadro NVS290 with 256MB of DDR2 memory. The software environment is based on Windows XP SP2 with Visual Studio 2005, and NVIDIA CUDA 2.0. An NVIDIA Tesla C870 PCI-E Card with 1.5GB memory is installed for GPU accelerated computation. In the following subsections, implementation details about GPU accelerated DIMDS and more specification on user interface design will be discussed.
### 5.1 GPU Accelerated DIMDS
We implement the DIMDS by taking advantage of GPU acceleration with CUDA [22]. Ingram et al. proposed a multi-level GPU MDS algorithm [23] based on a parallel force-based subsystem simulation. Their approach organizes datasets into a hierarchy of levels, and recursively constructs an MDS configuration. The multi-level procedure together with GPU acceleration shows great performance improvement over previous methods. However, their approach focuses on the whole dataset with all dimensions included. In the case of handling large datasets with very high dimensionality, it still suffers from extremely heavy computational costs.
Our implementation begins by constructing the MDS configuration from the scatter plot between neighboring axes. As the user drags more axes into the plot, more new dimensions are added into the dissimilarity matrix. The change in the dissimilarity matrix will then result in a change of MDS configuration. Since our construction of the new configuration uses the existing plot as the initial configuration and with the adoption of the spring model, points in the plot move smoothly starting from their previous balance positions until they finally reach an equilibrium.
Note that in each iteration of the spring model, the computation mainly focuses on creating an updated dissimilarity matrix and retrieving new position vectors from the matrices. As the computation for each point in the plot is relatively independent from other points, a GPU algorithm can achieve a significant speed increase by parallel processing the computation task of each point. In our implementation, each point is treated individually in the following procedures:
1. Calculate the current dissimilarity matrix D′(δ ′pm, pm) with all points' coordinates;
2. Calculate the difference matrix Δ(Δpm, pm);
3. Calculate force matrices Fx (fx, (pm, pm)) in x direction and Fy (fy, (pm, pm)) in y direction, then merge the force matrices into two force vectors and for each point;
4. For all points, update their velocities (vx,pm, vy,pm) and coordinates (xpm, ypm) with force vectors and
Our algorithm can be summarized as follows:
1. Determine the initial configuration and initial dissimilarity matrix;
2. For all points, update their position with the above procedure;
3. Repeat step 2 until the convergence condition is reached or dimensional change is made. In the case of adding or subtracting dimensions, return to step 1.
We compare the performance between regular CPU based DMS, DIMDS, and GPU accelerated DIMDS by testing each visualization algorithm on three data sets with different sizes. All dimensions are included in the MDS or DIMDS computation. Our experiments show that our modification of MDS procedures yields remarkable speed improvements and provides satisfactory interactivity for users. As indicated in Table 1, DIMDS improves the performance over previous MDS methods when the data size is large.
In our SPPC, the most common situation is that the user adds dimensions into the point region one by one. Then there is only one dimension difference that needs to be computed. We name this situation as DIMDS+. The speedup rates range from 40-80 times, depending on the data size. The GPU accelerated DIMDS can give a further 1-3 times improvement. The total acceleration rate can reach as high as 108 times in our experiment. From the timing data, we notice that the data with larger sizes receive a high acceleration rate. Note that for the DNA data with very small data size but large dimension number, the GPU do not improve the performance too much, and even give lower frame rates for the DIMDS+ situation.
### 5.2 User Interface and Interaction Design
In addition to acceleration on the computational side, the SPPC visualization system has a carefully designed interface to facilitate usability and improve data exploration performance.
Point/Line Transition: In SPPC, a region can be switched between line and point forms back and forth simply by double clicking the mouse in the target area. During the transition, points emerge from the left axis of the specified region and move to their destination in the middle region through an animation. Corresponding curves are also drawn and animated accordingly. In the reverse process, the scattered points travel from the middle region to the left axis.
Dimension Operations: In SPPC, several operations on data dimensions can be performed. Users can reorder the coordinates by dragging the axes. The colored radial buttons at the bottom of the screen shown in Figure 6 indicate the type of the corresponding region. If two regions next to an axis are all in line form, the color of the radial button is blue. If a dimensional axis separates a point region from a line region, its corresponding button is half in blue and half in red. For the MDS region, additional dimensions are indicated by a corresponding number of buttons. In Figure 6, the MDS region visualizes 9 dimensions. Users can add or remove dimensions of an MDS region by dragging the colored buttons into or away from the MDS region. By such an intuitive way of dimension manipulation, our system provides a very convenient way for the user to explore data correlation when the data involves large numbers of dimensions.
Fig. 6. Interface of Scattering Points in Parallel Coordinates.
Fig. 7. Uniform Brushing in SPPC. Three types of brushes are enabled: Angular, Axis Range, and Point Region.
Focus+Context Operation: In our design, a user-customized region width between any two dimensions is allowed. When the mouse is placed over the focused region, scrolling the mouse wheel can interactively widen or narrow the region width, and the sizes of other regions are changed accordingly.
Uniform Brushing: To facilitate user interaction, we designed a uniform brushing tool that can be directly applied to both point and line regions in SPPC.
In the line regions, similar to general parallel coordinates, axis range and angular brushing are supported. Here we allow the user to directly apply raw sketching on the displayed visualization to take advantage of the flexibility of free-hand sketching. To select a range of axis values, the user can either directly brush on the targeted area of an axis with a line, or circle a desirable range. The system can automatically fit the drawing to the corresponding region and compute the range. Users can also apply angular brushing with a V-shape sketch gesture. Two lines are applied to fit the sketching input from the user. The formed angle defines the lines with the corresponding slope range to be selected. If sketching is applied in the point region, points in the area bounded by the input stroke are selected. The above scenario is illustrated in Figure 7.
Note that in our approach, there is no action needed for switching between range, angular, and point selection brushing. With our designed tool, the user can focus more on visual exploration with higher efficiency.
SECTION 6
## Case Study
In this section, we demonstrate the effectiveness of our SPPC visualization system through experiments on several datasets.
### 6.1 Car Data
We first tested SPPC on a car information dataset with 7 variables and 392 data items, which has been used in many publications. The relationship between "acceleration", "model year", "origin" and "number of cylinders" are explored in our system.
Fig. 8. Experiments on a dataset with 7 variables and 392 data items. Data from http://lib.stat.cmu.edu/datasets/cars.data.
Naturally, "number of cylinders" has an inverse correlation with the dimension "acceleration". When the dimension "model year" is introduced, as shown in Figure 8 (a), three major clusters can be clearly observed in the MDS point region. In Figure 8(b), after adding one more dimension "origin", the previous three clusters in Figure 8(a) split into five portions. With the assistance of the animated transition in our system, we can observe that the lowest cluster is split into three new clusters, and one new small cluster is formed by combining some data items from the originally blue and yellow clusters in Figure 8(a). Without the animation, such kinds of information are very difficult to obtain. This exploration process also indicates a hierarchical classification structure of this car data.
By further performing sketching on the resulting point plot, we can clearly view the properties of each group as shown in Figure 8. For example, the cluster shown in Figure 8(c) contains cars which are relatively new and all come from the USA. This indicates a trend of manufacturing cars with small numbers of cylinders in the USA, while other countries already have a longer history of making more economical cars. The highlighted clusters in Figure 8(d) provide a depiction of the car production in Europe in contrast of that in the USA as illustrated in Figure 8(c).
Fig. 9. Visualization of DNA Microarray data with Scattering Points in Parallel Coordinates (SPPC). Gene expression data of intestinal gastric cancer tissues (in blue) and adjacent cancer tissues (in red) from cancer patients, and normal gastric tissue (in green) from healthy people are separated.
Fig. 10. DNA microarray data visualized by (a) parallel coordinates, (b) SPPC.
### 6.2 DNA Microarray Data Analysis with SPPC
We also applied our SPPC visualization scheme on microarray analysis. Identifying genes whose expressions are specifically altered in cancer cells is essential for early cancer diagnoses. Due to the enormous amount of data generated from DNA microarray experiments, the use of certain statistical analysis is very difficult. We analyze a typical DNA microarray data with our system.
The original dataset consists of expression profile data from two types of gastric cancer versus common reference using a 21,329-oligonucleotide microarray chip. The samples had been histologically confirmed and documented to be compatible age and gender. In our experiment, we focused on the comparison of data from the intestinal gastric cancer tissue against normal gastric tissue. The profile we studied includes the data of the intestinal gastric cancer tissues and adjacent cancer tissues from 20 patients, and normal tissues from 5 healthy persons.
The expression intensity of each gene is equal to the readout intensity value from the original microarray chip of the corresponding gene spot normalized by a common reference value. Expression intensities larger than 2 or smaller than 0.5 are considered to be significant in terms of difference against the common reference. Note expression intensities between the range of 0.5 and 2 are important. Only picking up genes with the highest expression intensities (value larger than 2 or smaller than 0.5) may not generate best classification of cancer and normal issues.
In Figure 9, a visualization result with SPPC is shown. Measurement of expression intensity of each gene can be treated as one dimension. Selected 100 dimensions (with high gene expression level) of the raw data are visualized simultaneously. In the middle point region, the three types of tissues, marked with blue, red and green color respectively are separated into three clusters with DIMDS. This DIMDS involves nine genes as the classifiers, including COL18A1, VSIG2, RGS1, BDKRB2, PLEKHG2, RDHE2, etc.
With our system, the user can quickly find out the best gene combination for distinguishing different types of tissues, e.g. identify the gene classifiers for possible early cancer diagnosis. Without the integrated environment of MDS with parallel coordinates and the enabled animation during dimension addition and substraction, the identification would take much longer if using traditional approaches. Comparing the enlarged image sets in Figure 10, with only two dimensions involved in Figure 10(a), it is difficult to discern the different tissue type, while SPPC works effectively in Figure 10(b). Note due to the ambiguous nature of the data, there are no sharp boundaries between each class. However, with the power of interactive exploration and the integrated visualization environment provided by SPPC, the task of identifying genes for cancer diagnosis becomes easier.
### 6.3 Discussion
From the experiments above we can see that our SPPC visualization scheme has some clear advantages. In some multidimensional data analysis tasks, showing a point distribution plot can improve data comprehensibility. It is also easier for a user to detect the clusters directly. Integration of point/line visualization is the main feature of our scheme that differentiates our system. This marks a difference from previous methods. Comparing with other multiple view systems, in which the user performs an operation and observes the consequence in different windows, our integrated system removes the memory boundary between two different visualization algorithms. With continuous curved lines, data trends can be tracked easily.
Animation is implemented in both processes of SPPC. First, in the point/line transition process, and second in the DIMDS dimension addition/subtraction. In both processes of SPPC, animation has been implemented, e.g. in the point/line transition process and in the DIMDS dimension addition/substraction process. The animation plays a critical role. By checking the dynamics during the transition, the user can understand the underlying relationship by interactively modifying the visualization conditions. Based on the feedback from a preliminary testing by several selected users, animation is one of the most important features that helps the users understand data.
Note that many techniques have been developed for parallel coordinates to help the user in exploring clusters and correlations in the data, (e.g. clustering techniques, aggregation techniques). They many not be compatible with the the point representation part of the SPPC approach. However, such techniques can still be applied on the line representation portion of the SPPC to facilitate data comprehension. Methods such as grand tour [3], [38] which examines structure of high dimensional data from all possible angles, and projection pursuit [14] which only shows important aspects of high dimensional space, can be applied to our system.
DIMDS creates new configurations starting from a scatterplot of neighboring axis whose Stress value is zero. The incremental process only increases Stress a little bit each time. This allows us to create configurations with very low Stress values within only a very few steps of iteration. Together with the improvement through the introduction of the GPU acceleration, the interactive DIMDS greatly helps the user with data exploration and leads to new discoveries in the data domain, which is very difficult to achieve with a non-interactive MDS system. The spring model in DIMDS, although having O(n3) complexity, presents a reversible feature in the process of incrementally creating an MDS configuration. That is, by subtracting the most recently added dimensions, we are able to come back to the previous state.
As mentioned earlier, scatterplots linking to parallel coordinates has been well established. Based on feedback, users prefer our approach, mainly because of SPPC reduces content switching which is unavoidable in the multiple coordinated windows approaches. Animation of the point/line transition and the automatic background curved line fading in/out also receives positive comments for their contribution to the integration of two visual metaphors.
SECTION 7
## Conclusions and Future Work
The visualization approach as presented in this paper shows that Scattering Points in Parallel Coordinates is promising to improve usability in visualizations of large multidimensional data. By closely integrating scatterplots or multidimensional scaling plots into the drawing of parallel coordinates, an efficient method of interacting with visualization systems is possible. With our seamless design of transition between parallel coordinate representation and scattered point representation, switching between the two visual forms requires very little cost. Unform brushing enables user navigation and selection throughout the multidimensional data visualization domain.
There are many algorithms that can project data from n-D to 2-D. We would like to investigate other projection methods, including PCA, in the future. In line regions, existing parallel coordinate algorithms can still be applied to improve the system. We would like to apply several parallel coordinate techniques in our system, e.g. edge clustering [44], illustrative parallel coordinates [29]. We would also like to further investigate how other multidimensional visualization forms can be closely integrated with parallel coordinates. Finally, a more thorough user study will be conducted to further investigate the usability of our system, especially the comparison between SPPC and traditional multiple coordinated windows methods.
### Acknowledgments
This work is supported by Beijing Municipal Natural Science Foundation (No. 4092021), MOST 2009CB320903, RFDP 200800011004, Key Project of Chinese Ministry of Education (No. 109001) and HK RGC CERG 618706. We thank Dr. Youyong Lu from Peking University School of Oncology for providing the DNA microarray data. We thank Finlay Mungall for proofreading. We are grateful to the anonymous reviewers, whose valuable suggestions greatly improved this paper.
## Footnotes
Xiaoru Yuan, Peihong Guo and He Xiao are with the Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, Beijing, P.R. China. E-mail: [email protected], [email protected], [email protected]
Hong Zhou and Huamin Qu are with the Department of Computer Science and Engineering at Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail: [email protected], [email protected]
Manuscript received 31 March 2009; accepted 27 July 2009; posted online 11 October 2009; mailed on 5 October 2009.
## References
1. Similarity clustering of dimensions for an enhanced visualization of multidimensional data.
M. Ankerst, S. Berchtold and D. A. Keim
In Proceedings of the IEEE InfoVis'98, pages 52–60, 1998.
2. Uncovering clusters in crowded parallel coordinates visualizations.
A. O. Artero, M. C. F. de Oliveira and H. Levkowitz
In Proceedings of the IEEE InfoVis'04, pages 81–88, 2004.
3. The grand tour: a tool for viewing multidimensional data.
D. Asimov
SIAM J. Sci. Stat. Comput., 6 (1): 128–143, 1985.
4. Continuous scatterplots.
S. Bachthaler and D. Weiskopf
IEEE Trans. Vis. Comput. Graph., 14 (6): 1428–1435, 2008.
5. Guidelines for using multiple views in information visualization.
M. Q. W. Baldonado, A. Woodruff and A. Kuchinsky
In Proceedings of AVI'00, pages 110–119. ACM, 2000.
6. Incremental multidimensional scaling method for database visualization.
W. Basalaj
In Proceedings of Visual Data Exploration and Analysis VI, SPIE, pages 149–158, 1999.
7. Parallel sets: visual analysis of categorical data.
F. Bendix, R. Kosara and H. Hauser
In Proceedings of the IEEE InfoVis'05, pages 133–140, 2005.
8. Visualizing fuzzy points in parallel coordinates.
M. R. Berthold and L. O. Hall
IEEE Trans. Fuzzy Sys., 11 (3): 369–374, 2003.
9. Spring View: cooperation of radviz and parallel coordinates for view optimization and clutter reduction.
E. Bertini, L. Dell'Aquila and G. Santucci
In Proceedings of CMV'05, pages 22–29, Jul. 2005.
10. A class of local interpolating splines. In R. Barn-hill and R. Riesenfe, editors,
E. Catmull and R. Rom
Computer Aided Geometric Design, pages 317–326, New York, 1974. Academic Press.
11. A linear iteration time layout algorithm for visualising high-dimensional data.
M. Chalmers
In Proceedings of the IEEE Visualization'96, pages 127–133, 1996.
12. Dynamic Graphics for Statistics
W. C. Cleveland and M. E. McGill
CRC Press, Inc., Boca Raton, FL, USA, 1988.
13. Coordinated graph and scatter-plot views for the visual exploration of microarray time-series data.
P. Craig and J. Kennedy
In Proceedings of the IEEE InfoVis'03, pages 197–201, 2003.
14. Projection pursuit techniques for the visualization of high dimensional datasets.
S. L. Crawford and T. C. Fall
Visualization in Scientific Computing, pages 94–108, 1990.
15. Enabling automatic clutter reduction in parallel coordinate plots.
G. Ellis and A. Dix
IEEE Trans. Vis. Comput. Graph., 12 (5): 717–724, 2006.
16. Rolling the dice: Multidimensional visual exploration using scatterplot matrix navigation.
N. Elmqvist, P. Dragicevic and J.-D. Fekete
IEEE Trans. Vis. Comput. Graph., 14 (6): 1539–1148, 2008.
17. An interactive 3d integration of parallel coordinates and star glyphs.
E. Fanea, S. Carpendale and T. Isenberg
In Proceedings of the IEEE InfoVis'05, pages 149–156, 2005.
18. Hierarchical parallel coordinates for exploration of large datasets.
Y.-H. Fua, M. O. Ward and E. A. Rundensteiner
In Proceedings of the IEEE Visualization'99, pages 43–50, 1999.
19. Using curves to enhance parallel coordinate visualisations.
M. Graham and J. Kennedy
In Proceedings of the Intl. Conf. on Information Visualization, pages 10–16, Jul. 2003.
20. Angular brushing of extended parallel coordinates.
H. Hauser, F. Ledermann and H. Doleisch
In Proceedings of the IEEE InfoVis'02, pages 127–130, 2002.
21. Dna visual and analytic data mining.
P. Hoffman, G. Grinstein, K. Marx, I. Grosse and E. Stanley
In Proceedings of the IEEE Visualization'97, pages 437–441, 1997.
22. Nvidia CUDA programming guide.
N. Inc
23. Glimmer: Multilevel mds on the gpu.
S. Ingram, T. Munzner and M. Olano
IEEE Trans. Vis. Comput. Graph., 15 (2): 249–261, 2009.
24. The plane with parallel coordinates.
A. Inselberg
The Visual Computer, 1 (2): 69–91, 1985.
25. Parallel coordinates: a tool for visualizing multi-dimensional geometry.
A. Inselberg and B. Dimsdale
In Proceedings of the IEEE Visualization'90, pages 361–378, 1990.
26. 3-dimensional display for clustered multi-relational parallel coordinates.
J. Johansson, M. Cooper and M. Jern
In Proceedings of the Intl. Conf. on Information Visualization, pages 188–193, 2005.
27. Revealing structure within clustered parallel coordinates displays.
J. Johansson, P. Ljung, M. Jern and M. Cooper
In Proceedings of the IEEE InfoVis'05, pages 125–132, 2005.
28. Parallel sets: interactive exploration and visual analysis of categorical data.
R. Kosara, F. Bendix and H. Hauser
IEEE Trans. Vis. Comput. Graph., 12 (4): 558–568, 2006.
29. Illustrative parallel coordinates.
K. T. McDonnell and K. Mueller
Computer Graphics Forum, 27 (3): 1031–1038, 2008.
30. Visually effective information visualization of large data.
M. Novotny
In Proceedings of CESCG'04, pages 41–48. CRC Press, 2004.
31. Outlier-preserving focus+context visualization in parallel coordinates.
M. Novotny and H. Hauser
IEEE Trans. Vis. Comput. Graph., 12 (5): 893–900, 2006.
32. Clutter reduction in multi-dimensional data visualization using dimension reordering.
W. Peng, M. O. Ward and E. A. Rundensteiner
In Proceedings of the IEEE InfoVis'04, pages 89–96, 2004.
33. Comparative multivariate visualization across conceptually different graphic displays.
C. Schmid and H. Hinterberger
In Proceedings of SSDBM'94, pages 42–51, 1994.
34. Direct manipulation of parallel coordinates.
H. Siirtola
In Proceedings of the Intl. Conf. on Information Visualization, pages 373–378, 2000.
35. Combining parallel coordinates with the reorderable matrix.
H. Siirtola
In Proceedings of CMV'03, pages 63–74, 2003.
36. Higher order parallel coordinates.
H. Theisel
In Proceedings of VMV'00, pages 415–420, 2000.
37. Visualizing the behaviour of higher dimensional dynamical systems.
R. Wegenkittl, H. Löffelmann and E. Gröller
In Proceedings of the IEEE Visualization'97, pages 119–125, 1997.
38. High dimensional clustering using parallel coordinates and the grand tour.
E. J. Wegman and Q. Luo
Computing Science and Statistics, 28: 352–360, 1997.
39. Edgelens: An interactive method for managing edge congestion in graphs.
N. Wong, S. Carpendale and S. Greenberg
In Proceedings of the IEEE InfoVis'03, pages 51–58, 2003.
40. Multiresolution multidimensional wavelet brushing.
P. C. Wong and R. D. Bergeron
In Proceedings of the IEEE Visualization'96, pages 141–148, 1996.
41. Multivariate visualization using metric scaling.
P. C. Wong and R. D. Bergeron
In Proceedings of the IEEE Visualization'97, pages 111–118, 1997.
42. Interactive hierarchical dimension ordering, spacing and filtering for exploration of high dimensional datasets.
J. Yang, W. Peng, M. O. Ward and E. A. Rundensteiner
In Proceedings of IEEE InfoVis'09, pages 105–112, 2003.
43. Splatting the lines in parallel coordinates.
H. Zhou, W. Cui, H. Qu, Y. Wu, X. Yuan and W. Zhou
Computer Graphics Forum, 28 (3): 759–766, 2009.
44. Visual clustering in parallel coordinates.
H. Zhou, X. Yuan, H. Qu, W. Cui and B. Chen
Computer Graphics Forum, 27 (3): 1047–1054, 2008.
## Cited By
No Citations Available
## Keywords
### IEEE Keywords
No Keywords Available
### More Keywords
No Keywords Available
No Corrections
## Media
No Content Available
This paper appears in:
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
Issue Date:
November/December 2009
On page(s):
929 - 936
ISBN:
1077-2626
Print ISBN:
N/A
INSPEC Accession Number:
10930717
Digital Object Identifier:
10.1109/TVCG.2009.179
Date of Current Version:
01 Nov, 2009
Date of Original Publication:
23 Sep, 2009
|
2016-04-29 16:40:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5265414118766785, "perplexity": 1861.8560543190463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111374.13/warc/CC-MAIN-20160428161511-00007-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://wsw.academickids.com/encyclopedia/index.php/Invalid_proof
|
# Invalid proof
In mathematics, there are a variety of spurious proofs of obvious contradictions. Although the proofs are flawed, the errors are comparatively subtle, usually by design. These fallacies are normally regarded as mere curiosities, but can be used to show the importance of rigor in mathematics.
Most of these proofs depend on some variation of the same error. The error is to take a function f that is not one-to-one, to observe that f(x) = f(y) for some x and y, and to (erroneously) conclude that therefore x = y. Division by zero is a special case of this; the function f is xx × 0, and the erroneous step is to start with x × 0 = y × 0 and to conclude that therefore x = y.
Contents
## Examples
### Proof that 1 equals −1
[itex]-1 = -1\ [itex]
Then we convert these into vulgar fractions
[itex]\frac{1}{-1} = \frac{-1}{1}[itex]
Applying square roots on both sides gives
[itex]\sqrt{\frac{1}{-1}} = \sqrt{\frac{-1}{1}}[itex]
Which is equal to
[itex]\frac{\sqrt{1}}{\sqrt{-1}} = \frac{\sqrt{-1}}{\sqrt{1}}[itex]
If we now clear fractions by multiplying both sides by [itex]\sqrt{-1}[itex] and then [itex]\sqrt{1}[itex], we have
[itex]\sqrt{1}\sqrt{1} = \sqrt{-1}\sqrt{-1}[itex]
But any number's square root squared gives the original number, so
[itex]1 = -1\ [itex]
This proof is invalid since it applies the following principle for square roots wrongly:
[itex]\sqrt{\frac{x}{y}} = \frac{\sqrt{x}}{\sqrt{y}}[itex]
This principle is only correct when the product of x and y is a positive number. In the "proof" above, this is not the case. Thus the proof is invalid.
### Proof that 1 is less than 0
Let us suppose that
[itex]x < 1[itex]
Now we will take the logarithm on both sides. As long as x > 0, we can do this because logarithms are monotonically increasing. Observing that the logarithm of 1 is 0, we get
[itex]\ln x < 0[itex]
Dividing by ln x gives
[itex]1 < 0[itex]
The violation is found in the last step, the division. This step is wrong because the number we are dividing by is negative, which in turn is because the argument to the logarithm is less than 1, our original assumption. A multiplication with or division by a negative number flips the inequality sign; in other words, we should obtain 1 > 0, which is indeed correct.
### Proof that 2 equals 1
Let a and b be equal quantities. It follows that:
1. [itex]a = b[itex]
2. [itex]a^2 = ab[itex]
3. [itex]a^2 - b^2 = ab - b^2[itex]
4. [itex](a - b)(a + b) = b(a - b)[itex]
5. [itex]a + b = b[itex]
6. [itex]b + b = b[itex]
7. [itex]2b = b[itex]
8. [itex]2 = 1[itex]
The fallacy is in line 5: the progression from line 4 to line 5 involves division by ab, which is zero since a equals b. Since division by zero is undefined, the argument is invalid.
### Proof that a equals b
[itex]a - b = c[itex]
• now, square both sides:
[itex]a^2 - 2ab + b^2 = c^2[itex]
• since [itex]a - b = c[itex], substitute:
[itex]a^2 - 2ab + b^2 = (a - b)c[itex]
• write out the multiplication:
[itex]a^2 - 2ab + b^2 = ac - bc[itex]
• rearranging all, we get:
[itex]a^2 - ab - ac = ab - b^2 - bc[itex]
• factorize both members:
[itex]a(a - b - c) = b(a - b - c)[itex]
• cancel the common factor:
[itex]a = b[itex]
The catch is that since ab = c, abc = 0, and as a result we have performed an illegal division by zero.
### Proof that 0 equals 1
The following is a "proof" that 0 equals 1:
[itex]0[itex] [itex]=[itex] [itex]0 + 0 + 0 + \ldots[itex] [itex]=[itex] [itex](1 - 1) + (1 - 1) + (1 - 1) + \ldots[itex] [itex]=[itex] [itex]1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + \ldots[itex] (associative law) [itex]=[itex] [itex]1 + 0 + 0 + 0 + \ldots[itex] [itex]=[itex] [itex]1[itex]
The error here is that the associative law cannot be applied freely to an infinite sum unless the sum would converge without any parentheses. In this particular argument, the second line gives the sequence of partial sums 0, 0, 0, ... (which converges to 0) while the third line gives the sequence of partial sums 1, 1, 1, ... (which converges to 1), so it is unclear in what sense these expressions can be considered equal.
### Another proof that any number equals zero
Obviously, 4 * 3 = 3 + 3 + 3 + 3
In general, X * Y = Y + Y + ... + Y (X terms, for any number X and Y)
Taking the derivative with respect to X, we get: Y = 0 + 0 + ... + 0 (X terms)
In other words Y = 0, for any number Y.
The error here is that the second line only makes sense if X is a natural number. However, if it is, then the second line is not a continuous function and therefore its derivative cannot be taken.
## Conclusion
These arguments do constitute valid proofs, but not of the claimed assertions. For example, there is no a priori reason why division by zero should be defined (it's not a field axiom, for example, though 1 ≠ 0, from which 2 ≠ 1 follows, is an axiom), and the "proof" that 2 = 1 is, in fact, simply a demonstration that division by zero cannot be defined in general. A proof that division by zero could be defined would demonstrate a contradiction and show that the axiomatic system we are working under is logically inconsistent!
• Art and Cultures
• Countries of the World (http://www.academickids.com/encyclopedia/index.php/Countries)
• Space and Astronomy
|
2021-06-24 09:36:16
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701648712158203, "perplexity": 632.0798020630126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488552937.93/warc/CC-MAIN-20210624075940-20210624105940-00095.warc.gz"}
|
https://unapologetic.wordpress.com/2010/01/19/a-lemma-on-reflections/?like=1&source=post_flair&_wpnonce=fe7f791e1e
|
# The Unapologetic Mathematician
## A Lemma on Reflections
Here’s a fact we’ll find useful soon enough as we talk about reflections. Hopefully it will also help get back into thinking about linear transformations and inner product spaces. However, if the linear algebra gets a little hairy (or if you’re just joining us) you can just take this fact as given. Remember that we’re looking at a real vector space $V$ equipped with an inner product $\langle\underline{\hphantom{X}},\underline{\hphantom{X}}\rangle$.
Now, let’s say $\Phi$ is some finite collection of vectors which span $V$ (it doesn’t matter if they’re linearly independent or not). Let $\sigma$ be a linear transformation which leaves $\Phi$ invariant. That is, if we pick any vector $\phi\in\Phi$ then the image $\sigma(\phi)$ will be another vector in $\Phi$. Let’s also assume that there is some $n-1$-dimensional subspace $P$ which $\sigma$ leaves completely untouched. That is, $\sigma(v)=v$ for every $v\in P$. Finally, say that there’s some $\alpha\in\Phi$ so that $\sigma(\alpha)=-\alpha$ (clearly $\alpha\notin P$) and also that $\Phi$ is invariant under $\sigma_\alpha$. Then I say that $\sigma=\sigma_\alpha$ and $P=P_\alpha$.
We’ll proceed by actually considering the transformation $\tau=\sigma\sigma_\alpha$, and showing that this is the identity. First off, $\tau$ definitely fixes $\alpha$, since
$\displaystyle\tau(\alpha)=\sigma\left(\sigma_\alpha(\alpha)\right)=\sigma(-\alpha)=-(-\alpha)=\alpha$
so $\tau$ acts as the identity on the line $\mathbb{R}\alpha$. In fact, I assert that $\tau$ also acts as the identity on the quotient space $V/\mathbb{R}\alpha$. Indeed, $\sigma_\alpha$ acts trivially on $P_\alpha$, and every vector in $V/\mathbb{R}\alpha$ has a unique representative in $P_\alpha$. And then $\sigma$ acts trivially on $P$, and every vector in $V/\mathbb{R}\alpha$ has a unique representative in $P$.
This does not, however, mean that $\tau$ acts trivially on any given complement of $\mathbb{R}\alpha$. All we really know at this point is that for every $v\in V$ the difference between $v$ and $\tau(v)$ is some scalar multiple of $\alpha$. On the other hand, remember how we found upper-triangular matrices before. This time we peeled off one vector and the remaining transformation was the identity on the remaining $n-1$-dimensional space. This tells us that all of our eigenvalues are ${1}$, and the characteristic polynomial is $(T-1)^n$, where $n=\dim(V)$. We can evaluate this on the transformation $\tau$ to find that $(\tau-1)^n=0$
Now let’s try to use the collection of vectors $\Phi$. We assumed that both $\sigma$ and $\sigma_\alpha$ send vectors in $\Phi$ back to other vectors in $\Phi$, and so the same must be true of $\tau$. But there are only finitely many vectors (say $k$ of them) in $\Phi$ to begin with, so $\tau$ must act as some sort of permutation of the $k$ vectors in $\Phi$. But every permutation in $S_k$ has an order that divides $k!$. That is, applying $\tau$ $k!$ times must send every vector in $\Phi$ back to itself. But since $\Phi$ is a spanning set for $V$, this means that $\tau^{k!}=1$, or that $\tau^{k!}-1=0$
So we have two polynomial relations satisfied by $\tau$, and $\tau$ will clearly satisfy any linear combination of these relations. But Euclid’s algorithm shows us that we can write the greatest common divisor of these relations as a linear combination, and so $\tau$ must satisfy the greatest common divisor of $T^{k!}-1$ and $(T-1)^n$. It’s not hard to show that this greatest common divisor is $T-1$, which means that we must have $\tau-1=0$ or $\tau=1$.
It’s sort of convoluted, but there are some neat tricks along the way, and we’ll be able to put this result to good use soon.
January 19, 2010 - Posted by | Algebra, Geometry, Linear Algebra
2. You’ve a “$lateh_\alpha$” which ought to be fixed.
|
2015-08-29 03:04:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 77, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638364911079407, "perplexity": 74.67934064050712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064167.28/warc/CC-MAIN-20150827025424-00167-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://socratic.org/questions/a-student-walks-and-jogs-to-college-each-day-the-student-averages-5-km-h-walking
|
# A student walks and jogs to college each day. The student averages 5 km/h walking and 9 km/h jogging. The distance from home to college is 8 km, and the student makes the trip in 1 hour. How far does the student jog?
Feb 7, 2016
6.75 km
#### Explanation:
Let $x$ be the duration spent jogging.
Since the total travel duration is 1 hour, $1 - x$ is the duration spent walking
$d = s t$
The distance covered by jogging is
$\implies {d}_{j} = 9 x$
Meanwhile, the distance covered by walking is
${d}_{w} = 5 \left(1 - x\right)$
Since the total distance covered is 8 km, we have
$\implies 9 x + 5 \left(1 - x\right) = 8$
$\implies 9 x + 5 - 5 x = 8$
$\implies 4 x = 3$
$\implies x = \frac{3}{4}$
We want the distance covered by jogging,
${d}_{j} = 9 x$
${d}_{j} = 9 \left(\frac{3}{4}\right)$
${d}_{j} = \frac{27}{4} = 6 \frac{3}{4} = 6.75$
|
2019-09-22 22:22:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579744100570679, "perplexity": 2349.955416109479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575751.84/warc/CC-MAIN-20190922221623-20190923003623-00003.warc.gz"}
|
http://www.gamedev.net/index.php?app=forums&module=extras§ion=postHistory&pid=5054523
|
• Create Account
### #ActualBoulougou
Posted 18 April 2013 - 06:03 AM
I had a quick look at the pdf. Are you sure phi = 50 is reasonable, the pdf divides the RD by 173.7178 (for some reason) to get phi.
Thanks for the reply, you are right. Silly mistake, I should have first scale the deviation before applying the formula and then rescale again as the document says. The correct value for the new deviation is 124.646889. That seems more reasonable.
However, this means that it takes 10 years(120 rating periods, assuming one rating period is one month) for deviation to increase from 50 to 124.646889. What if I would like to keep the same rating period(one month), but I would like deviation to increase more rapidly? I think Glicko1 had some constant that could adjust this. I cannot find a similar constant for Glicko2. It has to be some way, otherwise the algorithm would be quite constraining.
### #2Boulougou
Posted 18 April 2013 - 06:02 AM
I had a quick look at the pdf. Are you sure phi = 50 is reasonable, the pdf divides the RD by 173.7178 (for some reason) to get phi.
Thanks for the reply, you are right. Silly mistake, I should have first scale the deviation before applying the formula and then rescale again. The correct value for the new deviation is 124.646889. That seems more reasonable.
However, this means that it takes 10 years(120 rating periods, assuming one rating period is one month) for deviation to increase from 50 to 124.646889. What if I would like to keep the same rating period(one month), but I would like deviation to increase more rapidly? I think Glicko1 had some constant that could adjust this. I cannot find a similar constant for Glicko2. It has to be some way, otherwise the algorithm would be quite constraining.
### #1Boulougou
Posted 18 April 2013 - 05:49 AM
I had a quick look at the pdf. Are you sure phi = 50 is reasonable, the pdf divides the RD by 173.7178 (for some reason) to get phi.
Thanks for the reply, you are right. Silly mistake, I should have first scale the deviation before applying the formula. The correct value for the new deviation is φ' = 124.646889. That seems more reasonable.
However, this means that it takes 10 years(120 rating periods, assuming one rating period is one month) for deviation to increase from 50 to 124.646889. What if I would like to keep the same rating period(one month), but I would like deviation to increase more rapidly? I think Glicko1 had some constant that could adjust this. I cannot find a similar constant for Glicko2. It has to be some way, otherwise the algorithm would be quite constraining.
PARTNERS
|
2014-10-26 05:23:39
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9134979248046875, "perplexity": 1092.4126839710095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119655893.49/warc/CC-MAIN-20141024030055-00292-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://puszcza.gnu.org.ua/bugs/?429
|
## tex4ht - Bugs: bug #429, htlatex: Does not delete...
Show feedback again
You are not allowed to post comments on this tracker with your current authentification level.
## bug #429: htlatex: Does not delete intermediate files
Submitted by: Hilmar Preusse Submitted on: Tue Jul 2 16:12:16 2019 Category: None Priority: 5 - Normal Severity: 3 - Minor Status: None Privacy: Public Assigned to: None Open/Closed: Closed
Sun Jul 28 16:29:55 2019, comment #5:
OK, I accept your explanation. Feel free to close the case. Thanks!
Hilmar Preusse <hpreusse>
Fri Jul 5 13:13:13 2019, comment #4:
You can use a simple wrapper around htlatex if you don't mind the issues I described in my previous post. This version is for Unix based OSes:
#!/bin/bash
htlatex "$@" base=${1%.tex}
rm $base.dvi rm$base.idv
rm $base.lg rm$base.tmp
# following files should be kept IMHO
# the log file can be usefull for error investigation
rm $base.log # these files are reused between TeX runs, it is not a good idea to delete them rm$base.4tc
rm $base.4ct rm$base.aux
rm \$base.xref
I would say that deleting the dvi, idv, lg and tmp files should be quite safe, as they are generated every time from the scratch and the subsequent compilations don't depend on them.
Michal Hoftich <michal_h21>
Fri Jul 5 11:36:59 2019, comment #3:
I'm aware that TeX related programs are not very good in deleting intermediate files. However in this case we speak about htlatex, some kind of wrapper script. I would expect that this script cares about intermediate files, which are not needed any more.
Note that we speak about the files, which were created by tex4ht & t4ht i.e. the files generated by the post-processor.
Feel free to lower that bug to wish list and introduce a clear switch if you think it is not a good idea to delete all intermediate files by default.
Many thanks,
Hilmar
Hilmar Preusse <hpreusse>
Wed Jul 3 07:58:02 2019, comment #2:
Hi Hilmar, as Karl and Nasser said, there is no cleanup performed by htlatex or other tex4ht scripts by default. The auxilary files are in fact quite important for the correct conversion, so it is not a good idea to delete them after every compilation. For example some complex tables may need more than three compilations by LaTeX to correctly resolve the structure, so you would never get a correct result with removed aux files.
Michal Hoftich <michal_h21>
Tue Jul 2 21:25:07 2019, comment #1:
Hi Hilmar - What cleanup is attempted now? I wasn't aware that htlatex etc. removed anything.
In any case, I don't think it would be a good idea to change the behavior at this late date. Those intermediate files can be helpful for debugging and bug reports. I could imagine adding an option for such cleaning (if there isn't one already ... I haven't checked).
Thanks for the suggestion.
Karl Berry <karl>
Tue Jul 2 16:12:16 2019, original submission:
I've noticed that running
htlatex test.tex "xhtml,ooffice,enumerate+" "ooffice/! -cunihtf -utf8" "-coo"
pollutes current directory with the following intermediate files:
test.4ct
test.4tc
test.dvi
test.idv
test.tmp
test.xref
As htlatex is actually trying to do some cleanup, I think these should be removed as well. I only expect the following files to stay:
test.lg
test.odt
Test input file:
\documentclass{article}
\begin{document}
test
\end{document}
Hilmar Preusse <hpreusse>
No files currently attached
Depends on the following items: None found
Items that depend on this one: None found
Carbon-Copy List
• -unavailable- added by michal_h21 (Posted a comment)
• -unavailable- added by karl (Posted a comment)
• -unavailable- added by hpreusse (Submitted the item)
•
Do you think this task is very important?
This task has 0 encouragements so far.
Only logged-in users can vote.
Please enter the title of George Orwell's famous dystopian book (it's a date):
|
2022-10-04 09:07:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7065954804420471, "perplexity": 6731.480814474677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00435.warc.gz"}
|
https://adamkucz.github.io/psych548/assignments/A01/Assignment1.html
|
Please submit the .Rmd file and all output files (.html, .pdf, etc.)
## Some quick problems
1. Create an object called myObject and assign it a value between 1 and 100
2. Add 13 to myObject, making sure the object itself stores the updated value
3. Is myObject divisible by 2? by 3? by 13? by 21? Use R code to get the answer.
4. How many times can 5 fit in myObject?
5. Add myObject to every element of a vector with values 1, 2, 3, 4, and 5
6. Fix this code (without changing any numbers) to get it to return 8
5+3^3%/%2
## R Markdown Practice
Create an R markdown document with the following components:
• A custom title (i.e., not “Untitled”) and your name
• Change the theme from the default theme
• An in-line R calculation
• Write the equation for the probability density function the normal distribution (R Markdown renders LaTeX style math, so Google will be your friend here!) $\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$
• A block of R code (whatever you want) that is displayed, run, but does not show output in the rendered document
• A block of R code that is displayed, but is not run and thus does not show output in the rendered document
• A block of R code that is not shown, but produces output in the rendered document
• An ordered list
1. This
2. Is
3. an ordered
4. list!
• An unordered list
• this is an
• unordered list!
• A link to an external website like UW’s website
• A plot, 5in $$\times$$ 7in that is right-aligned in the rendered document (hint: if you need help generating a random plot, copy example code at the bottom of ?plot)
• A picture
• Bold text
• Italicized text
• Look online for an R package that might be useful for you in the future! This can be a package for conducting a specific type of analysis, creating certain figures, or anything else! Load the package in your markdown script and give me a two-sentence overview of the package.
• Bonus:
• Underlined text
• Red text
• White text with a red background
• Render a MS Word document
• Render a PDF document
|
2023-02-06 02:43:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3270254135131836, "perplexity": 3117.9123943951686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500303.56/warc/CC-MAIN-20230206015710-20230206045710-00848.warc.gz"}
|
https://blog.quantinsti.com/vader-sentiment/
|
In Finance and Trading, a large amount of data is generated every day. This data comes in the form of News, Scheduled Economic releases, employment figures, etc. It is clear that the news has a great impact on the prices of stocks. Every trader takes great efforts in keeping track of the latest news and updates trade calls accordingly. Automating this task provides better trading opportunities.
In this blog, we are going to study what VADER Sentiment Analysis is and how to use it in our Algorithmic Trading Models using Python. Let us go through the topics first:
VADER is a less resource-consuming sentiment analysis model that uses a set of rules to specify a mathematical model without explicitly coding it. VADER consumes fewer resources as compared to Machine Learning models as there is no need for vast amounts of training data. VADER’s resource-efficient approach helps us to decode and quantify the emotions contained in streaming media such as text, audio or video. VADER doesn’t suffer severely from a speed-performance tradeoff.
VADER stands for Valence Aware Dictionary for sEntiment Reasoning. Don't worry if these words don't make any sense to you right now. By the end of this blog, you’ll have a strong grasp of what these words mean.
Moving on to the next section which discusses the classification accuracy of the VADER model and how VADER achieves it.
### What is the accuracy of VADER?
Study shows that VADER performs as good as individual human raters at matching ground truth.
Further inspecting the F1 scores (classification accuracy), we see that VADER (0.96) outperforms individual human raters (0.84) at correctly labelling the sentiment of tweets into positive, neutral, or negative classes.
The reason behind this is that VADER is sensitive to both Polarity (whether the sentiment is positive or negative) and Intensity (how positive or negative is sentiment) of emotions.
VADER incorporates this by providing a Valence Score to the word into consideration. This brings us to the next section.
### What is Valence Score?
It is a score assigned to the word under consideration by means of observation and experiences rather than pure logic.
• Consider the words 'terrible' , 'hopeless', 'miserable'. Any self-aware Human would easily gauge the sentiment of these words as Negative.
• While on the other side, words like 'marvellous', 'worthy', 'adequate' are signifying positive sentiment.
According to the academic paper on VADER, the Valence score is measured on a scale from -4 to +4, where -4 stands for the most ‘Negative’ sentiment and +4 for the most ‘Positive’ sentiment. Intuitively one can guess that midpoint 0 represents ‘Neutral’ Sentiment, and this is how it is defined actually too.
### How does VADER calculate the Valence score of an input text?
VADER relies on a dictionary that maps words and other numerous lexical features common to sentiment expression in microblogs.
These features include:
• A full list of Western-style emoticons ( for example - :D and :P )
• Sentiment-related acronyms ( for example - LOL and ROFL )
• Commonly used slang with sentiment value ( for example - Nah and meh )
Manually creating a thorough sentiment dictionary is a labour-intensive and sometimes error-prone process. Thus it is no wonder that many NLP researchers rely so heavily on existing dictionaries as primary resources.
Without going into deep technical details, here's a two-step process breakdown for creating such a dictionary.
Researchers working on VADER confirmed the general applicability of these lexical features responsible for sentiments using a 'Wisdom of the Crowd' (WotC) approach.
WotC relies on the idea that the collective knowledge of a group of people as expressed through their aggregated opinions can be trusted as an alternative to expert knowledge. This helped them acquire a valid point estimate for the sentiment valence score of each context-free text.
Amazon Mechanical Turk (MTurk) is one such famous crowdsourcing marketplace where distributed expert raters perform tasks like rating speeches remotely.
Valence score of some context-free text are:-
• Positive Valence: "okay" is 0.9 "good" is 1.9, and "great" is 3.1,
• Negative Valence: "horrible" is –2.5, emoticon ' :( ' is –2.2, and "sucks" and it's slang derivative "sux" are both –1.5.
### How does VADER calculate the Valence score of an input sentence?
VADER makes use of certain rules to incorporate the impact of each sub-text on the perceived intensity of sentiment in sentence-level text. These rules are called Heuristics. There are 5 of them.
• NOTE for Advance readers: These heuristics go beyond what would normally be captured in a typical bag-of-words model. They incorporate word-order sensitive relationships between terms.
Five Heuristics are explained below: -
1. Punctuation, namely the exclamation point (!), increases the magnitude of the intensity without modifying the semantic orientation. For example: “The weather is hot!!!” is more intense than “The weather is hot.”
2. Capitalization, specifically using ALL-CAPS to emphasize a sentiment-relevant word in the presence of other non-capitalized words, increases the magnitude of the sentiment intensity without affecting the semantic orientation. For example: “The weather is HOT.” conveys more intensity than “The weather is hot.”
3. Degree modifiers (also called intensifiers, booster words, or degree adverbs) impact sentiment intensity by either increasing or decreasing the intensity. For example: “The weather is extremely hot.” is more intense than “The weather is hot.”, whereas “The weather is slightly hot.” reduces the intensity.
4. Polarity shift due to Conjunctions, The contrastive conjunction “but” signals a shift in sentiment polarity, with the sentiment of the text following the conjunction being dominant. For example: “The weather is hot, but it is bearable.” has mixed sentiment, with the latter half dictating the overall rating.
5. Catching Polarity Negation, By examining the contiguous sequence of 3 items preceding a sentiment-laden lexical feature, we catch nearly 90% of cases where negation flips the polarity of the text. For example a negated sentence would be “The weather isn't really that hot.”.
### Compound VADER scores for analyzing sentiment
The compound score is computed by summing the valence scores of each word in the lexicon, adjusted according to the rules, and then normalized to be between -1 (most extreme negative) and +1 (most extreme positive). This is the most useful metric if you want a single unidimensional measure of sentiment for a given sentence.
As explained in the paper, researchers used below normalization.
$$x = \frac{x}{\sqrt {x^2 + \alpha}}$$
where x = sum of valence scores of constituent words, and α = Normalization constant (default value is 15)
### Python implementation of VADER - Environment Setup
VADER has been ported to other programming languages too. One can check them by clicking on this link.
Standard Python distribution doesn't come bundled with the VADER module. We'll be using the popular Python package installer, pip to do so.
A package contains all the files you need for a module. Modules are Python code libraries you can include in your project. We use the following code in Anaconda terminal to install VADER.
VADER has been included in the NLTK package itself. Module NLTK is used for natural language processing. NLTK is an acronym for Natural Language Toolkit and is one of the leading platforms for working with human language data. Alternatively one may use.
## Demo using sentences explaining 5 Heuristics
Once done with the environment set-up, it's time to get your hands dirty. I'll be using some sentences similar to those used to explain the 5 heuristics and you can yourself see the algorithm throwing different scores.
Historically, traders around the globe have been relying on the news related to relevant instruments and markets in general while making trade calls. Manual trading was subjected to risk that arises from the personal biases and emotional response that the trader might have due to any news floating around. With the advent of Algorithmic Trading, such risks were minimized. As the competition intensified, traders started coming up with new techniques to have an edge over other traders.
Incorporating sentiment analysis into algorithmic trading models is one of those emerging trends. Smart traders started using the sentiment scores generated by analyzing various headlines and articles available on the internet to refine their trading signals generated from other technical indicators.
The best part is everything from scraping news to getting sentiment scores can be automated today very easily with a few lines of code. It's up to the trader's creativity, how to make most out of the techniques available.
We'll build a simple model using Simple Moving Averages as our primary technical indicator and then use VADER sentiment scores to refine our trade calls.
AMD has been releasing some really good products ever since Dr. Lisa Su (alumnus of MIT) took over the CEO position. Due to her strong leadership skills, the company has bounced back from being heavily debt-ridden to being one of the most traded stocks for the past few years in the US S&P 500 index.
I thought it would be interesting to see how the stock moves with its news sentiment in these harsh times when the future seems uncertain.
### Introducing Newsapi.org API
News API is a simple HTTP REST API that returns JSON files with breaking news headlines and search for articles from over 30,000 news sources and blogs.
One can search for articles with any combination of the following criteria:
• Keyword or phrase, Eg: find all articles containing the word 'AMD'.
• Date published, Eg: find all articles published yesterday.
• Source name, Eg: find all articles by 'TechCrunch'.
• Source domain name, Eg: find all articles published on gizmodo.com.
• Language, Eg: find all articles written in English.
One needs an API key to use the API - this is a unique key that identifies your requests.
The best part: They're free for development, open-source, and non-commercial use. You can get one here.
Generate your free API-key. Save it for future uses. Check the Documentation to fully explore this wonderful API.
### Using final compound VADER scores with threshold to generate trade calls
Considering the volatile behavior of markets these days, we'll use 0.20 as threshold value for making trade calls in our model.
### Merging Trade Signals with SMA at higher priority and VADER for refining
You did a great job of making it to the end of the blog, so give yourself a pat on the back.
## Conclusion
• Although we used SMA as our primary technical indicator, one won't face any hassle while using VADER with others too.
• Clearly, incorporating VADER sentiment analysis gave us an edge over raw SMA model and this speaks about the power of sentiment analysis in Algorithmic Trading.
• Note that, in case of conflict we prioritized SMA and took VADER signals only for refining purposes.
• Before deploying any algorithmic model, it's very important to backtest, add safeguards, paper trade and keep on optimizing.
Now that you know how VADER works, go on and experiment with it. Have Fun! :D
You have seen how sentiments have driven the markets in recent times. You can use natural language processing to devise new trading strategies using Twitter, news sentiment data in the course on Trading using Twitter Sentiment Analysis.
### Source and References
Disclaimer: All data and information provided in this article are for informational purposes only. QuantInsti® makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.
|
2021-07-27 22:37:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20983657240867615, "perplexity": 3278.759071156227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153491.18/warc/CC-MAIN-20210727202227-20210727232227-00567.warc.gz"}
|
https://www-cdf.fnal.gov/physics/new/top/2015/AFB_tt_CDF/index.html
|
# Combination of $$\afbtt$$ at CDF
### Lepton+Jets Final State
Dan Amidei, Myron Campbell, Ryan Edgar, Dave Mietlicki, Monica Tecchio, Jon S. Wilson, and Tom Wright
University of Michigan
Thomas A Schwarz
FNAL
Joey Huston
Michigan State University
### Dilepton Final State
Ziqing Hong, Dave Toback, and Jon S. Wilson
Texas A&M University
##### Public note: CDF11161
We present a combination of the measurements of the $$\afbtt$$ from CDF with lepton+jets and dilepton final states using the full dataset collected by the CDF II detector. The improved measurement is $$\afbtt = 0.160\pm0.045$$. The combined result is consistent with the NNLO SM calculation at $$\afbtt = 0.095 \pm 0.007$$. The differential $$\afbtt$$ as a function of $$|\dy|$$ in the two final states are also combined with a simultaneous fit, yielding a result of $$\alpha=0.227\pm0.057$$, which is $$2\sigma$$ higher than the NNLO SM calculation.
1 $\afbtt = \frac{N(\dy > 0) - N(\dy < 0)}{N(\dy > 0) + N(\dy < 0)}$ $\dy=y_{t}-y_{\bar{t}}$ $$\ttbar\rightarrow\ell\nu+jets$$: $$\ttbar\rightarrow\ell^{+}\ell^{-}+jets+\MET$$: Inclusive $$\afbtt$$ 2 Table of uncertainties of $$\afbtt$$ measurement with the lepton+jets and the dilepton final states. In the column of correlation, "0" indicates no correlation and "1" indicates fully positive correlation. 3 The combined $$\afbtt$$: $\afbtt = 0.160\pm0.045$ The weight of the lepton+jets result is 91%, and the weight of the dilepton result is 9%. The correlation between the two results is 10%. Differential $$\afbtt$$ vs. $$|\dy|$$ 4 The best fit of $$\afbtt=\alpha\cdot|\dy|$$ with measurements from both lepton+jets and dilepton final states. All correlations are taken into account. The bin centroids, differential $$\afbtt$$, as well as eigenvalues and eigenvectors of the covariance matrix is shown. 5 The best fit result is $\alpha = 0.227\pm0.057$ This result is $$2\sigma$$ larger than the NNLO SM calculation 6 Comparison of the slope $$\alpha$$ of $$\afbtt$$ vs. $$|\dy|$$ from various measurements.
Last updated :
|
2020-02-17 10:14:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6531423330307007, "perplexity": 2020.347806938185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141806.26/warc/CC-MAIN-20200217085334-20200217115334-00543.warc.gz"}
|
http://robust.cs.unm.edu/doku.php?id=people:andres_ruiz:project
|
# Robust-first Computing Wiki
### Site Tools
people:andres_ruiz:project
# Finding my place in the crowd: Group formation and localization in a virtual environment.
##### Abstract:
In the Movable Feast Machine, direct interaction is limited to small regions of space, but many computations could benefit from larger scale structure. This paper presents a simple aggregation and localization strategy allowing individuals to form communicating groups and find their own absolute positions within the group
##### Model Description:
Figure 1: External Link
In order to understand the model, it is first important to give a couple of definitions that will help in the complete realization of the problem. The density of the element means the amount of empty spaces that are allowed to exist between any pair of elements. The state of an element is basically a name that is given to the conditions that describe one element at an specific point of time. The attractor state is the state in which an ASDF element moves elements that are around and brings them close to where the element is located. The process of how this movement is done is expanded in the next sub-sections.
There is a two stage process that will successfully achieve the desire result of localization of each one of the elements inside their blob. These self-organization and localization stages are roughly described in section 3.1, and are given a more thorough treatment in section 3.2.
Section 3.1: Model Overview.
As mentioned earlier, the ASDF elements will self-assemble in different blobs that will then start their localization process based on where they consider their global position in the blob is. After several iterations of the localization process, convergence will be achieved and all the elements will then have an idea of where their place in the blob is. This place will be determined by the amount of cells that one element is away from the boundary elements. A boundary element is an element that at a certain coordinate (north, south, west or east) doesn’t have any neighbors towards that direction, i.e. all the neighbouring cells in that direction are empty. After the localization process, if there is some sort of interruption (like a nuke or some elements are moved by dreg), the elements would go back to the self-organization stage and then the localization process will be repeated.
We now briefly describe what an Event in the MfM architecture is because this will provide us with a useful framework to describe the behavior of the ASDF element in both, the organization and the localization stage. In its most essential case, an event is the means by which the mfm tells an atom that it is their turn to wake up, interact with the environment, change its internal and its neighbors’ state and go back to sleep, An event is assigned at random to each event, and there is no guarantee on the amount of events an atom is receiving however, because of probabilities and calculations it is true in practice that on average, all the elements will get an event after certain period of time.
Section 3.2. Model detailed description
We now give all the details on how both stages of the operations are performed, and we highlight the relevant points that allow our element to perform as expected.
Section 3.2.1: Self-organization process
The self-organization process is the basis that allows the final result to be achieved. The resulting state in which the elements are at the end of this stage, will determine how good or bad the localization process will turn out to be. The self-organization process occurs as follows, and it is summarized in algorithm 1.
Algorithm 1:
function self_organization():
if (I become attractor):
A[] = empty elements in ew.
B[] = ASDF elements in ew.
i = j = 0
while (empty_spots && elements_to_relocate):
change pos of element B[i] to A[j]
turn B[i] into attractor
increment i and j
In words, what the self-organizing procedure does is: at the beginning of an event, every element has some chances of becoming an attractor, if it turns out that the element does not become an attractor, the normal execution of the behavior function will continue; if on the contrary, the element does become an attractor, then it will gather information from it’s event window, this information includes all the elements that are empty, starting from radius 1 to radius 4, and other ADSF elements starting from radius 4 to radius 1. These two orderings of operations are important because they allow for the elements stay as close as “possible”. After having these two pieces of data, the element will start allocating the ASDF elements that were found on the outer layers into the inner layers of empty spaces, while also turning this elements into attractors creating some sort of a zombie effect.
After this step is done some more work has to be done in the density blahh..
Section 3.2.2: Localization process
In order to carry out the Localization Process (locp from now on), it is important to have the elements as close together as possible, or at least, as least spread as possible, because this will provide a better sense of self-awareness of each element’s position on the blob they belong to.
This second stage assumes that, in addition to determining whether an element is an attractor or not, the elements have to have some state stored in them. Each ASDF element has four counters for north, east, south and west which indicate their distance to the boundary of the enclosing square that is seen in the last lower square of figure 1. Algorithm 2 describes the process of localization.
Algorithm 2
function self_organization():
if (has_converged?):
if (this should check again):
if (! has_converged?):
set convergence flag OFF
alert neighbors of discrepancy
else:
this.count_north = north_n.count_south + 1
this.count_south = north_n.count_north + 1
this.count_east = north_n.count_west + 1
this.count_west = north_n.count_east + 1
# Now for each direction check the two directions
# orthogonal to it in order to determine convergence
# and to fix discrepancies.
# For north I should then check east and west and
# compare my north counter with them.
this.count_north = max(this.count_north, east_n.count_north, west_n.count_north)
# and perform this for the other three counts.
# after all has been done, check for convergence.
if (has_converged?):
set convergence flag ON
The locp can get confusing, which is why we will try to cover all the caveats that it brings along. It is helpful to look at figure 1 and understand this as a converge process, meaning all the elements in a blob will eventually have a correct value on each one of their counters, however throughout different iterations the values for some elements might be wrong. In fact, for the first iteration of each element, only the elements that adjoin with the enclosing rectangle will have the right value for only one coordinate, every other element will have their counters tweaked.
Think about the best way to explain the crap out of this!
Starting TLS failed
people/andres_ruiz/project.txt · Last modified: 2014/12/08 23:07 by afruizc
|
2023-01-31 10:34:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49047383666038513, "perplexity": 790.6768944364583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00037.warc.gz"}
|
https://greenriver-utah.com/online-slot-casino/markov-prozesse.php
|
# Markov Prozesse
Review of: Markov Prozesse
Reviewed by:
Rating:
5
On 24.06.2020
### Summary:
Ein Austria Online Casino gut funktioniert und ein gutes Spielerlebnis bietet. Jedes Jahr verschiedenste Awards. Diese Zeiten gelten auch fГr die Kontaktaufnahme per Live Chat.
Markov-Prozesse tauchen an vielen Stellen in der Physik und Chemie auf. Zum Vergleich mit dem Kalkül der stochastischen Differentialgleichungen, auf dem. Scientific Computing in Computer Science. 7 Beispiel: Markov-Prozesse. Ein Beispiel, bei dem wir die bisher gelernten Programmiertechniken einsetzen. Eine Markow-Kette ist ein spezieller stochastischer Prozess. Ziel bei der Anwendung von Markow-Ketten ist es, Wahrscheinlichkeiten für das Eintreten zukünftiger Ereignisse anzugeben.
## Markow-Kette
Scientific Computing in Computer Science. 7 Beispiel: Markov-Prozesse. Ein Beispiel, bei dem wir die bisher gelernten Programmiertechniken einsetzen. Den Poisson-Prozess haben wir als einen besonders einfachen stochastischen Prozess kennengelernt: Ausgehend vom Zustand 0 hält er sich eine. Markov-Prozesse verallgemeinern die- ses Prinzip in dreifacher Hinsicht. Erstens starten sie in einem beliebigen Zustand. Zweitens dürfen die Parameter der.
## Markov Prozesse Markov Processes And Related Fields Video
Beispiel einer Markov Kette: stationäre Verteilung, irreduzibel, aperiodisch?
Eine Markow-Kette (englisch. Eine Markow-Kette ist ein spezieller stochastischer Prozess. Ziel bei der Anwendung von Markow-Ketten ist es, Wahrscheinlichkeiten für das Eintreten zukünftiger Ereignisse anzugeben. Markov-Prozesse. Gliederung. 1 Was ist ein Markov-Prozess? 2 Zustandswahrscheinlichkeiten. 3 Z-Transformation. 4 Übergangs-, mehrfach. Markov-Prozesse verallgemeinern die- ses Prinzip in dreifacher Hinsicht. Erstens starten sie in einem beliebigen Zustand. Zweitens dürfen die Parameter der. Eine Markow-Kette englisch Markov chain ; auch Markow-Prozessnach Andrei Andrejewitsch Markow ; andere Schreibweisen Markov-KetteMarkoff-KetteMarkof-Kette ist ein spezieller stochastischer Prozess. Ziel bei der Anwendung von Markow-Ketten ist es, Wahrscheinlichkeiten für das Eintreten zukünftiger Ereignisse anzugeben. Die mathematische Formulierung im Falle einer endlichen Zustandsmenge benötigt lediglich den Begriff der Quotenvergleich Sportwetten Verteilung sowie der bedingten Wahrscheinlichkeitwährend im zeitstetigen Falle die Konzepte der Filtration sowie der bedingten Erwartung benötigt werden. In der Tat wird in der Normierungsbedingung ja auch nur über die und nicht über Verwaltungsbezirk In Japan, also z. In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. A Markov chain is a discrete-time process for which the future behavior only depends on the present and not the past state. Whereas the Markov process is the continuous-time version of a Markov chain. Markov Process. Markov processes admitting such a state space (most often N) are called Markov chains in continuous time and are interesting for a double reason: they occur frequently in applications, and on the other hand, their theory swarms with difficult mathematical problems. The Markov property. There are essentially distinct definitions of a Markov process. One of the more widely used is the following. On a probability space $(\Omega, F, {\mathsf P})$ let there be given a stochastic process $X (t)$, $t \in T$, taking values in a measurable space $(E, {\mathcal B})$, where $T$ is a subset of the real line $\mathbf R$. “Markov Processes International uses a model to infer what returns would have been from the endowments’ asset allocations. This led to two key findings ” John Authers cites MPI’s Ivy League Endowment returns analysis in his weekly Financial Times Smart Money column. The Annals Dart Ergebnisse, World Matchplay Probability. These conditional probabilities may be found by. Advances in Mathematics. Freidlin, "Random perturbations of dynamical systems"Springer Translated from Russian MR For the second straight year, Brown outperformed all other Ivy endowments by a large margin. More From Medium. A Markov chain with more than one state and just one out-going transition per state is either not irreducible Markov Prozesse not aperiodic, hence cannot be ergodic. Quantum Chromodynamics on the Lattice. But opting out of some of these cookies may have an effect on your browsing experience. It can be used to efficiently Arti Coming Soon Dalam Bahasa Indonesia the value of a policy and to solve not only Markov Decision Processes, but many other recursive problems. A user's web link transition on Markov Prozesse particular website can be modeled using Wild Growth Hearthstone or second-order Markov models and can 888 Poker Update used to make predictions regarding future navigation and to personalize the web page for an individual user. Stroock, S. At some point, it will not be profitable to continue staying in Sportwetten Ohne Steuer. Rubinstein; Dirk P. Sum Freispiele Book Of Dead each row is equal to 1. Tools What links here Related changes Special pages Printable version Permanent link Page information. From the time of the invention…. Scifi Mmorpg that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. Markov Reward Process : As the name suggests, MDPs are the Markov chains with values judgement. In simpler terms, it is a process StrichmГ¤nnchen MГ¤dchen which predictions can be made regarding future outcomes Wwwoddset solely on its Spiderman Solitaire Spiele Kostenlos state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.
### Dass die Kleiderordnung fГr Automatenspiele und Poker entspannter genommen wird, die GlГcksspiele Markov Prozesse anbieten. - Inhaltsverzeichnis
Darauf folgt der Start von Bedienzeiten Messi Gesperrt am Ende eines Zeitschrittes das Ende von Bedienzeiten. Markov-Prozesse. June ; DOI: /_4. 6/9/ · Markov-Prozesse verallgemeinern dieses Prinzip in dreifacher Hinsicht. Erstens starten sie in einem beliebigen Zustand. Zweitens dürfen die Parameter der Exponentialverteilungen ihrer Verweildauern von ihrem aktuellen Zustand abhängen. This is a preview of subscription content, log in to check access. Cite chapter. MARKOV PROZESSE 59 Satz Sei P(t,x,Γ) ein Ubergangskern und¨ ν ∈ P(E). Nehmen wir an, dass f¨ur jedes t ≥ 0 das Mass R P(t,x,·)ν(dx) straff ist (was zutrifft, wenn (E,r) vollst¨andig und separabel ist, siehe Hilfssatz ).
Dynkin, "Theory of Markov processes" , Pergamon Translated from Russian MR MR MR MR MR MR Zbl Dynkin, "Markov processes" , 1 , Springer Translated from Russian MR Zbl Gihman, A.
Skorohod, "The theory of stochastic processes" , 2 , Springer Translated from Russian MR MR Zbl Freidlin, "Markov processes and differential equations" Itogi Nauk.
Khas'minskii, "Principle of averaging for parabolic and elliptic partial differential equations and for Markov processes with small diffusion" Theor.
Venttsel', M. Freidlin, "Random perturbations of dynamical systems" , Springer Translated from Russian MR Blumenthal, R. Getoor, "Markov processes and potential theory" , Acad.
Press MR Zbl Getoor, "Markov processes: Ray processes and right processes" , Lect. Kuznetsov, "Any Markov process in a Borel space has a transition function" Theor.
Stroock, S. Varadhan, "Multidimensional diffusion processes" , Springer MR Zbl Chung, "Lectures from Markov processes to Brownian motion" , Springer MR Zbl Doob, "Stochastic processes" , Wiley MR MR Zbl Wentzell, "A course in the theory of stochastic processes" , McGraw-Hill Translated from Russian MR MR Zbl Kurtz, "Markov processes" , Wiley MR Zbl Feller, "An introduction to probability theory and its applications" , 1—2 , Wiley MR Zbl Wax ed.
Mathematically, we can define Bellman Equation as :. Now, the question is how good it was for the robot to be in the state s.
We want to know the value of state s. The value of state s is the reward we got upon leaving that state, plus the discounted value of the state we landed upon multiplied by the transition probability that we will move into it.
The above equation can be expressed in matrix form as follows :. Where v is the value of state we were in, which is equal to the immediate reward plus the discounted value of the next state multiplied by the probability of moving into that state.
Therefore, this is clearly not a practical solution for solving larger MRPs same for MDPs , as well. In later Blogs, we will look at more efficient methods like Dynamic Programming Value iteration and Policy iteration , Monte-Claro methods and TD-Learning.
We are going to talk about the Bellman Equation in much more details in the next story. What is Markov Decision Process?
Markov Decision Process : It is Markov Reward Process with a decisions. Everything is same like MRP but now we have actual agency that makes decisions or take actions.
P and R will have slight change w. Transition Probability Matrix. Reward Function. Now, our reward function is dependent on the action.
Actually,in Markov Decision Process MDP the policy is the mechanism to take decisions. So now we have a mechanism which will choose to take an action.
Policies in an MDP depends on the current state. They do not depend on the history. So, the current state we are in characterizes the history.
We have already seen how good it is for the agent to be in a particular state State-value function. Mathematically, we can define State-action value function as :.
Now, we can see that there are no more probabilities. In fact now our agent has choices to make like after waking up ,we can choose to watch netflix or code and debug.
Of course the actions of the agent are defined w. Congratulations on sticking till the end! Till now we have talked about building blocks of MDP, in the upcoming stories, we will talk about and Bellman Expectation Equation , More on optimal Policy and optimal value function and Efficient Value Finding method i.
Dynamic Programming value iteration and policy iteration algorithms and programming it in Python. Hope this story adds value to your understanding of MDP.
Would Love to connect with you on instagram. Thanks for sharing your time with me! References :. STAY DEEP. Hands-on real-world examples, research, tutorials, and cutting-edge techniques delivered Monday to Thursday.
Make learning your daily ritual. Take a look. Get started. Open in app. Sign in. Editors' Picks Features Explore Contribute.
Each step of the way, the model will update its learnings in a Q-table. The table below, which stores possible state-action pairs, reflects current known information about the system, which will be used to drive future decisions.
Each of the cells contain Q-values, which represent the expected value of the system given the current action is taken. Does this sound familiar?
It should — this is the Bellman Equation again! All values in the table begin at 0 and are updated iteratively. Note that there is no state for A3 because the agent cannot control their movement from that point.
To update the Q-table, the agent begins by choosing an action. It cannot move up or down, but if it moves right, it suffers a penalty of -5, and the game terminates.
The Q-table can be updated accordingly. When the agent traverses the environment for the second time, it considers its options.
Given the current Q-table, it can either move right or down. Moving right yields a loss of -5, compared to moving down, currently set at 0. We can then fill in the reward that the agent received for each action they took along the way.
Obviously, this Q-table is incomplete. Even if the agent moves down from A1 to A2, there is no guarantee that it will receive a reward of After enough iterations, the agent should have traversed the environment to the point where values in the Q-table tell us the best and worst decisions to make at every location.
This example is a simplification of how Q-values are actually updated, which involves the Bellman Equation discussed above. For instance, depending on the value of gamma, we may decide that recent information collected by the agent, based on a more recent and accurate Q-table, may be more important than old information, so we can discount the importance of older information in constructing our Q-table.
If the agent traverses the correct path towards the goal but ends up, for some reason, at an unlucky penalty, it will record that negative value in the Q-table and associate every move it took with this penalty.
Alternatively, if an agent follows the path to a small reward, a purely exploitative agent will simply follow that path every time and ignore any other path, since it leads to a reward that is larger than 1.
This usually happens in the form of randomness, which allows the agent to have some sort of randomness in their decision process.
A sophisticated form of incorporating the exploration-exploitation trade-off is simulated annealing , which comes from metallurgy, the controlled heating and cooling of metals.
Instead of allowing the model to have some sort of fixed constant in choosing how explorative or exploitative it is, simulated annealing begins by having the agent heavily explore, then become more exploitative over time as it gets more information.
This method has shown enormous success in discrete problems like the Travelling Salesman Problem, so it also applies well to Markov Decision Processes.
Because simulated annealing begins with high exploration, it is able to generally gauge which solutions are promising and which are less so.
At the first time X t becomes negative, however, the portfolio is ruined. A principal problem of insurance risk theory is to find the probability of ultimate ruin.
More interesting assumptions for the insurance risk problem are that the number of claims N t is a Poisson process and the sizes of the claims V 1 , V 2 ,… are independent, identically distributed positive random variables.
Rather surprisingly, under these assumptions the probability of ultimate ruin as a function of the initial fortune x is exactly the same as the stationary probability that the waiting time in the single-server queue with Poisson input exceeds x.
As a final example, it seems appropriate to mention one of the dominant ideas of modern probability theory, which at the same time springs directly from the relation of probability to games of chance.
One of the basic results of martingale theory is that, if the gambler is free to quit the game at any time using any strategy whatever, provided only that this strategy does not foresee the future, then the game remains fair.
Strictly speaking, this result is not true without some additional conditions that must be verified for any particular application.
The expected duration of the game is obtained by a similar argument. Subsequently it has become one of the most powerful tools available to study stochastic processes.
Probability theory Article Media Additional Info. Article Contents. Load Previous Page. Markovian processes A stochastic process is called Markovian after the Russian mathematician Andrey Andreyevich Markov if at any time t the conditional probability of an arbitrary future event given the entire past of the process—i.
The Ehrenfest model of diffusion The Ehrenfest model of diffusion named after the Austrian Dutch physicist Paul Ehrenfest was proposed in the early s in order to illuminate the statistical interpretation of the second law of thermodynamics, that the entropy of a closed system can only increase.
The symmetric random walk A Markov process that behaves in quite different and surprising ways is the symmetric random walk. Queuing models The simplest service system is a single-server queue, where customers arrive, wait their turn, are served by a single server, and depart.
Martingale theory As a final example, it seems appropriate to mention one of the dominant ideas of modern probability theory, which at the same time springs directly from the relation of probability to games of chance.
|
2021-04-13 06:33:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5299705266952515, "perplexity": 2858.873413932334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00109.warc.gz"}
|
https://registration.mcs.cmu.edu/event/1/contributions/51/
|
Jun 2 – 7, 2019
Carnegie Mellon University
America/New_York timezone
## The prospect of precise measurement of the neutrino oscillation by JUNO
Jun 4, 2019, 2:30 PM
30m
Rangos 3
### Rangos 3
Contributed Future Facilities and Directions
### Speaker
Prof. Yuekun Heng (Institute of High Energy Physics)
### Description
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment in China with a 20-thousand-ton liquid scintillator detector of 3% (at 1 MeV) at 700-meter deep underground. The major goal of JUNO is determining neutrino mass hierarchy by precisely measuring the energy spectrum of reactor electron antineutrinos at a distance of ~53 km from the powerful reactors of the Yangjiang and Taishan Nuclear Power Plants. Also for all 6 neutrino oscillation parameters, JUNO is going to measure the 3 parameters of Δm^2_21, Δm^2_32 and sin^2θ_12 with the precision better than 1% . Considering the precision of sin^2θ_13 can be measured to ~4% by Daya Bay, the unitarity of the neutrino mixing matrix can be probed to 1% level. Besides, JUNO has other scientific possibilities such as supernova neutrinos, geo-neutrinos, solar neutrinos, atmospheric neutrinos, and exotic searches.
|
2022-12-06 13:22:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8298877477645874, "perplexity": 6939.7944926687505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00754.warc.gz"}
|
https://socratic.org/questions/find-the-length-of-the-median-of-one-of-the-legs-of-an-isosceles-triangle-with-s
|
Find the length of the median of one of the legs of an isosceles triangle with sides the lengths 18,18,and 6?
Aug 11, 2018
color(maroon)("Length of the median " bar(CD) ~~ 17.75 " units"
Explanation:
$\text{Given : " bar(AC) = bar (BC) = 18, AB = 6, " To find median } \overline{C D}$
$\Delta A B C \text{ is an isosceles triangle with " bar(CD) " perpendicular bisector of } \overline{A B}$
Applying Pythagoras theorem,
$\overline{C D} = \sqrt{{\left(A C\right)}^{2} - {\left(A D\right)}^{2}}$
$\overline{A D} = \frac{\overline{A B}}{2} = \frac{6}{2} = 3$
$\overline{C D} = \sqrt{{18}^{2} - {3}^{2}} \approx 17.75$
|
2021-06-21 07:51:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3436347246170044, "perplexity": 2427.0772040833035}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488268274.66/warc/CC-MAIN-20210621055537-20210621085537-00012.warc.gz"}
|
https://math.stackexchange.com/questions/934321/proving-that-the-1-norm-x-1-is-not-generated-by-inner-products-on-mathb
|
# Proving that the 1-norm, $||x||_1$ is not generated by inner products on $\mathbb{C}^n$
Proving that the 1-norm, $||x||_1$ is not generated by inner products on $\mathbb{C}^n$.
Is it sufficient to take $x=(1,0)$, $y=(0,1)$ in $\mathbb{C}^2$ and just showing that \begin{align} ||x+y||^2+||x-y||^2=8\\ 2||x||^2+2||y||^2=4 \end{align}
As a counter example to show that it does not satisfy the parallelogram law? Or in proving this must it be an actual formal proof?
• So for showing this for $\mathbb{C}^n$ it suffices to show it for say $\mathbb{C}^2$? – Pablo Sep 16 '14 at 23:05
• It shows it is not "generated by inner products for all $\mathbb C^n$". If you want to show for each $n \geq 2$, just take your vectors with a bunch of zeroes on the end $(1,0,0,\dots)$ and $(0,1,0,\dots)$ and your proof follows the same. – Clinton Bradford Sep 16 '14 at 23:07
|
2020-06-01 23:21:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.810021162033081, "perplexity": 151.52039146573875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419639.53/warc/CC-MAIN-20200601211310-20200602001310-00405.warc.gz"}
|
https://www.xpmath.com/forums/showthread.php?s=c1d77fc68f899dceb08b20e9e867de9d&p=37094
|
02-29-2012 #1
Jonathan W
Join Date: Jan 2012
Posts: 318
-$\frac{1}{2}$ + x + $\frac{2}{3}$ = -$\frac{5}{6}$
Find what x equals.
---------- Post added at 04:47 PM ---------- Previous post was at 04:40 PM ----------
Quote:
Originally Posted by Jonathan W -$\frac{1}{2}$ + x + $\frac{2}{3}$ = -$\frac{5}{6}$ Find what x equals. Please show your work!
Or try to solve this!
$\frac{1}{6}$ + $\frac{1}{x}$ = -$\frac{5}{6}$
Last edited by Jonathan W; 02-29-2012 at 05:55 PM.. Reason: Fix fraction!
02-29-2012 #2 orishorjo Join Date: May 2011 Posts: 1,633 OMG!!!!!!!!! That's really hard __________________ Do what you love, love what you do.
02-29-2012 #3
Jonathan W
Join Date: Jan 2012
Posts: 318
Quote:
Originally Posted by orishorjo OMG!!!!!!!!! That's really hard
We learned it yesterday but I was in Florida so I wasn't at school.
I think the answer is: x = 1. Mr. Hui is this answer right?
02-29-2012 #4 orishorjo Join Date: May 2011 Posts: 1,633 why don't u PM Mr.Hui the problem __________________ Do what you love, love what you do.
02-29-2012 #5 MATH master-shanto Join Date: Sep 2010 Posts: 860 thats a really hard problem jon are u in honors class?? I am pretty sure that x=2 or -2 try putting 2 instead of the variable which is 2 over 1 and it works __________________ impossible is just a word used by people who are too afraid to do something themselves. In reality there's no such thing as impossible.
02-29-2012 #6 Mr. Hui Join Date: Mar 2005 Posts: 10,609 -$\frac{1}{2}$ + x + $\frac{2}{3}$ = -$\frac{5}{6}$ First combine the 2 fractions on the left-hand side. x + $\frac{1}{6}$ = -$\frac{5}{6}$ Then subtract $\frac{1}{6}$ from both sides of the equation. x = -1 __________________ Do Math and you can do Anything!
02-29-2012 #7
MAS1
Join Date: Dec 2008
Posts: 249
Quote:
Originally Posted by Jonathan W -$\frac{1}{2}$ + x + $\frac{2}{3}$ = -$\frac{5}{6}$ Find what x equals. Please show your work! ---------- Post added at 04:47 PM ---------- Previous post was at 04:40 PM ---------- Or try to solve this! $\frac{1}{6}$ + $\frac{1}{x}$ = -$\frac{5}{6}$
(1/6) + (1/x) = -5/6
First get common denominators for the left hand side.
x/(6x) + 6/(6x) = -5/6
(x + 6)/(6x) = -5/6
Then cross multiply.
-30x = 6x + 36
-36 = 36x
-1 = x
03-01-2012 #8
Jonathan W
Join Date: Jan 2012
Posts: 318
Quote:
Originally Posted by Mr. Hui -$\frac{1}{2}$ + x + $\frac{2}{3}$ = -$\frac{5}{6}$ First combine the 2 fractions on the left-hand side. x + $\frac{1}{6}$ = -$\frac{5}{6}$ Then subtract $\frac{1}{6}$ from both sides of the equation. x = -1
Quote:
Originally Posted by MAS1 (1/6) + (1/x) = -5/6 First get common denominators for the left hand side. x/(6x) + 6/(6x) = -5/6 (x + 6)/(6x) = -5/6 Then cross multiply. -30x = 6x + 36 -36 = 36x -1 = x
Thank you Mr. Hui and MAS1!
Thread Tools Display Modes Linear Mode
Posting Rules You may not post new threads You may not post replies You may not post attachments You may not edit your posts BB code is On Smilies are On [IMG] code is On HTML code is Off Forum Rules
Forum Jump User Control Panel Private Messages Subscriptions Who's Online Search Forums Forums Home Welcome XP Math News Off-Topic Discussion Mathematics XP Math Games Worksheets Homework Help Problems Library Math Challenges
All times are GMT -4. The time now is 11:14 AM.
Contact Us - XP Math - Forums - Archive - Privacy Statement - Top
|
2022-07-05 15:14:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 27, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5477313995361328, "perplexity": 8293.950249492218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00141.warc.gz"}
|
http://www.math.iisc.ac.in/seminars/2021/2021-10-06-s-thangavelu.html
|
#### APRG Seminar
##### Venue: Microsoft Teams (online)
A theorem attributed to Beurling for the Fourier transform pairs asserts that for any nontrivial function $f$ on $\mathbb{R}$ the bivariate function $f(x) \hat{f}(y) e^{|xy|}$ is never integrable over $\mathbb{R}^2.$ Well known uncertainty principles such as theorems of Hardy, Cowling–Price etc. follow from this interesting result. In this talk we explore the possibility of formulating (and proving!) an analogue of Beurling’s theorem for the operator valued Fourier transform on the Heisenberg group.
The video of this talk is available on the IISc Math Department channel.
Contact: +91 (80) 2293 2711, +91 (80) 2293 2265 ; E-mail: chair.math[at]iisc[dot]ac[dot]in
Last updated: 26 Oct 2021
|
2021-10-26 05:35:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7274133563041687, "perplexity": 2084.4181072474894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00454.warc.gz"}
|
http://math.stackexchange.com/tags/notation/hot
|
# Tag Info
18
This is more of an extended comment than an answer, but with regards to typesetting in $\LaTeX$, let me point out that typing \mathrm{d} should not take longer than typing d, as you shouldn't be doing either throughout your paper! The semantically correct thing to do is to define a macro representing your desired differential operator, for example, ...
18
I think the historical reason for the confusion stems from graphing trigonometric functions in polar form versus rectangular form. In rectangular form, the following statement below is true. $$\theta \quad =\quad x$$ (Where the meaning of this equality is that we let the measure of the angle on the unit circle be representative of the rectangular distance ...
17
$f^{-1}(A)$ is the preimage of the set $A$, it exists even if $f$ is not invertible. For $f: X\to Y$, it is defined as $$f^{-1}(A) = \{x\in X: f(x)\in A\}.$$ The notation is slightly confusing, I would say, but one gets used to it. It isn't really that bad of a notation, since, if $f$ is actually invertible and its inverse is $g$, then $g(A) = f^{-1}(A)$ ...
15
There is a $\theta$ on the right hand side. The definition of $\sin(\theta)$ is not just $y/r$; instead it is something like: $y/r$, after you've drawn a right triangle with $\theta$ as an angle, and where $y$ is the length of the side opposite $\theta$ and $r$ is the length of the hypotenuse. As you can see, that full definition does in fact contain a ...
11
I guess it depends on how you define $\sin\theta$. One possible definition is $f(\theta)\equiv \theta- \frac{\theta^3}{3!}+\frac{\theta^5}{5!}...$ This series converges for all $\theta$ and is called $\sin\theta$. For more info look up Maclaurin series. More generally I think you are confused about what a function is. The technical definition is daunting ...
11
Many excellent journals and books use $d$ in the italics form, such as the Journal of the American Mathematical Society (e.g., recent article by Terence Tao), London Mathematical Society Proceedings (e.g., equations 74 and 75 of this recent paper) and Spivak's Calculus. Given that reference quality publications use $d$--and that it is faster and cleaner to ...
10
If $A$ and $B$ are modules over a ring, their direct product $A \times B$ and their tensor product $A \otimes B$ are different things, so it would be unhelpful to use the same notation for them.
10
$f^{-1}$ here does not mean the inverse of $f$. It is a horrifically bad overuse of notation and can be very misleading. $f^{-1}(A)$ where $A$ is a set is the pre-image, i.e. $$f^{-1}(A) = \{x\in X: f(x) \in A\}.$$ The pre-image always exists. If $f$ were invertible, then this would coincide with what you think; however the pre-image takes care of the ...
9
These symbols have different meanings in different contexts. For instance, if we are talking about vector spaces then saying $V=U+W$ is different from $V=U\oplus W$
8
Quick answer: there is a standard to follow. Longer answer: while physicists write differential operators in upright fonts (because they follow the standards), mathematicians tend to typeset differential operators as variables (because we are lazy). I am joking, but it should be clear that $dx$ is not $d \cdot x$, and that $d$ is essentially an operator: ...
8
Looking at your profile, you post a lot on StackOverflow, so maybe a programming analogy will help. Say you want to make a Point class. There's a lot of ways you could do it. You could have member variables p.x and p.y, for the $x$ and $y$ coordinates (duh). That's the most common way. But in theory, you could also write it in terms of p.rad and p.theta. ...
5
It depends on how you want to define $\sin$ and $\cos$. I suspect you're looking for an explicit definition in terms of things you already know (such as polynomials), in which case @Karl's answer (the MacLaurin series definition) is what you're looking for. However, others like me find it more elegant to define them implicitly as bases for the the set of ...
5
Yes $\sin$ is just a function on the real (or complex) numbers. People often write $\sin(\theta)$ or $\sin\theta$ because the argument of the $\sin$ function is often an angle in physical applications, and $\theta$ is often used to denote angles. For your follow-up: $\sin^{−1}$ is the inverse function of the $\sin$ function. Similar to how $\log$ is the ...
4
There is no difference between them. It merely comes as a result of a choice in LaTeX formatting; specifically, some people write "\text{d}" (or some equivalent) for the upright formatting, but many other people don't do this for the sake of speed, and instead just write "d".
4
The notation is by no means "standard", but based on the context you provided it might mean: "$R(x,y)$ is a rational function in its arguments $x,y$" That is, $$R(x,y) = \frac{p(x,y)}{q(x,y)}$$ where $p$ and $q$ are polynomials in $x$ and $y$. In the examples you gave, the functions $g(\sin(x),\cos(x)),i(\sin(x),\cos(x))$, and $j(\sin(x),\cos(x))$ are ...
4
$\triangle ABC \sim \triangle DEF$.
3
For any function $f : \mathbb R \rightarrow \mathbb R$, the set $f^{-1}( a, \infty)$ always exists, and it is defined by $$\{ x: \in \mathbb R : f(x) > a \},$$ although it is not always measurable.
3
Just so that everyone knows what we are talking about here, let me rephrase in more familiar notation. Suppose $(\Omega, \mathcal{F}, P)$ is a probability space, and $(M, \mathcal{M})$ is a measurable space. If $X : \Omega \to M$ is a random variable (i.e. a $(\mathcal{F}, \mathcal{M})$-measurable function), it induces a pushforward measure on $(M, ... 3 The only difference it that it sometimes unclear if you consider$0$as an element of$\mathbb{N}$so that $$\mathbb{N}=\{0,1,2,...\}$$ or that $$\mathbb{N}=\{1,2,...\}$$ The first notation removes this ambiguity and makes things more clear. At the end - I would say that its a matter of preference and convention, I have seen both used many times ... 3 If you define$\Bbb N = \{1,2,3,\dots\}$, then yes: the two sets you've defined are identical, and describe the same infinite union. Note that some define$\Bbb N = \{0,1,2,3,\dots\}$3 Opinions on this issue differ, but I strongly believe that a basis (particularly in finite-dimensional linear algebra) should be a list, not a set. Here I am using "list" to mean the same thing as "ordered set". Here are two reasons why using sets does not work well: It is often convenient to talk about the matrix of a linear map$T \colon V \to W$with ... 2 If you wish to think about it from the perspective of "why isn't$\theta$on the right hand side," it may be helpful to approach sin and cos backwards - start with$\sin^{-1}(x)$and work our way backwards. Why? Because doing so makes it look more like a typical definition of a variable, with a variable on one side and some expression on the other side. ... 2 In Germany, there is the DIN 1338 standard, according to which the differntial operator d, as, e.g., e for the Euler number, should be typeset as an upright letter. According to Wikipedia, these letters are typeset in italic if AMS conventions are used. 2 By logic I think that is a good idea differentiate the symbol with the roman notation but there isnt a "standard", you can use any of them. In the same sense doesnt exist any kind of "standard" mathematical notation. I read a lot of books of many mathematical topics, everyone with different notations, not only just the infinitesimal symbol. The problem, ... 2 Unless you provide us with more context, it seems that$R$is just some function with$R:\mathbb{R}^2 \to \mathbb{R}$. What about this suggests it's a ratio? 2 It means$P$is a divisor of$a$. 1 P|a means P divides a.For P|a we can also write this as Pc=a where c is a constant.It simply means that $$P*c=a$$ or P is a factor of a. 1 If you only use this type of set, then this is impossible, because sets have no order, as you said. But you can use other objects, which are often helpful: use tupels (or vectors, which are basically the same). also, you can instead use$A$as a function:$A : \{1,2,3,4\} \to \mathbb R, \, A(k) = k$. then you can freely "access" the second element. 1 The symbol for denoting similar triangles is ($\color{blue}{\sim}$) Notice, suppose$\triangle ABC$&$\triangle PQR$are similar then in LaTex it is written as$\text{"\triangle ABC \sim \triangle PQR"}$surrounded in-between by 2 or 4 dollar signs which appears as follows $$\color{blue}{\triangle ABC \sim \triangle PQR}$$ 1 a)$1/999 = 0.001001001\ldots = 0.\overline{001}$Either of these is fairly standard notation. The overline format$0.\overline{001}$is a little more explicit, so I think it would be preferred. b)$.001 + .000001 + .000000001 + \cdots$This denotes an infinite sequence in the way that$1, 2, 3, \ldots\$ indicates an infinite sequence: a little ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2015-07-28 20:07:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9525272846221924, "perplexity": 261.3350038380168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042982502.13/warc/CC-MAIN-20150728002302-00157-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://nukephysik101.wordpress.com/2017/10/12/hartree-fock-method-for-1d-infinite-potential-well/
|
Following the previous post, I tested my understanding on the Hartree method. Now, I move to the Hartree-Fock method. The “mean field” energy of the Hartree-Fock is
$\displaystyle G_{\alpha \beta}= \sum_{j=i+1}^{N} \langle \psi_{\alpha}(i) \psi_{\nu} (j) | G(i,j) \left(| \phi_{\beta}(i) \phi_{\nu}(j) \rangle - |\phi_{\nu}(i) \phi_{\beta}(j) \rangle \right) \\ = \langle \alpha \nu | \beta \nu \rangle - \langle \alpha \nu | \nu \beta \rangle$
I also use the same method, in which, the trial wave function is replaced every iteration and the integration is calculated when needed.
Since the total wave function must be anti-symmetry under permutation, therefore one state can be occupied by only one particle. Thus, if we use the ground state in the mean field, the “meaningful” wave functions are the other states.
It is interesting that the mean field energy is zero when $\mu = \nu$, the consequence is no mean field for the same state. Suppose the mean field energy is constructed using the ground state, and we only use 3 states, the direct term is
$G_D = \begin{pmatrix} \langle 11|11 \rangle & \langle 11|21 \rangle & \langle 11|31 \rangle \\ \langle 21|11 \rangle & \langle 21|21 \rangle & \langle 21|31 \rangle \\ \langle 31|11 \rangle & \langle 31|21 \rangle & \langle 31|31 \rangle \end{pmatrix}$
The exchange term is
$G_E = \begin{pmatrix} \langle 11|11 \rangle & \langle 11|12 \rangle & \langle 11|13 \rangle \\ \langle 21|11 \rangle & \langle 21|12 \rangle & \langle 21|13 \rangle \\ \langle 31|11 \rangle & \langle 31|12 \rangle & \langle 31|13 \rangle \end{pmatrix}$
Due to the symmetry of the mutual interaction. We can see that some off-diagonal terms are cancelled. For example,
$\displaystyle \langle 1 1 | 3 1 \rangle = \int \psi_1^*(x) \psi_1^*(y) \cos(x-y) \psi_3(x) \psi_1(y) dy dx$
$\displaystyle \langle 1 1 | 1 3 \rangle = \int \psi_1^*(x) \psi_1^*(y) \cos(x-y) \psi_1(x) \psi_3(y) dy dx$
These two integrals are the same. In fact,
$latex \langle \alpha \nu | \beta \nu \rangle – \langle \alpha \nu | \nu \beta \rangle =$
whenever $\alpha = \nu$
$\displaystyle \langle \nu \nu | \beta \nu \rangle = \int \psi_\nu^*(x) \psi_\nu^*(y) \cos(x-y) \psi_\beta(x) \psi_\nu(y) dy dx$
$\displaystyle \langle \nu \nu | \nu \beta \rangle = \int \psi_\nu^*(x) \psi_\nu^*(y) \cos(x-y) \psi_\nu(x) \psi_\beta(y) dy dx$
We can see, when interchange $x \leftrightarrow y$, the direct term and the exchange term are identical, and then the mean field energy is zero. Also, when $\beta = \nu$ the mean field energy is also zero.
Due to the zero mean field, the off-diagonal terms of the Hamiltonian $H_{\alpha \beta}$ with $\alpha = \nu$ or $\beta = \nu$ are zero. Then, the eigen energy is the same as the diagonal term and the eigen vector is un-change.
Back to the case, the direct matrix at the 1st trial is,
$G_D = \begin{pmatrix} 0.720506 & 0 & -0.144101 \\ 0 & 0.576405 & 0 \\ -0.144101 & 0 & 0.555819 \end{pmatrix}$
The exchange matrix is
$G_E = \begin{pmatrix} 0.720506 & 0 & -0.144101 \\ 0 & 0.25 & 0 \\ -0.144101 & 0 & 0.0288202 \end{pmatrix}$
Thus, the Fock matrix is
$F = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 4.3264 & 0 \\ 0 & 0 & 9.527 \end{pmatrix}$
Therefore, the eigen states are the basis, unchanged
$\displaystyle \psi_1(x) = \sqrt{\frac{2}{\pi}} \sin(x)$
$\displaystyle \psi_2(x) = \sqrt{\frac{2}{\pi}} \sin(2x)$
$\displaystyle \psi_3(x) = \sqrt{\frac{2}{\pi}} \sin(3x)$
Only the eigen energies are changed, as $\epsilon_1 = 1$$\epsilon_2 = 4.3264$$\epsilon_3 = 9.527$
The total wave function for 2 particles at state 1 and state-\mu is
$\Psi(x,y) = \frac{1}{\sqrt{2}} ( \psi_1(x) \psi_\mu(y) - \psi_\mu(x) \psi_1(y) )$
I found that the “mean field” function is not as trivial as in the Hartree case, because of the exchange term. In principle, the mean field for particle-i at state-$\mu$ is,
$G(i) = \int \phi_\nu^*(j) G(i,j) \phi_\nu(j) dj - \int \phi_\nu^*(j) G(i,j) \phi_\mu(j) dj$
However, the direct term is multiply with $\psi_\mu(i)$, but the exchange term is multiply with $\psi_\nu(i)$, which are two different functions. i.e., the “mean field” is affecting two functions, or the “mean field” is shared by two states.
Although the mean field for single state can be defined using exchange operator symbolically, but I don’t know how to really implement into a calculation. Thus, I don’t know how to cross-check the result.
|
2018-06-18 11:10:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.920092761516571, "perplexity": 423.5659540408855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00292.warc.gz"}
|
https://hamstudy.org/sessions/62cb761ef531e9bbffa53c91/1
|
SESION DE EXAMEN PRESENCIAL EN ESPAÑOL E INGLES
Date:
Jul 16, 2022 (Sat)
Time:
10:00-12:00pm AST
Team:
Test Fee:
$10.00 Location: C. Leyda Irizarri, Quebradas, Yauco 00698, Puerto Rico view map VEC: Greater Los Angeles Amateur Radio Group Slots available: 4 / 10 slots claimed Notes from the team: ¡Haz tu examen en línea y en ESPAÑOL! Regístrate y recibirás un correo electrónico indicándote todos los pasos a seguir. Para registrarte haces los siguiente: 1. Presiona el botón que dice "Register" en la parte de abajo 2. Luego selecciona la opción que te corresponda "new license" o "upgrade" 3. Llena tu información correctamente, es importante que ponga la dirección donde recibe sus cartas puede verificarla "Aquí" 4. Te preguntará si has sido convicto y marcas tu respuesta 5. Te pedirá tu número de FRN (si no lo tienes, ahí mismo puede sacarlo, esta página es segura) 6. Revisas tu información y ¡listo ¡Personas que hablan español también son bienvenidas! El costo del examen es de$10. Le enviaremos un link para que haga su pago. Tu examen puede ser gratis si cumples con los siguientes requisitos:
• Menores de edad • Estudiantes con ID vigente • Militares • Veteranos • Examinadores de GLAARG que hacen upgrade a EXTRA
Haz tu examen con seguridad. Recuerda que se requiere un 74% para aprobar, por lo que lo adecuado que en tus exámenes de practica debes tener por lo menos un 85% como mínimo.
Cualquier duda, puedes escribir a [email protected]
Exámenes de Practica Gratis en español
Take your exam online! Register and you will receive an email with all the steps to follow. To register you do the following:
1. Click on the button that says "Register" at the bottom of the page.
2. Then select the option that corresponds to you "new license" or "upgrade"
3. Fill in your information correctly, it is important that you put the address where you receive your letters you can verify it "Here"
5. It will ask you for your FRN number (if you don't have it, you can get it right there, this page is secure).
6. Review your information and you are done.
Spanish speakers are also welcome to take the test!
The cost of the exam is \$10. We will send you a link to make your payment. Your exam can be free if you meet the following requirements:
• Minors • Students with valid ID • Military personnel • Veterans • Upgrading GLAARG Examiners
Take your exam with confidence. Remember that a 74% is required to pass, so it is appropriate that in your practice exams you should have at least 85% as a minimum.
Any questions, you can write to [email protected]
Free Free Practice Tests in English • https://hamstudy.org/
|
2022-12-01 01:16:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25361648201942444, "perplexity": 11199.203991924634}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710777.20/warc/CC-MAIN-20221130225142-20221201015142-00299.warc.gz"}
|
http://cpr-heplat.blogspot.com/2013/07/11095109-mario-kieburg.html
|
## Surprising Pfaffian factorizations in Random Matrix Theory with Dyson index $β=2$ [PDF]
Mario Kieburg
In the past decades, determinants and Pfaffians were found for eigenvalue correlations of various random matrix ensembles. These structures simplify the average over a large number of ratios of characteristic polynomials to integrations over one and two characteristic polynomials only. Up to now it was thought that determinants occur for ensembles with Dyson index $\beta=2$ whereas Pfaffians only for ensembles with $\beta=1,4$. We derive a non-trivial Pfaffian determinant for $\beta=2$ random matrix ensembles which is similar to the one for $\beta=1,4$. Thus, it unveils a hidden universality of this structure. We also give a general relation between the orthogonal polynomials related to the determinantal structure and the skew-orthogonal polynomials corresponding to the Pfaffian. As a particular example we consider the chiral unitary ensembles in great detail.
View original: http://arxiv.org/abs/1109.5109
|
2017-06-29 10:54:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9018855690956116, "perplexity": 602.0551112783302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323908.87/warc/CC-MAIN-20170629103036-20170629123036-00250.warc.gz"}
|
https://ai.stackexchange.com/questions/37166/can-i-minimize-a-mysterious-function-by-running-a-gradient-descent-on-her-neural
|
# Can I minimize a mysterious function by running a gradient descent on her neural net approximations?
So I have this function let call her $$F:[0,1]^n \rightarrow \mathbb{R}$$ and say $$10 \le n \le 100$$. I want to find some $$x_0 \in [0,1]^n$$ such that $$F(x_0)$$ is as small as possible. I don't think there is any hope of getting the global minimum. I just want a reasonably good $$x_0$$.
AFAIK the standard approach is to run an (accelerated) gradient descent a bunch of times and take the best result. But in my case values of $$F$$ are computed algorithmically and I don't have a way to compute gradients for $$F$$.
So I want to do something like this.
(A) We create a neural network which takes an $$n$$-dimensional vector as input and returns a real number as result. We want the NN to "predict" values of $$F$$ but at this point it is untrained.
(B) We take bunch of random points in $$[0,1]^n$$. We compute values of $$F$$ at those points. And we train NN using this data.
(C1) Now the neural net provides us with a reasonably smooth function $$F_1:[0,1]^n \rightarrow \mathbb{R}$$ approximating $$F$$. We run a gradient decent a bunch of times on $$F_1$$. We take the final points of those decent and compute $$F$$ on them to see if we caught any small values. Then we take whole paths of those gradient decent, compute $$F$$ on them and use this as data to retrain our neural net.
(C2) The retrained neural net provides us with a new function $$F_2$$ and we repeat the previous step
(C3) ...
Does this approach have a name? Is it used somewhere? Should I indeed use neural nets or there are better ways of constructing smooth approximations for my needs?
• – D.W.
Sep 23 at 16:28
## 2 Answers
I do not know any specific name for this method, but it is a common approach for approximating and optimizing complex functions. You can find an industrial use-case of this approach in this paper (NeuroErgo: A Deep Neural Network Method to Improve Postural Optimization for Ergonomic Human-Robot Collaboration).
• Thank you. This is useful but I think it's a bit different. If I understood correctly it of just does steps (A), (B) and (C1). So we learn a single approximation $F_1$ and then search for minimums of $F_1$. Sep 23 at 16:18
Yes, this is a standard approach. An improvement is to do gradient descent on $$F$$ (not $$F_1$$), but use the gradient of $$F_1$$ as your estimate for the gradient of $$F$$. In other words, when you calculate the function in the forward direction, you use $$F$$, but when you backprop to get the gradient, you backprop through $$F_1$$.
Since the number of dimensions is so low in your case, an alternative is to use a gradient-free black-box optimization method. One approach is to use a zeroth-order optimization methods, where you use the method of finite differences to estimate the gradient at a particular point. You can estimate the gradient of $$F$$ at a point $$x$$ by evaluating $$F$$ at $$n+1$$ points, namely $$F(x)$$ and $$F(x+\epsilon \cdot e_i)$$ where $$e_i$$ is a vector of all-zeros except it has a 1 in the $$i$$th coordinate. This will be pretty efficient since in your setting, $$n$$ is small. An improvement of that method is to use NES, where we estimate the gradient of $$F$$ as follows:
$$\nabla F(x) \approx {1 \over m} \sum_{i=1}^m F(z_i) \nabla \log p(z_i)$$
where each $$z_i$$ is sampled iid from the normal distribution $$\mathcal{N}(x,\sigma^2)$$ and $$p$$ is the pdf of $$\mathcal{N}(x,\sigma^2)$$. There are many other candidate methods as well.
• Thank you! Do you have any name or reference for the method from the first paragraph or a reference for an example of its application? Sep 23 at 16:52
• @VladimirZolotov, I don't know of a reference for it (sorry; it's a reasonable question).
– D.W.
Sep 23 at 17:05
|
2022-12-07 06:02:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 38, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7937672734260559, "perplexity": 193.17077001318412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711150.61/warc/CC-MAIN-20221207053157-20221207083157-00864.warc.gz"}
|
https://stats.stackexchange.com/questions/25660/comparing-two-discrete-distributions-with-small-cell-counts
|
# Comparing two discrete distributions (with small cell counts)
I need to compare sample distributions with theoretical ones, which is typically done with a chi-squared test. The problem is that I have distributions where one or more cells have low values, and consequently the chi-squared test reports very small p-values. For example, a typical expected and observed frequencies are [152 2 9] and [140 5 18], with a p-value of 0.0007. Based on domain knowledge, these two distributions are not significantly different.
What test could be used instead of chi-squared, which would take out the bias that occurs with the small-valued cells?
Edit: adding some background information for this problem.
I have a number of processes which produce as output certain technical parameters, recorded as time series. I have around 4000 of such process, each producing around 150 such time series (the number of time series a process has follows a power law). I would like to find which of these processes are anomalous, i.e. producing output which is significantly different from others. To do this I cluster the time series using k-means, and then based on the clusters, produce the "expected" distribution (average over all time series) and the distribution of clusters for each process.
For example, after clustering I might have 4 clusters with following sizes.
Cluster number | Cluster size
-----------------------------
1 | 100
2 | 200
3 | 300
4 | 400
The distribution of the clusters among the processes might be the following
| Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4
----------------------------------------------------------
Process 1 | 11 | 19 | 35 | 42
Process 2 | 3 | 10 | 14 | 19
Process 3 | 30 | 8 | 12 | 12 <----anomaly
....
In this case, process 1 and process 2 are sufficiently close to the expected, while process 3 has a different distribution from the average. I would like to find a good test to measure this discrepancy. (Any other suggestion for the anomaly detection is also welcome)
• It is noteworthy that the edit almost completely changes the question. (This is no longer a comparison of a distribution against a reference: it involves multiple comparisons and no reference.) There are now good reasons to suspect correlations among the counts, too, which could be induced by the nature of the clustering and the nature of the processes themselves. Answers can be derived by ignoring these complications, but they might then lead you to erroneous conclusions. A good answer would incorporate some model (or understanding) of the relationships between processes and clusters. – whuber Apr 2 '12 at 15:11
The actual chi-squared statistic for these data is $549/38 \approx 14.447$. Apparently it is far out in the upper tail of this histogram: only $25$ of the $10,000$ results (0.25%) equal or exceed it. Yes, this proportion is almost four times greater than the approximation of $0.0007$ reported by the chi-squared test, but it's still tiny. We conclude that the observed distribution is significantly different from the expected distribution.
|
2020-02-23 12:10:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6560980677604675, "perplexity": 473.51052860907936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00436.warc.gz"}
|
https://probabilityexam.wordpress.com/2015/10/30/exam-p-practice-problem-93-determining-average-claim-frequency/
|
# Exam P Practice Problem 93 – Determining Average Claim Frequency
Problem 93-A
An actuary performs a claim frequency study on a group of auto insurance policies. She finds that the probability function of the number of claims per week arising from this set of policies is $P(N=n)$ where $n=1,2,3,\cdots$. Furthermore, she finds that $P(N=n)$ is proportional to the following function:
$\displaystyle \frac{e^{-2.9} \cdot 2.9^n}{n!} \ \ \ \ \ \ \ n=1,2,3,\cdots$
What is the weekly average number of claims arising from this group of insurance policies?
$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ 2.900$
$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ 3.015$
$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ 3.036$
$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ 3.069$
$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ 3.195$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
________________________________________________________
Problem 93-B
Let $N$ be the number of taxis arriving at an airport terminal per minute. It is observed that there are at least 2 arrivals of taxis in each minute. Based on a study performed by a traffic engineer, the probability $P(N=n)$ is proportional to the following function:
$\displaystyle \frac{e^{-2.9} \cdot 2.9^n}{n!} \ \ \ \ \ \ \ n=2,3,4,\cdots$
What is the average number of taxis arriving at this airport terminal per minute?
$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ 2.740$
$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ 2.900$
$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ 3.339$
$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ 3.489$
$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ 3.692$
________________________________________________________
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
________________________________________________________
$\copyright \ 2015 \ \ \text{ Dan Ma}$
|
2018-02-18 10:27:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28627896308898926, "perplexity": 370.9301298021989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811830.17/warc/CC-MAIN-20180218100444-20180218120444-00461.warc.gz"}
|
http://chem-bla-ics.blogspot.kr/2008/07/
|
## Saturday, July 26, 2008
### CDK Literature #5
Time flies. Another CDK Literature (see also #1, #2, #3, #4). Quite a few papers have been published again, and I'll briefly discuss a few of them.
Detection of IUPAC names
Klinger et al. have written a paper on detection of IUPAC names. As long as semantic markup languages are not the default, this remains important. Remaining problems include correctly finding boundaries in summaries of chemical. The CDK has been used to create SMILES.
Roman Klinger, Corinna Kolárik, Juliane Fluck, Martin Hofmann-Apitius, Christoph M. Friedrich, Detection of IUPAC and IUPAC-like chemical names, Bioinformatics 2008 24(13):i268-i276; doi:10.1093/bioinformatics/btn181
Structure elucidation
Elyashberg, Williams and Martin wrote a review on structure elucidation and discuss Steinbeck's Seneca software, which uses components of the CDK, though the CDK is not directly mentioned.
M.E. Elyashberg, A.J. Williams, G.E. Martin, Computer-assisted structure verification and elucidation tools in NMR-based structure elucidation, Progress in Nuclear Magnetic Resonance Spectroscopy, 2008, 53(1-2):1-104, doi:10.1016/j.pnmrs.2007.04.003
Opensource Distributed Chemical Computing
Karthikeyan et al. have published ChemStar, an opensource distributed chemical computing system, build on top the Java Remote Method Invocation architecture, used by the original Seneca too. The CDK paper and a Fechner/Guha's CDK News paper are cited in relation to a ChemStar application of benchmarking QSAR descriptors. The article does not seem to mention the opensource license, nor have I yet found a source package download.
M. Karthikeyan, S. Krishnan, A.K. Pandey, A. Bender, A. Tropsha, Distributed Chemical Computing Using ChemStar: An Open Source Java Remote Method Invocation Architecture Applied to Large Scale Molecular Data from PubChem, J. Chem. Inf. Model., 48 (4), 691–703, 2008. 10.1021/ci700334f
Taverna's APIConsumer
Taverna has several means of making functionality available to the workflow engine. SOAP and BioMoby are two prominent ones. The APIConsumer is another one, and described in this paper. The CDK-Taverna project lead by Thomas Kuhn, is mentioned as another project that uses this approach.
Peter Li, Tom Oinn, Stian Soiland, Douglas B. Kell, Automated manipulation of systems biology models using libSBML within Taverna workflows, Bioinformatics 2008 24(2):287-289, doi:10.1093/bioinformatics/btm578
Docking for Substrate Identification
Favia uses docking to recognize interesting substrates for short-chain dehydrogenases/reductases. The CDK's fingerprinter is used to describe intermolecular similarity, by calculating the Tanimoto distances between the bit strings.
Angelo D. Favia1, Irene Nobeli, Fabian Glaser, Janet M. Thornton, Molecular Docking for Substrate Identification: The Short-Chain Dehydrogenases/Reductases, Journal of Molecular Biology, 2008, 375(3):855-874, doi:10.1016/j.jmb.2007.10.065
## Wednesday, July 23, 2008
### Molecular QSAR descriptors in the CDK
Rajarshi has patched trunk last night with his work to address a few practical issues in the molecular descriptor module of the CDK (and I peer reviewed this work yesterday). One major change is that the IMolecularDescriptor calculate() method no longer throws an Exception, but returns Double.NaN instead. The Exception is stored in the DescriptorValue for convenience. This simplifies the QSAR descriptor calculation considerably, and, importantly, makes it more robust to the input. Though only by propagating errors into descriptor matrix. Just make sure your molecular structures have explicit hydrogens and 3D coordinates, and you're fine.
Anyway, Rajarshi also added a new page to CDK Nightly to list the available descriptors:
### Commercial QSAR modeling? Sorry, already patented...
QSAR has been patented in 2001 (US patent 20010049585).
Claim 1:
A method for predicting a set of chemical, physical or biological features related to chemical substances or related to interactions of chemical substances using a system comprising a plurality of prediction means, the method comprising using at least 16 different individual prediction means, thereby providing an individual prediction of the set of features for each of the individual prediction means and predicting the set of features on the basis of combining the individual predictions, the combining being performed in such a manner that the combined prediction is more accurate on a test set than substantially any of the predictions of the individual prediction means.
They use averaging or weighted averaging of the individual predictions (claim 2). Oh, and just in case you think you are clever and you use 17, 32, etc individual predictions. Sorry, no luck either; you have to use way beyond 1M individual predictions according to the following claim ;)
Claim 2:
A method according to claim 1, wherein the number of different predictions means is at least 20, such as at least 30, such as at least 40, 50, 75, 100, 200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, 2500, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10,000, 15,000, 20,000, 30,000, 40,000, 50,000, 100,000, 200,000, 500,000, 1,000,000.
## Tuesday, July 22, 2008
### Peer reviewed Chemoinformatics: Why OpenSource Chemoinformatics should be the default
The battle for scientific publishing is continuing: openaccess, peer reviewing, how much does it cost, who should pay it, is the data in papers copyrighted, etc, etc.
The battle for chemoinformatics, however, has not even started yet. The Blue Obelisk paper (doi:10.1021/ci050400b) has gotten a lot of attention, and citations. But closed source chemoinformatics is doing fine, and have not really openly taken a standpoint against open source chemoinformatics. Actually, CambridgeSoft just received a good investment. I wonder how this investment will be used, and where the ROI will come from. More closed data and closed algorithms? Focus on services? Early access privileges? At least they had something convincing.
There are many degrees of openness, and many business models. I value open source chemoinformatics, or chemblaics, as I call it. There is a striking similarity between publishing and chemoinformatics. Both play an important role in the progress of sciences. A big difference is that (independent) peer review of published results is done in scientific publishing, but not generally to chemoinformatics. Surely, algorithms are published... Ah, no; they are not. They are described. Ask any chemoinformatician why this subtle difference is causing headaches...
Let me just briefly stress the difference between core chemoinformatics, and GUI applications. The first *must* be opensource, to allow independent Peer Review; the latter is just nice to have as opensource. Bioclipse is the GUI (doi:10.1186/1471-2105-8-59), while the CDK is our peer-reviewed chemoinformatics library (pmid:16796559). I would also like to stress that the CDK is LGPL, allowing the opensource chemoinformatics library to be used in proprietary GUI software. We deliberately choose this license, to allow embedding in proprietary code. The Java Molecular Descriptor Library of iCODONS is an example of this (that is, AFAIK it's not opensource).
So, getting back to that CambridgeSoft investment. I really hope they search the ROI in the added value of the user friendly GUI, and not in the chemoinformatics algorithm implementations, which, IMHO, should be peer-reviewed, thus open source. Meanwhile, I will continue working on the CDK project to provide open source chemoinformatics algorithms implementations, for use in opensource *and* proprietary chemoinformatics GUIs.
## Tuesday, July 15, 2008
### Metabolomics needs you
Over on Metabolomics In Europe I posted a ad for an open metabolomics position in our group. Go check it out!
## Thursday, July 10, 2008
### Going to Science Blogging 2008: London
On Saturday 30th of August I'll be in London attending the Science Blogging 2008 event. The Monday following that, I'll meet friends at the EBI, but Sunday is empty so far. I'd love to meet up that Sunday, so just ping me if interested.
Oh, and this blog is using RDFa to markup the event, as discussed here.
## Wednesday, July 09, 2008
### Chemoinformatics p0wned by cheminformatics...
Noel had a 40 people vote over chemoinformatics versus cheminformatics. What do you think?
I have thrown in two extra options: chemblaics (from my blog: chemblaics (pronounced chem-bla-ics) is the science that uses computers to address and possibly solve problems in the area of chemistry, biochemistry and related fields. The big difference between chemblaics and areas as chem(o)?informatics, chemometrics, computational chemistry, etc, is that chemblaics only uses open source software, making experimental results reproducable and validatable.) and bioinformatics (in case you believe all is life sciences now).
## Saturday, July 05, 2008
### SVN commit hooks down for CDK and Bioclipse
SourceForge has been playing with system upgrades again, and in an attempt to debug the failing CIA commits on IRC, I reinstalled the hooks for CDK and Bioclipse, so that now all hooks seem to fail, including the email hook... Apparently, it is a known bug, e.g. see this bug report. I assume SF will fix this soon.
On the bright side, I also noted an updated webpage for SF uptime/problem tracker, where it is also reported that stats are currently down for upgrade. There also has an RSS feed, which I recommend as a good monitoring tool for SF site problems.
## Friday, July 04, 2008
### Moving to Sweden: Improving CDK support in Bioclipse
This autumn I will end my current post-doc position at Plant Research International in the Applied Bioinformatics group and at Biometris (both part of Wageningen University) funded by the Netherlands Metabolomics Center (lot's of vacancies), where I had a good time, and collaborated in several projects within the NMC with much pleasure.
However, personal circumstances strengthened an older wish of me and my family to seek the adventure of living abroad, and a vacancy was available in the group of Prof. Wikberg. So, we are moving to Sweden. There, I will extend my research on effectively combining chemoinformatics (sometimes misspelled as cheminformatics ;) and chemometrics, as I did in my PhD, which fits well with the development of proteochemotrics methodology and Bioclipse as platform to transform scientific hypotheses into data queries.
|
2017-12-18 20:38:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2485576868057251, "perplexity": 5767.166295077021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948623785.97/warc/CC-MAIN-20171218200208-20171218222208-00151.warc.gz"}
|
https://chemistry.stackexchange.com/questions/104250/reaction-between-salicylaldehyde-and-2-benzoylpyridine
|
# Reaction between Salicylaldehyde and 2-Benzoylpyridine
In a paper by Sheng et al (10.1016/j.dyepig.2018.07.036) the authors synthesize the compound MZC as shown in the scheme below.
I'm looking for information about what the reaction mechanism is that underpins this reaction. If anyone can share some insights it is much appreciated.
In the experimental section it is written
2-Benzoylpyridine (4 mmol, 0.732 g) and ammonium acetate (22 mmol, 1.70 g) was dissolved in acetic acid (30 mL) and followed by the addition of salicylaldehyde (6 mmol, 0.733 g). The reaction was refluxed for 6 h. The resulting solution mixture was cooled to room temperature and cold water was then added to the mixture. The solid was collected by filtration and washed with a small amount of cooled water, which was purified via recrystallization using acetonitrile to give MZC (0.687 g, 60 % yield).
• Treat it step by step. For example, step 1: what does a ketone form when mixed with ammonia? – Zhe Nov 12 '18 at 17:54
• I assume that ammonium acetate $\ce{H3COONH4}$ provides for $\ce{NH3}$ in the reaction mixture. Ammonia will react with the ketone group in 2-Benzoylpyridie and form an imine, and that gives a hint on how an additional nitrogen enters the ring system. So the question then is, does the imine reacts with the aldehyde group in Salicylaldehyde? – John Nov 12 '18 at 19:02
• The reaction is run in refluxing acetic acid so that is forcing conditions. Imines are not as nucleophilic as amines but they are stiil nucleophiles. – Waylander Nov 12 '18 at 20:04
• Technically, the reaction is one pot, so the order is not 100% certain. But you could also reasonably form the aldimine first on reactivity grounds (though the reaction is likely under Curtin-Hammett kinetics). Then keep in mind that the pyridine is relatively nucleophilic. I quickly tried drawing this out, and it seems feasible. – Zhe Nov 12 '18 at 20:25
|
2020-01-18 15:59:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5624985098838806, "perplexity": 3092.528434316797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00123.warc.gz"}
|
http://gmatclub.com/forum/if-a-is-not-equal-to-zero-is-1-a-a-b-122266-20.html
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 20 Oct 2014, 18:12
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1)
Author Message
TAGS:
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 4870
Location: Pune, India
Followers: 1149
Kudos [?]: 5339 [1] , given: 165
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 24 Feb 2013, 19:18
1
KUDOS
Expert's post
Archit143 wrote:
Hi bunuel
can you help with this question.....the book has given statement 1 is sufficient...it says that since b^2 is always positive so "a" must also be positive......but i think a can be negative also......
regards
Archit
Responding to a pm:
A response above clarifies that this is an errata.
The correct statement 1 is a = b^2 instead of a^2 = b^2.
In that case
If a is not equal to zero, is 1/a > a / (b^4 + 3) ?
(1) a = b^2
(2) a^2 = b^4
Is\frac{1}{a} > \frac{a}{(b^4 + 3)} ?
Is \frac{(b^4 + 3)}{a} > a ?
Is \frac{(b^4 + 3)}{a} - a > 0 ?
Is \frac{(b^4 + 3 - a^2)}{a} > 0 ?
(1) a = b^2
This tells us that 'a' must be positive. Further, squaring, we get a^2 = b^4
Hence, the question becomes:
Is 3/a > 0.
It must be since a is positive. Sufficient
(2) a^2 = b^4
Doesn't tell us anything about the sign of a.
The question becomes:
Is 3/a > 0?
We cannot say. Not Sufficient
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews Current Student Status: Final Lap Up!!! Affiliations: NYK Line Joined: 21 Sep 2012 Posts: 1097 Location: India GMAT 1: 410 Q35 V11 GMAT 2: 530 Q44 V20 GMAT 3: 630 Q45 V31 GPA: 3.84 WE: Engineering (Transportation) Followers: 31 Kudos [?]: 273 [0], given: 67 Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 25 Feb 2013, 03:46 VeritasPrepKarishma wrote: Archit143 wrote: Hi bunuel can you help with this question.....the book has given statement 1 is sufficient...it says that since b^2 is always positive so "a" must also be positive......but i think a can be negative also...... regards Archit Responding to a pm: A response above clarifies that this is an errata. The correct statement 1 is a = b^2 instead of a^2 = b^2. In that case If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) a = b^2 (2) a^2 = b^4 Is\frac{1}{a} > \frac{a}{(b^4 + 3)} ? Is \frac{(b^4 + 3)}{a} > a ? Is \frac{(b^4 + 3)}{a} - a > 0 ? Is \frac{(b^4 + 3 - a^2)}{a} > 0 ? (1) a = b^2 This tells us that 'a' must be positive. Further, squaring, we get a^2 = b^4 Hence, the question becomes: Is 3/a > 0. It must be since a is positive. Sufficient (2) a^2 = b^4 Doesn't tell us anything about the sign of a. The question becomes: Is 3/a > 0? We cannot say. Not Sufficient Answer (A) Hi karishma Statement 1 is a^2 = b^2 and not a = b^2 Can you clear something that i am missing here... by the way thanks for explaining and statement is insufficient that clear.... Archit Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 4870 Location: Pune, India Followers: 1149 Kudos [?]: 5339 [0], given: 165 Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 25 Feb 2013, 04:05 Expert's post Archit143 wrote: VeritasPrepKarishma wrote: Archit143 wrote: Hi bunuel can you help with this question.....the book has given statement 1 is sufficient...it says that since b^2 is always positive so "a" must also be positive......but i think a can be negative also...... regards Archit Responding to a pm: A response above clarifies that this is an errata. The correct statement 1 is a = b^2 instead of a^2 = b^2. In that case If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) a = b^2 (2) a^2 = b^4 Is\frac{1}{a} > \frac{a}{(b^4 + 3)} ? Is \frac{(b^4 + 3)}{a} > a ? Is \frac{(b^4 + 3)}{a} - a > 0 ? Is \frac{(b^4 + 3 - a^2)}{a} > 0 ? (1) a = b^2 This tells us that 'a' must be positive. Further, squaring, we get a^2 = b^4 Hence, the question becomes: Is 3/a > 0. It must be since a is positive. Sufficient (2) a^2 = b^4 Doesn't tell us anything about the sign of a. The question becomes: Is 3/a > 0? We cannot say. Not Sufficient Answer (A) Hi karishma Statement 1 is a^2 = b^2 and not a = b^2 Can you clear something that i am missing here... by the way thanks for explaining and statement is insufficient that clear.... Archit It's an error in the MGMAT book. They have given it in their errata. The link of their errata is given in this post (on the previous page): if-a-is-not-equal-to-zero-is-1-a-a-b-122266.html#p989860 _________________ Karishma Veritas Prep | GMAT Instructor My Blog Save$100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews
Current Student
Status: Final Lap Up!!!
Affiliations: NYK Line
Joined: 21 Sep 2012
Posts: 1097
Location: India
GMAT 1: 410 Q35 V11
GMAT 2: 530 Q44 V20
GMAT 3: 630 Q45 V31
GPA: 3.84
WE: Engineering (Transportation)
Followers: 31
Kudos [?]: 273 [0], given: 67
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 25 Feb 2013, 04:34
Thanks Karishma for clearing the doubts...Can you pls explain what does IaI = IbI mean....in simpler terms....
Intern
Joined: 03 Sep 2012
Posts: 1
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 13 Mar 2013, 17:01
If b^4 + 3 > a^2
=> a^2 - b^4 < 3
Now
2. a^2 = b^4
=> a^2 -b^4 = 0
It tells a^2 - b^4 < 3 is sufficient
Wondering what wrong assumption I am making with above.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 4870
Location: Pune, India
Followers: 1149
Kudos [?]: 5339 [0], given: 165
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 13 Mar 2013, 20:56
Expert's post
Archit143 wrote:
Thanks Karishma for clearing the doubts...Can you pls explain what does IaI = IbI mean....in simpler terms....
|a| = |b| means that the distance of a from 0 is equal to the distance of b from 0.
This means, if a = 5, b = 5 or -5
Similarly, if a = -5, b = 5 or -5
So, imagine the number line. There are two points at a distance of 5 from 0. a and b could lie on any one of these points.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 4870 Location: Pune, India Followers: 1149 Kudos [?]: 5339 [0], given: 165 Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 13 Mar 2013, 21:01 Expert's post kaushalsp wrote: If b^4 + 3 > a^2 => a^2 - b^4 < 3 Now 2. a^2 = b^4 => a^2 -b^4 = 0 It tells a^2 - b^4 < 3 is sufficient Wondering what wrong assumption I am making with above. 'Is \frac{1}{a} > \frac{a}{(b^4 + 3)}' is NOT the same as 'Isb^4 + 3 > a^2?' Mind you, it is not given that 'a' is positive. You cannot cross multiply in an inequality if you do not know the sign of the varaible. e.g. a < b/c is not the same as ac < b If we know that c is positive, then it is ok. Then a < b/c is same as ac < b If instead, c is negative, then a < b/c is the same as ac > b (Note that the inequality sign has flipped) Hence, statement (2) is not sufficient alone. Statement (1) tells us the sign of a and we see that it is sufficient alone (check my solution above) _________________ Karishma Veritas Prep | GMAT Instructor My Blog Save$100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews
BSchool Forum Moderator
Joined: 27 Aug 2012
Posts: 1117
Followers: 86
Kudos [?]: 561 [0], given: 107
If a NOT=0, is 1/a>a/(b^4+3) ? [#permalink] 31 Jul 2013, 07:19
Expert's post
1. If a NOT=0, is \frac{1}{a}>\frac{a}{b^4+3}
i. a^2=b^2
ii. a^2=b^4
_________________
Last edited by Zarrolou on 31 Jul 2013, 07:29, edited 1 time in total.
Edited the question.
Current Student
Joined: 14 Dec 2012
Posts: 838
Location: India
Concentration: General Management, Operations
GMAT 1: 700 Q50 V34
GPA: 3.6
Followers: 37
Kudos [?]: 611 [0], given: 197
Re: If a NOT=0, is 1/a>a/(b^4+3) ? [#permalink] 31 Jul 2013, 07:52
bagdbmba wrote:
1. If a NOT=0, is \frac{1}{a}>\frac{a}{b^4+3}
i. a^2=b^2
ii. a^2=b^4
IMO E
LET a = 1
according to statement 1==> b^4=1
now putting in equation
\frac{1}{1}>\frac{1}{1+3}==>satisfies
if a = -1 b^4 =1
\frac{1}{-1}>\frac{-1}{1+3}==>doesnt satisfies.
statement 2:
a^2 = b^2
let a=1 b^4 = 1
\frac{1}{1}>\frac{1}{1+3}==>satisfies
let a=-1 b^4 = 1
\frac{1}{-1}>\frac{-1}{1+3}===>doesnt satisfies.
combining both
a,b= 1 or -1
same cases will not satisfy
hence E
_________________
When you want to succeed as bad as you want to breathe ...then you will be successfull....
GIVE VALUE TO OFFICIAL QUESTIONS...
learn AWA writing techniques while watching video : http://www.gmatprepnow.com/module/gmat- ... assessment
Verbal Forum Moderator
Joined: 10 Oct 2012
Posts: 627
Followers: 43
Kudos [?]: 596 [0], given: 135
Re: If a NOT=0, is 1/a>a/(b^4+3) ? [#permalink] 31 Jul 2013, 09:53
Expert's post
bagdbmba wrote:
1. If a NOT=0, is \frac{1}{a}>\frac{a}{b^4+3}
i. a^2=b^2
ii. a^2=b^4
The question asks IS \frac{(b^4+3)}{a}>a \to Multiply both sides bya^2 \to a(b^4+3)>a^3 \to Is a(b^4+3-a^2)>0.
From F.S 1, the question becomes: Is a(a^4-a^2+3)>0.
Note that (a^4-a^2+3) will always be positive as because it can be represented as sum of 2 squares : (a^4-2a^2+1)+a^2+2 = (a^2-1)^2+(a^2+2)
Thus, the question now simply becomes : Is a>0
We clearly don't know that. Hence, Insufficient.
From F.S 2, the question becomes : Is a(b^4+3-a^2)>0 \to a(a^2+3-a^2)>0 \to Is 3a>0. Again, Insufficient.
Taking both together, we know thatb^2 = b^4 \to b^2(b^2-1) = 0 \to b^2=a^2 = 1 [b^2 \neq{0} asa\neq{0}]
Thus, a could be\pm1. Insufficient.
E.
Sidenote: The actual question is from Manhattan Gmat and they had published an errata for this question , which modifies the first fact statement to
a = b^2. Now, if that would have been the case,the answer would have been A. However, with the given condition, the answer is E.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 23348
Followers: 3602
Kudos [?]: 28657 [0], given: 2808
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 31 Jul 2013, 09:56
Expert's post
Merging similar topics.
_________________
BSchool Forum Moderator
Joined: 27 Aug 2012
Posts: 1117
Followers: 86
Kudos [?]: 561 [0], given: 107
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 14 Aug 2013, 10:11
Expert's post
VeritasPrepKarishma wrote:
Archit143 wrote:
Hi bunuel
can you help with this question.....the book has given statement 1 is sufficient...it says that since b^2 is always positive so "a" must also be positive......but i think a can be negative also......
regards
Archit
Responding to a pm:
A response above clarifies that this is an errata.
The correct statement 1 is a = b^2 instead of a^2 = b^2.
In that case
If a is not equal to zero, is 1/a > a / (b^4 + 3) ?
(1) a = b^2
(2) a^2 = b^4
Is\frac{1}{a} > \frac{a}{(b^4 + 3)} ?
Is \frac{(b^4 + 3)}{a} > a ?
Is \frac{(b^4 + 3)}{a} - a > 0 ?
Is \frac{(b^4 + 3 - a^2)}{a} > 0 ?
(1) a = b^2
This tells us that 'a' must be positive. Further, squaring, we get a^2 = b^4
Hence, the question becomes:
Is 3/a > 0.
It must be since a is positive. Sufficient
(2) a^2 = b^4
Doesn't tell us anything about the sign of a.
The question becomes:
Is 3/a > 0?
We cannot say. Not Sufficient
Hi Karishma,
As per the above highlighted part, if a is '-ve' then a=-b^2, so \sqrt{a} will be \sqrt{-b^2}. Hence a becomes imaginary as b^2 is always positive.So here also a must be '+ve'.
But, we've considered a as '-ve' also! Could you please explain this?
_________________
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 4870
Location: Pune, India
Followers: 1149
Kudos [?]: 5339 [1] , given: 165
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 15 Aug 2013, 22:14
1
KUDOS
Expert's post
bagdbmba wrote:
VeritasPrepKarishma wrote:
Archit143 wrote:
Hi bunuel
can you help with this question.....the book has given statement 1 is sufficient...it says that since b^2 is always positive so "a" must also be positive......but i think a can be negative also......
regards
Archit
Responding to a pm:
A response above clarifies that this is an errata.
The correct statement 1 is a = b^2 instead of a^2 = b^2.
In that case
If a is not equal to zero, is 1/a > a / (b^4 + 3) ?
(1) a = b^2
(2) a^2 = b^4
Is\frac{1}{a} > \frac{a}{(b^4 + 3)} ?
Is \frac{(b^4 + 3)}{a} > a ?
Is \frac{(b^4 + 3)}{a} - a > 0 ?
Is \frac{(b^4 + 3 - a^2)}{a} > 0 ?
(1) a = b^2
This tells us that 'a' must be positive. Further, squaring, we get a^2 = b^4
Hence, the question becomes:
Is 3/a > 0.
It must be since a is positive. Sufficient
(2) a^2 = b^4
Doesn't tell us anything about the sign of a.
The question becomes:
Is 3/a > 0?
We cannot say. Not Sufficient
Hi Karishma,
As per the above highlighted part, if a is '-ve' then a=-b^2, so \sqrt{a} will be \sqrt{-b^2}. Hence a becomes imaginary as b^2 is always positive.So here also a must be '+ve'.
But, we've considered a as '-ve' also! Could you please explain this?
a^2 = b^4
When you take the square root of both sides here, you get |a| = |b^2| = b^2
You do not get a = b^2.
Note that when you take square root of x^2 = y^2, you get |x| = |y|, not x = y
So a can still be negative. Say a = -9, b = 3
In this case, a^2 = b^4 = (-9)^2 = 3^4
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Save \$100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews
BSchool Forum Moderator
Joined: 27 Aug 2012
Posts: 1117
Followers: 86
Kudos [?]: 561 [0], given: 107
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 09 Sep 2013, 03:27
Expert's post
Thanks Karishma for the explanation and apologies for late acknowledgement as I could figure it out later and hence this post slipped somehow
But, again much thanks. +1
_________________
Re: If a is not equal to zero, is 1/a > a / (b^4 + 3) ? (1) [#permalink] 09 Sep 2013, 03:27
Similar topics Replies Last post
Similar
Topics:
If a ≠ 0, is (1/a) > a/(b^4+3) ? 0 04 Aug 2014, 05:12
If a NOT=0, is 1/a>a/(b^4+3) ? 0 31 Jul 2013, 09:53
17 If a does not = 0, is 1/a > a/(b^4 + 3) 14 12 Sep 2011, 16:42
if a not equal to Zero, 1/a < a/(b^4+3) 1. a^2=b^2 2. a^2 7 18 Jun 2011, 10:32
1 [1/a^2+1/a^4+1/a^6+...]>[1/a+1/a^3+1/a^5] 3 18 Oct 2010, 04:03
Display posts from previous: Sort by
|
2014-10-21 02:12:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7238744497299194, "perplexity": 10151.33320563613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443869.1/warc/CC-MAIN-20141017005723-00323-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.atmos-meas-tech.net/11/3297/2018/
|
Journal cover Journal topic
Atmospheric Measurement Techniques An interactive open-access journal of the European Geosciences Union
Journal topic
Atmos. Meas. Tech., 11, 3297-3322, 2018
https://doi.org/10.5194/amt-11-3297-2018
Atmos. Meas. Tech., 11, 3297-3322, 2018
https://doi.org/10.5194/amt-11-3297-2018
Research article 11 Jun 2018
Research article | 11 Jun 2018
# Airborne wind lidar observations over the North Atlantic in 2016 for the pre-launch validation of the satellite mission Aeolus
Airborne wind lidar observations over the North Atlantic
Oliver Lux, Christian Lemmerz, Fabian Weiler, Uwe Marksteiner, Benjamin Witschas, Stephan Rahm, Andreas Schäfler, and Oliver Reitebuch Oliver Lux et al.
• Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR), Institut für Physik der Atmosphäre, Oberpfaffenhofen 82234, Germany
Abstract
In preparation of the satellite mission Aeolus carried out by the European Space Agency, airborne wind lidar observations have been performed in the frame of the North Atlantic Waveguide and Downstream Impact Experiment (NAWDEX), employing the prototype of the satellite instrument, the ALADIN Airborne Demonstrator (A2D). The direct-detection Doppler wind lidar system is composed of a frequency-stabilized Nd:YAG laser operating at 355 nm, a Cassegrain telescope and a dual-channel receiver. The latter incorporates a Fizeau interferometer and two sequential Fabry–Pérot interferometers to measure line-of-sight (LOS) wind speeds by analysing both Mie and Rayleigh backscatter signals. The benefit of the complementary design is demonstrated by airborne observations of strong wind shear related to the jet stream over the North Atlantic on 27 September and 4 October 2016, yielding high data coverage in diverse atmospheric conditions. The paper also highlights the relevance of accurate ground detection for the Rayleigh and Mie response calibration and wind retrieval. Using a detection scheme developed for the NAWDEX campaign, the obtained ground return signals are exploited for the correction of systematic wind errors. Validation of the instrument performance and retrieval algorithms was conducted by comparison with DLR's coherent wind lidar which was operated in parallel, showing a systematic error of the A2D LOS winds of less than 0.5 m s−1 and random errors from 1.5 (Mie) to 2.7 m s−1 (Rayleigh).
1 Introduction
Over the last decade, Doppler wind lidar systems (Reitebuch, 2012a) have emerged as a versatile tool for the range-resolved detection of wind shears (Shangguan et al., 2017), aircraft wake vortices (Köpp et al., 2004; Dolfi-Bouteyre et al., 2009), wind and temperature turbulence (Banakh et al., 2014) and gravity waves (Witschas et al., 2017), amongst other applications. In particular, direct-detection wind lidars have been demonstrated to provide accurate wind information from ground up to altitudes of 60 km (Dou et al., 2014) or even beyond (Baumgarten, 2010; Hildebrand et al., 2012). The most ambitious endeavour in this context is the upcoming satellite mission Aeolus of the European Space Agency (ESA), which strives for the continuous global observation of atmospheric wind profiles employing the first ever satellite-borne Doppler wind lidar instrument ALADIN (Atmospheric LAser Doppler INstrument) (ESA, 2008; Stoffelen et al., 2005). Being a part of ESA's Living Planet Programme, Aeolus will significantly contribute to the improvement in numerical weather prediction (NWP), as it will support to close the gap in wind profile data coverage, especially over the oceans, which has been identified as one of the major deficiencies in the current Global Observing System (Baker et al., 2014; Andersson, 2016). For this purpose, it will provide one line-of-sight (LOS) component of the horizontal wind vector from ground throughout the troposphere up to the lower stratosphere (about 27 km) with a vertical resolution of 0.25 to 2 km, depending on altitude and precision of 1 to 3 m s−1 (ESA, 2016; Reitebuch, 2012b). The obtained data will allow for greater accuracy of the initial atmospheric state in NWP models and thus improve the quality of weather forecasts (Tan and Andersson, 2005) as well as the understanding of atmospheric dynamics and climate processes (ESA, 2008). As a secondary product, the wind lidar system, which is scheduled for launch in 2018, will provide information on cloud top heights and on the vertical distribution of clouds and aerosol properties such as backscatter and extinction coefficients (Flamant et al., 2008; Ansmann et al., 2007).
Over the past years, a prototype of the Aeolus payload, the ALADIN Airborne Demonstrator (A2D), has been developed and deployed in several field experiments, aiming at pre-launch validation of the satellite instrument and at performing wind lidar observations under various atmospheric conditions (Reitebuch et al., 2009; Marksteiner et al., 2011, 2017). Most recently, in autumn of 2016, the A2D was employed in the frame of the North Atlantic Waveguide and Downstream Impact Experiment (NAWDEX) (Schäfler et al., 2018). Based in Keflavík, Iceland, this international field campaign had the overarching goal to investigate the influence of diabatic processes, related to clouds and radiation, on the evolution of the North Atlantic jet stream. Accurate wind speed observations of the North Atlantic jet stream form the basis for quantifying effects of disturbances for downstream propagation and related high-impact weather in Europe. For this purpose, four research aircraft equipped with diverse payloads were employed, which allowed for the observation of a large set of atmospheric parameters using a multitude of state-of-the-art remote sensing instruments, while ground stations delivered a comprehensive suite of additional measurements to complement the meteorological analysis.
With a view to the forthcoming Aeolus mission, the NAWDEX campaign was an ideal platform for extending the wind data set obtained with the A2D, as it offered the opportunity to perform wind measurements in dynamically complex scenes, including strong wind shear and varying cloud conditions. Furthermore, multiple instrument calibrations, which are a prerequisite for accurate wind retrieval, could be conducted over ice, namely the Vatnajökull glacier in Iceland, ensuring high signal-to-noise ratios (SNR) of the ground return and thus low systematic errors. In addition, the large-scale cooperation of atmospheric research groups from around the world was beneficial for the preparation of the upcoming launch of Aeolus.
Among the 14 research flights conducted in the frame of NAWDEX, the two flights performed on 27 September and on 4 October 2016 were especially interesting with regard to the instrument-driven goals of the campaign. While the former flight was characterized by exceptionally high wind speeds and strong wind shear to be sampled by the A2D, the latter one provided ground visibility which allowed for the analysis of ground return signals. In general, analysis of the ground return offers many possibilities for improving the performance of lidar instruments. Recently, Amediek and Wirth (2017) introduced a method for quantifying laser pointing uncertainties in airborne and spaceborne lidar instruments which is based on the comparison of ground elevations derived from the lidar ranging data with elevation data from a high-resolution digital elevation model (DEM). Regarding airborne wind lidar and radar systems, ground echoes can be exploited to account for systematic pointing errors and to determine the mounting angles of the instrument. Here, the ground surface is used as a zero wind reference, which allows us to estimate the contribution of the aircraft motion to the actual atmospheric wind measurement and hence to correct for inaccuracies in the aircraft attitude data as well as in the instrument's alignment (Bosart et al., 2002; Kavaya et al., 2014; Chouza et al., 2016a; Weiler, 2017). Accurate zero wind correction (ZWC), however, requires precise differentiation between atmospheric and ground return signals in order to prevent systematic errors. This is particularly true for the A2D (and ALADIN) due to its coarse vertical resolution of several hundred metres. Hence, in contrast to previous A2D airborne campaigns, an enhanced scheme for the detection of ground return signals was developed for NAWDEX.
The paper is organized as follows. First, the operation principle of the system is described with a focus on the complementary design of the instrument comprising two different receiver channels, which allow for the analysis of both particle and molecular backscatter signals. The subsequent section is devoted to the Rayleigh and Mie response calibrations, which represent an essential part of the data analysis. Here, the implemented ground detection method used for the A2D data analysis is introduced. Comparison with the approach taken in previous campaigns reveals the influence of the surface albedo on the quality of Rayleigh and Mie response calibrations and highlights the necessity of proper ground detection. Afterwards, wind observations performed with the A2D during the two above-mentioned NAWDEX flights are presented, demonstrating the ability of the lidar system to provide wind profiles with broad data coverage under various atmospheric conditions. Evaluation of the data accuracy and precision is conducted by comparing the measured wind speeds with those obtained by DLR's coherent wind lidar system (Weissmann et al., 2005; Witschas et al., 2017), which was operated in parallel from the same aircraft as a reference system. Finally, ZWC based on the refined ground detection scheme is shown to provide a significant reduction of the systematic wind error for the second flight.
Figure 1Schematic of the ALADIN Airborne Demonstrator (A2D) wind lidar instrument consisting of an injection-seeded, frequency-tripled laser transmitter, a Cassegrain telescope, front optics and a dual-channel receiver. PLL: phase locked loop; SHG: second harmonic generator; THG: third harmonic generator; IS: integrating sphere; FC: fibre coupler; BEX: beam expander; EOM: electro-optic modulator; FPI: Fabry–Pérot interferometer; ACCD: accumulation charge-coupled device.
2 The A2D direct-detection wind lidar system
The A2D wind lidar is composed of a pulsed, frequency-stable, ultraviolet (UV) laser transmitter incorporating a reference laser system, a Cassegrain telescope, a configuration of optical elements (front optics) to spatially overlap a small portion of the outgoing radiation with the return signals from the atmosphere and the ground, and a dual-channel receiver including detectors. A schematic of the lidar is depicted in Fig. 1. The individual components will be described in the following.
## 2.1 Laser transmitter, telescope and front optics
The laser transmitter of the A2D is based on a frequency-tripled Nd:YAG master oscillator power amplifier (MOPA) system, generating 20 ns pulses (full width at half maximum, FWHM) at 354.89 nm wavelength. The injection-seeded laser, which uses an active frequency stabilization technique, provides single-frequency UV pulses with energy of 60 mJ at 50 Hz repetition rate (3.0 W average power), while showing near-diffraction-limited beam quality. Concerning the spectral characteristics, the bandwidth of the transmitted UV laser pulses is 50 MHz (FWHM), while the pulse-to-pulse frequency stability is approximately 3 MHz (root mean square). A comprehensive description of the laser transmitter configuration and its performance is provided in Lemmerz et al. (2017) and Schröder et al. (2007).
In the last years, particular attention has been devoted to the cavity control mechanism which ensures high single-frequency operation stability even under vibration conditions. In addition to the strict requirements in terms of frequency stability, a further challenge is imposed by the necessity to trigger the receiver electronics about 60 µs before the laser pulse emission with an error of less than 100 ns. Therefore, a dedicated active frequency stabilization technique was developed which is based on the ramp–delay–fire method (Nicklaus et al., 2007). Fast detection of the master oscillator cavity resonances with the seed laser frequency enabled effective compensation of higher-frequency vibrations, while providing a sufficiently early trigger for the detector electronics with a timing stability of around 80 ns (Lemmerz et al., 2017). The long lead time of the detector electronics is due to an electronic preconditioning process of the accumulating charged-coupled device (ACCD) arrays described in Sect. 2.2. Although ACCDs of the same type are used for the satellite instrument, the preconditioning process is not an issue here, since the round-trip laser pulse travel time from the satellite to the first atmospheric range gate ( 2.5 ms) is sufficiently long.
Measurement of the transmitted laser frequency and calibration of the frequency-dependent transmission of the receiver spectrometers are prerequisite for accurate wind retrieval. Therefore, a small portion of the pulsed UV laser radiation, referred to as internal reference, is collected by an integrating sphere, coupled into a multi-mode fibre (200 µm core diameter) and guided to the receiver via the front optics, while allowing adjustable signal levels by using a variable fibre attenuator (not shown in Fig. 1). Another small fraction of the beam is directed to a wavelength meter (HighFinesse, WS Ultimate 2) with a relative accuracy of 10−8 in order to monitor the UV frequency of the outgoing laser pulse.
The spatial properties of the high-energy laser were characterized prior to the NAWDEX campaign according to the ISO 11146 standard (ISO, 2005), yielding a beam quality factor (M2) of 1.1 for both the major and minor beam axis. As a result, after passing through the beam expander, the collimated beam showed a full-angle divergence (±3σ, containing > 99 % of the energy) of 98 and 102 µrad at 4σ beam diameters of 7.3 and 7.1 mm for the two axes.
The UV laser is transmitted into the atmosphere via a piezo-electrically controlled mirror that is attached to the frame of a Cassegrain-type telescope, as shown in Fig. 1. In contrast to ALADIN that incorporates a 1.5 m diameter telescope and will operate at an off-nadir pointing angle of 35, the A2D employs a 0.2 m telescope which is oriented at an off-nadir angle of 20. The convex spherical secondary mirror of the telescope collects the backscattered light and guides it to the front optics of the A2D receiver assembly. The structural design of the telescope causes a range-dependent overlap function which has to be considered in the wind retrieval as it reduces the backscatter signal (Paffrath, 2006; Paffrath et al., 2009).
Aside from a narrowband UV bandpass filter (FWHM: 1.0 nm) which blocks the broadband solar background spectrum, the front optics include an electro-optic modulator (EOM). The EOM is used to avoid saturation of the ACCD by shutting the atmospheric path for several µs after transmission of the laser pulse, thus preventing strong backscattered light produced close to the instrument (up to about 1 km) from being incident on the detectors. In this way, the EOM temporally separates the atmospheric signal from the internal reference signal. The latter is injected into the front optics assembly via the aforementioned multi-mode fibre, so that both signals enter the spectrometer optics on equal paths. In addition, active stabilization of the laser beam pointing is realized by a co-alignment control loop. For this purpose, a portion of the backscattered signal passing through the front optics is imaged onto a UV camera (SONY XC-EU50CE) to monitor the horizontal and vertical position of the centre of gravity (CoG) of the beam. A reference position (CoGXCoGY) is defined and a feedback loop involving three piezo-actuators mounted on the last laser transmit mirror is applied to actively stabilize the co-alignment of the transmit and receive path of the laser beam. In this way, variations in the incidence angle of the atmospheric return signals on the receiver spectrometers are reduced. This is crucial for accurate wind measurements, especially for the Rayleigh channel, as angular variations of 1 µrad with respect to the 200 mm telescope diameter and a field of view (FOV) of 100 µrad introduce errors of the horizontal wind speeds of up to 0.4 m s−1, as derived from optical simulations and experiments (DLR, 2016). It should be noted that active stabilization of the transmit–receive co-alignment is not required for the satellite instrument, since the same telescope is used for transmission of the laser beam and reception of the backscattered signals.
## 2.2 Dual-channel receiver and detectors
The receiver optics of both the satellite instrument and the A2D are almost identical and consist of two different spectrometers, as shown on the right-hand side of Fig. 1. Two sequential Fabry–Pérot interferometers (FPIs) are employed for measuring the Doppler frequency shift of the broadband Rayleigh backscatter signal from molecules, whereas a Fizeau interferometer is used for determining the Doppler shift of the narrowband Mie signal originating from cloud and aerosol backscattering. Detection of the two signals is realized by using two ACCDs which allow for data acquisition in 24 range gates, where the vertical resolution within one profile can be varied from 296 m to about 2 km.
The wind measurement principle of the A2D wind lidar system is based on detecting frequency differences between the emitted and the backscattered laser pulses. Due to the Doppler effect, the frequency f0 of the outgoing pulse is shifted upon backscattering from particles (cloud droplets, aerosols) and molecules which move with the ambient wind. The frequency shift in the backscattered signal ΔfDoppler is proportional to the wind speed vLOS along the laser beam LOS: $\mathrm{\Delta }{f}_{\mathrm{Doppler}}\phantom{\rule{0.125em}{0ex}}=\phantom{\rule{0.125em}{0ex}}\mathrm{2}{f}_{\mathrm{0}}/c\phantom{\rule{0.25em}{0ex}}\cdot \phantom{\rule{0.25em}{0ex}}{v}_{\mathrm{LOS}}$, with c being the speed of light. For an emission frequency of f0= 844.75 THz (354.89 nm vacuum wavelength), a LOS wind speed of 1 m s−1 translates to a frequency shift of 5.63 MHz which corresponds to a wavelength shift of 2.37 fm. The required accuracy of the frequency measurement is hence on the order of 10−8 to measure wind speeds with an accuracy of 1 m s−1. Owing to the large difference in spectral width of the Mie ( 50 MHz) and Rayleigh ( 3.8 GHz at 355 nm and 293 K) atmospheric backscatter signals, two different techniques are applied for deriving the Doppler frequency shift from the two spectral contributions separately.
The measurement principle of the Rayleigh channel relies on the double-edge technique (Chanin et al., 1989; Garnier and Chanin, 1992; Flesia and Korb, 1999; Gentry et al., 2000) and involves two bandpass filters (A and B) which are placed symmetrically around the frequency of the emitted laser pulse, as illustrated in Fig. 2a. The width and spacing of the filter transmission curves (free spectral range (FSR): 10.95 GHz, FWHM: 1.78 GHz, spacing: 6.18 GHz) is chosen such that the maxima are close to the inflexion points (edges) of the molecular line that is spectrally broadened by virtue of Rayleigh–Brillouin scattering (Witschas, 2011a, b, c). The transmitted signal through each filter is proportional to the convolution of the respective filter transmission function and the line shape function of the atmospheric backscatter signal. Consequently, the contrast between the return signals IA and IB transmitted through filters A and B represents a measure of the frequency shift between the emitted and backscattered laser pulse, thus defining the frequency-dependent Rayleigh response ΨRay as follows:
$\begin{array}{}\text{(1)}& {\mathrm{\Psi }}_{\mathrm{Ray}}\left(f\right)=\frac{{I}_{\mathrm{A}}\left(f\right)-{I}_{\mathrm{B}}\left(f\right)}{{I}_{\mathrm{A}}\left(f\right)+{I}_{\mathrm{B}}\left(f\right)}.\end{array}$
Close to the filter cross point, where the transmission functions intersect, the relationship between Rayleigh response and frequency is approximately linear with a slope of about 5 × 10−4 MHz−1.
Figure 2(a) Spectral distribution of the transmitted laser pulse (purple) and the backscattered signal (black), which is composed of the narrowband Mie and the broadband Rayleigh component. The transmission spectra of the two FPI filters of the Rayleigh channel are shown in green, while the filled areas illustrate the respective intensities IA(f) and IB(f) transmitted through the filters A and B for determining the Doppler shift. (b) Operation principle of the Mie channel based on the fringe-imaging technique.
The determination of the Doppler shift from the narrowband Mie return signal is based on the fringe-imaging technique (McKay, 2002) involving the measurement of the spatial location of an interference pattern, as shown in Fig. 2b. For this purpose, a Fizeau interferometer is used consisting of two plane plates that are tilted by a small wedge angle of several µrad with respect to each other. Due to the wedge angle, the linear interference pattern (fringe) is produced at a distinct lateral position along the wedge where the condition for constructive interference is fulfilled. Hence, a Doppler frequency shift of the signal results in a spatial displacement of the fringe which is vertically imaged onto the ACCD detector, whereby the relationship between the Doppler shift and the centroid position of the fringe x is approximately linear ($\mathrm{\Delta }x\phantom{\rule{0.125em}{0ex}}\approx \phantom{\rule{0.125em}{0ex}}k\phantom{\rule{0.25em}{0ex}}\cdot \phantom{\rule{0.25em}{0ex}}\mathrm{\Delta }{f}_{\mathrm{Doppler}}\right)$, so that the Mie response reads
$\begin{array}{}\text{(2)}& {\mathrm{\Psi }}_{\mathrm{Mie}}\left(f\right)=x\left(f\right)=x\left({f}_{\mathrm{0}}\right)+\mathrm{\Delta }x\left(f\right)={x}_{\mathrm{0}}+k\cdot \mathrm{\Delta }{f}_{\mathrm{Doppler}}.\end{array}$
Here, x0 represents the Mie fringe centroid position at the frequency f0 of the emitted laser pulse and is referred to as Mie centre. Δx is the shift of the Mie fringe centroid position with respect to the Mie centre and k denotes the proportionality factor between the Doppler frequency shift ΔfDoppler and the resulting shift of the Mie fringe Δx, thus describing the sensitivity of the Mie channel. The latter is on the order of k 100 MHz pixel−1. From the Fizeau FSR of 2.2 GHz, only a section of 1.6 GHz is recorded by the 16 pixel columns of the ACCD (imaged spectral range), resulting in an effective LOS wind measurement range of ±145 m s−1.
The thinned and back-side-illuminated ACCD with 16 × 16 pixels is optimized for operation in the UV showing a high quantum efficiency of 85 %, while cooling to 30 C provides a low electronic noise level. The electronic charges generated in the imaging zone of the device are accumulated directly in a memory zone within the CCD chip, thus allowing for low readout noise (Reitebuch et al., 2009). For the ACCD used in the Mie channel, the electronic charges of all 16 rows are binned together to one row for each range gate of each laser pulse, resulting in 16 spectral channels of about 100 MHz width. For the Rayleigh channel, the two spots produced by the two FPIs are imaged onto the left and right half of a second ACCD of the same type, with the centres of the spots being separated by 8 pixels (see bottom right part of Fig. 1). As for the Mie channel, the electronic charges of all 16 rows are binned together to one row, whereas the signal of each Rayleigh filter is contained in 6 pixels that are summed up in the retrieval algorithms after digitization.
The memory zone of the ACCD contains 25 rows so that a maximum number of 25 range gates can be acquired, from which three range gates are used for detecting the background light, the detection chain offset (DCO) and the internal reference signal, while two range gates act as buffers for the internal reference. The DCO is a constant electric voltage at the analogue-to-digital converter. The atmospheric backscatter signals are collected in the remaining 20 (so-called atmospheric) range gates. The transfer time from the image to the memory zone limits the minimum temporal resolution of one range gate to 2.1 µs, which corresponds to a range resolution of 315 m and a height resolution of 296 m, taking account of the 20-off-nadir pointing of the instrument. The timing sequences of both ACCDs are programmable, providing flexible and independent vertical resolution for the Rayleigh and Mie wind profiles.
The horizontal resolution of the A2D is determined by the acquisition time of the detection unit. Here, the signals obtained from 20 laser pulses are accumulated to so-called measurements (duration 0.4 s), while the combination of the signals from 35 measurements (700 pulses) constitutes one observation (duration 14 s). Considering the time required for data read out and transfer (4 s), the separation time between two subsequent observations thus accounts for 18 s. For a typical ground speed of the Falcon aircraft of 200 m s−1, this results in a horizontal resolution of 3.6 km. Note that continuous data readout without gaps of 4 s is carried out for the satellite instrument on Aeolus, but the concept for on-chip averaging of multiple laser pulse returns to measurements is used as well. In the following, the terms observation and measurement are consistently used referring to the sampling of the A2D data.
Table 1Overview of the research flights of the Falcon aircraft conducted in the frame of the NAWDEX campaign and the wind scenes performed with the A2D. The flights on 27 September and 4 October 2016 discussed in the present work are printed in bold. The two flights on 28 September and 15 October 2016 were dedicated to response calibrations of the Rayleigh and Mie channel (see Sect. 3.1), while the first two and the last two flights on 17 September and 18 October 2016 were transfer flights between Oberpfaffenhofen, Germany, and the air base in Keflavík, Iceland.
3 Response calibrations and ground detection
The A2D direct-detection wind lidar system was employed during the NAWDEX field experiment delivering valuable data with a view to the pre-launch activities for the upcoming Aeolus mission as well as with regards to the meteorological objectives of the campaign. In the framework of NAWDEX, 14 research flights have been performed with the Falcon aircraft of DLR, including four transfer flights between Oberpfaffenhofen, Germany, and the air base in Keflavík, Iceland. An overview of the flights, wind scene periods and the number of A2D observations is presented in Table 1. Twenty-seven flight legs with continuous sampling of wind profiles were conducted with periods ranging from 11 min to more than 1 h, adding up to almost 15 h over the whole campaign. From the 14 research flights, 2 flights on 28 September and 15 October 2016 were dedicated to the calibration of the A2D instrument. This procedure represents a key part of the wind retrieval and will be described in this chapter. Here, the focus is put on a ground detection scheme that allows for accurate identification of ground signals and hence reduced systematic errors of the calibration parameters.
## 3.1 Response calibrations
Spectral response calibration of the A2D is a prerequisite for the wind retrieval, since the relationship between the Doppler frequency shift of the backscattered light, i.e. the wind speed, and the response of the two spectrometers has to be known for the wind retrieval. In particular, proper knowledge of the Rayleigh response for different altitudes is necessary, as the spectral shape of the Rayleigh–Brillouin backscatter signal significantly depends on temperature and pressure of the sampled atmospheric volume (Witschas et al., 2014) and thus varies along the laser beam path.
For deriving the frequency dependency of the Rayleigh and Mie channel spectral response, a frequency scan of the laser transmitter is carried out, thus simulating well-defined Doppler shifts of the radiation backscattered from the atmosphere within the limits of the laser frequency stability. During the calibration, the contribution of (real) wind related to molecular or particular motion along the instruments' LOS has to be eliminated, i.e. the LOS wind speed vLOS needs to be zero. In practice, this is accomplished by flying curves at a roll angle of the Falcon aircraft of 20, resulting in approximate nadir pointing of the instrument and hence vLOS 0, while assuming that the vertical wind is negligible. Consequently, regions with expectable non-zero vertical winds, e.g. introduced by gravity waves or convection, are avoided in this procedure. Nadir pointing leads to a circular flight pattern of the aircraft which is preferably located over areas with high surface albedo in the UV spectral region (e.g. over ice), hence enabling strong ground return intensities and, in turn, high SNR. In the course of the calibration procedure, which takes about 24 min, highest attention has to be paid to the minimization of all unknown contributions to the Rayleigh and Mie response such as biases resulting from inaccurate co-alignment of the transmit and receive path, temperature variations of the spectrometers or frequency fluctuations of the laser transmitter.
During NAWDEX, six response calibrations have been carried out over Iceland, four over the Vatnajökull glacier and two over ice-free land in the north of the island. During each calibration, the laser frequency was tuned in steps of 26 MHz (corresponding to 4.5 m s−1) over a 1.4 GHz interval (±125 m s−1) and the Rayleigh and Mie responses were determined after averaging over 700 pulses (1 observation) per frequency step. While the Rayleigh response is given by the intensity contrast function of filters A and B according to Eq. (1), the Mie response is described by the centroid position of the Fizeau fringe according to Eq. (2). Polynomial fitting is then performed for each individual range gate to derive polynomial coefficients that are later fed into the wind retrieval algorithm (Marksteiner, 2013). Here, a fifth-order polynomial was empirically chosen for fitting the Rayleigh response curves, whereas a linear fit is applied for the Mie response function:
$\begin{array}{}\text{(3a)}& & {\mathrm{\Psi }}_{\mathrm{Ray}}\left(f\right)={\sum }_{i=\mathrm{0}}^{\mathrm{5}}{c}_{i}{f}^{i},\text{(3b)}& & {\mathrm{\Psi }}_{\mathrm{Mie}}\left(f\right)={C}_{\mathrm{0}}+{C}_{\mathrm{1}}f.\end{array}$
The determined polynomial coefficients for each range gate are then used for the calculation of the Doppler frequency shift from the Rayleigh and Mie responses obtained for each wind observation. Since both the range gate setting and the flight altitude generally differ between the calibration flight and the actual wind scene, a linear interpolation is performed between the coefficients deduced from the calibration in order to obtain the response function for the respective bin altitudes of the wind observation.
For the satellite instrument, the atmospheric Rayleigh response function is derived after adding the return signals obtained from a number of range gates in the upper troposphere (e.g. between 6 and 16 km) in order to increase the SNR. The selection of the appropriate range for averaging is performed during on-ground processing and the information for each single range gate is still included in the downlinked raw data. In the satellite wind retrieval for the L2B product, a Rayleigh–Brillouin line shape model is used in combination with atmospheric temperature and pressure profiles from a NWP model (e.g. from ECWMF) to account for the altitude-dependence of the Rayleigh response over the entire vertical measurement range from ground to the lower stratosphere (Dabas et al., 2008; Tan et al., 2016).
Unlike for molecular scattering, the backscattering of the laser radiation from aerosols, cloud particles or hard targets does not induce a significant spectral broadening, so that the altitude-dependent variations in temperature and pressure have a negligible impact on the Mie response. Therefore, in contrast to the Rayleigh response calibration, the Mie response function determined for the ground return is sufficient for the wind retrieval and used for all the atmospheric range gates. Due to this fact, precise determination of the coefficients {C0,C1} for the ground is of utmost importance for an accurate Mie wind retrieval. A detailed study on A2D response calibrations and the various influencing factors that affect their quality will be provided in a forthcoming publication. Based on a set of criteria which have been defined over the last years, out of the six available from 2016 one particular calibration, i.e. set of response coefficients {ci} (i= 1, …, 5) and {C0,C1}, was determined as the baseline for the subsequent Rayleigh and Mie wind retrieval.
Figure 3Detection of ground signals with the A2D wind lidar. The sketch shows the vertical position of three neighbouring range gates (blue, yellow and red boxes) with respect to the ground. The ground return signals are either contained in only one range bin (a) or distributed over two range bins due to the range gate overlap (here shown for the Mie channel as green and orange areas) as well as varying elevation of the ground surface within one measurement (b). ΔH denotes the atmospheric contribution to the signal obtained from the ground bin(s). The given heights of 296 and 141 m are related to the A2D off-nadir angle of 20.
## 3.2 Refined ground detection scheme
Precise identification of the ground return signals is crucial for exploiting the information included therein. Systematic wind errors which can be caused by changes in the alignment of the transmit–receive path or inaccuracies in the aircraft attitude data can be reduced by applying ZWC. Regarding the aircraft speed of the Falcon, the specification of the incorporated GPS receiver assures an accuracy of better than 0.1 m s−1 (Weissmann et al., 2005). Due to the coarse vertical resolution (hundreds of metres) of the A2D and ALADIN, ZWC based on ground return signals is rather challenging, as the ground bin is very likely to be contaminated by atmospheric signals. For the Mie channel, strong aerosol backscatter close to the ground can influence the ground speed measurement, while the SNR of the ground measurement for the Rayleigh channel is diminished by the broad bandwidth molecular return collected from near the ground surface. Moreover, both channels are potentially affected by surface winds, which introduce systematic errors in the measurement of the ground speed or sea surface with non-zero ground speed (Li et al., 2010). This situation is aggravated by the fact that the ground signals can be distributed over multiple range bins. First, this is due to the charge transfer process of the ACCD, which leads to a temporal overlap in the acquisition of two subsequent range gates of about 1 µs. Laser timing fluctuations in combination with charge transfer inefficiency during the readout of the ACCD, especially occurring at high signal intensities, can cause a signal spread over even more than two range gates within a measurement and observation. Second, varying ground elevations during the duration of one measurement (0.4 s, 20 pulses at 50 Hz repetition rate) and laser pointing fluctuations can lead to the detection of ground signals in multiple range gates, taking into account that the laser pulses cover a distance of 80 m along track on the ground at an aircraft speed of 200 m s−1. Figure 3 illustrates this circumstance for two cases; one with ground signals completely contained in one range gate (a) and another with ground signals distributed over two range gates (b). The height difference between a reference ground elevation during one measurement and the upper bin border of the highest (or first) range gate that contains ground signals is denoted by ΔH and represents a measure of the atmospheric contribution to the ground signal detected by the A2D. The reference ground elevation per measurement is derived from the DEM ACE2, providing elevation data at a resolution of 9 arcsec (300 m × 300 m at the Equator) (Berry et al., 2010).
Figure 4Ground detection during the response calibration performed over Iceland on 15 October 2016 between 17:24 and 17:48 UTC. (a) Signal intensities measured with the A2D Rayleigh channel versus time and the range gates 8 to 24 on measurement level. (b) Mie signal intensity including Rayleigh background on measurement level. The intensities are range-corrected and scaled to the integration time of the respective range gates. Range gates 8 to 19 have a length of 592 m, while range gates 20 to 24 have a length of 296 m. Bins with signal intensities exceeding the maximum of the respective colour scale are printed in white. The Rayleigh and Mie ground masks resulting from the developed ground detection scheme are depicted in panels (c) and (d), respectively. White bins are identified as ground bins and thus considered for the determination of the ground response function.
In previous A2D studies, ground detection for the calibration mode was based on an analysis of the curtain plot depicting the Rayleigh and Mie signal intensities after range correction and normalization to the integration time of each range gate (see Fig. 4a and b). Here, high signal intensities related to strong ground return become visible as white bins, as the intensity exceeds the maximum of the respective colour scale. Ground range gates were then specified per flight leg and the corresponding signal intensities in the identified range gates were summed up (Marksteiner et al., 2013). For the example shown in Fig. 4, range gates 21 to 23 would be subjectively selected as ground range gates in the old scheme (by visual inspection by an experienced data analyst), since most of the white bins are found therein. This approach leads to an underestimation of the actual ground signal which might also be contained in adjacent range gates as well as to an additional summation of atmospheric signal causing error-prone ground data, especially for varying terrain during the flight leg. The imperfect differentiation between atmospheric and ground return signals thus introduces systematic errors in the ground response functions of both detection channels. Concerning the Mie channel, this affects the entire wind profile, as the ground response is used for the wind retrieval in all atmospheric range gates as mentioned above. The old ground detection scheme was acceptable in previous airborne campaigns where the response calibrations were performed over flat terrain, e.g. sea ice, so that ground signals were almost completely contained in only one range gate. However, since complex terrain scenes were encountered in the response calibrations during NAWDEX, the ground detection scheme was refined as explained in the following.
In order to derive more accurate ground speeds, a trade-off has to be found between summing up as much ground signal as possible and minimizing the atmospheric portion in the ground bins. For this purpose, a ground detection algorithm on measurement level was developed (Weiler, 2017). Similar to the wind retrieval algorithm employed for Aeolus (Reitebuch et al., 2017, 2018), it is based on a signal-gradient approach to estimate ground bin candidates within a predefined range around the ground level which is given by the DEM. In a range of ±3 bins around the expected ground level according to the DEM, the signal gradients of two adjacent bins are calculated for each measurement and per range gate i:
$\begin{array}{}\text{(4)}& \frac{\mathrm{\Delta }{I}_{i}}{\mathrm{\Delta }{R}_{i}}=\frac{{I}_{i+\mathrm{1}}-{I}_{i}}{{R}_{i+\mathrm{1}}-{R}_{i}}.\end{array}$
Here, I denotes integrated signal intensity per measurement, while R is the range from the instrument to the bin centre which can be calculated from the respective range gate integration time. In a next step, gradient thresholds are introduced to identify the uppermost and lowermost ground bin. For the analysed flights, thresholds of TGR,high= 0.015 arb. units km−1 and TGR,low=0.015 arb. units km−1 (arbitrary units is abbreviated arb. units) have been empirically found to yield consistent results for both the Rayleigh and Mie channel. In order to avoid large atmospheric contribution to the ground signal, another threshold ${T}_{\mathrm{GR};\mathrm{DEM}+\mathrm{1}}$ has been implemented which analyses the signal level of the range gate just above the DEM bin covering the reference ground elevation. If the intensity in this bin does not make up more than five percent of the total summed ground signal, it is not considered for the ground signal summation. Careful analysis has shown that ground intensities falling below that threshold have negligible influence on the accuracy of ground response calibration curves or ground wind speeds and thus can be omitted for the ground signal summation (Weiler, 2017). Using this approach, ΔH and hence the atmospheric portion of the ground signal can be significantly diminished. The ground detection method has been employed for the analysis of the Mie and Rayleigh response calibration data obtained in the NAWDEX campaign and formed the basis for the ZWC applied for the wind scenes on 4 October 2016 discussed in Sect. 4.2. Moreover, the comparison between refined ground detection and the previous scheme allows for the characterization of the influence of the atmospheric contamination of the ground calibration parameters.
The largest influence of the refined scheme on the calibration parameters compared to the former approach was obtained for the sixth response calibration procedure performed during NAWDEX on 15 October 2016 between 17:24 and 17:48 UTC. The Rayleigh and Mie signal intensities measured during the calibration are shown in Fig. 4a and b, respectively. The calibration flight was carried out in the region around 65.5 N and 17.8 W, which is characterized by a mountainous and ice-free terrain with ground elevations ranging from about 200 to 1200 m. Consequently, ground signals were detected in four different range gates (20 to 23) during the calibration procedure, as the Falcon aircraft flew circular patterns over this region. While the ground response calibration based on the old ground detection method would have summed up all the signals contained in these four range gates for each observation, i.e. frequency step of the calibration, the refined method only considers those bins per measurement that fulfil the threshold conditions as explained above. The corresponding Rayleigh and Mie ground masks illustrating the range bins that were identified as ground bins for each measurement are depicted in Fig. 4c and d. Due to the different sensitivities of the two receiver channels, and thus different measured signal intensities, the two masks are not fully identical.
Table 2Rayleigh response calibration parameters obtained from the six calibrations performed on 28 September and on 15 October 2016. The zero- and first-order fitting parameters c0 and c1 were derived involving the old ground and new ground (GR) detection method (see text). The atmospheric contribution ΔH (see Fig. 3) has been averaged over the respective calibration period. Calibration 1 was performed using a different alignment of the lidar system and is thus excluded from the statistical calculations.
For both channels, the atmospheric contribution is drastically reduced resulting in more accurate response values. While the mean value of ΔH over all measurements of calibration 6 is 454 and 505 m for the Rayleigh and Mie channel when the old ground detection technique is applied, it is only 207 and 249 m for the new method, respectively. An overview of the atmospheric contributions (mean ΔH) for all the six Rayleigh and Mie response calibrations (RRC and MRC) using the two different ground detection schemes is given in Tables 2 and 3. The tables also summarize the zero- and first-order polynomial coefficients {c0,c1} and {C0,C1} (referred to as intercept and slope) obtained from fitting of the response curves according to Eqs. (3a) and (3b). The second- and higher-order coefficients {ci} (i= 2, 3, 4, 5) of the Rayleigh response function are not given. Since calibration 1 was carried out using a different setting of the co-alignment loop reference position (CoGXCoGY) (see Sect. 2.1) affecting the incidence angle of the backscattered signals on the Rayleigh and Mie spectrometer, the resulting calibration parameters were disregarded in the statistical calculations leading to the values provided in Tables 2 and 3.
In general, larger deviations in the slope and intercept values between the two methods are present for the Rayleigh channel. This can be explained by the fact that the broadband Rayleigh channel is more sensitive to the broadband atmospheric molecular background signal than the narrowband Mie channel where the broadband atmospheric contribution leads to a nearly constant intensity offset to the narrowband ground signals. The impact on the Rayleigh channel is especially large in cases of low-albedo surfaces where the atmospheric contribution to the weaker ground signals is more pronounced. As a result, large discrepancies between the calibration parameters obtained with the old and new method are observed for the two last calibrations that were performed over ice-free land with low albedo in the UV. In particular, the intercept values derived for the RRC 6 discussed before differ by as much as 1.24 × 10−2. Using a typical Rayleigh response slope value of 4.6 × 10−4 MHz−1 (Table 2) and the conversion between Doppler frequency shift and LOS wind speed (1 m s−1$\stackrel{\mathrm{^}}{=}$ 5.63 MHz) introduced in Sect. 2.2, this difference in intercept translates to a wind speed difference of 4.8 m s−1. That means that ground speed values determined from RRC 6 using either the old or the new ground detection method would differ by that value. With a view to ZWC, the large discrepancy in the ground speed values underlines the relevance of proper ground detection for the wind retrieval, as the ground speeds are used as zero reference for the derived wind speeds. Likewise, using the refined ground detection method for the analysis of MRC 6 results in a change in the Mie intercept values by 11.7 × 10−3 pixel which corresponds to a wind speed difference of 0.2 m s−1, considering a typical Mie response slope of about 100 MHz pixel−1 (Table 3).
Table 3Mie response calibration parameters obtained from the six calibrations performed on 28 September and on 15 October 2016. The zero- and first-order fitting parameters C0 and C1 were derived involving the old ground and new ground (GR) detection method (see text). The atmospheric contribution ΔH (see Fig. 3) has been averaged over the respective calibration period. Calibration 1 was performed using a different alignment of the lidar system and is thus excluded from the statistical calculations.
Another aspect that becomes obvious from Tables 2 and 3 is that the spread of intercept values between the different Rayleigh response calibrations is reduced when applying the new ground detection method. The standard deviation over the five RRCs 2 to 6 is 1.02 × 10−2, whereas it is 0.68 × 10−2 for the new method. Hence, depending on the calibration used for the wind retrieval, the Rayleigh ground wind speed varies by 3.9 m s−1 if the old technique is applied. This value is reduced by more than 30 % to 2.6 m s−1 with the new scheme, which is still unsatisfactorily large regarding the consistency of Rayleigh response calibrations. For the Mie channel, no change in the spread of the calibration parameters is evident. Nevertheless, the new ground detection approach provides a considerable improvement in the accuracy of the ground calibration parameters and, in turn, of the derived ground wind speeds. With a view to the Aeolus mission, it can be concluded that calibrations should be performed over surfaces with high albedo, like ice surfaces, in order to minimize the impact of the atmospheric contamination. Furthermore, the quantity ΔH could be considered as a quality parameter for evaluating the quality of response calibrations or even to correct calibrations for the atmospheric contribution.
4 Wind retrieval and assessment of accuracy
This chapter discusses the wind results from two selected flights performed on 27 September and 4 October 2016 to demonstrate the Rayleigh and Mie wind retrieval algorithms as well as their subsequent validation by statistical comparison with the data obtained with DLR's coherent reference wind lidar system.
Figure 5(a) Flight track of the Falcon aircraft (black line) during the research flight conducted on 27 September 2016. The wind scenes performed from 10:28 to 11:38 and from 11:48 to 12:36 UTC are indicated in orange and blue. The background picture is composed of a map provided by Google Earth and satellite images from Terra MODIS (VIS channel) taken at 11:55 (right part) and 13:30 UTC (left part) (MODIS, 2017a). (b) Geopotential height (black isolines, in dekametres) and horizontal wind speed (colour shading) at 300 hPa on 27 September 2016, 12:00 UTC, from ECMWF model analysis together with the flight track of the Falcon 20 aircraft.
## 4.1 Jet stream wind observations over the North Atlantic on 27 September 2016
While the instrument response calibrations were performed during two dedicated flights over Iceland, the other 12 research flights within the NAWDEX campaign were devoted to wind observations over the North Atlantic region. Here, sampling of the jet stream was of particular interest with regards to both the pre-launch activities of Aeolus and the scientific objectives related to atmospheric dynamics. The observation of high horizontal wind speeds and large wind gradients occurring in relation to the jet provided an extensive characterization of the instrument over a large operating range and accurate wind profiles for the NAWDEX science objectives. In the context of the fourth NAWDEX intensive observation period, the goal of the flight carried out on 27 September 2016 was to observe very high jet stream wind speeds related to the former tropical cyclone Karl. As Karl moved towards the mid-latitudes, it merged with an initially weak downstream cyclone and strongly intensified. Later, at the time of the flight, the already weakened cyclone was located between Iceland and Scotland and the zonally oriented jet stream extended towards Scotland with horizontal wind speeds exceeding 80 m s−1 at altitudes of 9 to 10 km (see Fig. 5 and for a detailed description of the meteorological situation refer to Schäfler et al., 2018). To observe the high wind speeds, the Falcon aircraft flew towards the Faroe Islands and the Outer Hebrides right into the centre of the jet stream at a flight altitude of 11.5 km before returning to the air base in Keflavík. The satellite image taken from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard NASA's Terra satellite (MODIS, 2017a), shown in Fig. 5a, depicts increased cloud coverage along the flight track crossing the cyclone. From the total flight duration of 3 h and 56 min (09:28 to 13:24 UTC), wind observations were conducted in the period between 10:28 and 12:36 UTC, split into two scenes of about 1 h each.
### 4.1.1 Rayleigh background subtraction and quality control
In the period from 11:41 to 11:47 UTC the A2D was operated at a different mode which aimed at the detection of the Rayleigh background signal on the Mie channel. Proper quantification of the broadband molecular return signal transmitted through the Fizeau interferometer is important for avoiding systematic errors in the determination of the fringe centroid position and, in turn, in the Mie winds. Therefore, the laser frequency was tuned away by 1.1 GHz from the Rayleigh filter cross point and the Mie spectrometer centre position which define the nominal set frequency during the wind scenes (see Fig. 2a). In this way, the laser frequency of the emitted pulses was outside of the useful spectral range of the Mie spectrometer, so that the fringe was not imaged onto the Mie ACCD and only the broadband Rayleigh signal was detected on the Mie channel. The range-dependent intensity levels per pixel were subsequently subtracted from the measured raw Mie signal. In the near-field range gates, the measured intensity distribution over the pixel array measured by the Mie and Rayleigh ACCDs is substantially impacted by the central obscuration of the telescope pupil by the secondary mirror and its supporting spider. Furthermore, the data obtained from the near-field region is affected by the incomplete overlap of the transmitted laser beam with the telescope FOV as well as by the attenuation of the signals by the EOM (Paffrath et al., 2009). Therefore, the atmospheric range gates in the region within 1.5 km below the aircraft (range gates 5 and 6) were not considered in the wind retrieval.
Figure 6Signal intensities measured for (a) the A2D Rayleigh channel and (b) the A2D Mie channel during the flight on 27 September 2016 between 10:28 and 12:36 UTC. The intensities are range-corrected and scaled to the integration time of the respective range gates. The background and detection chain offset were subtracted. For the Mie channel, the Rayleigh background signal was subtracted as explained in the text. The detection of the Rayleigh background signal was performed between 11:41 and 11:47 UTC, leading to a data gap in this period. (c) Mie SNR calculated according to Eq. (3.29) in Marksteiner (2013). Bins with signal intensities exceeding the maximum of the respective colour scale are printed in dark red.
The Rayleigh as well as the Mie signal intensities after Rayleigh background correction per observation (18 s) are shown in Fig. 6a and b, respectively. The raw signals were first corrected for the DCO and the solar background which are collected in two separate range gates. Moreover, a range correction was applied taking into account that the intensity decreases as the inverse square of the distance between the scatterer and the detector. Finally, the integration times set for each range gate were considered for normalizing the signal intensities per bin. Curve flights during the flight section are manifested in altitude variations of the range gate borders, as a change in the roll angle of the aircraft involved a change in the off-nadir angle of the A2D. While the intensity profiles for the Rayleigh channel essentially follow the vertical distribution of the atmospheric molecular density, the Mie intensity profiles display the vertical distribution of atmospheric cloud and aerosol layers along the flight track. High Rayleigh signal intensities above 3.5 arb. units (dark red bins in Fig. 6a) can be attributed to cloud layers at different altitudes along the flight track which also manifest in increased Mie signal intensities (Fig. 6b).
As a preparatory step of the wind retrieval, several quality control (QC) mechanisms were applied to exclude invalid data. The detection of corrupted measurements within one observation involved the screening for DCO outliers, saturated pixels on the ACCDs as well as for failure of the trigger that initiates the detector electronics. The latter causes an untimely ACCD acquisition, and hence an incorrect allocation of the internal reference and atmospheric return signals to their designated range gates. For the actual wind retrieval, the wind speeds for each atmospheric range gate were determined from the respective frequency differences to the internal reference frequency. The frequencies were calculated from the corresponding Rayleigh and Mie response functions (Eqs. 3a and b) derived during the calibration mode. As a result, separate wind profiles for the Rayleigh and Mie channel were obtained. While the Rayleigh profiles only contain valid wind data in range bins in which purely molecular backscattering occurred, the Mie wind profiles are composed of wind data retrieved from areas with sufficient cloud and aerosol content. However, since the retrieval initially produces wind values for all data bins in both channels, additional measures had to be taken to identify and eliminate invalid wind data. The procedures differ between the Rayleigh and Mie profiles and will be outlined in the following sections.
### 4.1.2 Rayleigh wind profiles
The identification of invalid winds retrieved from the Rayleigh channel was based on the detection of bins which were affected by particulate backscatter from clouds or aerosols, since this Mie contamination introduces systematic errors of the measured Rayleigh response (Dabas et al., 2008). Therefore, as introduced in Marksteiner (2013), bins showing range-corrected and integration time-corrected Rayleigh signal intensities that are unusually high for pure molecular backscatter were excluded from further analysis. An intensity threshold of 0.1 arb. units per measurement was found to be an appropriate value for identifying Mie-contaminated bins in the Rayleigh channel. Under clear conditions Rayleigh signal intensities on observation level (summed over 35 measurements) are well below 3.5 arb. units (see Fig. 6a). Due to the attenuation of the laser beam during propagation through the clouds, the wind information obtained from the range gates below clouds is very likely to be also derogated. Consequently, not only the cloud bins themselves are flagged invalid but also all the bins in the range gates below. Additionally, ground bins that were detected by the scheme described in Sect. 3.2 as well as bins containing valid Mie wind data (see next section) were removed from the Rayleigh wind profiles.
Figure 7LOS wind profiles (positive towards the instrument) measured during the flight on 27 September 2016 between 10:28 and 12:36 UTC using (a) the A2D Rayleigh channel and (b) the A2D Mie channel. The combination of both channels is depicted in panel (c), while panel (d) shows the corresponding wind curtain obtained with the coherent 2 µm reference wind lidar. For better comparison, the 2 µm wind data were adapted to the measurement grid of the A2D. White colour represents missing or invalid data due to low signal, e.g. in case of low aerosol loads or below dense clouds. The data gap between 11:38 and 11:48 UTC is due to an interruption of the wind measurement during a curve flight and a different operation mode of the A2D instrument aiming at the detection of the Rayleigh background signals on the Mie channel.
Figure 7a shows the processed LOS Rayleigh winds plotted versus time and altitude for the period from 10:28 to 12:36 UTC after removal of invalid bins as described above. During the first section of the flight, the horizontal component of the A2D LOS unit vector was nearly parallel to the horizontal wind vector and pointing against the wind, resulting in high positive LOS wind speeds (yellow/orange colours), whereas negative wind speeds of comparable magnitude were measured during the second flight leg when the LOS unit vector was oriented along the direction of the wind, i.e. the wind was pointing away from the instrument (blue colours). The data gap in between is due to the curve flight near the Outer Hebrides as well as the procedure required for Rayleigh background subtraction mentioned above. The figure also illustrates the range-dependent vertical resolution of the instrument. For the presented flight section, the integration time of the ACCD was set to 2.1 µs in the range gates 8 to 14 (9.4 to 7.7 km) and those close to the ground (22, 23); 4.2 µs in the range gates 7, 15 and 16 (6.1 km); and 8.4 µs in all the remaining atmospheric range gates, corresponding to a height resolution of 296, 592 and 1184 m, respectively. This range gate setting was the same for the Rayleigh and Mie channel and chosen in order to resolve the wind structure within the core of the jet stream. In this region, broad coverage of Rayleigh winds was obtained, while mid-level clouds prevented the acquisition of valid Rayleigh wind data on the edges of the jet below their tops between 4 and 7 km height. In addition, high-level clouds at the beginning of the shown flight section limited the extension of the Rayleigh wind profiles to the range from 9 to 10 km.
One characteristic of the Rayleigh channel is the fluctuating wind error from profile to profile, which becomes visible as a vertical texture in the two-dimensional wind curtain. The underlying reason is the high sensitivity of the Rayleigh response to variations in the incidence angle on the FPI. Despite the active transmit–receive co-alignment loop, residual angular variations on the order of a few µrad, which are due to atmospheric turbulence and the effect of strong cloud backscatter onto the co-alignment algorithm, cause fluctuations in the derived wind speeds of several metres per second. The introduced error is thus correlated among the atmospheric range gates, and the mean error varies from observation to observation, resulting in a vertical pattern in the Rayleigh wind curtain. Measures are being examined to reduce this fluctuation by a refined co-alignment feedback loop, for instance, by employing a UV camera with higher resolution in combination with an improved algorithm for determining the centre of gravity of the backscattered laser radiation.
### 4.1.3 Mie wind profiles
The validity of the Mie wind determined for each bin is related to the cloud and aerosol loading in the respective range gate, and thus the signal intensity detected on the Mie ACCD. For the proper identification of bins with sufficient particulate backscatter return signal, the so-called Mie SNR was defined as the quotient between the signal of the pixel with the highest intensity, i.e. the fringe centre, and the mean over the pixels that lie outside the fringe (Marksteiner, 2013). The Mie SNR calculated for the studied measurement scene is depicted in Fig. 6c. Based on the SNR profile, a threshold value was set which allowed sorting out corrupt wind measurement bins. For the analysed wind scene, a Mie SNR threshold of 5.0 was empirically chosen in order to remove those bins where low particle backscatter coefficients prevented the correct determination of the Mie fringe centroid position and thus the acquisition of accurate wind speeds.
Figure 8Flight track of the Falcon 20 aircraft for the research flight on 27 September 2016 together with the overlaid A2D HLOS wind profiles measured between 10:40 and 11:38 UTC (foreground) as well as between 11:48 and 12:12 UTC (background), whilst crossing the North Atlantic jet stream (background image: ©2017 Google). (b) Wind profiles from two selected observations starting at 11:28:21 and 11:54:09 UTC. The black squares indicate the mean bias per range gate based on the comparison with wind data from the 2 µm coherent wind lidar (see text).
The resulting two-dimensional Mie wind curtain is shown in Fig. 7b. As opposed to the Rayleigh channel, the Mie data coverage is rather sparse owing to the little cloud cover and low aerosol load during the flight. Wind data are mainly obtained from the cloudy regions mentioned above, thus complementing the wind information gained with the Rayleigh channel. The combination of the Rayleigh and Mie wind data, displayed in a composite curtain in Fig. 7c, illustrates the complementarity of the two detection channels which enables the acquisition of wind speeds under various atmospheric conditions, hence ensuring broad data coverage for the entire scene. In the case that valid winds are obtained for both channels, the Mie wind is preferred due to the higher accuracy and precision of the Mie channel for the A2D (see next sections). Figure 8a shows the combined Rayleigh and Mie wind curtain along two flight legs in the region of the jet stream. Here, the horizontal LOS (HLOS) wind speed is illustrated, which was calculated from the measured LOS wind speeds and the off-nadir angle of the instrument ( 20) per observation. Strong vertical wind gradients exceeding 10 m s−1 km−1 at about 5 km altitude become apparent in Fig. 8b, which depicts the HLOS wind profiles from two selected observations starting at 11:28:21 and 11:54:09 UTC. The vertical position of the data points corresponds to the altitude at the centre of the respective range bin. HLOS wind speeds above 80 m s−1 were measured in the centre of the sampled jet stream, which is in agreement with the modelled wind field shown in Fig. 5, considering the difference in the angle between the HLOS unit vector of the A2D and the horizontal wind vector.
### 4.1.4 Coherent wind lidar as reference system
Validation of the A2D instrument performance and wind retrieval algorithms was performed by comparing the resulting wind profiles to those obtained with DLR's well-established coherent wind lidar system emitting at 2 µm wavelength and 500 Hz repetition rate, which was operating in parallel on board the Falcon aircraft, providing accuracy of the horizontal wind speed of better than 0.1 m s−1 and precision of better than 1 m s−1 (Weissmann et al., 2005; Chouza et al., 2016b). In contrast to the A2D, the determination of the Doppler shift by the 2 µm lidar system relies on heterodyne detection using the instruments' seed laser as local oscillator (Witschas et al., 2017) and thus does not rely on any calibration procedures. Moreover, the coherent wind lidar incorporates a scanner which allows retrieving the three-dimensional horizontal wind vector from a number of LOS wind measurements with a vertical resolution of 100 m. For this purpose, the instrument performs conical scans at an off-nadir angle of 20, while the information from 21 azimuthal positions is used for the wind vector retrieval. On each azimuthal position the signal from 500 laser pulses (1 s) is averaged to obtain one LOS profile. The time for positioning the laser at its scan starting position is around 21 s, resulting in a total time of 42 s for one observation of the 2 µm wind lidar, whereas one A2D observation takes 18 s as outlined above.
For adequate comparison of the wind profiles measured with the 2 µm and the A2D wind lidar, the three-dimensional wind vectors had to be projected onto the A2D LOS axis. This was carried out for each 2 µm observation by calculating the scalar product of the measured wind vector and the mean A2D LOS unit vector under consideration of the aircraft attitude during the respective observation period. Furthermore, the different spatial and temporal resolutions of the two wind lidar instruments necessitated an adaptation of the 2 µm measurement grid to that of the A2D. This was accomplished by a weighted aerial interpolation algorithm (Marksteiner et al., 2011). Here, one considers the whole two-dimensional A2D wind curtain overlaid by the 2 µm grid. Hence, a single A2D bin can be covered by multiple 2 µm bins both horizontally and vertically. The overlapping regions form a new composite 2 µm bin. The contributions of the single 2 µm winds to the wind value allocated to the composite bin are weighted by the overlap of the respective 2 µm bins with the regarded A2D bin. In this way, the A2D and 2 µm wind profiles can be compared on a bin-to-bin basis.
In order to reduce the risk of large discrepancies between the interpolated 2 µm wind and the compared A2D wind in case of low coverage, a minimum overlap of the compared bins (coverage ratio threshold) has been introduced as a QC parameter. For the considered wind scene, a threshold value of 25 % was found to provide an optimal trade-off between comparability and quantity of the 2 µm bins, thus yielding an acceptable number (nearly 1000) of representative composite 2 µm bins used for comparison. Increasing the coverage ratio threshold, e.g. to 80 %, would have reduced the number of bins to less than 500 without significant change in the parameters resulting from the statistical comparison. Furthermore, proper analysis of the Rayleigh winds with a sufficient number of compared bins (> 300) required a threshold of less than 45 %.
The projected LOS wind curtain obtained from the 2 µm DWL after adaptation to the A2D measurement grid is depicted in Fig. 7d. Since the 2 µm DWL purely relies on particulate backscatter, the data coverage is similar to that of the A2D Mie channel, resulting in a large overlap of the two data types. Consequently, the number of bins available for comparison is greater than for the Rayleigh channel. However, the availability of 2 µm wind data from the upper region of the jet stream between 9 and 10 km altitude allows for the comparison of Rayleigh wind data over a broad range of wind speeds.
Table 4Results of the statistical comparison between the A2D and the 2 µm LOS wind data measured on 27 September 2016. The statistical comparison has been performed for the Rayleigh and Mie wind profiles (see corresponding scatterplots in Fig. 9) as well as for the combined wind curtain as shown in Fig. 7c.
Figure 9(a) A2D LOS wind speed determined with the Rayleigh (dots) and Mie (diamonds) channel versus the 2 µm LOS wind speed for comparison of the wind data measured during the flight on 27 September 2016 between 10:28 and 12:36 UTC (see corresponding curtains in Fig. 7a, b and d). The scatterplot is obtained by adaptation of the different measurement grids of the two systems based on a weighted interpolation algorithm and a subsequent bin-to-bin comparison. The corresponding probability density functions for the wind differences (A2D–2 µm) are shown in panels (b) and (c) for the Rayleigh and Mie channel, respectively. The solid lines represent Gaussian fits with the given centres and ${e}^{-\mathrm{1}/\mathrm{2}}$ widths 2w.
### 4.1.5 Statistical comparison of A2D and 2 µm DWL winds
The statistical comparison of the Rayleigh and Mie winds with the 2 µm DWL data from the discussed flight section is visualized in Fig. 9a. Here, the A2D winds are plotted versus the corresponding interpolated 2 µm winds, resulting in a cloud of data points that ideally lie on the dashed line representing vA2D=v2 µm. The non-weighted linear fit vA2D=$A\cdot {v}_{\mathrm{2}\phantom{\rule{0.125em}{0ex}}\mathrm{µ}\mathrm{m}}+B$ through the real data provides values for the slope A and intercept B that generally deviate from the ideal result A= 1 and B= 0. The statistical values derived from the scatterplot are summarized in Table 4, showing that the fitting parameters for both Rayleigh and Mie channels only slightly deviate from the ideal case (A 1, $|B|$ < 0.5 m s−1). The standard error of the slope given in the table was calculated according to
$\begin{array}{}\text{(5a)}& & {s}_{A}=\sqrt{\frac{\frac{\mathrm{1}}{n-\mathrm{2}}{\sum }_{i=\mathrm{1}}^{n}{\mathit{\epsilon }}_{i}^{\mathrm{2}}}{{\sum }_{i=\mathrm{1}}^{n}{\left({v}_{\mathrm{2}\phantom{\rule{0.125em}{0ex}}\mathrm{µ}\mathrm{m},i}-\stackrel{\mathrm{‾}}{{v}_{\mathrm{2}\phantom{\rule{0.125em}{0ex}}\mathrm{µ}\mathrm{m}}}\right)}^{\mathrm{2}}}},\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\text{with}\text{(5b)}& & {\mathit{\epsilon }}_{i}={v}_{\mathrm{A}\mathrm{2}\mathrm{D},i}-\left(A\cdot {v}_{\mathrm{2}\phantom{\rule{0.125em}{0ex}}\mathrm{µ}\mathrm{m},i}+B\right)\end{array}$
being the residuals of the linear regression. It should be noted that the parameters derived from the statistical comparison are influenced by the systematic and random errors of both the A2D and the 2 µm lidar. However, since the latter provides high accuracy and precision as stated above, the total errors are dominated by the systematic and random error of the A2D.
Aside from the standard deviation, the median absolute deviation (MAD) was determined as an additional parameter for evaluating the random error of the A2D wind speed measurements. It is defined as the median of the absolute variations of the measured wind speeds from the median of the wind speed differences:
$\begin{array}{ll}\mathrm{MAD}& =\mathrm{median}\left[\left|\left({v}_{\mathrm{A}\mathrm{2}\mathrm{D},i}-{v}_{\mathrm{2}\phantom{\rule{0.125em}{0ex}}\mathrm{µ}\mathrm{m},i}\right)\right\right\\ \text{(6)}& & -\mathrm{median}\left({v}_{\mathrm{A}\mathrm{2}\mathrm{D},i}-{v}_{\mathrm{2}\phantom{\rule{0.125em}{0ex}}\mathrm{µ}\mathrm{m},i}\right)|].\end{array}$
The MAD represents a robust measure of the variability of the measured wind speeds and is more immune to outliers compared to the standard deviation σ. If the random wind error is normally distributed, the MAD value is related to the standard deviation as σ 1.4826 MAD. The latter quantity is referred to as scaled MAD.
Six bins with wind speed differences vA2Dv2 µm larger than ±10 m s−1 were identified as gross errors in the Rayleigh data set and thus removed from the sample. Gross errors are assumed to be uniformly distributed over the wind speed measurement range and add to the Gaussian-distributed random errors. As described in the Mission Requirement Document of the satellite mission (ESA, 2016), the error model for Aeolus also separates between these two different errors and defines a requirement on the probability of gross outliers (< 5 %). In order to identify gross errors in the Aeolus wind results, an estimation of the random error is provided for each observation and used as QC parameter. In addition, NWP centres usually apply a QC (or even variational QC) during the assimilation of the wind products by comparing it with best guess values (background) from the model.
The scatterplot illustrates the good agreement of the A2D and 2 µm DWL data over the range of LOS wind speeds from 22 to +26 m s−1. For both detection channels the correlation coefficient is as high as r= 0.97. Aside from the different wind speed span, the Rayleigh and Mie winds primarily differ with respect to the mean bias (vA2Dv2 µm) over all data points representing the accuracy of the instrument. Here, the Mie wind bias almost vanishes (0.03 m s−1), which is due to the fact that the A2D winds are nearly symmetrically distributed about the reference 2 µm winds, leading to positive and negative deviations of similar magnitude which compensate for each other.
For the Rayleigh winds, a negative bias of 0.49 m s−1 is obtained, resulting in a mean bias of the combined Rayleigh and Mie data of about 0.21 m s−1. The corresponding HLOS wind speed bias of 0.61 m s−1 ($=-$0.21/sin(20)) is considered to be adequate with regards to the Aeolus mission where absolute HLOS mean bias values better than 0.7 m s−1 are required. However, it should be noted that the mean bias shows larger values when considered per range gate, as depicted in Fig. 8b. The extreme bias values > 3 m s−1 in range gates 8 to 10 lack statistical significance, as they result from a very small number of compared bins due to the scarce data coverage of the 2 µm DWL at altitudes between 8.5 and 9.5 km. For the other range gates, the mean bias varies between 0.7 and 0.3 m s−1.
Another important statistical parameter for the evaluation of the instrument performance is the standard deviation, which represents the random error and hence the precision of the A2D. Here, the Mie winds show a value of 1.5 m s−1 (HLOS: 4.3 m s−1) which is beyond the requirements of Aeolus. In order to meet the mission goals, the satellite instrument should provide a precision of 1 m s−1 in the planetary boundary layer, 2.5 m s−1 in the troposphere and 3 to 5 m s−1 in the stratosphere (ESA, 2016). The random error can also be approximated from probability density functions (PDFs) illustrating the frequency distribution of the wind speed differences vA2Dv2 µm, i.e. the wind error, for the Rayleigh and Mie channel (see Fig. 9b and c). For the Mie channel, the wind random error is nearly Gaussian-distributed, while a number of outliers with vA2Dv2 µm 6 m s−1 leads to a discrepancy between the mean bias (0.03 m s−1) and the centre of the Gaussian fit (0.08 m s−1). For the same reason, the ${e}^{-\mathrm{1}/\mathrm{2}}$ width of the fit (2w= 2.7 m s−1) is narrower than twice the standard deviation (2σ= 3.0 m s−1), which also considers the outliers. Finally, due to the deviation from a Gaussian distribution, the scaled MAD of 1.3 m s−1 is slightly smaller than σ.
The random error of the Rayleigh channel is even larger (σ= 2.7 m s−1). Like for the Mie channel, the PDF for the Rayleigh wind random error exhibits slight deviations from a Gaussian distribution. Consequently, the scaled MAD of 2.6 m s−1 marginally differs from the standard deviation σ= 2.7 m s−1.
### 4.1.6 Discussion of Rayleigh and Mie wind errors
Speckle noise was identified as one of the major causes for the increased random error of the A2D Rayleigh and Mie channel. The noise is introduced by the use of a fibre to transmit the internal reference signal from the laser to the front optics where it is injected into the receiver reception path and co-aligned with the atmospheric signal, as shown in Fig. 1. This is different compared to the free optical path set-up in the transceiver of the satellite instrument which does not suffer this difficulty. The speckle pattern which was estimated to consist of about only 2000 speckles is the input for the Fizeau spectrometer and, after modification by reflection, also for the Fabry–Pérot spectrometers (DLR, 2016). Although the speckle pattern is static over short timescales of a few seconds to minutes, slow changes in the intensity distribution of the internal reference signal are introduced by variations in laser frequency, polarization or (ambient) fibre temperature, which in turn modify the response of the Mie and Rayleigh spectrometers. Since the response measured for the internal reference forms the basis for the determination of the Doppler frequency shift, and thus the wind speed in each atmospheric range gate, the speckle-induced fluctuations increase the random error over the entire wind profile. Comparisons of the internal reference frequencies derived from the Rayleigh and Mie responses against the frequencies measured using the wavemeter showed random variations (2σ) on the order of 8 (Mie) and 11 MHz (Rayleigh), corresponding to LOS wind errors of 1.4 and 2.0 m s−1, respectively. Effective speckle reduction is envisaged, for example, by incorporating a moving diffuser into the beam path of the internal reference signal in order to rapidly change the speckle pattern within one observation, thus averaging out the variations.
Another contribution to the random error in the A2D Mie channel results from the combination of a heterogeneous cloud structure and strong wind shear, which is not resolved due to the coarse vertical resolution. In particular, the position of the top edges of optically thick clouds within one range gate has a significant influence on the wind data. According to Sun et al. (2014), who investigated the performance of Aeolus in heterogeneous atmospheric conditions using high-resolution radiosonde data, a non-uniform distribution of clouds and/or aerosols within a range bin introduces random errors in the Mie HLOS winds of several metres per second, depending on the bin size and altitude. This so-called height assignment error is especially large in the presence of strong wind shear in the sampling volume. Assuming a constant shear with typical amplitude of 0.01 s−1 over the bin, the Mie wind random error scales inversely proportional with the thickness of a particle layer randomly positioned inside the bin, reaching 2 m s−1 for a bin size of 1000 m and a layer thickness of 300 m (Sun et al., 2014).
Besides the speckle noise and the impact of the atmosphere, a further contribution to the random error of the Mie winds is caused by an imperfect response calibration procedure using a linear fitting function to describe the relationship between the Doppler frequency shift and the position of the fringe produced by the Fizeau interferometer. Hence, a more adequate fitting function will be applied in the future in order to take into account the Mie response nonlinearities and to improve the precision of the Mie channel.
Regarding the Rayleigh channel, the assessment of the precision and accuracy is complicated by the fact that the reference lidar relies on the presence of particles so that the statistical comparison of A2D Rayleigh winds with the 2 µm DWL is limited to atmospheric regions, where cloud and aerosol backscattering occurs. Particulate backscattering leads to systematic errors of the Rayleigh winds since the convolution of the broadband Rayleigh return signal with the narrowband Mie return signal (Fig. 2a) influences the Rayleigh response according to Eq. (1) (Dabas et al., 2008). However, it should be noted that the 2 µm DWL is very sensitive even to weak particulate backscatter return due to its coherent detection principle with small bandwidth. In addition, since the coherent DWL is deployed on the aircraft, the atmospheric altitudes with low aerosol backscatter are located in near range gates, which do not suffer remarkably from the R2 dependency of the signal and strong aerosol extinction (as it would be the case for ground-based coherent DWL). Hence, 2 µm DWL winds are even available for low scattering ratios (< 1.1), where a very small amount of aerosol contamination of the A2D Rayleigh winds can be expected. Moreover, Mie-contaminated bins in the Rayleigh data are identified by a signal threshold approach and excluded from the Rayleigh wind curtain, as explained in Sect. 4.1.2. Such range bins thus do not enter the statistical comparison with the 2 µm DWL winds. Additionally, Rayleigh winds are disregarded in the case that valid winds are detected from the A2D Mie channel, i.e. if the Mie SNR threshold is exceeded (Sect. 4.1.3).
With a view to the Aeolus mission, it is also important to note that the strategy for vertical sampling differ between the A2D and the satellite instrument ALADIN. The latter will measure wind profiles from ground up to about 25 km altitude, so that the range gates covering the troposphere will generally be fewer and larger compared to the A2D where all the atmospheric range gates are available to sample the altitude range from ground up to about 9 km. For the flights discussed in this work, the vertical sampling grid was chosen such that the wind shear in the jet stream region could be determined with the highest possible resolution. Hence, the A2D vertical sampling was adapted to the expected wind variability (from short-range NWP forecasts) and science objectives of the flights, which will not be possible for Aeolus where only a climatology-based approach for different vertical sampling schemes can be applied.
Apart from the speckle noise in the internal reference signal, the error contributions are different than for the Mie channel. The Rayleigh response calibration considers nonlinearities by using a fifth-order polynomial function for fitting the response curve. However, the measurement principle based on the double-edge technique using a sequential FPI is much more sensitive to angular variations of the backscattered light compared to the fringe-imaging technique employed in the Mie channel. As explained above, small angular fluctuations of 1 µrad with respect to the 200 mm diameter telescope with a FOV of 100 µrad introduce variations in the measured LOS wind speeds of about 0.4 m s−1 (DLR, 2016). Furthermore, the availability of 2 µm wind data in those range bins that were used for the evaluation of the Rayleigh winds suggests at least a small contamination of the Rayleigh signal by particulate backscatter, thus introducing an increased random error (Dabas et al., 2008).
In general, concerning systematic wind errors, a distinction has to be made between range-independent and range-dependent error sources. First, systematic errors are caused by inaccuracies in the aircraft attitude angles, e.g. by improper knowledge of the laser pointing, or by constant errors in the wind retrieval, e.g. introduced by uncertainties in the calibration parameters. The resulting wind bias is hence constant along the wind profile and can be reduced by applying ZWC, provided that sufficient ground return signals are available and that the atmospheric contamination of the ground return signals is low. If the latter conditions are not fulfilled, producing a wind-shear profile at the expense of one range bin is an option for eliminating this systematic error source in the analysis of the airborne observations. Similar systematic error sources, e.g. improper knowledge of pointing direction or satellite-induced LOS speed, exist for the satellite instrument producing a slowly varying bias along the orbit which will be not present in wind-shear profiles. Such errors can be compensated by means of ZWC.
The second class of systematic wind errors are range-dependent errors. One example which is specific to the A2D is the imperfect transmit–receive co-alignment, as discussed in Sect. 4.1.2. The error is largest in the near-field and decreases with increasing distance from the instrument, i.e. towards the ground. For the satellite instrument, the situation is more complicated due to the much higher ground track velocity of about 7.2 km s−1. The different travel times of laser pulses backscattered from different altitudes in combination with the angular movement of the satellite during the propagation period of the pulses leads to range-dependent incidence angles of the backscattered light on the Rayleigh and Mie spectrometers and hence to a range-dependent bias in the wind speeds. This effect will be characterized at the beginning of the Aeolus mission and can be subsequently corrected.
Figure 10(a) Flight track of the Falcon aircraft (black line) during the research flight conducted on 4 October 2016. The wind scenes performed from 09:00 to 09:44 and from 09:54 to 10:30 UTC are indicated in orange and blue. High ground visibility was obtained over the northeast of Iceland at the beginning and the end of the scenes, respectively. The background picture is composed of a map provided by Google Earth and satellite images from Aqua MODIS (VIS channel) taken at 12:15 (right part) and 13:50 UTC (left part) (MODIS, 2017b). (b) Geopotential height (black isolines, in dekametres) and horizontal wind speed (colour shading) at 300 hPa over the North Atlantic on 4 October 2016, 12:00 UTC, from ECMWF model analysis together with the flight track of the Falcon 20 aircraft.
## 4.2 Zero wind correction for the flight on 4 October 2016
The wind scene on 27 September 2016 presented in the previous sections was characterized by optically dense clouds at different altitudes. As a consequence, the ground return signals detected during the scene were too weak for reliable determination of the ground speed which could be used for ZWC. Consequently, for this particular research flight, the refined ground detection scheme could not be exploited for reducing the systematic error of the Mie and Rayleigh wind speeds. Unfortunately, this circumstance holds true for most of the flights conducted in the context of NAWDEX, since the flight planning was primarily driven by the atmospheric science objectives of the campaign, resulting in complex atmospheric conditions with rather dense cloud coverage. One exception is the flight performed on 4 October 2016, which was dedicated to the investigation of the jet stream east of Iceland. For this purpose, the Falcon aircraft crossed the jet stream with increased wind speeds twice, as it flew two legs back and forth between the way points located at 66.0 N, 17.5 W and 64.0 N, 7.0 W (see Fig. 10). To the west of the jet axis cloud-free conditions prevailed over the northeast of Iceland. Hence, high ground visibility was obtained at the beginning of the first leg and at the end of the second leg, as can be seen by the visible satellite image (MODIS, 2017b) a few hours after the flight depicted in Fig. 10a together with the flight track of the Falcon. The A2D measured wind profiles during the periods from 09:00 to 09:44 and from 09:54 to 10:30 UTC (see also Table 1). The figure reveals the contrasting atmospheric circumstances experienced during the flight which were characterized by highly variable cloud cover along the flight path.
Using the same Rayleigh and Mie response calibrations as for the flight on 27 September 2016, the results of the wind retrieval are displayed in Fig. 11. While the Rayleigh wind curtain shows good coverage at the beginning and the end of the period (Fig. 11a), valid Mie winds were primarily obtained in the vicinity of the jet stream centre, which was sampled in the middle of the flight (Fig. 11b). This again underlines the complementarity of the two channels which allows for excellent data coverage despite strongly diverse atmospheric conditions. Since the direction of the wind was towards the A2D LOS on the first leg, positive LOS wind speeds of up to 25 m s−1 (HLOS: 73 m s−1) were measured, whereas negative winds of the same magnitude were detected on the flight leg back to Iceland.
Figure 11LOS wind profiles (positive towards the instrument) measured during the flight on 4 October 2016 between 09:00 and 10:30 UTC using (a) the A2D Rayleigh channel and (b) the A2D Mie channel. The grey boxes indicate periods during which the ground visibility was sufficient for obtaining ZWC data. The corresponding ZWC values are plotted in panel (c) together with the ground speed variations introduced by the Mie response fluctuations in the internal reference signals (see text). (d) Wind curtain measured with the coherent 2 µm reference wind lidar. For better comparison, the 2 µm wind data were adapted to the measurement grid of the A2D. The data gap between 09:44 and 09:54 UTC is due to an interruption of the wind measurement during a curve flight.
Table 5Results of the statistical comparison between the A2D and the 2 µm LOS wind data measured on 4 October 2016. The statistical comparison for the Mie wind profiles was performed without and with ZWC.
The systematic and random errors for the Rayleigh and Mie winds were determined from a statistical comparison with the 2 µm reference wind lidar data. The resulting scatterplots and PDFs are shown in Fig. 12, while the statistical parameters are given in Table 5. Due to the poor overlap of the A2D Rayleigh wind data with the 2 µm wind curtain (see Fig. 11d), only a small number of data points (168) entered the comparison despite a low coverage ratio threshold of 25 %. Consequently, the calculated mean bias (1.54 m s−1) and scaled MAD (2.7 m s−1) lack statistical significance. This also becomes obvious from the shape of the histogram illustrating the distribution of the Rayleigh wind errors (Fig. 12b), which strongly deviates from a Gaussian distribution. For this reason, the following discussion concentrates on the Mie channel. Here, a scaled MAD of 2.0 m s−1 was derived from the comparison with the reference lidar, which showed large data overlap with the Mie channel, resulting in 1246 compared bins. The mean bias of 0.57 m s−1 is considerably larger than the value obtained for the flight on 27 September 2016. The increase in systematic error might result from changes in the alignment of the transmit–receive path, which can slightly vary from flight to flight. In combination with potential inaccuracies in the aircraft attitude data, this leads to unknown contributions to the retrieved LOS wind speed which are not considered in the retrieval algorithm.
Figure 12(a) A2D LOS wind speed determined with the Rayleigh (dots) and Mie (diamonds) channel versus the 2 µm LOS wind speed for comparison of the wind data measured during the flight on 4 October 2016 between 09:00 and 10:30 UTC (see corresponding curtains in Fig. 11a, b and d). The scatterplot for the Mie channel was obtained after zero wind correction was applied to the measured wind speeds. The corresponding probability density functions for the wind differences (A2D–2 µm) are shown in panels (b) and (c) for the Rayleigh and Mie channel, respectively. The solid line represents a Gaussian fit with the given centre and ${e}^{-\mathrm{1}/\mathrm{2}}$ width 2w.
The wind speed offset can, however, be reduced by ZWC based on the developed ground detection scheme. Any deviation from zero is interpreted as systematic error in the wind speed retrieval and hence subtracted from the measured wind speed. The ground speed (or ZWC) values obtained for the Mie channel during the two wind scenes on 4 October 2016 are plotted in Fig. 11c. From a total number of 268 observations, 59 observations included valid ZWC values in the ground range gates which were identified by the algorithm explained in Sect. 3.2. The respective observations are indicated as grey boxes in the Mie wind curtain. Thanks to the refined ground detection on measurement level, atmospheric contamination of the ground signals was minimized, thus ensuring that the detrimental influence of near-surface winds on the ZWC values was diminished. The mean of the ZWC values was determined to be 0.53 m s−1 with a standard deviation of 1.2 m s−1. The variation around the mean, which is also observed as random error in the atmospheric Mie wind speeds, can again be traced back to fluctuations in the Mie response measured for the internal reference. In order to confirm the correlation between the variability of the ZWC values and the internal reference variations, the Mie responses of the internal reference were converted to relative (laser) frequencies using the Mie response calibration. The obtained frequencies were compared to the frequencies measured with the high-precision wavemeter which tracked the absolute wavelength of the laser pulses emitted during the flight. The frequency difference (Mie response minus wavemeter) was finally translated into wind speed differences (1 m s${}^{-\mathrm{1}}\stackrel{\mathrm{^}}{=}$ 5.63 MHz), resulting in the dashed line plotted in Fig. 11c. The course of the curve is obviously correlated to the progression of the ZWC values, thus verifying that the noise in the internal reference considerably affects the measured ground speeds. As mentioned in the previous section, speckle noise is responsible for Mie response variations on the order of σ= 0.7 m s−1. Nevertheless, the mean value was used for correcting the Mie wind speeds, leading to the scatterplot depicted in Fig. 12a. The statistical parameters after ZWC are given in the right column Table 5. Subtraction of the mean ZWC value reduces the mean bias to 0.04 m s−1, which is comparable to the result obtained for the flight on 27 September 2016. Hence, ZWC in combination with the refined ground detection scheme improves the accuracy of the A2D remarkably for the discussed flight.
5 Summary and conclusion
The ALADIN Airborne Demonstrator (A2D) represents an essential test bed for the validation of the upcoming Aeolus mission. Due to its similar and representative design and operation principle, the A2D provides valuable information on the wind measurement strategies of the satellite instrument as well as on the optimization of the wind retrieval and related quality control algorithms. For this purpose, the A2D was successfully deployed for wind observations in the international airborne field campaign NAWDEX conducted in Iceland in autumn 2016. Within the scope of the campaign, 14 research flights were performed extending the wind and calibration dataset of the A2D for validating the retrieval algorithms and operation procedures. In particular, the recording of very high HLOS wind speeds above 80 m s−1 was obtained by sampling the North Atlantic jet stream, while the complementarity of the Rayleigh and Mie channel allowed for broad vertical and horizontal coverage across the troposphere.
Comparison of the A2D wind data with a high-resolution coherent Doppler wind lidar emitting at 2 µm wavelength enabled the evaluation of the performance of the A2D in terms of accuracy and precision. For the flight on 27 September 2016, the mean bias was found to be 0.49 m s−1 for the Rayleigh channel and 0.03 m s−1 for the Mie channel. A larger Mie wind speed bias of 0.57 m s−1 was determined for the flight on 4 October 2016, but could be reduced to 0.04 m s−1 by means of ZWC. The latter was supported by accurate ground detection using a scheme that minimizes the contribution of atmospheric return signals in the identified ground range gates. This method was also implemented in the analysis of the Rayleigh and Mie response calibrations where it is particularly effective in case of low-albedo surfaces in the UV (e.g. land) or areas with strongly varying ground elevations. The ground detection scheme is envisaged to be fully exploited in upcoming airborne campaigns to provide accurate ZWC for flights with sufficient ground visibility. In order to reduce the random error both in the detected ground speeds and in the atmospheric wind speeds, the response fluctuations in the internal reference signals need to be diminished. This problem, which is absent in the satellite instrument, is proposed to be solved by avoiding slow variations in the speckle pattern incident on the Mie and Rayleigh spectrometers, e.g. by implementing a fast diffuser.
In addition to the internal reference fluctuations, the large random errors of about 2.7 m s−1 in the Rayleigh channel can be traced back to the transmit–receive path co-alignment in combination with the high incidence angle sensitivity of the Rayleigh spectrometer, while the heterogeneity of the atmosphere and the nonlinearity of the Mie response function are considered to be additional factors contributing to the random error (1.5 m s−1) observed for the Mie winds. Hence, apart from the technical development of the A2D regarding speckle reduction and improved co-alignment, the main focus of the current research is on the improvement of the system accuracy and precision by implementing a novel Mie response calibration procedure considering nonlinearities. The modifications of the A2D are intended to be tested in the frame of forthcoming airborne campaigns which will also aim to conduct flights in coordination with the Aeolus satellite after its launch in 2018.
Data availability
Data availability.
Data used in this paper can be provided upon request by email to Oliver Reitebuch ([email protected]).
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
The development of the ALADIN Airborne Demonstrator and the work carried out during the NAWDEX campaign were supported by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt e.V., DLR) and the European Space Agency (ESA), providing funds related to the preparation of Aeolus (WindVal II, contract no. 4000114053/15/NL/FF/gp), as well as NRL Monterrey and the EUropean Facility for Airborne Research (EUFAR, project NAWDEX Influence). The first author was partly funded by a young scientist grant by ESA within the DRAGON 4 program (contract no. 4000121191/17/I-NB). The authors are especially grateful to Engelbert Nagel for his constant support throughout the campaign.
The article processing charges for this open-access
publication were covered by a Research
Centre of the Helmholtz Association.
Reviewed by: Gert-Jan Marseille and Mike Hardesty
References
Amediek, A. and Wirth, M.: Pointing Verification Method for Spaceborne Lidars, Remote Sens., 9, 56, https://doi.org/10.3390/rs9010056, 2017.
Andersson, E.: Statement of Guidance for Global Numerical Weather Prediction (NWP), World Meteorological Organisation, available at: https://www.wmo.int/pages/prog/www/OSY/SOG/SoG-Global-NWP.pdf (last access: 25 October 2017), 2016.
Ansmann, A., Wandinger, U., Le Rille, O., Lajas, D., and Straume, A. G.: Particle backscatter and extinction profiling with the spaceborne high-spectral-resolution Doppler lidar ALADIN: Methodology and simulations, Appl. Opt., 46, 6606–6622, https://doi.org/10.1364/AO.46.006606, 2007.
Baker, W. E., Atlas, R., Cardinali, C., Clement, A., Emmitt, G. D., Gentry, B. M., Hardesty, R. M., Källén, E., Kavaya, M. J., Langland, R., Ma, Z., Masutani, M., McCarty, W., Pierce, R. B., Pu, Z., Riishojgaard, L. P., Ryan, J., Tucker, S., Weissmann, M., and Yoe, J. G.: Lidar-Measured Wind Profiles: The Missing Link in the Global Observing System, B. Am. Meteorol. Soc., 95, 543–564, https://doi.org/10.1175/BAMS-D-12-00164.1, 2014.
Banakh, V. A., Smalikho, I. N., and Rahm, S.: Estimation of the refractive index structure characteristic of air from coherent Doppler wind lidar data, Opt. Lett., 39, 4321–4324, https://doi.org/10.1364/OL.39.004321, 2014.
Baumgarten, G.: Doppler Rayleigh/Mie/Raman lidar for wind and temperature measurements in the middle atmosphere up to 80 km, Atmos. Meas. Tech., 3, 1509–1518, https://doi.org/10.5194/amt-3-1509-2010, 2010.
Berry, P. A. M., Smith, R. G., and Benveniste, J.: ACE2: The New Global Digital Elevation Model, in: Gravity, Geoid and Earth Observation, edited by: Mertikas, S. P., International Association of Geodesy Symposia, Springer Berlin, Heidelberg, 231–237, 2010.
Bosart, B. L., Lee, W.-C., and Wakimoto, R. M.: Procedures to Improve the Accuracy of Airborne Doppler Radar Data, J. Atmos. Ocean. Tech., 19, 322–339, https://doi.org/10.1175/1520-0426-19.3.322, 2002.
Chanin, M. L., Garnier, A., Hauchecorne, A., and Porteneuve, J.: A Doppler lidar for measuring winds in the middle atmosphere, Geophys. Res. Lett., 16, 1273–1276, https://doi.org/10.1029/GL016i011p01273, 1989.
Chouza, F., Reitebuch, O., Jähn, M., Rahm, S., and Weinzierl, B.: Vertical wind retrieved by airborne lidar and analysis of island induced gravity waves in combination with numerical models and in situ particle measurements, Atmos. Chem. Phys., 16, 4675–4692, https://doi.org/10.5194/acp-16-4675-2016, 2016a.
Chouza, F., Reitebuch, O., Benedetti, A., and Weinzierl, B.: Saharan dust long-range transport across the Atlantic studied by an airborne Doppler wind lidar and the MACC model, Atmos. Chem. Phys., 16, 11581–11600, https://doi.org/10.5194/acp-16-11581-2016, 2016b.
Dabas, A., Denneulin, M. L., Flamant, P., Loth, C., Garnier, A., and Dolfi-Bouteyre, A.: Correcting winds measured with a Rayleigh Doppler lidar from pressure and temperature effects, Tellus A, 60, 206–215, https://doi.org/10.1111/j.1600-0870.2007.00284.x, 2008.
Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR): Final Report: Analysis of enhanced noise in A2D observations, AE.FR.DLR.A2D.CN11.110716, V 2.0, 110 pp., 2016.
Dolfi-Bouteyre, A., Canat, G., Valla, M., Augere, B., Besson, C., Goular, D., Lombard, L., Cariou, J.-P., Durecu, A., Fleury, D., Bricteux, L., Brousmiche, S., Lugan, S., and Macq, B.: Pulsed 1.5 µm LIDAR for Axial Aircraft Wake Vortex Detection Based on High-Brightness Large-Core Fiber Amplifier, IEEE J. Sel. Top. Quant., 15, 441–450, https://doi.org/10.1109/JSTQE.2008.2010463, 2009.
Dou, X., Han, Y., Sun, D., Xia, H., Shu, Z., Zhao, R., Shangguan, M., and Guo, J.: Mobile Rayleigh Doppler lidar for wind and temperature measurements in the stratosphere and lower mesosphere, Opt. Express, 22, A1203–A1221, https://doi.org/10.1364/OE.22.0A1203, 2014.
European Space Agency (ESA): ADM-Aeolus Science Report, ESA SP-1311, 121 pp., available at: http://esamultimedia.esa.int/docs/ SP-1311_ADM-Aeolus_FINAL_low-res.pdf (last access: 21 August 2017), 2008.
European Space Agency (ESA): ADM-Aeolus Mission Requirements Document, ESA EOP-SM/2047, 57 pp., available at: http://esamultimedia.esa.int/docs/EarthObservation/ADM-Aeolus_MRD.pdf (last access: 4 March 2018), 2016.
ISO 11146-1:2005(E): Lasers and laser-related equipment – Test methods for laser beam widths, divergence angles and beam propagation ratios, 2005.
Flamant, P., Cuesta, J., Denneulin, M. L., Dabas, A., and Huber, D.: ADM-Aeolus retrieval algorithms for aerosol and cloud products, Tellus A, 60, 273–288, https://doi.org/10.1111/j.1600-0870.2007.00287.x, 2008.
Flesia, C. and Korb, C. L.: Theory of the double-edge molecular technique for Doppler lidar wind measurement, Appl. Opt., 38, 432–440, https://doi.org/10.1364/AO.38.000432, 1999.
Garnier, A. and Chanin, M. L.: Description of a Doppler Rayleigh LIDAR for measuring winds in the middle atmosphere, Appl. Phys. B, 55, 35–40, https://doi.org/10.1007/BF00348610, 1992.
Gentry, B. M., Chen, H., and Li, S. X.: Wind measurements with 355-nm molecular Doppler lidar, Opt. Lett., 25, 1231–1233, https://doi.org/10.1364/OL.25.001231, 2000.
Hildebrand, J., Baumgarten, G., Fiedler, J., Hoppe, U.-P., Kaifler, B., Lübken, F.-J., and Williams, B. P.: Combined wind measurements by two different lidar instruments in the Arctic middle atmosphere, Atmos. Meas. Tech., 5, 2433–2445, https://doi.org/10.5194/amt-5-2433-2012, 2012.
Kavaya, M. J., Beyon, J. Y., Koch, G. J., Petros, M., Petzar, P. J., Singh, U. N., Trieu, B. C., and Yu, J.: The Doppler Aerosol Wind (DAWN) Airborne, Wind-Profiling Coherent-Detection Lidar System: Overview and Preliminary Flight Results, J. Atmos. Ocean. Tech., 31, 826–842, 2014.
Köpp, F., Rahm, S., and Smalikho, I.: Characterization of Aircraft Wake Vortices by 2 µm Pulsed Doppler Lidar, J. Atmos. Ocean. Tech., 21, 194–206, https://doi.org/10.1175/1520-0426(2004)021<0194:COAWVB>2.0.CO;2, 2004.
Lemmerz, C., Lux, O., Reitebuch, O., Witschas, B., and Wührer, C.: Frequency and timing stability of an airborne injection-seeded Nd: YAG laser system for direct-detection wind lidar, Appl. Opt., 56, 9057–9068, https://doi.org/10.1364/ao.56.009057, 2017.
Li, Z., Lemmerz, C., Paffrath, U., Reitebuch, O., and Witschas, B.: Airborne Doppler Lidar Investigation of Sea Surface Reflectance at a 355-nm Ultraviolet Wavelength, J. Atmos. Ocean. Tech., 27, 693–704, https://doi.org/10.1175/2009JTECHA1302.1, 2010.
Marksteiner, U., Reitebuch, O., Rahm, S., Nikolaus, I., Lemmerz, C., and Witschas, B.: Airborne direct-detection and coherent wind lidar measurements along the east coast of Greenland in 2009 supporting ESA's Aeolus mission, Proc. SPIE, 8182, 81820J, https://doi.org/10.1117/12.897528, 2011.
Marksteiner, U.: Airborne Wind Lidar Observations for the Validation of the ADM-Aeolus Instrument, PhD thesis, Technische Universität München, 180 pp., available at: http://mediatum.ub.tum.de?id=1136781 (last access: 28 August 2017), 2013.
Marksteiner, U., Reitebuch, O., Lemmerz, C., Lux, O., Rahm, S., Witschas, B., Schäfler, A., Emmitt, D., Greco, S., Kavaya, M. J., Gentry, B., Neely III, B. R., Kendall, E., and Schüttemeyer, D.: Airborne Direct-Detection and Coherent Wind Lidar Measurements over the North Atlantic in 2015 Supporting ESA's Aeolus Mission, Proc. 28th International Laser-Radar Conference, Bucharest, Romania, 25–30 June, 2017.
McKay, J. A.: Assessment of a multibeam Fizeau wedge interferometer for Doppler wind lidar, Appl. Opt., 41, 1760–1767, https://doi.org/10.1364/AO.41.001760, 2002.
MODIS: Iceland 2 Subset – Terra 1 km True Color 2016/271 (09/27), available at: https://lance-modis.eosdis.nasa.gov/imagery/subsets/?subset=Iceland2.2016271.terra.1km, last access: 8 December 2017a.
MODIS: Iceland 2 Subset – Aqua 1 km True Color 2016/278 (10/04), available at: https://lance-modis.eosdis.nasa.gov/imagery/subsets/?subset=Iceland2.2016278.aqua.1km, last access: 4 December 2017b.
Nicklaus, K., Morasch, V., Hoefer, M., Luttmann, J., Vierkötter, M., Ostermeyer, M., Höffner, J., Lemmerz, C., Hoffmann, D., Hoffman, H. J., Shori, R., and Hodgson, N.: Frequency stabilization of Q-switched Nd:YAG oscillators for airborne and spaceborne lidar systems, Proc. SPIE, 6451, 64511L, https://doi.org/10.1117/12.701187, 2007.
Paffrath, U.: Performance assessment of the Aeolus Doppler wind lidar prototype, PhD thesis, Technische Universität München, 144 pp., available at: https://mediatum.ub.tum.de/?id=601980 (last access: 23 June 2017), 2006.
Paffrath, U., Lemmerz, C., Reitebuch, O., Witschas, B., Nikolaus, I., and Freudenthaler, V.: The Airborne Demonstrator for the Direct-Detection Doppler Wind Lidar ALADIN on ADM-Aeolus. Part II: Simulations and Rayleigh Receiver Radiometric Performance, J. Atmos. Ocean. Tech., 26, 2516–2530, https://doi.org/10.1175/2009JTECHA1314.1, 2009.
Reitebuch, O.: Wind Lidar for Atmospheric Research, in: Atmospheric physics: Background, methods, trends, edited by: Schumann, U., Research Topics in Aerospace, Springer, Berlin, London, 487–507, 2012a.
Reitebuch, O.: The Spaceborne Wind Lidar Mission ADM-Aeolus, in: Atmospheric physics: Background, methods, trends, edited by: Schumann, U., Research Topics in Aerospace, Springer, Berlin, London, 815–827, 2012b.
Reitebuch, O., Lemmerz, C., Nagel, E., Paffrath, U., Durand, Y., Endemann, M., Fabre, F., and Chaloupy, M.: The Airborne Demonstrator for the Direct-Detection Doppler Wind Lidar ALADIN on ADM-Aeolus. Part I: Instrument Design and Comparison to Satellite Instrument, J. Atmos. Ocean. Tech., 26, 2501–2515, https://doi.org/10.1175/2009JTECHA1309.1, 2009.
Reitebuch, O., Marksteiner, U., Rompel, M., Meringer, M., Schmidt, K., Huber, D., Nikolaus, I., Dabas, A., Marshall, J., de Bruin, F., Kanitz, T., and Straume, A.-G.: Aeolus End-to-End Simulator and Wind Retrieval Algorithms up to Level 1B, Proc. 28th International Laser-Radar Conference (ILRC), Bucharest, Romania, 25–30 June, 2017.
Reitebuch, O., Huber, D., and Nikolaus, I.: ADM-Aeolus Algorithm Theoretical Basis Document (ATBD) Level1B Products, AE-RP-DLR-L1B-001, 4.4, 117 pp., 2018.
Schäfler, A., Craig, G., Wernli, H., Arbogast, P., Doyle, J. D., McTaggart-Cowan, R., Methven, J., Rivière, G., Ament, F., Boettcher, M., Bramberger, M., Cazenave, Q., Cotton, R., Crewell S., Delanoë, J., Dörnbrack, A., Ehrlich, A., Ewald, F., Fix, A., Grams, C. M., Gray, S. L., Grob, H., Groß, S., Hagen, M., Harvey, B., Hirsch, L., Jacob, M., Kölling, T., Konow, H., Lemmerz, C., Lux, O., Magnusson, L., Mayer, B., Mech, M., Moore, R., Pelon, J., Quinting, J., Rahm, S., Rapp, M., Rautenhaus, M., Reitebuch, O., Reynolds, C. A., Sodemann, H., Spengler, T., Vaughan, G., Wendisch, M., Wirth, M., Witschas, B., Wolf, K., and Zinner, T.: The North Atlantic Waveguide and Downstream Impact Experiment, B. Am. Meteorol. Soc., in press, https://doi.org/10.1175/BAMS-D-17-0003.1, 2018.
Schröder, T., Lemmerz, C., Reitebuch, O., Wirth, M., Wührer, C., and Treichel, R.: Frequency jitter and spectral width of an injection-seeded Q-switched Nd:YAG laser for a Doppler wind lidar, Appl. Phys. B, 87, 437–444, https://doi.org/10.1007/s00340-007-2627-5, 2007.
Shangguan, M., Xia, H., Wang, C., Qiu, J., Lin, S., Dou, X., Zhang, Q., and Pan, J.-W.: Dual-frequency Doppler lidar for wind detection with a superconducting nanowire single-photon detector, Opt. Lett., 42, 3541–3544, https://doi.org/10.1364/OL.42.003541, 2017.
Stoffelen, A., Pailleux, J., Källen, E., Vaughan, M., Isaksen, L., Flamant, P., Wergen, W., Andersson, E., Schyberg, H., Culoma, A., Meynart, R., Endemann, M., and Ingmann, P.: The Atmospheric Dynamics Mission for Global Wind Field Measurement, B. Am. Meteorol. Soc. 86, 73–87, https://doi.org/10.1175/BAMS-86-1-73, 2005.
Sun, X. J., Zhang, R. W., Marseille, G. J., Stoffelen, A., Donovan, D., Liu, L., and Zhao, J.: The performance of Aeolus in heterogeneous atmospheric conditions using high-resolution radiosonde data, Atmos. Meas. Tech., 7, 2695–2717, https://doi.org/10.5194/amt-7-2695-2014, 2014.
Tan, D. G. H. and Andersson, E.: Simulation of the yield and accuracy of wind profile measurements from the Atmospheric Dynamics Mission (ADM-Aeolus), Q. J. Roy. Meteor. Soc., 131, 1737–1757, https://doi.org/10.1256/qj.04.02, 2005.
Tan, D. G. H., Andersson, E., Kloe, J. D., Marseille, G.-J., Stoffelen, A., Poli, P., Denneulin, M.-L., Dabas, A., Huber, D., Reitebuch, O., Flamant, P., Le Rille, O., and Nett, H.: The ADM-Aeolus wind retrieval algorithms, Tellus A, 60, 191–205, https://doi.org/10.1111/j.1600-0870.2007.00285.x, 2016.
Weiler, F.: Bias correction using ground echoes for the airborne demonstrator of the wind lidar on the ADM-Aeolus mission, Master's thesis, University of Innsbruck, 96 pp., available at: https://resolver.obvsg.at/urn:nbn:at:at-ubi:1-7104 (last access: 11 October 2017), 2017.
Weissmann, M., Busen, R., Dörnbrack, A., Rahm, S., and Reitebuch, O.: Targeted Observations with an Airborne Wind Lidar, J. Atmos. Ocean. Tech., 22, 1706–1719, https://doi.org/10.1175/JTECH1801.1, 2005.
Witschas, B.: Analytical model for Rayleigh–Brillouin line shapes in air, Appl. Opt., 50, 267–270, https://doi.org/10.1364/AO.50.000267, 2011a.
Witschas, B.: Analytical model for Rayleigh–Brillouin line shapes in air: Errata, Appl. Opt., 50, 5758, https://doi.org/10.1364/AO.50.005758, 2011b.
Witschas, B.: Experiments on spontaneous Rayleigh-Brillouin scattering in air, PhD thesis, Friedrich-Schiller-Universität, Jena, 112 pp., available at: http://elib.dlr.de/98547/ (last access: 8 March 2017), 2011c.
Witschas, B., Lemmerz, C., and Reitebuch, O.: Daytime measurements of atmospheric temperature profiles (2–15 km) by lidar utilizing Rayleigh–Brillouin scattering, Opt. Lett., 39, 1972–1975, https://doi.org/10.1364/OL.39.001972, 2014.
Witschas, B., Rahm, S., Dörnbrack, A., Wagner, J., and Rapp, M.: Airborne Wind Lidar Measurements of Vertical and Horizontal Winds for the Investigation of Orographically Induced Gravity Waves, J. Atmos. Ocean. Tech., 34, 1371–1386, https://doi.org/10.1175/JTECH-D-17-0021.1, 2017.
|
2019-04-26 06:18:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7007343173027039, "perplexity": 2243.0711951627663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578760477.95/warc/CC-MAIN-20190426053538-20190426075538-00106.warc.gz"}
|
https://discuss.codechef.com/t/rbflowers-editorial/103665
|
# RBFLOWERS - Editorial
Author: Jeevan Jyot Singh
Testers: Tejas Pandey, Hriday
Editorialist: Nishank Suresh
TBD
# PREREQUISITES:
Knapsack-style dynamic programming
# PROBLEM:
You have two arrays R and B, both of length N. At each index, you can choose either R_i or B_i. Let X denote the sum of all chosen R_i and Y denote the sum of all chosen B_i. Maximize \min(X, Y).
# EXPLANATION:
The limits on N and the values are small, so a natural knapsack-style dynamic programming solution should strike you, something along the following lines:
Let f(i, x, y) be a boolean function, where f(i, x, y) is true if and only if you can make choices among the first i elements such that the sum of reds is exactly x and the sum of blues is exactly y.
Transitions are extremely easy: f(i, x, y) = f(i-1, x - R_i, y) \vee f(i-1, x, y - B_i) (\vee denotes logical OR), and memoization naturally makes transitions \mathcal{O}(1).
The final answer is the maximum value of \min(x, y) across all (x, y) such that f(N, x, y) is true.
While this is correct, it is also too slow. x and y can be as large as 200\times N, so we have 200^2 \times N^3 states in our dp, which is way too much.
Note that the constraints do allow a solution in \mathcal{O}(200 \times N^2), i.e, kicking out one state of our dp.
We can achieve that by a relatively common trick: turn the removed state into the value of the dp!
Consider a function f(i, x) which denotes the maximum sum of blues from the first i elements, given that the sum of reds is x.
Transitions for this function are as follows:
• If we choose R_i, the sum of blues is f(i-1, x - R_i)
• Otherwise, the sum of blues is f(i-1, x) + B_i
• So, f(i, x) = \max(f(i-1, x) + B_i, f(i-1, x-R_i))
Once again, by memoizing f(i, x) values, transitions are \mathcal{O}(1), so both our time and space complexity are fine.
The final answer is the maximum of \min(x, f(N, x)) across all 0 \leq x \leq 200\cdot N.
# TIME COMPLEXITY
\mathcal{O}(N\cdot S) per test case, where S = 200\times N.
# CODE:
Tester's code (C++)
#include <bits/stdc++.h>
using namespace std;
// -------------------- Input Checker Start --------------------
long long readInt(long long l, long long r, char endd)
{
long long x = 0;
int cnt = 0, fi = -1;
bool is_neg = false;
while(true)
{
char g = getchar();
if(g == '-')
{
assert(fi == -1);
is_neg = true;
continue;
}
if('0' <= g && g <= '9')
{
x *= 10;
x += g - '0';
if(cnt == 0)
fi = g - '0';
cnt++;
assert(fi != 0 || cnt == 1);
assert(fi != 0 || is_neg == false);
assert(!(cnt > 19 || (cnt == 19 && fi > 1)));
}
else if(g == endd)
{
if(is_neg)
x = -x;
if(!(l <= x && x <= r))
{
cerr << "L: " << l << ", R: " << r << ", Value Found: " << x << '\n';
assert(false);
}
return x;
}
else
{
assert(false);
}
}
}
string readString(int l, int r, char endd)
{
string ret = "";
int cnt = 0;
while(true)
{
char g = getchar();
assert(g != -1);
if(g == endd)
break;
cnt++;
ret += g;
}
assert(l <= cnt && cnt <= r);
return ret;
}
long long readIntSp(long long l, long long r) { return readInt(l, r, ' '); }
long long readIntLn(long long l, long long r) { return readInt(l, r, '\n'); }
string readStringSp(int l, int r) { return readString(l, r, ' '); }
void readEOF() { assert(getchar() == EOF); }
vector<int> readVectorInt(int n, long long l, long long r)
{
vector<int> a(n);
for(int i = 0; i < n - 1; i++)
a[n - 1] = readIntLn(l, r);
return a;
}
// -------------------- Input Checker End --------------------
int main() {
int t;
int smn = 0;
while(t--) {
int n;
smn += n;
assert(smn <= 100);
int r[n], b[n];
for(int i = 0; i < n - 1; i++) r[i] = readIntSp(1, 200);
r[n - 1] = readIntLn(1, 200);
for(int i = 0; i < n - 1; i++) b[i] = readIntSp(1, 200);
b[n - 1] = readIntLn(1, 200);
int dp[n][n*200 + 1];
memset(dp, -1, sizeof(dp));
dp[0][0] = b[0];
dp[0][r[0]] = 0;
for(int i = 0; i < n - 1; i++) {
for(int j = 0; j <= n*200 - r[i + 1]; j++)
dp[i + 1][j + r[i + 1]] = dp[i][j];
for(int j = 0; j <= n*200; j++)
if(dp[i][j] > -1)
dp[i + 1][j] = max(dp[i + 1][j], dp[i][j] + b[i + 1]);
}
int ans = 0;
for(int j = 0; j <= n*200; j++) ans = max(ans, min(j, dp[n - 1][j]));
cout << ans << "\n";
}
return 0;
}
Editorialist's code (Python)
for _ in range(int(input())):
n = int(input())
r = list(map(int, input().split()))
b = list(map(int, input().split()))
maxS = 20004
dp = [-1]*maxS
dp[0] = 0
for i in range(n):
R, B = r[i], b[i]
for x in reversed(range(maxS)):
val = -1
if dp[x] != -1:
val = dp[x] + B
if x-R >= 0 and dp[x-R] != -1:
val = max(val, dp[x-R])
dp[x] = val
ans = 0
for i in range(maxS):
if dp[i] == -1: continue
ans = max(ans, min(i, dp[i]))
print(ans)
3 Likes
To find the solution, it’s possible to do a binary search between 0 [minimum answer possible] and min(sum(R), sum(B)) [maximum answer possible].
At each step, perform a knapsack and adjust the interval accordingly.
This is what I submitted: https://www.codechef.com/viewsolution/77676358
2 Likes
Yes. I also did the same binary search approach.
https://www.codechef.com/viewsolution/77664583
Rust based solution here.
https://www.codechef.com/viewsolution/77776924
It’s 0.01s, 5.7M
It can be faster if it applies binary search approach.
The editorial is not very clear. It does not provide the intuition for the solution.
4 Likes
Can someone post top down code ?
1 Like
Which part do you find unclear?
In my opinion, the only intuition needed for the problem is the very first step: noticing that it can be modeled as a knapsack-style dynamic programming. That, unfortunately, comes with experience and practice, and you’ll find that this is the case for most dp tasks.
Once you fit it into a knapsack the rest of the solution is fairly routine, only requiring one optimization where you turn a dp state into a value (which is itself a fairly common optimization trick).
The DP part is easy to understand. There is difficulty in understanding the part where we use only one parameter to optimize. It would have been easy to understand if the top-down approach was explained.
Please read the editorial again, it details a top-down solution by defining a recursive function that can be memoized. Only the code linked at the bottom is iterative.
Turning a dp state into a value is a very common optimization. There isn’t too much intuition there, because there’s a very limited set of things you can do at all, so you might as well try them all.
When you have too many states, there is no choice but to reduce them, otherwise your solution simply won’t run in time. When doing this, you don’t want to lose any information you have, so a lot of the time only one of three things will work:
• Looking for some relation between the dp states, for example in this problem from a couple weeks ago.
• Turning a state into a value, as explained above in the editorial.
• Looking at the recursion and realizing that it isn’t possible to reach most of the states, so the naive dp is actually fast enough. One example is this problem.
In this task, if you try the first and third optimizations you’ll probably hit a dead end, while the second one does work.
1 Like
Thanks for reply. Now I get it
This helped me understand the solution.
Thank you so much for also linking the problems related to other methods. Really helpful.
can u explain your possible() function
|
2022-12-08 03:31:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7795760035514832, "perplexity": 4050.5422781705934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711232.54/warc/CC-MAIN-20221208014204-20221208044204-00820.warc.gz"}
|
http://mathhelpforum.com/advanced-statistics/142359-normal-distribution-please-help-asap-2-a.html
|
3) Suppose that a 100-point test (scores are whole numbers) is administered to every high school student in the
USA at the start of their senior year and that the scores on this test are normally distributed with a mean of 70
and standard deviation of 10. If 5 scores are selected at random, what is the probability that exactly 3 of these
scores are between 65 and 75, inclusive?
4) Let X be a normal random variable with mean μ and standard deviation σ. Show that the expected value and
variance of the quantity x-μ/σ (this is x-μ over σ) are 0 and 1, respectively.
2. Originally Posted by MiyuCat
3) Suppose that a 100-point test (scores are whole numbers) is administered to every high school student in the
USA at the start of their senior year and that the scores on this test are normally distributed with a mean of 70
and standard deviation of 10. If 5 scores are selected at random, what is the probability that exactly 3 of these
scores are between 65 and 75, inclusive?
You need to find the probability $a$ such that $P(65\leq X\leq 75) = P\left(\frac{65-100}{10}\leq Z\leq \frac{75-100}{10}\right)= \dots = a$
Then find $P(Y=3)$ where $Y$ is binomial with $p= a, n= 5$
3. Originally Posted by MiyuCat
[snip]
4) Let X be a normal random variable with mean μ and standard deviation σ. Show that the expected value and
variance of the quantity x-μ/σ (this is x-μ over σ) are 0 and 1, respectively.
Set up the required integrals, then make the substitution z=x-μ/σ and use standard results.
Alternatively, use the following well known theorems:
E(aX + b) = aE(X) + b and Var(aX + b) = a^2Var(X).
|
2016-12-05 11:11:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.912139356136322, "perplexity": 403.37731019399524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541692.55/warc/CC-MAIN-20161202170901-00132-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://docs.q-ctrl.com/boulder-opal/references/qctrl/Graphs/Graph/random_uniform.html
|
# random_uniform¶
Graph.random_uniform(shape, lower_bound, upper_bound, seed=None, *, name=None)
Creates a sample of uniformly distributed random numbers.
Parameters
• shape (tuple or list) – The shape of the sampled random numbers.
• lower_bound (int or float) – The inclusive lower bound of the interval of the uniform distribution.
• upper_bound (int or float) – The exlusive upper bound of the interval of the uniform distribution.
• seed (int, optional) – A seed for the random number generator. Defaults to None, in which case a random value for the seed is used.
• name (str, optional) – The name of the node.
Returns
A tensor containing a sample of uniformly distributed random numbers with shape shape.
Return type
Tensor
|
2021-12-02 16:27:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27612394094467163, "perplexity": 4887.50815846298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00127.warc.gz"}
|
https://www.quantstart.com/articles/Risk-Neutral-Pricing-of-a-Call-Option-with-a-Two-State-Tree
|
# Risk Neutral Pricing of a Call Option with a Two-State Tree
Risk Neutral Pricing of a Call Option with a Two-State Tree
In our last article on Hedging the sale of a Call Option with a Two-State Tree we showed that there was one unique price for a call option on an underlying stock, in a world with two-future states. This was guaranteed by the principle of no arbitrage. The most surprising consequence of the argument was that the probability of the stock going up or down did not factor into the discussion. We will now utilise a probability argument and show that the value $C$ of the call-option is achieved. Note: It will be necessary to read the prior articles on the Binomial Trees in order to familiarise yourself with the example of the stock and option before proceeding. Click for Part 1 and Part 2.
Consider the same world as before, which has a stock valued today at $S$ equal to 100, with the possibility of a rise in price to 110 or a fall in price to 90. Our task as an insurance firm is to price a call option struck at $K = 100$ such that all risk is eliminated from the sale of this option to a purchaser. We will use a probability argument for this particular technique, which is known as risk neutral pricing.
Let us assume that the probability of the stock going up to 110 is given by $p$ and that the probability of it falling is given by $1-p$. Since these are the only two probabilities, we can see that $p + (1-p) = 1$, i.e. that both probabilities sum to unity and thus one of the events must occur. Thus, the expected value of our stock $S$ tomorrow, is given by:
\begin{eqnarray*} \mathbb{E}(S_2) = 110p + 90(1-p) \end{eqnarray*}
This leads to the expected value of the option price $C$ to be:
\begin{eqnarray*} \mathbb{E}(C ) = 10p + 0(1-p) = 10p \end{eqnarray*}
The only value of $p$ which causes the option value $C$ to agree with the price obtained from the hedging argument is $p=0.5$. How does this affect the expected value of the stock in tomorrow's world? Well, $\mathbb{E}(S_2) = 110p + 90(1-p) = 110\cdot 0.5 + 90\cdot (1-0.5) = 100$. Thus the expected value of $S$ is today's price. Note that this is a risk free price because we are still setting interest rates to zero and can synthesise the stock using zero-coupon bonds worth 100, as before.
It is very important to realise that we have assumed any purchasers of this stock are risk neutral and do not need to be compensated for taking on the extra risk associated with a stock that can take on two differing values. In reality, this is not likely to be the case. These purchasers will require compensation for taking on this uncertainty, which will cause our probability $p$ to be larger than 0.5 and thus $\mathbb{E}(C )$ will be larger than the no-arbitrage value. The fact that we can hedge the entire portfolio has removed the diversifiable risk and thus eliminated the premium usually associated with holding this risk.
We haven't yet considered the possibility that $p=1$, i.e. that the stock is guaranteed to go up. This would in fact lead to an arbitrage opportunity. In order to see this, we could borrow money at the risk free rate (which is currently zero!) and then purchase some stock today. Tomorrow when the bond matures we could sell the stock, which is guaranteed to increase in value, and then pay back the bond, leaving us with a risk-free profit. Thus the probabilities in this argument are only present in order to allow both world states to occur.
So what is actually happening here and why does this method work? Once the value of the option has been specified to be its risk neutral value, which is determined by the probability $p$, we can conclude that every instrument in the market is valued today by the its risk neutral expected value tomorrow. Thus, $\mathbb{E}(C )=C_1$ implies that $\mathbb{E}(S)=S_1$.
Next we will consider a third method of pricing an option, that of replication.
|
2019-02-22 18:55:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7548850178718567, "perplexity": 367.4775958212831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247522457.72/warc/CC-MAIN-20190222180107-20190222202107-00434.warc.gz"}
|
https://physics.stackexchange.com/questions/247696/instantaneous-energy-eigenstates-for-forced-harmonic-oscillator
|
# Instantaneous energy eigenstates for forced harmonic oscillator
I'm interested in applying the adiabatic theorem to the forced harmonic oscillator with time dependent hamiltonian of the form:
$$H(t) = \hbar \omega(a^{\dagger}a + \frac{1}{2}) - f(t)a - f^{*}(t)a^{\dagger}$$
where $f(t)$ is an arbitrary function of time and $f^{*}(t)$ is its complex conjugate. I've solved the problem exactly for the system state $|\Psi (t) \rangle$ which is a coherent state. In order to apply the adiabatic theorem I need to solve for the instantaneous eigenstates of the Hamiltonian $|E^{r}(t)\rangle$, which are not the same as the system state $|\Psi (t)\rangle$. $|E^{r}(t')\rangle$ is an eigenstate of $H(t')$ only at time $t = t'$
I'm not sure where to begin, I tried expanding the eigenstates as a linear combination of the excited states of the simple harmonic oscillator, just like a coherent state. But have gotten stuck. Can anyone point me in the right direction?
• The adiabatic theorem refers to an energy gap between states. As far as I understand, your Hamiltonian is about a single isolated state since there is no an index at the creation/annihilation operators. – freude Apr 6 '16 at 6:09
• This post should help : physics.stackexchange.com/questions/129664/… – Adam Apr 6 '16 at 6:42
• @Adam the post your referenced to has a hamiltonian where the only constant is $\omega$ so they are able to factor their hamiltonian. I'm not sure I'm able to get mine in a form like theirs. – CStarAlgebra Apr 6 '16 at 12:32
• @CStarAlgebra: have a look at the second answer. There is the general case. – Adam Apr 6 '16 at 13:51
To find the instantaneous energy eigenstates, you need to treat $t$ as a parameter and solve the problem for a time independent Hamiltonian depending on the extra parameter $t$.
$$H = \hbar \omega (A^{\dagger} A + \frac{1}{2}) - \frac{ f(t)^2}{\hbar \omega}$$
$$A = a - \frac{f(t)}{\hbar \omega}$$ $$A^{\dagger} = a^{\dagger}-\frac{f(t)}{\hbar \omega}$$
Since, the commutation relations do not change: $$[A, A^{\dagger}] = [ a, a^{\dagger}] = 1$$ This Hamiltonian is just a shifted Harmonic oscillator Hamiltonian, whose (instantaneous )eigenvalues are: $$E_n = \hbar \omega (n+\frac{1}{2}) - \frac{ f(t)^2}{\hbar \omega}$$
Now, the following caution must be exercised. In order to compare the exact and the instantaneous solutions and verify the adiabatic theorem they must be expressed in terms of the same coordinates. In the instantaneous case, the shift in the raising and lowering operators will be translated to the position operator: $$X = A + A^{\dagger} = a + a^{\dagger} = x-2\frac{f(t)}{\hbar \omega}$$ The dependence of the instantaneous eigenfunctions will be on the shifted position coordinate $\Psi_n(X)$.
|
2019-06-25 03:30:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7720699310302734, "perplexity": 199.17404084580517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999787.0/warc/CC-MAIN-20190625031825-20190625053825-00342.warc.gz"}
|
http://mail.scipy.org/pipermail/scipy-user/2010-September/026737.html
|
# [SciPy-User] optimization routines can not handle infinity values
Enrico Avventi enrico.avventi@gmail....
Thu Sep 16 02:59:00 CDT 2010
forgot the determinant...
f(\Lambda) = trace(\Sigma \Lambda) - \int_\Pi \log \det [G(z) \Lambda
G(z^-1)'] z^-1 dz
On Thu, Sep 16, 2010 at 9:57 AM, Enrico Avventi <[email protected]> wrote:
> sure, no problem. the objective function is
>
> f(\Lambda) = trace(\Sigma \Lambda) - \int_\Pi \log [G(z) \Lambda
> G(z^-1)'] z^-1 dz
>
> where \Sigma and \Lambda are hermitian matrices, G(z) is complex matrix
> valued and analytic inside the unit disc and the integration is along the
> unit circle. the function is only defined when G(z) \Lambda G(z^-1)' is
> positive definite in the unit circle and tends to infinity when approaching
> a value of \Lambda that makes it losing rank.
> in some special cases you can then substitute w.l.o.g \lambda with some
> linear M(x) where x is a real vector in order to obtain a problem of the
> form that i was talking about.
>
> On Wed, Sep 15, 2010 at 10:16 PM, Sebastian Walter <
> [email protected]> wrote:
>
>> well, good luck then.
>>
>> I'm still curious what the objective and constraint functions of your
>> original problem are.
>> Would it be possible to post it here?
>>
>>
>> On Wed, Sep 15, 2010 at 10:05 PM, Enrico Avventi <[email protected]>wrote:
>>
>>> i'm aware of SDP solvers but they handle only linear objective functions
>>> AFAIK.
>>> and the costraints are not the problem. it is just that the function is
>>> not defined everywhere.
>>> i will experiment by changing the line search methods as i think they are
>>> the only
>>> part of the methods that needs to be aware of the domain.
>>>
>>> thanx for the help, i will post my eventual findings.
>>>
>>> On Wed, Sep 15, 2010 at 6:48 PM, Jason Rennie <[email protected]> wrote:
>>>
>>>> On Tue, Sep 14, 2010 at 9:55 AM, enrico avventi <[email protected]>wrote:
>>>>
>>>>> Some of the routines (fmin_cg comes to mind) wants to check the
>>>>> gradient at points where the objective function is infinite. Clearly in such
>>>>> cases the gradient is not defined - i.e the calculations fail - and the
>>>>> algorithm terminates.
>>>>
>>>>
>>>> IIUC, CG requires that the function is smooth, so you can't use CG for
>>>> your problem. I.e. there's nothing wrong with fmin_cg. You really need a
>>>> semidefinite programming solver, such as yalmip or sedumi. My experience
>>>> from ~5 years ago is that SDP solvers only work on relatively small problems
>>>> (1000s of variables).
>>>>
>>>> http://en.wikipedia.org/wiki/Semidefinite_programming
>>>>
>>>> Jason
>>>>
>>>> --
>>>> Jason Rennie
>>>> Research Scientist, ITA Software
>>>> 617-714-2645
>>>> http://www.itasoftware.com/
>>>>
>>>>
>>>> _______________________________________________
>>>> SciPy-User mailing list
>>>> [email protected]
>>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>>
>>>>
>>>
>>> _______________________________________________
>>> SciPy-User mailing list
>>> [email protected]
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>
>>>
>>
>> _______________________________________________
>> SciPy-User mailing list
>> [email protected]
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20100916/39133fe0/attachment.html
|
2014-12-22 07:56:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9189329743385315, "perplexity": 7186.664473586287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774899.57/warc/CC-MAIN-20141217075254-00044-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/108466/simultaneous-vector-equations
|
# Simultaneous Vector Equations
How do I solve the simultaneous vector equations for $r$
$$r \wedge a = b, \qquad r \cdot c = \alpha$$
given that $a\cdot b=0$ and $a$ is not equal to $0$?
I am required to distinguish between the cases $a\cdot c$ is not equal to $0$ and $a\cdot c=0$ and give a geometrical interpretation.
-
Are you working in $\mathbb{R}^3$ ? – Henno Brandsma Feb 12 '12 at 11:12
Yes I believe so. – Euden Feb 12 '12 at 11:57
I've been looking at books for something similar for hours now and have found nothing on this. How can i answer this question? – Euden Feb 12 '12 at 14:07
I'll try. Using the property of triple product $c\cdot (r \wedge a) = r \cdot (a \wedge c) = c\cdot b$.
So there are
$$r\cdot b = 0$$ $$r\cdot(a \wedge c) = c\cdot b$$ $$r\cdot c = \alpha$$
If $a \nparallel c$ vectors {$c, a \wedge c, b$} is basis $R^3$.
Applying Gram–Schmidt process we will have orthonormal basis: $e_1 = \frac{c}{|c|}, e_2 = \frac{a \wedge c}{|a \wedge c|}, e_3 = b - (b\cdot e_1)e_1 - (b\cdot e_2)e_2$.
Final $r = \alpha e_1 + \frac{(c\cdot b)}{|a \wedge c|}e_2 + (-\frac{(b\cdot c)}{|c|^2}\alpha -\frac{(b\cdot a \wedge c)}{|a \wedge c|^2}(b\cdot c))e_3$
I just took appropriate basis.
-
|
2015-05-25 01:31:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8760247826576233, "perplexity": 577.3276574226045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928114.23/warc/CC-MAIN-20150521113208-00262-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://learn.careers360.com/engineering/question-a-house-is-served-by-220-v-supply-line-in-a-circuit-protected-by-a-9-ampere-fuse-266/
|
Q
# A house is served by 220 V supply line in a circuit protected by a 9 ampere fuse
A house is served by 220 V supply line in a circuit protected by a 9 ampere fuse. The maximum number of 60 W lamps in parallel that can be turned on, is A) 44 B) 20 C) 22 D) 33
$power = \frac{V^2}{R} , so R_{bulb}= \frac{220^2}{60}=806.66\ ohm \\Resistance of circuit = \frac{V}{I} = \frac{220}{9} = 24.44\ ohm\\ let\ n\ bulbs\ are\ in\ parallel, then\ net\ resistance\ is \frac{n}{R_{bulb}}\\so\frac{1}{R}=\frac{n}{R_{bulb}}\\\therefore n=\frac{806.66}{24.44}=33\ bulbs$
|
2020-02-27 20:23:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3979891240596771, "perplexity": 958.1480938505206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146809.98/warc/CC-MAIN-20200227191150-20200227221150-00437.warc.gz"}
|
https://www.physicsforums.com/threads/any-recommendations-for-a-good-cheap-usb-oscilloscope.936167/
|
# Any recommendations for a good cheap USB oscilloscope?
Gold Member
2020 Award
There has been some discussion on another thread about possible problems with mains spikes and lighting LEDs. I have decided that I could use an entry level scope to capture and measure any that are around on my supply. My regular analogue scope is hopeless for this purpose.
I am sure that there will be some home experimenters on this forum. Is there a particular make that I should love / avoid? The Hantek range is high profile. Are they OK or is it just 'marketing'?
krater
Hi Sophicentaur,
I was considering purchasing the Hantek HT6022BC20MHZ usb scope, however, I read some reviews on Amazon and 25% of reviewers gave it one star. Evidently the software is atrocious. One of many problems is the trigger is off to the left of the screen. Inexpensive but probably unfit for your purpose. Be careful unless you like collecting useless junk.
Peace,
Fred
I have been looking at the Analog Discovery module for a while - promised to to my self a year ago, but still no.
JRMichler
Here's a possibility: https://www.omega.com/pptst/OM-USB-1208HS_SERIES.html. I have used Omega data acquisition products several times over the years with excellent results. I have no experience with the oscilloscope software that is packaged with the data acquisition boards.
Omega is a reseller. Their pricing seems to be the same as you would pay elsewhere, and they normally have everything in stock. I believe that their USB DAQ products are from Measurement Computing, with Omega labels: https://www.mccdaq.com/data-acquisition/low-cost-daq.
Gold Member
2020 Award
Hi Sophicentaur,
I was considering purchasing the Hantek HT6022BC20MHZ usb scope, however, I read some reviews on Amazon and 25% of reviewers gave it one star. Evidently the software is atrocious. One of many problems is the trigger is off to the left of the screen. Inexpensive but probably unfit for your purpose. Be careful unless you like collecting useless junk.
Peace,
Fred
Thanks Fred, I will bear that in mind. Unfortunately, the problem with Amazon reviews is that you need to know a bit about the reviewers themselves in order to get real use out of them. I have read 'bad' reviews of purchases which just reflect how inept some people can be. Personally, I have had many products which have worked find - when I have learned how to use them (RTFM etc.). I looked at all the Amazon Hantek reviews and found that over 50% were 4 or 5 stars. It we were discussing a washing machine or vacuum cleaner, I would probably take more notice of the bad comments. This is one reason why I was after personal assessment of the devices by PF members - who tend to know more about how many beans make five. Pity you haven't actually bought one and learned to tame it.
When you think how much you have to pay for an Arduino processor plus peripherals, the £50 for a working system is very cheap and it's not surprising you don't get £1k's worth of product.
@JRMichler: Also, thanks for that. The omega page shows mostly "big boys' " equipment that would require a fair bit of development work to make it work.
I have been looking at the Analog Discovery module for a while - promised to to my self a year ago, but still no.
That unit looks very useful and not a ridiculous price but it is more than I want to pay - I want to monitor mains spikes in a hunt for a way to protect my LEDs against whatever it is that's giving the such short lives. I could buy a lot of replacements for £200.
I may need to rethink what to do about this. eBay doesn't have much in the way of used PC scopes. It makes me wonder why dissatisfied users have not flooded eBay with them.
anorlunda
Staff Emeritus
There has been some discussion on another thread about possible problems with mains spikes and lighting LEDs. I have decided that I could use an entry level scope to capture and measure any that are around on my supply.
Whoa. Slow down. The kinds of spikes that might fry a LED do not occur every cycle. They might occur 3 or 4 times per year when one of your neighbors switches a big load, or when the line crews are doing maintenance and repair, or when lightning hits a power line miles away from your home.
When I was a fire fighter, I recall incidents where the wind knocked down a high voltage line. As the wire was falling, it momentarily touched a lower voltage line below. Those incidents fried TVs and other electronics in houses for miles around. Most people could not imagine what caused it.
There are nearly 2 billion cycles in a year. You can't watch all of them on your oscilloscope. So I think an oscilloscope is not the right tool for the job.
To capture those infrequent spike events, you need something that measures the highest instantaneous voltage, then retains that measurement until it is copied somewhere, or until you manually reset it.
An Arduino or a Raspberry Pi might make an excellent data logger, but even then you can't just measure voltage on an analog input in a software loop. The spike might last only a microsecond and your software may not be able to loop a million times per second. Some kind of analog latching circuit needs to be part of your solution.
Check the Arduino or Rasberry Pi, forums for "power quality monitoring" projects.
dlgoff, NTL2009 and donpacino
Borek
Mentor
If you want it cheap there is always DSO138 DIY. With all its limitations it already proved itself useful to me on several occasions.
What about a data logging multimeter with peak detect?
donpacino
Gold Member
2020 Award
Whoa. Slow down. The kinds of spikes that might fry a LED do not occur every cycle.
I thought I could put the scope on single shot (perhaps 50ms sweep) with a pre trigger of a few ms. Wouldn't that pick up a spike? I was thinking that a 20MHz-ish bandwidth scope (what you get for that sort of money) would show a blip. Looking at the display every few hours / days would show what happened at that one instant. You could do it with an old analogue storage scope as long as you could come back before the screen had bloomed into nothingness.
Do you really think my idea is a non starter? Best to strangle it at birth than find it's a non-runner.
Hantek scares me so I decided to buy Xprotolab Plain http://www.gabotronics.com/oscilloscopes/xprotolab-plain.htm. Cost: $20 . I plan on setting it up next week. Insha Allah I'll let you know how it goes. Fred I am in no rush so I can wait to see how well you get on with it. What about a data logging multimeter with peak detect? I haven't used one of those. Would a peak reading re-set after a while? To allow recording more than one peak. If you want it cheap there is always DSO138 DIY. With all its limitations it already proved itself useful to me on several occasions. That looks pretty 'entry level' and it's in kit form (isn't it?). Also cheaper than I was think of paying. Opinions on the Picoscope?? anorlunda Staff Emeritus I thought I could put the scope on single shot (perhaps 50ms sweep) with a pre trigger of a few ms. Wouldn't that pick up a spike? I was thinking that a 20MHz-ish bandwidth scope (what you get for that sort of money) would show a blip. Looking at the display every few hours / days would show what happened at that one instant. You could do it with an old analogue storage scope as long as you could come back before the screen had bloomed into nothingness. Do you really think my idea is a non starter? Best to strangle it at birth than find it's a non-runner. That depends. What are you triggering on? Is it practical to leave it set up for 3 months waiting for a trigger? Is it really the waveform you want to see, or just the value of the peak voltage? Borek Mentor That looks pretty 'entry level' and it's in kit form (isn't it?). Also cheaper than I was think of paying. Yes and yes, requires an evening with a soldering iron, but in general it is quite simple (if I made it, everyone can ). Science Advisor Gold Member 2020 Award That depends. What are you triggering on? Is it practical to leave it set up for 3 months waiting for a trigger? Is it really the waveform you want to see, or just the value of the peak voltage? I could leave it waiting for it to trigger on a significant spike whenever I wasn't using the PC. I could test all my electrical appliances without needing to be right by the logger. A trace of an actual spike could be informative. I could also log the voltage variations, which would be useful in itself. I remember, years ago, I was losing filament bulbs after only a very short time. The mains volts were peaking well over 250V and it was only when the company installed a logger (pen and paper roll) that they actually believed me. Of course, the LIARS told me there was nothing wrong with my supply but I had read the trace and told them so. Within a day or two, the volts were something much more reasonable. The house was right next to a substation which fed a couple of hundred houses, downstream. They had jacked up the transformer volts so that people at the far end of a resistive cable were getting a reasonable voltage. I never went to those houses to ask if their lights were only red hot after the reduction! Science Advisor Gold Member 2020 Award The picoscope looks a far superior unit (the cheapest is about £70) and the opinion is that the software is pretty good too. I think I will need to find another excuse for buying one before I actually part with the old mazuma, though. It will have to go somewhere on the list of priorities but not at the top. I haven't used one of those. Would a peak reading re-set after a while? To allow recording more than one peak. https://www.amazon.com/dp/B010Y71G1K/?tag=pfamazon01-20 Something like this cheap and cheerful object. It sends data to an app on your phone - the idea being, you check it every day or so for the max reading. I haven’t used one of these, but my Fluke 87 has a 250 us peak reading with max/min recording. I cancel the auto power off, and power it from a wall adapter. That way, it will pick up the max peak voltage, but won’t record how many peaks per session. I think the Owon would. Common surge protection clips any voltage over 400V - is this what you would consider a damaging level for your lights, or perhaps 10% above the usual peak of 330V, say 363V? I’d be very interested to see what you find, but I suspect it may be a while before you catch some juicy transients. How would you solve this - can you get global surge protection for whole lighting circuits? You’d need it to be connected all the time. What about modifying an LED light, that is used most of the time, with a suitable MOV? This should clip transients for the whole circuit. Better still, an earthed light fitting with three MOVs between L, N and E. Science Advisor Gold Member 2020 Award https://www.amazon.com/dp/B010Y71G1K/?tag=pfamazon01-20 Something like this cheap and cheerful object. It sends data to an app on your phone - the idea being, you check it every day or so for the max reading. I haven’t used one of these, but my Fluke 87 has a 250 us peak reading with max/min recording. I cancel the auto power off, and power it from a wall adapter. That way, it will pick up the max peak voltage, but won’t record how many peaks per session. I think the Owon would. Common surge protection clips any voltage over 400V - is this what you would consider a damaging level for your lights, or perhaps 10% above the usual peak of 330V, say 363V? I’d be very interested to see what you find, but I suspect it may be a while before you catch some juicy transients. How would you solve this - can you get global surge protection for whole lighting circuits? You’d need it to be connected all the time. What about modifying an LED light, that is used most of the time, with a suitable MOV? This should clip transients for the whole circuit. Better still, an earthed light fitting with three MOVs between L, N and E. This is the trouble with PF. People are so sensible and well informed. (Moi aussi) Yes, that solution would probably give me an answer and I would have a real problem arguing against it. However, I would really fancy a better scope than the (second hand analogue) one I have already. There's no excuse for this because I do very little construction or fault finding these days. What I want is for some equally irresponsible person to tell me that the, for example Picoscope is excellent value and works very well. Then I could go out an buy one - and regret it when I find that it doesn't solve my problem. I know that a good digital scope will cost a lot. I could spend my money on improved Astro equipment and get more use from it. But thanks for the advice in all these posts. Mainly, you have saved me from the potential Hantek Black Hole. Meanwhile, I think I will provide my cheaper DMM with a mains adaptor and just keep my eye on the running peak value it measures. That could be sufficient evidence for me. anorlunda Staff Emeritus atyy tech99 Gold Member THANK YOU @tech99 . It is not every day that I get to add an electrical word to my vocabulary. I never heard the word coherer before. That also sounds like a fun project. I bet you could adjust the gap to trigger at different voltages. The threshold voltage seems to depend on the materials and is just a few volts. There is also a "linear" mode of operation allowing detection of signals of around 50mV without amplification. The following notes might be of interest:- The device was used as an early detector of radio waves. It is usually described as a glass tube in which two metal electrodes are placed, the gap between them containing some loose metal powder or sometimes a drop of mercury. There are other types known, some of which resemble semiconductor diodes, but which might or might not employ semiconductor action, and others which are just light contacts. A bias battery of typically 1 volt is connected, no current flowing under rest conditions, but when the voltage is increased by the addition of RF voltage to the battery potential, a direct current starts to flow. This can create a click in a pair of earphones, and if the signal is amplitude modulated, the modulation can be heard. If the RF voltage is fairly large, say 1 volt or more, and the resistance in the circuit is small, the contact can micro-weld itself closed, so that a large current can flow from the battery. This produces a latching action, able to operate a relay, and a mechanical reset was often used in the form of an electromagnet to shake the coherer and break the contact. The operation of the coherer is seen to have two modes: a “linear” detection mode and a latching mode. It does not appear to involve the cohering of the metal particles together, and such a mechanical action seems unlikely in view of the use of the device at 60 GHz by J C Bose in 1895. The linear mode seems to occur due to an oxide film on a contact surface, and the response seems to be an S-shaped curve which is symmetrical with respect to battery polarity. By biasing the device a little way up the curve, an asymmetrical action occurs, and when AC is added to the bias potential, rectification occurs. However, this does not appear to be semiconductor rectifying action, as it occurs with either battery polarity, even though the materials used might resemble those of a semiconductor diode or crystal detector. The action has been said to resemble that of a Metal-Insulator-Metal (MIM) diode, where tunnelling occurs through an insulating oxide layer. My own observations support this, as I found the action to be nearly always symmetrical, and it did not occur with carbon, which does not form an oxide film. A particularly sensitive design uses copper electrodes with a drop of mercury between them, and it seems likely that the oxide barrier consists of copper oxide. It also works with iron, zinc brass etc but not carbon. There is no need for the signal to be AC; the coherer is just responding to an increase in voltage, either as a non linear conductor or a threshold device, and frequency has no relevance. Maybe the first demonstration of radio communication took place in London in February 1880, when Professor Hughes obtained a range of 500m using a mobile receiver having a steel/carbon contact. The mercury coherer was used by Marconi in 1901 for the transatlantic test, in conjunction with a very sensitive earpiece. In my own tests I was able to hear HF broadcasting with a coherer and obtained sensitivity approximately 10 dB inferior to that of a Germanium diode. sophiecentaur and anorlunda The picoscope looks a far superior unit (the cheapest is about £70) and the opinion is that the software is pretty good too. I think I will need to find another excuse for buying one before I actually part with the old mazuma, though. It will have to go somewhere on the list of priorities but not at the top. I purchased this picoscope a few months ago. Loaded the software on my linux system, and played with it a bit, and it seemed to work quite well. I haven't had any specific use for it since then, so can't comment much beyond that, but for the price it seemed good to have available in case the need came up. I gave away my dead, bulky tube Tektronix a few decades ago. I paid ~$140 US for the picoscope. It also has a built in function generator, so that's a nice plus.
https://www.amazon.com/gp/product/B00GZMRZ3M/?tag=pfamazon01-20
While they say 10MHz BW, remember you will want several samples to really see a wave shape. So a 1MHz square wave would only show the 3x 5x 7x 9x harmonics. But that's good enough for many things, just don't expect much more than go-no-go signals at much above 1 Mhz.
sophiecentaur
Gold Member
2020 Award
While they say 10MHz BW, remember you will want several samples to really see a wave shape.
I was thinking that mains circuits would probably limit spike bandwidth to not many MHz - or at least contain significant energy within that limit. This link talks in terms of several μs for transient period.
I know we would all want 100MHz scopes but, at the price. . . .
I purchased this picoscope a few months ago.
--- Can you set it up to monitor your mains, and report how it works? -- Need isolation (which becomes a filter), and set the trigger outside of the regular waveform, turning off a large appliance may be a good enough event to force a trigger..
|
2021-06-12 16:41:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.272773802280426, "perplexity": 1616.453200837479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586239.2/warc/CC-MAIN-20210612162957-20210612192957-00157.warc.gz"}
|
https://chemistry.stackexchange.com/tags/bond/new
|
# Tag Info
2
Lennard-Jones Parameters To quote the original source of those Lennard-Jones parameter values (Gordon and Kim), For all the systems involving atoms larger than helium, the predictions appear quite reliable... Our approach thus provides the first successful prediction of the intermolecular potentials for the rare gases (except helium) So I would not put ...
1
I'm not sure if coordination bonds as such are defined in the connectivity table in .sdf. Perhaps the original definition by MDL (back when operative)/Symyx have a line about this (still available from archive.org's snapshots by their Wayback Machine e.g., here).* While currently not functional for me, RDKit's cookbook includes a relevant entry to this, ...
-2
First of all, charged species are in general less stable than neutral species. So $\ce{O_2}$ will be the most stable species among them. In the next step compare their bond orders. The species with more bond order is more stable (in general). So next place goes to $\ce{O_2^+}$. Finally, in comparison of $\ce{C_2^+}$ and $\ce{O_2^-}$ both will have the same ...
5
Formal charge, like oxidation state, is fundamentally just a bookkeeping device (with a different counting method). This being so, formal charge can be correlated with an unequal sharing electrons between like atoms. In such cases it points to molecular polarity in situations where we would not ordinarily expect it. The classic example is ozone, $\ce{O3}$ (...
6
Formal charge is considered to be the charge present in one atom by considering all the bonds to be 100% covalent. The "charge present in one atom" is not a clear concept. A better way is to say "formal charge is the charge assigned to an atom symbol in a Lewis structure". This acknowledges that the formal charge depends on the choice of ...
3
Your problem seems to stem from confusing the number of linear independent basis functions(which is the same number as the size of the atomic orbital basis with which we started) and the number of all possible functions that can be built using this basis. Just as in a two dimensional vector space, where you have a maximum of two independent basis functions, ...
4
Yes, you are correct. The carbon-magnesium bond in a Grignard reagent is polar covalent with carbon being the negative end of the dipole, which explains its nucleophilicity and the magnesium-halogen bond is largely ionic. (image source)
2
Firstly, note that the labels $\sigma$, $\pi$, and $\delta$ aren't universally applicable to MOs; it depends on the molecular geometry. These labels are mostly useful for linear molecules. Non-linear molecules often have MOs that are labelled differently. Methane is a decent example. Other examples include water and ammonia. Restricting ourselves to linear ...
2
There are two things wrong with the premise. First, no one atom really makes any covalent bonds. A covalent bond requires at least two atoms, more if orbitals are delocalized. Second, manganese has not only two but seven valence electrons, which are plenty for bonding covalently to as many as four oxygen atoms. The additional valence electrons come from the ...
3
When you have atoms bonded all in one plane, there will be $p$ orbitals oriented perpendicular to the plane which may not interact significantly with adjacent atoms. Such orbitals would then be called nonbonding. We may compare water with carbon dioxide. Introductory textbooks often describe the oxygen as having a distorted $sp^3$ hybridization, but in ...
10
Most materials are available for the fall 2010 at http://chem125.webspace.yale.edu/indexFall10.html and for Spring 2011 (the last time the course was given) http://chem125.webspace.yale.edu the author is at [email protected]
6
There is a webpage called Freshman Organic Chemistry I described as CHEM 125a. On the bottom of the page, there is a zip of all course pages $(\pu{10 MB})$. However, there are Terms of Use described in a separate page so you may need to donate money or pay for the usage. There is an age restriction as well: Use of Open Yale Courses website is restricted to ...
6
I would add two points to MFarooqs' answer. First, I would emphasize that the basis for attractive van der Waals interactions is the polarizability of molecules. Polarizability is the property of having flexible charge distributions which may be distorted by interacting with charges outside of a molecule, leading to a more stable electronic state. ...
3
The so called "lattice energy" method doesn't work for all compounds, same is the case with Fajan's rule. These are theories which have been developed to explain the characteristics and make it somewhat believable to the general audience but we cannot compare all compounds using one theory or the other because each theory has its own drawbacks. ...
Top 50 recent answers are included
|
2021-09-24 05:34:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5568519830703735, "perplexity": 1062.4797689484835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00431.warc.gz"}
|
https://ijfs.usb.ac.ir/article_5223.html
|
# Interval number ranking method considering multiple decision attitudes
Document Type: Research Paper
Authors
1 School of Resources and Environmental Engineering, Wuhan University of Science and Technology, Wuhan, China
2 Hubei key laboratory for efficient utilization and agglomeration of met allergic mineral resource, Wuhan University of Science and Technology, Wuhan, China
10.22111/ijfs.2020.5223
Abstract
Many interval number ranking methods cannot represent
the different attitudes of decision makers with different risk
appetites. Therefore, interval numbers are expressed in the Rectangular
Coordinate System (RCS). After mining the interval numbers in the RCS,
the Symmetry Axis Compensation Factor, which is known as $$\lambda$$,
was introduced, and the Equivalent Function of the Goal Interval Number
(GIN) was deduced. Thus, the interval number ranking method considering
symmetry axis compensation was defined along with its application
procedures. Additionally, the feasibility and effectiveness of this
method were verified through examples. This method is intuitive and
simple and can represent multiple attitudes of decision makers with
different risk appetites.
Keywords
|
2020-09-29 07:39:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24787552654743195, "perplexity": 5680.362461927642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401632671.79/warc/CC-MAIN-20200929060555-20200929090555-00190.warc.gz"}
|
https://mathoverflow.net/questions/268013/undefinability-of-mathbbz-in-the-reals/268015
|
# Undefinability of $\mathbb{Z}$ in the reals
It is a well-known fact that $\mathbb{Z}$ is not definable in the structure $\mathcal{R}=(\mathbb{R}, +, ., < , 0, 1)$. This follows from Tarski's quantifier elimination, and in fact, we can conclude that the structure $\mathcal{R}$ is an o-minimal structure.
Another proof, suggested in the answer by Mikhail Katz, is to use the Godel's incompleteness theorem and the fact that the theory of the structure is complete.
Question. Is there a more direct proof of the above undefinability result?
I essentially mean a proof which does not use the above results of Tarski or Godel or its variants.
In general, what other different proofs of the above result exist? Providing references is appreciated.
In the paper A dichotomy for expansions of the real field a criteria is given for the undefinability of $\mathbb{Z}$ in expansions of the real field. A natural question is if we can use this criteria and prove the theorem directly?
• I am mainly interested in an argument like this: suppose $\mathbb{Z}$ is definable in the structure $\mathcal{R}$, by some formula and then work with the structure and the formula to get a contradiction. – Mohammad Golshani Apr 24 '17 at 12:05
• You wrote subtraction $-$, but did you mean multiplication? – Joel David Hamkins Apr 24 '17 at 13:11
• In particular, without multiplication, I think things would be considerably easier. – Joel David Hamkins Apr 24 '17 at 13:23
• @MohammadGolshani How can you hope to work with the structure of some formula without eliminating quantifiers from that formula and thus proving quantifier elimination? – Will Sawin Apr 24 '17 at 17:21
• @NateEldredge The statement that if it were definable, it could be written as a finite union of solution sets of systems of polynomial inequalities, is correct, but nontrivial. Indeed, this is exactly the Tarski’s theorem on quantifier elimination mentioned in the question. – Emil Jeřábek supports Monica Apr 24 '17 at 18:28
The theory of real closed fields is complete and if the integers were definable in $\mathbb R$ this would contradict Goedel's incompleteness result.
• Thanks, are there proofs avoiding Godel's incompleteness theorem too. When writing the question, I had the idea of some different proof (maybe not using Godel's theorem). – Mohammad Golshani Apr 24 '17 at 11:53
• I am not a specialist, but "naively", if you have a "real closed field" $K$, cannot you define $\mathbb{Z}$ as the ring generated by $1_K$ ? – Duchamp Gérard H. E. Apr 24 '17 at 11:56
• It is not definable in that structure. Note that by definability, I mean first order definable in the structure – Mohammad Golshani Apr 24 '17 at 11:56
• Said another way: "ring generated by" is a second-order concept. It cannot be stated in the first-order language of $(\mathbb{R}, +, -, < , 0, 1)$. – Gerald Edgar Apr 24 '17 at 12:29
This is not a real answer but rather an observation. The undefinability of $\mathbb{Z}$ follows from the fact that every infinite definable set in such a structure has uncountable cardinality. This property is strictly weaker than both o-minimality and quantifier elimination. Nevertheless, I do not know any proof of this fact that does not use neither of those. I guess this simply induces a nice sub-question of the original one.
• Or: from the similarly weak fact about Th(R) that $\forall x \exists y>x \,\phi(y) \rightarrow \exists x \forall y>x \,\phi(y)$ for any potential definition $\phi$. This might be easier to prove than quantifier elimination, without all the technicalities of Sturm's lemma, just focusing on the easy algebraic geometry near infinity. – Matt F. Apr 24 '17 at 20:34
• @MattF, it would be nice to have some parentheses in the formula you presented. Without them this is a bit tricky to read. – Mikhail Katz Apr 25 '17 at 7:18
• @MattF. True, indeed this is some sort of "o-minimality near infinity": every cofinal definable set contains an interval of the form $(a,+\infty)$ for some $a\in \mathbb{R}$. This property implies of course that cofinal definable sets are uncountable, but also states the stronger condition of containing an interval. I guess the we can weaken then the property that implies the undefinability of $\mathbb{Z}$ to be just ''cofinal definable sets are uncountable''. – Cubikova Apr 25 '17 at 8:32
• $(\forall x\, \exists y>x\, \phi(y))\rightarrow (\exists x\,\forall y>x\, \phi(y))$ – Matt F. Aug 16 '19 at 18:01
This is very similar to the answer of Mikhail Katz, but we can avoid the incompleteness theorem by using the halting problem instead.
That is, since the theory of real-closed fields is computably axiomatizable and complete, it is decidable. So if $\mathbb{Z}$ were definable in $\langle\mathbb{R},+,\cdot,0,1,<\rangle$, then arithmetic truth would be decidable, contradicting the undecidability of the halting problem.
This argument still relies, however, on Tarski's quantifier-elimination.
• I think you mean "recursively axiomatizable" not "finitely axiomatizable". – Philip Ehrlich Apr 24 '17 at 13:29
|
2020-01-23 23:19:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9279717803001404, "perplexity": 406.12360465254204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00264.warc.gz"}
|
https://dsp.stackexchange.com/questions/56470/compress-a-signal-by-storing-signal-diff-instead-of-actual-samples-is-there-su
|
Compress a signal by storing signal diff instead of actual samples - is there such a thing?
I am working with EMG signals sampled at 2kHz and 16 bits, and noticed that they "look smooth", that is, the signals are differentiable, and if I apply a "diff" function (numpy.diff in my case) the magnitude of the values is considerably lower than the actual samples.
So I am considering to do something like:
• Split the signal into chunks of a given size;
• Foreach chunk, using variable length quantity (or similar), create a byte list and:
• For the first sample of the chunk, add its absolute value;
• For the remaining samples of the chunk, add their difference, relative to the previous value;
This way, the smoother the signal, and the closer it is to the baseline, the more I expect to decrease the byte-size of each chunk, by decreasing the individual byte-size of a large part of the samples.
Although I suspect this would improve things for me, I also suspect that this is nothing new, and perhaps it has a proper name, and even more elegant/efficient ways to implement it.
So the question is: what is the name of this compression technique, and what are its alternatives and/or variants?
• – MBaz Apr 5 '19 at 19:03
• @MBaz I think your comment contains the correct answer. If you write it down I would most probably accept it. Thanks for now! – heltonbiker Apr 5 '19 at 19:19
• BTW: this is also done in image compresion, in PNG format, line by line (only that for each line you can choose among using difference with respect to the pixel left or up, or other two predictions - or none of them); the standard calls this "filtering", but it's actually a typical "predict and code the prediction error" scheme, of which your technique is a basic case en.wikipedia.org/wiki/Portable_Network_Graphics#Filtering – leonbloy Apr 6 '19 at 17:19
Another notion you might wanna look into for lossless compression of a bandlimited signal (it's this bandlimiting that gets you this "smoother ... signal, ...closer ... to the baseline") is Linear Predictive Coding.
I think this is historically correct that LPC was first used as a variant of Delta coding where the LPC algorithm predicts $$\hat{x}[n]$$ from the set of samples: $$x[n-1], x[n-2], ... x[n-N]$$. If the prediction is good, then the real $$x[n]$$ is not far off from the prediction $$\hat{x}[n]$$ and you need store only the delta $$x[n]-\hat{x}[n]$$ which is smaller in magnitude and a smaller word width might be sufficient. You would need to store the LPC coefficients for each block, but there are usually no more than a dozen or so of these.
This stored difference value can be compressed further using something like Huffman coding in which you would need to either store the "codebook" along with the compressed data or have some kinda codebook standardized so that both transmitter and receiver know it.
I think it's some combination of LPC and Huffman coding that is used by various lossless audio formats. Maybe there is some perceptual stuff used to, to get almost lossless compression.
You can also think of delta encoding as linear predictive coding (LPC) where only the prediction residual ($$x[n]-\hat{x}[n]$$ in @robertbristow-johnson's notation) is stored and the predictor of the current sample is the previous sample. This is a fixed linear predictor (not with arbitrary coefficients optimized to data) that can exactly predict constant signals. Run the same linear predictive coding again on the residuals, and you have exactly predicted linear signals. Next round, quadratic signals. Or run a higher-order fixed predictor once to do the same.
Such fixed predictors are listed in Tony Robinson's SHORTEN technical report, yours in Eq. 4, and are also included in the FLAC lossless audio codec although not often used. Calculating the best prediction coefficients for each data block and storing them in a header of the compressed block results in better compression than the use of fixed predictors.
For $$m$$-bit input the residual is an $$m+1$$ -bit number, because it is the difference of an $$m$$-bit input and an $$m$$-bit prediction. However, removing the most significant bit (MSB) of the residual has no consequence in $$m$$-bit modular arithmetic, so the residuals can be stored as $$m$$-bit numbers.
The linear predictor is supposed to do the whitening, making the residuals independent. In lossless compression, what is left to do is to entropy code the residuals, instead of using run-length or other symbol-based encoding that doesn't work so well on noisy signals. Typically, entropy coding is done by a prefix code (also known as prefix-free code) that assigns longer code words to large residuals, approximately minimizing the mean encoding length for an assumed distribution of the residual values. A Rice code (also known as Golomb–Rice code or GR code) variant compatible with signed numbers can be used, as is done in FLAC (Table 1), or signed exp-Golomb code as is done in the h.264 video compression standard. Rice code has a distribution parameter that needs to be optimized for the data block and saved in the block header.
Table 1. Binary codewords of 4-bit signed integers encoded in Rice code with different Rice code parameter $$p$$ values, using FLAC__bitwriter_write_rice_signed (source code). This variant of Rice code is a bit wasteful in the sense that not all binary strings are recognized as a codeword.
$$\begin{array}{rl} \begin{array}{r}\\-8\\-7\\-6\\-5\\-4\\-3\\-2\\-1\\0\\1\\2\\3\\4\\5\\6\\7\end{array}&\begin{array}{lllll} p=0&p=1&p=2&p=3\\ 000000000000001&000000010&000110&01110\\ 0000000000001&00000010&000100&01100\\ 00000000001&0000010&00110&01010\\ 000000001&000010&00100&01000\\ 0000001&00010&0110&1110\\ 00001&0010&0100&1100\\ 001&010&110&1010\\ 1&10&100&1000\\ 01&11&101&1001\\ 0001&011&111&1011\\ 000001&0011&0101&1101\\ 00000001&00011&0111&1111\\ 0000000001&000011&00101&01001\\ 000000000001&0000011&00111&01011\\ 00000000000001&00000011&000101&01101\\ 0000000000000001&000000011&000111&01111\end{array}\end{array}$$
As a further enhancement, encoding not just one but multiple residuals into a single codeword can more accurately accommodate the true distribution of residuals and may give a better compression ratio, see asymmetric numeral systems.
• as similar to your suggestion, Subband ADPCM would possibly be the best choice... – Fat32 Apr 5 '19 at 21:08
That's used a lot. See for example https://en.wikipedia.org/wiki/Delta_encoding, https://en.wikipedia.org/wiki/Run-length_encoding.
"Looking Smooth" typically means "not a lot of high frequency content". The easiest way to take advantage of this, is to figure out what the highest frequency really need then low-pass filter and choose an lower sample rate.
IF you signal has a non-flat spectrum, it's typically advantageous to "whiten" the signal, i.e. filter it so that the average spectrum is white, then encode, decode and filter with the inverse signal to recover the signal. This way you spend more bits on the high energy frequencies and less and the low energy ones. Your quantization noise follows the spectrum of the signal.
The scheme that you suggest is one of the simplest forms of this approach: your whitening filter is a differentiator and your inverse filter is an integrator.
|
2020-02-23 22:46:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.644000768661499, "perplexity": 989.7830881373078}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145859.65/warc/CC-MAIN-20200223215635-20200224005635-00255.warc.gz"}
|
https://brilliant.org/discussions/thread/chemistry-ionic-equilibrium/
|
# Chemistry Ionic Equilibrium
The pH of $0.1 M$ solution of $NaHCO_3$ (Given $pK_1 = 6.38$ and $pK_2 =10.32$ ) is
1. $8.35$
2. $6.5$
3. $4.3$
4. $3.94$
I have doubt in this question. Please type your method in solution.
Note by Megh Parikh
6 years, 7 months ago
This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science.
When posting on Brilliant:
• Use the emojis to react to an explanation, whether you're congratulating a job well done , or just really confused .
• Ask specific questions about the challenge or the steps in somebody's explanation. Well-posed questions can add a lot to the discussion, but posting "I don't understand!" doesn't help anyone.
• Try to contribute something new to the discussion, whether it is an extension, generalization or other idea related to the challenge.
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $$ ... $$ or $ ... $ to ensure proper formatting.
2 \times 3 $2 \times 3$
2^{34} $2^{34}$
a_{i-1} $a_{i-1}$
\frac{2}{3} $\frac{2}{3}$
\sqrt{2} $\sqrt{2}$
\sum_{i=1}^3 $\sum_{i=1}^3$
\sin \theta $\sin \theta$
\boxed{123} $\boxed{123}$
Sort by:
The only problem in solving this question is that $NaHCO_{3}$ (basically, the carbonate ion $HCO_3^-$) is amphiprotic (can donate or accept $H^+$ ion).
So, the total concentration of protons in the water due to the addition of $NaHCO_3$ will be equal to the number produced, minus the number lost.
Using this and then approximating yields,
$pH$ = $\frac{1}{2}$($pK_1 + pK_2$) = $\frac{1}{2}$($6.38 + 10.32$) = $8.35$.
This implies that the $pH$ is independent of the concentration.
I saw this solution here.(page 5).
By the way, what score did you get in KVPY?
- 6 years, 7 months ago
Thanks. It is indeed curious that the pH is independent on the concentration itself! Hats off to approximations;)
- 6 years, 7 months ago
Hey,@Siddharth Brahmbhatt , what score did you get in KVPY?
- 6 years, 7 months ago
Well I got 59.5 in the aptitude test but only 45.33 in the interview (yeah,it went really bad) thus ending up at 55.96. I just told @Megh Parikh that if we combined my aptitude test score with his interview score, we would have qualified! :D Jokes apart I'm a little disappointed none of us could qualify... 71 is a great score! Had you finished the course earlier? Because the course contained many topics we were yet to learn. I'd really like to know as I might try again this year.
- 6 years, 7 months ago
Yes, I had finished the course. Of course, all the extra we learnt were just mere basic concepts. We had extra classes scheduled at our tuitions for completing the course. We did that only in Chemistry and Math. In physics, we only practised what we had learnt because KVPY papers even include questions about complex electric circuits which we could never have learnt in such a short time. And yes, I also attended Biology classes specially scheduled for KVPY preparations.(Out of all sections, I got the most marks in Biology! I don't know how! :P). Surely, this time you will crack KVPY. BEST OF LUCK.
- 6 years, 7 months ago
Thanks!
- 6 years, 7 months ago
I didn't understand properly.
What is $pK_1$ and $pK_2$?
I will try to read the PDF.
Aptitude Test Marks out of 100 : 54.5
Interview Marks out of 100 : 65.17
Total Marks : 57.17
Congrats on getting qualified for scholarship
- 6 years, 7 months ago
Well, I got 71 in the Aptitude test but in interview I got only 47.5. I don't remember what I did wrong that I got such marks but whatever at least I am selected. How are your studies going on?
- 6 years, 7 months ago
Nice. Currently participating in proofathon
- 6 years, 7 months ago
How many Proofathon Competitions have you participated in? What score did you get?
- 6 years, 7 months ago
Sorry, I was busy typing proofathon solutions. This was the first in which I really participated and have attempted 6 questions.
- 6 years, 7 months ago
4
- 5 years, 3 months ago
|
2020-12-05 15:25:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 29, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661440849304199, "perplexity": 3472.3837596277863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747887.95/warc/CC-MAIN-20201205135106-20201205165106-00702.warc.gz"}
|
http://mathcentral.uregina.ca/QQ/database/QQ.09.18/h/jackie1.html
|
SEARCH HOME
Math Central Quandaries & Queries
Question from Jackie: Filling a hole 25ft round 3ft deep how much dirt is needed?
Hi Jackie,
The volume of the hole is the area of the circular top times the depth. I remember that the area $A$ of a circle is given by $A = \pi \; r^2$ where $r$ is the radius. But I don't think you have the radius but you have the distance around, what I would call the circumference of the circle. This is my diagram.
The circumference $C$ of a circle of radius $r$ is given by $C = 2 \; \pi \; r.$
If you solve this equation for $r$ and substitute into the equation for the area you get
$A = \frac{C^2}{4 \; \pi}.$
When I calculated this for $C = 25$ ft I got $A = 1.99$ square feet. (you should check my arithmetic.) The volume of your hole is hence $1.99 \times 3$ cubic feet.
I hope this helps,
Penny
Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
|
2020-07-05 11:37:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8321472406387329, "perplexity": 287.3476774584634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887319.41/warc/CC-MAIN-20200705090648-20200705120648-00424.warc.gz"}
|
https://www.deepdyve.com/lp/springer_journal/the-turbulence-structure-of-the-wake-of-a-thin-flat-plate-at-post-lSVahpwY0O
|
The turbulence structure of the wake of a thin flat plate at post-stall angles of attack
The turbulence structure of the wake of a thin flat plate at post-stall angles of attack The influence of post-stall angles of attack, $$\alpha$$ α , on the turbulent flow characteristics behind a thin high aspect ratio flat plate was investigated experimentally. Time-resolved stereo particle image velocimetry was used in an open-section wind tunnel at a Reynolds number of 6600. The mean field was determined along with the wake topology, force coefficients, vortex shedding frequency, and the terms in the transport equation for the turbulent kinetic energy k. Coherent and incoherent contributions to the Reynolds stress and k-transport terms were estimated. Over the measured range of $$20^\circ \le \alpha \le 90^\circ$$ 20 ∘ ≤ α ≤ 90 ∘ , quasi-periodic vortex shedding is observed and it is shown that most of the fluctuation energy contribution in the wake arises from coherent fluctuations associated with vortex shedding. As the angle of attack is reduced from $$90^\circ$$ 90 ∘ , the length of the recirculation region and the drag decrease, while the shedding frequency increases monotonically. In contrast, mean lift and k are maximized at $$\alpha \approx 40^\circ$$ α ≈ 40 ∘ , suggesting a relationship between the bound vortex circulation and the levels of k. Structural differences in the mean strain field, wake topology, relative contributions to the k-production terms, and significant differences in the incoherent field suggest changes in the wake dynamics for $$\alpha > 40^{\circ }$$ α > 40 ∘ and $$20^{\circ } \le \alpha \le 40^{\circ }$$ 20 ∘ ≤ α ≤ 40 ∘ . For $$\alpha > 40^\circ$$ α > 40 ∘ , coherent contributions to the fluctuation field result in a large region close to the plate exhibiting small levels of negative mean production and generally low levels of advection, despite very high levels of production just downstream of the recirculation region. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Experiments in Fluids Springer Journals
The turbulence structure of the wake of a thin flat plate at post-stall angles of attack
, Volume 58 (6) – May 23, 2017
18 pages
/lp/springer_journal/the-turbulence-structure-of-the-wake-of-a-thin-flat-plate-at-post-lSVahpwY0O
Publisher
Springer Berlin Heidelberg
Subject
Engineering; Engineering Fluid Dynamics; Fluid- and Aerodynamics; Engineering Thermodynamics, Heat and Mass Transfer
ISSN
0723-4864
eISSN
1432-1114
D.O.I.
10.1007/s00348-017-2352-8
Publisher site
See Article on Publisher Site
Abstract
The influence of post-stall angles of attack, $$\alpha$$ α , on the turbulent flow characteristics behind a thin high aspect ratio flat plate was investigated experimentally. Time-resolved stereo particle image velocimetry was used in an open-section wind tunnel at a Reynolds number of 6600. The mean field was determined along with the wake topology, force coefficients, vortex shedding frequency, and the terms in the transport equation for the turbulent kinetic energy k. Coherent and incoherent contributions to the Reynolds stress and k-transport terms were estimated. Over the measured range of $$20^\circ \le \alpha \le 90^\circ$$ 20 ∘ ≤ α ≤ 90 ∘ , quasi-periodic vortex shedding is observed and it is shown that most of the fluctuation energy contribution in the wake arises from coherent fluctuations associated with vortex shedding. As the angle of attack is reduced from $$90^\circ$$ 90 ∘ , the length of the recirculation region and the drag decrease, while the shedding frequency increases monotonically. In contrast, mean lift and k are maximized at $$\alpha \approx 40^\circ$$ α ≈ 40 ∘ , suggesting a relationship between the bound vortex circulation and the levels of k. Structural differences in the mean strain field, wake topology, relative contributions to the k-production terms, and significant differences in the incoherent field suggest changes in the wake dynamics for $$\alpha > 40^{\circ }$$ α > 40 ∘ and $$20^{\circ } \le \alpha \le 40^{\circ }$$ 20 ∘ ≤ α ≤ 40 ∘ . For $$\alpha > 40^\circ$$ α > 40 ∘ , coherent contributions to the fluctuation field result in a large region close to the plate exhibiting small levels of negative mean production and generally low levels of advection, despite very high levels of production just downstream of the recirculation region.
Journal
Experiments in FluidsSpringer Journals
Published: May 23, 2017
DeepDyve is your personal research library
It’s your single place to instantly
that matters to you.
over 18 million articles from more than
15,000 peer-reviewed journals.
All for just $49/month Explore the DeepDyve Library Search Query the DeepDyve database, plus search all of PubMed and Google Scholar seamlessly Organize Save any article or search result from DeepDyve, PubMed, and Google Scholar... all in one place. Access Get unlimited, online access to over 18 million full-text articles from more than 15,000 scientific journals. Your journals are on DeepDyve Read from thousands of the leading scholarly journals from SpringerNature, Elsevier, Wiley-Blackwell, Oxford University Press and more. All the latest content is available, no embargo periods. DeepDyve Freelancer DeepDyve Pro Price FREE$49/month
\$360/year
Save searches from
PubMed
Create lists to
Export lists, citations
|
2018-09-22 03:41:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2913351356983185, "perplexity": 1668.6367398933978}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00127.warc.gz"}
|
https://ncatlab.org/nlab/show/Todd%20class
|
Contents
# Contents
## Idea
By the Hirzebruch-Riemann-Roch theorem the index of the Dolbeault operator is the Todd genus (e.g. Gilkey 95, section 5.2 (more generally so for the Spin^c Dirac operator).
## Properties
### Relation to Thom class and Chern character
###### Proposition
(rational Todd class is Chern character of Thom class)
Let $V \to X$ be a complex vector bundle over a compact topological space. Then the Todd class $Td(V) \,\in\, H^{ev}(X; \mathbb{Q})$ of $V$ in rational cohomology equals the Chern character $ch$ of the Thom class $th(V) \,\in\, K\big( Th(V) \big)$ in the complex topological K-theory of the Thom space $Th(V)$, when both are compared via the Thom isomorphisms $\phi_E \;\colon\; E(X) \overset{\simeq}{\to} E\big( Th(V)\big)$:
$\phi_{H\mathbb{Q}} \big( Td(V) \big) \;=\; ch\big( th(V) \big) \,.$
More generally , for $x \in K(X)$ any class, we have
$\phi_{H\mathbb{Q}} \big( ch(x) \cup Td(V) \big) \;=\; ch\big( \phi_{K}(x) \big) \,,$
which specializes to the previous statement for $x = 1$.
### Relation to the Adams e-invariant
We discuss how the e-invariant in its Q/Z-incarnation (this Def.) has a natural formulation in cobordism theory (Conner-Floyd 66), by evaluating Todd classes on cobounding (U,fr)-manifolds.
This is Prop. below; but first to recall some background:
###### Remark
In generalization to how the U-bordism ring $\Omega^U_{2k}$ is represented by homotopy classes of maps into the Thom spectrum MU, so the (U,fr)-bordism ring $\Omega^{U,fr}_{2k}$ is represented by maps into the quotient spaces $MU_{2k}/S^{2k}$ (for $S^{2k} = Th(\mathbb{C}^{k}) \to Th( \mathbb{C}^k \times_{U(k)} E U(k) ) = MU_{2k}$ the canonical inclusion):
(1)$\Omega^{(U,fr)}_\bullet \;=\; \pi_{\bullet + 2k} \big( MU_{2k}/S^{2k} \big) \,, \;\;\;\;\; \text{for any} \; 2k \geq \bullet + 2 \,.$
###### Remark
The bordism rings for MU, MUFr and MFr sit in a short exact sequence of the form
(2)$0 \to \Omega^U_{\bullet+1} \overset{i}{\longrightarrow} \Omega^{U,f}_{\bullet+1} \overset{\partial}{ \longrightarrow } \Omega^{fr}_\bullet \to 0 \,,$
where $i$ is the evident inclusion, while $\partial$ is restriction to the boundary.
In particular, this means that $\partial$ is surjective, hence that every $Fr$-manifold is the boundary of a (U,fr)-manifold.
###### Proposition
(e-invariant is Todd class of cobounding (U,fr)-manifold)
Evaluation of the Todd class on (U,fr)-manifolds yields rational numbers which are integers on actual $U$-manifolds. It follows with the short exact sequence (2) that assigning to $Fr$-manifolds the Todd class of any of their cobounding $(U,fr)$-manifolds yields a well-defined element in Q/Z.
Under the Pontrjagin-Thom isomorphism between the framed bordism ring and the stable homotopy group of spheres $\pi^s_\bullet$, this assignment coincides with the Adams e-invariant in its Q/Z-incarnation:
(3)$\array{ 0 \to & \Omega^U_{\bullet+1} & \overset{i}{\longrightarrow} & \Omega^{U,f}_{\bullet+1} & \overset{\partial}{ \longrightarrow } & \Omega^{fr}_\bullet & \simeq & \pi^s_\bullet \\ & \big\downarrow{}^{\mathrlap{Td}} && \big\downarrow{}^{\mathrlap{Td}} && \big\downarrow{}^{} && \big\downarrow{}^{e} \\ 0 \to & \mathbb{Z} &\overset{\;\;\;\;\;}{\hookrightarrow}& \mathbb{Q} &\overset{\;\;\;\;}{\longrightarrow}& \mathbb{Q}/\mathbb{Z} &=& \mathbb{Q}/\mathbb{Z} } \,,$
$d$partition function in $d$-dimensional QFTsuperchargeindex in cohomology theorygenuslogarithmic coefficients of Hirzebruch series
0push-forward in ordinary cohomology: integration of differential formsorientation
1spinning particleDirac operatorKO-theory indexA-hat genusBernoulli numbersAtiyah-Bott-Shapiro orientation $M Spin \to KO$
endpoint of 2d Poisson-Chern-Simons theory stringSpin^c Dirac operator twisted by prequantum line bundlespace of quantum states of boundary phase space/Poisson manifoldTodd genusBernoulli numbersAtiyah-Bott-Shapiro orientation $M Spin^c \to KU$
endpoint of type II superstringSpin^c Dirac operator twisted by Chan-Paton gauge fieldD-brane chargeTodd genusBernoulli numbersAtiyah-Bott-Shapiro orientation $M Spin^c \to KU$
2type II superstringDirac-Ramond operatorsuperstring partition function in NS-R sectorOchanine elliptic genusSO orientation of elliptic cohomology
heterotic superstringDirac-Ramond operatorsuperstring partition functionWitten genusEisenstein seriesstring orientation of tmf
self-dual stringM5-brane charge
3w4-orientation of EO(2)-theory
## References
Named after John Arthur Todd.
Original articles:
On the Todd character:
Review:
• Peter Gilkey, Section 5.2 of: Invariance Theory: The Heat Equation and the Atiyah-Singer Index Theorem, 1995 (pdf)
|
2021-01-26 05:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 31, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932627439498901, "perplexity": 3460.0052721912252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704798089.76/warc/CC-MAIN-20210126042704-20210126072704-00406.warc.gz"}
|
https://cstheory.stackexchange.com/questions/39935/complexity-of-maximising-weighted-sum-of-and-functions-on-a-set-of-binary-variab
|
# Complexity of maximising weighted sum of and functions on a set of binary variables
Suppose we have a set of binary variables $a_1, ..., a_n$ that $a_i\in\{0,1\}$. Now we define $m$ and functions over a subset of them: $$j\in\{1,...,m\}: f_j=x_1\land x_2\land...\land x_k$$ in which $$\{x_1,...,x_k\}\subset\{a_1,...,a_n\}$$
Suppose that each variable $a_i$ has also a cost $c_i$ assigned to it and every function $f_j$ has a profit $p_j$ associated with it. Both variables are non-negative $\forall i,j: p_j,c_i\ge 0$.
The problem is how to maximise profits minus costs over the set of all possible $a_i$s: $$(1) \max_{a_1,...,a_n} \left\{\sum_j^m p_j f_j - \sum_i^n c_i a_i \right\}$$
Another related problem is to maximise the profit with constrained costs for some constant $C$: $$(2) \max_{\sum_i c_i a_i \le C} \left\{\sum_j^n p_j f_j \right\}$$.
Now here here the questions I have:
1. Can $(1)$ be solved in a strictly polynomial, or pseudo-polynomial way?
2. Can $(2)$ be solved in pseudo-polynomial time?
By pseudo-polynomial we mean assuming bounds on $|c_i|$ and $|p_j|$, can a polynomial time algorithm be achieved?
It's clear that if $(2)$ pseudo-polynomial solution $(1)$ will also have a pseudo-polynomial solution, by iterating over various values of $C$. Therefore in some sense $(2)$ is a more difficult problem. Moreover, knapsack can be seen as a special case of $(2)$ if we set $f_j=a_j$. Therefore it can't be strictly polynomial. But I can't tell much more about the complexity of these two problems.
• The version of the knapsack problem you described (with $f_j = a_j$) can be easily solved, you simply set $a_i = 1$ if $c_i \ge p_i$ and set $a_i = 0$ otherwise. – Artur Riazanov Jan 7 '18 at 12:34
• @ArturRyazanov you are right, so maximising the difference is an easier problem than the original knapsack, which is only pseudo polynomial. – AmeerJ Jan 7 '18 at 15:13
• $k-\mathrm{CLIQUE}$ can be reduced to (2). Create a variable for each vertex, set all $c_i = 1$, $C=k$ and for each edge $(u,v)$ add the function $u \land v$ with $p=1$. The answer for this instance of (2) equals $\binom{k}{2}$ iff the given graph has $k$-CLIQUE. Thus (2) has no pseudo-polynomial algorithm. – Artur Riazanov Jan 7 '18 at 18:04
• One could do the same for (1): for a case that $f_j$s have only two operands construct a similar graph, $c_i$ cost for vertex $i$ and $p_{ij}$ profit for edge $e(i,j)$ if it exists. Then the problem becomes: choose a subset of vertices with the biggest profit-cost margin. Is this a studied problem, or related to a studied problem? – AmeerJ Jan 7 '18 at 18:45
• I think (1) is, in fact, polynomially-solvable (not completely sure though), I've updated my answer. – Artur Riazanov Jan 7 '18 at 18:48
(1) has a polynomial solution. Consider a graph with source $s$, sink $t$, vertices $U$ corresponding to the variables and vertices $V$ corresponding to the functions. If $f_j = x_1 \land \ldots \land x_k$ then add edges $f_j \to x_i$ for each $i \in \{1,\ldots,k\}$ with capacities equal to $C$ for $C > \sum p_j$. Add edges $x_i \to t$ with capacities $c_i$ and edges $s \to f_j$ with capacities $p_j$. Consider minimum cut between $s$ and $t$ in this graph (it could be easily found with almost any maximal-flow algrorithm). Let $U_s \subseteq U$ and $V_s \subseteq V$ be the sets of vertices in the component of $s$. Then the value of the minimum cut is $$\sum\limits_{i \in U_s} c_i + \sum\limits_{j \not\in V_s} f_j = \underbrace{\sum\limits_{j\in V} f_j}_{\text{constant}} - \underbrace{\left(-\sum\limits_{i \in U_s} c_i + \sum\limits_{j \in V_s} f_j\right)}_{\text{objective function}}$$ Thus if this value is minimised, the objective function is maximised. On the other hand $U_s$ and $V_s$ for minimum cut have the property that if $x_1 \land \ldots \land x_k = f_j \in V_s$ then $x_1, \ldots, x_k \in U_s$ since otherwise the value of the cut is $\ge C$ which is not optimal. Thus setting $x=1$ for $x \in U_s$ and $x=0$ for $x \not\in U_s$ gives an optimal solution for (1).
The case with arbitrary $p_j$ is $\mathbf{NP}\text{-}\mathrm{complete}$ even with polynomially-bounded $|p_j|$.
If you allow $p_j$ to be negative, you can reduce $\mathrm{max}\text{-}\mathrm{SAT}$ to this problem. For each variable $x_i$ of $\mathrm{max}\text{-}\mathrm{SAT}$ add two variables $x_i$ and $\lnot x_i$ to your input. For each of these pair add the condition $x_i \land \lnot x_i$ with the cost $-C$ for a large enough constant $C$. For all clauses of $\mathrm{max}\text{-}\mathrm{SAT}$ input add the corresponding conjunction with all literals reversed: $x \lor \lnot y \lor z$ becomes $\lnot x \land y \land \lnot z$ (here $\lnot x$ and $\lnot z$ are variables in the new problem). $\lnot x \land y \land \lnot z \iff \lnot (x \lor \lnot y \lor z)$ therefore each unsatisfied conjunctions correspond to satisfied clauses. Thus you can set the cost $p$ for each conjunction as $-1$. For $c_i = 0$ for all $i \in \{1,\ldots, n\}$, $m -\max \left\{\sum\limits_j f_j p_j - \sum\limits_i a_i c_i\right\}$ (where $m$ is the number of clauses in the $\mathrm{max}\text{-}\mathrm{SAT}$ instance) is the maximum number of clauses that could be satisfied in the instance of $\mathrm{max}\text{-}\mathrm{SAT}$ or a value greater than or equal to $C - n$ if the instance is unsatisfiable. Thus $C = 3n$ is "big enough". I.e. for negative $p_j$ there is no polynomial solution even with bounded $|p_j|$ and $|c_j|$ unless $\mathbf{P} = \mathbf{NP}$.
|
2019-02-20 11:00:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8852049112319946, "perplexity": 222.9917113712874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494741.0/warc/CC-MAIN-20190220105613-20190220131613-00088.warc.gz"}
|
https://aimsciences.org/article/doi/10.3934/proc.2007.2007.844
|
# American Institute of Mathematical Sciences
2007, 2007(Special): 844-854. doi: 10.3934/proc.2007.2007.844
## Global attractor for a Klein-Gordon-Schrodinger type system
1 Department of Mathematics, National Technical University, Zografou Campus 157 80, Athens, Greece 2 Department of Mathematics, National Technical University, Zografou Campus 157 80, Athens, Hellas, Greece
Received September 2006 Revised July 2007 Published September 2007
In this paper we prove the existence and uniqueness of solutions for the following evolution system of Klein-Gordon-Schrodinger type
$i\psi_t + k\psi_(xx) + i\alpha\psi$ = $\phi\psi + f(x)$,
$\phi_(tt)$ - $\phi_(xx) + \phi + \lambda\phi_t$ = -$Re\psi_x + g(x)$,
$\psi(x,0)=\psi_0(x), \phi(x,0)$ = $\phi_0, \phi_t(x,0)=\phi_1(x)$
$\phi(x,t)=\phi(x,t)=0$, $x\in\partial\Omega, t>0$
where $x \in \Omega, t > 0, k > 0, \alpha > 0, \lambda > 0, f(x)$ and $g(x)$ are the driving terms and $\Omega$ (bounded) $\subset \mathbb{R}$. Also we prove the continuous dependence of solutions of the system on the initial data as well as the existence of a global attractor.
Citation: Marilena N. Poulou, Nikolaos M. Stavrakakis. Global attractor for a Klein-Gordon-Schrodinger type system. Conference Publications, 2007, 2007 (Special) : 844-854. doi: 10.3934/proc.2007.2007.844
[1] Michael Renardy. A backward uniqueness result for the wave equation with absorbing boundary conditions. Evolution Equations & Control Theory, 2015, 4 (3) : 347-353. doi: 10.3934/eect.2015.4.347 [2] Tomás Caraballo, Marta Herrera-Cobos, Pedro Marín-Rubio. Global attractor for a nonlocal p-Laplacian equation without uniqueness of solution. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1801-1816. doi: 10.3934/dcdsb.2017107 [3] M. Keel, Tristan Roy, Terence Tao. Global well-posedness of the Maxwell-Klein-Gordon equation below the energy norm. Discrete & Continuous Dynamical Systems - A, 2011, 30 (3) : 573-621. doi: 10.3934/dcds.2011.30.573 [4] Ahmed Y. Abdallah. Asymptotic behavior of the Klein-Gordon-Schrödinger lattice dynamical systems. Communications on Pure & Applied Analysis, 2006, 5 (1) : 55-69. doi: 10.3934/cpaa.2006.5.55 [5] Irena Lasiecka, Roberto Triggiani. Global exact controllability of semilinear wave equations by a double compactness/uniqueness argument. Conference Publications, 2005, 2005 (Special) : 556-565. doi: 10.3934/proc.2005.2005.556 [6] Zhiming Liu, Zhijian Yang. Global attractor of multi-valued operators with applications to a strongly damped nonlinear wave equation without uniqueness. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 223-240. doi: 10.3934/dcdsb.2019179 [7] Andrew Comech. Weak attractor of the Klein-Gordon field in discrete space-time interacting with a nonlinear oscillator. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 2711-2755. doi: 10.3934/dcds.2013.33.2711 [8] Salah Missaoui, Ezzeddine Zahrouni. Regularity of the attractor for a coupled Klein-Gordon-Schrödinger system with cubic nonlinearities in $\mathbb{R}^2$. Communications on Pure & Applied Analysis, 2015, 14 (2) : 695-716. doi: 10.3934/cpaa.2015.14.695 [9] Hironobu Sasaki. Remark on the scattering problem for the Klein-Gordon equation with power nonlinearity. Conference Publications, 2007, 2007 (Special) : 903-911. doi: 10.3934/proc.2007.2007.903 [10] Karen Yagdjian. The semilinear Klein-Gordon equation in de Sitter spacetime. Discrete & Continuous Dynamical Systems - S, 2009, 2 (3) : 679-696. doi: 10.3934/dcdss.2009.2.679 [11] Satoshi Masaki, Jun-ichi Segata. Modified scattering for the Klein-Gordon equation with the critical nonlinearity in three dimensions. Communications on Pure & Applied Analysis, 2018, 17 (4) : 1595-1611. doi: 10.3934/cpaa.2018076 [12] Aslihan Demirkaya, Panayotis G. Kevrekidis, Milena Stanislavova, Atanas Stefanov. Spectral stability analysis for standing waves of a perturbed Klein-Gordon equation. Conference Publications, 2015, 2015 (special) : 359-368. doi: 10.3934/proc.2015.0359 [13] Chi-Kun Lin, Kung-Chien Wu. On the fluid dynamical approximation to the nonlinear Klein-Gordon equation. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2233-2251. doi: 10.3934/dcds.2012.32.2233 [14] Hironobu Sasaki. Small data scattering for the Klein-Gordon equation with cubic convolution nonlinearity. Discrete & Continuous Dynamical Systems - A, 2006, 15 (3) : 973-981. doi: 10.3934/dcds.2006.15.973 [15] Jun Yang. Vortex structures for Klein-Gordon equation with Ginzburg-Landau nonlinearity. Discrete & Continuous Dynamical Systems - A, 2014, 34 (5) : 2359-2388. doi: 10.3934/dcds.2014.34.2359 [16] Changxing Miao, Jiqiang Zheng. Scattering theory for energy-supercritical Klein-Gordon equation. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 2073-2094. doi: 10.3934/dcdss.2016085 [17] Elena Kopylova. On dispersion decay for 3D Klein-Gordon equation. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5765-5780. doi: 10.3934/dcds.2018251 [18] Stefano Pasquali. A Nekhoroshev type theorem for the nonlinear Klein-Gordon equation with potential. Discrete & Continuous Dynamical Systems - B, 2018, 23 (9) : 3573-3594. doi: 10.3934/dcdsb.2017215 [19] Igor Shevchenko, Barbara Kaltenbacher. Absorbing boundary conditions for the Westervelt equation. Conference Publications, 2015, 2015 (special) : 1000-1008. doi: 10.3934/proc.2015.1000 [20] Emmanuel Hebey and Frederic Robert. Compactness and global estimates for the geometric Paneitz equation in high dimensions. Electronic Research Announcements, 2004, 10: 135-141.
Impact Factor:
|
2019-10-15 21:00:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44655779004096985, "perplexity": 4502.6979223988865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660323.32/warc/CC-MAIN-20191015205352-20191015232852-00072.warc.gz"}
|
http://bugra.github.io/work/notes/2014-02-23/imdb-top-100-movies-analysis-in-depth-part-2/
|
# IMDB Top 100K Movies Analysis in Depth Part 2
### Data¶
Data is from IMDB and includes top 100042 popularly voted movies. This post is second in series, see the first post. I had some great feedback from HN and decided to deal with more on categories of movies. In this one, I will first look at number of movies per category over time and rating of the categories. Second, I will compare popularly voted directors with other directors, give best directors for popularly voted categories. At the end, I will look at the correlation of the categories and do PCA on the movies. As in the first post, I will let the data speak for itself rather than explaining every single graph.
### Are old movies better than the contemporary ones ?¶
#### Mean Score of Ratings over Year¶
In [201]:
In the first post, we observed that the movies that are old consistently rated higher than the contemporary ones. When we look at the mean rating of movies per year(ignoring the count), we could observe this much more clearly. But, wait a second, maybe there are a lot of outliers which manipulate the rating to lower end for the movies that are released in 90's and 00's.
#### Median Score of Ratings over Year¶
In [202]:
Nope, still they are low. Although not as low as the mean rating, they are still low. Yet, they are getting better although not as high as old movies. However, this is too coarse as there are a number of different categories and we are counting over all of the categories and then get the median rating of the movies. Maybe, some of the categories are quite different than the all movies.
## Categories over Time¶
In this section, I separated the 23 categories into 4 different sections(6,6,6,5) based on the count of the categories. My aim is to first look at the count of the categories and then track the median rating for the categories to get a better understanding of how movies per category gets rated in IMDB. I give the total count of the categories in the first post.
Movies given in the following four sections are sorted by the total number of movies. The graphs are also sorted by the count of categories. The largest category for each section would be in the bottom.
### First Section¶
• Drama
• Comedy
• Action
• Romance
• Crime
• Thriller
In [204]:
In [205]:
### Second Section¶
• Horror
• Family
• Sci-Fi
• Fantasy
• Mystery
In [206]:
In [207]:
### Third Section¶
• Musical
• War
• History
• Animation
• Western
• Biography
In [208]:
In [209]:
### Fourth Section¶
• Music
• Sport
• Film-Noir
• News
In [210]:
In [211]:
## Directors¶
In this section, I will look at the directors whose movies median rating is larger than 7 and median vote is larger than 200000. Although the thresholds are arbitrary, it gives good directors if not best. Surprisingly, some of the directors that I thought good are not on the list if they do not satisfy these requirements. For an example, Woody Allen is not on the list as his median vote is about ~24000 even though his median rating is above 7. I did not look at the number of movies, maybe I should but if the movie quality is good and watched(read this one voted), director should make to the list. Ben Affleck is such a director among others.
In the distribution graphs, turquoise is the mean distribution over all of the directors and purple is the director's distribution given on the subplot.
### Rating of Directors¶
In [213]:
Christopher Nolan is consistently higher in his all of movies. Surprisingly, Ben Affleck is on the list. Median rating tolerates one bad movie where the movie is Swept Away for Guy Ritchie has a movie rated below 4.
### Number of Movies over Year¶
In [214]:
Generally, directors produce the movies for a decade or two. Steven Spielberg, Tim Burton, George Lucas and James Cameron are the biggest exceptions.
### Runtime of Movies¶
In [215]:
|
2018-10-20 17:15:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.50632643699646, "perplexity": 1796.9900985507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513009.81/warc/CC-MAIN-20181020163619-20181020185119-00318.warc.gz"}
|
https://tus.elsevierpure.com/ja/publications/boundedness-and-finite-time-blow-up-in-a-quasilinear-parabolicell
|
# Boundedness and finite-time blow-up in a quasilinear parabolic–elliptic–elliptic attraction–repulsion chemotaxis system
Yutaro Chiyo, Tomomi Yokota
## 抄録
This paper deals with the quasilinear attraction–repulsion chemotaxis system {ut=∇·((u+1)m-1∇u-χu(u+1)p-2∇v+ξu(u+1)q-2∇w)+f(u),0=Δv+αu-βv,0=Δw+γu-δwin a bounded domain Ω ⊂ Rn (n∈ N) with smooth boundary ∂Ω , where m, p, q∈ R, χ, ξ, α, β, γ, δ> 0 are constants, and f is a function of logistic type such as f(u) = λu- μuκ with λ, μ> 0 and κ≥ 1 , provided that the case f(u) ≡ 0 is included in the study of boundedness, whereas κ is sufficiently close to 1 in considering blow-up in the radially symmetric setting. In the case that ξ= 0 and f(u) ≡ 0 , global existence and boundedness have already been proved under the condition p<m+2n. Also, in the case that m= 1 , p= q= 2 and f is a function of logistic type, finite-time blow-up has already been established by assuming χα- ξγ> 0. This paper classifies boundedness and blow-up into the cases p< q and p> q without any condition for the sign of χα- ξγ and the case p= q with χα- ξγ< 0 or χα- ξγ> 0.
本文言語 English 61 Zeitschrift fur Angewandte Mathematik und Physik 73 2 https://doi.org/10.1007/s00033-022-01695-y Published - 4月 2022
## フィンガープリント
「Boundedness and finite-time blow-up in a quasilinear parabolic–elliptic–elliptic attraction–repulsion chemotaxis system」の研究トピックを掘り下げます。これらがまとまってユニークなフィンガープリントを構成します。
|
2022-06-26 03:06:26
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529349565505981, "perplexity": 5059.211133615387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00670.warc.gz"}
|
https://www.physicsforums.com/threads/finding-position-from-velocity-trig-function.477826/
|
Finding position from velocity (trig function)
Homework Statement
S(0)=3, find S(2) position wise.
V(t)=xsin(x^2)
The Attempt at a Solution
I tried to integrate with u-substitution and I got -t^4/4cos(t^2). I tested it by taking the derivative and it didn't work out.
Last edited:
Related Introductory Physics Homework Help News on Phys.org
SammyS
Staff Emeritus
Homework Helper
Gold Member
$$v(t)=\frac{dx}{dt}\quad\to\quad\frac{dx}{x\sin(x^2)}=dt$$
Is the equation on the right what you integrated?
@SammyS,
Thanks for the reply, but I have never seen the method you used before. I understand that you manipulated the first equation to get the second, but I do not know why. If you could explain it a bit or give me a link to a website that explains it I would appreciate it.
gneill
Mentor
Are you sure that your velocity function is v(t) = x*sin(x^2), where x is a distance? Is it possible that it's v(t) = t*sin(t^2) instead?
SammyS
Staff Emeritus
Homework Helper
Gold Member
If $$\frac{dx}{dt}=x\sin(x^2)\,,$$
then $$\frac{1}{x\sin(x^2)}\ \frac{dx}{dt}\,dt=dt\,.$$
But, $$\frac{dx}{dt}\,dt=dx\,.$$
Therefore, $$\frac{dx}{x\sin(x^2 )}=dt$$
Now integrate both sides to find t as a function of x.
gneill
Mentor
Now integrate both sides to find t as a function of x.
I think that the LHS is going to prove to be rather difficult to integrate in closed form.
SammyS
Staff Emeritus
Your earlier suggestion: $$v(t)=t\sin(t^2)$$ is probably correct.
|
2020-11-25 03:15:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7038989067077637, "perplexity": 724.0335597424628}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141180636.17/warc/CC-MAIN-20201125012933-20201125042933-00485.warc.gz"}
|
http://www.algebra.com/tutors/your-answers.mpl?userid=solver91311&from=5190
|
Algebra -> Tutoring on algebra.com -> See tutors' answers! Log On
Tutoring Home For Students Tools for Tutors Our Tutors Register Recently Solved
By Tutor
| By Problem Number |
Tutor:
# Recent problems solved by 'solver91311'
Jump to solutions: 0..29 , 30..59 , 60..89 , 90..119 , 120..149 , 150..179 , 180..209 , 210..239 , 240..269 , 270..299 , 300..329 , 330..359 , 360..389 , 390..419 , 420..449 , 450..479 , 480..509 , 510..539 , 540..569 , 570..599 , 600..629 , 630..659 , 660..689 , 690..719 , 720..749 , 750..779 , 780..809 , 810..839 , 840..869 , 870..899 , 900..929 , 930..959 , 960..989 , 990..1019 , 1020..1049 , 1050..1079 , 1080..1109 , 1110..1139 , 1140..1169 , 1170..1199 , 1200..1229 , 1230..1259 , 1260..1289 , 1290..1319 , 1320..1349 , 1350..1379 , 1380..1409 , 1410..1439 , 1440..1469 , 1470..1499 , 1500..1529 , 1530..1559 , 1560..1589 , 1590..1619 , 1620..1649 , 1650..1679 , 1680..1709 , 1710..1739 , 1740..1769 , 1770..1799 , 1800..1829 , 1830..1859 , 1860..1889 , 1890..1919 , 1920..1949 , 1950..1979 , 1980..2009 , 2010..2039 , 2040..2069 , 2070..2099 , 2100..2129 , 2130..2159 , 2160..2189 , 2190..2219 , 2220..2249 , 2250..2279 , 2280..2309 , 2310..2339 , 2340..2369 , 2370..2399 , 2400..2429 , 2430..2459 , 2460..2489 , 2490..2519 , 2520..2549 , 2550..2579 , 2580..2609 , 2610..2639 , 2640..2669 , 2670..2699 , 2700..2729 , 2730..2759 , 2760..2789 , 2790..2819 , 2820..2849 , 2850..2879 , 2880..2909 , 2910..2939 , 2940..2969 , 2970..2999 , 3000..3029 , 3030..3059 , 3060..3089 , 3090..3119 , 3120..3149 , 3150..3179 , 3180..3209 , 3210..3239 , 3240..3269 , 3270..3299 , 3300..3329 , 3330..3359 , 3360..3389 , 3390..3419 , 3420..3449 , 3450..3479 , 3480..3509 , 3510..3539 , 3540..3569 , 3570..3599 , 3600..3629 , 3630..3659 , 3660..3689 , 3690..3719 , 3720..3749 , 3750..3779 , 3780..3809 , 3810..3839 , 3840..3869 , 3870..3899 , 3900..3929 , 3930..3959 , 3960..3989 , 3990..4019 , 4020..4049 , 4050..4079 , 4080..4109 , 4110..4139 , 4140..4169 , 4170..4199 , 4200..4229 , 4230..4259 , 4260..4289 , 4290..4319 , 4320..4349 , 4350..4379 , 4380..4409 , 4410..4439 , 4440..4469 , 4470..4499 , 4500..4529 , 4530..4559 , 4560..4589 , 4590..4619 , 4620..4649 , 4650..4679 , 4680..4709 , 4710..4739 , 4740..4769 , 4770..4799 , 4800..4829 , 4830..4859 , 4860..4889 , 4890..4919 , 4920..4949 , 4950..4979 , 4980..5009 , 5010..5039 , 5040..5069 , 5070..5099 , 5100..5129 , 5130..5159 , 5160..5189 , 5190..5219 , 5220..5249 , 5250..5279 , 5280..5309 , 5310..5339 , 5340..5369 , 5370..5399 , 5400..5429 , 5430..5459 , 5460..5489 , 5490..5519 , 5520..5549 , 5550..5579 , 5580..5609 , 5610..5639 , 5640..5669 , 5670..5699 , 5700..5729 , 5730..5759 , 5760..5789 , 5790..5819 , 5820..5849 , 5850..5879 , 5880..5909 , 5910..5939 , 5940..5969 , 5970..5999 , 6000..6029 , 6030..6059 , 6060..6089 , 6090..6119 , 6120..6149 , 6150..6179 , 6180..6209 , 6210..6239 , 6240..6269 , 6270..6299 , 6300..6329 , 6330..6359 , 6360..6389 , 6390..6419 , 6420..6449 , 6450..6479 , 6480..6509 , 6510..6539 , 6540..6569 , 6570..6599 , 6600..6629 , 6630..6659 , 6660..6689 , 6690..6719 , 6720..6749 , 6750..6779 , 6780..6809 , 6810..6839 , 6840..6869 , 6870..6899 , 6900..6929 , 6930..6959 , 6960..6989 , 6990..7019 , 7020..7049 , 7050..7079 , 7080..7109 , 7110..7139 , 7140..7169 , 7170..7199 , 7200..7229 , 7230..7259 , 7260..7289 , 7290..7319 , 7320..7349 , 7350..7379 , 7380..7409 , 7410..7439 , 7440..7469 , 7470..7499 , 7500..7529 , 7530..7559 , 7560..7589 , 7590..7619 , 7620..7649 , 7650..7679 , 7680..7709 , 7710..7739 , 7740..7769 , 7770..7799 , 7800..7829 , 7830..7859 , 7860..7889 , 7890..7919 , 7920..7949 , 7950..7979 , 7980..8009 , 8010..8039 , 8040..8069 , 8070..8099 , 8100..8129 , 8130..8159 , 8160..8189 , 8190..8219 , 8220..8249 , 8250..8279 , 8280..8309 , 8310..8339 , 8340..8369 , 8370..8399 , 8400..8429 , 8430..8459 , 8460..8489 , 8490..8519 , 8520..8549 , 8550..8579 , 8580..8609 , 8610..8639 , 8640..8669 , 8670..8699 , 8700..8729 , 8730..8759 , 8760..8789 , 8790..8819 , 8820..8849 , 8850..8879 , 8880..8909 , 8910..8939 , 8940..8969 , 8970..8999 , 9000..9029 , 9030..9059 , 9060..9089 , 9090..9119 , 9120..9149 , 9150..9179 , 9180..9209 , 9210..9239 , 9240..9269 , 9270..9299 , 9300..9329 , 9330..9359 , 9360..9389 , 9390..9419 , 9420..9449 , 9450..9479 , 9480..9509 , 9510..9539 , 9540..9569 , 9570..9599 , 9600..9629 , 9630..9659 , 9660..9689 , 9690..9719 , 9720..9749 , 9750..9779 , 9780..9809 , 9810..9839 , 9840..9869 , 9870..9899 , 9900..9929 , 9930..9959 , 9960..9989 , 9990..10019 , 10020..10049 , 10050..10079 , 10080..10109 , 10110..10139 , 10140..10169 , 10170..10199 , 10200..10229 , 10230..10259 , 10260..10289 , 10290..10319 , 10320..10349 , 10350..10379 , 10380..10409 , 10410..10439 , 10440..10469 , 10470..10499 , 10500..10529 , 10530..10559 , 10560..10589 , 10590..10619 , 10620..10649 , 10650..10679 , 10680..10709 , 10710..10739 , 10740..10769 , 10770..10799 , 10800..10829 , 10830..10859 , 10860..10889 , 10890..10919 , 10920..10949 , 10950..10979 , 10980..11009 , 11010..11039 , 11040..11069 , 11070..11099 , 11100..11129 , 11130..11159 , 11160..11189 , 11190..11219 , 11220..11249 , 11250..11279 , 11280..11309 , 11310..11339 , 11340..11369 , 11370..11399 , 11400..11429 , 11430..11459 , 11460..11489 , 11490..11519 , 11520..11549 , 11550..11579 , 11580..11609 , 11610..11639 , 11640..11669 , 11670..11699 , 11700..11729 , 11730..11759 , 11760..11789 , 11790..11819 , 11820..11849 , 11850..11879 , 11880..11909 , 11910..11939 , 11940..11969 , 11970..11999 , 12000..12029 , 12030..12059 , 12060..12089 , 12090..12119 , 12120..12149 , 12150..12179 , 12180..12209 , 12210..12239 , 12240..12269 , 12270..12299 , 12300..12329 , 12330..12359 , 12360..12389 , 12390..12419 , 12420..12449 , 12450..12479 , 12480..12509 , 12510..12539 , 12540..12569 , 12570..12599 , 12600..12629 , 12630..12659 , 12660..12689 , 12690..12719 , 12720..12749 , 12750..12779 , 12780..12809 , 12810..12839 , 12840..12869 , 12870..12899 , 12900..12929 , 12930..12959 , 12960..12989 , 12990..13019 , 13020..13049 , 13050..13079 , 13080..13109 , 13110..13139 , 13140..13169 , 13170..13199 , 13200..13229 , 13230..13259 , 13260..13289 , 13290..13319 , 13320..13349 , 13350..13379 , 13380..13409 , 13410..13439 , 13440..13469 , 13470..13499 , 13500..13529 , 13530..13559 , 13560..13589 , 13590..13619 , 13620..13649 , 13650..13679 , 13680..13709 , 13710..13739 , 13740..13769 , 13770..13799 , 13800..13829 , 13830..13859 , 13860..13889 , 13890..13919 , 13920..13949 , 13950..13979 , 13980..14009 , 14010..14039 , 14040..14069 , 14070..14099 , 14100..14129 , 14130..14159 , 14160..14189 , 14190..14219 , 14220..14249 , 14250..14279 , 14280..14309 , 14310..14339 , 14340..14369 , 14370..14399 , 14400..14429 , 14430..14459 , 14460..14489 , 14490..14519 , 14520..14549 , 14550..14579 , 14580..14609 , 14610..14639 , 14640..14669 , 14670..14699 , 14700..14729 , 14730..14759 , 14760..14789 , 14790..14819 , 14820..14849 , 14850..14879 , 14880..14909 , 14910..14939 , 14940..14969 , 14970..14999 , 15000..15029 , 15030..15059 , 15060..15089 , 15090..15119 , 15120..15149 , 15150..15179 , 15180..15209 , 15210..15239 , 15240..15269 , 15270..15299 , 15300..15329 , 15330..15359 , 15360..15389 , 15390..15419 , 15420..15449 , 15450..15479 , 15480..15509 , 15510..15539 , 15540..15569 , 15570..15599 , 15600..15629 , 15630..15659 , 15660..15689 , 15690..15719 , 15720..15749 , 15750..15779 , 15780..15809 , 15810..15839 , 15840..15869 , 15870..15899 , 15900..15929 , 15930..15959 , 15960..15989 , 15990..16019 , 16020..16049 , 16050..16079 , 16080..16109 , 16110..16139 , 16140..16169 , 16170..16199 , 16200..16229 , 16230..16259 , 16260..16289 , 16290..16319 , 16320..16349 , 16350..16379 , 16380..16409 , 16410..16439 , 16440..16469 , 16470..16499 , 16500..16529 , 16530..16559 , 16560..16589 , 16590..16619 , 16620..16649 , 16650..16679 , 16680..16709 , 16710..16739 , 16740..16769 , 16770..16799 , 16800..16829 , 16830..16859 , 16860..16889 , 16890..16919 , 16920..16949 , 16950..16979 , 16980..17009 , 17010..17039 , 17040..17069, >>Next
Angles/555879: Two angles are supplemetnary and congruent. How many degrees are there in each angle?1 solutions Answer 361975 by solver91311(17077) on 2012-01-11 18:10:13 (Show Source): You can put this solution on YOUR website! Let the measure of either angle be , then the measure of the other angle must be as well because we are given that the two angles are congruent. Two angles are supplementary if and only if the sum of their degree measures is . So: Solve for John My calculator said it, I believe it, that settles it
Numbers_Word_Problems/555867: In the back stockroom at the Wheel Shop, the number of seats and horns equaled the number of wheels. The number seats and handelbars equaled the number of horns. Twice the number of wheels is equal to three times number of handlebars. Determine the relationship of horns to seats. Copied exactly.1 solutions Answer 361974 by solver91311(17077) on 2012-01-11 18:04:08 (Show Source): You can put this solution on YOUR website! Just take it one step at a time. Let represent the number of seats, represent the number of horns, represent the number of wheels, and represent the number of handlebars. Given 1: Given 2: Given 3: Substituting, 2 into 1: Solve 3 for Substitute for Add to both sides: Multiply by 2: Substitute into given #2: John My calculator said it, I believe it, that settles it
Geometry_Word_Problems/555862: I could not get the answer for this since it is about circumference and areas of a circle. Okay this is the question,C= 14.8 km. Please HELP! 1 solutions Answer 361970 by solver91311(17077) on 2012-01-11 17:51:25 (Show Source): You can put this solution on YOUR website! The area of a circle is given by , but you are given the circumference, not the radius. The circumference is given by , therefore, . Substituting into the area formula: You can do your own arithmetic. John My calculator said it, I believe it, that settles it
Triangles/555861: Two sides are 5" and 7" in length, What couldn't be the third side? A. 11 B. 13 C. 6 D. 7 1 solutions Answer 361967 by solver91311(17077) on 2012-01-11 17:43:13 (Show Source): You can put this solution on YOUR website! In order to have a triangle, the sum of the two shortest sides must be MORE than the measure of the longest side. So which of the given answers is greater than the sum of the two given sides? John My calculator said it, I believe it, that settles it
Permutations/555825: what is the meaning of14P3 the notation of 1 solutions Answer 361965 by solver91311(17077) on 2012-01-11 17:41:05 (Show Source): You can put this solution on YOUR website! The number of permutations of 14 things taken 3 at a time. The number of permutations of things taken at a time is . For your problem, you have 14 different things and you want to know how many different ways you can choose three of them given that the order of the selection matters. For example, if you have a club with 14 members, how many ways can you choose a president, vice-president, and treasurer assuming that the three positions must be filled by three different people and a selection that puts Suzie in the president position is different than a selection that puts her in either of the other positions? Simply this: There are 14 possibilities for the first position, then for each of those possibilities there are 13 possibilities for the second position, then for each of those 14 times 13 possibilities there are 12 possibilities for the third position. In this case and so: John My calculator said it, I believe it, that settles it
Equations/555816: Please help me solve this word problem. the electric current (I), in amperes, in a circuit varies directly as the voltage (V). When 12 volts are applied, the current is 4 amperes. Predict the current when 18 volts are applied. 1 solutions Answer 361963 by solver91311(17077) on 2012-01-11 17:24:33 (Show Source): You can put this solution on YOUR website! If varies directly as then . Using the initial condition values, calculate by calculating . Then calculate any other value using your knowledge of the value of John My calculator said it, I believe it, that settles it
Probability-and-statistics/553187: The semi-major axis has length 4 units and foci are at (2,3) and (2,-3)1 solutions Answer 360730 by solver91311(17077) on 2012-01-05 17:39:57 (Show Source): You can put this solution on YOUR website! Truly fascinating. Were you going to ask a question at some point? John My calculator said it, I believe it, that settles it
Quadratic-relations-and-conic-sections/553179: Write a equation in standard form. State whether the graph of the equation is a parabola, circle, ellipse, or hyperbola. 12. 6x squared + 6y squared = 1621 solutions Answer 360729 by solver91311(17077) on 2012-01-05 17:38:17 (Show Source): You can put this solution on YOUR website! Divide both sides by 6 Has term AND term: NOT a parabola The term and the term have the same sign: NOT a hyperbola The term and the term have equal coefficients: NOT an ellipse Circle. Center at , radius John My calculator said it, I believe it, that settles it
Miscellaneous_Word_Problems/553183: If each lap in a pool is 100 meters long, how many laps equal one mile? Round the nearest tenth.1 solutions Answer 360728 by solver91311(17077) on 2012-01-05 17:31:41 (Show Source): You can put this solution on YOUR website! Multiply the number of kilometers in one mile by 10. Then round. John My calculator said it, I believe it, that settles it
Surface-area/553181: how do i find the length of the radius if the circumference 10 feet? 1 solutions Answer 360727 by solver91311(17077) on 2012-01-05 17:30:30 (Show Source): You can put this solution on YOUR website! so John My calculator said it, I believe it, that settles it
Polygons/553171: in a parallelogram with area of 40, a side is 3 less than the altitude drawn to that side. find the altitude of the parallelogram.1 solutions Answer 360719 by solver91311(17077) on 2012-01-05 17:16:10 (Show Source): You can put this solution on YOUR website! The area of a parallelogram is the measure of the altitude times the measure of the side to which the altitude is drawn, so: So solve for the positive root. John My calculator said it, I believe it, that settles it
Expressions-with-variables/553170: 63 is 1 more than twice the number of miles Timothy drove1 solutions Answer 360718 by solver91311(17077) on 2012-01-05 17:12:11 (Show Source): You can put this solution on YOUR website! Truly fascinating. Was there a question in there somewhere? John My calculator said it, I believe it, that settles it
Graphs/553159: They want me to draw a rectangle with 22 square units inside it--that's the problem, but it seems like a trick because you cannot do it. it doesn't even out. Pls help!1 solutions Answer 360717 by solver91311(17077) on 2012-01-05 17:10:44 (Show Source): You can put this solution on YOUR website! What's the matter with a rectangle that is 1 unit wide by 22 units long? Or a 2 by 11 rectangle? Or a 4 by 5.5 rectangle? Or a 0.1 by 220 rectangle. Or a by rectangle. There are an infinite number of different choices. John My calculator said it, I believe it, that settles it
Human-and-algebraic-language/553160: 1/3 less than a reciprocal of a certain number1 solutions Answer 360715 by solver91311(17077) on 2012-01-05 17:02:38 (Show Source): You can put this solution on YOUR website! Fascinating. What did you want to do with this startling little morsel of mathematical manifesto? John My calculator said it, I believe it, that settles it
Radicals/553154: I have two questions it says review the expressions below, In a paragraph, describe how to simplify each expression. 1. 16to the power of 1/2. Now i need you to understand that the 1/2 is above the 16 its little so you get it right. 2. this one im not sure how im suppose to show you it has like a check mark and a line then under the line it has 100 im not sure what that is called im sorry but the check mark is hooked on the line if that helps you at all sort of looks like a old style division problem almost.1 solutions Answer 360713 by solver91311(17077) on 2012-01-05 16:49:04 (Show Source): You can put this solution on YOUR website! Raising something to the 1/2 power is the same as taking the square root. What number whem multiplied by itself is equal to 16. Hint 3 times 3 is 9 and 5 times 5 is 25. and This means to take the positive square root of the value inside. What number when multiplied by itself is 100? to render these expressions in plain text use ^ to indicate raising to a power and sqrt to indicate square root. I.e. you should have rendered your expressions as 16^(1/2) and sqrt(100). John My calculator said it, I believe it, that settles it
Polygons/553148: how many sides does a polygon have if the sum of its angle measures is 27001 solutions Answer 360712 by solver91311(17077) on 2012-01-05 16:46:29 (Show Source): You can put this solution on YOUR website! The sum of the interior angle measures of an -sided polygon is given by: So solve: for John My calculator said it, I believe it, that settles it
Equations/553130: 6(2^x)-48=0 and please show me how you got your answer.! Thanks1 solutions Answer 360708 by solver91311(17077) on 2012-01-05 16:42:17 (Show Source): You can put this solution on YOUR website! But and since we can say Or, if you are a real stickler for rigor: John My calculator said it, I believe it, that settles it
Probability-and-statistics/553121: if you roll two dice 36 times will you get a product of 1 exactly once1 solutions Answer 360693 by solver91311(17077) on 2012-01-05 16:20:36 (Show Source): You can put this solution on YOUR website! Are you making a statement or asking a question? John My calculator said it, I believe it, that settles it
Triangles/553120: In triangle ABC,AB=3, BC=4, and AC=6. What is the largest angle? thankyou! 1 solutions Answer 360692 by solver91311(17077) on 2012-01-05 16:19:07 (Show Source): You can put this solution on YOUR website! In any triangle, the largest angle is opposite the longest side. John My calculator said it, I believe it, that settles it
Graphs/553110: Solve: Include a sketch to support answer. I isolated x to get and but I don't understand how x is greater than or equal to and less than or equal to . I also don't know how to do the sketch. 1 solutions Answer 360688 by solver91311(17077) on 2012-01-05 16:13:01 (Show Source): You can put this solution on YOUR website! Multiply both sides by Multiply both sides by . Don't forget to reverse the sense of the inequality because of multiplying by a number less than zero. Add 1 to both sides: Now when you take the root of bothsides, recognize that Which is to say that or Draw a number line. Put a filled in dot at . Make a fat arrow going to the right with an arrowhead indicating it goes on forever. Put a filled in dot at . Make a fat arrow going to the left with an arrowhead indicating it goes on forever. John My calculator said it, I believe it, that settles it
Equations/553107: Hi can you please help me solve; I added all three number then divide, what do I do after that? Jacqui has gradess of 88 and 77 on her first two algebra tests. If she wants an average of at least 71, what possible scores can she make on her third test? Thank you...1 solutions Answer 360684 by solver91311(17077) on 2012-01-05 15:52:34 (Show Source): You can put this solution on YOUR website! To find the average of three numbers, you add the numbers and divide by 3. But if you want that average of three numbers to be a particular thing, namely 71 in this case, you multiply by 3 to get the sum of the three numbers. 3 times 71 is 213. Now you have two of her scores, 88 and 77. 88 plus 77 is 165. That means the third test must be at least 213 minus 165 or 48. Presuming that 100 is the maximum score possible, the score, , on the third test must be in the interval: And replace the upper limit number if the maximum possible score is other than 100. John My calculator said it, I believe it, that settles it
Angles/553099: what is the degree of measure of an angle whose complement is 40% of its supplement? could you please show your work so i know how to do it for future reference.1 solutions Answer 360680 by solver91311(17077) on 2012-01-05 15:44:59 (Show Source): You can put this solution on YOUR website! Let represent the degree measure of the desired angle. Then is the degree measure of the supplement of the angle and is the degree measure of the complement of the angle. We are given that: Solve for . John My calculator said it, I believe it, that settles it
Parallelograms/553096: Determine whether the figure with the given vertices is a parallelogram. Use the method of the Distance and Slope Formula. R (-2, 5) O (1, 3) M (-3, -4) Y (-6, -2)1 solutions Answer 360676 by solver91311(17077) on 2012-01-05 15:30:31 (Show Source): You can put this solution on YOUR website! Use the slope formula: where and are the coordinates of the given points. To calculate the slopes of the lines containing the four segments RO, OM, MY, and YR. If ROMY is a parallelogram, then slope RO = slope MY and slope OM = slope YR. John My calculator said it, I believe it, that settles it
Numeric_Fractions/553091: How do I get help in passing the placement test on going back to college?1 solutions Answer 360675 by solver91311(17077) on 2012-01-05 15:25:18 (Show Source): You can put this solution on YOUR website! Hire a tutor would be one way. John My calculator said it, I believe it, that settles it
Trigonometry-basics/553053: Solve: , 1 solutions Answer 360668 by solver91311(17077) on 2012-01-05 15:07:02 (Show Source): You can put this solution on YOUR website! Since we can write: Multiply by Use the unit circle, recalling that sin of the angle is the -coordinate of the point of intersection of the terminal ray with the unit circle and find all angles where in your given interval. Note that the given interval is one and a half trips around the circle. Hint: start at and go backwards. John My calculator said it, I believe it, that settles it
�t�e�s�t/553054: 1.range of the following relation: R: {(3, -5), (1, 2), (-1, -4), (-1, 2)} is 2.Part 1: Create a relation of five ordered pairs that is a function. In complete sentences explain why this relation is a function. Part 2: Create a relation of five ordered pairs that is not a function. In complete sentences explain why this relation is not a function. 3.The domain of the following relation: R: {(6, -2), (1, 2), (-3, -4), (-3, 2)} is? 4.A relation is1 solutions Answer 360644 by solver91311(17077) on 2012-01-05 13:39:00 (Show Source): You can put this solution on YOUR website! Is there some part of "One question per post" that you don't quite understand, or did you simply not read the instructions for posting -- the ones that are clearly marked "Read before posting"? John My calculator said it, I believe it, that settles it
Signed-numbers/553046: I am currently tutoring a student at a local middle school. She had this pre-algebra problem from a test that I believe has a syntax error which makes it difficult to solve. However, I have forgotten the rules that would make this a syntax error for me to explain it to her. This is the equation as written: (-2)-(-8)/(6+4x(-4)-(-6))=x I have entered this equation on a spreadsheet program and it will not take the equation as written. It forces the parenthesis in this order: (-2-(-8)/(6+4*(-4)-(-6)))=x Why is the first equation incorrect and what are the steps to solve? Respectfully, Natalie Witte1 solutions Answer 360643 by solver91311(17077) on 2012-01-05 13:36:50 (Show Source): You can put this solution on YOUR website! Spreadsheet arithmetic and algebraic notation, while related, are really two different things. The first equation is incorrect in that it is ambiguous. Does it mean: or The first way has no real number solutions (the solution is a conjugate pair of complex numbers) whereas the two solutions for the second way are real but irrational -- both of which seem a bit beyond what you would typically see in middle school -- UNLESS the in the denominator expression is supposed to mean "times" as opposed to being the variable , in which case the answer is a single integer. But you still need to clarify the ambiguity, i.e., is the (-2) inside or outside of the fraction? You need to clarify what you really mean. John My calculator said it, I believe it, that settles it
Linear-equations/552624: I just can't figure this out... please help :) In 1990, the life expectancy of males in a certain country was 64.4 years. In 1995, it was 66.9 years. Let E represent the life expectancy in year t and let t represent the number of years since 1990. The linear function E(t) that fits the data is E(t)=____t+___. (round to the nearest tenth) Use the function to predict the life expectancy of males in 2005. E(15)=____ (round to the nearest tenth) Im at a loss.... Thank you in advance for your help!!!1 solutions Answer 360427 by solver91311(17077) on 2012-01-04 15:38:47 (Show Source): You can put this solution on YOUR website! You are given data for two ordered pairs, , and are told that this is a linear relationship. for 1990, , and for 1995, . So your two ordered pairs are and Use the two-point form of an equation of a line: where and are the coordinates of the given points. Then put the result into slope intercept form: For the second part of your problem, calculate the value of that represents the year 2005. Hint: Subtract 1990 from 2005. Then substitute that value of into your equation and do the arithmetic. John My calculator said it, I believe it, that settles it
Miscellaneous_Word_Problems/552618: School prom committee sold 300 tickets to the prom dinner and collected $5256. There were two choices a steak dish that costs$18 and a fish dish that cost \$16. Now write a system of equations1 solutions Answer 360417 by solver91311(17077) on 2012-01-04 15:16:04 (Show Source): You can put this solution on YOUR website! You have a question that involves two things. You have information about the total number of things, about the value of one unit of each of the things, and about the total value of both things put together. If you let represent the number of one of the things and represent the number of the other thing, and is the total number of both things together, then you can write an equation like this: Then if is the value of one item of , is the value of one item of , and is the total value of all of the things, then you can write a second equation like this: John My calculator said it, I believe it, that settles it
expressions/552016: If f(x) = x + 4, find f(3). (1 point) 1 7 –1 121 solutions Answer 360026 by solver91311(17077) on 2012-01-02 21:14:12 (Show Source): You can put this solution on YOUR website! Replace x with 3 and do the arithmetic. John My calculator said it, I believe it, that settles it
real-numbers/552009: what type of number is closed under subtraction? prime numbers 1 solutions Answer 360023 by solver91311(17077) on 2012-01-02 21:09:00 (Show Source): You can put this solution on YOUR website! If the primes are closed under subtraction, then the result of any prime subtracted from any prime will have a prime result. Is 7 minus 3 a prime number? John My calculator said it, I believe it, that settles it
|
2013-06-19 03:28:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5103558897972107, "perplexity": 3905.304484775148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707440693/warc/CC-MAIN-20130516123040-00032-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-2-section-2-6-solving-equations-the-addition-and-multiplication-properties-exercise-set-page-149/51
|
# Chapter 2 - Section 2.6 - Solving Equations: The Addition and Multiplication Properties - Exercise Set - Page 149: 51
$-8\div x$
#### Work Step by Step
divided by indicates division
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2018-12-12 06:17:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7972615361213684, "perplexity": 2176.7970372802056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823738.9/warc/CC-MAIN-20181212044022-20181212065522-00342.warc.gz"}
|
https://astronomy.stackexchange.com/questions/14309/are-wikipedias-sun-ecliptic-coordinate-formulae-accurate
|
Are Wikipedia's sun ecliptic-coordinate formulae accurate?
I'm creating a C++ program that calculates the ecliptic coordinates based on the formulae from Wikipedia But, my calculations appear off. The mean anomaly for today, for example, should be 80.4-something; both my program and Google calculate approx. 79.2 - the result of (357.528 + 0.9856003*5927) % 360 (where 5927 is the number of days since January 1, 2000, GMT)
• where are you getting the 80.4 from? – costrom Mar 25 '16 at 19:19
Assuming that you used the time of 170900 EDT(-4 UT), which is 230900 UT, the following is my math:
Let the Julian Date for 25Mar2016 at 230900, which is 2457473.464583, be set to JD;
Given the formula (on Wikipedia:n = JD - 2451545.0 ), your n is 5928.464583. Then, taking the second formula ( g = 357.528° + 0.9856003° * n ) you get g=357.528°+5843.0964715441749 = 6200.3544715441749 mod 360 = +80.35447°. Were you to round up this would give +80.4.
Without how you handled your date/time in c++ and your rounding, I would wager that is were the error occurred.
The formulae on wikipedia are correct.
As you haven't explained where you have "80.4" from, this answer is a "best effort".
The value you have calculated is close, so there may be a small error in your code. My guess is that you haven't correctly calculated the time as a Julian Day Number. The Julian Day changes at noon UT (probably as this means that the Julian Day doesn't change during the European night) Whereas most other systems of dates start and end at midnight. This can easily introduce a 12 hour error into your calculations. When you say "number of days since Jan 1 2000, are you starting your counting at noon or midnight?
|
2021-05-10 08:01:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8089877367019653, "perplexity": 1711.1097102628487}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00003.warc.gz"}
|
https://www.gamedev.net/forums/topic/398614-undefined-reference-to/
|
• 13
• 18
• 19
• 27
• 10
# Undefined reference to...
This topic is 4295 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Agggh i get these errors every time i compile
undefined reference to Bitmap::Bitmap(HDC__*, char*)'
undefined reference to Bitmap::Draw(HDC__*, int, int)'
undefined reference to GameEngine::m_pGameEngine'
undefined reference to GameEngine::m_pGameEngine'
undefined reference to GameEngine::m_pGameEngine'
`
Any suggestions? -Thanks
##### Share on other sites
The error means just what it says, you're trying to reference something which has not been defined. You may be missing an include file or have just forgotten to define your functions/variables before trying to use them.
More code would be helpfull if you still can't figure it out.
|
2018-03-19 18:56:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19108347594738007, "perplexity": 3984.3952127779594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647044.86/warc/CC-MAIN-20180319175337-20180319195337-00328.warc.gz"}
|
https://jpt.spe.org/
|
Share
Trending Content
Researchers are building a comprehensive database on hundreds of salt domes to help expand subsurface hydrogen storage in the US.
Controversial Sale 259 was conducted as mandated by the Inflation Reduction Act of 2022.
Chad’s dispute with ExxonMobil and Savannah Energy may be headed for a second round of arbitration before the International Chamber of Commerce in Paris.
• The Calgary-based shale producer said the deal involves at least 600 new well locations that will keep it drilling for the next 20 years.
• The proposed transaction also would be the first step in establishing a joint venture between BP and ADNOC.
• The $1.8-billion project will add crucial gas supplies to fuel Trinidad and Tobago's LNG export capabilities. Get JPT articles in your LinkedIn feed and stay current with oil and gas news and technology. • The SPE Board has approved a new policy allowing AI-generated content to be used within SPE publications but under specific conditions. • The SPE Board of Directors approved the sale of the SPE office building in Richardson, Texas, which is expected to result in SPE saving approximately$750,000 annually in operating costs.
• Fueling the success of SPE members and the future of the oil and gas industry.
• From refracturing old wells to ones that don’t have to be fractured at all, notable producers argue that experiments are paying off.
• Over 1,000 hours of remotely monitored continuous production was achieved on an unmanned platform—a first for standalone offshore solids management in the North Sea.
• Geothermal energy is bidding to emerge from its dark horse status in Texas and become a possible solution of choice for some renewable applications.
• Federal infrastructure law gives states financial incentive to remediate orphan wells.
## Stay Connected
President's Column
• The SPE Strategic Plan has been updated, and the changes to SPE’s bylaws have been finalized.
|
2023-03-30 20:30:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31415560841560364, "perplexity": 5614.747919372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949387.98/warc/CC-MAIN-20230330194843-20230330224843-00545.warc.gz"}
|
https://www.physicsforums.com/threads/combinatorics-mathematical-induction.614187/
|
# Homework Help: Combinatorics - Mathematical Induction?
1. Jun 15, 2012
### nintendo424
Hello, I am having trouble solving this problem. Maybe I'm just overreacting to it. In my two semesters in discrete math/combinatorics, I've never seen a problem like this (with two summations) and been asked to prove it. Can some one help?
$\sum^{n}_{i=1} i^3 = \frac{n^2(n+1)^2}{4} = (\sum^{n}_{i=1} i)^2$
I mean, I know the whole S(n), S(1), S(k), S(k+1) steps, but I'm just unsure of how to write it. The solutions manual for the book skip that problem.
Book: Discrete And Combinatorial Mathematics: An Applied Introduction by Ralph P. Grimaldi, 5th Edition.
2. Jun 15, 2012
### micromass
First of all, this is a textbook problem, so it belongs in the homework forums. I moved it for you
Second, you actually need to show two things:
$$\sum_{i=1}^n i^3=\frac{n^2(n+1)^2}{4}$$
and
$$\sum_{i=1}^n i = \frac{n(n+1)}{2}$$
(and square both sides)
Can you do that?
3. Jun 15, 2012
### nintendo424
Thank you very much! That helped a lot, I just finished my proof. :D That makes sense why you'd have to break it up. I didn't put the relationship between $\sum^{n}_{i=1}i = \frac{n(n+1)}{2}$ and $(\sum^{n}_{i=1}i)^2 = \frac{n^2(n+1)^2}{4}$ together. lol
4. Jun 15, 2012
### mathman
These are two separate problems. ∑i³ is one and ∑i is the other. Have you tried either?
The question belongs in mathematics, not computer science.
Last edited: Jun 15, 2012
|
2018-08-22 00:02:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7680548429489136, "perplexity": 1659.3139607860442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219197.89/warc/CC-MAIN-20180821230258-20180822010258-00118.warc.gz"}
|
https://xn--2-umb.com/22/bls-signatures/index.html
|
# BLS Signatures
$$\gdef\F{\mathbb{F}} \gdef\G{\mathbb{G}} \gdef\g{\mathrm{G}} \gdef\h{\mathtt{H}} \gdef\e{\mathrm{e}}$$
Previously I wrote about several elliptic curve signature schemes. I did not cover pairing based ones. These allow for some great aggregation schemes. so let's cover those now.
## Background
### Hash to curve
Let $\h_{\mathcal S} : \mathcal I → \mathcal S$ be a hash function mapping some input set $\mathcal I$ to the output set $\mathcal S$. A superscript like $\h'$ is used to indicate two domain separated hash functions.
Securely generating elliptic curve points from binary noise is challenging. The naive solution of multiplying by a generator is not secure, since the discrete logarithm is known. The BLS paper introduced the first solution: repeatedly trying $x$ coordinates until a valid curve point is found. This method is generic but not constant time. Constant time methods exists but are curve specific.
### Pairing curves
Let $\G_1$, $\G_2$, $\G_3$ be elliptic curve groups with the same scalar field $\F$, generators $\g_1$, $\g_2$, $\g_3$ and a pairing $\e: \G_1 × \G_2 → \G_3$.
A pairing is a function that satisfies $\e(a ⋅ A, b ⋅ B) = a ⋅ b ⋅ \e(A, B)$ and is not the trivial solution $\e(\dummyarg,\dummyarg) = 0$. From this it follows that $\e(A_1 + A_2, B) = \e(A_1, B) + \e(A_2, B)$ and other useful linear properties.
Note. The pairing is symmetrical in $\G_1$ and $\G_2$ so protocols also work with groups and pairing arguments swapped. This is useful if one group has better performance than the other.
Finding a pairing that satisfy the requirements is challenging, especially when there are additional constraints such as having large binary roots of unity in $\F$. Different families of solutions have been found:
• MNT, Miyaji-Nakabayashi-Takano (2001; paper).
• BLS, Barreto-Lynn-Scott (2002; paper).
• BN, Barreto-Naehrig (2005; paper).
• KSS, Kachisa-Schaefer-Scott (2008; paper).
Two important specific solutions are
Note. The 128 in alt_bn128 revers to the target security level, but it was since found that it is "closer to 96 or so". BLS12-381 development was in part motivated to address this and targets 128 bit security.
## Signature aggregation
Like with elliptic curve protocols, the private key is a random scalar field element $x ∈ \F$ and the public key is the private key times a generator $X = x ⋅ \g_2$. However, this alone is not secure in pairing cryptography due to the rogue key attack that I will explain later. We need to prove that we actually know the private key, a proof of possession. To do this compute and publish
$$S_X = x ⋅ \h_{\G_1}'(X)$$
Now anyone with $(X, S_X)$ can verify the public key using
$$\e(S_X, \g_2) = \e(\h_{\G_1}'(X), X)$$
Details
\begin{aligned} \e(S_X, \g_2) &= \e(\h_{\G_2}'(X), X) \\ \e(x ⋅ \h_{\G_2}'(X), \g_2) &= \e(\h_{\G_2}'(X), x ⋅ \g_2) \\ x ⋅ \e(\h_{\G_2}'(X), \g_2) &= x ⋅ \e(\h_{\G_2}'(X), \g_2) \\ \end{aligned}
In the following I will assume all public keys have had their proofs of possession checked.
### One signer, one message
To sign a message $m$, hash the message $H = \h_{\G_1}(m)$ and the signature is $S = x ⋅ H$.
To verify, hash the message $H = \h_{\G_1}(m)$ and check
$$\e(S, \g_2) = \e(H, X)$$
Details
\begin{aligned} \e(S, \g_2) &= \e(H, X) \\ \e(x ⋅ H, \g_2) &= \e(H, X) \\ x ⋅ \e(H, \g_2) &= \e(H, x ⋅ \g_2) \\ x ⋅ \e(H, \g_2) &= x ⋅ \e(H, \g_2) \\ \end{aligned}
### Many signers, many messages
To aggregate many signatures, we simply sum them
$$S = S_1 + S_2 + S_3 + ⋯$$
Then to verify, we compute all the hashes for the messages $H$ and check
$$\e(S, \g_2) = \e(H_1, X_1) + \e(H_2, X_2) + \e(H_3, X_3) + ⋯$$
### Many signers, one message
In the case where all messages are the same, the verification simplifies to only two pairing operations:
$$\e(S, \g_2) = \e(H, X_1 + X_2 + X_3 + ⋯)$$
## Threshold signatures
BLS signatures also allows for a threshold signature scheme. For a scheme where $m$ out of $n$ signatories can sign we first do a setup: Assign every signer $i$ a unique $a_i ∈ \F$ and pick another unique $a ∈ \F$ that we will need soon. These values $a$ are public. Use a distributed key generation protocol to create a random $m$-term polynomial $P ∈ \F[X]$ and provide all the signers with private keys $x_i = P(a_i)$ and public keys $X_i = x_i ⋅ G_2$. Also compute the verification public key $X = P(a) ⋅ \g_2$. This is basically Shamir's Secret Sharing scheme where each private key is a share of the private to the verification key.
Signing is as with plain BLS signatures. Given message $m$ compute $H = \h_{\G_1}(m)$. Each signer computes their signature $S_i = x_i ⋅ H$.
To aggregate the treshold, first verify the individual signatures using plain BLS verification. Take $\mathcal I$ to be the set of indices $i$ of valid signatures. Compute the Lagrange interpolation weights $w_i ∈ \F$ as
$$w_i = \prod_{j ∈ \mathcal I \setminus \set{i}} \frac{a - a_j}{a_i - a_j}$$
where the $j$'s range over all valid signatory indices except $i$. Compute the aggregate signature
$$S = \sum_{i ∈ \mathcal I} w_i ⋅ S_i$$
The aggregate signature $S$ is now a valid BLS signatured for $m$ for the verification public key $S$:
$$\e(S, \g_2) = \e(H, X)$$
Details
\begin{aligned} \e(S, \g_2) &= \e(H, X) \\ \e\p{\sum_{i ∈ \mathcal I} w_i ⋅ S_i, \g_2} &= \e(H, P(a) ⋅ \g_2) \\ \e\p{\sum_{i ∈ \mathcal I} w_i ⋅ x_i ⋅ H, \g_2} &= P(a) ⋅ \e(H, \g_2) \\ \p{\sum_{i ∈ \mathcal I} w_i ⋅ x_i} ⋅ \e\p{H, \g_2} &= P(a) ⋅ \e(H, \g_2) \\ \sum_{i ∈ \mathcal I} w_i ⋅ x_i &= P(a) \\ \sum_{i ∈ \mathcal I} P(a_i) ⋅ \prod_{j ∈ \mathcal I \setminus \set{i}} \frac{a - a_j}{a_i - a_j} &= P(a) \\ \end{aligned}
This final relation is the Langrange interpolation formula evaluated at $a$, it holds if $\abs{\mathcal I} ≥ m$.
## Rogue key attack
The proof-of-possession is sometimes omitted. This makes the system vulnerable to the rogue key attack where an attacker can co-sign with forged signatures making it look like both attacker and one or more victims signed a message $m$. I'll cover the one victim case:
Given victim public key $X$, the attacker generates a random private key $a ∈ \F$ and computes a rogue public key $A = a ⋅ \g_2 - X$. The attacker signs a message $H$ as usual $S = a ⋅ H$. Now $S$ looks like the aggregate signature of $A$ and $X$ on a common message $H$:
$$\e(S, \g_2) = \e(H, A) + \e(H, X)$$
Details
\begin{aligned} \e(S, \g_2) &= \e(H, A) + \e(H, X) \\ \e(a ⋅ H, \g_2) &= \e(H, a ⋅ \g_2 - X) + \e(H, X) \\ a ⋅ \e(H, \g_2) &= a ⋅ \e(H, \g_2) - \e(H, X) + \e(H, X) \\ a ⋅ \e(H, \g_2) &= a ⋅ \e(H, \g_2) \end{aligned}
Besides proof-of-possession that was used above, there are two other notable mitigation strategies (see Boneh, Drijvers & Neven (2018)). For the first, note that the attack requires the messages to be the same. Making sure that every signer signs a different message avoids the attack. One way of achieving this is including the public key in each message:
$$H = \h_{\G_1}\p{X, m}$$
Since this makes all messages distinct, we can no longer use the "many signers, one message" optimization. A second mitigation strategy is to modify the aggregation method. To aggregate a set of signatures $\mathcal I$ we first compute pseudorandom constants $c_i$ for each signer
$$c_i = \h_{\F}\p{X_i, \setb{X_j}{j ∈ \mathcal I}}$$
Note These $c_i$ are a function of the whole set of signatories. Simplifying it to $c_i = \h_{\F}(X_i)$ is not enough. This means each $c_i$ will have to be recomputed everytime the set of signatories changes.
Using these $c_i$, we aggregate signatures as
$$S = \sum_{i∈\mathcal I} c_i ⋅ S_i$$
Signature aggregation is no longer associative. In fact, it can only be done once all signers are known. To verify the aggregate signature we use a correspondingly modified check
$$\e(S, \g_2) = \sum_{i∈\mathcal I} \e(H_i, c_i ⋅ X_i)$$
Details
\begin{aligned} \e(S, \g_2) &= \sum_{i∈\mathcal I} \e(H_i, c_i ⋅ X_i) \\ \e\p{\sum_{i∈\mathcal I} c_i ⋅ S_i, \g_2} &= \sum_{i∈\mathcal I} \e(H_i, c_i ⋅ x_i ⋅ \g_2) \\ \sum_{i∈\mathcal I} c_i ⋅ \e\p{x_i ⋅ H_i, \g_2} &= \sum_{i∈\mathcal I} c_i ⋅ x_i ⋅ \e(H_i, \g_2) \\ \sum_{i∈\mathcal I} c_i ⋅ x_i ⋅ \e\p{H_i, \g_2} &= \sum_{i∈\mathcal I} c_i ⋅ x_i ⋅ \e(H_i, \g_2) \\ \end{aligned}
This method also optimizes to two pairings in the repeated message case
$$\e(S, \g_2) = \e\p{H, \sum_{i∈\mathcal I} c_i ⋅ X_i}$$
## Other gotchas
The BLS signature scheme is very linear. This enables all the useful aggregation schemes above, but also allows for a number of unexpected things. Linearity in the public key allows the rogue key attack. This is why we need proof-of-possession.
Linearity in the private key has interesting behaviour around zero. For example a zero private key produces signatures valid for all messages: $S = 0 ⋅ \h_{\G_1}(m) = 0$. This is mitigated by rejecting the public key $0 ⋅ \g_2$. But this is not enough.
Two colluding signers pick private keys $a_1$ and $a_2 = -a_1$ and register their public keys $A_1 = a_1 ⋅ \g_2$ and $A_2 = a_2 ⋅ \g_2$. Now anyone can claim $A_1$ and $A_2$ are part of any aggregate signature with any message. Given a batch signature $S = \sum_i x_i ⋅ H_i$ such that
$$\e(S, \g_2) = \sum_i \e(H_i, X_i)$$
Then $S$ also verifies with $A_1$ and $A_2$ signing an arbitrary message $H$:
$$\e(S, \g_2) = \e(H, A_1) + \e(H, A_2) + \sum_i \e(H_i, X_i)$$
Details
\begin{aligned} \e(S, \g_2) &= \e(H, A_1) + \e(H, A_2) + \sum_i \e(H_i, X_i) \\ \e(S, \g_2) &= \e(H, a_1 ⋅ \g_2) + \e(H, a_2 ⋅ \g_2) + \sum_i \e(H_i, X_i) \\ \e(S, \g_2) &= a_1 ⋅ \e(H, \g_2) + a_2 ⋅ \e(H, \g_2) + \sum_i \e(H_i, X_i) \\ \e(S, \g_2) &= (a_1 + a_2) ⋅ \e(H, \g_2) + \sum_i \e(H_i, X_i) \\ \e(S, \g_2) &= 0 ⋅ \e(H, \g_2) + \sum_i \e(H_i, X_i) \\ \e(S, \g_2) &= \sum_i \e(H_i, X_i) \\ \end{aligned}
This generalizes to more complex linear relations among the public keys.
Linearity in the signatures can be exploited to create two signatures that are individually invalid, but their aggregate is valid. Start with two valid signatures $S_1, S_2$ and an arbitrary non-zero point $P ∈ \G_1$, then compute the two new signatures
\begin{aligned} S_1' &= S_1 + P & S_2' &= S_2 - P \end{aligned}
## References
• Dan Boneh, Ben Lynn & Hovav Shacham (2001). "Short Signatures from the Weil Pairing". pdf
• A. Faz-Hernandez et al. (2021). "Hashing to Elliptic Curves". IETF Draft. link.
• Dan Boneh et al. (2020). "BLS Signatures". IETF Draft. link
• Sean Bowe (2017). "BLS12-381: New zk-SNARK Elliptic Curve Construction" link
• Thomas Ristenpart & Scott Yilek (2007). "The Power of Proofs-of-Possession". pdf.
• Nguyen Thoi Minh Quan (2021). "Attacks and weaknesses of BLS aggregate signatures". pdf.
• Dan Boneh, Manu Drijvers & Gregory Neven (2018). "Compact Multi-Signatures for Smaller Blockchains". pdf
• Dan Boneh, Manu Drijvers & Gregory Neven (2018). "BLS Multi-Signatures With Public-Key Aggregation". link
Remco Bloemen
Math & Engineering
https://2π.com
|
2022-08-07 22:50:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999873638153076, "perplexity": 5128.347256133649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570730.59/warc/CC-MAIN-20220807211157-20220808001157-00699.warc.gz"}
|
https://www.piday.org/calculators/surface-area-calculator/
|
# Surface Area Calculator
Go back to Calculators page
The surface area calculator will determine the surface area of a cone, cube, cylinder, rectangular prism and sphere.
The formulas for surface area of a cone, cube, cylinder, rectangular prism and sphere are given. The calculator will do all the work for you rapidly with precise results. However, if you wish to calculate manually, the formulas will be handy. Also, an example of how to calculate the surface area of a cylinder is provided.
## How to Calculate Surface Area
The formulas for calculating the surface area differ depending on the kind of geometric solid. Here are the surface area formulas:
Cone: $$A = \pi^{2}+\pi\sqrt{(r^{2}+h^{2})}$$, where r is the radius and h is the height of the cone.
Cube: $$A = 6s^{2}$$, where s is the length of the side.
Cylinder: $$A = \pi r^{2}+ \pi r\sqrt{r^{2}+ h^{2}}$$, where r is the radius and h is the height of the cone.
Rectangular prism: $$A = 2(ab+bc+ac)$$, where a, b and c are the lengths of sides of the prism.
Sphere: $$A = 4\pi r^{2}$$, where r stands for the radius of the sphere
|
2020-07-07 18:38:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9615386724472046, "perplexity": 260.03179470839126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655894904.17/warc/CC-MAIN-20200707173839-20200707203839-00360.warc.gz"}
|
https://tex.stackexchange.com/questions/411681/biblatex-pagetotals-localization
|
# Biblatex pagetotals localization
I'm writing a document in Italian with xetex and polyglossia, bibliography with biblatex/biber, nature style (but the issue happens with numeric too, at least).
Is it normal that I get bold pagetotals strings in the bibliography? here's an example
I don't know, it doesn't seem something that should be bold and I kind of expected a simple string like that would be automatically localized.
I can fix it with
\DefineBibliographyStrings{italian}{%
pagetotals = {pagine},
}
but I'm worried it's the symptom of some other problem.
Here's a minimal example that reproduces the issue
mwe.tex
\documentclass[11pt, a4paper]{memoir}
\usepackage{polyglossia}
\setmainlanguage{italian}
\usepackage[autostyle]{csquotes}
\usepackage[backend=biber, style=nature]{biblatex}
\bibliography{mwe}
\begin{document}
Here's some citation\cite{knuth}.
\printbibliography
\end{document}
mwe.bib
@book{knuth,
author = {Knuth, Donald E.},
title = {Sorting and Searching: The Art of Computer Programming Volume 3},
year = 1973,
pagetotal = {800}
}
Compile it with
xelatex mwe.tex
biber mwe
xelatex mwe.tex
Now that I looked at the logs I see there's a warning:
Package biblatex Warning: Bibliography string 'pagetotals' undefined
(biblatex) at entry 'knuth' on input line 13.
• Can you please add a minimal example? – egreg Jan 23 '18 at 8:33
• @egreg sure, see the edit, there's also a warning about undefined pagetotals, but where does that string come from? It's pagetotal in the bib file – filippo Jan 23 '18 at 8:47
In version 3.8 biblatex introduced dedicated bibstrings for the pagetotal field (see https://github.com/plk/biblatex/issues/534, https://github.com/plk/biblatex/pull/546). This became necessary because in some languages (Swedish) the string for "[on] pages 6--12" is not the same as in "230 pages".
Earlier all instances of 'page' used the page bibstring, but now there is also pagetotal, its plural version is pagetotals (there are also columntotal, columntotals, ... for all known pagination types). Unfortunately not all localisations have these strings yet, because we did not want to assume that pagetotal and page coincide in all languages.
\DefineBibliographyStrings{italian}{%
pagetotal = {pagina},
pagetotals = {pagine},
}
Is indeed the correct 'fix' (assuming that in Italian 'on pages 20-40' is the same as '240 pages'). But please by all means, drop the developers a line and tell them what the correct strings are so they can be included (https://github.com/plk/biblatex/issues). At the moment the Italian localisation misses a few strings, so help would definitely be appreciated.
There was a nasty bug in version 3.8 of biblatex that prevented pagetotal from being used properly, but that was resolved in version 3.9. See https://github.com/plk/biblatex/issues/653. So if you are running a version <3.9, you should consider updating before trying to use pagetotal.
• Thank you, I'm on 3.10 so I should be safe about those bugs. Will open an issue about the translation. By the way, do you know if is there a way to define both extended and abbreviated strings with something like \DefineBibliographyStrings ? I tried the syntax used in italian.lbx pagetotals = {{pagine}{pp\adddot}} but it doesn't seem to work – filippo Jan 23 '18 at 9:31
• @filippo You can't define the long and short versions with \DefineBibliographyStrings, that command supports only one string. To define both versions you need \DeclareBibliographyStrings, but that command can only be used from .lbx files. So you can only define one version from your document, if you want to define both you need an .lbx file. – moewe Jan 23 '18 at 9:33
• Hehe I completely missed it was Declare instead of Define, thanks again – filippo Jan 23 '18 at 9:36
|
2020-01-26 20:00:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7937310934066772, "perplexity": 3379.9106767127737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690379.95/warc/CC-MAIN-20200126195918-20200126225918-00284.warc.gz"}
|
https://mathematica.stackexchange.com/questions/128143/subtract-and-intercalate-lists
|
# Subtract and intercalate lists
I have a list where the sum of its values results 10:
timeEachStep = {{0.7646050049957724}, {1.1043813977215065}, {1.792518103577381}, {1.5769143982179603}, {2.475716623701331}, {2.2858644717860503}}
Plus @@ timeEachStep
$\left( \begin{array}{c} 0.764605 \\ 1.10438 \\ 1.79252 \\ 1.57691 \\ 2.47572 \\ 2.28586 \\ \end{array} \right)$
${10}$
And I have another value that will change the above list
tClaw = 0.2
$0.2$
With my limited knowledge I had to create a list of work
list2 = Table[tClaw, Length[timeEachStep]]
$\{0.2,0.2,0.2,0.2,0.2,0.2\}$
This list should be added to the other list, but without changing the sum of values. Then I did a subtraction, then I added the values intercalating.
list3 = timeEachStep - list2
Plus @@ list3
$\left( \begin{array}{c} 0.564605 \\ 0.904381 \\ 1.59252 \\ 1.37691 \\ 2.27572 \\ 2.08586 \\ \end{array} \right)$
${8.8}$
newList = Flatten[Transpose[{list3, list2}]]
Plus @@ newList
$\{0.564605,0.2,0.904381,0.2,1.59252,0.2,1.37691,0.2,2.27572,0.2,2.08586,0.2\}$
$10$
There is a specific function that makes this?
• Flatten@Riffle[timeEachStep - tClaw, tClaw, {2, -1, 2}] should do it. Oct 7, 2016 at 13:47
• Yes. It's that simple. Oct 7, 2016 at 14:02
• Flatten@{# - tClaw, tClaw} & /@ timeEachStep Oct 7, 2016 at 16:23
Flatten@Riffle[timeEachStep - tClaw, tClaw, {2, -1, 2}]
$\{0.564605,0.2,0.904381,0.2,1.59252,0.2,1.37691,0.2,2.27572,0.2,2.08586,0.2\}$
|
2022-05-23 11:54:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45299166440963745, "perplexity": 8820.033025173794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00130.warc.gz"}
|
https://byjus.com/question-answer/the-angle-between-the-vectors-overline-mathrm-a-x-overline-mathrm-b-and-overline-mathrm/
|
Question
# The angle between the vectors $$(\overline{\mathrm{A}}$$ x $$\overline{\mathrm{B}})$$ and $$(\overline{\mathrm{B}}\times\overline{\mathrm{A}})$$ is:
A
00
B
1800
C
450
D
900
Solution
## The correct option is D $$180^{0}$$($$A$$$$\times$$$$B$$) = - ($$B$$$$\times$$$$A$$) which are equal and opposite in direction.Hence it will have angle in between $$180^0$$Physics
Suggest Corrections
0
Similar questions
View More
People also searched for
View More
|
2022-01-20 23:20:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6363913416862488, "perplexity": 3422.391844079151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302706.62/warc/CC-MAIN-20220120220649-20220121010649-00604.warc.gz"}
|
https://math.stackexchange.com/questions/1647424/permutations-and-representations-sign-function
|
# permutations and representations , sign function.
Show that the sign representation of $S_n$ is indeed a representation.
attempt: Recall the sign function of a permutation is given by $\mathrm{sgn}(\pi) = (-1)^k$.
Then recall a representation is a homomorphism.
So we we have to show $\mathrm{sgn}(\pi_1 \pi_2)$ = $\mathrm{sgn}(\pi)\mathrm{sgn}(\pi_2)$.
This is a 3 part problem.
part a) If $\pi$ is a product of $k$ transpositions , then $k \equiv \mathrm{inv}(\pi)\pmod{2}$. Where $\mathrm{inv}(\pi)$ is the number of inversions of $\pi$.
part b) Conclude that the sign of a permutation is well defined.
So c) says "Conclude that the sign representation of $S_n$ is indeed a representation.
I am not sure what that representation might be. Or what I have to show. Can I use part $a )$ , or $b)$ to conclude $c$?
Is this what I have to show? Thanks for any feedback!
• Well, what definition of representation have you been given? – Tobias Kildetoft Feb 9 '16 at 12:16
We consider permutations $\pi,\sigma$ of $[n]:=\{1,2,\dots, n\}$. All sums below run over pairs $(i,j)$ in $[n]\times[n]$.
Using Iverson brackets, define the number of inversions of $\pi$ as $$\mbox{inv}(\pi)=\sum [i>j]\,[\pi(j)>\pi(i)].$$ We rewrite the sum above, then split it into two as below: \begin{eqnarray*} \mbox{inv}(\pi)&=&\sum [i>j]\,[\pi(j)>\pi(i)]\\ &=&\sum [\sigma(i)>\sigma(j)]\,[\pi\sigma(j)>\pi\sigma(i)]\\ &=&\sum [i>j]\, [\sigma(i)>\sigma(j)]\,[\pi\sigma(j)>\pi\sigma(i)]\\ &+&\sum [j>i]\,[\sigma(i)>\sigma(j)]\,[\pi\sigma(j)>\pi\sigma(i)]. \end{eqnarray*} Therefore (notice the switch of variables on the second sum) we have $$\mbox{inv}(\pi)=\sum [i>j]\, [\sigma(i)>\sigma(j)]\,[\pi\sigma(j)>\pi\sigma(i)]+\sum [i>j]\,[\sigma(j)>\sigma(i)]\,[\pi\sigma(i)>\pi\sigma(j)].$$ Similarly, splitting the sum defining $\mbox{inv}(\sigma)$ gives $$\mbox{inv}(\sigma)=\sum [i>j]\, [\sigma(j)>\sigma(i)]\,[\pi\sigma(j)>\pi\sigma(i)]+\sum [i>j]\,[\sigma(j)>\sigma(i)]\,[\pi\sigma(i)>\pi\sigma(j)].$$ Adding these two sums gives $$\mbox{inv}(\pi)+\mbox{inv}(\sigma)=\mbox{inv}(\pi\sigma)+ 2\sum [i>j]\,[\sigma(j)>\sigma(i)]\,[\pi\sigma(i)>\pi\sigma(j)].$$
Thus the map $\pi\mapsto \mbox{sign}(\pi):=(-1)^{\mbox{inv}(\pi)}$ is a homomorphism from $S_n$ to $\{\pm 1\}$.
|
2019-06-17 21:19:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.985369861125946, "perplexity": 233.41921813520008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998580.10/warc/CC-MAIN-20190617203228-20190617225228-00485.warc.gz"}
|
https://elteoremadecuales.com/brauers-theorem-on-induced-characters/
|
# Brauer's theorem on induced characters
Brauer's theorem on induced characters This article includes a list of general references, but it lacks sufficient corresponding inline citations. Please help to improve this article by introducing more precise citations. (July 2020) (Learn how and when to remove this template message) Brauer's theorem on induced characters, often known as Brauer's induction theorem, and named after Richard Brauer, is a basic result in the branch of mathematics known as character theory, within representation theory of a finite group.
Contents 1 Background 2 Statement 3 Proofs 4 Applications 5 References 6 Further reading 7 Notes Background A precursor to Brauer's induction theorem was Artin's induction theorem, which states that |G| times the trivial character of G is an integer combination of characters which are each induced from trivial characters of cyclic subgroups of G. Brauer's theorem removes the factor |G|, but at the expense of expanding the collection of subgroups used. Some years after the proof of Brauer's theorem appeared, J.A. Green showed (in 1955) that no such induction theorem (with integer combinations of characters induced from linear characters) could be proved with a collection of subgroups smaller than the Brauer elementary subgroups.
Another result between Artin's induction theorem and Brauer's induction theorem, also due to Brauer and also known as Brauer's theorem or Brauer's lemma is the fact that the regular representation of G can be written as {displaystyle 1+sum lambda _{i}rho _{i}} where the {displaystyle lambda _{i}} are positive rationals and the {displaystyle rho _{i}} are induced from characters of cyclic subgroups of G. Note that in Artin's theorem the characters are induced from the trivial character of the cyclic group, while here they are induced from arbitrary characters (in applications to Artin's L functions it is important that the groups are cyclic and hence all characters are linear giving that the corresponding L functions are analytic).[1] Statement Let G be a finite group and let Char(G) denote the subring of the ring of complex-valued class functions of G consisting of integer combinations of irreducible characters. Char(G) is known as the character ring of G, and its elements are known as virtual characters (alternatively, as generalized characters, or sometimes difference characters). It is a ring by virtue of the fact that the product of characters of G is again a character of G. Its multiplication is given by the elementwise product of class functions.
Brauer's induction theorem shows that the character ring can be generated (as an abelian group) by induced characters of the form {displaystyle lambda _{H}^{G}} , where H ranges over subgroups of G and λ ranges over linear characters (having degree 1) of H.
In fact, Brauer showed that the subgroups H could be chosen from a very restricted collection, now called Brauer elementary subgroups. These are direct products of cyclic groups and groups whose order is a power of a prime.
Proofs The proof of Brauer's induction theorem exploits the ring structure of Char(G) (most proofs also make use of a slightly larger ring, Char*(G), which consists of {displaystyle mathbb {Z} [omega ]} -combinations of irreducible characters, where ω is a primitive complex |G|-th root of unity). The set of integer combinations of characters induced from linear characters of Brauer elementary subgroups is an ideal I(G) of Char(G), so the proof reduces to showing that the trivial character is in I(G). Several proofs of the theorem, beginning with a proof due to Brauer and John Tate, show that the trivial character is in the analogously defined ideal I*(G) of Char*(G) by concentrating attention on one prime p at a time, and constructing integer-valued elements of I*(G) which differ (elementwise) from the trivial character by (integer multiples of) a sufficiently high power of p. Once this is achieved for every prime divisor of |G|, some manipulations with congruences and algebraic integers, again exploiting the fact that I*(G) is an ideal of Ch*(G), place the trivial character in I(G). An auxiliary result here is that a {displaystyle mathbb {Z} [omega ]} -valued class function lies in the ideal I*(G) if its values are all divisible (in {displaystyle mathbb {Z} [omega ]} ) by |G|.
Brauer's induction theorem was proved in 1946, and there are now many alternative proofs. In 1986, Victor Snaith gave a proof by a radically different approach, topological in nature (an application of the Lefschetz fixed-point theorem). There has been related recent work on the question of finding natural and explicit forms of Brauer's theorem, notably by Robert Boltje.
Applications Using Frobenius reciprocity, Brauer's induction theorem leads easily to his fundamental characterization of characters, which asserts that a complex-valued class function of G is a virtual character if and only if its restriction to each Brauer elementary subgroup of G is a virtual character. This result, together with the fact that a virtual character θ is an irreducible character if and only if θ(1) > 0 and {displaystyle langle theta ,theta rangle =1} (where {displaystyle langle ,rangle } is the usual inner product on the ring of complex-valued class functions) gives a means of constructing irreducible characters without explicitly constructing the associated representations.
An initial motivation for Brauer's induction theorem was application to Artin L-functions. It shows that those are built up from Dirichlet L-functions, or more general Hecke L-functions. Highly significant for that application is whether each character of G is a non-negative integer combination of characters induced from linear characters of subgroups. In general, this is not the case. In fact, by a theorem of Taketa, if all characters of G are so expressible, then G must be a solvable group (although solvability alone does not guarantee such expressions- for example, the solvable group SL(2,3) has an irreducible complex character of degree 2 which is not expressible as a non-negative integer combination of characters induced from linear characters of subgroups). An ingredient of the proof of Brauer's induction theorem is that when G is a finite nilpotent group, every complex irreducible character of G is induced from a linear character of some subgroup.
References Isaacs, I.M. (1994) [1976]. Character Theory of Finite Groups. Dover. ISBN 0-486-68014-2. Zbl 0849.20004. Corrected reprint of the 1976 original, published by Academic Press. Zbl 0337.20005 Further reading Snaith, V. P. (1994). Explicit Brauer Induction: With Applications to Algebra and Number Theory. Cambridge Studies in Advanced Mathematics. Vol. 40. Cambridge University Press. ISBN 0-521-46015-8. Zbl 0991.20005. Notes ^ Serge Lang, Algebraic Number Theory, appendix to chapter XVI Categories: Representation theory of finite groupsTheorems in representation theory
Si quieres conocer otros artículos parecidos a Brauer's theorem on induced characters puedes visitar la categoría Representation theory of finite groups.
Subir
Utilizamos cookies propias y de terceros para mejorar la experiencia de usuario Más información
|
2022-11-30 20:19:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831631064414978, "perplexity": 809.5340187549979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00724.warc.gz"}
|
https://proofwiki.org/wiki/Inscribing_Equilateral_Triangle_inside_Square_with_a_Coincident_Vertex/Mistake
|
# Inscribing Equilateral Triangle inside Square with a Coincident Vertex/Mistake
## Source Work
The Puzzles:
Abul Wafa ($\text {940}$ – $\text {998}$): $38$
Solution
## Mistake
Abul Wafa gave five different solutions. Here are three of them. ...
... Join $B$ to the midpoint, $M$, of $DC$. Draw an arc with centre $B$ and radius $BA$ to cut $BM$ at $N$. Let $DN$ cut $CB$ at $H$. Then $H$ is one of the vertices sought.
## Correction
The construction is incorrect.
$DH$ is shorter than $GH$.
Let $\Box ABCD$ be embedded in a Cartesian plane such that:
$\ds A$ $=$ $\ds \tuple {0, 0}$ $\ds B$ $=$ $\ds \tuple {a, 0}$ $\ds C$ $=$ $\ds \tuple {a, a}$ $\ds D$ $=$ $\ds \tuple {0, a}$
By Equation of Straight Line in Plane and some algebra, the equation for the straight line $MB$ is:
$(1): \quad y = 2 \paren {a - x}$
By Equation of Circle in Cartesian Plane and some algebra, the equation for the circle with center at $B$ and radius $a$ is:
$(2): \quad y^2 = 2 a x - x^2$
Hence their point of intersection $M$ is found by solving the simultaneous equations $(1)$ and $(2)$:
$\ds 4 \paren {a - x}^2$ $=$ $\ds 2 a x - x^2$ substituting for $y$ in $(2)$ $\ds \leadsto \ \$ $\ds 4 a^2 - 8 a x + 4 x^2$ $=$ $\ds 2 a x - x^2$ $\ds \leadsto \ \$ $\ds 5 x^2 - 10 a x + 4 a^2$ $=$ $\ds 0$ $\ds \leadsto \ \$ $\ds x$ $=$ $\ds \dfrac {10 a \pm \sqrt {100 a^2 - 80 a^2} } {2 \times 5}$ Solution to Quadratic Equation $\ds$ $=$ $\ds a \paren {1 \pm \dfrac {\sqrt 5} 5}$ simplification
We are interested in the negative square root in this expression, as the positive square root corresponds to the point on $BM$ for negative $y$.
Thus we have:
$\ds y$ $=$ $\ds 2 \paren {a - x}$ $\ds$ $=$ $\ds a \paren {2 - 2 \paren {1 - \dfrac {\sqrt 5} 5} }$ $\ds$ $=$ $\ds \dfrac {2 a \sqrt 5} 5$
Thus we can calculate the tangent of $\angle CDH$:
$\ds \tan \angle CDH$ $=$ $\ds \dfrac {a - \frac {2 a \sqrt 5} 5} {a \paren {1 - \frac {\sqrt 5} 5} }$ $\ds$ $=$ $\ds \dfrac {5 - 2 \sqrt 5} {\paren {5 - \sqrt 5} }$ $\ds$ $=$ $\ds \dfrac {\paren {5 - 2 \sqrt 5} \paren {5 + \sqrt 5} } {\paren {5 - \sqrt 5} \paren {5 + \sqrt 5} }$ $\ds$ $=$ $\ds \dfrac {15 - 5 \sqrt 5} {25 - 5}$ $\ds$ $=$ $\ds \dfrac {3 - \sqrt 5} 4$
But from Tangent of 15 Degrees:
$\tan 15 \degrees = 2 - \sqrt 3$
So:
$\angle CDH \ne \tan 15 \degrees$
(in fact it is about $10.8 \degrees$).
$\triangle DGH$ is not an equilateral triangle.
$\blacksquare$
It appears that this mistake originates with David Wells, in his Curious and Interesting Puzzles.
This construction cannot be found in J.L. Berggren's work Episodes in the Mathematics of Medieval Islam, from which Wells claims to have sourced it.
It also appears that it may not have been the case that Abu'l-Wafa Al-Buzjani actually gave five different solutions.
Wells seems to have got his material from J.L. Berggren's Episodes in the Mathematics of Medieval Islam, as stated above, but that work does not contain any such constructions for this result.
It is not clear where Wells actually got the information about those $5$ constructions, but it certainly was not from Berggren.
Berggren's work, in section $8$ of chapter $3$, entitled Geometry with a Rusty Compass, features five problems from Abū al-Wafā's On Those Parts of Geometry Needed by Craftsmen on pp. $107$ - $111$. They are:
To construct at the endpoint $A$ of a segment $AB$ a perpendicular to that segment, without prolonging the segment beyond $A$.
To divide a line segment into any number of equal parts.
To bisect a given angle.
To construct a square in a given circle.
To construct in a given circle a regular pentagon with a compass opening equal to the radius of the circle.
Abū al-Wafā's treatise contains a wealth of beautiful constructions for regular $n$gons, including exact constructions for $n = 3, 4, 5, 6, 8, 10$. It also gives a verging construction for $n = 9$ which goes back to Archimedes and the approximation for $n = 7$ that gives the side of a regular heptagon in a circle as equal to half the side of an inscribed equilateral triangle.
Some of these do appear in Wells's Curious and Interesting Puzzles (numbers $43$, $44$ and $45$), but apparently he does not include $38$ or several others. Nor does it appear as an exercise to the chapter $3$, or in chapter $5$ on trigonometry, where Abū al-Wafā is also featured.
|
2021-06-13 23:03:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539068102836609, "perplexity": 385.28969675269036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00066.warc.gz"}
|
http://mathoverflow.net/questions/107334/a-fourier-series-related-to-spin-chern-numbers-almost-commuting-matrices?sort=votes
|
# A fourier series related to spin Chern numbers almost commuting matrices
Let $$f(x)=\sin(x)\sqrt{1+\cos^{2}(x)+\cos^{4}(x)}.$$ In my study of almost commuting unitary matrices, $U$ and $V$, I have need for a bound like $$\left\Vert \tilde{f}(V)U-U\tilde{f}(V)\right\Vert \leq C\left\Vert VU-UV\right\Vert ,$$ where $\tilde{f}(e^{\pi ix})=f(x)$ so my functional calculus makes sense. (Matrix function to any applied listeners). The norm is the operator norm.
This is all in the context spin Chern numbers and the Pfaffian-Bott index, as in my paper with Hastings, Topological insulators and $C^{*}$- algebras: Theory and numerical practice'' which came out in 2011. I want precise bounds on how small a commutator I need to ensure a derived matrix remains invertible.
One way to estimate $C$ is to writing $f$ and $f^{\prime}$ as Fourier series and working term by term. Let $g=f^{\prime},$ so $$g(x)=\frac{3\cos^{5}(x)}{\sqrt{1+\cos^{2}(x)+\cos^{4}(x)}}.$$ I need to know, or get a good estimate on $\left\Vert \hat{g}\right\Vert _{1}$, meaning the $\ell^{1}$-norm of the fourier series of $g(x)$.
What is the $\ell^{1}$-norm of the Fourier series of $\frac{3\cos^{5}(x)}{\sqrt{1+\cos^{2}(x)+\cos^{4}(x)}}$, or what is a good upper bound?
I can prove this is less than $3\pi$ but I want a tighter estimate. Perhaps someone has seen this before. Is there a stategy to bound this I should follow? Better still, what is the Fourier series for $g$?
Any hints on finding $C$ by another route will be welcome.
A little more about $f$: The Bott index and its relatives require three functions with $f^2 + g^2 + h^2 = 1$ and $gh=0$ and these are to be continuous, real valued and periodic. And non-trivial, so $h=0$ is not allowed. For theoretical work these are all equally valid, but to use these in index studies of finite systems we need to pick carefully to control a bunch of commutators. This $f$ is designed to be similar to $\sin$ but will big "flat spots." Notice $$\left(f(x)+\cos^{3}(x)\right)^{2}=1.$$
-
Let $\|\cdot \|_F$ denote the $\ell^1$ norm of the Fourier series of a function on $[0,2\pi]$. This is a Banach algebra norm on functions whose Fourier series converge absolutely. Note that $$1 + \cos^2(x) + \cos^4(x) = \frac{15}{8} + \frac{1}{2} e^{2ix} + \frac{1}{2} e^{-2ix} + \frac{1}{16} e^{4ix} + \frac{1}{16} e^{-4ix}$$ Write this as $(15/8)(1 + W(x))$, where $\|W(x)\|_F = 3/5$. So $$g(x) = 3 (15/8)^{-1/2} \cos^5(x) (1+W(x))^{-1/2}= 3 (15/8)^{-1/2} \cos^5(x)\sum_{k=0}^\infty {{-1/2} \choose k} W(x)^k$$ Thus $$\|g\|_F \le 3 (15/8)^{-1/2} \sum_{k=0}^\infty \left| {{-1/2} \choose k}\right| (3/5)^k = 2 \sqrt{3}$$ Moreover, by explicitly evaluating the norm of a partial sum of the series and bounding the norm of the remainder you can get arbitrarily good approximations to the actual value of $\|g\|_F$.
I get so uncomfortable working in a norm that is not a $C^*$-norm. I really appreciate the help. I worked harder for a constant that was twice the size. – Terry Loring Sep 16 '12 at 21:19
|
2014-07-22 17:27:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471193552017212, "perplexity": 128.51567391966185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997860453.15/warc/CC-MAIN-20140722025740-00014-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://braindump.jethro.dev/posts/http/
|
# HTTP
tags
Web Development
## GET Requests and Request Body
Yes. In other words, any HTTP request message is allowed to contain a message body, and thus must parse messages with that in mind. Server semantics for GET, however, are restricted such that a body, if any, has no semantic meaning to the request. The requirements on parsing are separate from the requirements on method semantics.
So, yes, you can send a body with GET, and no, it is never useful to do so.
This is part of the layered design of HTTP/1.1 that will become clear again once the spec is partitioned (work in progress).
– Roy Fielding
Having servers return content based on the value of the request body in the GET request is a bad practice.
|
2021-11-27 06:03:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 39, "equation": 390, "x-ck12": 0, "texerror": 0, "math_score": 0.33206188678741455, "perplexity": 1570.0268965412224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358118.13/warc/CC-MAIN-20211127043716-20211127073716-00067.warc.gz"}
|
https://bitbucket.org/rivanvx/beamer/issue/220/tableofcontents-currentsection-shading
|
# Issues
Issue #220 resolved
# \tableofcontents[currentsection] shading subsections of current section
Anonymous created an issue
In the Beamer 3.22 User Guide, \tableofcontents[currentsection] says "all subsections but those in the current section are shown in the semi-transparent way." It also says that the currentsection option is a shorthand for "sectionstyle=show/shaded,subsectionstyle=show/show/shaded".
However, subsections of the current section seem to be being shaded instead of shown. This is visible in page 1 of the minimal test case below (compiled with pdflatex). Note that this error is *not* present when the sectionstyle and subsectionstyle options are set explicitly (commented out below).
\documentclass{beamer}
\AtBeginSection[]
{
\begin{frame}
\frametitle{Outline}
\tableofcontents[currentsection]
\end{frame}
}
\begin{document}
\section{The single-coordinate problem}
\subsection{Signals}
\begin{frame}
\end{frame}
\end{document}
|
2014-07-10 15:37:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7199870944023132, "perplexity": 6357.88874488023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776420526.72/warc/CC-MAIN-20140707234020-00060-ip-10-180-212-248.ec2.internal.warc.gz"}
|
https://workflowr.github.io/workflowr/reference/wflow_html.html
|
Workflowr custom format for converting from R Markdown to an HTML document. wflow_html has two distinct functionalities: 1) configure the formatting of the HTML by extending html_document (see the RStudio documentation for the available options), and 2) configure the workflowr reproducibility features (typically specified in a file named _workflowr.yml). wflow_html is intended to be used to generate webpages for a workflowr website, but it can also be used outside a workflowr project to implement reproducibility features for single R Markdown documents.
wflow_html(...)
## Arguments
...
Arguments passed to html_document.
## Value
An output_format object to pass to render.
## HTML formatting
wflow_html extends html_document. To set default formatting options to be shared across all of your HTML files, set them in the file analysis/_site.yml. This special file can also be used to configure other aspects of the website like the navigation bar (for more details see the documentation on R Markdown websites). For example, to use the theme "cosmo" and add a table of contents to every webpage, you would add the following to analysis/_site.yml:
output:
workflowr::wflow_html:
toc: true
theme: cosmo
Formatting options can also be set for a specific file, which will override the default options set in analysis/_site.yml. For example, to remove the table of contents from one specific file, you would add the following to the YAML header of that file:
output:
workflowr::wflow_html:
toc: false
However, this will preserve any of the other shared options (e.g. the theme in the above example). If you are not overriding any of the shared options, it is not necessary to specify wflow_html in the YAML header of your workflowr R Markdown files.
## Reproducibility features
wflow_html also implements the workflowr reproducibility features. For example, it automatically sets a seed with set.seed; inserts the current code version (i.e. Git commit ID); runs sessionInfo at the end of the document; and inserts links to past versions of the file and figures.
These reproducibility options are not passed directly as arguments to wflow_html. Instead these options are specified in _workflowr.yml or in the YAML header of an R Markdown file (using the field workflowr:). These options (along with their default values) are as follows:
knit_root_dir
The directory where code inside an R Markdown file is executed; this ultimately sets argument knit_root_dir in render. By default, wflow_start sets knit_root_dir in the file _workflowr.yml to be the path ".". This path is a relative path from the location of _workflowr.yml to the directory for the code to be executed. The path "." is shorthand for "current working directory", and thus code is executed in the root of the workflowr project. You can change this to be a relative path to any subdirectory of your project. Also, if you were to delete this line from _workflowr.yml, then this would cause the code to be executed from the same directory in which the R Markdown files are located (i.e. analysis/ in the default workflowr setup).
It is also possible (though in general not recommended) to configure the knit_root_dir to apply to only one of the R Markdown files by specifying it in the YAML header of that particular file. In this case, the supplied path is interpreted as relative to the R Markdown file itself. Thus knit_root_dir: "../data" would execute the code in the subdirectory data/.
seed
The seed argument in the call to set.seed, which is added to the beginning of an R Markdown file. In wflow_start, this is set to the date using the format YYYYMMDD. If no seed is specified, the default is 12345.
sessioninfo
The function that is run to record the session information. The default is "sessionInfo()".
github
The URL of the remote repository for creating links to past results. If unspecified, the URL is guessed from the "git remote" settings (see wflow_git_remote). Specifying this setting inside _workflowr.yml is especially helpful if multiple users are collaborating on a project since it ensures that everyone generates the same URLs.
suppress_report
By default a workflowr report is inserted at the top of every HTML file containing useful summaries of the reproducibility features and links to past versions of the analysis. To suppress this report, set suppress_report to TRUE
.
In the default workflowr setup, the file _workflowr.yml is located in the root of the project. For most users it is best to leave it there, but if you are interested in experimenting with the directory layout, the _workflowr.yml file can be located in the same directory as the R Markdown files or in any directory upstream of that directory.
Here is an example of a customized _workflowr.yml file:
# Execute code in project directory
knit_root_dir: "."
# Set a custom seed
seed: 4815162342
# Use devtools to generate the session information.
sessioninfo: "devtools::session_info()"
# Use this URL when inserting links to past results.
github: https://github.com/repoowner/mainrepo
And here is an example of a YAML header inside an R Markdown file with the same exact custom settings as above:
---
output:
workflowr::wflow_html:
toc: false
workflowr:
knit_root_dir: ".."
seed: 4815162342
sessioninfo: "devtools::session_info()"
github: https://github.com/repoowner/mainrepo
---
Note that the path passed to knit_root_dir changed to ".." because it is relative to the R Markdown file instead of _workflowr.yml. Both have the effect of having the code executed in the root of the workflowr project.
wflow_pre_knit, wflow_post_knit, wflow_pre_processor
|
2022-08-11 08:02:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36845239996910095, "perplexity": 3253.9477913925302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00107.warc.gz"}
|
https://physics.stackexchange.com/questions/258636/polarized-moller-scattering-cross-section
|
# Polarized Moller scattering cross section
When doing a computation of scattering cross sections of particles with spin, one usually averages over the initial spins and sums over the final ones. I'm a bit puzzled as to how to do the calculation for a longitudinally polarized cross section of Moller scattering though. The reference I'm using is Scattering of longitudinally polarized fermions by A.M. Bincer, where he does exactly what I'm after, but I think the notation's a bit outdated and he omits some steps, so I've tried re-doing it for clarity.
For Moller scattering, there's two tree-level diagrams for the process, the $t$ and $u$ channel, however I am unable to complete the derivation to obtain the square of the amplitude $\left|\mathcal{M}_{fi}\right|^2$. The probability of course consists of a sum of three terms, the $t$ channel, the $u$ channel (same as $t$, but with the final momenta exchanged), and an interference term with a relative minus sign. For instance, the $\left|\mathcal{M}_{fi}\right|^2$ for the $t$ channel is:
$\frac{e^4}{(p_3-p_1)^4}\bar u^s(p_1)\gamma^\mu u^{s'}(p_2)\bar u^r(p_2)\gamma_{\mu}u^{r'}(p_4)\bar u^{s'}(p_3)\gamma^\nu u^s(p_1)\bar u^{r'}(p_4)\gamma_\nu u^r(p_2)$
where $p_1,p_2$ and $p_3,p_4$ are initial and final momenta, and upper unprimed (primed) indices denote spins of initial (final) particles. If we put spinor indices as well, we can rearrange some terms and use the standard $\Sigma u(p)\bar u(p)=\gamma\cdot{p}+m$ trick, but this doesn't work for the initial spinors as we're assuming they're eigenstates of helicity. In the reference it states that we can write the polarized spinors as: $u^s(p)=\frac{1}{2}\sum\limits_{\epsilon=\pm 1}(1+sh)u^\epsilon(p)$, where $h$ is the helicity operator, and $s=\pm 1$, however I can't for the life of me figure out how to complete the derivation. The author doesn't really show all of the steps in the calculation, and when I write out terms like $u^s_a(p_1)\bar u^s_b(p_1)$ using the said formula, it doesn't look like anything helpful as there's now a sum of the form (with spinor indices summed over):
$u^s_a(p_1)\bar u^s_b(p_1)=\frac{1}{4}\sum\limits_{\epsilon,\epsilon'}(\delta_{aq}+sh_{aq})(\delta_{rb}+sh_{rb})u^\epsilon_q(p_1)\bar u^{\epsilon'}_r(p_1)$,
which I can't seem to simplify a lot as there's now two different spin indices, and a whole bunch of other terms. There's a reference to an identity from W. Heitler - The quantum theory of radiation, but I don't have access to the book so I'm pretty much stuck. Any help or hints would be greatly appreciated.
• You just write $u_s \bar u_s$ for the particle solution with the particular helicity $s$. It's $(p\cdot \gamma +m)$ times an additional operator involving $p,\gamma$ and perhaps $\gamma_5$. You should be able to find this extra gamma-matrix-like factor from the condition that the helicity operator acting on this $u\bar u$ gives you what it should. (Helicity plus minus one annihilates it both from the right and the left). At the end, even if you compute the specific-helicity cross sections (without summing over polarizations), you will sum traces of products of gamma matrices, just harder ones – Luboš Motl May 29 '16 at 17:55
• So, if I understand you correctly, I should write $u^s(p)\bar u^s(p)=(p\cdot \gamma+m)A^s$, where $A^s$ depends on which spin state I take, $s=\pm1$, and then simply invert the relation to get $A^s$ (which I'm assuming depends on the basis, but won't matter for the end result), and then simply plug that back into $\left|\mathcal{M}\right|^2$ and do the traces? – blueshift May 29 '16 at 18:43
• Yup, I forgot what $A^s$ really is right now, it's some $(1+\gamma_5 \gamma_i\cdot p_i / |\vec p|)$, or something like that, try it or find it somewhere. When you have this form, the calculation of the cross section boils down to similar operations as in the polarization-blind case, you just need to know the traces of longer products of gamma matrices, aside from a larger number of terms etc. – Luboš Motl May 29 '16 at 18:54
• Okay, I've tried doing the calculation, but the result turns out to be too complicated to make anything out of it, can I just pick my coordinate system to be along the $z$ axis to make the eigenspinors take the form $\begin{pmatrix}\sqrt{E-p^3}\begin{pmatrix}1\\0\end{pmatrix}\\ \sqrt{E+p^3}\begin{pmatrix}1\\ 0\end{pmatrix}\end{pmatrix}$ for $s=1$, and similar for the other one? In this case, I can apparently write $A^s$ as a linear combination of the 16 matrices ($\gamma^\mu,\sigma^{\mu\nu}$ etc.) spanning the space, so I can compute the traces easily. Would that work? – blueshift May 30 '16 at 17:16
• If you picked particular directions of the momenta and a particular basis for the spinors, you could have written the explicit forms of all the gamma matrices at all other places, too. That's not what I wanted to recommend you but it's possible, too. If you have the explicit form of the spinors, you don't need to waste your time with finding $A^s$, do you? Just calculate things like $\bar u \gamma v$ from the Feynman diagrams directly in components. – Luboš Motl May 30 '16 at 17:20
|
2019-07-23 12:20:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8756780028343201, "perplexity": 237.92914003195048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00493.warc.gz"}
|
https://study.com/academy/answer/determine-whether-the-following-vector-fields-are-conservative-on-the-given-domain-and-if-so-give-the-potential-function-up-to-a-constant-of-which-the-field-is-the-gradient-a-f-x-y-z-x-3-plus-yz.html
|
Determine whether the following vector fields are conservative on the given domain and if so,...
Question:
Determine whether the following vector fields are conservative on the
given domain and if so, give the potential function (up to a constant) of which the field is
a) {eq}F(x,y,z)= \langle x^3+yz, y+xz, \frac{1}{z}+xy \rangle {/eq} on all of space except for the origin.
b) {eq}F(x,y)=(x-3x^2y, 1+xy){/eq} in the plane.
Conservative Vector Field:
For a vector field to be conservative, there exists a scalar function f called scalar potential such that F can be expressed as the gradient of f, {eq}F=\nabla f {/eq}, which implies that curl F = 0. So, if {eq}\nabla \times F\neq 0, {/eq} then it can be inferred that F is not conservative.
Become a Study.com member to unlock this answer! Create your account
a) For a vector field to be conservative, there exists a scalar function f such that F can be expressed as the gradient of f, {eq}\displaystyle...
Conservative Forces: Examples & Effects
from
Chapter 5 / Lesson 8
7.6K
Learn how to tell if a force is conservative and what exactly is being conserved. Then look at a couple of specific examples of forces to see how they are conservative.
|
2021-05-06 02:54:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9226370453834534, "perplexity": 612.5283126804605}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988725.79/warc/CC-MAIN-20210506023918-20210506053918-00149.warc.gz"}
|
https://mathematica.stackexchange.com/questions/195006/why-is-paralleldo-slower-than-do
|
# Why is ParallelDo slower than Do?
I have problems to write parallel code in mathematica. Why is
candidates = {};
SetSharedVariable[candidates];
Do[
ParallelDo[
eq = RandomReal[] + RandomReal[];
AppendTo[candidates, eq]
, {j, 1, 1000}]
, {i, 1, 10}]
slower than the non parallel version
candidates = {};
Do[
Do[
eq = RandomReal[] + RandomReal[];
AppendTo[candidates, eq]
, {j, 1, 1000}]
, {i, 1, 10}]
?
• I reverted your post to before the edit because it looks like a different question (like Henrik said in his comment). Note, however, that if you ask it in precisely such form it will be likely closed due to not enough info: you need to provide the minimal working example, not through some undefined functions into a piece of code that no one will be able to run and test. – corey979 Apr 11 '19 at 13:54
• See here mathematica.stackexchange.com/a/48296/12 I suggest you don't use SetSharedVariable until you get quite fluent in using the parallel tools. It effectively "unparallelizes" your code. – Szabolcs Apr 11 '19 at 14:09
Because managing write access to shared memory is expensive: Subprocesses have to wait until they are granted write access (because another process uses that ressource).
Moreover, it is in general more efficient to use Parallel only upon the most outer loop construct.
By the way: Using Append and AppendTo are the worst methods to build a list, because they involve a copy of the full list each time another element is appended. Instead of complexity $$O(n)$$ for a list of $$n$$ elements, you get an implementation of complexity $$O(n^2)$$. Better use Table or, if you don't know how long the list is about to get, use Sow and Reap. InternalBag is a further option, and it is even compilable.
• Thanks, that actually helped a lot. I just dont understand how to use Sow and Reap to avoid Append To be more specific: instead of ParallelDo I use now ParallelTable: eq = ParallelTable[ FNumeric[ SetPrecision[N[monlistnumeric[[i]] + monlistnumeric[[j]], 20], 10]] , {j, jj}]; FNumeric is a function, that returns either 0 or a value I want to store. I then do eq = DeleteCases[eq, 0]; candidates = Join[candidates, eq]; Is there a more efficient way to do this? – Matthias Heller Apr 11 '19 at 12:55
• @MatthiasHeller, you're welcome. How is this new code related to your post? You should consider a new post with your real problem and all relevant data. I may have a look. In general, depending on the details, there are various ways to perform the computation efficiently; these way might not use Parallel at all, but rather Compile`d code. – Henrik Schumacher Apr 11 '19 at 13:13
|
2020-10-24 09:23:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46742597222328186, "perplexity": 1375.5622738309169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882103.34/warc/CC-MAIN-20201024080855-20201024110855-00526.warc.gz"}
|
http://xaktly.com/Math_FTOC.html
|
### The fundamental theorem(s) of calculus – two flavors
The fundamental theorem of calculus (FTOC) is divided into parts. Often they are referred to as the "first fundamental theorem" and the "second fundamental theorem," or just FTOC-1 and FTOC-2.
Together they relate the concepts of derivative and integral to one another, uniting these concepts under the heading of calculus, and they connect the antiderivative to the concept of area under a curve.
### What they say
FTOC-1 says that the process of calculating a definite integral to find the area under a curve, say between x=a and x=b, is nothing more than finding the difference in the antiderivative of the integrand evaluated at points a and b. That's actually quite a remarkable result.
FTOC-2 is a little more abstract, but very important. It turns the definite integral into a function, one with the independent variable as the upper limit of integration, that accumulates area under a curve. This concept is important because it allows us to create a whole new class of useful functions that are only defined by the integral – integral-defined functions. One such example is the Gaussian distribution function used in statistics and probability, but many exist. Most importantly, the FTOC-2 establishes that differentiation and integration are inverse procedures.
We'll start with FTOC-1 and in this section we'll use capital letters for functions that are antiderivatives of their lower-case counterparts. So from here on you can assume that F(x) is the antiderivative of f(x), G(x) is the antiderivative of g(x), and so on. Here's the statement of FTOC-1:
Note that some sources swap the numbering of FTOC-1 and 2 from what I use here. It doesn't matter ... it's the concepts that are important.
#### Fundamental theorem – part 1
The first part of the fundamental theorem says that a definite integral, representing the area under a curve between two points, a & b, in its domain, which we understand to be a Riemann sum taken to the limit of the sum of an infinite number of rectangles of infinitessimal width, is just the simple difference of two antiderivatives, F(b) and F(a):
### Proof
We begin by converting the difference F(b) - F(a) into a sum of smaller differences. The figure below shows graphically how this is done. If we plot F(x), we can divide it into segments with endpoints xo = a, x1, x2, ... and so on. I've only gone up to x5 here, but we could make these segments as narrow as we'd like. We'll call the endpoints a and b, where a = xo and b = x5
If we calculate the widths of the segments along the y-axis, we find widths of F(b) - F(x4), F(x4) - F(x3), and so on. Notice (right column) that if we add all of these segments, we get F(b) - F(a) because of all the ± cancellations.
So we have
$$F(b) - F(a) = \sum_{i = 1}^5 \, F(x_i) - F(x_{i - 1})$$
where the summation is [F(x1) - F(xo)] + [F(x2) - F(x1)] + ... + [F(x5) - F(x4)]. Now we can in fact make any number of these partitions, so let's just make this small change to reflect that:
$$F(b) - F(a) = \sum_{i = 1}^N \, F(x_i) - F(x_{i - 1})$$
So far we have restated the right side of the FTOC-1, F(b) - F(a), as a sum of smaller divisions of the antiderivative function.
The next step is to recall the mean value theorem, which says that for every continuous function on an interval [a, b], there exists a number, c, at which the derivative (slope) of the function, f'(c) is equal to the average slope between a and b:
Remember that we really don't care where c is, just that it exists in the interval of interest. We'll rearrange that to
$$f(b) - f(a) = f'(c) (b - a)$$
Now the mean value theorem guarantees the existence of the point c on any interval, including [xi, xi-1], so we can rewrite the MVT like this: There must exist some ci in [xi, xi-1] such that F(xi) - F(xi-1) = F'(ci)(xi - xi-1). This is just the MVT re-expressed for each of our sub-intervals of [a, b].
Here that is again,
$$F(x_i ) - F(x_{i - 1}) = F'(c_i)(x_i - x_{i - 1})$$
and if we remember that because F(x) is an antiderivative of f(x), then F'(x) = f(x), we get
Now if we replace xi - xi-1 with Δx, and sum each side from 1 to N (the number of partitions), we get
$$\sum_{i - 1}^N \, F(x_i) - F(x_{i - 1}) = \sum_{i = 1}^N f(c_i) \Delta x$$
In the first part of the proof, we showed that the sum on the left is just F(b) - F(a), so we have
$$F(b) - F(a) = \sum_{i = 1}^N \, f(c_i) \Delta x$$
Finally, what's on the right is just a Riemann sum integral of the area under f(x), where the MVT guarantees that there is some point c somewhere in each partition, and Δx is just the width of the partition. As the width of those partitions (rectangles) goes to zero (Δx dx), we get the integral of the function:
$$F(b) - F(a) = \int_a^b \, f(x) \, dx$$
Quod erat demonstrandum
#### Area under a curve
FTOC-1
The area bounded by a smooth curve f(x), the x-axis, and the lines x=a and x=b is just the difference between the antiderivative of f(x) evaluated at points a and b.
where F'(x) = f(x).
#### The value of FTOC-1 cannot be understated
It's worth thinking about the first fundamental theorem of calculus one more time. It says the the integral representing the area between a function and the x-axis, an infinite sum of infinitely narrow rectangles, can be reduced to a simple difference of an antiderivative taken at the endpoints of the domain of integration [a, b].
### Part the second — FTOC-2
The second part of the fundamental theorem is of the more difficult bits of calculus to wrap your head around, so be sure to give it some time. Look at it often and work through the proofs and some examples. Like many concepts that are difficult at first, the more you look at it and work with it, the easier it gets, so hang in there.
#### Fundamental theorem – part 2
If f is a continuous function on the interval [a,b], then f has an antiderivative in [a,b].
Let the function be that antiderivative. Then:
Well, this is a very odd statement. our independent variable, x, is now the upper limit of the integral, and we are meant to treat t as a dummy variable, to be used for integration purposes only. The FTOC-2 says formally that differentiation and integration are inverse operations. Notice in the last line of equations in the box above that one need not actually do the integral to find its derivative. You only need to rewrite f(t) with x inserted for t.
The last line of the box expresses three layers of functions and operations. Inside is the function f(t), a 1:1 function that assigns one y-coordinate to every value of t.
That is inside of an integral function, with independent variable x. That integral function calculates an area below f(t) between limits a and x. Finally, all of that is inside of a derivative.
### An area-accumulation function
Another way to look at it is that we've invented a new kind of function, G(x), an integral-defined function with its independent variable as one of the limits. It's an area-accumulation function: As x grows, the amount of area under the curve increases.
### Derivative of an integral function – a graphical interpretation
Here's a nice graphical interpretation of why the second FTOC works. Take a function f(t) and graph it. Then it's easy to interpret the integrals between a and x, & a and x+h as areas:
Now if we focus on the area between x and x + h, we can express that area two different ways:
The area is approximately equal to f(x) · h,
$$A(x + h) - A(x) \approx f(x) \cdot h$$
Now dividing by h gives us an expression on the left that looks like the derivative:
$$\frac{A(x + h) - A(x)}{h} \approx f(x)$$
If we take the limit as h →0, we see that the derivative of the area function is just f(x):
$$\lim_{h\to 0} \, \frac{A(x + h) - A(x)}{h} = f(x)$$
### Area accumulation as a function | An example
The graphs below should help you understand the difference between a function and that function as used to make an integral-defined function. The panel on the left (orange) shows f(x) = sin(x2), which does not have an analytic integral (you can't just solve it on paper – it has to be done numerically). You can see that it has regions of positive and negative area, the orange shaded regions.
The
If you imagine moving our vertical line along the independent variable x, sweeping out area under the curve, that the total area would oscillate as we add negative and positive areas. It's not a stretch to see how the purple curve could be a graph of that area as a function of x. The purple graph is the integral-defined function. It's actually a pretty important function in the field of optics, and it's called the Fresnel (pronounced fruh · nel') function.
### Example: f(t) = 2t
In this example, we can easily compare the area defined by the integral with the area calculated geometrically. The area of the purple triangle under the linear function f(t) = 2t is (1/2)(x)(2x) = x2.
If we integrate (note that the lower limit is zero), then take the derivative of the result, after evaluating the limits, we get:
\begin{align} G'(x) &= \frac{d}{dx} \left[ \int_0^x \, 2t \, dt \right] \\ \\ &= \frac{d}{dx} \, x^2 = 2x = f(x) \end{align}
### The lower limit of integration doesn't matter ...
Now let's do that same problem but this time we'll use a non-zero lower limit of integration, a. We'll take the derivative with respect to x of this integral:
The graph is shown below, and the full integral is worked out. When the limits are evaluated, the value of the integral is x2 - a2.
Now we take the derivative with respect to x and the derivative of the lower limit, just the constant a2, is zero. The lower limit doesn't matter.
\begin{align} G'(x) &= \frac{d}{dx} \, \left[ \int_a^b \, 2t \, dt \right] \\ \\ &= \frac{d}{dx} (x^2 - a^2) = 2x \end{align}
So the FTOC-2 is pretty weird. Notice that when taking the derivative of such an integral, we don't actually need to integrate. We just replace t in the integrand by x, and that's it. Couldn't be simpler.
### Proof of the FTOC-2
The FTOC-2 posits that:
$$G'(x) = \frac{d}{dx} \left[ \int_a^b \, f(t) \, dt \right]$$
$$\text{for all} \; x \in [a, b]$$
So we need to prove that G'(x), as defined, is equal to f(x). To do so, we define two antiderivatives, G(x) and G(z) according to FTOC-2:
$$G(x) = \int_a^x f(t) dt \; \; \text{and} \; \; G(z) = \int_a^z f(t) dt$$
Now we're going to work toward a merging of the average value of an integral with the definition of a derivative, so the next step is to take the difference between G(z) and G(x), and we'll assume that z > x.
\begin{align} G(z) - G(x) &= \int_a^z f(t) dt - \int_a^x f(t) dt \\ \\ &= \int_a^z f(t) dt + \int_x^a f(t) dt = \int_x^z f(t) dt \end{align}
Now the average value of that integral is just the sum of all the f(t)'s over the interval, divided by the interval itself, (z - x). We'll name that average f(c) (with no particular meaning intended for the letter 'c')
\begin{align} f(c) &= \frac{1}{z - x} \int_x^z \, f(t) \, dt \\ \\ &\longrightarrow \int_x^z \, f(t) \, dt = (z - x) f(c) \end{align}
Now here's the crux: There's another way to calculate that same average. It's just the change in rise of the antiderivatives over the change in the independent variable t. It is:
$$\frac{G(z) - G(x)}{z - x} = f(c)$$
This looks like a derivative; it's just lacking the limit as x → z to give G'. Recall that we're trying to show that G'(x) = f(x). If we take that limit on both expressions for the average of the integral, we end up "squeezing" f(c) between x and z. After all, the average will always lie between the two extremes. At the limit where x = z, f(c) = f(x), and we've proved our theorem.
$$G'(x) = \lim_{z\to x} \frac{G(z) - G(x)}{z - x} = \lim_{z\to x} f(c)$$
$$G'(x) = f(x)$$
### A second proof of FTOC-1
With FTOC-2 proved (just above), we can use that result to prove FTOC-1, which says:
\begin{align} If \: F'(x) &= f(x), \\ \\ then \: \int_a^b \, f(x) \, dx &= F(b) - F(a) \end{align}
Now we've proved that G(x) is antiderivative of f(x),
$$G(x) = \int_a^x \, f(t) \, dt$$
so F(x), postulated to be an antiderivative of f(x), must be equal to G(x) to within an additive constant:
$$F(x) = G(x) + C$$
Then we can simply write
\begin{align} F(b) - F(a) &= [G(b) + C] - [G(a) + C] \\ \\ &= G(b) - G(a) \end{align}
$$= \int_a^b f(x) dx - \int_a^a f(x) dx = \int_a^b f(x) dx,$$
where the second integral is zero. Thus we have proved the FTOC-1.
### Example 1
Find the integral and its derivative: $\int_0^x \, t^2 \, dt$
Solution: Let's first find the integral in the straightforward way, using the power rule of integration and evaluating the limits:
$$\int_0^x \, t^2 \, dt = \frac{t^3}{3} \bigg|_0^x = \frac{x^3}{3}$$
Now the derivative of the integral is:
$$\frac{d}{dx} \frac{x^3}{3} = x^2$$
which is just the integrand of our original integral, with t replaced by x. And that will be the case in all such problems. All together it looks like this:
$$\frac{d}{dx} \left[ \int_0^x \, t^2 \, dt \right] = x^2$$
#### The lower limit doesn't matter
Now one thing you might be wondering about is the lower limit of integration, x=0. Let's repeat this problem, except this time with a finite lower limit; let's call it a.
$$\int_a^x \, t^2 \, dt$$
Do the integral in the same way, except now we get the answer above with a constant (-a3/3) added to it:
$$\int_a^x \, t^2 \, dt = \frac{t^3}{3} \, \bigg|_a^3 = \frac{x^3}{3} - \frac{a^3}{3}$$
Now if we take the derivative, it's the same because the second term is constant. What we find is that the lower limit just doesn't matter in this kind of expression of FTOC-2.
$$\frac{d}{dx} \left[ \frac{x^3}{3} - \frac{a^3}{3} \right] = x^2$$
Putting it all together, the statement is:
$$\frac{d}{dx} \left[ \int_a^x \, t^2 \, dt \right] = x^2$$
### Example 2: You don't even need to integrate
In the simple example to the right, we integrate as usual, evaluating the integral at limits a and x. Notice that the dummy variable t is now gone and the independent variable is x. If we now take the derivative of the result with respect to x, we just get the integrand (t-3) back, but with x substituted by t. The lower limit contributed nothing to the derivative because the integral evaluated there is a constant.
So you see, in these problems, there's no need to integrate at all. It only becomes more complicated when that x in the upper limit is a function of x, so that we'll need some kind of chain rule analogous to the chain rule of differentiation.
\begin{align} G'(x) &= \frac{d}{dx} \, \int_a^x \, t^{-3} \, dt \\ \\ &= \frac{d}{dx} \frac{-1}{2 t^2} \, \bigg|_a^x \\ \\ &= \frac{d}{dx} \left[ \frac{-1}{2x^2} + \frac{-1}{2a^2} \right] \\ \\ G'(x) &= x^{-3} \end{align}
### Example 3 — The chain rule
Consider a problem like this:
$$F(x^2) = \int_0^{x^2} \, ln(t) \, dt$$
Now instead of just having an independent variable as the upper limit of integration, we have a function of that variable — it's like a chain rule problem in differentiation. Think of it like this: If
$$F(x) = \int_0^x \, ln(t) \, dt$$
then
$$F(x^2) = \int_0^{x^2} \, ln(t) \, dt$$
Now using the chain rule of differentiation, the derivative of F(x2) is the derivative of the outer function F with respect to x2 times the derivative of x2.
Now we can just plug in the solution:
$$\frac{d}{dx} \, \int_0^{x^2} \, ln(t) \, dt = 2x \, ln(x^2)$$
With a little practice, you'll recognize that these are just like other chain-rule derivatives you've done.
### Practice problems
(1-4) Find G'(x)
1 $$G(x) = \int_a^x \, \frac{t^4}{4} \, dt$$ Solution $$G'(x) = \frac{x^4}{4}$$ 2 $$G(x) = \int_{-\pi}^x \, sin(\theta)\, d\theta$$ Solution $$G'(x) = sin(x)$$ 3 $$G(x) = \int_a^x \, cos(t) \, dt$$ Solution $$G'(x) = cos(x)$$ 4 $$G(x) = \int_{-1}x \, \frac{dt}{t}$$ Solution $$G'(x) = \frac{1}{x}$$
(5-8) Find the derivative:
5 $$\frac{d}{dx} \int_{-2}^{2x^2} \, sin(t) \, dt$$ Solution $$= 4x\cdot sin(x)$$ 6 $$\frac{d}{dx} \int_a^{sin(x)} \, \frac{1}{t} \, dt$$ Solution $$= \frac{cos(x)}{x}$$ 7 $$\frac{d}{d\theta} \int_0^{tan(\theta)} \, sec^2(t) \, dt$$ Solution $$= sec^4(\theta)$$ 8 $$\frac{d}{dx} \int_{sin(x)}^{cos(x)} \, 2 sin(t) \, dt$$ Solution $$= 2\cdot sin^2(x)$$
9 If $f(x) = \int_2^{2x} \frac{1}{\sqrt{t^3 + 1}} dt,$ then $f'(1) = ?$ Solution 10 Find the approximate average rate of change of the function $f(x) = \int_0^x sin(t^2) \, dt$ over the interval [1, 3]. Solution 11 On what interval is the graph of $g(x) = \int_0^x sin(2t) \, dt$ both decreasing and concave-upward ? Solution 12 If $g(x) = \int_{\pi/2}^x cos(t) \, dt,$ find the maximum value of g on the closed interval $[-\pi, \, \pi]$. Solution 13 If $g(x) = \int_0^x \, cos(e^{t/2}) \, dt$ for -1 ≤ x ≤ 4, find the instantaneous rate of change of g with respect to x at x = 4. Solution
### A whole new class of useful functions – integral-defined functions
We can use the FTOC-2 to create a bunch of new and useful new functions. One is the Gaussian function, more commonly known as the bell-shaped curve or bell curve, that we use in probability and statistics. It looks like the curve plotted below. A stripped-down version of the equation is:
You can read a lot more about this function in the section on probability distributions. What's important about it for our purpose here is the area under the curve (which is symmetric across the line x=0). The area between the limits -∞ and ∞ should equal one because it represents the total probability of an event happening at all, and we often include other factors to "normalize" it, or to force the total area under the curve to be 1. The ratio of any lesser area, like the one between ±a in the plot below, to that total is equal to the probability of an event occuring.
This integral can't be done analytically (with paper and pencil) – it has to be done by numerical methods, but we can still easily find its first and second derivatives through FTOC-2, and thus plot the function very well.
#### The Fresnel function
Another important curve defined by an integral function is the Fresnel function (fruh · nel'), graphed here on the right. We also considered it above. This function is very important in certain kinds of optics applications.
The Fresnel cosine function is also used frequently, depending on the situation.
xaktly.com by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to [email protected].
|
2019-11-17 09:21:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 7, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911127090454102, "perplexity": 407.33679368318576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668910.63/warc/CC-MAIN-20191117091944-20191117115944-00524.warc.gz"}
|
https://stats.stackexchange.com/questions/238892/what-is-the-distribution-of-the-maximum-of-a-set-of-random-variables?noredirect=1
|
What is the distribution of the maximum of a set of random variables? [duplicate]
I am trying to find the distribution of the maximum of a set of four continuous independent random variables that have a general distribution.
I have found resources that discuss how to find such a distribution when the probability density function is known, but I am curious to know the generalized solution.
marked as duplicate by kjetil b halvorsen, Michael Chernick, Peter Flom♦Jun 13 '18 at 20:59
The general solution is exactly the same as a particular solution. $$P(\max_i X_i \leq t) = \prod_i P(X_i \leq t) = \prod_i F_{X_i}(t).$$
|
2019-04-25 06:02:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7427784204483032, "perplexity": 160.50259346175878}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578689448.90/warc/CC-MAIN-20190425054210-20190425080210-00443.warc.gz"}
|
http://mathhelpforum.com/advanced-statistics/191277-question-about-vector-print.html
|
• November 6th 2011, 06:13 AM
ChinaPanda
If x is a vector,and center it on the origin,but what dose the formula (x.1=0) mean?I am puzzled about why there is a point symbol behind vector x,and what does 1 mean?
I really appreciate your help!(Nod) The image below is the original text
Attachment 22683
• November 6th 2011, 06:20 AM
Moo
Hello,
Common notation for the scalar product.
• November 6th 2011, 06:32 AM
ChinaPanda
hi,can you speak in detail? My english is very poor,and can you give me a example? thank you very much
• November 6th 2011, 06:35 AM
ChinaPanda
You mean the point symbol means scalar product?but 1 means a vector that each element equals to 1?
• November 6th 2011, 06:36 AM
ChinaPanda
Quote:
Originally Posted by Moo
Hello,
Common notation for the scalar product.
You mean the point symbol means scalar product?but 1 means a vector that each element equals to 1?
• November 6th 2011, 06:38 AM
Moo
Quote:
Originally Posted by ChinaPanda
You mean the point symbol means scalar product?but 1 means a vector that each element equals to 1?
Yes, 1 stands for $\begin{pmatrix} 1\\1\\ \vdots \\ 1\end{pmatrix}$
This scalar product will actually give the sum of x's components.
duibuqi, wo bu hui shuo zhongwen shuxue :D
• November 6th 2011, 06:45 AM
ChinaPanda
Yes, 1 stands for $\begin{pmatrix} 1\\1\\ \vdots \\ 1\end{pmatrix}$
|
2014-08-22 09:21:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7768970131874084, "perplexity": 5265.731079958299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823333.10/warc/CC-MAIN-20140820021343-00197-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://gamepeaks.club/how-do-you-show-formulas-in-excel/
|
# How Do You Show Formulas In Excel
how do you show formulas in excel for the formulas to be hidden you must also protect the worksheet to do this select the review tab in the toolbar at the top of the screen how do you show formulas in excel 2007.
how do you show formulas in excel how do you show the formulas in excel.
how do you show formulas in excel how to subtract in excel how do you show formulas in excel spreadsheet.
how do you show formulas in excel you can use the autosum wizard to automatically build a sum formula select a range how do you show formulas in excel for mac.
how do you show formulas in excel excel record macro how do you show formulas in excel.
how do you show formulas in excel paste special with values add how do you show formulas in excel 2013.
how do you show formulas in excel excel goals seek dialog how do you show formulas in excel when printing.
how do you show formulas in excel then copy and paste formula ifa2f2 yes no into the formula bar and press the enter key see screenshot how do you show formulas in excel 2003.
how do you show formulas in excel 09noformulashowing how do you show formulas in excel.
how do you show formulas in excel 01addingcommenttoformula how do you show formulas in excel when printing.
how do you show formulas in excel display formulas in the worksheet to help check for possible errors how do you show all formulas in excel.
how do you show formulas in excel if you have classic menu for excel 2010 installed you can how do you show formulas in excel 2007.
how do you show formulas in excel auditing formulas in excel precedents dependents c how do you show all formulas in excel.
how do you show formulas in excel show formulas excel how do you show formulas in excel 2010 when printing.
how do you show formulas in excel click the trace dependents button to show formulas that reference the selected cell how do you show formulas in excel 2007.
how do you show formulas in excel 04selectingformatcells how do you show formulas in excel 2013.
how do you show formulas in excel learn how to highlight rows in excel based on duplicates accountingweb how do you show formulas in excel 2010.
how do you show formulas in excel standard error excel 2013 how do you show calculations in excel.
how do you show formulas in excel in the opening view options dialog box please check the formulas option or uncheck the formulas option for hiding formulas firstly next click the apply how do you show formulas in excel 2007.
how do you show formulas in excel adding function arguments how do you show formulas in excel for mac.
how do you show formulas in excel adding up more than 24 hours in excel how do you show formulas in excel.
how do you show formulas in excel excel index function formulas in excel cells now under this group enable show formulas in cells instead of their calculated result option click ok to continue.
how do you show formulas in excel enter the cell range for your list of numbers in the number 1 box for example if your data were in column a from row 1 to 13 you would enter a1a13 how do you show formulas in excel 2007.
how do you show formulas in excel to apply the formula to all cells in a column convert a range of cells how do you show formulas in excel 2013.
how do you show formulas in excel displaying a month name instead of a date how to edit evaluate and debug formulas in excel click the trace dependents button to show formulas that reference the selected cell.
how do you show formulas in excel percent change formula in excel how do you show formulas in excel for mac.
how do you show formulas in excel now under this group enable show formulas in cells instead of their calculated result option click ok to continue how do you not show formulas in excel.
how do you show formulas in excel excel 2016 show formulas small screen how do you show formulas in excel 2013.
how do you show formulas in excel how do you show formulas in excel 2013.
how do you show formulas in excel in the view options dialog box check or uncheck the formula bar option under the application settings section see screenshot displaying excel formulas within worksheet cell comments figure use the assigned shortcut key to display the formula within a cell comment.
how do you show formulas in excel case insensitive if formula for text values how do you show formulas in excel 2003.
how do you show formulas in excel fred pryor seminarsexcel formula remove spaces3 how do you show formulas in excel.
how do you show formulas in excel how do you show formulas in excel spreadsheet.
how do you show formulas in excel excel formula maximum if multiple criteria how do you show all formulas in excel.
how do you show formulas in excel microsoft excel how do you show formulas in excel 2013.
how do you show formulas in excel hide or show formulas hide or show formulas how do you show calculations in excel.
how do you show formulas in excel add the total row to sum only visible cells or use the sum function to how do you show formulas in excel 2003.
how do you show formulas in excel view formula in formula bar how do you show formulas in excel 2010.
how do you show formulas in excel example 6 formulas in data validation how do you show the formulas in excel.
how do you show formulas in excel screenshot of excel 2013 how do you show formulas in excel for mac.
how do you show formulas in excel how to create multiple formulas for the same space in excel techwallacom how do you show the formulas in excel.
how do you show formulas in excel excel logical formulas screen3 use an if statement to calculate sales bonus commissions how do you show formulas in excel.
how do you show formulas in excel excel3 how do you show the formulas in excel.
how do you show formulas in excel microsoft 2007 excel formulas how do you show formulas in excel 2010.
how do you show formulas in excel excel formula count cells that contain negative numbers how do you show formulas in excel.
how do you show formulas in excel excel file with suppressed divide by 0 error message how do you show formulas in excel 2013.
how do you show formulas in excel use excels autosum feature to sum filtered cells automatically how do you show formulas in excel 2010 when printing.
how do you show formulas in excel display times as hoursminutes in excel how do you show the formulas in excel.
how do you show formulas in excel show or hide the formula bar with kutools for excel how do you show formulas in excel spreadsheet.
how do you show formulas in excel formula in cell h5 how do you show calculations in excel.
how do you show formulas in excel excel formula if else how do you not show formulas in excel.
how do you show formulas in excel 3 click ok button to save the settings how do you show formulas in excel 2007.
how do you show formulas in excel and then you will see all formulas in all worksheets or active worksheets if you didnt click the apply to all sheets button are displaying automatically display times as hoursminutes in excel accountingweb display times as hoursminutes in excel.
how do you show formulas in excel 07protectsheetdialog how do you show formulas in excel.
how do you show formulas in excel ad selection lock unlock 1 how do you show formulas in excel 2013.
how do you show formulas in excel how do you not show formulas in excel.
how do you show formulas in excel excel logical formulas screen1 use an if statement to flag past due accounts so you can send notices to those customers how do you show all formulas in excel.
how do you show formulas in excel all formulas are selected how do you show formulas in excel when printing.
how do you show formulas in excel click inside the cell where you want the result from your subtraction how do you show formulas in excel 2010 when printing.
how do you show formulas in excel excel if function how do you show formulas in excel 2010 when printing.
how do you show formulas in excel how to divide in excel using a formula how do you show formulas in excel 2010 when printing.
how do you show formulas in excel microsoft excel how do you show formulas in excel 2010 when printing.
how do you show formulas in excel slide1 how do you show formulas in excel for mac.
how do you show formulas in excel use absolute or mixed cell references to copy the formula without changing references how do you show formulas in excel spreadsheet.
how do you show formulas in excel how to show formulas in cells and hide formulas completely in excel 2013 how do you show all formulas in excel.
how do you show formulas in excel unhide formula bar using excel options how to show or hide formula bar in excel in the view options dialog box check or uncheck the formula bar option under the application settings section see screenshot.
how do you show formulas in excel excellent programs excel formulas how do you show formulas in excel 2003.
how do you show formulas in excel 13several conditional formulas conditional formatting excel preventing excel divide by error productivity portfolio excel file with suppressed divide by error message.
how do you show formulas in excel enter an excel formula into multiple cells with a single key stroke how do you show formulas in excel 2003.
how do you show formulas in excel microsoft excel how do you show the formulas in excel.
how do you show formulas in excel microsoft excel how do you show calculations in excel.
how do you show formulas in excel lookup values to left in excel using the index match function how do you show formulas in excel.
how do you show formulas in excel ways to save time with excel formulas exceljet all formulas are selected.
how do you show formulas in excel show formulas in excel and google sheets using shortcut keys how do you not show formulas in excel.
how do you show formulas in excel microsoft excel how do you not show formulas in excel.
how do you show formulas in excel but in this case only one or more of the conditions within the or function needs to be true for the condition to be met microsoft excel how do you show the formulas in excel.
how do you show formulas in excel after protecting current worksheet all formulas in selected cells wont show in the formula bar how do you show formulas in excel.
how do you show formulas in excel this formula will sum b and display 30 because its test is true how do you not show formulas in excel.
how do you show formulas in excel use the fill without formatting option to copy a formula but not formatting how do you show formulas in excel 2003.
how do you show formulas in excel show formulas in cells instead of their results option how do you not show formulas in excel.
how do you show formulas in excel excel divide by zero error how do you show calculations in excel.
how do you show formulas in excel figure 6 use the assigned shortcut key to display the formula within a cell comment how do you not show formulas in excel.
how do you show formulas in excel the formula or fx bar in excel and google spreadsheets how do you not show formulas in excel.
how do you show formulas in excel enabling circular references in excel how do you show all formulas in excel.
how do you show formulas in excel step 10 once you have the formula in the first cell you can now copy that cell and paste it into all the other cells that require the same formula how do you show formulas in excel 2013.
how do you show formulas in excel select formulas how do you show formulas in excel.
how do you show formulas in excel highlight a value closest to the given value but ignore the exact match how do you show formulas in excel for mac.
how do you show formulas in excel 02clickingshowformulas how do you show formulas in excel 2007.
how do you show formulas in excel excel copies the formula down how do you show formulas in excel 2013.
how do you show formulas in excel show formulas how do you show formulas in excel when printing.
how do you show formulas in excel print all worksheets with formulas displaying with kutools for excel how do you show formulas in excel when printing.
how do you show formulas in excel excel countif function how do you show formulas in excel.
how do you show formulas in excel how this formula works it first searches vertically down the first column vlookup is short for vertical lookup when it finds photo frame how do you show formulas in excel.
how do you show formulas in excel note you can also get this show formula bar option by clicking the file or office button options advanced display show formula bar how do you show formulas in excel 2010 when printing.
how do you show formulas in excel show and hide formulas in excel how do you show formulas in excel spreadsheet.
how do you show formulas in excel microsoft excel how do you show formulas in excel for mac.
how do you show formulas in excel learn how to highlight rows in excel based on duplicates accountingweb how do you show calculations in excel.
how do you show formulas in excel press enter to copy formula down how do you show formulas in excel 2013.
how do you show formulas in excel as you can see in the illustration the function wizard automatically adds the required quotation marks around your plain text entries yes or no how do you show formulas in excel spreadsheet.
|
2018-12-13 11:44:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8149706125259399, "perplexity": 1548.1781918276738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824675.15/warc/CC-MAIN-20181213101934-20181213123434-00505.warc.gz"}
|
https://www.ies.org/definitions/bidirectional-transmittance-distribution-function-btdf-ft20/
|
[7.5.3.16] The ratio of the differential luminance dLt(θt, ϕt) for a ray transmitted in a given direction(θt, ϕt) to the differential luminous flux density dEi(θi, ϕi), incident from a given direction (θi, ϕi), that produces it (see Figure 23).
$f_{t}\left (\theta _{i},\phi _{i}; \theta _{t},\phi _{t} \right )$ $\equiv$ $\frac{dL_{t}\left ( \theta _{t},\phi _{t} \right )}{dE_{i}\left ( \theta _{i},\phi _{i} \right)}$ $\left ( sr\right )^{-1}$
$=$ $\frac{dL_{t}\left ( \theta _{t},\phi _{t} \right )}{L_{i}\left ( \theta _{i},\phi _{i} \right )d\Omega _{i}}$ ,
where dΩ = dω · cos θ
Notes:
(i) This distribution function is the basic parameter for describing (geometrically) the transmitting properties of a thin scattering film (with negligible internal scattering) so that the transmitted radiation emerges from a point that is not significantly separated from the point of incidence of the incident ray(s). The governing considerations are similar to those for application of the bidirectional reflectance-distribution function (BRDF), rather than the bidirectional scattering-surface reflectance-distribution function (BSSRDF), as discussed in Nicodemus.*
(ii) It may have any positive value and will approach infinity in the direction for regular transmission (possibly refraction but without scattering).
(iii) The spectral and polarization aspects must be defined for complete specification, since the BTDF as given above only defines the geometrical aspects.
* Nicodemus FE. et al, Geometrical Considerations and Nomenclature for Reflectance, NBS Monograph 160. Washington DC: U.S. Department of Commerce; Oct 1977.
« Back to Definitions Index
|
2018-04-23 15:08:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 6, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6877694129943848, "perplexity": 3938.1675511449866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946077.4/warc/CC-MAIN-20180423144933-20180423164933-00626.warc.gz"}
|
https://docs.analytica.com/index.php/Probability_density_and_mass_graphs
|
# Probability density and mass graphs
When you select the Probability density as the uncertainty view for a continuous variable, it graphs the distribution as a Probability Density function. The height of the density shows the relative likelihood the variable has that value.
Technically, the probability density of variable X, means the probability per unit increment of X. The units of probability density are the reciprocal of the units of X — if the units of X are dollars, the units of probability density are probability per dollar increment.
If you select Probability density as the uncertainty view for a discrete variable, it actually graphs the Probability Mass function — using a bar graph style to display the probability of each discrete value as the height of each bar.
Similarly, if you choose the cumulative probability uncertainty view for a discrete variable, it actually displays the cumulative probability mass distribution as a bar graph. Each bar shows the cumulative probability that X has that value or any lower value.
Is a distribution discrete or continuous? Almost always, Analytica can figure out whether a variable is discrete or continuous, and so choose the probability density or probability mass view as appropriate — so you don’t need to worry about it. If the values are text, it knows it must be discrete. If the numbers are integers, such as generated by Bernoulli, Poisson, binomial, and other discrete parametric distributions, it also assumes it is discrete.
Infrequently, a discrete distribution can contain numbers that are not integers, which it might not recognize as discrete, for example:
Chance Indiscrete := Poisson(4)*0.5
In this case, you can make sure it does what you want by specifying the domain attribute of the variable as discrete (or continuous). The next section on the domain attribute explains how.
|
2023-01-29 05:51:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7086443901062012, "perplexity": 423.6410787741521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499700.67/warc/CC-MAIN-20230129044527-20230129074527-00761.warc.gz"}
|
https://itectec.com/database/timestamp-based-concurrency/
|
# Timestamp based concurrency
concurrencytimestamptransaction
I am wondering if someone could help me out with a question I have about time stamp based concurrency.
I found the following practice problem on a website
It basically asks which of the following 4 will occur for the LAST transaction:
(a) the request is accepted.
(b) the request is ignored.
(c) the transaction is delayed.
(d) the transaction is rolled back.
The stands for the order of the time stamps and the = read = write
st1; st2; r2(A); co2; r1(A); w1(A)
The solution they give is choice (d) which had me kind of confused as to how they are coming to that. I may be mistaken about how the time stamps work but I thought that if a transaction wishes to access a shared variable it must have the earliest time stamp. In this example transaction 2 tries to read(A) first but I thought that should be delayed (since transaction 1 has the earlier of the 2 time stamps. So once we get to the r1(A), w1(A) since transaction 1 has the earlier stamp that should succeed as I understand it so I'm not sure why they choose (d). If I am misunderstanding the concept could someone please explain it to me I would greatly appreciate it.
|
2021-10-28 01:39:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40337073802948, "perplexity": 551.5656279321246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588246.79/warc/CC-MAIN-20211028003812-20211028033812-00542.warc.gz"}
|
https://gamedev.stackexchange.com/questions/115125/where-to-cast-light-shadows-in-a-2-5d-view
|
# Where to cast light/shadows in a 2.5D view?
I'm working on a tile-based 2D pseudo top-down/side-on game very much in the graphical style of Prison Archiect. More specifically, I'm referencing their style of drawing wall tiles where you can see the south, east and west sides of the wall at once. Here's an example from my engine:
I'm also working on a pixel-based (i.e. not tile-based) lighting and shadows implementation. I'm currently struggling with trying to decide how/where to project from a light source on intersections with these wall tiles.
I can't decide where on these tiles should occlude light cast out by light sources. If the red highlighted areas are the "top" of the wall, and the blue highlighted areas are the "side" of the wall, I believe I have two options:
A) Only occlude light from the "top" of the wall
It's worth mentioning that I also plan to use UV-mapping so that only the walls facing the light source will be illuminated, rather than the pre-shaded tiles I'm using as an example. However, that would mean that the tiles adjacent to a wall in shadow may be lit and I don't think this would look quite right. Alternatively...
B) Occlude light from the entirety of the wall tile
This seems more realistic for the ground tiles but does not let me easily illuminate the wall "sides".
I'm not really happy with either solution so my question is: is there another alternative which will give more realistic shadow-casting in a 2.5D view? I'd also rather keep the sides of the walls visible rather than use a top-down only perspective as I feel this would force the rest of the art into a top-down perspective, rather than pseudo side-on.
Going to try and doodle up what I mean here as soon as I finish typing this, but:
Use the second (occlude by base) for everything that isn't a wall and the first (occlude by tops) for lighting the walls?
You actually did this by accident in your second example, with the wall that goes off the bottom of the image. Extending this to the remaining walls won't be perfect, but it would allow some lighting of the walls that will look pretty decent.
• Thanks, that should work great if I can get the directional facing of the walls to pick up lightly correctly. I was slowly coming to this realisation but I think I was put off by the fact it'll probably double the computation required. Still, I think it's the answer I was looking for! – Ross Taylor-Turner Jan 18 '16 at 20:43
• @RossTurner Sure thing :) I'm sure there will still be some "odd" results (such as light leaking through corners to illuminate walls) but for what you're trying to do, I think the result will be sufficiently simple and sufficiently accurate. – Draco18s no longer trusts SE Jan 18 '16 at 20:46
• Nice - simple and effective, probably doesn't need too much additional work, and seems to fit the graphical theme a bit better than my answer. – DoubleDouble Jan 18 '16 at 22:52
• good idea. I would also add a falloff circle of additive ambiant light in the dark zones, to fake GI and add some mood. this version is very dark and limbo-like. depends on what stress/relief you want to achieve though. – v.oddou Jan 19 '16 at 2:10
• @v.oddou The ambient light is much darker in this example than I'm actually planning for the finished result. Thanks for the input! – Ross Taylor-Turner Jan 19 '16 at 9:11
I won't be able to make an image for you, but one trick you could do to figure out if a piece of wall should light up is to take advantage of the 'alpha' channel for determining the direction the pixel is facing, as opposed to the opacity of the pixel.
You could then determine whether the pixel should be lit between the light source and the facing of the wall pixel (alpha value). Flat Shading in 3d rendering is a cheap and effective method that usually uses a similar algorithm using the normal of the plane and the light source's position/color. In your case, the normal is interpreted from the alpha, but you could use a similar algorithm, resulting in very 3D lighting for a 2D game (which is probably overkill but still cool, in my opinion)
//some psuedocode of the important parts
Vector2 lightPosition;
Vector2 pixelPosition;
Vector2 lightDirection = lightPosition - pixelPosition;
Vector2 normalDirection = //(translate into a rotation Vector)
lightDirection.normalize();
normalDirection.normalize();
angleBetweenLightAndNormal = dotProduct(normalDirection, lightDirection);
//determine from the angle whether the normal should be lit or not
You could scale a value of 0.00 to be facing down (or whatever direction you prefer) and the value of 1.00 to also be down, as if the direction had rotated 360 degrees. This means the value of up is 0.50
If you plan on including the top of walls or the ground you may even scale it a few values short, and keep specific reserved values to mean top or ground.
In fact, if you needed it to be as simple as possible:
//0.00 = down
//0.10 = left
//0.20 = right
//0.30 = up
//0.40 = floor
//0.50 = top of wall
which then leaves plenty more values for other situations.
The drawback is that you are taking over the alpha channel, which generally relates to opacity. This means both that your engine may take some modifying to ignore alpha, and, if you do need an alpha channel, you can't do this method directly.
You could create a new image with only lighting information instead. The nice thing about creating a new image with lighting information is that your RGB values can fully translate into a 3-directional rotation for the normal that pixel is facing, and the alpha could be the "height" of the pixel (as if it were in 3D space)
• Thanks, that's a great suggestion. I've also been considering doing uv mapping with a uv map texture in the rgb channel but as I don't really need a Z component as the lighting is on the same level as the objects being lit, so storing a 360 degree normal in the alpha is a nice idea. Having said that it may get a bit tricky to draw, I guess is have to write a custom image viewer to see the information, though that shouldn't be too bad either. One to think about! – Ross Taylor-Turner Jan 19 '16 at 9:05
• This answer reminds me that La Mulana 2 is actually a 3D game precisely for lighting purposes. It still plays like a 2D side scroller, but the actual level geometry is pushed out directly towards the screen so that when they use standard point lights, statues and such cast shadows. This answer is using pixel information to approximate that kind of effect, the mental visualization I made reading the answer reminded me of that team's approach. – Draco18s no longer trusts SE Jan 19 '16 at 15:30
I don't want to take anything away from @Draco18s' answer, but I went with his suggestion (combining the two) and ended up putting together a demonstration video on how it's done (for those interested) at https://www.youtube.com/watch?v=Cabl0LMmlgY
In addition to the quick sketch that he added, I ended up using normal maps on each "face" of the wall so that if there was any "bleed over" light, the angle of incidence means that it isn't illuminated.
|
2021-05-10 02:23:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.502824604511261, "perplexity": 968.1441592841965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.65/warc/CC-MAIN-20210510003422-20210510033422-00265.warc.gz"}
|
https://www.jiskha.com/similar?question=Determine+the+length+and+breadth+of+a+rectangle+if+the+length+is+3cm+less+than+twice+the+breadth%2C+and+the+perimeter+is+18+&page=385
|
# Determine the length and breadth of a rectangle if the length is 3cm less than twice the breadth, and the perimeter is 18
40,002 questions, page 385
1. ## Chemistry
[6 points] You need to make a solution of Ca(NO3)2 for a class demonstration. First, you measure a quantity of the solid Ca(NO3)2 by weighing the reagent container before obtaining the sample (4.2540 ± 0.0003 g) and after taking some reagent (3.9440 ±
asked by Alex on April 7, 2010
The poverty rate would be substantially lower if the market value of in-kind transfers were added to family income. The largest in-kind transfer is Medicaid, the government health program for the poor. Let's say the program costs $7,000 per recipient asked by Timica on October 12, 2011 3. ## Physics A plutonium- 239 nucleus initially at rest undergoes alpha decay to produce a uranium-235 nucleus. The uranium-235 nucleus has a mass of 3.90 x 10^-25 kg and moves away with a speed of 2.62 x 10^5 m/s. Determine the minimum wage electric potential asked by Veronica on May 22, 2017 4. ## Chemistry (PLZ HELP) 1. Outline a procedure to prepare an ammonia/ammonium buffer solution. I'm confused how to start it off. This is an outline of how the steps should be: Step One - Calculate the concentration of hydronium ions in the solution that requires buffering. You asked by Emily on May 24, 2018 5. ## chemistry FORMULA OF A HYDRATE Some salts, when crystallized from water solutions, retain definite proportions of water as an integral part of the crystal structure. This type of crystal is called a hydrate. In this experiment, you will determine the whole number asked by dat dude on November 27, 2018 6. ## English "Cynthia in the Snow" by Gwendolyn Brooks 1 It SUSHES 2 It hushes 3 The loudness in the road. 4 It flitter-twitter, 5 And laughs away from me. 6 It laughs a lovely whiteness, 7 And whitely whirs away, 8 To be 9 Some otherwhere, 10 Still white as milk or asked by i'm lazy and so are you on April 10, 2019 7. ## Creating a Business Plan Please check my answers thank you very much Based on the following information below please answer the questions that follow. Area A: Brooklyn Brooklyn is a very heavily populated urban area within commuting distance from Manhattan by subway, bus, or car. asked by Mari on March 14, 2009 8. ## penn foster Dr. Bob applies for medical staff privileges at General Hospital. The hospital administrator? A. is not required to look into Dr. Bob’s background. B. is required to query the National Practitioner Data Bank. C. is required to query the Data Bank unless asked by kristin on January 25, 2011 9. ## window 2003 server You are the network administrator for a telecommunications company in Rochester, New York. The network consists of two Windows Server 2003 systems and 57 Windows XP Professional systems. Both servers are used as domain controllers. One server hosts DHCP asked by ajay on February 1, 2007 10. ## help with language arts 1. read the following sentence: The reporter is heading to the marketplace to cry the news about the election results. Use the dictionary definition to determine the best meaning of cry as it used in this sentence. A. to call out for help B. to announce in asked by hay what doing on February 14, 2018 11. ## STATISTIC Joe dealt 20 cards from a standard 52-card deck, and the number of red cards exceeded the number of black cards by 8. He reshuffled the cards and dealt 30 cards. This time, the number of red cards exceeded the number of black cards by 10. Determine which asked by Vedrana on February 23, 2016 12. ## eth/125 ·Write a 700- to 1,050-word essay in which you answer the following questions: Conduct research to determine if the ethnic group colonized or if it immigrated to the United States. What country did they originate from and why did they enter the U. S.? Did asked by lisa on September 4, 2010 13. ## Math Heidi conducted a survey in her neighborhood to determine the restaurant her neighbors preferred. Restaurant Number of Neighbors Pams Diner 23 Nick's 36 Magnolia 12 Olive Groove 22 Fine Dining 20 Patsy's 22 Best Eatery 22 Heidi concluded that one of the asked by Callie on September 9, 2016 14. ## science Why do I have brown eyes? The genes we inherit from our parents determine things like our height, looks, hair color and eye color. This passing of characteristics from parent to child is called heredity. If your mother has brown eyes, and your father has asked by HERSHEYS on April 19, 2007 15. ## Chemistry Two Flasks containing the same inert gas are at the same temperature and pressure of 800 mmHg. One flask has volume of 1.0 L and the other, a volume of 2.0 L. Enough volatile liquid is injected into each of the flasks to allow phase equilibrium to be asked by Jason on January 31, 2007 16. ## Chem At 58.8 degrees C and at a total pressure of 1.00 atm the mole percent of acetone in the vapor state above a solution of acetone and water containing 70. mol % acetone is 87.5%. Assuming the solution to obey Raoult's Law, determine the vapor pressure of asked by Chris on March 27, 2007 17. ## Chemistry - Science (Dr. Bob222) A student wanted to determine the amount of copper in a sample of copper ore. The student dissolved a 2.500 ± 0.001 g piece of copper ore in about 75 mL of nitric acid; then, a complexing agent was added. The student transferred this solution to a asked by Anonymous on March 25, 2014 18. ## College Gen.Chemistry A student wanted to determine the amount of copper in a sample of copper ore. The student dissolved a 2.500 ± 0.001 g piece of copper ore in about 75 mL of nitric acid; then, a complexing agent was added. The student transferred this solution to a asked by Anonymous on March 25, 2014 19. ## AP physics Pilots of high-performance fighter planes can be subjected to large centripetal accelerations during high-speed turns. Because of these accelerations, the pilots are subjected to forces that can be much greater than their body weight, leading to an asked by Anonymous on March 4, 2014 20. ## physics Pilots of high-performance fighter planes can be subjected to large centripetal accelerations during high-speed turns. Because of these accelerations, the pilots are subjected to forces that can be much greater than their body weight, leading to an asked by Regheim Beck on October 9, 2015 21. ## Principles of Accounting Hot Dawgs! Inc. has 3 managers that work in shifts throughout the day and night. The company also employs 4 cooks and 2 waitresses per shift. There are 3 shifts: morning, afternoon, and evening. The managers are paid an annual salary with a 2% bonus every asked by Tracy on April 23, 2011 22. ## statistics A researcher wishes to estimate the proportion of college students who cheat on exams. A poll of 490 college students showed that 33% of them had, or intended to, cheat on examinations. Find the margin of error for the 95% confidence interval. solution I asked by Lucky Lady on June 19, 2014 23. ## math 1.In order to determine the effects of alcohol on reaction time, 40 randomly selected adult male individuals were assigned to four treatment groups of ten subjects each. The first group was asked to consume the alcoholic equivalent of five beers in a asked by bobby on April 10, 2018 24. ## math the following scores were recorded on a 200-point final wxam 193,185,186,192,135,158,174,188,172,168,183,195,165,183. what is the mean please and is the mean or the median mor improtant for this Please see the other posts to figure out how to find the asked by isaiah on June 12, 2007 25. ## Chemistry Commercial vinegar was titrated with NaOH solution to determine the content of acetic acid, HC2H3O2. For 20.0 milliliters of the vinegar 26.7 milliliters of 0.600-molar NaOH solution was required. What was the concentration of acetic acid in the vinegar if asked by Sarah on May 23, 2007 26. ## Chemistry(Please check) I completed a lab to find the determination of Kc. I have to find the concentrations of reactants at equilibrium using an ICE table. The equation that were are using is Fe^3+(aq) + SCN^-(aq) -> Fe(SCN)^2+(aq) I have to create 5 ICE tables because we used 5 asked by Hannah on March 5, 2012 27. ## law Dr. Bob applies for medical staff privileges at General Hospital. The hospital administrator? A. is not required to look into Dr. Bob’s background. B. is required to query the National Practitioner Data Bank. C. is required to query the Data Bank unless asked by kristin on January 25, 2011 28. ## general law Dr. Bob applies for medical staff privileges at General Hospital. The hospital administrator? A. is not required to look into Dr. Bob’s background. B. is required to query the National Practitioner Data Bank. C. is required to query the Data Bank unless asked by kristin on January 25, 2011 29. ## analytical chemistry DrBob222, it is me again. Continue the question 5mL of a solution A (unknown concentration) was transferred into sic 25mL volumetric flask. The following volumes of a standard solution of A with with a concentration 75ppm were added to the flask: 0mL, asked by katy on May 10, 2015 30. ## psy - please revise Question? What are some of the characteristics of a Type A personality? How could having a Type A personality affect a person’s reaction to stress? What would you view as some advantages or disadvantages of a Type A personality? Answer! A person with a asked by rose- on April 24, 2008 31. ## algebra (I answered a couple of them but am not confident on my answers, so can you please tell me if my answers are not correct to the ones I answered and for the ones I haven not answered that is because I do not know how to do them. PLEASE HELP!!! This asked by LeAnn/URGENT on November 16, 2009 32. ## Criminal Justice I am preparing the DSST (Dantes) examine in Criminal Justice. I had been looking for some practice tests for this examine. The only practice examines I could locate werei in "Rudman's Quesiton & Answers on the Dantes Subject Standardized Test in Ciminology asked by G on October 12, 2009 33. ## physics In many countries, automatic number plate recognition is used to catch speeders. The system takes a time-stamped license plate photo at one location (like an on-ramp to a freeway), and then takes a second time-stamped photo at a second location a known asked by Natasha on September 1, 2012 34. ## Math(Please Help, Been Stuck On These For A Week) Sydney drew a scale diagram of a circular fire pit in the centre of a circular patio with an actual circumference of 15m. The circle containing the fire put is a reduction of the circular patio by a scale factor of1/3. A.) Determine the diameter of the asked by Cherie on June 18, 2016 35. ## Biology 14. The major components of a DNA molecular subunit are a. a chromosome, deoxyribose, and double helix b. a five-carbon sugar, phosphate group, and a nitrogen-containing base c. adenine, guanine, cytosine, and thymine d. all of the above D? 16. During DNA asked by mysterychicken on February 3, 2010 36. ## Management accounting Andre has asked you to evaluate his business, Andre's Hair Styling. Andre has five barbers working for him (Andre is not one of them) Eash barber is paid$9,90 per hour and workd a 40-hour week and a 50-week, regardless of the number of haircuts. Rent and
asked by Charlotte Holmes on May 22, 2007
37. ## math 117
1. Answer the following questions. Use Equation Editor to write mathematical expressions and equations. First, save this file to your hard drive by selecting Save As from the File menu. Click the white space below each question to maintain proper
asked by Anonymous on October 9, 2011
38. ## science
in a baseball-hitting contest at the office picnic, five men participated named: Biff, Carl, Fred, Marty, and Tom. Their last names in alphabetical order are Jenkins, Keech, McNabb, Miller, and Winslow. The distance that each man hit the ball in feet are:
asked by angie on September 26, 2007
39. ## Chemistry
Calculate soln pH if 100 mL 0.10 M Na2S is mixed with 200 mL 0.050 M HCl. The hint is consider Kb for HS^-. The answer is pH=9.74. Can someone walk me through the neccessary steps to get this answer? What are the k1 and k2 values for H2S in your text?
asked by John Peterson on May 23, 2007
40. ## math
The figure below shows the ellipse $\frac{(x-20)^2}{20}+\frac{(y-16)^2}{16}=2016$. [asy] defaultpen(linewidth(0.7)); pair c=(20,16); real dist = 30; real a = sqrt(2016*20),b=sqrt(2016*16); xaxis("x",c.x-a-dist,c.x+a+3*dist,EndArrow);
asked by rui rui on June 9, 2016
41. ## math
Name: Barbara Dillon Date: July 18 2010 1. Divide 90÷18 =5 Answer is 5 2. Place the following set of numbers in ascending order. 22, –7, 8, –4, 12, –1, 14 Ascending order : -7, -4, -1, 8, 14, 22 3. Add 4. Subtract 5. Divide -18÷ -9 = 2 6. Evaluate
asked by barbara on July 18, 2010
42. ## math am i correct
In 2002 there were more than 30,000 McDonald’s restaurants in the world. Of these, approximately 14,000 were in the United States. In Great Britain there were 1,116 McDonald’s franchises. The following data are taken from those 1,116 McDonald’s.
asked by scooby91320002 on July 10, 2009
43. ## trigonometry
An object is attached by a string to the end of a spring. When the weight is released it starts oscillating vertically in a periodic way that can be modeled by a trigonometric function. The object's average height is −20 cm (measured from the top of the
asked by obet on July 31, 2014
When a computerized generator is used to generate random digits, the proability that any particular digit in the set {0,1,2, . . .,9} is generated on any individual trial is 1/10-0.1. suppose that we are generating digits one at a time and are interested
asked by david on January 23, 2007
45. ## Chemistry
I am working on a five-part question and I just need help with the last part. I'm not sure where to begin with that one. I pasted the question and answers (for a-d) to the other parts below. Thank you! Consider the proposed mechanism for the reaction
asked by Lisa on April 14, 2015
46. ## Math
Growth of Plant Sample A Time (days) Height (in.) 1 6 2 12 3 18 4 24 5 30 Growth of Plant Sample B Time (days) Height (in.) 1 2 2 5 3 10 4 17 5 26 Compare the data for the growth of two plant samples. How can you determine which data set is linear? A)
asked by Judy on March 27, 2018
47. ## AP Chemistry
An organic compound was synthesized and found to contain only C, H, N, O, and Cl. It was observed that when .150g sample of the compound was burned, it produced .138g of CO2 and .0566g of H2O. All the Nitrogen in a different .200g sample of the compound
asked by Monique on September 15, 2009
48. ## Science
Dunbar, South Africa identify at least two serious environmental problems, such as soil degradation, air or water pollution, pesticide misuse, overpopulation, wildlife extinction or threatened biodiversity, and deforestation, that impact this region. What
asked by Daisy on November 3, 2007
49. ## chemistry
1) The activation energy of a certain reaction is 35.3 kJ/mol. At 20 degrees C, the rate constant is 0.0130 s^-1. At what temperature would this reaction go twice as fast? Answer in units of degrees Celsius. i think the answer is around 34, but i keep in
asked by Evets on January 29, 2008
The state of Confusion enacted a statute requiring all trucks and towing trailers that use its highways to use a B-type truck hitch. This hitch is manufactured by only one manufacturer in Confusion. The result of this statute is that any trucker who wants
asked by Kimbry on April 11, 2011
51. ## statistics
A student is interested in whether students who study with music playing devote as much attention to their studies as do students who study under quiet conditions (he believes that studying under quiet conditions leads to better attention). he randomly
asked by kelly on March 18, 2012
Firm A has $20,000 in assets entirely financed with equity. Firm B also has$20,000 in assets, financed by $10,000 in debt (with a 10 percent rate of interest) and$10,000 in equity. Both firms sell 30,000 units at a sale price of $4.00 per unit. The asked by Jane on May 29, 2012 53. ## Chemistry which solution has the highest boiling point? a. 1 mole of NaNO3 in 500 g of water b. 1mole of NaNO3 in 1000g of water c. 1mole of NaNo3 in 750g of water d. 1mole of NaNo3 in 250 g of water The boiling point of water is elevated by about 0.5 degree for a 1 asked by Kat on March 26, 2007 54. ## Statistics for Finance Question 3 [17 Marks) There are five (5) set of data; A to E is provided for this question. Each group will be assigned to analyze ONE dataset only and answer the questions below. The assigned dataset will be determined by your lecturer in separate asked by MohammedFarhanKhan on February 11, 2017 55. ## Algebra 1 Ok, these questions were one's that I couldn't figure out how to work. If someone could set me up on how to solve each one it would be great. Thank you! 1. A movie theater had a certain number of tickets to give away. Five people got the tickets. The first asked by Sam R. on June 2, 2010 56. ## Accounting Kam Motor Company manufactures automobiles. During September 2007 the company purchased 5,000 head lamps at a cost of$9 per lamp. Kam withdrew 4,650 lamps from the warehouse during the month. Fifty of these lamps were used to replace the head lamps in
asked by Becca on February 10, 2009
57. ## chemistry
Hi, I was hoping someone could double check a chemistry answer for me. I am trying to calculate the Ksp of Ag2CrO4. This question is based on a experiment to determine ksp experimentally. In the experiment the Ksp of silver chromate was calculated using a
asked by Laura on May 26, 2013
58. ## physics for Science
2) You take the following measurements for the distance a toy car travels in 10 seconds during each 5 trials: 157cm, 175cm, 162cm, 168cm, 187cm. What are the relative uncertainties of the distance and time measurements? Which measurement is the most
asked by Alessandra Romano on September 6, 2015
59. ## classroom instruction
• Create a 4- to 6-slide Microsoft® PowerPoint® presentation of the content for the learning packet by addressing the following points: o Choose a reliable Internet site and review the content to ensure that it is appropriate for the age group and the
asked by scooby on March 14, 2010
60. ## Managerial Economics
From table 4-1 in the text, which gives the price elasticity of demand for Florida Indian River Oranges, Florida interior oranges, and California oranges, as well as the cross price elasticities among them, determine: (a) by how much the quantity demanded
asked by Econo-missed on November 1, 2008
61. ## General chem lab
This involves Enthalpy of Neutralization: A student determined the calorimeter constant of the calorimeter. Using a plotted line. The student added 50ml of cold water to 50 ml of heated water in a Styrofoam cup. The initial temperature of the cold water
asked by Toni on March 24, 2007
62. ## math
The Recall Computer Company has six territories, each represented by one salesperson. After extensive planning, the company determines that each territory would be expected to achieve the following percentages of total company sales for 2008: Territory
asked by nan on September 5, 2012
The Recall Computer Company has six territories, each represented by one salesperson. After extensive planning, the company determines that each territory would be expected to achieve the following percentages of total company sales for 2008: Territory
asked by nan on September 4, 2012
64. ## MATH/ALGEBRA
I am going over some math homework and need to know if I am getting the right answers. 1. Solving by elimination: 2x+3y=3, and 4x+6y=6 I say the determinant is o, there is no solution 2. y-11>2y-2 Is 1,-15,-17,-2 solutions: My answer = 1 is not, -15 is,
asked by HGO on December 19, 2009
65. ## Physics
Newton showed that the motion of comets is controlled by gravitational attraction of gravity. Many comets move in elliptical orbit but if they are moving very fast they will perform a hyperbolic orbit. A student attempts to model the hyperbolic trajectory
asked by Ishani on January 28, 2007
66. ## Physics
A plutonium- 239 nucleus initially at rest undergoes alpha decay to produce a uranium-235 nucleus. The uranium-235 nucleus has a mass of 3.90 x 10^-25 kg and moves away with a speed of 2.62 x 10^5 m/s. Determine the minimum wage electric potential
asked by Anonymous on May 22, 2017
67. ## Physics HELP!!!!!!!!
In exercising, a weight lifter loses 0.100 kg of water through evaporation, the heat required to evaporate the water coming from the weight lifter's body. The work done in lifting weights is 1.30 x 10^5 J. (a) Assuming that the latent heat of vaporization
asked by Mary on April 18, 2007
68. ## Physics
A personne walking at a speed of 1.7m/s see's a stoped bus 25m from him. At this moment this person starts to run with an acceleration of 1,3m/s2. After 4 secondes this personne has reached his maximum speed et it stays constant, but at the same time the
asked by Nick on March 22, 2007
69. ## Math
1. Write the following ratio in simplest form: 32 min:36 min 8:9 8:36 32:9 128:144 2. Marie saved $51. On Wednesday, she spent$8 of her savings. What ratio represents the portion of her total savings that she still has left? 43:8 8:51 43:51 59:51 3. The
asked by Anonymous on November 12, 2015
70. ## ECON
demand for sulfur dioxide by coal-fired electricity electricity producers is: P= 1,000 - 16Q where Q is quantity of sulfur dioxide measured in thousands of tons, and P is price per ton of sulfur dioxide. a)With no policies or restrictions on sulfur
asked by Sushmitha on November 14, 2011
71. ## Chemistry
Determine the entropy change when 1.80 mol of HBr(g) condenses at atmospheric pressure? I got that DeltaS fus = 12.922J/(k*mol) Delta S Vapour = 93.475J/(K*mol) the only thing I don't get is the entropy change when 1.80 mol of HBr(g) condenses at
asked by Lola on May 27, 2015
72. ## english
I have to write a compare contrast essay does this sound ok? One of the toughest decisions that you are going to face as a parent is to decide where to send your children to get their education. There are many reasons why public schools are academically
asked by westwood on January 22, 2012
73. ## Economics
A business can produce its product in different versions: Version A has a basic design and a lower cost and Version B has an upgraded design and a higher cost of production. The business knows there are different types of customers, “High” demand (H)
asked by John on April 28, 2015
74. ## statistics
Joe dealt 20 cards from a standard 52-card deck, and the number of red cards exceeded the number of black cards by 8. He reshuffled the cards and dealt 30 cards. This time, the number of red cards exceeded the number of black cards by 10. Determine which
asked by phia on April 18, 2015
75. ## algebra 2
This doesn't make ANY sense to me. The table shows the relationship between height and growing times for 8 plants of the same species. Use a scatter plot to determine which data point is an outlier. (15,6) (17,14) (20,18) (25,24) this is the table. Hours
asked by erin on July 26, 2007
76. ## statistics
Average entry level salaries for college graduates with mechanical engineering degrees and electrical engineering degrees are believed to be approximately the same. A recruiting office thinks that the average mechanical engineering salary is actually lower
asked by etoile on March 20, 2015
77. ## Math 116
Q1) there are two plans available, but they only have 38 homes available. write an equation that illustrates the situation. use x and y to denote plan 1 and plan 2. A1) X+Y=38 Q2)Plan 1 sells for $175,000 and plan 2 sells for$200,000. All available houses
asked by Carmen on November 1, 2008
78. ## physics
In a water pistol, a piston drives water through a larger tube of radius 1.00 cm into a smaller tube of radius 1.00 mm. (a) If the pistol is fired horizontally at a height of 1.5m, use ballistics (2-D projectile motion) to determine the time it takes water
asked by anon on February 4, 2013
79. ## Physics Urgent!!!!!
Newton’s Law of Gravity specifies the magnitude of the interaction force between two point masses, m1 and m2, separated by the distance r as F(r) = Gm1m2/r^2. The gravitational constant G can be determined by directly measuring the interaction force in
asked by Abi on April 30, 2011
80. ## Physics Urgent!!
Newton’s Law of Gravity specifies the magnitude of the interaction force between two point masses, m1 and m2, separated by the distance r as F(r) = Gm1m2/r^2. The gravitational constant G can be determined by directly measuring the interaction force in
asked by Abi on April 30, 2011
81. ## operations management
A critical dimension of the service quality of a call center is the wait time of a caller to get to a sales representative. Periodically, random samples of 6 customer calls are measured for time. Results from the last five samples are shown in the table.
asked by Anonymous on February 26, 2011
82. ## Finance
Houston Inc. is considering a project which involves building a new refrigerated warehouse which will cost $7,000,000 at year = 0 and which is expected to have before tax operating cash flows of$500,000 at the end of each of the next 20 years. The Net
asked by McLocs on October 21, 2011
83. ## Math
#1.) The first row of an amphitheater contains 50 seats, and each row there after contains an additional 8 seats. If the theater seats 938 people, find the number of rows in the amphitheater. #2.) At oceanside deck the first high tide today occurs at 2:00
asked by Wavikz on June 12, 2007
84. ## math
How do I figure out 6c=__?__pt showing work See my other response. =) Brooke, You REALLY need to learn to do these systematically. Learn to watch the units, make the ones you don't want cancel, and keep the units you want for the answer. For example,
asked by Brooke on November 28, 2006
85. ## math
A company makes two products , namely X and Y. Each product must be processed in 3 stages; welding , assembly and painting. Each unit X takes 2 hours in welding, assembling 3 hours and 1 hour in painting. Each unit Y takes 3 hours in welding, 2 hours
asked by Iman on September 27, 2015
86. ## Managerial econ 1
1)The Midnight hour, a local nightclub, earned $100,000 in accounting profit last year. This year the owner, who had invested$ 1 million in the club, decided to close the club. What can you say about economic profit (and the rate of return)in the
asked by Ed on July 24, 2007
87. ## Chemistry
Determine how many mL of solution A (acetic acid-indicator solution) must be added to solution B (sodium acetate-indicator solution) to obtain a buffer solution that is equimolar in acetate and acetic acid. Solution A: 10.0 mL 3.0e-4M bromescol green
asked by Jake on October 29, 2012
88. ## Physics
You are at the park. You are going to analyze a roller coaster. You ask the attendant for the height of the first hill and find out that it is 106.1 feet tall. Along with your group members, you pace off a train waiting to be loaded. The train is 10.5
asked by Help! on December 18, 2012
89. ## RepostedLiterature question
I asked this question a couple of days ago, and wanted to post what my text states.Several agreed (B) was the best answer to the question. After sharing a book with a group of children, the teacher should always: A. determine if discussion is necessary. B.
asked by Anonymous on September 11, 2007
90. ## Calculus Homework
You are blowing air into a spherical balloon at a rate of 7 cubic inches per second. The goal of this problem is to answer the following question: What is the rate of change of the surface area of the balloon at time t= 1 second, given that the balloon has
asked by Kelly on October 16, 2013
91. ## science
Imagine that you are riding a bus. A girl gets on, and you recognize her as someone you know but have not seen for two years. 1. What are the roles of the central nervous system and the peripheral nervous system in allowing you to recognize the person and
asked by megan on February 18, 2011
92. ## Math
Determine the slope m and y-intercept (if possible) of the linear equation. (If the slope is undefined, enter UNDEFINED. Enter NONE if there is no y-intercept.) y = 18 slope: m= 0 y-intercept: (x,y)=(0,18) is my answers for slope and y-intercept correct?
asked by anonymous on March 18, 2019
93. ## financial management
The HighT Company is a manufacturer of electronic products. The company is preparing a financial plan for the coming year and has the following independent projects under consideration: Project Initial investment (\$ millions) Internal rate of return (%) A
asked by john on June 16, 2011
94. ## math
1. Which is a set of collinear points? J,H,I L,H,J J,G,L L,K,H 2. Use the diagram to identify a segment parallel to. 3. The meadure of angles A is 124. Classify the angle as acute, right, obtuse, or straight. Acute Straight Right Obtuse 4. Find the
asked by i need help on March 14, 2017
95. ## physics
At the equator, the earth’s field is essentially horizontal; near the north pole, it is nearly vertical. In between, the angle varies. As you move farther north, the dip angle, the angle of the earth’s field below horizontal, steadily increases. Green
asked by Tom on April 14, 2011
96. ## Pre-Calculus
The logistic growth model p(t)=0.90/1+3.5e^-0.339t relates the proportion of new personal computers sold @ Best Buy that have Intel's latest coprecessor t months after it has been introduced. a) what proportion of new personal computers sold @ Best Buy
asked by Erica on October 19, 2006
97. ## math
a boat travels 10 miles upstream in 4 hours. the boat travels the same distance downstream in 2 hours. Determine the rate of boat in still water and the rate of the current? Downstream speed = b(still water) + c(current) Upstream speed = b - c upstream
asked by suuny on November 20, 2016
98. ## Programming
How can I lock columns on Excel and keep them locked when I put them into a Palm handheld? I'm not entirely sure what you mean by 'locked'. If you mean protecting them these are the procedures. In protecting cells, we actually do the procedure 'backwards'.
asked by bill on September 9, 2006
99. ## Physics
Suppose the skin temperature of a naked person is 34°C when the person is standing inside a room whose temperature is 25°C. The skin area of the individual is 2.0 m2 a) Assuming the emissivity is 0.80, find the net loss of radiant power from the body b)
asked by papito Urgent help needed on April 18, 2007
100. ## Physics
A steam engine goes through the following 3 step process. I) Isobaric compression from a volume of 100 L to 10 L at a pressure of 4 x 105 Pa II) Isovolumetric process from a pressure of 4 x 105 Pa to a pressure of 4 x 106 Pa at a volume of 10L. III) An
asked by Eric on December 11, 2006
|
2019-07-24 01:23:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33252713084220886, "perplexity": 2186.0101350367963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530246.91/warc/CC-MAIN-20190723235815-20190724021815-00397.warc.gz"}
|
http://connection.ebscohost.com/c/articles/6028154/heavy-meson-production-cosy-11
|
TITLE
# Heavy meson production at COSY-11
AUTHOR(S)
Moskal, P.; Adam, H.-H.; Balewski, J. T.; Budzanowski, A.; Goodman, C.; Grzonka, D.; Jarczyk, L.; Jochmann, M.; Khoukaz, A.; Kilian, K.; Kowina, P.; Ko¨hler, M.; Lister, T.; Oelert, W.; Quentmeier, C.; Santo, R.; Schepers, G.; Seddik, U.; Sefzick, T.; Sewerin, S.
PUB. DATE
June 2000
SOURCE
AIP Conference Proceedings;2000, Vol. 512 Issue 1, p65
SOURCE TYPE
DOC. TYPE
Article
ABSTRACT
The COSY-11 collaboration has measured the total cross section for the pp→ppη[sup ′] and pp→ppη reactions in the excess energy range from Q=1.5 MeV to Q=23.6 MeV and from Q=0.5 MeV to Q=5.4 MeV, respectively. Measurements have been performed with the total luminosity of 73 nb-1 for the pp→ppη reaction and 1360 nb-1 for the pp→ppη[sup ′] one. Recent results are presented and discussed. © 2000 American Institute of Physics.
ACCESSION #
6028154
## Related Articles
• Near threshold K[sup +]K[sup -] meson-pair production in proton-proton collisions. Khoukaz, A.; Quentmeier, C.; Adam, H.-H.; Balewski, J. T.; Budzanowski, A.; Grzonka, D.; Jarczyk, L.; Kilian, K.; Kowina, P.; Lang, N.; Lister, T.; Moskal, P.; Oelert, W.; Santo, R.; Schepers, G.; Sefzick, T.; Sewerin, S.; Siemaszko, M.; Smyrski, J.; Strzalkowski, A. // AIP Conference Proceedings;2001, Vol. 603 Issue 1, p437
The near threshold total cross section and angular distributions of K[sup +]K[sup -] pair production via the reaction pp → ppK[sup +]K[sup -] have been studied at an excess energy of Q - 17 MeV using the COSY-11 facility at the cooler synchrotron COSY. The obtained cross section as well...
• Diagrammatic approach to meson production in proton-proton collisions near threshold. Kaiser, Norbert // AIP Conference Proceedings;2000, Vol. 512 Issue 1, p96
Assesses the approach to meson production in proton-proton collisions near threshold T-matrices. Computation of the threshold amplitude; Details of the tree level and one-loop pion exchanges of T-matrix; Effects of heavy meson exchanges on proton-proton collisions.
• Neutral-meson production in pp collisions at RHIC and QCD test of z scaling. Tokarev, M. V. // Physics of Atomic Nuclei;Mar2009, Vol. 72 Issue 3, p541
New experimental data on inclusive cross section of neutral-vector-meson ( ω0, ϕ, K) production in proton-proton collisions at $$\sqrt s$$ = 200 GeV obtained at RHIC are analyzed in the framework of z scaling. Properties of z-presentation are used to predict hadron yields over a wide...
• Meson exchange models for meson production. Hanhart, C. // AIP Conference Proceedings;2000, Vol. 512 Issue 1, p81
The production of mesons in nucleon-nucleon collisions is reviewed from the viewpoint of the meson-exchange picture. In the first part various possible production mechanisms and their relative importance are discussed. In addition, general features of meson production are described. In the...
• A Proton-pentaquark mixing and the intrinsic charm model. Mikhasenko, M. // Physics of Atomic Nuclei;May2014, Vol. 77 Issue 5, p623
A new interpretation of the intrinsic charm phenomenon based on the assumption of the pentaquark $$\left| {uudc\bar c} \right\rangle$$ mixing with a proton is offered. The structure function of the c-quark in the pentaquark is constructed. The mixing ration is evaluated theoretically, using...
• Quark-gluon plasma: In the melt. Wright, Alison // Nature Physics;Sep2011, Vol. 7 Issue 9, p676
The article focuses on the result of the analysis of the upsilon mesons production in a complementary set and heavy-ion data of proton-proton data which shows the formation of quark-qluon plasma as revealed by the mesons' suppression in the lead-lead collisions.
• Recent results from the COSY-11 experiment on near-threshold meson production in pp and pd collisions. Smyrski, J.; Adam, H. H.; Budzanowski, A.; Grzonka, D.; Jarczyk, L.; Khoukaz, A.; Kilian, K.; Kolf, C.; Kowina, P.; Lang, N.; Lister, T.; Moskal, P.; Oelert, W.; Quentmeier, C.; Santo, R.; Schepers, G.; Sefzick, T.; Sewerin, S.; Siemaszko, M.; Strzalkowski, A. // AIP Conference Proceedings;2002, Vol. 610 Issue 1, p327
The near-threshold production of K[sup +]K[sup -] pairs, as well as of η mesons, has been measured in proton-proton collisions using the COSY-11 facility at the cooler synchrotron COSY. The obtained cross section for the pp→ppK[sup +]K[sup -] reaction at an excess energy of Q=17 MeV...
• Production of high-invariant-mass charmonium pairs in proton-proton interaction. Novoselov, A. // Physics of Atomic Nuclei;Nov2015, Vol. 78 Issue 8, p963
Interpretations of the most recent experimental data on double J/ ψ production are discussed. It is shown that a significant signal in the region of high invariant masses of a J/ ψ pair may be a piece of evidence not only in favor of the dominance of double parton scattering in this...
• Chiral Lagrangian Treatment of the Isosinglet Scalar Mesons in 1–2 GeV Region. Fariborz, Amir H. // AIP Conference Proceedings;2002, Vol. 646 Issue 1, p189
In this article, preliminary results on isosinglet scalar mesons below 2 GeV in the context of a non-linear chiral Lagrangian are presented.
Share
|
2017-12-13 05:35:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9659718871116638, "perplexity": 14642.013755197859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948521292.23/warc/CC-MAIN-20171213045921-20171213065921-00148.warc.gz"}
|
https://dsp.stackexchange.com/tags/python/hot?filter=week
|
# Tag Info
As a simple example, consider the discrete-time signal $$x[n]=a^nu[n],\qquad |a|<1\tag{1}$$ where $u[n]$ is the unit step function. The discrete-time Fourier transform (DTFT) of $x[n]$ is given by $$X(e^{j\omega})=\frac{1}{1-ae^{-j\omega t}}\tag{2}$$ Note that $\omega$ is a normalized angular frequency: $$\omega=2\pi f/f_s\tag{3}$$ where $f_s$ is ...
|
2019-12-10 05:22:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9908715486526489, "perplexity": 87.40189976937668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525821.56/warc/CC-MAIN-20191210041836-20191210065836-00412.warc.gz"}
|
http://www.vallis.org/blogspace/preprints/1102.4855.html
|
## [1102.4855] Fast coalescence of massive black hole binaries from mergers of galactic nuclei: implications for low-frequency gravitational-wave astrophysics
Authors: Miguel Preto, Ingo Berentzen, Peter Berczik, Rainer Spurzem
Date: 23 Feb 2011
Abstract: We investigate a purely stellar dynamical solution to the Final Parsec Problem. Galactic nuclei resulting from major mergers are not spherical, but show some degree of triaxiality. With $N$-body simulations, we show that massive black hole binaries (MBHB) hosted by them will continuously interact with stars on centrophilic orbits and will thus inspiral — in much less than a Hubble time — down to separations at which gravitational wave (GW) emission is strong enough to drive them to coalescence. Such coalescences will be important sources of GWs for future space-borne detectors such as the {\it Laser Interferometer Space Antenna} (LISA). Based on our results, we expect that LISA will see between $\sim 10$ to $\sim {\rm few} \times 10ˆ2$ such events every year, depending on the particular MBH seed model as obtained in recent studies of merger trees of galaxy and MBH co-evolution. Orbital eccentricities in the LISA band will be clearly distinguishable from zero with $e \gtrsim 0.001-0.01$.
#### Feb 28, 2011
1102.4855 (/preprints)
2011-02-28, 23:28
|
2018-09-20 08:23:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142809867858887, "perplexity": 3891.7477472634496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156423.25/warc/CC-MAIN-20180920081624-20180920101624-00198.warc.gz"}
|
https://tug.org/pipermail/xetex/2008-October/011056.html
|
# [XeTeX] How does xetex handle non-native image files such as eps?
Yuan Qi xetex.tex.comp.gmane.3.maxchee at spamgourmet.com
Fri Oct 24 04:39:47 CEST 2008
```Peter Dyballa <Peter_Dyballa at ...> writes:
> You should not need to care for this. XeTeX does not need an external
> file that contains the picture's dimensions, it either extracts these
> values itself (pdfTeX is able to retrieve the dimensions of an
> included graphics file and assign them to dimen variables) or leaves
> this to the output driver, since the XDV file (as a DVI file too)
> only contains a reference. Since \includegraphics not always sets the
> dimensions XeTeX obviously has the ability to extract the values from
> the graphics file and reserve space for it in the page's layout.
|
2023-03-22 10:27:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850191593170166, "perplexity": 10812.900697631167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00718.warc.gz"}
|
https://www.vedantu.com/question-answer/a-sum-of-rs1000-is-invested-at-8-simple-interest-class-10-maths-cbse-5ee0ba0b11ac812938b60b39
|
Courses
Courses for Kids
Free study material
Free LIVE classes
More
# A sum of Rs.1000 is invested at 8% simple interest per annum. Calculate the interest at the end of 1,2,3 etc years. Is the sequence of interests an AP? Find the interest at the end of 30 years.
Last updated date: 27th Mar 2023
Total views: 308.1k
Views today: 5.85k
Verified
308.1k+ views
Hint: calculate the interest at the end of each year using the simple interest formula and then try to generalise the interests obtained in order to show it as a sequence of AP or not. Then using the formula for the nth term of AP series find the interest at the end of 30 years.
Formula for simple interest is
$\text{SI=}\dfrac{P\times T\times R}{100}............\left( 1 \right)$
Where SI = simple interest
P= principal
R= rate of interest
T= time
Let us assume that the interests at the end of the years as
$\text{S}{{\text{I}}_{1}},\text{S}{{\text{I}}_{2}},\text{S}{{\text{I}}_{3}},\text{etc}$
Given in the question that
P=1000
R=8
Now to calculate the interest at the end of the 1 year we need to take
T=1
By substituting the values of P,R,T in the above simple interest formula (1) we get,
\begin{align} & \text{S}{{\text{I}}_{1}}\text{=}\frac{P\times T\times R}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{1}}=\frac{1000\times 8\times 1}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{1}}=10\times 8 \\ & \therefore \text{S}{{\text{I}}_{1}}=80 \\ \end{align}
Now the interest at the end of the 2 year can be calculated similarly by substituting the values
P=1000
R=8
T=2 in the above simple interest formula (1)
\begin{align} & \text{S}{{\text{I}}_{2}}\text{=}\frac{P\times T\times R}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{2}}=\frac{1000\times 8\times 2}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{2}}=10\times 8\times 2 \\ & \Rightarrow \text{S}{{\text{I}}_{2}}=80\times 2 \\ & \Rightarrow \text{S}{{\text{I}}_{2}}=160 \\ & \therefore \text{S}{{\text{I}}_{2}}=2\times \text{S}{{\text{I}}_{1}}\text{ }\left[ \because \text{S}{{\text{I}}_{1}}=80 \right] \\ \end{align}
Similarly interest at the end of 3 years can be calculated by substituting the respective values in simple interest formula (1)
P=1000
R=8
T=3
\begin{align} & \text{S}{{\text{I}}_{3}}\text{=}\dfrac{P\times T\times R}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{3}}=\dfrac{1000\times 8\times 3}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{3}}=10\times 8\times 3 \\ & \Rightarrow \text{S}{{\text{I}}_{3}}=80\times 3 \\ & \Rightarrow \text{S}{{\text{I}}_{3}}=240 \\ & \therefore \text{S}{{\text{I}}_{3}}=3\times \text{S}{{\text{I}}_{1}}\text{ }\left[ \because \text{S}{{\text{I}}_{1}}=80 \right] \\ \end{align}
Similarly for all other SI also we can write in terms of SI1 [SI1=80]
Now by considering the interests at the end of 1,2,3 years SI1 ,SI2 , SI3 we can observe that
$\text{S}{{\text{I}}_{2}}=2\times \text{S}{{\text{I}}_{1}}$
$\text{S}{{\text{I}}_{3}}=3\times \text{S}{{\text{I}}_{1}}$
etc
Hence , interest at the end of any year let’s suppose be n can be generalised as
$\text{S}{{\text{I}}_{n}}=n\times \text{S}{{\text{I}}_{1}}$
Here we can also observe that
$\text{S}{{\text{I}}_{2}}-\text{S}{{\text{I}}_{1}}=\text{S}{{\text{I}}_{1}}$
$\text{S}{{\text{I}}_{3}}-\text{S}{{\text{I}}_{2}}=\text{S}{{\text{I}}_{1}}$
etc
$\text{S}{{\text{I}}_{n}}-\text{S}{{\text{I}}_{n-1}}=\text{S}{{\text{I}}_{1}}$
As we already know that from the definition,
A sequence in which the difference of two consecutive terms is constant, is called Arithmetic Progression (AP)
Hence the given series of interests at the end of the years 1,2,3…. forms an AP of common difference SI1 i.e., 80
As we know that in an AP series the $nth\text{ term}$ can be calculated by using the formula
${{\text{a}}_{n}}=a+\left( n-1 \right)d..........(2)$
Where a= first term
d=common difference
now interest at the end of 30 years can be calculated by using the above formula of AP (2)
\begin{align} & {{\text{a}}_{n}}=a+\left( n-1 \right)d \\ & a=\text{S}{{\text{I}}_{1}}=80 \\ & d=\text{S}{{\text{I}}_{1}}=80 \\ & n=30 \\ \end{align}
Now by substituting the values in the above formula by assuming interest at ath end of 30 years as $\text{S}{{\text{I}}_{30}}$ we get,
\begin{align} & \text{S}{{\text{I}}_{30}}=\text{S}{{\text{I}}_{1}}+\left( n-1 \right)\text{S}{{\text{I}}_{1}} \\ & \Rightarrow \text{S}{{\text{I}}_{30}}=80+(30-1)80 \\ & \Rightarrow \text{S}{{\text{I}}_{30}}=80+29\times 80 \\ & \Rightarrow \text{S}{{\text{I}}_{30}}=(1+29)\times 80 \\ & \Rightarrow \text{S}{{\text{I}}_{30}}=30\times 80 \\ & \therefore \text{S}{{\text{I}}_{30}}=2400 \\ \end{align}
Hence, the interest at the end of 30 years is Rs.2400
Note:
We need to write the interests at the end of 2,3 years in terms of interest at the end of 1 year so that we can get a generalised form to calculate the interest at the end of any year easily.
Interest at the end of 30 years can also be calculated by using the simple interest formula instead of using the $nth\text{ term}$ of an AP series formula
\begin{align} & \text{SI=}\dfrac{P\times T\times R}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{30}}=\frac{1000\times 30\times 8}{100} \\ & \Rightarrow \text{S}{{\text{I}}_{30}}=10\times 30\times 8 \\ & \Rightarrow \text{S}{{\text{I}}_{30}}=300\times 8 \\ & \therefore \text{S}{{\text{I}}_{30}}=2400 \\ \end{align}
Which gives the same result in either ways.
By writing the generalised form for the interests at the end of each year helps in finding whether the sequence is in AP or not.
|
2023-03-30 11:20:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 12, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1253.3453453473298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00114.warc.gz"}
|
https://datascience.stackexchange.com/tags/feature-selection/new
|
# Tag Info
2
370 rows is quite a few, RF does bootstrap but it is still few info. Having too many columns will lead to a more complex model (since the algorithm will work 1 000 dimensions). Consider doing a pipeline with all the steps and search for hyperparameters and feature selection there. https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline....
0
My first question is : for example, if the highest p value is for the X1X2 feature, is it okay to eliminate this feature even when X1 and X2 can be statistically significant ? Of course, the interaction can have no information about the target. Per example if the problem is perfectly defined by X1 and X2. The interaction $X_1 \cdot X_2$ won't add nothing ...
2
What helps the model more, keeping all features or removing correlated ones? There is some theory about it but in the end Machine Learning is try and error. You should give it a try with all features and then doing a feature selection to see if you are able to improve your model. What works for some models doesn´t necessarily have to work for the rest of ...
2
There is plenty of methods to calculate feature importance. I recommend trying two of them LIME and SHAP. I don't want to copy-paste material and tutorial provided by the author so please refer to these two repositories.
1
You might want to look at conditional entropy, H(A|B) and H(B|A).
2
I think merging such correlated features and create a new one, will also be a good idea. In that way we will not lose any information. For example, sum up the values of different correlated features and take an average of it, will be the very basic option.
3
An alternative to the one provided by @Kasra is dimensionality reduction. It's another way of solving your multicollinearity problems, while avoiding deleting variables more or less arbitrarily. You can use simpler, linear techniques such as PCA, or more complex non-linear techniques such as Autoencoders. t-SNE is a non-linear technique that is typically ...
2
You need to remove them. Redundant features only increase the computation time, increase model complexity (with no benefit) which means making interpretation of model/analysis more sophisticated and if they are many, removing them prunes your vector space by improving the density of information in dimensions of vector space (it helps e.g. in finding nearest ...
2
In model building there is a sort of iterative workflow that you can use: Select an appropriate model you want to build e.g. for classification maybe a XGB classifier or a logistic regression, etc. This is important because the model by itself will determine a lot about how to wrangle your data. XGB only works with numerical features so you will have to ...
0
You can only compute chi2 between two numerical arrays. You are getting that error because you are comparing a string. Also I am not sure if it works for multiclassification also. df = df.apply(LabelEncoder().fit_transform) This will solve the problem for you. But there are a thousand ways to encode features and for sure other will work better for you.
0
You could use Nonlinear Least Squares, in which one of the regressors is your arctan function with two more parameters to be estimated. In R, for example: library(minpack.lm) df <- datasets::airquality my_atan <- function(x, A, B){atan((x-A)/B)} nlsLM(Ozone ~ a + b * Temp + c * my_atan(Temp, A, B), data = df, start = list(a = 0, b = 0, ...
1
If the dimensions are not linearly correlated, you may use an autoencoder to perform the dimensionality reduction. Just like PCA that can perform a reconstruction, but with non-linearity. Then, you can perform classification with the latent space. Autoencoder is a multi-dimensional auto-regressive model with a dimensional bottleneck somewhere in the middle....
0
I'm not sure if you are bound to the type of model presented in your question. However, an alternative would be to use generalised additive models (GAM), e.g. with regression splines or locale regression. These methods usually give a very good fit with non-linear patterns in $X$ and there is no need to provide parameterization of $X$ so that it is easy to ...
1
One approach would be to use an algorithm designed for non-convex problems like Bayesian optimization. However, if you have already evaluated a fine grid of parameters this is unlikely to offer significant improvement. Here is an example of how you could implement Bayesian optimization for this problem. First, we need some data. Just for fun let’s extract ...
5
For predictive power, in general, including both shouldn't be a problem. But there is a lot of nuance here. Foremost, if predictive power isn't all you care about: if you're making statistical inferences, or care about explainability and feature importances, then including both can cause issues. Briefly, your model may split the importance of the underlying ...
0
I'll go through your questions one by one: is feature selection more important in KNN than in other algorithms? I don't think it is more important for kNN than for other kinds of algorithms. If a particular feature is not predictive in a neural network, the network will just learn to ignore it. But in KNN, it seems like it could make the prediction ...
3
I think that you need just feature_importances = rf_gridsearch.best_estimator_.feature_importances_ This provides the feature importances for all the attribures in your dataset. For more information on this as well as other options, you may also refer to Scikit-learn official documentation.
3
Like any preprocessing step, feature selection must be carried out using the training data, i.e. the process of selecting which features to include can only depend on the instances of the training set. Once the selection has been made, i.e. the set of features is fixed, the test data has to be formatted with the exact same features. This step is sometimes ...
1
Lasso stands for ´least absolute shrinkage and selection operator´. It has a penalty that is the absolute value and makes a lot of variables converge to cero. There is a ton of blogs that explain really well Lasso on the internet, have a look! Elastic Net is a combination of Ridge and Lasso. So it will also reduce the variables a lot. Ridge is a quadratic ...
0
From https://en.wikipedia.org/wiki/Shapley_value, it is possible to understand that direct computation of Shapley values is difficult with their general formula : \varphi_i(v) = \frac{1}{\text{number of players}} \sum_{\text{coalitions excluding }i} \frac{\text{marginal contribution of }i\text{ to coalition}}{\text{number of coalitions excluding } i \...
0
Check out the shap library. I think that could help you. https://github.com/slundberg/shap
0
You can train an RNN with character embeddings. This can be done by splitting the name into sequences of chars and vectorize them numerically. If you are working with Keras, you can feed them into an Embedding() layer that will learn how to represent characters. RNN layers will then process their sequence. At the output node, your Network will perform a ...
Top 50 recent answers are included
|
2020-02-21 01:22:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5127936005592346, "perplexity": 676.9756912958824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145316.8/warc/CC-MAIN-20200220224059-20200221014059-00485.warc.gz"}
|
https://brilliant.org/problems/a-recurrence-to-relate-to/
|
# A recurrence to relate to
Algebra Level 5
$\large a_{n+2} = (n + 3)a_{n+1} - (n + 2)a_{n}$
For whole numbers $$n$$, consider the recurrence relation defined as above with $$a_{1} = 1, a_{2} = 3$$.
Find $$\displaystyle\bigg( \sum_{k=1}^{2015} a_{k} \bigg)\pmod{100}.$$
×
Problem Loading...
Note Loading...
Set Loading...
|
2017-01-19 13:16:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9798102974891663, "perplexity": 11598.851228301479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00286-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://jehtech.com/mathsy_stuff/stats.html
|
# Statistics Notes
## References and Resources
1. "Introduction To Statistics", Thanos Margoupis, University of Bath, Peason Custom Publishing.
A technical book but quite dry (as most statistics books are!)
2. "An Adventure In Statistics, The Reality Enigma", Andy Field
A very entertaining book that explains statistics in a really intuitive way and uses examples that are actually slightly interesting!
The following are absolutely amazing, completely free, well taught resources that just put things in plain English and make concepts that much easier to understand! Definitely worth a look!
2. The amazing CK12 Foundation.
## The Basics of Discrete Probability Distributions
### Some Terminology
#### Variables...
Categorical Variable Discrete variable which can be one of a set of such as the anwer to a YES/NO question: the variable can be either YES or NO. Or the answer to a quality question where the choices are good, ambivalent or bad, for example. Consistent estimator A consistent estimator is one that converges to the value being estimated. Numerical Variable Has a value from a set of numbers. That set can be continuous, like the set of real numbers, or discrete like the set of integers. Random Variable The numerical outcome of a random experiment. Discrete if it can take on more than a countable number of values. Continuous if it can take any value over an uncountable range. I.D.D. A sequence or other collection of random variables is independent and identically distributed (i.i.d.) if each random variable has the same probability distribution as the others and all are mutually independent. Qualitative Data Data where there is no measurable difference (in a quantitative sense) between two values (that makes sense). For example, the colour of a car. The car can be "nice" or "sporty", but we can't define the the difference in terms of a number like 4.83, for example. Quantitative Data Data is numerical and the difference between data is a well defined notion. For example, if car A goes 33 MPG and care B does, 40 MPG, then we can say the difference is 7MPG. Ordinal Data The value of the data has an order with respect to other possible values.
#### Populations and samples...
It is always worth keeping in mind that probability is a measure describing the likelihood of an event from the population. It is not "in" the data set (or sample) obtained... a sample is user to infer a probability about a population parameter.
Population The complete set of items of interest. Size is very large, denoted N, possibly infinite. Population is the entire pool from which a statistical sample is draw. Sample An observed subset of the population. Size denoted n. Random Sampling Select n objects from population such that each object is equally likely to be chosen. Selecting 1 object does not influence the selection of the next. Selection is utterly by chance. Parameter Numeric measure describing a characteristic of the population. Statistic Numeric measure describing a characteristic of the sample. Statistics are used to infer population parameters. Inference The process of making conclusions about the propulation from noisy data that was drawn from it. Involves formulating conclusions using data and quantifying the uncertainty associated with those conclusions.
#### Experiments...
Random Experiment Action(s) that can lead to \ge 2 outcomes where one cannot be sure, before performing the experiment, what the outcome would be. Basic Outcomes A possible outcome from a random experiment. For example, flipping a coin has two basic outcomes: heads or tails. Sample Space The set of all possible basic outcomes (exhaustively) from a random experiment. Note that this implies that the total number of possible outcomes is, or can be, known. Event A subset of basic outcomes from a sample space. For example, a dice roll has 6 basic outcomes, 1 through 6. The sample space is therefore the set {1, 2, 3, 4, 5, 6}. The event "roll a 2 or 3" is the set {2, 3}.
#### Distributions...
Probability Mass Function (PMF) When evaluation a n function gives the probability that a random variable takes the value n. Only associated with discrete random variables. Also note that the function descibes the population. Probability Density Function (PDF) Only associated with continuous random variables. The area between two limits corresponds to the probabilty that the random variable lies within those limits. A single point has a zero probability. Also note that the function descibes the population. Cumulative Distribution Function (CDF) Returns the probability that X \le x . Quantile The \alpha^{th} quantile of a distribution, F , is the point x_\alpha such that F(x_\alpha) = \alpha .
### Population And Sample Space
We have said that the population is the complete set of items of interest.
We have said that the sample space is the set of all possible outcomes (exhaustively) from a random experiment.
So I wondered this. Take a dice roll. The population is the complete set of possible items {1, 2, 3, 4, 5, 6}. The sample space is the set of all possible outcomes, also {1, 2, 3, 4, 5, 6}. So here sample space and population appear to be the same thing, so when are they not and what are the distinguishing factors between the two??
The WikiPedia page on sample spaces caused the penny to drop for me:
...For many experiments, there may be more than one plausible sample space available, depending on what result is of interest to the experimenter. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks (Ace through King), while another could be the suits (clubs, diamonds, hearts, or spades)...
Ah ha! So my population is the set of all cards {1_heart, 2_heart, ..., ace_heart, 1_club, ...} but the sample space may be, if we are looking for the suits, just {heart, club, diamond, spade}. So the population and sample space are different here. In this case the sample space consists of cards separated into groups of suites. I.e. the popultation has been split into 4 groups because there are 4 events of interest. These events cover the sample space.
In summary the population is the set of items I'm looking at. The sample space may or may not be the population... that depends on what question about the population is being asked and how the items in the population are grouped per event.
### Classic Probability
In classic probability we assume all the basic outcomes are equally likely and therefore the probability of an event, A, is the number of basic outcomes associated with A divided by the total number of possible outcomes: Each basic output is equally likely. And, note, here we are talking about outcomes in the population.
An example of the use of classical probability might be a simple bag of marbles. There are 3 blue marbles, 2 red, and 1 yellow.
If my experiment is to draw out 1 marble then the set of basic outcomes is {B1, B2, B3, R1, R2, Y1}. This is also the population! Also, note that the sample space isn't {B, R, Y} because we can differentiate between similar coloured marbles and there is a certain quanity of each.
So, what is the probability of picking out a red. Well, here and because there are 2 red marbles in the sack. Therefore the probability is:
What if my experiment is to draw 2 marbles from the sack? Now the set of all possible basic outcomes, if the order of draw was important would be a permutation. This means that if I draw, for example, R1 then B2, I would consider it to be a distinctly different outcomes to drawing B2 then R1. That means my population is:
Selection 1 Selection 2 Selection 1 Selection 2 B1 B2 B2 B1 B1 B3 B2 B3 B1 R1 B2 R1 B1 R2 B2 R2 B1 Y1 B2 Y1 B3 B1 B3 B2 B3 R1 B3 R2 B3 Y1 R1 R2 R2 R1 R1 B1 R2 B1 R1 B2 R2 B2 R1 B3 R2 B3 R1 Y1 R2 Y1 Y1 B1 Y1 B2 Y1 B3 Y1 R1 Y1 R2
Which is a real nightmare to compute by trying to figure out all the permutations by hand, and imagine, this is only a small set! Maths to the rescue...
And that is how many permutations we have in the above table (thankfully!).
So, if my question is what is the probability of drawing a red then a yellow marble, my event space is the set {R1Y1, R2Y1}.
Thus, if we say event A is "draw a red then a yellow marble",
What if we don't care about the order. What is the probability of drawing a red and a yellow? I.e. we consider RY and YR to be the same event...
Our event space therefore becomes:
{R1Y1, R2Y1 Y1R1, Y1R2}.
Thus, if we say event A is "draw a red and a yellow marble",
We are essentially now dealing with combinations.
Therefore in the above table, where we see things like...
Selection 1 Selection 2 B1 B2 B2 B1
...we can delete one of the rows. The set of all basic outcomes therefore becomes:
Selection 1 Selection 2 Selection 1 Selection 2 B1 B2 B2 B1 B1 B3 B2 B3 B1 R1 B2 R1 B1 R2 B2 R2 B1 Y1 B2 Y1 B3 B1 B3 B2 B3 R1 B3 R2 B3 Y1 R1 R2 R2 R1 R1 B1 R2 B1 R1 B2 R2 B2 R1 B3 R2 B3 R1 Y1 R2 Y1 Y1 B1 Y1 B2 Y1 B3 Y1 R1 Y1 R2
Again a nightmare to figure out by hand, so maths to the resuce...
And, again, that is how many selections (not included the ones we've deleted using a strikethrough font) we have in the above table (thankfully!).
So, now we can't ask about drawing a red then a yellow marble anymore as order doesn't matter. We can ask about drawing a red and a yellow though. My event space is now the set
{R1Y1, R2Y1 Y1R1, Y1R2}
because I not longer care about order-of-draw... Thus, if we say event A is "draw a red and a yellow marble",
We have noted that in clasical probability that the various outcomes are equally likely. This means that from my bag of marbles am am equally likely to pick any marble. But what if this was not the case? What if the blue marbles are very heavy and sink to the bottom of the sack whilst the other marbles tend to rest on top of the blue marbles. We could hardly say that I am as likely to pick a blue as I am a red in this case. In this cas we cannot use classical probability to analyse the situation.
### Relative Frequency Probability
Here the probability of an event occuring is is the limit of the proportion of times an event occurs in a large number of trials. Why the limit? Well, the limit would mean we tested the entire popultation. Usually, however, we just pick a very large n and infer the popultation statistic from that. Where is the number of exerimental outcomes relevant to event A, and is the total number of outcomes.
So, for example, if we flip a coin 1000 times there are 1000 total number of outcomes. If we observed 490 heads and 510 tails we would have...
If the coin was entirely fair then we would expect an equal probability of getting a head or a tail. Frequentist theory states that as the sample size gets bigger, i.e., as we do more and more coin flips, if the coin is fair the probabilities will tend towards 50%. So if we did 1,000,000 samples, for example, we might expect... So... more samples, the closer our probability estimate is to the real probabilities for getting a head or tail. The limit of the probabilities as the sample size tends to infinitely will be exactly 50%. The "real" is the probability from the population. Anything less is a sample of the population.
As n tends to infinity, we can say that n tends towards the popultation size, and then at this point you will arrive back at the formula for conditional probability you saw in the previous section
### Event A OR Event B
A and B are independent: P(A \cup B) = P(A) + P(B) A and B are not independent: P(A \cup B) = P(A) + P(B) - P(A \cup B)
What is independence? A and B are said to be independent if information on the value of A does not give any information on the value of B, and vice versa.
We can visualise this as follows, using a Venn diagram...
If two events are independent then we can see that none of the basic outcomes from either event can occur for the other. Therefore, using either classical or relative-frequency probability we can see that .
Now the two events are related. They are no longer independent because some of the basic outcomes from one event are also basic outcomes of the other. Thus we can see that if we sum the probabilities for the basic outcomes for each, we will count the shared basic outcomes twice!
If that's not clear, think of it this way...
### Event A AND Event B
We just talked about independence, but how do we know if an event A, is independent of another event, B? The answer is that two events are indpendent if .
Why is this? Let's think of this from the relative frequency point of view. We know that and that for the sample and as n tends to infinity for the population.
For every basic outcome in the set A, we can pick an outcome from the set B, so the count of the combination of all possible outcomes from both sets must be . But this only makes sense if the events are mutually exlusive (i.e., don't overlap). (See the previous figure of a Venn Diagram for two independent events to make this more clear in your mind).
So, what if the events are not independent? Then it is no longer true that the count of combination of all possible outcomes is . Why is this? This is because the events that can be in both A and B now represent a much smaller portion of sample space. The probability that A and B occur is now the solid-gray shaded area of the Venn diagram for dependent events.
Think if it this way. For A and B to both occur, at least one must have occurred, so now the only possible choices from the other event are in the overlapping region, not in the entire event space. Keep reading the next section on condition probability to find out more about why this is an indicator of independence.
A \overline A B A \cap B \overline A \cap B \Sigma = P(B) \overline B A \cap \overline B \overline A \cap \overline B \Sigma = P(\overline B) \Sigma = P(A) \Sigma = P(\overline A) \Sigma = 1
### Event A GIVEN Event B
If I live with my partner it seems intuitively correct to say that the probability of myself getting a cold would be heightend if my partner caught a cold. Thus there is an intutive difference between the probability I will catch a cold and the probability that I will catch a cold, given my partner has already caught one.
This is the idea behind conditioning: conditioning on what you know can change the probability of an outcome with no apriori knowledge of the data.
Lets look at a really simple example. I roll a dice... what is the probability that I roll a 3. We assume a fair dice, so the answer is easy: 1/6. The set all of basic outcomes was {1, 2, 3, 4, 5, 6} (the sample space) and only one outcome was relevant for our event... so 1/6.
Now lets say I have rolled the dice and I have been informed that the result was an odd number. With this knowledge, what is the probability that I rolled a 3? The set of basic outcomes is now narrowed to {1, 3, 5} and so the probability is now 1/3!
The probability of event A given that event B has occurred is denoted and is defined as follows:
This makes intuitive sense. If B has occurred then the number of outcomes to "choose" from is represented by the circle for B in the Venn diagram below. The number of of outcomes that can belong to event A, given that we know event B has occurred, is the intersection. Therefore, our sample space is really now all the outcomes for event B, so , and the event of interest for A is now restricted to . So we get...
If A and B are independent then clearly, because , .
We can re-arrange the above to get another formula for ...
And now we can see why, if two events are independent that ... because when A and B are independent !. And when they are dependent that equality is not true.
A small bit on terminology... is ofter called the prior probability and the posterior probability.
...A posterior probability is the probability of assigning observations to groups given the data. A prior probability is the probability that an observation will fall into a group before you collect the data. For example, if you are classifying the buyers of a specific car, you might already know that 60% of purchasers are male and 40% are female. If you know or can estimate these probabilities, a discriminant analysis can use these prior probabilities in calculating the posterior probabilities.
-- Minitab support
From the definitions so far we can also see that But that the following does not (necessarily) equal 1,
Why is this important? Let's look at a little example. Say we have a clinical test for Influenza. Imagine that we know that for a patient the probability that they have or do not have Influenza.
Any clinical test is normally judged by it's sensitivity and specificity. Sensitivity is the probability that the test reports infection if the patient has Influenza (i.e true positives). Sepcificity is the probability that the test reports the all-clear if the patient does not have Influenza (i.e., true negatives).
Let's say the test has the following sensitivity and specificity respectively: Because we know , we can say: But a clinitian wants to know the probabilities the other way around. The clinician will ask "if my patient has the flu, what is the probability that the test will call a positive?" I.e., the clinician wants to know
We can use the conditional probability formula to work this out... We can find out because , which are quantities we already know, so we get... We're close, but what is the value for ? The answer is : We still need to know . We know that , and we know these quantities already, so: Therefore, Which means we can work out the entire thing (to 4dp):
### Sensitivity v.s. Specificity
In clinical tests, the user often wants to know about the sensitivity of a test. I.e., if the patient does have the disease being tested for, what is the probability that that the test will call a positive results. Obviously, we would like this to be as close to 100% as possible!
The clinician would also like to know, if the patient did not have the disease, what is the probability that the test calls a negative. Again, we would like this to be as close to 100% as possible!
To summarise: sensitivity is the probability of a true positive, and specificity is the probability of a true negative.
Sensitivity:
Specificity:
### Bayes' Theorem
Recall the multiplication rule: By re-arranging the above we can arrive at the following:
Because we know that we could also write: An because we know and , we can re-write this as:
Remember that we called the prior probability. The prior is the probability distribution that represents your uncertainty over the random variable X. The posterior is the distribution representing your uncertainty after you have observed events that are related to or influence your event-of-interest: It is a conditional distribution, conditioning on the observed data. Bayes' theorem has given us a way to relate the two.
#### Alternative Statement
This can sometimes be usefull re-stated as follows. If all events are mutually exclusive and exhaustive and we have some other event A then we can write: And because in this case, We can say: The advantage of this expression is that the probabilities involves are sometimes more readily available.
## Discrete Probability Distributions
### General Definitions
The Probability Mass Function (PMF) is another name for the Probability Distribution Function (PDF), , of a discrete random variable X expresses the probability that takes the value . I.e, The PMF has the following properties:
The Cumulative Mass Function (CMF) or Cumulative Probability Function (CPF), , for a random variable , gives the probability that X does not exceed the value : The CMF has the following properties:
### Expected Value and Variance
Expected value defined as: Where is called the mean and is the mean of the population.
The calculation of measurements like becomes...
Variance, is the average squared distance from the mean and is defined as follows. The square is taken so that distances don't cancel eachother out (i.e, a negative distance and positive distance could result is a very small average distance, which is not what we want). We can re-write this as follows: In summary,
Standard Deviation, , is the positive square root of the variance.
The calculation of measurements like becomes...
Why does equal ? From our definition of we know that ... And... We also know that the sum of all the probabilities in the distribution must sum to 1, so therefore... Thus we can say this... Yay!
For example, lets say that we have weighted dice so that the probabilities are as follows:
Value Probability 1 0.05 2 0.1 3 0.1 4 0.1 5 0.1 6 0.55
The population mean, becomes: Of course, the dice does not have a face value of 4.75, but over many many rolls this would be the average score. The variance and standard deviation are calculated in the same way using the above formulas.
#### Linear Functions Of X
The expected value and variance of a linear function of is another fairly useful result and we will use it to get some very important results in the section on sampling distributions. For example, we could arbitrarily define a function over dice rolls. Not sure why we'd do this, but say we said the experiement result was where is the random variable which gives the dice face rolled. Now we have:
g(x) Probability 7 0.05 9 0.1 11 0.1 13 0.1 15 0.1 17 0.55
The expected value of is therefore: Interestingly ! It looks like , and for linear functions this is the case (but not so if is not linear!). When is LINEAR and :
We can derive the first equation as follows, using our first definition of the expected value of a random variable. The same can be done for the variance can also be done in a similar manner. TODO: add this here.
### Binomial Distribution
#### Bernoulli Model
Experiment with two mutually exclusive and exhaustive outcomes. One has the probability and the other has probability . Therefore, using the formulas for mean and variance from the previous sections, we can say the following.
#### Binomial Distribution
Bernoulli experiment repeated times where the repetitions are independent. This is like selection with replacement.
We know from previous discussions that if two events are independent then and by extension that . Therefore if I do experiments and want to know the probablity of successes in a row and then failures in a row, the probability is: Where there are 's in the above equation and 's. BUT this would be the probability of getting successes in a row and then failures in a row. The question, however, doesn't care about the specific order: we dont care if we get SSFF.. or SFSF... or SSFS... and so on. We need to figure out how many of these combinations there are and account for this!
Let's take a simple example. I have a bag with 5 balls in it: 3 blue, 2 red. What is the probability of drawing 1 blue ball and 1 red ball if I select using replacement (to make the selections independent). Replacement means that what ever ball I pick first, I record the result and then put it back in the bag before making my next pick. In this way, the probability of selecting a particular colour does NOT change per pick.
Note that there are only 2 colours of ball in our bag... this is because we are talking about an experiement where there are only two outcomes, labeled "success" and "failure". We could view selecting a blue as "success" and a red as "failure", or vice versa.
So... selecting a blue and a red ball. Sounds like right? Well, almost, but not quite. Take a look at the sample space below, out of which, the events of interested are highlighted in a light blue colour.
Here I am using an alphabetical subscript to indicate the specific ball. I.e. is a different ball to . The order of selection is given by the order of writing. I.e., "" means was picked on the first turn and then was picked on the second turn.
... B_x B_y B_z R_x R_y B_x B_x B_x B_x B_y B_x B_z B_x R_x B_x R_y B_y B_y B_x B_y B_y B_y B_z B_y R_x B_y R_y B_z B_z B_x B_z B_y B_z B_z B_z R_x B_z R_y R_x R_x B_x R_x B_y R_x B_z R_x R_x R_x R_y R_y R_y B_x R_y B_y R_y B_z R_y R_x R_y R_y
There are two clear groups of outcomes that will satisfy the question. We see that in one group we drew a red ball first and in the other we drew a blue ball first. So, there are total possible events, and of these are of interest. Therefore, But, hang on minute! Isn't the same as ?! Well, no, as we can see below... So the two expressions are clearly not the same thing. To clear up my confusion I asked the guys at Maths Exchange, and got the following awesome answer from a very nice chap named Joriki. Im quoting it (almost) verbatim because it was just so good!
This confusion can be resolved by careful attention to definitions and notation.
Where you write , you call the events and "a blue" and "a red" respectively. Implicitly you're referring to two different draws (if you were referring to a single draw, you'd have ), but you're not distinguishing the events accordingly, and this leads to confusion.
The events you are interested are , a blue ball is drawn on the first draw, , a red ball is drawn on the first draw, , a blue ball is drawn on the second draw, and , a red ball is drawn on the second draw. We have and .
You want to know . Since the events and are mutually exclusive, this is and since the first and second draws are independent, this is
Note, that in Jorki's example and do not refer to different balls: the subscripts refer to different draws. Therefore, and could be the same red ball drawn on turn and turn .
So, having understood this, we can see that to get the total probability of successess (1 blue ball) out of trials (2 selections), in the case where events are independent, we are concerned with the number of combintations in which the events can occur.
The formula for the binomial distribution is... For .
This, incidentally is the same as asking for because
Let's do this to exhaustion... let's imagine another bag. It doesn't matter how many blue and red balls are in there, just so long as I can select 3 blue balls and 1 red. and . Once we've made this selection of 3 blues, 1 red, the question is, how many ways were there of getting to this outcome. Now let's check our understanding of using combinations (esp. vs. permutations) to get this...
We can draw a little outcome tree as follows:
Clearly there are 4 ways to arrive at the selection of 3 blues and 1 red, once we have made that particular selection. Remember we're not asking for a blue/red on a specific turn... we just care about the final outcome, irrespective of the order in which the balls were picked. And we can see...
Why don't we use permutations? The answer is that we don't care if we got or or etc etc, where, as before, the alphabetical substrcipts distinguish the ball, not the turn on which it is drawn as that is given by order of writing.
Having gone through this we can then use the formulas for expected value, or mean, and variance. To make the notation similar to previous examples where we used , in the above formula I just change for so that we get Where , meanding that this is the probability of successes out of trials.
Recalling that We can say that the population mean for the Bernoulli distribution is And... the rest of the proof gets a little complicated... this PDF by Joel Feldman gives the derivation, which I found by doing a quick google. Liked the explanation.
The population mean and variance are summarised as follows.
Often when talking about a binomial distribution you will see something like This is the binomial distribution for successes out of trials with the probability of success given by .
We can plot some example distributions... lets do this using Python.
import matplotlib.pyplot as pl
import numpy as np
from matplotlib.font_manager import FontProperties
from scipy.stats import binom
fontP = FontProperties()
fontP.set_size('small')
n=50
pLst=[0.1, 0.25, 0.5, 0.75, 0.9]
x = np.arange(-1, n+2)
fig, ax = pl.subplots()
for p in pLst:
dist = binom(n, p)
ax.plot(x, dist.pmf(x),linestyle='steps-mid')
ax.legend(['$p={}$'.format(p) for p in pLst],
ncol = 3,
prop=fontP,
bbox_to_anchor=[0.5, 1.0],
loc='upper center')
fig.show()
fig.savefig('binomial_distrib.png', format='png')
### Poisson Distribution
Didn't like the starting explanation in [1] so had a look on Wikipedia and found a link to the UMass Amherst Uni's stats page on the Poisson distribution which I though was really well written. That is the main reference here.
Poisson distribution gives the probability of an event occuring some number of times in a specific interval. This could be time, distance, whatever. What matters is that the interval is specific and fixed.
The example used on the UMass Amherst is letters received in a day. The interval here is one day. The poisson distribution will then tell you the probability of getting a certain number of letters in one day, the interval. Other examples could include the number of planes landing at an airport in a day or the number of linux server crashes in a year etc...
The interval is one component. The other is an already observed average rate per interval, or expected number of successes in an interval, . For example, we might have observed we get on average 5 letters per day (), or that 1003 planes land at our airport () per day or that there are 4 linux server crashes per year () etc...
So, poisson has an interval and an observed average count for that interval. The following assumptions are made:
• The #occurrences can be counted as an integer
• The average #occurrences is known
• The probability of the occurence of an event is constant for all subintervals. E.g., if we divided the day into minutes, the probability of receiving a letter in any minute of the day is the same as for any other minute.
• There can be no more than one occurrence in the subinterval
• Occurrences are independent
The distribution is defined as follows, where is the expected number of events per interval.
Eek... didn't like the look of trying to derive the mean and variance. The population mean and variance are as follows.
We can plot some example distributions... lets do this using Python.
import matplotlib.pyplot as pl
import numpy as np
from matplotlib.font_manager import FontProperties
from scipy.stats import poisson
fontP = FontProperties()
fontP.set_size('small')
expectedNumberOfSuccessesLst = [1, 5, 10, 15]
x = np.arange(-1, 31)
fig, ax = pl.subplots()
for numSuccesses in expectedNumberOfSuccessesLst:
ax.plot(x, poisson.pmf(x, numSuccesses),linestyle='steps-mid')
ax.legend(['$\lambda={}$'.format(n) for n in expectedNumberOfSuccessesLst],
ncol = 3,
prop=fontP,
bbox_to_anchor=[0.5, 1.0],
loc='upper center')
fig.show()
pl.show()
fig.savefig('poisson_distrib.png', format='png')
Lets take a little example. Looking at the UK government report on road casualties in 2014, there were a reported 194,477 casualties in 2014. This gives us an average of 532.8137 (4dp) casualties per day! So, we could ask some questions...
What is the probability that no accidents occur on any given day? Recalling that... We will set and and plug these into the above forumla... Okay... so the probability that on any given day in the UK that there are no road casualties is pretty (very scarily) small!
Of course, we could critisize this analysis as not taking into account seaons, or weather conditions etc etc. In winter, for example, when the roads are icy one might expect the probability of an accident to be greater, but for the purposes of a little example, I've kept it simple.
We might also ask, what is the probability that the number of casualties is less than, say, 300? Eek! Don't really want to be calculating this by hand so let's use Python's SciPi package:
from scipy.stats import poisson
print poisson.cdf(299, 194477.0/365)
This outputs which is still pretty unlikely!. For we get , for we get . So we can imagine that the distribution is really quite steep:
import matplotlib.pyplot as pl
import numpy as np
from scipy.stats import poisson
x = np.arange(200, 800)
fig, ax = pl.subplots()
ax.plot(x, poisson.pmf(x, 194477/365), linestyle='steps-mid')
ax.set_title('Probability of #road casualties per day in UK')
fig.show()
fig.savefig('poisson_road_fatilities_distrib.png', format='png')
### Joint Distributions
When one thing is likely to vary in dependence on another thing. For example, the likely longevity of my milk will correspond to some extent to the temperature of my fridge: there is a relationship between these two variables so it is important that any model produced includes the effect of this relationship. We need to define our probabilities that random variables simultaneously take some values.
Enter the joint probability function. It expresses the probability that each random variable takes on a value as a function of those variables.
A two variable joint probability distribution is defined as: This is a subtle difference in the notation of from our previous examples in the starting sections where we wrote things like . The former is a distribution where the random variables simultaneously take on the values and respectively and the latter is just the probability of two individual events happening simultaneously.
The marginal probabilities are the probabilities that one random variable takes on a value regardless of what the other(s) are doing. In our two variable distribution we have:
Joint probability functions have these properties:
1. for any pair of values and ,
2. The sum of the join probabilities is 1.
The conditional probability function looks like this...
The random variables are independent if their joint probability function is the product of their marginal probability functions: When they are independent we also have
The conditional mean and variance are:
### Covariance and Correllation
Measure of joint variablility: the nature and strength of a relationship between two variables.
Covariance between two random variables and is , given by: This is equivalent to the following: A covariance that is strongly negative indicates a good inverse linear relationship. Strongly positive indicates a good linear relationship and near zero indicates no relationship. Note that if two random variables are statistically independent, the covariance between them is (near) zero. But, if the covariance is (near) zero it does not necessarily mean there is no relationship, it might just not be linear.
Correlation is just a "normalisation" of the covariance such that the measure is limited to being in the range [-1, 1].
This YouTube video explains the difference really well:
## Continuous Distributions
### Cumulative Distribution Function (CDF)
CDF gives probability that continuous random variable does not exceed a particular value, : The probability that takes on a signular value is zero. I.e., . Therefore, it doesn't matter if we write or ...
To get the probability that lies within a range use:
### Probability Density Function (PDF)
Call our PDF ... Then the total area under the curve is 1. I.e., The probability that takes a value between two limits is... And the CDF is also defined by.... We can judge whether a distribution is a valid PDF using the above definitions.
### Uniform Distribution
The uniform distribution is one where the probability that takes a value in a fixed size interval is the same regardless of where the interval "starts" (as long as the interval is contained entirely within the range for which is non-zero). I.e,
### Normal Distribution
This distribution is defined by a gaussian bell-shaped curve. The probability distribution function for a normally distributed random vairable, , is given by this equation: For any normal distribution the following applies:
1. About 68% of the data will fall within one standard deviation, , of the mean, ,
2. About 95% of the data will fall within two standard deviations, , of the mean, ,
3. Over 99% of the data will fall within three standard deviations, , of the mean, ,
4. Mean is ,
5. Variance is ,
The normal distribution is defined with the following notation: There is no simple algebraic expression for the cummulative distribution function. Can't really say I fancy the idea of integrating the above function! There are many numerical approximations. Computer could do it for us, but another way is to convert every normal distribution to the standard normal distribution.
The standard normal distribution is a normal distribution where the the mean is 0 and the variance is 1: And In English we can say that the z-score/value is the number of standard deviations is away from the (population) mean .
To solve we solve the following: For which we can consult a standard normal table.
We can also get the probability of a Z score and vice versa in Python by using scipy.stats.norm.cdf() and scipy.stats.norm.ppf() respectively as documented here.
#### Percentiles
Wikipedia defines a percentile as a measure giving the value below which a given percentage of obersvations fall. If an observation is AT the xth percentile then it has the value, V, at which x% of the scores can be found. If it is IN this percentile then its value is in the range 0 to V.
Percentiles aren't really specific to normal distributions but you often get asked questions like "what is the 95th percentile of this distribution?"
In the standard normal distribution...
1. 90th percentile is at z=1.28. (This is because ).
2. 95th percentile is at z=1.645. (This is because ).
3. And so on...
## Sampling Distributions
### A Little Intro...
So far we have been talking about population statistics. The values and have been the mean and standard deviation of the population. However, generally it is pretty impossible to gather information about an entire population: this can be due to the cost that would be involved, or perhaps that time that such an endevour would take, for example. It might also be undesirable to analyse an entire population if, for example, analysis involved destruction of the samples taken!
So, what is normally done is to take a sample from the population and then use the sample statistics to make inferences about the population statistics. The image below shows three samples that have been taken from a population. Each sample set can, and will most probably, have a different shape, mean, and variance!
We can demonstrate this concept using a quick little Python program to take 4 samples from the normal distribution where each sample has 10 members:
import numpy as np
import matplotlib.pyplot as pl
numSampleSets = 4
numSamplesPerSet = 10
# Limit individual plots, otherwise the first plot takes forever and is rubbish
doShowIndividualSamples = (numSampleSets <= 8) and (numSamplesPerSet <= 50)
# Create a numSampleSets (rows) x numSamplesPerSet (cols) array where we use
# each row as a sample.
randomSamples = np.random.randn(numSampleSets, numSamplesPerSet)
# Take the mean of the rows, i.e. the mean of each sample set
means = randomSamples.mean(axis=1)
if doShowIndividualSamples:
fig, axs = pl.subplots(nrows=int(numSampleSets/2.0+0.5), ncols=2)
xticks = np.arange(numSamplesPerSet, dtype='float64')
for idx in range(numSampleSets):
ax_col = idx % 2
ax_row = int(idx/2.0)
thisAx = axs[ax_row][ax_col]
thisAx.bar(xticks, randomSamples[idx][:], width=1)
thisAx.set_xticks(xticks + 0.5)
thisAx.set_xticklabels(xticks, fontsize=8)
thisAx.axhline(y=means[idx], color="red", linewidth=2)
thisAx.set_title("Sample set #{}".format(idx))
thisAx.grid()
xticks = np.arange(numSampleSets, dtype='float64')
fig2, ax2 = pl.subplots(nrows=2)
ax2[0].bar(xticks, means)
ax2[0].grid()
ax2[0].set_title("Distribution of {} sample means (n={})".format(numSampleSets, numSamplesPerSet))
ax2[0].set_ylabel("Mean value")
ax2[0].set_xlabel("Sample set #")
ax2[1].hist(means, 50)
ax2[1].grid()
ax2[1].set_title("Histogram of {} sample means (n={})".format(numSampleSets, numSamplesPerSet))
ax2[1].set_ylabel("# sample mean's")
ax2[1].set_xlabel("Sample mean bin")
pl.tight_layout()
pl.show()
The script above produces the following graphs. The x-axis is just the sequence number of the sample member and the y-axis the value of the sample member. The horizontal line is the sample mean.
We can see from this little example that the samples in each of the 4 instances are different and that the mean is different for each sample. Keep in mind that the graph shown when you run the above script will be different as it is a random sample :)
So we can see that although the population mean is fixed the sample mean can vary from sample to sample. Therefore, the mean of a sample is itself a random variable and as a random variable it will have it's own distribution (same applies for variance).
But, we want to use the sample statistics to infer the population statistics. How can we do this if the sample mean (and variance) can vary from sample to sample? There are a few key statistical theories that will help us out...
### Independent and Identically Distributed (I.I.D.) Random Variables
Wikipedia says that ...a sequence or other collection of random variables is independent and identically distributed (I.I.D.) if each random variable has the same probability distribution as the others and all are mutually independent...
It is often the case that if the population is large enough and a sample from the population only represents a "small" fraction of the population that a set of simple random samples without replacement still qualifies I.I.D. selections. If you sample from a population and the sample size represents a significant fraction of the population then you cannot assume this to be true.
We're going to need to know this a little later on when we talk about conistentent estimators and the law of large numbers...
### The Law of Large Numbers
The law of large numbers states that the average of a very large number (note we haven't quite defined what "very large" is) of items will tend towards the average of the popultion from which the items are drawn. I.e., the sample mean (or any other parameter) tends towards the population mean (or parameter, in general) as the number of items in the sample tends to infinity.
This is shown empirically in the example below:
import numpy as np
import matplotlib.pyplot as pl
maxSampleSizes = [100, 1000, 100000]
fig, axs = pl.subplots(nrows = 3)
for idx, ssize in enumerate(maxSampleSizes):
sample_sizes = np.arange(1,ssize+1)
sample_means = np.random.randn(ssize).cumsum() / sample_sizes
axs[idx].plot(sample_sizes, sample_means)
axs[idx].set_xlabel("Sample size, max is {}".format(ssize));
axs[idx].set_ylabel("Average")
axs[idx].axhline(y=0, color="red")
fig.show(); pl.show()
Explain the code a little... np.random.randn(ssize) generates a numpy array with ssize elements. Each element is a "draw" from the standard normal distribution. The function cumsum() then produces an array of ssize elements where the second element is the sum of the first 2 elements, the third element is the sum of the first 3 elements and so on. Thus we have an array where the ith element is the sum of i samples. Dividing this by the array sample_sizes gives us an array of means where the ith element is the mean of a sample with i items.
Running the code produced the figure below...
To summarise, the law of large numbers states that the sample mean of I.I.D. samples is consistent with the population mean. The same is true for sample variance. I.e., as the number of items in the sample increases indefinitely, the sample mean will become a better and better estimate of the population mean.
### Sampling Distribution Of The Sample Mean
We've established that between samples, the mean and variance of the samples, well... varies! This implies that we can make a distribution of sample means.
The "sampling distribution of the sample mean" (a bit of a mouthful!) is the probability distribution of the sample means obtained from all possible samples of the same number of observations.
That's a bit of a mounthful! What it means is that if we took, from the population, the exhaustive set of all distinct samples of size , and then took the mean of each sample, we could figure out what the propability is that any sample of size has a specific mean. Thus we build up the probability distribution of sample means, where the sample size is . We will see that this probability distribution is centered around the mean of all the sample means and that it also has a normal distribution (see LLN and CLT).
For example if I am a factory owner and I produce machines that accurately measure distance, I could have a population of millions. I clearly do not want to test each and every device coming off my production line, especially if the time that testing requires is anything approaching the time taken to produce the item: I'd be halving (or more) my production throughput!
So what do I do? I can take a sample of say 50 devices each day. I can test these to see if they accurately measure distance, and if they do I can assume the production process is still running correctly.
But as we have seem if I test 50 devices each day for 10 days, each of my 10 sample sets will have a different mean accuracy. On the first day, the mean accuracy of 50 devices might be 95%, on the second day, the mean of the next 50 devices might be 96.53% and so on.
The sample mean has a distribution. As we can take many samples from a population we have sampled the sample mean, so to speak, hence the rather verbose title "sampling distribution of the sample mean".
Now, I can ask, on any given day, "what is the probability that the mean accuracy of my 50 devices is, say 95%?". This is the sampling distribution of the sample mean: the probability distribution of the sample means (mean accuracy of a sample of devices) obtained from all possible samples (theoretical: we can't actually measure all possible samples!) of the same number of observations (50 in this case).
The law of large numbers gives us a little clue as to how variable the sample means will be... we know that if the sample size is large then the sample mean tends towards the population mean as has a much narrower variance about the population mean. So, 50 samples is better than 10. But 10,000 samples would be even better. The question is how many samples do we need. Will try to answer that (much) later on...
#### Sample Mean And It's Expected Value (The Mean Of Sample Means)
Continuing with the example of distance measurement devices. We could say each device in our sample set is represented by random variables (we samples 50 devices each day). We can generalise this to devices in the sample set. Using our definition of expected value we can define the sample mean as the random variable , for the ith sample set of, and is defined as follows.
Now we can calculate the expected value of the sample mean, (the mean of the sample means).
Imagine that we take samples, where each sample is a set of devices from our population. If we represent the mean of each sample as the random variables , then the expected value of the distribution of sample means is the mean expected value of the samples... We saw that expected value of a linear combination of random variables is the linear combination of each random variable's expected value. This means that we can write the following. The law of large numbers also tells us that if the size of each sample is sufficiently large, it's sample mean, will tend towards the population mean, . So, if is large enough we can say... for the This means that the mean of "the sample distribution of sample means" is the population mean when the size of each sample () is sufficiently large. Put another way, the distribution of samples means in centred around the population mean.
A single sample mean can therefore be larger or smaller than the population mean, but on average, there isn't any reason to expect that it is either. As the sample size increases we also know that the likelihood of the sample mean being higher or lower than the population mean decreases (law of large numbers).
#### The Variance Of The Sample Mean w.r.t. Sample Size And Standard Error
Sample variance can be written as or . We can see that variance of the sampling distribution decreases as sample size increases. Therefore the larger the sample the more accurately we can infer population statistics.
How did we get the above results? Well, it all is a bit brain melting, but here goes. The variance of the mean of sample means is written as . Because it is the variance of the mean of sample means, we can write this: The constant can be taken out of the variance. TODO - finish this off.
The sample standard deviation is often refered to as the standard error. If sample size not super small compared to population use finite population correction factor to get
#### The Shape Of The Samplign Distribution Of Sample Means
When samples are selected from a population that has a normal distribution we will see (anecdotally) that the sampling distribution of sample means is also normally distributed.
A histogram of the sample means gives a better picture of the distribution of sample means from the 4 sample sets. Diagrams have been drawn using the previous python example.
Now observe what happens when we increase the number of sample sets taken to 50. The distribution of sample means is still pretty distributed without much definition of the shape of the distribution...
Now observe what happens when we increase the number of sample sets taken to 1000. The distribution of sample means begins to take shape...
Now observe what happens when we increase the number of sample sets taken to 1000. The distribution of sample means now really looks very much like a gaussian distribution... interesting!
Clearly when we sample from the population the sample can be, to varying degrees, either a good or bad representation of the population. To help ensure that that sample represents the population (i.e., no "section" of the population is over or under represented) a simple random sample is usually taken.
Also of great interest in the above is that as the number of samples taken increases the distribution of sample means appears to become more and more gaussian in shape!. What is always true, and even more important is that the sampling distribution becomes concentrated closer to the population mean as the sample size increases, as we have seen in the maths earlier. The example above also doesn't proove it... its just anecdotal evidence. We could also notice that as #samples increases the variance of the sample mean decreases, again as we saw in the maths earlier.
Remember: take care to differentiate between the sample mean, , sometimes witten as , and the population mean, . We do not know the population mean. We know the sample mean. The population mean is fixed. The sample mean, as seen, will vary between samples.
We must also remember to differentiate between the sample variance, and the population variance, . We do not know the population variance. We know the sample mean. The population variance is fixed. The sample variance, as seen, will vary between samples.
Also remember that the samples were drawn from a population that is normally distributed. So the sampling distribution was also normally distributed, as we have anacdotally seen. The central limit theory (CLT) will show us that is doesn't matter what the population distriution is. Enough samples and the sampling distribution of the sample mean will always tend towards normal distribution.
Because the sampling distribution of sample means is normally distributed, it means that we can also form a standard normal distribution for the sample means, which we do as follows:
### Central Limit
We saw that the mean of the sampling distribution of sample means is and that the variance of the sample mean is . We also know (although I haven't shown this... I haven't bothered looking up the proof either!) that the sampling distribution of sample means is also normal.
The central limit theorem removes the restriction that the population we draw from need be normally distributed in order for the sampling distribution of the sample mean to be normally distributed. The central limit theoem states that the mean of a random sample drawn from a population with any distribution will be approx. normally distributed as the number of samples, , becomes large.
We could test this for, say, samples drawn from the binomial distribution. We have to modify our python code a little though (and also slim line some of the bits out)...
import numpy as np
import matplotlib.pyplot as pl
numSampleSets = 4000
numSamplesPerSet = 10
p = 0.5
# Make an array of numSampleSets means of samples of size numSamplesPerSet
# drawn from the binomial distirbution with prob of success p
means = np.random.binomial(numSamplesPerSet, p, numSampleSets)
means = means.astype("float64") / numSamplesPerSet
xticks = np.arange(numSampleSets, dtype='float64')
fig2, ax2 = pl.subplots()
ax2.hist(means, 50)
ax2.grid()
ax2.set_title("Histogram of {} sample means (n={})\nSamples draw from binomian p={}".format(numSampleSets, numSamplesPerSet, p))
ax2.set_ylabel("# sample mean's")
ax2.set_xlabel("Sample mean bin")
pl.tight_layout()
pl.show()
When we set the number of sample sets taken to be small, at 10, we get this:
When we set the number of sample sets taken to be very large, at 4000, we get this:
Hmmm... anacdotally satisfying at least.
## Confidence Intervals
The following is mostly based on this Khan Achademy tutorial with some additions from my reference textbook.
A confidence interval is a range of values which we are X% certain contain an unknown population parameter (e.g. the population mean), based on information we obtain from a sample. In other words, we can infer from our sample something about an unknown population parameter.
Let's go back to our distance measurement devices. I have a test fixture that is exactly 100cm wide. Therefore a device should, when placed in this fixture, report/measure a distance of 100cm +/- some tolerance.
Remember that I don't want to test all of the sensors I've produced because this will kill the manufacturing time and significantly raise the cost per unit. So I've decide to test, let's say, 50 sensors.
As I test each device in my sample of 50 I will get back a set of test readings, one for each device. 100.1cm, 99cm, 99.24cm, ... etc. From the sample I can calculate sample mean and sample variance. But this is about all I know: I do not know my population mean or population variance!
But I have learn't something about the sampling distribution of sample means. To recap, we have seen that: The expected value of the sample mean (i.e., the mean of all possible sample means) is the same as the population mean for large enough samples. The variance of the sample mean is related to the poulation variance by sample size.
So, I also know that my specific sample mean must lie somewhere in the sampling distribution of sample means...
Continuing with the example then... let's say I have taken my sample of 50 devices. I have a sample mean of . This is shown in the graph of the sampling distribution of sampling means below. In this distribution my specific sample has a mean that lies somewhere in this distribution.
<Click me to toggle code view for graph creation>
import matplotlib.pyplot as pl
import numpy as np
import scipy.stats
sample_mean = 105
population_mean = 99
expected_sample_mean = 102
var_of_expected_sample_mean = 4
std_of_expected_sample_mean = np.sqrt(var_of_expected_sample_mean)
x = np.linspace( expected_sample_mean - 4 * var_of_expected_sample_mean
, expected_sample_mean + 4 * var_of_expected_sample_mean
, 500)
y = scipy.stats.norm.pdf( x
, loc = expected_sample_mean
, scale = var_of_expected_sample_mean)
max_y = scipy.stats.norm.pdf( expected_sample_mean
, loc = expected_sample_mean
, scale = var_of_expected_sample_mean)
fig, ax = pl.subplots()
ax.plot(x,y)
fill_mask = (x >= expected_sample_mean) & (x <= expected_sample_mean + std_of_expected_sample_mean)
ax.fill_between(x, 0, y
, interpolate = True
, facecolor = 'lightblue')
ax.annotate(r'$E(\overline{X}) = \mu$',
xy=(expected_sample_mean, 0),
xytext=(expected_sample_mean-2*var_of_expected_sample_mean, 0.02),
bbox=dict(fc="w"),
fontsize='large',
ax.annotate(''
, xy=(expected_sample_mean, max_y/2)
, xytext = (expected_sample_mean + std_of_expected_sample_mean, max_y/2)
, arrowprops=dict(arrowstyle="<->"
, connectionstyle="arc3"
)
)
ax.annotate(r'$\sigma_{\overline{X}} = \frac{\sigma}{\sqrt{n}}$',
xy=(expected_sample_mean + std_of_expected_sample_mean/2, max_y/2),
xytext=(expected_sample_mean + 2*std_of_expected_sample_mean, 3*max_y/4),
fontsize='large',
bbox=dict(fc="w"),
ax.axvline(x=population_mean, color='black', lw=2)
ax.annotate('Population mean, $\\mu$,\nis unknown!',
xy=(population_mean, max_y/2),
xytext=(x[0], 3*max_y/4),
bbox=dict(fc="w"),
ax.axvline(x=sample_mean, color='black', lw=2)
ax.annotate('Sample mean, $\\overline{{x}}$,\nis {}'.format(sample_mean),
xy=(sample_mean, max_y/2),
xytext=(sample_mean + 2*std_of_expected_sample_mean, max_y/4),
bbox=dict(fc="w"),
ax.set_title("An example sampling distribution of sample means\nfor the distance measurement devices")
ax.grid()
pl.show()
The graph above also shows the population mean. This also lies somewhere in this distribution of sample means, but we don't know where. This is the parameter that we want to infer! The single sample's mean and variance are all we have at the moment so somehow we need to move from these statistics to an assertion about the population mean.
Now I'm going to stop talking about , our specific sample mean. This is because we are taking about any sample, so we want to use a random variable that represents the mean of a random sample, .
We know is somwhere near and will lie in an interval that spans the distance . But how big is this interval? To be certain that it contains our sample mean it would have to be infinitely wide! So we can't be certain. But, we could be pretty certain. We could, for example, say we'd be satisfied if we were 95% certain that ... In other words we are asking the following question, what is... This interval that gives us a 95% probability that the mean of a random sample (of size 50 in this case) will lie within of the expected sample mean. (It doesn't have to be 95%, you choose how confident you want to be!).
Hang on! We know from the CLT that the sampling distribution should be normal. So we can figure this out using the normal distribution. Further more we know that we can normalise the distribution to arrive at the standard normal distribution. Therefore, what we are looking for in the standard normal is the following:
<Click me to toggle code view for graph creation>
import matplotlib.pyplot as pl
import numpy as np
import scipy.stats
z = np.linspace(-4, 4, 500)
y = scipy.stats.norm.pdf(z)
x_95p = scipy.stats.norm.ppf(0.975) #.025 in each tail gives 0.05 for 95% conf
x95_mid = scipy.stats.norm.ppf(0.9875)
fig, ax = pl.subplots()
ax.plot(z,y)
ax.fill_between(z, 0, y
, where = z < -x_95p
, interpolate = True
, facecolor = 'lightblue')
ax.text(-x_95p, -0.02, "$-{0:.3f}z$".format(x_95p), ha="center", fontsize='large')
ax.fill_between(z, 0, y
, where = z > x_95p
, interpolate = True
, facecolor = 'lightblue')
ax.text(x_95p, -0.02, "${0:.3f}z$".format(x_95p), ha="center", fontsize='large')
ax.plot( [0, 0], [0, max(y)], color='b')
ax.text(0, -0.02, "$0$", ha="center", fontsize='large')
ax.annotate( "0.025"
, xy=(x95_mid, scipy.stats.norm.pdf(x95_mid)/2)
, xytext=(40, 40)
, textcoords='offset points'
, bbox=dict(fc="lightblue")
, fontsize='large'
ax.annotate( "0.025"
, xy=(-x95_mid, scipy.stats.norm.pdf(-x95_mid)/2)
, xytext=(40, 40)
, textcoords='offset points'
, bbox=dict(fc="lightblue")
, fontsize='large'
ax.annotate( "0.95"
, xy=(0, max(y)/2.0)
, xytext=(40, 40)
, textcoords='offset points'
, bbox=dict(fc="w")
, fontsize='large'
pl.show()
ttp://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
|
2021-04-13 11:07:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 44, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076094388961792, "perplexity": 493.7745170882725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072180.33/warc/CC-MAIN-20210413092418-20210413122418-00157.warc.gz"}
|
https://intelligencemission.com/free-energy-electricity-free-refrigerator.html
|
Reality is never going to be accepted by tat section of the community. Thanks for writing all about the phase conjugation stuff. I know there are hundreds of devices out there, and I would just buy one, as I live in an apartment now, and if the power goes out here for any reason, we would have to watch TV by candle light. lol. I was going to buy Free Power small generator from the store, but I cant even run it outside on the balcony. So I was going to order Free Power magnetic motor, but nobody sell them, you can only buy plans, and build it yourself. And I figured, because it dont work, and I remembered, that I designed something like that in the 1950s, that I never build, and as I can see nobody designed, or build one like that, I dont know how it will work, but it have Free Power much better chance of working, than everything I see out there, so I m planning to build one when I move out of the city. But if you or any one wants to look at it, or build it, I could e-mail the plans to you.
This tells us that the change in free energy equals the reversible or maximum work for Free Power process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for Free Power reversible adiabatic expansion of an ideal gas, {\displaystyle \Delta A=w_{rev}-S\Delta T}. Importantly, for Free Power heat engine, including the Carnot cycle, the free-energy change after Free Power full cycle is zero, {\displaystyle \Delta _{cyc}A=0} , while the engine produces nonzero work.
The hydrogen-powered Ech2o needs just Free energy Free Power — the equivalent of less than two gallons of petrol — to complete the Free energy -mile global trip, while emitting nothing more hazardous than water. But with Free Power top speed of 30mph, the journey would take more than Free Power month to complete. Ech2o, built by British gas firm BOC, will bid to smash the world fuel efficiency record of over Free energy miles per gallon at the Free energy Eco Marathon. The record is currently…. Free Power, 385 km/per liter [over Free Electricity mpg!]. Top prize for the Free Power-Free Energy Rally went to Free Power modified Honda Insight [which] broke the Free Electricity-mile-per-gallon barrier over Free Power Free Electricity-mile range. The car actually got Free Electricity miles-per gallon. St. Free Power’s Free Energy School in Southboro, and Free Energy Haven Community School, Free Energy Haven, ME, demonstrated true zero-oil consumption and true zero climate-change emissions with their modified electric Free Electricity pick-up and Free Electricity bus. Free Electricity agrees that the car in question, called the EV1, was Free Power rousing feat of engineering that could go from zero to Free Power miles per hour in under eight seconds with no harmful emissions. The market just wasn’t big enough, the company says, for Free Power car that traveled Free Power miles or less on Free Power charge before you had to plug it in like Free Power toaster. Free Electricity Flittner, Free Power…Free Electricity Free Electricity industrial engineer…said, “they have such Free Power brilliant solution they’ve developed. They’ve put it on the market and proved it works. Free Energy still want it and they’re taking it away and destroying it. ”Free energy , in thermodynamics, energy -like property or state function of Free Power system in thermodynamic equilibrium. Free energy has the dimensions of energy , and its value is determined by the state of the system and not by its history. Free energy is used to determine how systems change and how much work they can produce. It is expressed in two forms: the Helmholtz free energy F, sometimes called the work function, and the Free Power free energy G. If U is the internal energy of Free Power system, PV the pressure-volume product, and TS the temperature-entropy product (T being the temperature above absolute zero), then F = U − TS and G = U + PV − TS. The latter equation can also be written in the form G = H – TS, where H = U + PV is the enthalpy. Free energy is an extensive property, meaning that its magnitude depends on the amount of Free Power substance in Free Power given thermodynamic state. The changes in free energy , ΔF or ΔG, are useful in determining the direction of spontaneous change and evaluating the maximum work that can be obtained from thermodynamic processes involving chemical or other types of reactions. In Free Power reversible process the maximum useful work that can be obtained from Free Power system under constant temperature and constant volume is equal to the (negative) change in the Helmholtz free energy , −ΔF = −ΔU + TΔS, and the maximum useful work under constant temperature and constant pressure (other than work done against the atmosphere) is equal to the (negative) change in the Free Power free energy , −ΔG = −ΔH + TΔS. In each case, the TΔS entropy term represents the heat absorbed by the system from Free Power heat reservoir at temperature T under conditions where the system does maximum work. By conservation of energy , the total work done also includes the decrease in internal energy U or enthalpy H as the case may be. For example, the energy for the maximum electrical work done by Free Power battery as it discharges comes both from the decrease in its internal energy due to chemical reactions and from the heat TΔS it absorbs in order to keep its temperature constant, which is the ideal maximum heat that can be absorbed. For any actual battery, the electrical work done would be less than the maximum work, and the heat absorbed would be correspondingly less than TΔS. Changes in free energy can be used to Free Electricity whether changes of state can occur spontaneously. Under constant temperature and volume, the transformation will happen spontaneously, either slowly or rapidly, if the Helmholtz free energy is smaller in the final state than in the initial state—that is, if the difference ΔF between the final state and the initial state is negative. Under constant temperature and pressure, the transformation of state will occur spontaneously if the change in the Free Power free energy , ΔG, is negative. Phase transitions provide instructive examples, as when ice melts to form water at 0. 01 °C (T = Free energy. Free energy K), with the solid and liquid phases in equilibrium. Then ΔH = Free Power. Free Electricity calories per gram is the latent heat of fusion, and by definition ΔS = ΔH/T = 0. Free Power calories per gram∙K is the entropy change. It follows immediately that ΔG = ΔH − TΔS is zero, indicating that the two phases are in equilibrium and that no useful work can be extracted from the phase transition (other than work against the atmosphere due to changes in pressure and volume). Free Power, ΔG is negative for T > Free energy. Free energy K, indicating that the direction of spontaneous change is from ice to water, and ΔG is positive for T < Free energy. Free energy K, where the reverse reaction of freezing takes place.
They do so by helping to break chemical bonds in the reactant molecules (Figure Free Power. Free Electricity). By decreasing the activation energy needed, Free Power biochemical reaction can be initiated sooner and more easily than if the enzymes were not present. Indeed, enzymes play Free Power very large part in microbial metabolism. They facilitate each step along the metabolic pathway. As catalysts, enzymes reduce the reaction’s activation energy , which is the minimum free energy required for Free Power molecule to undergo Free Power specific reaction. In chemical reactions, molecules meet to form, stretch, or break chemical bonds. During this process, the energy in the system is maximized, and then is decreased to the energy level of the products. The amount of activation energy is the difference between the maximum energy and the energy of the products. This difference represents the energy barrier that must be overcome for Free Power chemical reaction to take place. Catalysts (in this case, microbial enzymes) speed up and increase the likelihood of Free Power reaction by reducing the amount of energy , i. e. the activation energy , needed for the reaction. Enzymes are usually quite specific. An enzyme is limited in the kinds of substrate that it will catalyze. Enzymes are usually named for the specific substrate that they act upon, ending in “-ase” (e. g. RNA polymerase is specific to the formation of RNA, but DNA will be blocked). Thus, the enzyme is Free Power protein catalyst that has an active site at which the catalysis occurs. The enzyme can bind Free Power limited number of substrate molecules. The binding site is specific, i. e. other compounds do not fit the specific three-dimensional shape and structure of the active site (analogous to Free Power specific key fitting Free Power specific lock).
We need to stop listening to articles that say what we can’t have. Life is to powerful and abundant and running without our help. We have the resources and creative thinking to match life with our thoughts. Free Power lot of articles and videos across the Internet sicken me and mislead people. The inventors need to stand out more in the corners of earth. The intelligent thinking is here and freely given power is here. We are just connecting the dots. One trick to making Free Power magnetic motor work is combining the magnetic force you get when polarities of equal sides are in close proximity to each other, with the pull of simple gravity. Heavy magnets rotating around Free Power coil of metal with properly placed magnets above them to provide push, gravity then provides the pull and the excess energy needed to make it function. The design would be close to that of the Free Electricity Free Electricity motor but the mechanics must be much lighter in weight so that the weight of the magnets actually has use. A lot of people could do well to ignore all the rules of physics sometimes. Rules are there to be broken and all the rules have done is stunt technology advances. Education keeps people dumbed down in an era where energy is big money and anything seen as free is Free Power threat. Open your eyes to the real possibilities. Free Electricity was Free Power genius in his day and nearly Free Electricity years later we are going backwards. One thing is for sure, magnets are fantastic objects. It’s not free energy as eventually even the best will demagnetise but it’s close enough for me.
Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested that two metal plates held apart in Free Power vacuum could trap the waves, creating vacuum energy that could attract or repel the plates. As the boundaries of Free Power region move, the variation in vacuum energy (zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, and Vrije University in Amsterdam and elsewhere has proved the Casimir effect correct. (source)
I spent the last week looking over some major energy forums with many thousands of posts. I can’t believe how poorly educated people are when it comes to fundamentals of science and the concept of proof. It has become cult like, where belief has overcome reason. Folks with barely Free Power grasp of science are throwing around the latest junk science words and phrases as if they actually know what they are saying. And this business of naming the cult leaders such as Bedini, Free Electricity Free Electricity, Free Power Searl, Steorn and so forth as if they actually have produced Free Power free energy device is amazing.
Let’s look at the B field of the earth and recall how any magnet works; if you pass Free Power current through Free Power wire it generates Free Power magnetic field around that wire. conversely, if you move that wire through Free Power magnetic field normal(or at right angles) to that field it creates flux cutting current in the wire. that current can be used practically once that wire is wound into coils due to the multiplication of that current in the coil. if there is any truth to energy in the Ether and whether there is any truth as to Free Power Westinghouse upon being presented by Free Electricity his ideas to approach all high areas of learning in the world, and change how electricity is taught i don’t know(because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). i have not studied this area. i welcome others who have to contribute to the discussion. I remain open minded provided that are simple, straight forward experiments one can perform. I have some questions and I know that there are some “geniuses” here who can answer all of them, but to start with: If Free Power magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like Free Power jet turbine running past its point of stability? How can Free Power magnet pass Free Power coil of wire at the speed of Free Power human Free Power and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in Free Power magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and ohms law, so be creative). I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations.
I want to use Free Power 3D printer to create the stator and rotors. This should allow Free Power high quality build with lower cost. Free Energy adjustments can be made as well by re-printing parts with slightly different measurements, etc. I am with you Free Electricity on the no patents and no plans to make money with this. I want to free the world from this oppression. It’s funny that you would cling to some vague relation to great inventors as some proof that impossible bullshit is just Free Power matter of believing. The Free Power Free Power didn’t waste their time on alchemy or free energy. They sought to understand the physical forces around them. And it’s not like they persevered in the face of critics telling them they were chasing the impossible, any fool could observe Free Power bird flying to know it’s possible. You will never achieve anything even close to what they did because you are seeking to defy the reality of our world. You’ve got to understand before you can invent. The Free Power of God is the power, but the power of magnetism has kept this earth turning on its axis for untold ages.
NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! rychu Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power has the credentials and knowledge to answer these questions and Bedini is the visionary for them!
A device I worked on many years ago went on television in operation. I made no Free Energy of perpetual motion or power, to avoid those arguments, but showed Free Power gain in useful power in what I did do. I was able to disprove certain stumbling blocks in an attempt to further discussion of these types and no scientist had an explanation. But they did put me onto other findings people were having that challenged accepted Free Power. Dr. Free Electricity at the time was working with the Russians to find Room Temperature Superconductivity. And another Scientist from CU developed Free Power cryogenic battery. “Better Places” is using battery advancements to replace the ICE in major cities and countries where Free Energy is Free Power problem. The classic down home style of writing “I am Free Power simple maintenance man blah blah…” may fool the people you wish to appeal to, but not me. Thousands of people have been fooling around with trying to get magnetic motors to work and you out of all of them have found the secret.
I might be scrapping my motor and going back to the drawing board. Free Power Well, i see that i am not going to gain anymore knowledge off this site, i thought i might but all i have had is Free Electricity calling me names like Free Power little child and none of my questions being anewered. Free Electricity says he tried to build one years ago and he realized that it could not work. Ok tell me why. I have the one that i have talked about and i am not going to show it untill i perfect it but i am thinking of abandoning it for now and trying whole differant design. Can the expert Free Electricity answer shis? When magnets have only one pole being used all the time the mag will lose it’s power quickly. What will happen if you use both poles in the repel state? Free Electricity that ballance the mag out or drain it twice as fast? How long will Free Power mag last running in the repel state all the time? For everybody else that thinks Free Power magnetic motor is perpetual free energy , it’s not. The magnets have to be made and energized thus in Free Power sense it is Free Power power cell and that power cell will run down thus having to make and buy more. Not free energy. This is still fun to play with though.
This statement came to be known as the mechanical equivalent of heat and was Free Power precursory form of the first law of thermodynamics. By 1865, the Free Energy physicist Free Energy Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from Free Power combustion reaction in Free Power coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push Free Power piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i. e. , the water molecules in the cylinder, do on each other as they pass or transform from one step of or state of the engine cycle to the next, e. g. , from (P1, V1) to (P2, V2). Clausius originally called this the “transformation content” of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e. g. , to push the piston. Clausius defined this transformation heat as dQ = T dS. In 1873, Free Energy Free Power published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Free Power of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i. e. , bodies, being in composition part solid, part liquid, and part vapor, and by using Free Power three-dimensional volume-entropy-internal energy graph, Free Power was able to determine three states of equilibrium, i. e. , “necessarily stable”, “neutral”, and “unstable”, and whether or not changes will ensue. In 1876, Free Power built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other.
Not Free Power lot to be gained there. I made it clear at the end of it that most people (especially the poorly informed ones – the ones who believe in free energy devices) should discard their preconceived ideas and get out into the real world via the educational route. “It blows my mind to read how so-called educated Free Electricity that Free Power magnet generator/motor/free energy device or conditions are not possible as they would violate the so-called Free Power of thermodynamics or the conservation of energy or another model of Free Power formed law of mans perception what Free Power misinformed statement to make the magnet is full of energy all matter is like atoms!!”
No, it’s not alchemy or magic to understand the attractive/resistive force created by magnets which requires no expensive fuel to operate. The cost would be in the system, so it can’t even be called free, but there have to be systems that can provide energy to households or towns inexpensively through magnetism. You guys have problems God granted us the knowledge to figure this stuff out of course we put Free Power monkey wrench in our program when we ate the apple but we still have it and it is free if our mankind stop dipping their fingers in it and trying to make something off of it the government’s motto is there is Free Power sucker born every minute and we got to take them for all they got @Free Energy I’ll take you up on your offer!!! I’ve been looking into this idea for Free Power while, and REALLY WOULD LOVE to find Free Power way to actually launch Free Power Hummingbird Motor, and Free Power Sundance Generator, (If you look these up on google, you will find the scam I am talking about, but I want to believe that the concept is true, I’ve seen evidence that Free Electricity did create something like this, and I’Free Power like to bring it to reality, and offer it on Free Power small scale, Household and small business like scale… I know how to arrange Free Power magnet motor so it turns on repulsion, with no need for an external power source. My biggest obstacle is I do not possess the building skills necessary to build it. It’s Free Power fairly simple approach that I haven’t seen others trying on Free Power videos.
|
2019-03-22 05:30:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.464404433965683, "perplexity": 1117.3875242933784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202628.42/warc/CC-MAIN-20190322034516-20190322060516-00455.warc.gz"}
|
https://trac-hacks.org/ticket/11621
|
Opened 3 years ago
## #11621 new enhancement
# ${ticket.commenters} field to include all previous commenters as recipients Reported by: Owned by: michael.snook@… ejucovy normal WorkflowNotificationPlugin normal ### Description For our internal helpdesk at work, we have staffers interact with a HelpDesk trac instance, but a lot of our correspondence happens via email threads. We'd like to make sure that staffers who comment on the ticket are automatically added to that email thread, so a${ticket.commenters} feature would be helpful.
I should note that because we also include \${tickets.cc} this is basically the same, for us, as automatically adding someone to cc whenever they comment.
### comment:1 Changed 8 weeks ago by robert.bostedt@…
I fully endorse this idea - 3 years on as still as relevant. I find no other good way of getting updates from tickets that i've written in but not own or reported. Unless turning ALWAYS_NOTIFY_UPDATER on, but then i cannot be selective on what updates to receive
|
2016-12-07 15:31:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19958122074604034, "perplexity": 3607.222964995009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542213.61/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://stackoverflow.com/questions/25610378/doxygen-dot-draw-link-between-classes-by-annotation
|
# Doxygen dot. Draw link between classes by annotation
Doxygen can generate class diagrams with graphiz.
For example:
class A {...};
class B extends A {...};
From this code I can generate a picture where doxygen can show that one class it the parent of an other.
But is there a way to generate a picture from code with manual references between classes?
For example when I describe DB scheme and use contract classes (http://developer.android.com/training/basics/data-storage/databases.html#DefineContract) I want to perform smth like this:
class MainClass {
class A {
String COLUMN_ID = "id_a";
}
/**
**/
class B {
String COLUMN_ID = "id_b";
String COLUMN_FOREIGN_KEY_TO_A = "id_a_key";
}
}
And generate a picture of 2 classes A and B with reference between them.
I've tried to search through documentation but can't find any suitable examples or explanation about custom drawing in javadoc+doxygen+graphviz.
The closest thing I can think of is to define a custom command that expands to an inline dot graph, i.e.
class MainClass {
public class A {
String COLUMN_ID = "id_a";
}
/**
* @cond
**/
/** @endcond */
public class B {
String COLUMN_ID = "id_b";
String COLUMN_FOREIGN_KEY_TO_A = "id_a_key";
}
}
with the following ALIAS definition in doxygen's configuration file:
ALIASES = dotlinkbetween{2}="@dot digraph { node [shape=record ]; \1 [ URL=\"\ref \1\" ]; \2 [ URL=\"\ref \2\" ]; \2 -> \1; } @enddot"
note that I had to use @cond ... @endcond to let doxygen skip the @OrAnotationToDrawLink line.
• Seems like a solution for my case, one more question - is it possible to create only one occurance of each class on graph? For example in this case (dimlix.com/screens/2014_09_d(001).png) get only 1 occurance of GroupStudentEntry? Sep 5 '14 at 9:58
• And maybe print not only a name of the class, but also params like in class diagramm: dimlix.com/screens/2014_09_d(002).png Sep 5 '14 at 9:59
|
2021-10-18 19:17:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40288376808166504, "perplexity": 6958.4654133725535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00313.warc.gz"}
|
https://www.x-mol.com/paper/math/tag/101/journal/27760
|
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-08-07
Seyed Hassan Alavi, Ashraf Daneshkhah, Cheryl E. Praeger
In this paper, we first study biplanes $$\mathcal {D}$$ with parameters (v, k, 2), where the block size $$k\in \{13,16\}$$. These are the smallest parameter values for which a classification is not available. We show that if $$k=13$$, then either $$\mathcal {D}$$ is the Aschbacher biplane or its dual, or $$\mathbf {Aut}(\mathcal {D})$$ is a subgroup of the cyclic group of order 3. In the case where
更新日期:2020-08-08
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-08-06
Martin Ekerå
We revisit the quantum algorithm for computing short discrete logarithms that was recently introduced by Ekerå and Håstad. By carefully analyzing the probability distribution induced by the algorithm, we show its success probability to be higher than previously reported. Inspired by our improved understanding of the distribution, we propose an improved post-processing algorithm that is considerably
更新日期:2020-08-06
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-07-31
Yan Zhu, Naoki Watamura
Relative t-designs are defined in both P- and Q-polynomial association schemes. In this paper, we investigate relative t-designs in Johnson association schemes J(v, k) for P-polynomial structure. It is known that each nontrivial shell of J(v, k) is identified with the product of two smaller Johnson association schemes. We prove that relative t-designs in J(v, k) supported by one shell are equivalent
更新日期:2020-08-01
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-07-03
Ignacio García-Marco, Irene Márquez-Corbella, Diego Ruano
Given a linear code $${\mathcal {C}}$$, its square code $${\mathcal {C}}^{(2)}$$ is the span of all component-wise products of two elements of $${\mathcal {C}}$$. Motivated by applications in multi-party computation, our purpose with this work is to answer the following question: which families of affine variety codes have simultaneously high dimension $$k({\mathcal {C}})$$ and high minimum distance
更新日期:2020-07-24
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-07-08
Alain Couvreur, Isabella Panaccione
We present a new decoding algorithm based on error locating pairs and correcting an amount of errors exceeding half the minimum distance. When applied to Reed–Solomon or algebraic geometry codes, the algorithm is a reformulation of the so-called power decoding algorithm. Asymptotically, it corrects errors up to Sudan’s radius. In addition, this new framework applies to any code benefiting from an error
更新日期:2020-07-24
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-06-21
Umberto Martínez-Peñas
Sum-rank Hamming codes are introduced in this work. They are essentially defined as the longest codes (thus of highest information rate) with minimum sum-rank distance at least 3 (thus one-error-correcting) for a fixed redundancy r, base-field size q and field-extension degree m (i.e., number of matrix rows). General upper bounds on their code length, number of shots or sublengths and average sublength
更新日期:2020-07-24
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-07-24
Yunwen Liu, Wenying Zhang, Bing Sun, Vincent Rijmen, Guoqiang Liu, Chao Li, Shaojing Fu, Meichun Cao
For differential cryptanalysis under the single-key model, the key schedules hardly need to be exploited in constructing the characteristics, which is based on the hypothesis of stochastic equivalence. In this paper, we study a profound effect of the key schedules on the validity of the differential characteristics. Noticing the sensitivity in the probability of the characteristics to specific keys
更新日期:2020-07-24
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-07-17
Daniel Coggia, Alain Couvreur
We present a polynomial time attack of a rank metric code based encryption scheme due to Loidreau for some parameters.
更新日期:2020-07-24
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-07-03
Gretchen L. Matthews, Fernando Piñero
Recently, Skabelund defined new maximal curves which are cyclic extensions of the Suzuki and Ree curves. Previously, the now well-known GK curves were found as cyclic extensions of the Hermitian curve. In this paper, we consider locally recoverable codes constructed from these new curves, complementing that done for the GK curve. Locally recoverable codes allow for the recovery of a single symbol by
更新日期:2020-07-24
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-07-02
Yanyan Gao, Qin Yue, Yansheng Wu
Let $$\mathbb {F}_q$$ be a finite field with q elements, $$D_{2n,\,r}$$ a generalized dihedral group with $$\gcd (2n,q)=1$$, and $$\mathbb {F}_q[D_{2n,\,r}]$$ a generalized dihedral group algebra. Firstly, an explicit expression for primitive idempotents of $$\mathbb {F}_q[D_{2n,\,r}]$$ is determined, which extends the results of Brochero Martínez (Finite Fields Appl 35:204–214, 2015). Secondly, all
更新日期:2020-07-24
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-04-27
Alonso Sepúlveda Castellanos, Maria Bras-Amorós
We determine the Weierstrass semigroup $$H(P_\infty ,P_1,\ldots ,P_m)$$ at several rational points on the maximal curves which cannot be covered by the Hermitian curve introduced in Tafazolian et al. (J Pure Appl Algebra 220(3):1122–1132, 2016). Furthermore, we present some conditions to find pure gaps. We use this semigroup to obtain AG codes with better relative parameters than comparable one-point
更新日期:2020-04-27
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-03-28
Alexandru Chirvasitu, Thomas W. Cusick
Let $$f_n(x_0, x_1, \ldots , x_{n-1})$$ denote the algebraic normal form (polynomial form) of a rotation symmetric (RS) Boolean function of degree d in $$n \ge d$$ variables and let $$wt(f_n)$$ denote the Hamming weight of this function. Let $$(0, a_1, \ldots , a_{d-1})_n$$ denote the function $$f_n$$ of degree d in n variables generated by the monomial $$x_0x_{a_1} \ldots x_{a_{d-1}}.$$ Such a function
更新日期:2020-03-28
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-03-21
Tovohery Hajatiana Randrianarisoa
In this work we develop a geometric approach to the study of rank metric codes. Using this method, we introduce a simpler definition for generalized rank weight of linear codes. We give a complete classification of constant rank weight code and we give their generalized rank weights.
更新日期:2020-03-21
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-03-10
Irene Márquez-Corbella, Edgar Martínez-Moro, Carlos Munuera
A locally recoverable code is an error-correcting code such that any erasure in a single coordinate of a codeword can be recovered from a small subset of other coordinates. In this article we develop an algorithm that computes a recovery structure as concise as possible for an arbitrary linear code $${\mathcal {C}}$$ and a recovery method that realizes it. This algorithm also provides the locality
更新日期:2020-03-10
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-02-26
Lucky Erap Galvez, Jon-Lark Kim
Matrix codes over a finite field $${\mathbb {F}}_q$$ are linear codes defined as subspaces of the vector space of $$m \times n$$ matrices over $${\mathbb {F}}_q$$. In this paper, we show how to obtain self-dual matrix codes from a self-dual matrix code of smaller size using a method we call the building-up construction. We show that every self-dual matrix code can be constructed using this building-up
更新日期:2020-02-26
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-02-11
René Bødker Christensen, Olav Geil
In this paper, we study the construction of quantum codes by applying Steane-enlargement to codes from the Hermitian function field. We cover Steane-enlargement of both usual one-point Hermitian codes and of order bound improved Hermitian codes. In particular, the paper contains two constructions of quantum codes whose parameters are described by explicit formulae, and we show that these codes compare
更新日期:2020-02-11
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2020-02-07
Hiram H. López, Gretchen L. Matthews, Ivan Soprunov
A monomial-Cartesian code is an evaluation code defined by evaluating a set of monomials over a Cartesian product. It is a generalization of some families of codes in the literature, for instance toric codes, affine Cartesian codes, and J-affine variety codes. In this work we use the vanishing ideal of the Cartesian product to give a description of the dual of a monomial-Cartesian code. Then we use
更新日期:2020-02-07
• Des. Codes Cryptogr. (IF 1.224) Pub Date : 2019-11-20
Thomas Britz, Adam Mammoliti, Keisuke Shiromoto
We extend and provide new proofs of the Wei-type duality theorems, due to Ducoat and Ravagnani, for Gabidulin–Roth rank-metric codes and for Delsarte rank-metric codes. These results follow as corollaries from fundamental Wei-type duality theorems that we prove for certain general combinatorial structures.
更新日期:2019-11-20
• Des. Codes Cryptogr. Pub Date : null
Andries E Brouwer,Sven C Polak
For n , d , w ∈ N , let A(n, d, w) denote the maximum size of a binary code of word length n, minimum distance d and constant weight w. Schrijver recently showed using semidefinite programming that A ( 23 , 8 , 11 ) = 1288 , and the second author that A ( 22 , 8 , 11 ) = 672 and A ( 22 , 8 , 10 ) = 616 . Here we show uniqueness of the codes achieving these bounds. Let A(n, d) denote the maximum size
更新日期:2019-11-01
• Des. Codes Cryptogr. Pub Date : null
Sven C Polak
For q , n , d ∈ N , let A q ( n , d ) be the maximum size of a code C ⊆ [ q ] n with minimum distance at least d. We give a divisibility argument resulting in the new upper bounds A 5 ( 8 , 6 ) ≤ 65 , A 4 ( 11 , 8 ) ≤ 60 and A 3 ( 16 , 11 ) ≤ 29 . These in turn imply the new upper bounds A 5 ( 9 , 6 ) ≤ 325 , A 5 ( 10 , 6 ) ≤ 1625 , A 5 ( 11 , 6 ) ≤ 8125 and A 4 ( 12 , 8 ) ≤ 240 . Furthermore, we prove
更新日期:2019-11-01
• Des. Codes Cryptogr. Pub Date : null
Arnold Neumaier
The paper describes improved analysis techniques for basis reduction that allow one to prove strong complexity bounds and reduced basis guarantees for traditional reduction algorithms and some of their variants. This is achieved by a careful exploitation of the linear equations and inequalities relating various bit sizes before and after one or more reduction steps.
更新日期:2019-11-01
• Des. Codes Cryptogr. Pub Date : null
Bart Litjens,Sven Polak,Alexander Schrijver
For nonnegative integers q, n, d, let A q ( n , d ) denote the maximum cardinality of a code of length n over an alphabet [q] with q letters and with minimum distance at least d. We consider the following upper bound on A q ( n , d ) . For any k, let C k be the collection of codes of cardinality at most k. Then A q ( n , d ) is at most the maximum value of ∑ v ∈ [ q ] n x ( { v } ) , where x is a function
更新日期:2019-11-01
Contents have been reproduced by permission of the publishers.
down
wechat
bug
|
2020-08-10 05:59:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8048035502433777, "perplexity": 806.2628691502509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738609.73/warc/CC-MAIN-20200810042140-20200810072140-00044.warc.gz"}
|
https://physics.stackexchange.com/questions/439891/how-do-multi-meters-measure-capacitance
|
# How do multi-meters measure capacitance?
Many multimeters have the ability to measure capacitance along with the ability to measure AC voltage, DC voltage, current, resistance ... The only equations I am aware of that determine capacitance at least for parallel plate capacitors are:
$$$$C=\frac{Q}{V} \;\; \& \;\; C=\frac{\epsilon A}{d}$$$$
The only method I can think of is that the meter somehow measures the charge in the plate and then subsequently measures the potential difference. If anyone knows of a more detailed explanation I would be happy to hear it.
• You are basically right. The meter feeds a known current into the capacitor for a known length of time (i.e.it puts a known amount of charge into the cap) and then measures the voltage across the plates. In practice, it charges and discharges the capacitor repeatedly, rather than just making a single measurement. There are other methods, but if you haven't studied AC circuits yet an explanation of how they work would be too long for an answer here. – alephzero Nov 9 '18 at 14:24
|
2019-08-25 17:59:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7155176401138306, "perplexity": 446.7259409490118}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330786.8/warc/CC-MAIN-20190825173827-20190825195827-00070.warc.gz"}
|
https://www.nature.com/articles/s42254-020-0210-8?error=cookies_not_supported&code=a562f93c-fe66-4758-9f58-a76bc6e8663b
|
# Emergent mystery in the Kondo insulator samarium hexaboride
### Subjects
A Publisher Correction to this article was published on 14 August 2020
## Abstract
Samarium hexaboride (SmB6) is an example of a Kondo insulator, in which strong electron correlations cause a band gap to open. SmB6 hosts both a bulk insulating state and a conductive surface state. Within a Fermi-liquid framework, the strongly correlated ground-state electronic structure can be mapped to a simple state resembling a topological insulator. Although uncertainties remain, many experiments provide compelling evidence that the conductive surface states have a topological origin. However, the bulk behaviour is less well understood and some experiments indicate bulk in-gap states. This has inspired the development of many theories that predict the emergence of new bulk quantum phases beyond Landau’s Fermi-liquid model. We review the current progress on understanding both the surface and the bulk states, especially the experimental evidence for each. A mystery centres on the existence of the bulk in-gap states and why they appear in some experiments but not others. Adding to the mystery is why quantum oscillations in SmB6 appear only in magnetization but not in resistivity. We conclude by elaborating on three questions: why SmB6 is worth studying, what can be done to move forwards and what other correlated insulators could give additional insight.
## Key points
• The Kondo insulator samarium hexaboride (SmB6) is a perfect insulator owing to strong electronic correlations. It is the first experimentally confirmed example of a strongly correlated topological material.
• The topological band structure and the consequent metallic surface states are determined and protected by the crystal point symmetry in SmB6. The universal topological predictions are confirmed by spin-resolved and angle-resolved photoemission spectroscopy, although some unresolved issues remain.
• Surface electrical transport is established in SmB6. Spin-dependent experiments both confirm basic topological predictions and indicate the potential for spin-based electronic applications.
• There is a mystery as to whether Landau-level quantum oscillations in SmB6 have a bulk or surface-state origin, and why they appear only in magnetization.
• The mystery calls for a new growth method for SmB6, a broad search for other strongly correlated topological materials and further detailed theoretical pictures for the possible ground states of mixed-valent materials.
## Access options
from\$8.99
All prices are NET prices.
## Change history
• ### 14 August 2020
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
## References
1. 1.
Hasan, M. Z. et al. Colloquium: topological insulators. Rev. Mod. Phys. 82, 3045 (2010).
2. 2.
Qi, X.-L. et al. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057 (2011).
3. 3.
Varma, C. Mixed-valence compounds. Rev. Mod. Phys. 48, 219 (1976).
4. 4.
Menth, A. et al. Magnetic and semiconducting properties of SmB6. Phys. Rev. Lett. 22, 295 (1969).
5. 5.
Dzero, M. et al. Topological Kondo insulators. Phys. Rev. Lett. 104, 106408 (2010). This letter proposed the concept of topological Kondo insulators and theoretically predicted that such a topological state shall emerge in heavy fermion insulators such as SmB 6.
6. 6.
Dzero, M. et al. Topological Kondo insulators. Annu. Rev. Condens. Matter Phys. 7, 249–280 (2016).
7. 7.
Wolgast, S. et al. Low-temperature surface conduction in the Kondo insulator SmB6. Phys. Rev. B 88, 180405 (2013). This paper presents one of the first indications of the surface states in SmB 6.
8. 8.
Zhang, X. et al. Hybridization, inter-ion correlation, and surface states in the Kondo insulator SmB6. Phys. Rev. X 3, 011011 (2013). This paper shows another of the first indications of the surface states in SmB 6.
9. 9.
Kim, D. J. et al. Surface Hall effect and nonlocal transport in SmB6: evidence for surface conduction. Sci. Rep. 3, 3150 (2013). This paper is also one of the first indications of the surface states in SmB 6.
10. 10.
Li, G. et al. Two-dimensional Fermi surfaces in Kondo insulator SmB6. Science 346, 1208–1212 (2014). This study is the first report of the quantum oscillations in magnetization in Kondo insulators.
11. 11.
Rößler, S. et al. Hybridization gap and Fano resonance in SmB6. Proc. Natl Acad. Sci. USA 111, 4798–4802 (2014).
12. 12.
Xu, N. et al. Surface and bulk electronic structure of the strongly correlated system SmB6 and implications for a topological Kondo insulator. Phys. Rev. B 88, 121102 (2013). First of the early reports of surface states observed by ARPES in SmB 6.
13. 13.
Neupane, M. et al. Non-Kondo-like electronic structure in the correlated rare-earth hexaboride YbB6. Phys. Rev. Lett. 114, 016403 (2015).
14. 14.
Song, Q. et al. Spin injection and inverse Edelstein effect in the surface states of topological Kondo insulator SmB6. Nat. Commun. 7, 13485 (2016).
15. 15.
Lee, S. et al. Observation of the superconducting proximity effect in the surface state of SmB6 thin films. Phys. Rev. X 6, 031031 (2016).
16. 16.
Tan, B. S. et al. Unconventional Fermi surface in an insulating state. Science 349, 287–290 (2015).
17. 17.
Knolle, J. et al. Excitons in topological Kondo insulators: theory of thermodynamic and transport anomalies in SmB6. Phys. Rev. Lett. 118, 096604 (2017).
18. 18.
Erten, O. et al. Skyrme insulators: insulators at the brink of superconductivity. Phys. Rev. Lett. 119, 057603 (2017).
19. 19.
Chowdhury, D. et al. Mixed-valence insulators with neutral Fermi surfaces. Nat. Commun. 9, 1766 (2018).
20. 20.
Sodemann, I. et al. Quantum oscillations in insulators with neutral Fermi surfaces. Phys. Rev. B 97, 045152 (2018).
21. 21.
Lai, H.-H. et al. Weyl–Kondo semimetal in heavy-fermion systems. Proc. Natl Acad. Sci. USA 115, 93–97 (2018).
22. 22.
Dresselhaus, M. S. et al. Group Theory: Application to the Physics of Condensed Matter (Springer, 2008).
23. 23.
Bradlyn, B. et al. Topological quantum chemistry. Nature 547, 298–305 (2017).
24. 24.
Po, H. C. et al. Symmetry-based indicators of band topology in the 230 space groups. Nat. Commun. 8, 50 (2017).
25. 25.
Song, Z. et al. Quantitative mappings between symmetry and topology in solids. Nat. Commun. 9, 3530 (2018).
26. 26.
Song, Z. et al. Diagnosis for nonmagnetic topological semimetals in the absence of spin-orbital coupling. Phys. Rev. X 8, 031069 (2018).
27. 27.
Kruthoff, J. et al. Topological classification of crystalline insulators through band structure combinatorics. Phys. Rev. X 7, 041069 (2017).
28. 28.
Zhang, T. et al. Catalogue of topological electronic materials. Nature 566, 475–479 (2019).
29. 29.
Vergniory, M. G. et al. A complete catalogue of high-quality topological materials. Nature 566, 480–485 (2019).
30. 30.
Tang, F. et al. Comprehensive search for topological materials using symmetry indicators. Nature 566, 486–489 (2019).
31. 31.
Fu, L. et al. Topological insulators with inversion symmetry. Phys. Rev. B 76, 045302 (2007).
32. 32.
Fang, C. et al. Bulk topological invariants in noninteracting point group symmetric insulators. Phys. Rev. B 86, 115112 (2012).
33. 33.
Ye, M. et al. Topological crystalline Kondo insulators and universal topological surface states of SmB6. Preprint at arXiv https://arxiv.org/abs/1307.7191 (2013).
34. 34.
Baruselli, P. P. et al. Distinct topological crystalline phases in models for the strongly correlated topological insulator SmB6. Phys. Rev. Lett. 115, 156404 (2015).
35. 35.
Legner, M. et al. Surface-state spin textures and mirror Chern numbers in topological Kondo insulators. Phys. Rev. Lett. 115, 156405 (2015).
36. 36.
Baruselli, P. P. et al. Spin textures on general surfaces of the correlated topological insulator SmB6. Phys. Rev. B 93, 195117 (2016).
37. 37.
Lu, F. et al. Correlated topological insulators with mixed valence. Phys. Rev. Lett. 110, 096401 (2013).
38. 38.
Thunström, P. et al. Multiplet effects in the electronic structure of intermediate-valence compounds. Phys. Rev. B 79, 165104 (2009).
39. 39.
Denlinger, J. D. et al. Temperature dependence of linked gap and surface state evolution in the mixed valent topological insulator SmB6. Preprint at arXiv https://arxiv.org/abs/1312.6637 (2013).
40. 40.
Kim, J. et al. Termination-dependent surface in-gap states in a potential mixed-valent topological insulator: SmB6. Phys. Rev. B 90, 075131 (2014).
41. 41.
Shick, A. B. et al. Racah materials: role of atomic multiplets in intermediate valence systems. Sci. Rep. 5, 15429 (2015).
42. 42.
Peters, R. et al. Coexistence of light and heavy surface states in a topological multiband Kondo insulator. Phys. Rev. B 93, 235159 (2016).
43. 43.
Thunström, P. et al. Topology of SmB6 determined by dynamical mean field theory. Preprint at arXiv https://arxiv.org/abs/1907.03899 (2019).
44. 44.
Werner, J. et al. Dynamically generated edge states in topological Kondo insulators. Phys. Rev. B 89, 245119 (2014).
45. 45.
Knolle, J. et al. Quantum oscillations without a Fermi surface and the anomalous de Haas–van Alphen effect. Phys. Rev. Lett. 115, 146401 (2015).
46. 46.
Shen, H. et al. Quantum oscillation from in-gap states and a non-Hermitian Landau level problem. Phys. Rev. Lett. 121, 026403 (2018).
47. 47.
Harrison, N. Highly asymmetric nodal semimetal in bulk SmB6. Phys. Rev. Lett. 121, 026602 (2018).
48. 48.
Hartstein, M. et al. Fermi surface in the absence of a Fermi liquid in the Kondo insulator SmB6. Nat. Phys. 14, 166–172 (2017).
49. 49.
Laurita, N. J. et al. Anomalous three-dimensional bulk ac conduction within the Kondo gap of SmB6 single crystals. Phys. Rev. B 94, 165154 (2016).
50. 50.
Fuhrman, W. T. et al. Screened moments and extrinsic in-gap states in samarium hexaboride. Nat. Commun. 9, 1539 (2018).
51. 51.
Fuhrman, W. T. et al. Interaction driven subgap spin exciton in the Kondo insulator SmB6. Phys. Rev. Lett. 114, 036401 (2015).
52. 52.
Canfield, P. C. et al. Growth of single crystals from metallic fluxes. Philos. Mag. B 65, 1117–1123 (1992).
53. 53.
Balakrishnan, G. et al. Growth of large single crystals of rare earth hexaborides. J. Cryst. Growth 256, 206–209 (2003).
54. 54.
Ruan, W. et al. Emergence of a coherent in-gap state in the SmB6 Kondo insulator revealed by scanning tunneling spectroscopy. Phys. Rev. Lett. 112, 136401 (2014).
55. 55.
Rößler, S. et al. Surface and electronic structure of SmB through scanning tunneling microscopy. Philos. Mag. 96, 3262–3273 (2016).
56. 56.
Jiao, L. et al. Additional energy scale in SmB6 at low-temperature. Nat. Commun. 7, 13762 (2016).
57. 57.
Miyamachi, T. et al. Evidence for in-gap surface states on the single phase SmB6(001) surface. Sci. Rep. 7, 12837 (2017).
58. 58.
Sun, Z. et al. Observation of a well-defined hybridization gap and in-gap states on the SmB6 (001) surface. Phys. Rev. B 97, 235107 (2018).
59. 59.
Jiao, L. et al. Magnetic and defect probes of the SmB6 surface state. Sci. Adv. 4, eaau4886 (2018).
60. 60.
Pirie, H. et al. Imaging emergent heavy Dirac fermions of a topological Kondo insulator. Nat. Phys. 16, 52–56 (2020).
61. 61.
Herrmann, H. et al. Contrast reversal in scanning tunneling microscopy and its implications for the topological classification of SmB6. Adv. Mater. 32, 1906725 (2020).
62. 62.
Heming, N. et al. Surface properties of SmB6 from x-ray photoelectron spectroscopy. Phys. Rev. B 90, 195128 (2014).
63. 63.
Lutz, P. et al. Valence characterisation of the subsurface region in SmB6. Philos. Mag. 96, 3307–3321 (2016).
64. 64.
He, H. et al. Irreversible proliferation of magnetic moments at cleaved surfaces of the topological Kondo insulator SmB6. Phys. Rev. B 95, 195126 (2017).
65. 65.
Zabolotnyy, V. B. et al. Chemical and valence reconstruction at the surface of SmB6 revealed by means of resonant soft x-ray reflectometry. Phys. Rev. B 97, 205416 (2018).
66. 66.
Paderno, Y. B. et al. Electrical properties of hexaborides of the alkaline- and rare-earth metals at low temperatures. Sov. Powder Metall. Met. Ceram. 8, 921–923 (1969).
67. 67.
Cohen, R. L. et al. Electronic and magnetic structure of SmB6. Phys. Rev. Lett. 24, 383 (1970).
68. 68.
Allen, J. W. et al. Large low-temperature Hall effect and resistivity in mixed-valent SmB6. Phys. Rev. B 20, 4807 (1979). First report of Hall-effect data proving that SmB 6 is an insulator.
69. 69.
Cooley, J. C. et al. SmB6: Kondo insulator or exotic metal? Phys. Rev. Lett. 74, 1629 (1995).
70. 70.
Coleman, P. et al. Theory for the anomalous Hall constant of mixed-valence systems. Phys. Rev. Lett. 55, 414 (1985).
71. 71.
Nickerson, J. C. et al. Physical properties of SmB6. Phys. Rev. B 3, 2030 (1971).
72. 72.
von Molnar, S. et al. in Valence Instabilities: Proceedings of the International Conference (eds Wachter, P. et al.) 385 (North Holland, 1982).
73. 73.
Stankiewicz, J. et al. Physical properties of SmxB6 single crystals. Phys. Rev. B 99, 045138 (2019).
74. 74.
Rakoski, A. et al. Investigation of high-temperature bulk transport characteristics and skew scattering in samarium hexaboride. J. Supercond. Nov. Magn. 33, 265–268 (2020).
75. 75.
Mott, N. F. Rare-earth compounds with mixed valencies. Philos. Mag. 30, 403–416 (1974).
76. 76.
Martin, R. M. et al. Theory of mixed valence: Metals or small gap insulators (invited). J. Appl. Phys. 50, 7561 (1979).
77. 77.
Riseborough, P. S. The electrical resistivity of mixed valence materials due to impurities. Solid State Commun. 38, 79–82 (1981).
78. 78.
Travaglini, G. et al. Intermediate-valent SmB6 and the hybridization model: An optical study. Phys. Rev. B 29, 893 (1984).
79. 79.
Gorshunov, B. et al. Low-energy electrodynamics of SmB6. Phys. Rev. B 59, 1808 (1999).
80. 80.
Nozawa, S. et al. Ultrahigh-resolution and angle-resolved photoemission study of SmB6. J. Phys. Chem. Solids 63, 1223–1226 (2002).
81. 81.
Miyazaki, H. et al. Momentum-dependent hybridization gap and dispersive in-gap state of the Kondo semiconductor SmB6. Phys. Rev. B 86, 075105 (2012).
82. 82.
Flachbart, K. et al. Energy gap of intermediate-valent SmB6 studied by point-contact spectroscopy. Phys. Rev. B 64, 085104 (2001).
83. 83.
Chen, F. et al. Magnetoresistance evidence of a surface state and a field-dependent insulating state in the Kondo insulator SmB6. Phys. Rev. B 91, 205133 (2015).
84. 84.
Yue, Z. et al. Crossover of magnetoresistance from fourfold to twofold symmetry in SmB6 single crystal, a topological Kondo insulator. J. Phys. Soc. Jpn. 84, 044717 (2015).
85. 85.
Wolgast, S. et al. Magnetotransport measurements of the surface states of samarium hexaboride using Corbino structures. Phys. Rev. B 92, 115110 (2015).
86. 86.
Wakeham, N. et al. Surface state reconstruction in ion-damaged SmB6. Phys. Rev. B 91, 085107 (2015).
87. 87.
Lu, H.-Z. et al. Weak localization of bulk channels in topological insulator thin films. Phys. Rev. B 84, 125138 (2011).
88. 88.
Dzero, M. et al. Nonuniversal weak antilocalization effect in cubic topological Kondo insulators. Phys. Rev. B 92, 165415 (2015).
89. 89.
Thomas, S. et al. Weak antilocalization and linear magnetoresistance in the surface state of SmB6. Phys. Rev. B 94, 205114 (2016).
90. 90.
Nakajima, Y. et al. One-dimensional edge state transport in a topological Kondo insulator. Nat. Phys. 12, 213–217 (2016).
91. 91.
Kim, J. et al. Electrical detection of the surface spin polarization of the candidate topological Kondo insulator SmB6. Phys. Rev. B 99, 245148 (2019).
92. 92.
Geurs, J. et al. Anomalously large spin-current voltages on the surface of SmB6. Phys. Rev. B 100, 035435 (2019).
93. 93.
Liu, T. et al. Nontrivial nature and penetration depth of topological surface states in SmB6 thin films. Phys. Rev. Lett. 120, 207206 (2018).
94. 94.
Lee, S. et al. Perfect Andreev reflection due to the Klein paradox in a topological superconducting state. Nature 570, 344–348 (2019).
95. 95.
Yong, J. et al. Robust topological surface state in Kondo insulator SmB6 thin films. Appl. Phys. Lett. 105, 222403 (2014).
96. 96.
Yong, J. et al. Magnetotransport in nanocrystalline SmB6 thin films. AIP Adv. 5, 077144 (2015).
97. 97.
Shishido, H. et al. Semi-epitaxial SmB6 thin films prepared by the molecular beam epitaxy. Phys. Procedia 75, 405–412 (2015).
98. 98.
Shaviv Petrushevsky, M. et al. Signature of surface state coupling in thin films of the topological Kondo insulator SmB6 from anisotropic magnetoresistance. Phys. Rev. B 95, 085112 (2017).
99. 99.
Batkova, M. et al. Electrical properties of SmB6 thin films prepared by pulsed laser deposition from a stoichiometric SmB6 target. J. Alloy. Compd. 744, 821–827 (2018).
100. 100.
Wolgast, S. et al. Conduction through subsurface cracks in bulk topological insulators. Preprint at arXiv https://arxiv.org/abs/1506.08233 (2015).
101. 101.
Eo, Y. S. et al. Comprehensive surface magnetotransport study of SmB6. Phys. Rev. B 101, 155109 (2020).
102. 102.
Syers, P. et al. Tuning bulk and surface conduction in the proposed topological Kondo insulator SmB6. Phys. Rev. Lett. 114, 096601 (2015).
103. 103.
Wolgast, S. et al. Reduction of the low-temperature bulk gap in samarium hexaboride under high magnetic fields. Phys. Rev. B 95, 245112 (2017).
104. 104.
Zhou, Y. et al. Quantum phase transition and destruction of Kondo effect in pressurized SmB6. Sci. Bull. 62, 1439–1444 (2017).
105. 105.
Cooley, J. C. et al. High field gap closure in the Kondo insulator SmB6. J. Supercond. 12, 171–173 (1999).
106. 106.
Kang, B. Y. et al. Magnetic and nonmagnetic doping dependence of the conducting surface states in SmB6. Phys. Rev. B 94, 165102 (2016).
107. 107.
Kim, D. J. et al. Topological surface state in the Kondo insulator samarium hexaboride. Nat. Mater. 13, 466–470 (2014).
108. 108.
Eo, Y. S. et al. Inverted resistance measurements as a method for characterizing the bulk and surface conductivities of three-dimensional topological insulators. Phys. Rev. Appl. 9, 044006 (2018).
109. 109.
Eo, Y. S. et al. Transport gap in SmB6; protected against disorder. Proc. Natl Acad. Sci. USA 116, 12638–12641 (2019). This study reveals the robust bulk insulating gap in SmB 6.
110. 110.
Rakoski, A. et al. Understanding low-temperature bulk transport in samarium hexaboride without relying on in-gap bulk states. Phys. Rev. B 95, 195133 (2017).
111. 111.
Bardeen, J. et al. Theory of superconductivity. Phys. Rev. 108, 1175 (1957).
112. 112.
Skinner, B. Properties of the donor impurity band in mixed valence insulators. Phys. Rev. Mater. 3, 104601 (2019).
113. 113.
Woolf, M. A. et al. Effect of magnetic impurities on the density of states of superconductors. Phys. Rev. 137, A557 (1965).
114. 114.
Cardona, M. et al. (eds) Photoemission in Solids I: General Principles (Springer, 1978).
115. 115.
Bardyszewski, W. et al. A new approach to the theory of photoemission from solids. Phys. Scr. 32, 439 (1985).
116. 116.
Himpsel, F. J. Angle-resolved measurements of the photoemission of electrons in the study of solids. Adv. Phys. 32, 1–51 (1983).
117. 117.
Denlinger, J. D. et al. in Proc. Int. Conf. Strongly Correlated Electron Systems (SCES2013) Vol. 3 JPS Conference Proceedings 017038 (The Physical Society of Japan, 2014).
118. 118.
Smith, N. V. et al. Photoemission linewidths and quasiparticle lifetimes. Phys. Rev. B 47, 15476 (1993).
119. 119.
Jiang, J. et al. Observation of possible topological in-gap surface states in the Kondo insulator SmB6 by photoemission. Nat. Commun. 4, 3010 (2013).
120. 120.
Neupane, M. et al. Surface electronic structure of the topological Kondo-insulator candidate correlated electron system SmB6. Nat. Commun. 4, 2991 (2013).
121. 121.
Zhu, Z. H. et al. Polarity-driven surface metallicity in SmB6. Phys. Rev. Lett. 111, 216402 (2013).
122. 122.
Frantzeskakis, E. et al. Kondo hybridization and the origin of metallic states at the (001) surface of SmB6. Phys. Rev. X 3, 041024 (2013).
123. 123.
Suga, S. et al. Spin-polarized angle-resolved photoelectron spectroscopy of the so-predicted Kondo topological insulator SmB6. J. Phys. Soc. Jpn. 83, 014705 (2014).
124. 124.
Xu, N. et al. Direct observation of the spin texture in SmB6 as evidence of the topological Kondo insulator. Nat. Commun. 5, 4566 (2014). This paper is the first direct observation of spin-textured surface states in SmB 6.
125. 125.
Min, C.-H. et al. Importance of charge fluctuations for the topological phase in SmB6. Phys. Rev. Lett. 112, 226402 (2014).
126. 126.
Xu, N. et al. Exotic Kondo crossover in a wide temperature region in the topological Kondo insulator SmB6 revealed by high-resolution ARPES. Phys. Rev. B 90, 085148 (2014).
127. 127.
Ishida, Y. et al. Emergent photovoltage on SmB6 surface upon bulk-gap evolution revealed by pump-and-probe photoemission spectroscopy. Sci. Rep. 5, 8160 (2015).
128. 128.
Min, C.-H. et al. Two-component analysis of the 4f multiplet of samarium hexaboride. J. Electron Spectrosc. Relat. Phenom. 199, 46–50 (2015).
129. 129.
Ellguth, M. et al. Momentum microscopy of single crystals with detailed surface characterisation. Philos. Mag. 96, 3284–3306 (2016).
130. 130.
Arab, A. et al. Effects of spin excitons on the surface states of SmB6: a photoemission study. Phys. Rev. B 94, 235125 (2016).
131. 131.
Ramankutty, S. V. et al. Comparative study of rare earth hexaborides using high resolution angle-resolved photoemission. J. Electron Spectrosc. Relat. Phenom. 208, 43–50 (2016).
132. 132.
Utsumi, Y. et al. Bulk and surface electronic properties of SmB6: A hard x-ray photoelectron spectroscopy study. Phys. Rev. B 96, 155130 (2017).
133. 133.
Min, C.-H. et al. Matching DMFT calculations with photoemission spectra of heavy fermion insulators: universal properties of the near-gap spectra of SmB6. Sci. Rep. 7, 11980 (2017).
134. 134.
Hlawenka, P. et al. Samarium hexaboride is a trivial surface conductor. Nat. Commun. 9, 517 (2018).
135. 135.
Ohtsubo, Y. et al. Surface electronic structure of SmB6 (111). Physica B Condens. Matter 536, 75–77 (2018).
136. 136.
Ohtsubo, Y. et al. Non-trivial surface states of samarium hexaboride at the (111) surface. Nat. Commun. 10, 2298 (2019). The first direct observation of spin-textured surface states for a SmB 6 surface other than the natural (100) cleavage plane.
137. 137.
Xu, N. et al. Spin- and angle-resolved photoemission on the topological Kondo insulator candidate: SmB6. J. Phys. Condens. Matter 28, 363001 (2016).
138. 138.
Allen, J. W. Foreward for special issue of philosophical magazine on: topological correlated insulators and SmB6. Philos. Mag. 96, 3227–3238 (2016).
139. 139.
Allen, J. W. Corrigendum for Foreword for special issue of philosophical magazine on: topological correlated insulators and SmB6. Philos. Mag. 97, 612 (2017).
140. 140.
Ishizawa, Y. et al. de Haas-van Alphen effect and Fermi surface of LaB6. J. Phys. Soc. Jpn 42, 112–118 (1977).
141. 141.
Martin, R. M. et al. in Valence Fluctuations in Solids Vol. 85 (eds Falicov, L. M. et al.) (North Holland, 1981).
142. 142.
Alexandrov, V. et al. Kondo breakdown in topological Kondo insulators. Phys. Rev. Lett. 114, 177202 (2015).
143. 143.
Dil, J. H. Spin and angle resolved photoemission on non-magnetic low-dimensional systems. J. Phys. Condens. Matter 21, 403001 (2009).
144. 144.
Heinzmann, U. et al. Spin–orbit-induced photoelectron spin polarization in angle-resolved photoemission from both atomic and condensed matter targets. J. Phys. Condens. Matter 24, 173001 (2012).
145. 145.
Xu, N. et al. Surface vs bulk electronic structures of a moderately correlated topological insulator YbB6 revealed by ARPES. Preprint at arXiv https://arxiv.org/abs/1405.0165 (2014).
146. 146.
Kang, C.-J. et al. Electronic structure of YbB6: is it a topological insulator or not? Phys. Rev. Lett. 116, 116401 (2016).
147. 147.
Xiang, Z. et al. Bulk rotational symmetry breaking in Kondo insulator SmB6. Phys. Rev. X 7, 031054 (2017).
148. 148.
Phelan, W. A. et al. Correlation between bulk thermodynamic measurements and the low-temperature-resistance plateau in SmB6. Phys. Rev. X 4, 031012 (2014).
149. 149.
Xu, Y. et al. Bulk Fermi surface of charge-neutral excitations in SmB6 or not: a heat-transport study. Phys. Rev. Lett. 116, 246403 (2016).
150. 150.
Boulanger, M. E. et al. Field-dependent heat transport in the Kondo insulator SmB6: Phonons scattered by magnetic impurities. Phys. Rev. B 97, 245141 (2018).
151. 151.
Schlottmann, P. NMR relaxation in the topological Kondo insulator SmB6. Phys. Rev. B 90, 165127 (2014).
152. 152.
Valentine, M. E. et al. Breakdown of the Kondo insulating state in SmB6 by introducing Sm vacancies. Phys. Rev. B 94, 075102 (2016).
153. 153.
Xiang, Z. et al. Quantum oscillations of electrical resistivity in an insulator. Science 362, 65–69 (2018).
154. 154.
Sato, Y. et al. Unconventional thermal metallic state of charge-neutral fermions in an insulator. Nat. Phys. 15, 954–959 (2019).
155. 155.
Stevens, K. W. H. The low lying states of intermediate-valence SmS. J. Phys. C Solid State Phys. 13, L539 (1980).
156. 156.
Martin, R. M. Fermi-surfae sum rule and its consequences for periodic Kondo and mixed-valence systems. Phys. Rev. Lett. 48, 362 (1982).
157. 157.
Geldenhuys, J. et al. The Luttinger theorem and intermediate valence. J. Phys. C Solid State Phys. 15, 221 (1982).
158. 158.
Kaplan, T. A. et al. Theory of the phase diagram in the p-T plane of SmS. J. Phys. C Solid State Phys. 12, L23 (1979).
159. 159.
Kikoin, K. A. et al. Magnetic excitations in intermediate-valence semiconductors with a singlet ground state. J. Phys. Condens. Matter 7, 307 (1995).
160. 160.
Curnoe, S. et al. 4Electron self-trapping in intermediate-valent SmB6. Phys. Rev. B 61, 15714 (2000).
161. 161.
Abele, M. et al. Topological nonmagnetic impurity states in topological Kondo insulators. Phys. Rev. B 101, 094101 (2020).
162. 162.
Park, W. K. et al. Topological surface states interacting with bulk excitations in the Kondo insulator SmB6 revealed via planar tunneling spectroscopy. Proc. Natl Acad. Sci. USA 113, 6599–6604 (2016).
163. 163.
Thomas, S. M. et al. Quantum oscillations in flux-grown SmB6 with embedded aluminum. Phys. Rev. Lett. 122, 166401 (2019).
164. 164.
Aeppli, G. et al. Comments. Condens. Matter Phys. 16, 155 (1992).
165. 165.
Hagiwara, K. et al. Surface Kondo effect and non-trivial metallic state of the Kondo insulator YbB12. Nat. Commun. 7, 12690 (2016).
166. 166.
Chang, P.-Y. et al. Parity-violating hybridization in heavy Weyl semimetals. Phys. Rev. B 97, 155134 (2018).
167. 167.
Fang, Y. et al. Evidence for a conducting surface ground state in high-quality single crystalline FeSi. Proc. Natl Acad. Sci. USA 115, 8558–8562 (2018).
168. 168.
Xu, K.-J. et al. Metallic surface states in a correlated d-electron topological Kondo insulator candidate FeSb2. Proc. Natl Acad. Sci. USA https://doi.org/10.1073/pnas.2002361117 (2020).
169. 169.
Petrovic, C. et al. Kondo insulator description of spin state transition in FeSb2. Phys. Rev. B 72, 045103 (2005).
## Acknowledgements
L.L. thanks the NSF (award no. DMR-1707620 for high-field electrical transport) and the DOE (award no. DE-SC0020184 for high-field magnetometry), and K.S. thanks the NSF (award no. NSF-EFMA-1741618 for theory) for supporting this work. All authors thank P.F.S. Rosa and Z. Fisk for illuminating discussions and P. Coleman for sharing his valuable suggestions on the demagnetization effect in isotropic paramagnets. All authors, especially J.W.A., thank J.D. Denlinger for generously sharing his extensive knowledge of SmB6 ARPES studies.
## Author information
Authors
### Contributions
All authors have read, discussed and contributed to the writing of the manuscript.
### Corresponding author
Correspondence to Lu Li.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Peer review information
Nature Reviews Physics thanks Piers Coleman, Karol Flachbart and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
### Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Glossary
1 × 2 reconstruction
The freedom from complete bulk coordination can allow surface atoms to spontaneously ‘reconstruct’ to take atomic positions altered from the perfect surface termination of the bulk. In a ‘1 × 2’ or ‘2 × 1’ reconstruction, the alteration doubles the length of one of the two translation vectors of the surface unit cell, which halves the surface Brillouin zone in one direction. For the cubic crystal SmB6, this halving has the effect of rendering the $$\bar{\Gamma }$$ and the $$\bar{{\rm{X}}}$$ points of the un-reconstructed surface Brillouin zone to be equivalent.
Weak antilocalization
A quantum correction to conductance arising from quantum-interference effects in materials with strong spin–orbit interaction.
Edelstein effect
Accumulation of transverse spin due to the flow of an electric current in a thin film or a two-dimensional material with a strong spin–orbit interaction.
Corbino structures
A transport geometry with concentric circular contacts used to measure the electrical conductivity of a material.
Shubnikov–de Haas oscillations
Oscillations observed in transport measurements performed on conductors as a function of magnetic field. The oscillations arise from the formation of Landau levels separated from each other by the cyclotron energy.
Γ-pocket
Fermi-surface pockets refer to the location of conducting surface electrons or holes in reciprocal space. For the (001) surface, a cubic material with a lattice constant a, the Γ-pocket refers to the surface electrons located around the Γ point, (0,0), and the X-pockets refers to the electrons located around X points, (0,π/a) and (π/a,0).
Off-stoichiometry
Stoichiometry refers to the ratio of different atoms forming a crystal. For a stoichiometric material, the ratio is described by a fraction of natural numbers (i.e. 1:6 in the case of SmB6). Off-stoichiometry is a measure of disorder, where the ratio of constituent atoms deviates from the expected fraction of natural numbers.
Lifshitz–Kosevich model
Also known as the Lifshitz–Kosevich formula, a theoretical formula describing the magnetic-field dependence of oscillatory physical properties as a result of the Landau-level quantizations in metals. A consequence of Landau’s Fermi-liquid theory, the Lifshitz–Kosevich model explains particularly well the temperature dependence of the oscillatory magnitude of quantum oscillations.
Auxiliary-boson treatment
Also known as the auxiliary-boson method. Implies the use of any of several theoretical techniques for the study of strongly correlated quantum systems, where quantum dynamics and the effects of strong interactions among quantum particles are characterized through introducing additional (auxiliary) degrees of freedom.
## Rights and permissions
Reprints and Permissions
Li, L., Sun, K., Kurdak, C. et al. Emergent mystery in the Kondo insulator samarium hexaboride. Nat Rev Phys 2, 463–479 (2020). https://doi.org/10.1038/s42254-020-0210-8
• Accepted:
• Published:
• Issue Date:
|
2020-11-29 14:57:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8779460787773132, "perplexity": 5748.990581054887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141198409.43/warc/CC-MAIN-20201129123729-20201129153729-00013.warc.gz"}
|
https://cdn.rawgit.com/ilarischeinin/stanley/0126954/model.html
|
Previous: 2. Process data
Next: 4. Make predictions
## 3. Train models
The summary statistics generated in the previous step are now combined with outcomes of the playoffs series, and then used to train models. To facilitate use of multiple types of statistical models, I am using the caret package, which provides a unified interface for this purpose. And in order to use both CPU cores on my laptop, I am using package doMC.
suppressMessages({
# caret uses package plyr, but whenever dplyr and plyr are both loaded,
# dplyr should be loaded after plyr. Hence plyr is loaded explicitly.
library(plyr)
library(dplyr)
library(caret)
library(doMC)
})
registerDoMC(cores=2)
Define a function to detect the winner of a series. It takes a vector of winners of individual games and returns the overall winner.
series_winner <- function(x) {
counts <- sort(table(x), decreasing=TRUE)
if ((length(counts) == 1) || (counts[1] > counts[2]))
return(names(counts)[1])
stop("I'm sorry, but I couldn't figure out the winner: ",
paste(x, collapse=", "))
}
Define a function to append game statistics to the outcome of a playoff series. Its behavior can be controlled with the argument which between these three options:
• overall: away team's overall performance - home team's overall performance
• single: away team's away performance - home team's home performance
• both: away team's away performance - home team's home performance and away team's home performance - home team's away performance (default, and the one I am using here)
add_stats <- function(games, gamestats, which=c("both", "single", "overall")) {
which <- match.arg(which)
if (which == "overall") {
away <- left_join(games, gamestats[["overall"]],
by=c("season", awayteam="team"))
home <- left_join(games, gamestats[["overall"]],
by=c("season", hometeam="team"))
} else {
away <- left_join(games, gamestats[["away"]],
by=c("season", awayteam="team"))
home <- left_join(games, gamestats[["home"]],
by=c("season", hometeam="team"))
}
if (which == "both") {
away2 <- left_join(games, gamestats[["home"]],
by=c("season", awayteam="team"))
home2 <- left_join(games, gamestats[["away"]],
by=c("season", hometeam="team"))
}
games$goals <- away$goals - home$goals games$shots <- away$shots - home$shots
games$faceoffs <- away$faceoffs - home$faceoffs games$penalties <- away$penalties - home$penalties
games$pp <- away$pp - home$pk games$pk <- away$pk - home$pp
if (which == "both") {
games$goals2 <- away2$goals - home2$goals games$shots2 <- away2$shots - home2$shots
games$faceoffs2 <- away2$faceoffs - home2$faceoffs games$penalties2 <- away2$penalties - home2$penalties
games$pp2 <- away2$pp - home2$pk games$pk2 <- away2$pk - home2$pp
}
games
}
Load data and append summary statistics.
load(file.path("source-data", "nhlscrapr-core.RData"))
rm(list=c("roster.master", "roster.unique"))
games <- tbl_df(games)
games <- games %>%
filter(status != 0, session == "Playoffs", season != "20142015") %>%
mutate(awayscore=as.integer(awayscore), homescore=as.integer(homescore))
playoffs <- games %>%
mutate(winner=ifelse(awayscore > homescore, awayteam, hometeam)) %>%
group_by(season, round=substring(gcode, 3, 3),
series=substring(gcode, 4, 4)) %>%
summarise(awayteam=first(awayteam), hometeam=first(hometeam),
winner=series_winner(winner)) %>%
ungroup() %>%
select(season, awayteam, hometeam, winner) %>%
mutate(winner=as.factor(ifelse(awayteam == winner, "away", "home")))
playoffs <- playoffs %>%
select(-season, -awayteam, -hometeam)
The training data looks like this:
head(playoffs)
winner goals shots faceoffs penalties pp pk goals2 shots2 faceoffs2 penalties2 pp2 pk2
home -0.140 -0.080 0.035 0.098 -0.006 -0.045 -0.077 0.009 0.050 -0.011 0.002 -0.040
home -0.171 -0.092 -0.041 0.108 0.080 0.019 -0.029 -0.001 0.031 0.012 0.013 0.052
home -0.104 -0.075 -0.008 0.149 -0.017 0.005 0.046 0.017 -0.004 -0.020 -0.010 -0.018
home -0.035 -0.133 -0.038 0.089 0.005 -0.029 0.061 -0.053 0.007 -0.028 0.016 0.014
home -0.118 -0.055 -0.023 0.016 0.026 -0.004 -0.025 -0.031 0.029 -0.037 -0.042 -0.051
away -0.105 -0.030 0.009 -0.001 0.014 -0.124 0.100 0.005 0.054 -0.105 0.011 -0.083
Next, define parameters for the model training. I am preprocessing the data with centering and scaling, and then using a 10-fold cross-validation repeated 10 times for parameter tuning. For each parameter, five different values are evaluated via the cross-validation, and the combination with the best overall accuracy chosen. A final model is then fitted with all of the training data and the chosen parameter values.
In order to make the analysis reproducible, sets of random seeds are generated to be used at each point of training.
method <- "repeatedcv"
number <- 10
repeats <- 10
preProcess <- c("center", "scale")
tuneLength <- 5
metric <- "Accuracy"
maxParameters <- 5
seeds <- vector(mode="list", length=repeats*number+1)
for (i in seq_along(seeds))
seeds[[i]] <- (1000*i+1):(1000*i+1+tuneLength^maxParameters)
fitControl <- trainControl(method=method, number=number, repeats=repeats,
seeds=seeds)
Define a function that trains the models.
train_model <- function(method) {
message("Training model: ", method, "...", appendLF=FALSE)
set.seed(7474505)
suppressMessages({ captured <- capture.output({
fit <- train(winner ~ ., data=playoffs,
method=method, trControl=fitControl, preProcess=preProcess,
metric=metric, tuneLength=tuneLength)
})})
message()
fit
}
Define which models we want to include, and then train them. I am including generalized linear model, linear discriminant analysis, neural network, random forests, and a support vector machine with a linear kernel. Each one of them undergoes the cross-validation for parameter tuning, and a final model is built with all of the training data.
Afterwards, save the resulting five final models.
methods <- c("glm", "lda", "nnet", "rf", "svmLinear")
models <- lapply(methods, train_model)
## Training model: glm...
## Training model: lda...
## Training model: nnet...
## Training model: rf...
## Training model: svmLinear...
names(models) <- methods
saveRDS(models, "models.rds")
Next: 4. Make predictions
Previous: 2. Process data
|
2019-03-23 23:10:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4340169131755829, "perplexity": 13083.61859886677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203093.63/warc/CC-MAIN-20190323221914-20190324003914-00221.warc.gz"}
|
https://yutani.rbind.io/post/gghighlight-0-1-0-is-released/
|
# gghighlight 0.1.0 Is Released!
## July 4, 2018 by Hiroaki Yutani
gghighlight 0.1.0 is on CRAN now!
## New features
As I’ve introduced on the previous post, gghighlight now can highlight any Geoms with gghighlight(). Since this function supersedes the previous functions, gghighlight_line() and gghighlight_point() are now deprecated.
## A Vignette
One more small news is, gghighlight got an introductory vignette. This is basically the shorter version of the previous post:
|
2018-11-19 14:14:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40988510847091675, "perplexity": 12605.284311073583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745762.76/warc/CC-MAIN-20181119130208-20181119152208-00333.warc.gz"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.