url | text | date | metadata
---|---|---|---|
https://ncatlab.org/nlab/show/Winter%202017-2018%20seminar%20on%20higher%20structures%20(Higher%20Lie%20Groupoids)/cite
|
# nLab Cite — Winter 2017-2018 seminar on higher structures (Higher Lie Groupoids)
### Overview
We recommend the following .bib file entries for citing the current version of the page Winter 2017-2018 seminar on higher structures (Higher Lie Groupoids). The first is to be used if one does not have unicode support, which is likely the case if one is using bibtex. The second can be used if one does have unicode support. If there are no non-ascii characters in the page name, then the two entries are the same.
In either case, the hyperref package needs to have been imported in one's tex (or sty) file. There are no other dependencies.
The author field has been chosen so that the reference appears in the 'alpha' citation style. Feel free to adjust this.
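For instance, a minimal document using one of the entries below might look like this (the file name `nlab.bib` and the `alpha` style here are illustrative choices, not requirements):

```latex
\documentclass{article}
\usepackage{hyperref} % required for the \url and \href commands in the entry
\begin{document}
See the seminar notes
\cite{nlab:winter_2017-2018_seminar_on_higher_structures_(higher_lie_groupoids)}.
\bibliographystyle{alpha}
\bibliography{nlab} % nlab.bib contains one of the entries below
\end{document}
```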
### Bib entry — Ascii
@misc{nlab:winter_2017-2018_seminar_on_higher_structures_(higher_lie_groupoids),
author = {{nLab authors}},
title = {{{W}}inter 2017-2018 seminar on higher structures ({{H}}igher {{L}}ie {{G}}roupoids)},
howpublished = {\url{https://ncatlab.org/nlab/show/Winter%202017-2018%20seminar%20on%20higher%20structures%20%28Higher%20Lie%20Groupoids%29}},
note = {\href{https://ncatlab.org/nlab/revision/Winter%202017-2018%20seminar%20on%20higher%20structures%20%28Higher%20Lie%20Groupoids%29/4}{Revision 4}},
month = oct,
year = 2022
}
### Bib entry — Unicode
@misc{nlab:winter_2017-2018_seminar_on_higher_structures_(higher_lie_groupoids),
author = {{nLab authors}},
title = {{{W}}inter 2017-2018 seminar on higher structures ({{H}}igher {{L}}ie {{G}}roupoids)},
howpublished = {\url{https://ncatlab.org/nlab/show/Winter%202017-2018%20seminar%20on%20higher%20structures%20%28Higher%20Lie%20Groupoids%29}},
note = {\href{https://ncatlab.org/nlab/revision/Winter%202017-2018%20seminar%20on%20higher%20structures%20%28Higher%20Lie%20Groupoids%29/4}{Revision 4}},
month = oct,
year = 2022
}
### Problems?
Please report any problems with the .bib entries at the nForum.
|
2022-10-01 23:16:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.867445170879364, "perplexity": 7202.847317074616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00778.warc.gz"}
|
https://szczecincafe.com/the-model-provides-an-embedding-or-feature-representation-of-the-data-of-all-taxpayers/
|
## The model provides an embedding or feature representation of the data of all taxpayers
The model provides an embedding or feature representation of the data of all taxpayers. The features are then used to train a separate classifier. The information acquired allows for the clustering of related features in a hidden space.
A deep generative model of both audited and not audited taxpayers' data provides a more robust set of hidden (latent) features. The generative model used is:

$$p(\mathbf{z}) = \mathcal{N}(\mathbf{z}|\mathbf{0},\mathbf{I});\qquad p_\theta(\mathbf{x}|\mathbf{z}) = f(\mathbf{x};\mathbf{z},\boldsymbol\theta), \quad(1)$$

where $f(\mathbf{x};\mathbf{z},\boldsymbol\theta)$ is a Gaussian distribution whose probabilities are formed by non-linear functions (deep neural networks), with parameters $\boldsymbol\theta$, of a set of hidden (latent) variables $\mathbf{z}$.

Approximate samples from the posterior distribution (the probability distribution that represents the updated beliefs about the parameters after the model has seen the data) over the hidden (latent) variables $p(\mathbf{z}|\mathbf{x})$ are used as features to train a classifier, such as a Support Vector Machine (SVM), that predicts whether a material audit yield will result if a taxpayer is audited ($y$). This approach enables the classification to be performed in a lower-dimensional space, since we typically use hidden (latent) variables whose dimensionality is much less than that of the observations. These low-dimensional embeddings should now also be more easily separable, since we make use of independent hidden (latent) Gaussian posteriors whose parameters are formed by a sequence of non-linear transformations of the data.

**Generative semi-supervised model (Model 2):** A probabilistic model describes the data as being generated by a hidden (latent) class variable $y$ in addition to a continuous hidden (latent) variable $\mathbf{z}$. The model used is:

$$p(y) = \mathrm{Cat}(y|\boldsymbol\pi);\quad p(\mathbf{z}) = \mathcal{N}(\mathbf{z}|\mathbf{0},\mathbf{I});\quad p_\theta(\mathbf{x}|y,\mathbf{z}) = f(\mathbf{x};y,\mathbf{z},\boldsymbol\theta), \quad(2)$$

where $\mathrm{Cat}(y|\boldsymbol\pi)$ is the multinomial distribution, the class labels $y$ are treated as hidden (latent) variables if no class label is available, and $\mathbf{z}$ are additional hidden (latent) variables. These hidden (latent) variables are marginally independent. As in Model 1, $f(\mathbf{x};y,\mathbf{z},\boldsymbol\theta)$ is a Gaussian distribution, parameterized by a non-linear function (deep neural networks) of the hidden (latent) variables.

Since most labels $y$ are unobserved, we integrate over the class of any unlabeled data during the inference process, thus performing classification as inference (deriving logical conclusions from premises known or assumed to be true). The inferred posterior distribution is used to obtain any missing labels.

**Stacked generative semi-supervised model:** The two models can be stacked together: Model 1 learns the new hidden (latent) representation $\mathbf{z}_1$ using the generative model, and afterwards the generative semi-supervised Model 2 is trained using $\mathbf{z}_1$ instead of the raw data $\mathbf{x}$.
The outcome is a deep generative model with two layers:

$$p_\theta(\mathbf{x}, y, \mathbf{z}_1, \mathbf{z}_2) = p(y)\,p(\mathbf{z}_2)\,p_\theta(\mathbf{z}_1|y,\mathbf{z}_2)\,p_\theta(\mathbf{x}|\mathbf{z}_1),$$

where the priors $p(y)$ and $p(\mathbf{z}_2)$ equal those of $y$ and $\mathbf{z}$ above, and both $p_\theta(\mathbf{z}_1|y,\mathbf{z}_2)$ and $p_\theta(\mathbf{x}|\mathbf{z}_1)$ are parameterized as deep neural networks. The computation of the exact posterior distribution is not easily managed because of the non-linear, non-conjugate dependencies between the random variables. To allow for easier management and scalable inference and parameter learning, recent advances in variational inference (Kingma and Welling, 2014; Rezende et al., 2014) are utilized: a fixed-form distribution $q_\phi(\mathbf{z}|\mathbf{x})$ with parameters $\phi$ approximates the true posterior distribution $p(\mathbf{z}|\mathbf{x})$.

The variational principle is used to derive a lower bound on the marginal likelihood of the model. Maximizing this variational bound drives the approximate posterior towards minimum difference from the true posterior. The approximate posterior distribution $q_\phi(\cdot)$ is constructed as an inference or recognition model (Dayan, 2000; Kingma and Welling, 2014; Rezende et al., 2014; Stuhlmuller et al., 2013).

With the use of an inference network, a single set of global variational parameters $\phi$ allows for fast inference at both training and testing time, because the cost of inference is amortized: the posterior estimates for all hidden (latent) variables are obtained through the parameters of the inference network. An inference network is introduced for all hidden (latent) variables; the networks are parameterized as deep neural networks, and their outputs construct the parameters of the distribution $q_\phi(\cdot)$.

For the latent-feature discriminative model (Model 1), we use a Gaussian inference network $q_\phi(\mathbf{z}|\mathbf{x})$ for the hidden (latent) variable $\mathbf{z}$. For the generative semi-supervised model (Model 2), we use an inference model for the hidden (latent) variables $\mathbf{z}$ and $y$, which is assumed to have the factorized form $q_\phi(\mathbf{z}, y|\mathbf{x}) = q_\phi(\mathbf{z}|\mathbf{x})\,q_\phi(y|\mathbf{x})$, specified as Gaussian and multinomial distributions respectively:

**Model 1:** $q_\phi(\mathbf{z}|\mathbf{x}) = \mathcal{N}(\mathbf{z}\,|\,\boldsymbol\mu_\phi(\mathbf{x}), \mathrm{diag}(\boldsymbol\sigma^2_\phi(\mathbf{x})))$, (3)

**Model 2:** $q_\phi(\mathbf{z}|y,\mathbf{x}) = \mathcal{N}(\mathbf{z}\,|\,\boldsymbol\mu_\phi(y,\mathbf{x}), \mathrm{diag}(\boldsymbol\sigma^2_\phi(\mathbf{x})))$; $\quad q_\phi(y|\mathbf{x}) = \mathrm{Cat}(y\,|\,\boldsymbol\pi_\phi(\mathbf{x}))$, (4)

where $\boldsymbol\sigma_\phi(\mathbf{x})$ is a vector of standard deviations, $\boldsymbol\pi_\phi(\mathbf{x})$ is a probability vector, and the functions $\boldsymbol\mu_\phi(\mathbf{x})$, $\boldsymbol\sigma_\phi(\mathbf{x})$ and $\boldsymbol\pi_\phi(\mathbf{x})$ are represented as MLPs (multi-layer perceptrons).
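As a rough illustration of the inference network in Eq. (3), here is a minimal sketch of a Gaussian encoder in TensorFlow (the library named in the implementation section below). The layer sizes, names, and architecture are illustrative assumptions only, not the undisclosed production configuration:

```python
import tensorflow as tf

class GaussianEncoder(tf.keras.Model):
    """Inference network q_phi(z|x) = N(z | mu_phi(x), diag(sigma^2_phi(x)))."""

    def __init__(self, latent_dim=10, hidden_units=128):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(hidden_units, activation="relu")
        self.mu = tf.keras.layers.Dense(latent_dim)       # mean mu_phi(x)
        self.log_var = tf.keras.layers.Dense(latent_dim)  # log sigma^2_phi(x), for stability

    def call(self, x):
        h = self.hidden(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        eps = tf.random.normal(tf.shape(mu))
        z = mu + tf.exp(0.5 * log_var) * eps
        return z, mu, log_var
```

The sampled $\mathbf{z}$ (or simply the posterior mean $\boldsymbol\mu_\phi(\mathbf{x})$) would then serve as the low-dimensional features for the downstream SVM classifier described above.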
**Generative semi-supervised model objective:** When the label corresponding to a data point is observed, the variational bound is:

$$\log p_\theta(\mathbf{x},y) \geq \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x},y)}\left[\log p_\theta(\mathbf{x}|y,\mathbf{z}) + \log p_\theta(y) + \log p(\mathbf{z}) - \log q_\phi(\mathbf{z}|\mathbf{x},y)\right] = -\mathcal{L}(\mathbf{x},y), \quad(5)$$

The objective function $\mathcal{L}$ is minimized by resorting to AdaGrad, a gradient-descent-based optimization algorithm. It automatically tunes the learning rate based on its observations of the data's geometry, and it is designed to perform well with datasets that have infrequently occurring features.

## Evaluation

The model was used to analyze taxpayer data from the Cyprus Tax Department database in order to identify taxpayers yielding material additional tax in case a VAT audit is performed. The Deep Generative Models for Semi-supervised Learning solution enables increased efficiency in the audit selection process. Its input includes both audited (supervised) and not audited (unsupervised) taxpayer data. Its output is a collection of labels, each of which corresponds to a taxpayer and takes one of two possible (binary) values: good (1) or bad (0). A taxpayer expected to yield a material tax after audit is classified as good (1).

Nearly all the VAT returns of the last few years were processed in order to generate the features used by the model. These were selected based on the advice of experienced field auditors and data analysts, and on rules from rule-based models. Some of the selected fields were further processed to generate extra fields. The features selected broadly relate to business characteristics such as the location of the business, the type of business, and features from its tax returns. For data preparation, the data was cleaned; for example, we removed taxpayers with little or no tax history, mainly new businesses.

The details of the criteria used to select the features, the feature processing, the newly generated features, the number of features, and the cleansing process cannot be disclosed due to the confidential nature of audit selection. Publication of the features could also compromise future audit selection, as well as being unlawful. For modelling, taxpayer data from the tax department registry (such as economic activity) and from the tax returns ($X$) and the actual audit results ($Y$) appear as pairs

$$(X, Y) = \{(x_1, y_1), \ldots, (x_N, y_N)\},$$

with the $i$-th observation $x_i$ and the corresponding class label $y_i \in \{1, \ldots, L\}$ for the taxpayers audited.
For each observation we infer corresponding hidden (latent) variables, denoted by $\mathbf{z}$. In semi-supervised classification, where both audited and not audited taxpayers are utilized, only a subset of all the taxpayers have corresponding class labels (audit results). We write the empirical distributions over the labelled (audited) and unlabelled (not audited) subsets as $\tilde{p}(x, y)$ and $\tilde{p}(x)$.

For building the model, TensorFlow was used: an open-source software library for high-performance numerical computation, running on top of the Python programming language. The hardware used was a custom-built machine of the Cyprus Tax Department with an NVIDIA 10-series Graphics Processing Unit. Performance was measured using k-fold cross-validation on the training data.

The model was trained on actual tax audit data collected from prior years (supervised) and on actual data of not audited taxpayers (unsupervised). The amount over which an audit yield is classified as material was set following internal guidelines. The same model was used for large, medium, and small taxpayers, irrespective of the economic activity classification (NACE code). The predictions made by the model were compared to the actual audit findings, with an accuracy of 78.4%. The results compared favorably to peer results using Data Mining Based Tax Audit Selection, with a reported accuracy of 51% (Kuo-Wei Hsu et al., 2015).
### The confusion matrix
The confusion matrix in Table 1 represents the classification of the model on the training data set, with rows corresponding to actual outcomes and columns to model predictions. The top-left element indicates correctly classified good cases; the top-right element indicates the tax audits lost (i.e. cases predicted as bad that turned out to be good). The bottom-left element indicates tax audits incorrectly predicted as good, and the bottom-right element indicates correctly predicted bad tax audits.
The confusion matrix indicates that the model is balanced. The actual numbers are not disclosed for confidentiality reasons; instead they are presented as percentages.
|
2019-11-21 09:47:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42215031385421753, "perplexity": 2966.978467162471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00351.warc.gz"}
|
https://2022.congresso.sif.it/talk/519
|
Contributed talk
# $21\,\mathrm{cm}$ with millicharged dark matter
##### Verma S., Katz O., Outmezguine N., Panci P., Redigolo D.
Wednesday 14/09 13:30 - 18:30, Room T - Caterina Scarpellini, III - Astrophysics
We study scenarios where a sub-percent fraction of dark matter (DM) carries a millicharge (mDM). The small energy density of the millicharge component avoids the strong constraints from the CMB but can have interesting effects on the cosmological evolution at and after recombination. For large enough charges and at small relative velocities, non-relativistic effects like Sommerfeld enhancement and bound-state formation significantly impact the behavior of mDM. We systematically compute the scattering rates of mDM with hydrogen and helium and the rate of mDM capture in the interstellar medium. We discuss how these processes can leave an imprint on the global $21\,\mathrm{cm}$ spectrum.
|
2023-03-21 14:37:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.734240710735321, "perplexity": 5326.203049669261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00649.warc.gz"}
|
http://devmaster.net/posts/5004/i-m-glad
|
0
101 Jan 23, 2003 at 12:47
I’m a newbie with OpenGL. How do you draw bitmap sprites in OpenGL?
#### 8 Replies
0
157 Jan 23, 2003 at 15:51
If I understand your question correctly, you are trying to draw 2D images. Right?
Well, you will have to use ortho mode like this:
GLfloat viewportdata[4];                  // x, y, width, height of the viewport
glGetFloatv(GL_VIEWPORT, viewportdata);
glMatrixMode(GL_PROJECTION);
glPushMatrix();                           // Save the current projection matrix
glLoadIdentity();                         // Start from identity before glOrtho
glOrtho(viewportdata[0], viewportdata[2], viewportdata[1], viewportdata[3], -100, 100);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity(); // Reset The Current Modelview Matrix
// Now you are in 2D...
// Draw something....
// Ending ortho mode...
glPopMatrix();                            // Restore the modelview matrix
glMatrixMode( GL_PROJECTION );
glPopMatrix();                            // Restore the projection matrix
glMatrixMode( GL_MODELVIEW );
I hope that helps.
0
101 Jan 24, 2003 at 21:43
After going into ortho mode, you may want to display a texture on say a quad, to represent your sprite.
No code handy to show, as I am at work, but I am sure someone can show you if needed.
0
101 Jan 24, 2003 at 22:11
BTW, ortho mode has no Z axis, so there's no point trying to do translations on it.
0
101 Jan 24, 2003 at 23:56
If you're an OpenGL newbie, check out nehe.gamedev.net; it has probably the best introductory and intermediate OpenGL tutorials on the net.
0
101 Jan 25, 2003 at 16:21
Also check out Game Tutorials
http://www.gametutorials.com
0
101 Jan 25, 2003 at 19:59
Rendering a 2D sprite is as simple (or not simple, depending on how you look at it ;) as rendering a quad with some texture coordinates. I'd suggest looking into getting the OpenGL Programming Guide (3rd ed.), usually referred to as "the red book". I find nehe's tutorials a really awful way to learn OpenGL.
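In code, that textured-quad approach might look roughly like the sketch below (immediate-mode OpenGL of the era; it assumes ortho mode is already set up as in the first reply, that `spriteTex` is a texture object you have already created and loaded, and that `x`, `y`, `w`, `h` describe the sprite rectangle):

```c
/* Draw one sprite as a textured quad in ortho mode. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, spriteTex);  /* spriteTex: assumed preloaded texture */
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);     /* bottom-left  */
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);     /* bottom-right */
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h); /* top-right    */
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h); /* top-left     */
glEnd();
glDisable(GL_TEXTURE_2D);
```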
It's also occurred to me you might be asking about a technique called "billboarding." Billboarding is where a 2D sprite is rendered so it's always facing the player. That topic is a little more involved, so I won't go into it unless that's what you were asking about.
0
101 Jan 31, 2003 at 11:08
glOrtho is just a projection mode. The z axis is specified (otherwise why specify near and far clipping planes with glOrtho?) but of course z translations are not visible for one single object. If you depth-test your objects, you'll see that far objects are occluded by near ones.
Besides using textured quads with glOrtho for sprites, and IFF it's only 2D images you want, there is also glDrawPixels, but it's too rigid to do anything fancy with. It just puts a rectangular pixel area on screen…
0
101 Jan 31, 2003 at 16:21
If you depth-test your objects, you’ll see that far objects are occluded by near ones.
Here are compiled and source-code versions of an OpenGL ortho-mode project I was doing to see how fast I could get OpenGL on low-end graphics cards, as well as to learn OpenGL vertex arrays.
The demo also shows far objects occluded by the card that zooms to and away from the screen; you can also see the mipmap (level-of-detail) effects on the cards as the zooming card goes back to its farthest point.
ftp://sf-games.com/MBTest.zip (source)
ftp://sf-games.com/MBTest2.zip (executable)
|
2014-03-10 13:53:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20002591609954834, "perplexity": 3360.2035656122725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010824553/warc/CC-MAIN-20140305091344-00026-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/50e4c46be4b0e36e35149417
|
## mtmb11 2 years ago What is the simplified form of the expression?
1. mtmb11
2. RainbowsandCats
Dude type the question the link doesn't show on my computer
3. mtmb11
$\sqrt{\frac{1}{121}}$
4. RainbowsandCats
is that a i where the square root is or ....
5. mtmb11
the square root is over the whole equation
6. RainbowsandCats
ohh okay the square root of 121 is 11 so i think it would be $\frac{1}{11}$
7. RainbowsandCats
im not really sure 0.0
8. mtmb11
l0l its ok
|
2015-04-19 17:51:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7196729183197021, "perplexity": 2777.80466303711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639325.91/warc/CC-MAIN-20150417045719-00219-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://openforecast.org/adam/diagnosticsOmitted.html
|
## 14.1 Model specification: Omitted variables
We start with one of the most critical assumptions for models: that the model has not omitted important variables. If it has, then the point forecasts might not be as accurate as we would expect and, in some serious cases, might exhibit substantial bias.
This issue is difficult to diagnose because it is typically challenging to identify what is missing if we do not have it in front of us. The best thing one can do is a mental experiment, trying to compile a list of all theoretically possible variables that would impact the variable of interest. If you manage to come up with such a list and realise that some of the variables are missing, the next step would be to collect the variables themselves or to use their proxies. Proxies are variables that are expected to be correlated with the missing variables and can partially substitute for them. We would need to add the missing information to the model one way or another.
In some cases, we might be able to diagnose this. For example, with our regression model estimated in the previous section, we have a set of variables not included in the model. A simple thing to do is to see if the residuals of our model are correlated with any of the omitted variables. We can either produce scatterplots or calculate measures of association (see Section 5.2 and Chapter 9 of Svetunkov, 2022a) to see if there are relations in the residuals. I will use assoc() and spread() functions from greybox for this:
# Create a new matrix, removing the variables that are already
# in the model
SeatbeltsWithResiduals <-
  cbind(residuals(adamSeat01),  # residuals of the regression model from the previous section (object name assumed)
        Seatbelts[,-c(2,5,6)])
colnames(SeatbeltsWithResiduals)[1] <- "residuals"
greybox::spread(SeatbeltsWithResiduals)
The spread() function automatically detects the type of each variable and, based on that, produces a scatterplot, boxplot, or tableplot between each pair, making the final plot more readable. The plot above tells us that the residuals are correlated with DriversKilled, front, rear, and law, so some of these variables could be added to the model to improve it. VanKilled might have a weak relation with drivers, but judging by its description it does not make sense in the model (it is a part of the drivers variable). Also, I would not add DriversKilled, as it does not seem to drive the number of deaths and injuries (based on our understanding of the problem), but is just correlated with it for obvious reasons (DriversKilled is included in drivers). The variables front and rear should not be included in the model, because they do not explain injuries and deaths of drivers; they are impacted by similar factors and can be considered output variables. So, only law can be safely added to the model, because it makes sense. We can also calculate measures of association between the variables:
greybox::assoc(SeatbeltsWithResiduals)
## Associations:
## values:
## residuals DriversKilled front rear VanKilled law
## residuals 1.0000 0.7826 0.6121 0.4811 0.2751 0.1892
## DriversKilled 0.7826 1.0000 0.7068 0.3534 0.4070 0.3285
## front 0.6121 0.7068 1.0000 0.6202 0.4724 0.5624
## rear 0.4811 0.3534 0.6202 1.0000 0.1218 0.0291
## VanKilled 0.2751 0.4070 0.4724 0.1218 1.0000 0.3949
## law 0.1892 0.3285 0.5624 0.0291 0.3949 1.0000
##
## p-values:
## residuals DriversKilled front rear VanKilled law
## residuals 0.0000 0 0 0.0000 0.0001 0.0086
## DriversKilled 0.0000 0 0 0.0000 0.0000 0.0000
## front 0.0000 0 0 0.0000 0.0000 0.0000
## rear 0.0000 0 0 0.0000 0.0925 0.6890
## VanKilled 0.0001 0 0 0.0925 0.0000 0.0000
## law 0.0086 0 0 0.6890 0.0000 0.0000
##
## types:
## residuals DriversKilled front rear VanKilled law
## residuals "none" "pearson" "pearson" "pearson" "pearson" "mcor"
## DriversKilled "pearson" "none" "pearson" "pearson" "pearson" "mcor"
## front "pearson" "pearson" "none" "pearson" "pearson" "mcor"
## rear "pearson" "pearson" "pearson" "none" "pearson" "mcor"
## VanKilled "pearson" "pearson" "pearson" "pearson" "none" "mcor"
## law "mcor" "mcor" "mcor" "mcor" "mcor" "none"
Technically speaking, the output of this function tells us that all variables are correlated with the residuals and could be considered for the model. This is because the p-values are lower than my favourite significance level of 1%, so we can reject the null hypothesis for each of the tests (which is that the respective parameter is equal to zero in the population). I would still prefer not to add the DriversKilled, VanKilled, front, and rear variables to the model, for the reasons explained earlier. We can construct a new model in the following way:
adamSeat02 <- adam(Seatbelts, "NNN",
formula=drivers~PetrolPrice+kms+law)
The model now fits the data differently (Figure 14.3):
plot(adamSeat02, 7, main="")
How can we know that we have not omitted any important variables in our new model? Unfortunately, there is no good way of knowing that. In general, we should use judgment to decide whether anything else is needed or not. But given that we deal with time series, we can analyse residuals over time and see if there is any structure left (Figure 14.4):
plot(adamSeat02, 8, main="")
The plot in Figure 14.4 shows that the model has not captured seasonality correctly and that there is still some structure left in the residuals. In order to address this, we will add an ETS(A,N,A) element to the model, estimating an ETSX model instead of just a regression:
adamSeat03 <- adam(Seatbelts, "ANA",
formula=drivers~PetrolPrice+kms+law)
We can produce similar plots for model diagnostics (Figure 14.5):
par(mfcol=c(1,2), mar=c(4,4,2,1))
plot(adamSeat03,7:8)
In Figure 14.5, we do not see any apparent missing structure in the data and any obvious omitted variables. We can now move to the next steps of diagnostics.
### References
• Svetunkov, I., 2022a. Statistics for business analytics. https://openforecast.org/sba/ (version: 31.10.2022)
|
2023-01-31 22:48:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5897192358970642, "perplexity": 1227.6995971998601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499891.42/warc/CC-MAIN-20230131222253-20230201012253-00799.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/124649/increased-luminosity-in-stars
|
Increased Luminosity in Stars
I'm trying to construct a solar system, and I'm toying with the idea that the planet capable of sustaining life was initially outside of the habitable zone, but the star's advanced age has caused the luminosity to increase, thereby shifting the HZ further outwards.
How do you calculate the increase of a star's luminosity with age? Is there a fixed correlation across all stars? Does it vary according to type?
• Star is 0.88 Solar masses
• Lifespan is 1.38 Solar lifespan
• Star is still within main sequence, but significantly further along than the Sun, both in total age and in terms of its life cycle.
Any answers would be greatly appreciated.
• There are a couple of issues with this question; stellar evolution is a largely theoretical field at the best of times, and the luminosity at a given wavelength, say visible blue light, can theoretically increase independently of the behaviour of the total stellar luminosity. – Ash Sep 10 '18 at 15:52
• HDE 226868's answer probably has everything you need but you might want to know that your star probably has a K0V Spectral Class, at least according to this classification set – Ash Sep 10 '18 at 17:59
I would recommend looking at pre-existing numerical models, rather than computing your own. This has a couple of advantages:
1. You don't need to use any approximations.
2. Factors like metallicity, rotation and composition have already been taken into account.
3. You just need to look up the values in a table - no calculations required.
4. You can also compare values for a star of the same mass and composition at many points in its life cycle.
I usually point people towards the Geneva grids of stellar models. They're easily accessible and simple to use. Let's say you want to look at stars of approximate solar composition ($X\approx0.76$, $Y\approx0.24$). There's a set of models by Schaller et al. 1992 that should suit your purposes. You probably want Table 43, for $M=0.9M_{\odot}$ - close enough to your star's mass. If you look at the column labels, you can see that Column 2 gives the star's age, in years, and Column 4 gives the logarithm of the star's luminosity, in solar luminosities.
I took the liberty of plotting luminosity against age for this particular model:
Notice the steep increase in luminosity at around $\sim10^{10}$ years, when the star leaves the main sequence and enters the red giant phase. Additionally, I calculated the boundaries of the habitable zone. I assumed that the inner edge corresponds to an effective temperature of 373 K and that the outer edge corresponds to an effective temperature of 273 K - the boiling and freezing points of water.
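As a rough sketch of that habitable-zone calculation (a bare blackbody equilibrium-temperature estimate that ignores albedo and greenhouse effects; the luminosity value below is a placeholder to be read off the model grid):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def hz_edge(luminosity_lsun, temp_kelvin):
    """Distance (in AU) where a blackbody planet has the given
    equilibrium temperature: T^4 = L / (16 * pi * sigma * d^2)."""
    L = luminosity_lsun * L_SUN
    d = math.sqrt(L / (16 * math.pi * SIGMA * temp_kelvin**4))
    return d / AU

L = 1.0  # placeholder: take 10**log(L/Lsun) from the model table instead
print(f"inner edge (373 K): {hz_edge(L, 373):.2f} AU")
print(f"outer edge (273 K): {hz_edge(L, 273):.2f} AU")
```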
If you play around a bit and check out different grids of models, you'll indeed see that factors like mass, metallicity and composition strongly affect the evolution of a star, which is why it's important to have fine enough grids of models in the first place.
• One: why Table 43 and not Table 21? Both are for the same-mass star. Two: is that a total luminosity figure or something slightly more useful? – Ash Sep 10 '18 at 17:25
• @Ash No, they're different stars - Table 21 has a higher metallicity ($Z=0.02$), while Table 43 has $Z=0.001$ - effectively $0$. Picking the latter was something of an arbitrary choice on my part, but I think most worldbuilders usually assume zero metallicity. – HDE 226868 Sep 10 '18 at 17:27
• I don't go for low metallicity but that's me and I have my reasons, low metallicity makes more sense for an older star though so yeah that's the table you want, I missed the difference in Z ratings. Is that peak wavelength or visible luminosity? – Ash Sep 10 '18 at 17:32
• @Ash It should be luminosity across all wavelengths. Stellar structure and evolution codes really have no reason to only list the luminosity in a portion of the spectrum, AFAIK, since the equations of stellar structure require the total luminosity at a given location in the star for the purposes of energy conservation and transport. – HDE 226868 Sep 10 '18 at 17:39
• Thanks for the graph. I tried to look at the data, but... wow, that's one big mass of numbers... – N Francis Sep 11 '18 at 0:23
|
2020-01-22 12:21:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7588606476783752, "perplexity": 787.8792301811496}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606975.49/warc/CC-MAIN-20200122101729-20200122130729-00126.warc.gz"}
|
https://math.stackexchange.com/questions/886909/int-0-infty-frac-sin2xx2x21-dx
|
$\int_0^\infty \frac{\sin^2(x)}{x^2(x^2+1)} dx$ =?
After reading articles on differentiation under the integral sign, I hit this post from MIT where, after introducing this powerful tool, it challenges the reader to evaluate
$$\int_0^\infty \frac{\sin^2(x)}{x^2(x^2+1)} dx$$
Obviously I have no clue where to start. Could any one give a hint?
• I think you can simplify first the integral using partial fractions since $\frac{1}{x^2 \left(x^2+1\right)}=\frac{1}{x^2}-\frac{1}{x^2+1}$. The first integral is simple; the second one is more problematic to me. Good luck. – Claude Leibovici Aug 4 '14 at 7:18
• The definite integral of $\frac{1}{x^2+1}$ is simple: it is $\arctan(x)$. Remember that $\arctan(0)=0$ and $\arctan(\infty)=\pi/2$. – Steven Van Geluwe Aug 4 '14 at 7:27
• This question is the same as the problem in this link math.stackexchange.com/questions/691798/… – xpaul Aug 4 '14 at 23:46
This is a possible way to evaluate the integral. Partial fraction decomposition and the double angle formula yield $$\int^\infty_0\frac{\sin^2{x}}{x^2(1+x^2)}dx=\frac{1}{2}\int^\infty_0\frac{1-\cos{2x}}{x^2}dx-\frac{1}{2}\int^\infty_0\frac{1-\cos{2x}}{1+x^2}dx$$ The first integral can be evaluated in many ways, differentiation under the integral sign is one of them. I prefer to proceed with a simple fact that follows from the definition of the gamma function. $$\int^{\infty}_0t^{n-1}e^{-xt} \ dt=\frac{\Gamma(n)}{x^n}$$ Hence the first integral is \begin{align} \frac{1}{2}\int^\infty_0\frac{1-\cos{2x}}{x^2}dx &=\frac{1}{2}\int^\infty_0(1-\cos{2x})\int^\infty_0te^{-xt} \ dt \ dx\\ &=\frac{1}{2}\int^\infty_0t\int^\infty_0e^{-xt}(1-\cos{2x}) \ dx \ dt\\ &=\int^\infty_0\left(\int^\infty_0e^{-xt}\sin{2x} \ dx\right)dt\\ &=\int^\infty_0\frac{2}{t^2+4}dt\\ &=\frac{\pi}{2}\\ \end{align} The second integral can be broken up further and evaluated using the residue theorem. \begin{align} \frac{1}{2}\int^\infty_0\frac{1-\cos{2x}}{1+x^2}dx &=\frac{\pi}{4}-\frac{1}{4}\Re\oint_{\Gamma}\frac{e^{2iz}}{1+z^2}dz\\ &=\frac{\pi}{4}-\frac{1}{2}\Re\left(\pi i\operatorname{Res}(f,i)\right)\\ &=\frac{\pi}{4}-\frac{1}{2}\Re\left(\pi i\frac{e^{-2}}{2i}\right)\\ &=\frac{\pi}{4}-\frac{\pi}{4e^2} \end{align} Hence $$\int^\infty_0\frac{\sin^2{x}}{x^2(1+x^2)}dx=\frac{\pi}{4}\left(1+e^{-2}\right)$$
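As a quick numerical sanity check of this closed form (a sketch using scipy's adaptive quadrature, not part of the derivation):

```python
import numpy as np
from scipy.integrate import quad

# Integrand decays like x**-4, so the improper integral converges quickly.
value, err = quad(lambda x: np.sin(x)**2 / (x**2 * (x**2 + 1)), 0, np.inf)
closed_form = np.pi / 4 * (1 + np.exp(-2))
print(value, closed_form)  # both approximately 0.8917
```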
• thanks a lot and.. the trick with $\int^{\infty}_0t^{n-1}e^{-xt} \ dt=\frac{\Gamma(n)}{x^n}$ is brilliant! i only saw $n=1$ case before, never realized that could utilize $n>1$! – athos Aug 4 '14 at 10:35
Could any one give a hint?
Partial fraction decomposition, together with the fact that
• $\displaystyle\int_0^\infty\frac{\sin^2x}{x^2}dx=\frac\pi2$
• $\sin^2x=\dfrac{1-\cos2x}2$
• $\displaystyle\int_0^\infty\frac{\cos x}{x^2+a^2}dx=\frac\pi{2a~e^a}$
|
2019-06-17 12:35:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855247735977173, "perplexity": 691.3001209810684}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998475.92/warc/CC-MAIN-20190617123027-20190617145027-00143.warc.gz"}
|
http://math.stackexchange.com/questions/234816/why-infinite-sums-of-positive-real-constants-definitely-yield-infinite
|
# Why do infinite sums of a positive real constant necessarily yield infinity?
According to the last step in the proof of the non-measurability of the Vitali set, summing infinitely many copies of the constant $\lambda(V)$ yields either zero or infinity, according to whether the constant is zero or positive. It sounds pleasing to my ear, but I still have a bit of doubt about the reason why the sum of infinitely many copies of a positive real constant would definitely yield infinity.
Actually, $$\left< \sum_{i=0}^n c \right>_{n=0}^{\infty}$$ is indeed a strictly increasing sequence when $c>0$. However, this fact alone seems unable to guarantee that $\sum_{i=0}^\infty c=\infty$.
Take an example from infinite products: $1,2,4,8,16,\ldots$ is also a strictly increasing sequence, but $$\prod_{i=0}^{\infty}2$$ can yield $0$ in some cases.
Moreover, $9,99,999,\ldots$ is also a strictly increasing sequence, but in some theories $\ldots999$ is not infinite but $-1$.
So on what basis can one conclude that $$\sum_{i=0}^\infty \lambda(V)$$ is necessarily not between $1$ and $3$?
-
Where is the limit of $9, 99, 999,\dots$ equal to $-1$? – Michael Greinecker Nov 11 '12 at 12:42
@MichaelGreinecker $...999$ sometimes be dealt as $-1$ (in 10-adic), not the limit of $9,99,999...$ – Popopo Nov 11 '12 at 12:47
@Popopo: I think that the problem is that you are mixing contexts. The real numbers are not $10$-adic numbers, and they are not cardinal numbers, and they are not anything except the real numbers. If you agree that measure theory is done in the context of real numbers you cannot give "counterexamples" from a different context. This is like saying "Oh, religious Jews don't eat pork, but Christians do. So Judaism is inconsistent", within the real numbers the products and sum you mentioned are infinite because they are limits of strictly increasing sequences and therefore larger than any number – Asaf Karagila Nov 11 '12 at 12:57
@AsafKaragila: Just in case Popopo remembers the minor bits of your comments while ignoring (as (s)he seems to have done) the main points, perhaps clarify that not every limit of a strictly increasing sequence of reals is larger than any number, i.e., some increasing sequences converge. – Andreas Blass Nov 11 '12 at 14:02
@Popopo: Yes. Numbers are mythical creatures if you prefer to think about them that way. But this makes even more sense. Real numbers are pixies and cardinals are trolls. The fact that there are pixies and trolls which behave similarly to unicorns (natural numbers) does not mean that pixies are trolls, nor that unicorns are either. – Asaf Karagila Nov 11 '12 at 14:13
What is $\sum_{n=0}^\infty x_n$? It is $\lim_{k\to\infty}\sum_{n=0}^k x_n$. This is a limit of real numbers.
Suppose that $x_n=1$ for all $n$; then what is this limit? The partial sum is $\sum_{n=0}^k 1 = k+1$, and therefore the sum is the limit $\lim_{k\to\infty}(k+1)$, which is $\infty$.
Replacing $1$ by any other positive constant has the same effect.
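Spelled out as a short worked step (using the Archimedean property of the reals):
$$\sum_{n=0}^{\infty} c \;=\; \lim_{k\to\infty}\,(k+1)\,c \;=\; \infty \qquad (c>0),$$
since for any $M\in\mathbb{R}$ the partial sum $(k+1)c$ exceeds $M$ as soon as $k > M/c - 1$.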
It seems that the questions stems from mixing up contexts. One should never do that in mathematics. Real numbers are real numbers, they are not $10$-adic, they are not ordinals and they are not cardinals.
True, the natural numbers can be represented as cardinals, ordinals, real, $10$-adic numbers, and more. However each system carries out its own rules. In particular in the behavior of infinitary operations such as infinite sums and multiplications.
Even cardinals and ordinals, which are often thought as the same, behave differently with respect to infinitary multiplications. Let alone real numbers and cardinals, or real numbers and ordinals.
In measure theory we work with real numbers which means that the sums taken are sums of real numbers, and when taking infinitary sums of real numbers one apply the definitions for sums of real numbers.
For example, in the real numbers I am allowed to do this: $$\frac12\sum_{i=0}^\infty 1=\sum_{i=0}^\infty\frac12$$ whereas such a summation of ordinals or cardinals cannot be done, because the object $\frac12$ is neither an ordinal nor a cardinal number.
-
What about $\prod_{n=0}^{\infty}x_n$ when $x_n=2$ for all $n \in \omega$? $\lim_{k \to \infty}\prod_{n=0}^{k}x_n=\omega$ but $\prod_{n=0}^{\infty}x_n=0$ in some cases... – Popopo Nov 11 '12 at 12:40
@Popopo: No. The product of real numbers is not the product of cardinals. Furthermore $\infty$ is not $\omega$. Also the product of countably many copies of $2$ is never countable (if not empty). – Asaf Karagila Nov 11 '12 at 12:42
But does $\lim_{k \to \omega}\prod_{n=0}^{k}x_n=\sup\{\prod_{n=0}^{k}x_n|k < \omega\}=\sup\{2,4,8,16,...\}=\omega$ hold? – Popopo Nov 11 '12 at 12:59
@Popopo: No!! You are taking a product of cardinals, not a product of real numbers! There can never be any real number corresponding to $\omega$. Not even a hyperreal number can correspond to $\omega$!! Read my comment to your question. Mixing up contexts is bad. This is like saying that a regular graph and a regular cardinal have something in common because both are called regular. – Asaf Karagila Nov 11 '12 at 13:01
@Popopo: And as cardinals go, the product is not continuous. This means that the limit of finite products is not the product over the infinite index. On the other hand, in ordinal arithmetic this is true, and there $2^\omega=\omega$. But those are ordinals and cardinals and neither are real numbers. – Asaf Karagila Nov 11 '12 at 13:03
|
2015-09-05 17:03:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9181510210037231, "perplexity": 435.56581696233786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646313806.98/warc/CC-MAIN-20150827033153-00193-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-do-you-translate-the-graph-of-y-cosx-3
|
# How do you translate the graph of y=cosx+3?
Aug 20, 2017
You have to translate the graph of $y = \cos x$ up by $3$ units. See the explanation.
#### Explanation:
The general rule for translating graphs of functions is: to get the graph of the function
$$f \left(x - a\right) + b$$
from the graph of $f \left(x\right)$, you have to translate it by the vector
$$\vec{u} = \left[a , b\right]$$
In the given example the base function is
$$f \left(x\right) = \cos x$$
The resulting function has no $a$ coefficient (nothing is added to or subtracted from $x$), but it does have a $b$ coefficient, because the value $3$ is added to the whole function. The translation vector is therefore
$$\vec{u} = \left[0 , 3\right]$$
The vector's $y$ coordinate is $3$, so the graph is moved $3$ units up along the Y axis.
|
2019-05-22 19:11:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7530604004859924, "perplexity": 578.4573792000452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256948.48/warc/CC-MAIN-20190522183240-20190522205240-00151.warc.gz"}
|
http://eprint.las.ac.cn/abs/201609.00811
|
# Inclusive and exclusive measurements of $B$ decays to $\chi_{c1}$ and $\chi_{c2}$ at Belle
Submit Time: 2016-09-12
Institute: Belle Collaboration
## Abstracts
We report inclusive and exclusive measurements for $\chi_{c1}$ and $\chi_{c2}$ production in $B$ decays. We measure $\mathcal{B}(B\to\chi_{c1}X) = (3.03\pm0.05(\mathrm{stat})\pm0.24(\mathrm{syst}))\times10^{-3}$ and $\mathcal{B}(B\to\chi_{c2}X) = (0.70\pm0.06(\mathrm{stat})\pm0.10(\mathrm{syst}))\times10^{-3}$. For the first time, $\chi_{c2}$ production in exclusive $B$ decays in the modes $B^0\to\chi_{c2}\pi^-K^+$ and $B^+\to\chi_{c2}\pi^+\pi^-K^+$ has been observed, along with first evidence for the $B^+\to\chi_{c2}\pi^+K^0_S$ decay mode. For $\chi_{c1}$ production, we report the first observation in the $B^+\to\chi_{c1}\pi^+\pi^-K^+$, $B^0\to\chi_{c1}\pi^+\pi^-K^0_S$ and $B^0\to\chi_{c1}\pi^0\pi^-K^+$ decay modes. Using these decay modes, we observe a difference in the production mechanism of $\chi_{c2}$ in comparison to $\chi_{c1}$ in $B$ decays. In addition, we report searches for $X(3872)$ and $\chi_{c1}(2P)$ in the $B^+\to(\chi_{c1}\pi^+\pi^-)K^+$ decay mode. The reported results use $772\times10^6$ $B\bar{B}$ events collected at the $\Upsilon(4S)$ resonance with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider.
Keywords: Belle; exclusive;
Recommended reference: Bhardwaj, V. and others. (2016). Inclusive and exclusive measurements of $B$ decays to $\chi_{c1}$ and $\chi_{c2}$ at Belle. [ChinaXiv:201609.00811]
|
2021-04-10 15:17:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8308822512626648, "perplexity": 12771.356560433625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057142.4/warc/CC-MAIN-20210410134715-20210410164715-00176.warc.gz"}
|
https://physics.stackexchange.com/questions/256960/why-does-stagnation-pressure-reduce-across-a-normal-shock
|
# Why does stagnation pressure reduce across a normal shock?
I am seeking an explanation for this graph where the subscript "1" refers to the supersonic region and the subscript "2" refers to the subsonic region present beyond a normal shock.
The static pressure curve shows an increasing trend. Shouldn't the same be applicable to the stagnation pressure $P_0$?
Is the entropy generation associated with the stagnating of the kinetic energy term so high?
• Can you describe the figure and terms a little more? What are the vertical axes, for instance? What are $(P_{o})_{1}$ and $P_{1}$? Is this for a hydrodynamic, collision-mediated shock? – honeste_vivere May 20 '16 at 17:41
• P1 stands for the static pressure in region 1 (before the shock) and (P0)1 stands for the stagnation pressure in the same region. The y-axis show the value of the ratios depicted in the graph. This is not a hydrodynamic case, its for the compressible flow of an ideal gas. – DBTKNL May 23 '16 at 7:35
This can be concluded by reviewing the Gibbs equation for the upstream and downstream stagnation conditions: $$T_0\,ds_0=dh_0-\frac 1{\rho_0}dP_0$$
Because the flow across the shock wave is adiabatic, $dh_0=0$.
The Gibbs equation then becomes (using the ideal-gas relation $\rho_0 T_0 = P_0/R$) $$ds_0=-\frac 1{\rho_0T_0}dP_0=-\frac {R}{P_0}dP_0$$
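Integrating this between the upstream and downstream stagnation states (a standard step, assuming a calorically perfect gas) makes the conclusion explicit:
$$s_{02}-s_{01} = -R\ln\frac{P_{02}}{P_{01}} \quad\Longrightarrow\quad \frac{P_{02}}{P_{01}} = \exp\!\left(-\frac{s_{02}-s_{01}}{R}\right)$$
Since the shock is irreversible, $s_{02}>s_{01}$, and therefore $P_{02}<P_{01}$: the stagnation pressure must drop across the shock even though the static pressure rises.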
|
2020-03-29 04:04:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6059629917144775, "perplexity": 465.86630846330337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493684.2/warc/CC-MAIN-20200329015008-20200329045008-00546.warc.gz"}
|
https://projecteuclid.org/euclid.acta/1485802454
|
## Acta Mathematica
### Padé approximants for functions with branch points — strong asymptotics of Nuttall–Stahl polynomials
#### Abstract
Let f be a germ of an analytic function at infinity that can be analytically continued along any path in the complex plane deprived of a finite set of points, ${f \in \mathcal{A}(\bar{\mathbb{C}} \setminus A)}$, ${\# A< \infty}$. J. Nuttall has put forward the important relation between the maximal domain of f where the function has a single-valued branch and the domain of convergence of the diagonal Padé approximants for f. The Padé approximants, which are rational functions and thus single-valued, approximate a holomorphic branch of f in the domain of their convergence. At the same time most of their poles tend to the boundary of the domain of convergence and the support of their limiting distribution models the system of cuts that makes the function f single-valued. Nuttall has conjectured (and proved for many important special cases) that this system of cuts has minimal logarithmic capacity among all other systems converting the function f to a single-valued branch. Thus the domain of convergence corresponds to the maximal (in the sense of minimal boundary) domain of single-valued holomorphy for the analytic function ${f\in\mathcal{A}(\bar{\mathbb{C}} \setminus A)}$. The complete proof of Nuttall’s conjecture (even in a more general setting where the set A has logarithmic capacity 0) was obtained by H. Stahl. In this work, we derive strong asymptotics for the denominators of the diagonal Padé approximants for this problem in a rather general setting. We assume that A is a finite set of branch points of f which have the algebro-logarithmic character and which are placed in a generic position. The last restriction means that we exclude from our consideration some degenerated “constellations” of the branch points.
#### Article information
Source
Acta Math., Volume 215, Number 2 (2015), 217-280.
Dates
First available in Project Euclid: 30 January 2017
https://projecteuclid.org/euclid.acta/1485802454
Digital Object Identifier
doi:10.1007/s11511-016-0133-5
Mathematical Reviews number (MathSciNet)
MR3455234
Zentralblatt MATH identifier
0863.94012
#### Citation
Aptekarev, Alexander I.; Yattselev, Maxim L. Padé approximants for functions with branch points — strong asymptotics of Nuttall–Stahl polynomials. Acta Math. 215 (2015), no. 2, 217--280. doi:10.1007/s11511-016-0133-5. https://projecteuclid.org/euclid.acta/1485802454
#### References
• Abramowitz, M. & Stegun, I. A., Handbook of Mathematical Functions. Dover, New York, 1968.
• Akhiezer, N. I., Elements of the Theory of Elliptic Functions. Translations of Mathematical Monographs, 79. Amer. Math. Soc., Providence, RI, 1990.
• Aptekarev, A. I., Sharp constants for rational approximations of analytic functions. Mat. Sb., 193 (2002), 3–72 (Russian); English translation in Sb. Math., 193 (2002), 1–72.
• Aptekarev, A. I., Analysis of the matrix Riemann–Hilbert problems for the case of higher genus and asymptotics of polynomials orthogonal on a system of intervals. Preprints of Keldysh Institute of Applied Mathematics, Russian Acad. Sci., Moscow, 2008.
• Aptekarev, A. I. & Lysov, V. G., Systems of Markov functions generated by graphs and the asymptotics of their Hermite–Padé approximants. Mat. Sb., 201 (2010), 29–78 (Russian); English translation in Sb. Math., 201 (2010), 183–234.
• Aptekarev A. I., Van Assche W.: Scalar and matrix Riemann–Hilbert approach to the strong asymptotics of Padé approximants and complex orthogonal polynomials with varying weight. J. Approx. Theory 129, 129–166 (2004)
• Baik J., Deift P., McLaughlin K. T.-R., Miller P., Zhou X.: Optimal tail estimates for directed last passage site percolation with geometric random variables. Adv. Theor. Math. Phys. 5, 1207–1250 (2001)
• Baker, G. A. J & Graves-Morris, P., Padé Approximants. Encyclopedia of Mathematics and its Applications, 59. Cambridge Univ. Press, Cambridge, 1996.
• Baratchart L., Stahl H., Yattselev M.: Weighted extremal domains and best rational approximation. Adv. Math. 229, 357–407 (2012)
• Baratchart L., Yattselev M.: Convergent interpolation to Cauchy integrals over analytic arcs. Found. Comput. Math. 9, 675–715 (2009)
• Baratchart L., Yattselev M.: Convergent interpolation to Cauchy integrals over analytic arcs with Jacobi-type weights. Int. Math. Res. Not. 22, 4211–4275 (2010)
• Baratchart L., Yattselev M.: Padé approximants to certain elliptic-type functions. J. Anal. Math. 121, 31–86 (2013)
• Bertola M., Mo M. Y.: Commuting difference operators, spinor bundles and the asymptotics of orthogonal polynomials with respect to varying complex weights. Adv. Math. 220, 154–218 (2009)
• Deift, P., Orthogonal Polynomials and Random Matrices: A Riemann–Hilbert Approach. Courant Lecture Notes in Mathematics, 3. Amer. Math. Soc., Providence, RI, 1999.
• Deift P., Kriecherbauer T., McLaughlin K. D. T.-R., Venakides S., Zhou X.: Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory. Comm. Pure Appl. Math. 52, 1335–1425 (1999)
• Deift P., Kriecherbauer T., McLaughlin K. D. T.-R., Venakides S., Zhou X.: Strong asymptotics of orthogonal polynomials with respect to exponential weights. Comm. Pure Appl. Math. 52, 1491–1552 (1999)
• Deift P., Zhou X.: A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the MKdV equation. Ann. of Math. 137, 295–368 (1993)
• Dieudonné, J., Foundations of Modern Analysis. Pure and Applied Mathematics, 10-I. Academic Press, New York–London, 1969.
• Dumas, S., Sur le développement des fonctions elliptiques en fractions continues. Ph.D. Thesis, Universität Zürich, Zürich, 1908.
• Fokas A. S., Its A. R., Kitaev A. V.: Discrete Painlevé equations and their appearance in quantum gravity. Comm. Math. Phys. 142, 313–344 (1991)
• Fokas A. S., Its A. R., Kitaev A. V.: The isomonodromy approach to matrix models in 2D quantum gravity. Comm. Math. Phys. 147, 395–430 (1992)
• Gakhov, F. D., Boundary Value Problems. Dover, New York, 1990.
• Gammel, J. L. & Nuttall, J., Note on generalized Jacobi polynomials, in The Riemann Problem, Complete Integrability and Arithmetic Applications (Bures-sur-Yvette/New York, 1979/1980), Lecture Notes in Math., 925, pp. 258–270. Springer, Berlin–New York, 1982.
• Goluzin, G.M., Geometric Theory of Functions of a Complex Variable. Translations of Mathematical Monographs, 26. Amer. Math. Soc., Providence, RI, 1969.
• Gonchar, A.A., The rate of rational approximation of certain analytic functions. Mat. Sb., 105(147) (1978), 147–163 (Russian); English translation in Math. USSR–Sb., 34 (1978), 164–179.
• Gonchar, A. A. & López Lagomasino, G., Markov’s theorem for multipoint Padé approximants. Mat. Sb., 105(147) (1978), 512–524 (Russian); English translation in Math. USSR–Sb., 34 (1978), 449–459.
• Gonchar, A.A. & Rakhmanov, E. A., Equilibrium distributions and the rate of rational approximation of analytic functions. Mat. Sb., 134(176) (1987), 306–352 (Russian); English translation in Math. USSR–Sb., 62 (1989), 305–348.
• Kamvissis, S., McLaughlin, K. D. T.-R. & Miller, P. D., Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation. Annals of Mathematics Studies, 154. Princeton Univ. Press, Princeton, NJ, 2003.
• Kamvissis, S. & Rakhmanov, E. A., Existence and regularity for an energy maximization problem in two dimensions. J. Math. Phys., 46 (2005), 083505, 24 pp.
• Kriecherbauer T., McLaughlin K. D. T.-R.: Strong asymptotics of polynomials orthogonal with respect to Freud weights. Int. Math. Res. Not. 6, 299–333 (1999)
• Kuijlaars A. B. J., Martínez-Finkelshtein A.: Strong asymptotics for Jacobi polynomials with varying nonstandard parameters. J. Anal. Math. 94, 195–234 (2004)
• Kuijlaars A. B. J., McLaughlin K. T. R., Van Assche W., Vanlessen M.: The Riemann–Hilbert approach to strong asymptotics for orthogonal polynomials on [-1, 1]. Adv. Math. 188, 337–398 (2004)
• Kuijlaars A. B. J., McLaughlin K. D. T.-R.: Riemann–Hilbert analysis for Laguerre polynomials with large negative parameter. Comput. Methods Funct. Theory 1, 205–233 (2001)
• Kuijlaars A. B. J., McLaughlin K. D. T.-R.: Asymptotic zero behavior of Laguerre polynomials with negative parameter. Constr. Approx. 20, 497–523 (2004)
• Martínez-Finkelshtein A., Rakhmanov E. A.: Critical measures, quadratic differentials, and weak limits of zeros of Stieltjes polynomials. Comm. Math. Phys. 302, 53–111 (2011)
• Martínez-Finkelshtein, A., Rakhmanov, E. A. & Suetin, S.P., Heine, Hilbert, Padé, Riemann, and Stieltjes: John Nuttall’s work 25 years later, in Recent Advances in Orthogonal Polynomials, Special Functions, and their Applications, Contemp. Math., 578, pp. 165–193. Amer. Math. Soc., Providence, RI, 2012.
• Nikishin E. M.: On the convergence of diagonal Padé approximants to certain functions. Math. USSR Sb. 30, 249–260 (1976)
• Nikishin, E. M. & Sorokin, V. N., Rational Approximations and Orthogonality. Translations of Mathematical Monographs, 92. Amer. Math. Soc., Providence, RI, 1991.
• Nuttall, J., The convergence of Padé approximants to functions with branch points, in Padé and Rational Approximation (Tampa, FL, 1976), pp. 101–109. Academic Press, New York, 1977.
• Nuttall, J., Sets of minimum capacity, Padé approximants and the bubble problem, in Bifurcation Phenomena in Mathematical Physics and Related Topics (Dordrecht, 1980), pp. 185–201. Reidel, 1980.
• Nuttall J.: Asymptotics of diagonal Hermite–Padé polynomials. J. Approx. Theory 42, 299–386 (1984)
• Nuttall J.: Asymptotics of generalized Jacobi polynomials. Constr. Approx. 2, 59–77 (1986)
• Nuttall J.: Padé polynomial asymptotics from a singular integral equation. Constr. Approx. 6, 157–166 (1990)
• Nuttall J., Singh S. R.: Orthogonal polynomials and Padé approximants associated with a system of arcs. J. Approx. Theory 21, 1–42 (1977)
• Padé H.: Sur la représentation approchée d'une fonction par des fractions rationnelles. Ann. Sci. École Norm. Sup. 9, 3–93 (1892)
• Perevoznikova, E. A. & Rakhmanov, E. A., Variation of the equilibrium energy and S-property of compacta of minimal capacity. Manuscript, 1994.
• Pommerenke, C., Univalent Functions. Studia Mathematica/Mathematische Lehrbücher, 25. Vandenhoeck & Ruprecht, Göttingen, 1975.
• Ransford, T., Potential Theory in the Complex Plane. London Mathematical Society Student Texts, 28. Cambridge Univ. Press, Cambridge, 1995.
• Saff, E. B. & Totik, V., Logarithmic Potentials with External Fields. Grundlehren der Mathematischen Wissenschaften, 316. Springer, Berlin–Heidelberg, 1997.
• Stahl, H., Extremal domains associated with an analytic function. I, II. Complex Variables Theory Appl., 4 (1985), 311–324, 325–338.
• Stahl H.: The structure of extremal domains associated with an analytic function. Complex Variables Theory Appl. 4, 339–354 (1985)
• Stahl, H., Orthogonal polynomials with complex-valued weight function. I, II. Constr. Approx., 2 (1986), 225–240, 241–251.
• Stahl H.: On the convergence of generalized Padé approximants. Constr. Approx. 5, 221–240 (1989)
• Stahl, H., Diagonal Padé approximants to hyperelliptic functions. Ann. Fac. Sci. Toulouse Math., Special issue (1996), 121–193.
• Stahl H.: The convergence of Padé approximants to functions with branch points. J. Approx. Theory 91, 139–204 (1997)
• Suetin, S.P., On the uniform convergence of diagonal Padé approximants for hyperelliptic functions. Mat. Sb., 191 (2000), 81–114 (Russian); English translation in Sb. Math., 191 (2000), 1339–1373.
• Suetin, S.P., On the convergence of Chebyshev continued fractions for elliptic functions. Mat. Sb., 194 (2003), 63–92 (Russian); English translation in Sb. Math., 194 (2003), 1807–1835.
• Szegő, G., Orthogonal Polynomials. Colloquium Publications, 23. Amer. Math. Soc., Providence, RI, 1975.
https://math.stackexchange.com/questions/2633242/weighted-summation
# “Weighted” Summation
Since two users have already misunderstood the question, I am pointing it out: I have already proved the first statement; I am using it only as an example and as motivation for the question, which is the quoted one.
Trying to prove that
$$\lim_{n\to \infty}\left(1+\frac{x}{n}\right)^n=\sum\limits_{k=0}^{\infty}\frac{x^k}{k!},$$
I found this expression:
$$\lim_{n\to \infty}\sum\limits_{k=0}^{n}\frac{x^k}{k!}\cdot\frac{n(n-1)\dotsm(n-k+1)}{n^k}.$$
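A worked intermediate step, added for clarity (this is just the binomial theorem):
$$\left(1+\frac{x}{n}\right)^n=\sum_{k=0}^{n}\binom{n}{k}\frac{x^k}{n^k}=\sum_{k=0}^{n}\frac{x^k}{k!}\cdot\frac{n!}{(n-k)!\,n^k}=\sum_{k=0}^{n}\frac{x^k}{k!}\cdot\frac{n(n-1)\dotsm(n-k+1)}{n^k},$$
and for each fixed $k$ the weight $b_{n,k}=n(n-1)\dotsm(n-k+1)/n^k$ tends to $1$ as $n\to\infty$.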
Proving the equivalence is possible since in this case we can use dominated convergence (both the series and the weights are "nice"), but this led me to wonder:

Under which conditions, given that $\forall k:\ \lim_{n\to \infty}b_{n,k}=1$, does
$$\sum\limits_{k<n}a_k b_{n,k} \to \sum\limits_k a_k\ ?$$
I tried to use the dominated convergence theorem in general, as I did in proving the first problem, with a discrete measure on $\mathbb{N}$, but I always ended up with hypotheses that seemed to me stronger than the statement. I managed to show that $\sum\limits_{k=2}^n\frac{k^{\frac{2}{n}}}{n\ln^2(k)}$ diverges, while $\sum\limits_{k=2}^\infty\frac{1}{k\ln^2(k)}$ converges absolutely, so I think there must be some kind of hypothesis on the rate of growth of $b_{n,k}$, but I do not know how to make these ideas precise.

Obviously, this is strictly related to summation methods (such as the Cesàro method), but I would like to prove it while relaxing the condition on the averages of the weights that I have always found in this topic. Moreover, it would be a useful generalization, for example of "Convergence of a modified series". Any bibliography is really welcome too.
• You could prove that both converge to $e^x$, but are you looking for a proof that does not use $e^x$? – dezdichado Feb 2 '18 at 19:43
• @dezdichado The convergence of the first sum I came up with it's not the matter here, the question is the one quoted – Gabriele Cassese Feb 2 '18 at 19:45
• Not for the series but rather for the 'averaged limit', but there is a characterization theorem for the latter, such as the Silverman-Toeplitz theorem – Sangchul Lee Mar 11 '18 at 5:15
• Dominated convergence is the way to go. – Antonio Vargas Mar 11 '18 at 5:17
• @AntonioVargas Isn't there a weaker method? Assuming dominated convergence seems too much to me – Gabriele Cassese Mar 11 '18 at 6:59
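For reference, the Silverman–Toeplitz theorem mentioned in the comments, stated here in its standard sequence-to-sequence form (applying it to the weighted series of the question requires passing to partial sums): the transform $t_n=\sum_k b_{n,k}\,s_k$ maps every convergent sequence $(s_k)$ to a sequence converging to the same limit if and only if
$$\lim_{n\to\infty}b_{n,k}=0\ \text{for each fixed }k,\qquad \lim_{n\to\infty}\sum_k b_{n,k}=1,\qquad \sup_n\sum_k|b_{n,k}|<\infty.$$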
https://ask.sagemath.org/answers/37720/revisions/
# Revision history
The function (the func argument) passed to minimize_constrained can be either a symbolic expression,
sage: x = var('x')
sage: minimize_constrained(x^2, [None], [99.6])
(0.0)
or a Python function whose argument is a tuple,
sage: minimize_constrained(lambda x: x[0]^2, [None], [99.6])
(0.0)
The type error in the OP is avoided by passing the initial point as a list.
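By analogy with the one-variable calls above, a two-variable sketch (the assumption that the constraint list takes one None entry per coordinate, and the expected minimizer, are mine and not verified output):

sage: x, y = var('x y')
sage: # unconstrained two-variable problem started from the origin;
sage: # the minimizer should come out near (1.0, -2.0)
sage: minimize_constrained((x - 1)^2 + (y + 2)^2, [None, None], [0.0, 0.0])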
http://pages.uoregon.edu/vostrik/algebraabstracts/nganou.html
SPEAKER: Jean Bernard Nganou
TITLE: Lattice ordered groups and algebras of logic
ABSTRACT: MV-algebras were introduced in the 1950s by C. Chang
as the algebraic counterpart of Łukasiewicz's many-valued logic.
MV-algebras are BL-algebras whose negations are involutions. For any
BL-algebra $L$, we construct an associated lattice ordered Abelian group
$G_L$ that coincides with the Chang's $\ell$-group of an MV-algebra when
the BL-algebra is an MV-algebra. We prove that the Chang-Mundici's group
of the MV-center of any BL-algebra $L$ is a direct summand in $G_L$. We
also find a direct description of the complement $\mathfrak{S}(L)$ of
the Chang's group of the MV-center in terms of the filter of dense elements
of $L$. Finally, we compute some examples of the group $G_L$.
This is a joint work with C. Lele.
https://www.physicsforums.com/threads/rotational-motion-finding-the-total-energy.39164/
# Rotational Motion: Finding the Total Energy
1. ### Schu
Particulars:
A ball with a radius of 2.5 cm and a mass of 0.125 kg is rolling across a table with a speed of 0.547 m/s; the table is 1.04 m off the ground. It rolls to the edge and down a ramp. How fast will it be rolling across the floor?
First I found the gravitational potential energy, Ep = mgh:
initial 1.2753 J, final 0.
Then the linear kinetic energy, (1/2)mv^2:
initial 0.0187005625 J, final 0.0625v^2.
Elastic potential energy, 0.5k(Δx)^2:
initial 0, final 0.
Rotational kinetic energy, (1/5)mv^2:
initial 0.007480225 J, final 0.025v^2.
Now I need to bring them all together and solve for the final velocity.
Is the sum of the initial energies equal to the sum of the final energies?
If that's true, then 1.30148 = 0.0875v^2,
so v ≈ 3.86 m/s.
Is that at all right??
2. ### Schu
I need help ASAP
Is anyone out there????
I would appreciate the help
3. ### Galileo
Looks ok to me. (I got 3.86 m/s, by rounding off)
### Staff: Mentor
rotational KE
The rotational KE is $KE_{\text{rot}} = \frac{1}{2} I \omega^2$.
You will also need the "rolling condition": $V = \omega R$.
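A quick numerical check of the energy balance above (a minimal sketch in Python; it assumes g = 9.81 m/s² and a uniform solid ball, so I = (2/5)mr² and the rotational KE equals (1/5)mv² when rolling without slipping):

import math

# data from the thread: mass (kg), initial speed (m/s), table height (m), g (m/s^2, assumed)
m, v0, h, g = 0.125, 0.547, 1.04, 9.81

# initial energy = gravitational PE + translational KE + rotational KE of a solid ball
E_initial = m * g * h + (0.5 + 0.2) * m * v0**2

# on the floor all of it is kinetic again: E_initial = 0.7 * m * v_final^2
v_final = math.sqrt(E_initial / (0.7 * m))
print(f"E_initial = {E_initial:.5f} J, v_final = {v_final:.2f} m/s")
# expected output: E_initial = 1.30148 J, v_final = 3.86 m/s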
https://carinabarrios.com/site/4a8240-tresemm%C3%A9-pro-pure-detangle-and-smooth-leave-in-conditioner
Consider the linear regression model where the outputs are denoted by $y_i$, the associated vectors of inputs are denoted by $x_i$, the vector of regression coefficients is denoted by $\beta$, and the $\varepsilon_i$ are unobservable error terms. We observe a sample of $N$ realizations, so that the vector of all outputs $y$ is an $N \times 1$ vector, the design matrix $X$ is an $N \times K$ matrix, and the vector of error terms $\varepsilon$ is an $N \times 1$ vector. The OLS estimator $\widehat{\beta} = (X^\top X)^{-1} X^\top y$ is the vector of regression coefficients that minimizes the sum of squared residuals.

In more general models we often cannot obtain exact finite-sample results for the estimators' properties, so we study their asymptotic properties, i.e. their behaviour as the sample size $N$ tends to infinity, under assumptions such as the following:

1. Convergence: the sample mean $\frac{1}{N} X^\top X$ converges in probability to its population counterpart.
2. Rank (identification): the limit $Q = \operatorname{plim}_{N\to\infty} \frac{1}{N} X^\top X$ has full rank and is therefore invertible.
3. Orthogonality: for each $i$, the regressors are orthogonal to the error terms, $\mathrm{E}[x_i \varepsilon_i] = 0$.
4. Central limit theorem: a CLT applies to the sequence $x_i \varepsilon_i$, whose long-run covariance matrix is denoted by $V$.

Under Assumptions 1–3 the OLS estimator is consistent, $\widehat{\beta} \overset{p}{\to} \beta$. Adding Assumption 4, it is asymptotically normal:
$$\sqrt{N}\,\left(\widehat{\beta} - \beta\right) \ \overset{d}{\longrightarrow}\ \mathcal{N}\left(0,\ Q^{-1} V Q^{-1}\right).$$
The asymptotic covariance matrix $Q^{-1} V Q^{-1}$ depends on quantities that are not known and therefore needs to be estimated; in particular, a consistent estimator of the long-run covariance matrix $V$ is required. For a review of parametric and non-parametric covariance matrix estimation procedures, see Den Haan, Wouter J., and Andrew T. Levin (1996), "Inferences from parametric and non-parametric covariance matrix estimation procedures", NBER Technical Working Paper Series. Under the classical Gauss–Markov assumptions, OLS is in addition statistically efficient. The underlying lecture is Taboga, Marco (2017), "Properties of the OLS estimator", Lectures on probability theory and mathematical statistics, Third edition, https://www.statlect.com/fundamentals-of-statistics/OLS-estimator-properties.

Related strands interleaved in the text: the broken adaptive ridge (BAR) estimator for sparse linear regression, obtained from an $L_0$-based iteratively reweighted $L_2$ penalization algorithm using the ridge estimator as its initial value, which is consistent for variable selection and has an oracle property for parameter estimation; the asymptotic properties of weighted M-estimators under standard stratified sampling; and a commonly advocated covariance matrix estimator for panel data, which is consistent under asymptotics where the cross-section dimension $n$ grows large with the time dimension $T$ fixed, while allowing essentially arbitrary correlation within each individual.
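A small simulation sketch of these two properties (consistency and asymptotic normality) using Python/NumPy; the data-generating process, seed, and sample sizes are illustrative assumptions, not part of the original lecture:

import numpy as np

rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0])            # true coefficients of the assumed DGP

def ols(N):
    # intercept plus one standard-normal regressor, homoskedastic errors
    X = np.column_stack([np.ones(N), rng.normal(size=N)])
    y = X @ beta + rng.normal(size=N)
    return np.linalg.solve(X.T @ X, X.T @ y)

# consistency: the estimate approaches beta as N grows
for N in (100, 10_000, 1_000_000):
    print(N, ols(N))

# asymptotic normality: sqrt(N) * (beta_hat - beta) keeps a stable spread
draws = np.array([ols(500) - beta for _ in range(2000)])
print("std of sqrt(N)(beta_hat - beta):", np.sqrt(500) * draws.std(axis=0))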
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-9-section-9-3-logarithmic-functions-exercise-set-page-700/48
## Intermediate Algebra for College Students (7th Edition)
$(-6, +\infty)$.
RECALL: The domain of the logarithmic function $f(x) = \log_a{x}$ is $x \gt 0$. Thus, the domain of the given function is the set of all real numbers such that $x+6 \gt 0$. Solve the inequality to obtain: $x + 6 \gt 0 \implies x \gt -6$. Therefore, the domain of the given function is $(-6, +\infty)$.
https://notesformsc.org/c-factorial-recursion/
# C Program To Compute Nth Factorial Using Recursion
The program that computes the factorial calls a function recursively to produce the output. Recursion is the ability of a computing procedure to call itself repeatedly until the problem is solved.
We compiled the program using Dev C++ version 5 (beta) compiler installed on Windows 7 64-bit system. You can use any standard C compiler, but make sure to change the source code according to compiler specifications.
You must know the following c programming concepts before trying the example program.
## Problem Definition
In simple words, the factorial of a given number n is the product of all the numbers up to that number, including the number itself.

So the factorial of n is n! = n × (n − 1) × ⋯ × 2 × 1; for example, 5! = 5 × 4 × 3 × 2 × 1 = 120.

In the previous section, we introduced you to the concept of recursion. To elaborate on that, see the example below. If fact(n) is a function that computes the factorial, then fact(n) = n × fact(n − 1), with fact(1) = 1.

Every time the factorial function calls itself, it subtracts 1 from the parameter of the function until the base case n = 1 is reached. Then it starts processing from 1 back up to n and prints the final result.
## Program Code – Factorial with Recursion
/* Program to compute the Nth factorial */
#include <stdio.h>
#include <stdlib.h>

/* Compute n! by recursive calls; base case: 0! = 1! = 1.
   (Returning n here, as the original did, would give fact(0) == 0.) */
int fact (int n)
{
    int f;
    if (n == 0 || n == 1)
        return 1;
    /* Compute factorial by recursive calls */
    f = n * fact (n - 1);
    return f;
}

int main (void)
{
    int i, n;
    printf ("Enter the Number :");
    scanf ("%d", &n);
    /* Printing results */
    for (i = 0; i < 30; i++)
        printf ("_");
    printf ("\n\n");
    printf ("Factorial of Number %d is %d\n", n, fact (n));
    for (i = 0; i < 30; i++)
        printf ("_");
    printf ("\n\n");
    system ("PAUSE");
    return 0;
}
The most important line of code in the above source code is following.
f = n * fact(n-1);
Here n is the current number being multiplied, and the function calls itself with the previous number; this process continues until 1 is reached. Then the actual multiplications are carried out from 1 back up to n, producing the final output.
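To see how the recursive expansion unfolds, here is a hand trace for an input of 5 (a worked expansion, not program output):

```
fact(5)
= 5 * fact(4)
= 5 * 4 * fact(3)
= 5 * 4 * 3 * fact(2)
= 5 * 4 * 3 * 2 * fact(1)
= 5 * 4 * 3 * 2 * 1
= 120
```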
### Output
Enter the Number:5
_______________________________
Factorial of Number 5 is 120
_______________________________
https://www.impan.pl/en/publishing-house/banach-center-publications/all/92/0/86681/dieudonne-operators-on-the-space-of-bochner-integrable-functions
## Dieudonné operators on the space of Bochner integrable functions
### Volume 92 / 2011
Banach Center Publications 92 (2011), 279-282 MSC: 47B38, 47B05, 46E40. DOI: 10.4064/bc92-0-19
#### Abstract
A bounded linear operator between Banach spaces is called a Dieudonné operator (=weakly completely continuous operator) if it maps weakly Cauchy sequences to weakly convergent sequences. Let $(\Omega,\Sigma,\mu)$ be a finite measure space, and let $X$ and $Y$ be Banach spaces. We study Dieudonné operators $T:L^1(X)\to Y$. Let $i_\infty:L^\infty(X) \to L^1(X)$ stand for the canonical injection. We show that if $X$ is almost reflexive and $T:L^1(X)\to Y$ is a Dieudonné operator, then $T\circ i_\infty:L^\infty(X)\to Y$ is a weakly compact operator. Moreover, we obtain that if $T:L^1(X)\to Y$ is a bounded linear operator and $T\circ i_\infty:L^\infty(X)\to Y$ is weakly compact, then $T$ is a Dieudonné operator.
#### Authors
• Marian Nowak, Faculty of Mathematics, Computer Science and Econometrics, University of Zielona Góra, ul. Prof. Szafrana 4a, 65-516 Zielona Góra, Poland
http://openstudy.com/updates/4e4482fa0b8b3609c72033af
## anonymous 5 years ago: Solve for the indicated letter: a = 5b. The solution is b = ?
1. anonymous
b=a/5
2. anonymous
$\frac{a}{5}$
3. KatrinaKaif
Divide both sides by 5 to get b alone, so a/5 = b
https://alperezescudero.wordpress.com/
## Dependence of the slope on R, Q and the fitting radius
>> pendientes=simul_pendientes(6000:250:8500,-.5:.2:1,200);
>> R=6000:250:8500;
>> Q=-.5:.2:1;
>> imagesc(Q,R,pendientes); set(gca,’YDir’,’normal’)
>> mesh(Q,R,pendientes)
So approximately linear change with R, and almost no change with Q.
Now we see the dependence with the fitting area:
ans =
1.0e-005 *
0.0000 0.1023 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
All very significant.
Also a big change, depending on the fitting radius.
## Effect of having different dispersion in the center and in the periphery
We do not see a significant widening of the pattern. However, a very relevant bias appears:
Simulations with std 10 in the central 1.5 mm (radius), and std 20 in the rest (up to 3 mm radius):
>> fig_pap_simulaciones_ruidoperiferia_01(5,1)
Simulations with std 10 in the central 1.5 mm (radius), and std 40 in the rest (up to 3 mm radius):
>> fig_pap_simulaciones_ruidoperiferia_01(5,1)
## Why fishes behave “better” than Condorcet
We see this problem in Sumpter et al. (2008), Consensus Decision Making by Fish, Curr Biol.
Condorcet's theorem just gives the probability that, if we make N trials with probability p of getting the right answer in each trial, more than half of them get the right answer. Each trial is completely independent, so the solution is the same as for problems of the kind "taking balls from a box". Thus, if the number of fishes $N$ is odd, the probability of the majority taking the "right" decision is
$$P = \sum_{i=(N+1)/2}^{N} \binom{N}{i} p^i (1-p)^{N-i}.$$
If the number of fishes is even, we have the problem of ties. Assuming that ties are resolved by tossing a fair coin (so each option has 50% probability of being chosen), we have
$$P = \sum_{i=N/2+1}^{N} \binom{N}{i} p^i (1-p)^{N-i} + \frac{1}{2}\binom{N}{N/2} p^{N/2} (1-p)^{N/2}.$$
On the other hand, as shown by Ward et al. (2008), Quorum decision-making facilitates… PNAS, decisions of different fishes in the shoal are not independent. Once one or more fishes decide on one option, the probability that others go for that option increases (in fact, Sumpter et al. 2008 also use this formula in another part of their paper). Therefore, it is to be expected that the probability increases faster with $N$ for fishes than Condorcet's theorem predicts. It is as if p in the above formula were not constant, but increased.
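A short sketch of the two formulas above (plain Python; the function name is ours, not from the papers):

```python
from math import comb

def p_majority_correct(N, p):
    """Probability that a strict majority of N independent voters is
    correct; for even N, ties are broken by a fair coin."""
    if N % 2 == 1:
        return sum(comb(N, i) * p**i * (1 - p)**(N - i)
                   for i in range((N + 1) // 2, N + 1))
    half = N // 2
    tail = sum(comb(N, i) * p**i * (1 - p)**(N - i)
               for i in range(half + 1, N + 1))
    return tail + 0.5 * comb(N, half) * p**half * (1 - p)**half

print(p_majority_correct(5, 0.6))  # ~0.683, above the single-voter p = 0.6
```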
## Possible objective functions for the many-eyes problem
Version 1: Probability that at least one fish detects a predator at a given distance.
A predator arrives at a certain angle. The probability that a fish sees the predator if there is no occlusion by another fish is p. If N fishes have a free view at the angle of the arriving predator, the probability that at least one fish sees the predator is $1-(1-p)^N$. This probability increases with N until it saturates.
If we bin the 2D space into 360 angles, and we assume equal probability for the predator arriving at each angle, then the probability that at least one fish sees the predator is
$$P = \frac{1}{360}\sum_{i=1}^{360}\left[1-(1-p)^{N_i}\right],$$
where $N_i$ is the number of fishes with a free line of vision at angle $i$.
The advantage of this cost function is that it takes into account both that many fishes see a certain point (but with the saturation, which I find very reasonable) and that not all fishes look at the same place, leaving others without surveillance.
Since sometimes the shoal will not react until several fish have started evasion, this cost may be generalized to the case when at least n fishes see the predator. However, I think the formula is not simple.
A possible problem with this objective is that it may depend on the binning on angles. My intuition says it will not depend on that (at least not strongly), but at first sight I do not see an easy way of proving it.
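A minimal sketch of this Version 1 objective (hypothetical names; assumes independent detections and the 360-bin discretization described above):

```python
import numpy as np

def detection_objective(N_per_angle, p):
    """Mean over angle bins of P(at least one fish with a free line of
    vision detects a predator arriving at that angle)."""
    N = np.asarray(N_per_angle, dtype=float)
    return np.mean(1.0 - (1.0 - p) ** N)

# Example: 360 bins with 0-5 watching fishes each, per-fish detection 0.1
rng = np.random.default_rng(0)
print(detection_objective(rng.integers(0, 6, size=360), p=0.1))
```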
Version 2: Distance at which the probability of detecting a predator passes a threshold.
We may assume that the probability that a fish sees the predator (p) depends on distance in a simple way. For long distances (where the probability is much lower than one), the probability grows linearly with the angle subtended by the predator. This angle is inversely proportional to distance, so p = k/d, where k is a constant and d is the distance. Then, for a given angle, the probability that at least one fish sees the predator is
$$H = 1-(1-k/d)^N.$$
For a given (constant) value of H, the distance depends on N as
$$d = \frac{k}{1-(1-H)^{1/N}}.$$
The two versions may not be equivalent, because they will weight differently the benefits of equally distributing the surveillance in all angles. But I would not expect strong differences…
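For completeness, the algebra behind that distance expression (a reconstruction; the original post omitted the steps):

```latex
H = 1-(1-k/d)^N
\;\Longrightarrow\; (1-k/d)^N = 1-H
\;\Longrightarrow\; \frac{k}{d} = 1-(1-H)^{1/N}
\;\Longrightarrow\; d = \frac{k}{1-(1-H)^{1/N}}.
```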
## Figures for Gonzalo’s talk in Tarragona
>> V0=zeros(279,1);
V0([40 41])=1;
>> V=matrizsistema2V_num(M,V0,[-1 1],.0005,1000);
>> imagesc(V)
>> caxis([0 .3])
Good.
>> find(roc(:,2)>.8,1)
ans =
36
>> umbral(36)
ans =
0.0017
## New Figure 2
>> fig_pap_elegans_16(1,zeros(100))
## Omega vs Deltax, for the parameters alpha=0.05 beta=1.5, with the new generalized omega
>> clear
>> [pos,omega_general,costes,x]=coste2pos_restofijas(todas.A*.05,todas.M*1.5+todas.S,todas.f,todas.pos_real,1);
>> desv=abs(todas.pos_real-pos);
>> plot(desv,omega_general,’.’)
To start with, only one outlier. Note that the new omega depends on the exponent, so this by itself favors exponent 1.
Bad news for the fit to the mean (without the outlier):
>> buenas=(desv<.2 | omega_general<20);
>> sum(~buenas)
ans =
1
https://www.aimsciences.org/article/doi/10.3934/ipi.2021008
# American Institute of Mathematical Sciences
August 2021, 15(4): 619-639. doi: 10.3934/ipi.2021008
## Cauchy problem of non-homogenous stochastic heat equation and application to inverse random source problem
1 School of Mathematics, Southeast University, Nanjing, Jiangsu, 210096, China 2 School of Science, East China University of Technology, Nanchang, Jiangxi, 330013, China
* Corresponding author
Received February 2020 Revised July 2020 Published August 2021 Early access January 2021
Fund Project: The first author is supported by National Natural Science Foundation of China grant 11761007; the second author is supported by National Natural Science Foundation of China grant 11961002; Natural Science Foundation of Jiangxi Province and Foundation of Academic and Technical Leaders Program for Major Subjects in Jiangxi Province grant 20172BCB22019
In this paper, a Cauchy problem for a non-homogeneous stochastic heat equation is considered together with its inverse source problem, where the source term is assumed to be driven by an additive white noise. The Cauchy problem (direct problem) is to determine the displacement of the random temperature field, while the inverse problem is to reconstruct the statistical properties of the random source, i.e. the mean and variance of the random source. It is proved constructively that the Cauchy problem has a unique mild solution, which is expressed in an integral form. The inverse random source problem is then formulated as two Fredholm integral equations of the first kind, which are typically ill-posed. To obtain stable inverse solutions, the regularized block Kaczmarz method is introduced to solve the two Fredholm integral equations. Finally, numerical experiments show that the proposed method is efficient and robust for reconstructing the statistical properties of the random source.
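The abstract does not spell out the regularized block Kaczmarz iteration; as a rough illustration of the general idea only, here is a damped row-action Kaczmarz sweep on a discretized first-kind system Ax = b (synthetic kernel and data, not the authors' scheme):

```python
import numpy as np

def kaczmarz_damped(A, b, lam=1e-2, sweeps=200):
    """Row-action Kaczmarz iteration with damping lam, a simple
    regularized variant for ill-posed linear systems."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a + lam) * a  # damped projection
    return x

# Synthetic first-kind Fredholm discretization with a smooth kernel
t = np.linspace(0, 1, 40)
A = np.exp(-np.subtract.outer(t, t) ** 2) / t.size
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(1).standard_normal(t.size)
x_rec = kaczmarz_damped(A, b)
```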
Citation: Shuli Chen, Zewen Wang, Guolin Chen. Cauchy problem of non-homogenous stochastic heat equation and application to inverse random source problem. Inverse Problems and Imaging, 2021, 15 (4) : 619-639. doi: 10.3934/ipi.2021008
Figures (captions only; images not reproduced): the decaying property of singular values, (A) for Eq. (12) and (B) for Eq. (13); the statistical properties of the exact source; the statistical properties of the inverse source for $\mu = 10^{-4}, \epsilon = 0$ and for $\mu = 10^{-3}, \epsilon = 0.03$; the statistical properties of the random source for $\mu = 5\times 10^{-3}, \epsilon = 0.05$.
https://www.jiskha.com/questions/111479/find-the-radius-of-a-circle-with-the-circumference-of-94-25cm-correct-to-the-nearest-cm
# MATHS
Find the radius of a circle with the circumference of 94.25cm correct to the nearest cm?
1. Circumference = 2pi(r)
so
2pi(r) = 94.25
r = 94.25/(2pi)
= 15 cm (to the nearest cm)
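A quick numerical check (plain Python):

```python
import math

C = 94.25
r = C / (2 * math.pi)
print(round(r))  # 15 -> the radius is 15 cm to the nearest cm
```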
https://rationalwiki.org/wiki/User_talk:Jayjay4ever
# User talk:Jayjay4ever
Welcome to RationalWiki, Jayjay4ever!
Check out our guide for newcomers and our community standards!
Tell us how you found RationalWiki here!
If you are interested in contributing:
Howdy!--Offeep 21:26, 3 September 2007 (CDT)
Good first edit. Haha, yeah, it happens :-D-αmεσ (!) 21:27, 3 September 2007 (CDT)
Hey dude, I deleted your screenshot because it linked you to a sock. If you don't care, let me know and I'll undelete; if you do, upload another one with the username scrubbed from the image.-αmεσ (soldier) 18:26, 21 October 2007 (EDT)
## I has halped
~ Gloom(is never asleep) 00:51, 6 August 2008 (EDT)
## Demotion
You have been demoted. ≈π 00:37, 29 August 2008 (EDT)
My condolences on your demotion.CЯacke® 00:43, 29 August 2008 (EDT)
He's in a better place now.... δλερνερ διαλέγομαι | συνεισφέρω 00:54, 29 August 2008 (EDT)
### ah...duuuuuuuuuhhhhhhh
Lookie here, all will be clear. CЯacke® 13:39, 29 August 2008 (EDT)
OMG, who committed such atrocity, and why? JJ4eGot milk? 14:51, 29 August 2008 (EDT)
## Andyland
Nice work on those images - I assume you wrote most of the article, too, which is also teh funnie. It should be moved to Conservapedia:Andyland, though. I'd do it but then I'd wind up with the "credit" for creating the article, on my silly "articles wot I maid" list. So you should move it. And it should be heavily linked and promoted! ħuman 21:39, 5 November 2008 (EST)
Thanks! The article is actually a parody of North Korea from Wikipedia, and others. (Hope there's no copyright infringement here). I was thinking the same thing about the title when I created the article, but I felt it would ruin its "authenticity", which I believe is important to make it funny. --JJ4eI love you 21:52, 5 November 2008 (EST)
You're welcome!! Again, nice work. However... in the CP namespace it will still be "authentic" - the mainspace it can come up as a "random page", and it requires previous knowledge and info to work. In the CP space, it's a gem. Please, you move it. Don't make me do it ;) ħuman 23:08, 5 November 2008 (EST)
Whoa, how subtle. Thanks anyway! JJ4eI love you 14:13, 6 November 2008 (EST)
## Trevor Bekolay and the SDG
Hello, if you happen to read this, why do we think that Trevor Bekolay was added to the SDG? He asked the same question on the talk page and it's the first time I heard about it, where did we get this information? NightFlare 20:07, 23 April 2009 (EDT)
## Schlafly smile picture
Please at least paste in where you found it? ħuman 06:16, 18 October 2010 (UTC)
With pleasure:
And a bonus:
--JJ4etalk 13:24, 20 October 2010 (UTC)
Thanks, I pasted in the url at the image file. Any idea what the copyright on it is? Likewise the second one? ħuman 18:10, 20 October 2010 (UTC)
Ah, I see it here, though I doubt that url will be stable. ħuman 18:30, 20 October 2010 (UTC)
https://hyperleap.com/topic/Wiener%E2%80%93Khinchin_theorem
# Wiener–Khinchin theorem
In applied mathematics, the Wiener–Khinchin theorem, also known as the Wiener–Khintchine theorem and sometimes as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectrum of that process.
## Related articles
### Autocorrelation
For continuous time, the Wiener–Khinchin theorem says that if $x$ is a wide-sense-stationary process whose autocorrelation function (sometimes called the autocovariance), defined in terms of the statistical expected value as $r_{xx}(\tau) = \mathbb{E}\,[x(t)\,x^{*}(t-\tau)]$ (the asterisk denotes complex conjugate; it can be omitted if the random process is real-valued), exists and is finite at every lag $\tau$, then there exists a monotone function $F(f)$ in the frequency domain such that
$$r_{xx}(\tau) = \int_{-\infty}^{\infty} e^{2\pi i f \tau}\, dF(f).$$
The Wiener–Khinchin theorem thus relates the autocorrelation function to the power spectral density $S_{xx}$ via the Fourier transform:
$$S_{xx}(f) = \int_{-\infty}^{\infty} r_{xx}(\tau)\, e^{-2\pi i f \tau}\, d\tau.$$
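As a discrete-time sanity check (our illustration, not from the article): for a finite sampled signal, the DFT of the circular autocorrelation equals the periodogram.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                  # one sample path

psd = np.abs(np.fft.fft(x)) ** 2 / x.size      # periodogram |X|^2 / N

# circular autocorrelation r[k] = (1/N) sum_n x[n] x[(n+k) mod N]
r = np.array([x @ np.roll(x, -k) for k in range(x.size)]) / x.size
print(np.allclose(np.fft.fft(r).real, psd))    # True
```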
### Aleksandr Khinchin
Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934.
### Norbert Wiener
The Wiener–Khinchin theorem (also known as the Wiener–Khintchine theorem and the Khinchin–Kolmogorov theorem) states that the power spectral density of a wide-sense-stationary random process is the Fourier transform of the corresponding autocorrelation function.
### Spectral density
In the latter form (for a stationary random process), one can make the change of variables $\Delta = t_1 - t_2$, and with the limits of integration (rather than $[0,T]$) approaching infinity, the resulting power spectral density and the autocorrelation function of this signal are seen to be Fourier-transform pairs (Wiener–Khinchin theorem).
### Linear time-invariant system
The theorem is useful for analyzing linear time-invariant systems (LTI systems) when the inputs and outputs are not square-integrable, so their Fourier transforms do not exist.
The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals do not exist.
### Albert Einstein
Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914.
### Riemann–Stieltjes integral
The integral appearing in the spectral decomposition above is a Riemann–Stieltjes integral.
### Square-integrable function
The Fourier transform of x(t) does not exist in general, because stationary random functions are not generally either square-integrable or absolutely integrable.
### Almost everywhere
But if F(f) is absolutely continuous, for example, if the process is purely indeterministic, then F is differentiable almost everywhere.
### Aliasing
This is due to the problem of aliasing: the contribution of any frequency higher than the Nyquist frequency seems to be equal to that of its alias between 0 and 1.
### Transfer function
Since the Fourier transform of the autocorrelation function of a signal is the power spectrum of the signal, this corollary is equivalent to saying that the power spectrum of the output is equal to the power spectrum of the input times the energy transfer function.
### Discrete Fourier transform
Further complicating the issue is that the discrete Fourier transform always exists for digital, finite-length sequences, meaning that the theorem can be blindly applied to calculate auto-correlations of numerical sequences.
### Wold's theorem
In statistics, Wold's decomposition or the Wold representation theorem (not to be confused with the Wold theorem that is the discrete-time analog of the Wiener–Khinchin theorem), named after Herman Wold, says that every covariance-stationary time series $Y_t$ can be written as the sum of two time series, one deterministic and one stochastic.
### Scale invariance
The Wiener–Khinchin theorem further implies that any sequence that exhibits a variance-to-mean power law under these conditions will also manifest 1/f noise.
http://ahay.org/blog/2006/11/01/claerbouts-nmo/
Moveout, velocity, and stacking, another chapter from Jon Claerbout's book Basic Earth Imaging, has been added to the collection of reproducible papers. There are some slight modifications due to substituting Madagascar programs.
https://mathoverflow.net/questions/177830/question-about-higher-inductive-types-and-computational-rules
# Question about higher inductive types and computational rules
I have been trying to make my way through the Homotopy Type Theory book, slowly but surely, and I just finished reading this introductory series of 3 articles on HoTT on Science4All.
http://www.science4all.org/le-nguyen-hoang/homotopy-type-theory/
At some point, he describes identity proofs and higher inductive types, he shows how you could construct the integers starting with a base element 0 and two constructors, up (u) and down (d), such that
udA=A,
for any "integer" A. Now he says that one way of reducing the complexity of the type and flattening it is to use higher order inductive types and have two identity constructors:
id_ud (n) : n = u(d(n)),
id_du (n) : n = d(u(n))
Now, my question is simply: Why can't we just make up this type by playing around with computation rules? Couldn't we just posit an induction principle that would say something like:
ind_Z (C,0) := C(0),
ind_Z (C, u(d(n))) := ind_Z (C, n),
ind_Z (C, d(u(n))) := ind_Z (C, n)
I'm aware that we'd get stuck at something like uudd(0), but then I'm sure we could have more rules to swap the ups and downs around or something.
Then, having those equalities at the definitional level, u(d(n)) ≡ n : Z (the three-bar judgmental equality symbol), we would get the equality type from above, u(d(n)) = n (as a type). Is the problem that it's too strong?
Thank you very much
• I don't understand what you're proposing; can you write it out in more detail? – Mike Shulman Aug 6 '14 at 19:24
• Sorry if this is unclear, it's all very new to me. I guess I'm just confused as to why identity proof constructors would be necessary. The reason why he added the constructor for the type u(d(n))=n, was so that he could say that going up and then down was the same as not moving. But aren't computation rules there exactly for this kind of question? Why didn't he just define instead a computation rule udn=n : Z? If he later wanted to use the type udn=n, he would have had it with "ref", since the computation says that they're "the same". Maybe there is a reason why having a term udn=n is bad? – Gabriel Aug 6 '14 at 22:02
• Ah, I think I see. I will try to answer. – Mike Shulman Aug 7 '14 at 3:06
• Thanks. In the meantime, I reread a couple of pages on equality and identity proofs, etc. on ncatlab, and it made it somewhat clearer for me. Although, it's still confusing to me why in some cases we'd want to encode our more fundamental relations like udn=n as propositional equalities and in other cases as definitional ones, but I guess continuing reading about the subject will eventually blow away the fog. – Gabriel Aug 7 '14 at 5:48
• My reading of the question seems to be a bit different from Mike's. I think Gabriel wants to avoid postulating identity constructors and replace them by "indistinguishability rules", i.e. what would result from substitution but without assuming that the terms are "equal" (it's unclear to me whether Gabriel means for "equal" to be propositional or definitional). I'm not sure but my hunch is that these rules could be powerful enough to recover the identity rules in the propositional sense but not in the definitional sense. – François G. Dorais Aug 7 '14 at 23:15
If I understand the question correctly, one answer is that the rules of type theory are not (supposed to be) arbitrarily chosen independently of each other like the axioms of set theory are. They come in "packages", one for each "type-forming operation", and each package has the same general shape: it consists of a Formation rule, some Introduction rules, some Elimination rules, and some Computation rules.
A Formation rule tells you how to introduce a type, e.g. "if $A$ and $B$ are types, so is $A\times B$". An Introduction rule tells you how to introduce terms in that type, e.g. "if $a:A$ and $b:B$, then $(a,b):A\times B$". An Elimination rule tells you how to use terms in that type to construct terms in other types, e.g. "if $f:A\to B\to C$, then $rec(f):A\times B\to C$". And a Computation rule tells you what happens when you apply an Elimination rule to an Introduction rule, e.g. "$rec(f)((a,b)) \equiv f(a)(b)$".
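Written out schematically, the four rules for the product type from these examples look like this (standard notation):

```latex
\frac{A \;\mathsf{type} \qquad B \;\mathsf{type}}{A \times B \;\mathsf{type}}\;(\text{Formation})
\qquad
\frac{a : A \qquad b : B}{(a,b) : A \times B}\;(\text{Introduction})
\qquad
\frac{f : A \to B \to C}{\mathsf{rec}(f) : A \times B \to C}\;(\text{Elimination})
\qquad
\mathsf{rec}(f)((a,b)) \equiv f(a)(b)\;(\text{Computation})
```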
These four groups of rules that pertain to any type former can't be chosen arbitrarily either; they have to be "harmonious". There's no formal definition of what this means, but the idea is that the Introduction and Elimination rules should determine each other, and the Computation rules should tell you exactly how to apply any Elimination rule to any Introduction rule and no more.
A bit more specifically, there are two kinds of type formers: positive ones and negative ones. For a positive type, you choose the Introduction rules, and then the Elimination rules are essentially determined by saying "in order to define a function out of our new type, it suffices to specify its value on all the inputs coming from some Introduction rule". For a negative type, you choose the Elimination rules, and then the Introduction rules are essentially determined by saying "in order to construct an element of our new type, it suffices to specify how all the Elimination rules would behave on that element". In both cases, the Computation rules then say that these "specifications" do in fact hold (as definitional equalities).
So, you can't just arbitrarily postulate Computation rules. I mean, you can, but you won't end up with a well-behaved theory. Thus, we have to regard the equalities postulated by higher inductive types as Introduction rules, with correspondingly determined Elimination and Computation rules. (We could try to make them Elimination rules instead, yielding a notion of "Higher Coinductive Type", but there's no consensus yet on what such a thing should look like.)
Why do we require this sort of harmony between the rules? From a computational point of view, it's so that we can actually compute with the Computation rules. If you didn't have that sort of harmony, then you might end up with "stuck" terms with an Elimination form applied to an Introduction form but no applicable Computation rule, or conversely if there were too many Computation rules then you might have some terms that try to "compute" to many different things.
From a category-theoretic point of view, it's because we're specifying objects by universal properties: a positive type former has a "left" universal property like a colimit, while a negative type former has a "right" universal property like a limit. I wrote a blog post about this here.
A different answer is that one of the purposes of higher inductive types is to define homotopy types containing nontrivial paths. The judgmental equalities coming from computation rules cannot give rise to nontrivial paths, because there is no way for two things to be "judgmentally equal in more than one way". By contrast, two things can be propositionally equal in more than one way (because an equality type can contain more than one term), so we can regard those as paths.
Higher inductive types also make sense in "extensional type theory" where there is no (or little) distinction between propositional and judgmental equality, and in this case it is true that every path-constructor of a HIT gives rise to a judgmental equality as well as a propositional one. In that setting they are less interesting, since all types are 0-truncated, but they still have a good many uses. However, at a basic level the path-constructors are still Introduction rules for the reasons described in my other answer, with the resulting judgmental equalities coming from the "reflection rule" of extensional type theory.
https://es.mathworks.com/help/antenna/ref/yagiuda.html
# yagiUda
Create Yagi-Uda array antenna
## Description
The `yagiUda` class creates a classic Yagi-Uda array composed of an exciter, a reflector, and N directors along the z-axis. The reflector and directors create a traveling-wave structure that results in a directional radiation pattern.
The exciter, reflector, and directors have equal widths and are related to the diameter of an equivalent cylindrical structure by the equation
`$w=2d=4r$`
where:
• d is the diameter of equivalent cylinder
• r is the radius of equivalent cylinder
For a given cylinder radius, use the `cylinder2strip` utility function to calculate the equivalent width. A typical Yagi-Uda antenna array uses a folded dipole as an exciter because of its high impedance. The Yagi-Uda is center-fed, and the feed point coincides with the origin. In place of a folded dipole, you can also use a planar dipole as an exciter.
## Creation
### Syntax
``yu = yagiUda``
``yu = yagiUda(Name,Value)``
### Description
``` `yu = yagiUda` creates a half-wavelength Yagi-Uda array antenna along the Z-axis. The default array uses three directors, one reflector, and a folded dipole as an exciter. By default, the dimensions are chosen for an operating frequency of 300 MHz.```
example
``` `yu = yagiUda(Name,Value)` creates a half-wavelength Yagi-Uda array antenna, with additional properties specified by one or more name-value pair arguments. `Name` is the property name and `Value` is the corresponding value. You can specify several name-value pair arguments in any order as `Name1`, `Value1`, `...`, `NameN`, `ValueN`. Properties not specified retain default values.```
## Properties
Antenna type used as the exciter, specified as the comma-separated pair consisting of `'Exciter'` and an antenna object.
Example: `'Exciter',dipole`
Total number of director elements, specified as a scalar.
### Note
Number of director elements should be less than or equal to 20.
Example: `'NumDirectors',13`
Data Types: `double`
Director length, specified as a scalar or vector in meters.
Example: `'DirectorLength',[0.4 0.5]`
Data Types: `double`
Spacing between directors, specified as a scalar or vector in meters.
Example: `'DirectorSpacing',[0.4 0.5]`
Data Types: `double`
Reflector length, specified as a scalar in meters.
Example: `'ReflectorLength',0.3`
Data Types: `double`
Spacing between exciter and reflector, specified as a scalar in meters.
Example: `'ReflectorSpacing', 0.4`
Data Types: `double`
Lumped elements added to the antenna feed, specified as a lumped element object handle. For more information, see `lumpedElement`.
Example: `'Load',lumpedelement`. `lumpedelement` is the object handle for the load created using `lumpedElement`.
Example: ```yu.Load = lumpedElement('Impedance',75)```
Tilt angle of the antenna, specified as a scalar or vector with each element unit in degrees. For more information, see Rotate Antenna and Arrays.
Example: `'Tilt',90`
Example: `'Tilt',[90 90]`, `'TiltAxis',[0 1 0;0 1 1]` tilts the antenna by 90 degrees about each of the two axes specified by the rows of `'TiltAxis'`.
Data Types: `double`
Tilt axis of the antenna, specified as:
• Three-element vectors of Cartesian coordinates in meters. In this case, each vector starts at the origin and lies along the specified points on the X-, Y-, and Z- axes.
• Two points in space, each specified as three-element vectors of Cartesian coordinates. In this case, the antenna rotates around the line joining the two points in space.
• A string input describing simple rotations around one of the principal axes, 'X', 'Y', or 'Z'.
Example: `'TiltAxis',[0 1 0]`
Example: `'TiltAxis',[0 0 0;0 1 0]`
Example: `ant.TiltAxis = 'Z'`
## Object Functions
• `show`: Display antenna or array structure; display shape as filled patch
• `info`: Display information about antenna or array
• `axialRatio`: Axial ratio of antenna
• `beamwidth`: Beamwidth of antenna
• `charge`: Charge distribution on metal or dielectric antenna or array surface
• `current`: Current distribution on metal or dielectric antenna or array surface
• `design`: Design prototype antenna or arrays for resonance at specified frequency
• `EHfields`: Electric and magnetic fields of antennas; embedded electric and magnetic fields of antenna element in arrays
• `impedance`: Input impedance of antenna; scan impedance of array
• `mesh`: Mesh properties of metal or dielectric antenna or array structure
• `meshconfig`: Change mesh mode of antenna structure
• `pattern`: Radiation pattern and phase of antenna or array; embedded pattern of antenna element in array
• `patternAzimuth`: Azimuth pattern of antenna or array
• `patternElevation`: Elevation pattern of antenna or array
• `returnLoss`: Return loss of antenna; scan return loss of array
• `sparameters`: S-parameter object
• `vswr`: Voltage standing wave ratio of antenna
## Examples
collapse all
Create and view a Yagi-Uda array antenna with 13 directors.
```y = yagiUda('NumDirectors',13); show(y)```
Plot the radiation pattern of a Yagi-Uda array antenna at a frequency of 300 MHz.
```y = yagiUda('NumDirectors',13); pattern(y,300e6)```
Calculate the width of the strip approximation to a cylinder of radius 20 mm.
`w = cylinder2strip(20e-3)`
```w = 0.0800 ```
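As noted in the description, a planar dipole can replace the default folded dipole exciter. A minimal sketch (the dipole dimensions below are illustrative values, not taken from this page):

```matlab
% Use a planar dipole in place of the default folded dipole exciter.
d = dipole('Length',0.45,'Width',0.02);    % illustrative dimensions
y = yagiUda('Exciter',d,'NumDirectors',3);
show(y)
```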
## References
[1] Balanis, C.A. Antenna Theory. Analysis and Design, 3rd Ed. New York: Wiley, 2005.
https://www.r-bloggers.com/2021/11/the-design-effect-of-a-cluster-randomized-trial-with-baseline-measurements/
Is it possible to reduce the sample size requirements of a stepped wedge cluster randomized trial simply by collecting baseline information? In a trial with randomization at the individual level, it is generally the case that if we are able to measure an outcome for subjects at two time periods, first at baseline and then at follow-up, we can reduce the overall sample size. But does this extend to (a) cluster randomized trials generally, and to (b) stepped wedge designs more specifically?
The answer to (a) is a definite “yes,” as described in a 2012 paper by Teerenstra et al (more details on that below). As for (b), two colleagues on the Design and Statistics Core of the NIA IMPACT Collaboratory, Monica Taljaard and Fan Li, and I have just started thinking about this. Ultimately, we hope to have an analytic solution that provides more formal guidance for stepped wedge designs; but to get things started, we thought we could explore a bit using simulation.
## Quick overview
Generally speaking, why might baseline measurements have any impact at all? The curse of any clinical trial is variability – the more noise (variability) there is in the outcome, the more difficult it is to identify the signal (effect). For example, if we are interested in measuring the impact of an intervention on the quality of life (QOL) across a diverse range of patients, the measurement (which typically ranges from 0 to 1) might vary considerably from person to person, regardless of the intervention. If the intervention has a real but moderate effect of, say, 0.1 points, it could easily get lost if the standard deviation is considerably larger, say 0.25.
It turns out that if we collect baseline QOL scores and can “control” for those measurements in some way (by conducting a repeated measures analysis, using ANCOVA, or assessing the difference itself as an outcome), we might be able to reduce the variability across study subjects sufficiently to give us a better chance at picking up the signal. Previously, I’ve written about baseline covariate adjustment in the context of clinical trials where randomization is at the individual subject level; now we will turn to the case where randomization is at the cluster or site level.
This post focuses on work already done to derive design effects for parallel cluster randomized trials (CRTs) that collect baseline measurements; we will get to stepped wedge designs in future posts. I described the design effect pretty generally in an earlier post, but the paper by Teerenstra et al, titled “A simple sample size formula for analysis of covariance in cluster randomized trials” provides a great foundation to understand how baseline measurements can impact sample sizes in clustered designs.
Here’s a brief outline of what follows: after showing an example based on a simple 2-arm randomized control trial with 350 subjects that has 80% power to detect a standardized effect size of 0.3, I describe and simulate a series of designs with cluster sizes of 30 subjects that require progressively fewer clusters but also provide 80% power under the same effect size and total variance assumptions: a simple CRT that needs 64 sites, a cross-sectional pre-post design that needs 52, a repeated measures design that needs 38, and a repeated measures design that models follow-up outcomes only (i.e., uses an ANCOVA model) and requires only 32.
## Simple RCT
We start with a simple RCT (without any clustering) that randomizes individuals to treatment or control.
$Y_{i} = \alpha + \delta Z_{i} + s_{i}$ where $$Y_{i}$$ is a continuous outcome measure for individual $$i$$, and $$Z_{i}$$ is the treatment status of individual $$i$$. $$\delta$$ is the treatment effect. $$s_{i} \sim N(0, \sigma_s^2)$$ are the individual random effects or noise.
Now that we are about to start coding, here are the necessary packages:
RNGkind("L'Ecuyer-CMRG")
set.seed(19287)
library(simstudy)
library(ggplot2)
library(lmerTest)
library(parallel)
library(data.table)
library(pwr)
library(gtsummary)
library(paletteer)
library(magrittr)
In the examples that follow, overall variance $$\sigma^2 = 64$$. In this first example, then, $$\sigma_s^2 = 64$$ since that is the only source of variation. The overall effect size $$\delta$$, which is the difference in average scores across treatment groups, is assumed to be 2.4, a standardized effect size $$2.4/8 = 0.3.$$ We will need to generate 350 individual subjects (175 in each arm) to achieve power of 80%.
pwr.t.test(d = 0.3, power = 0.80)
##
## Two-sample t test power calculation
##
## n = 175
## d = 0.3
## sig.level = 0.05
## power = 0.8
## alternative = two.sided
##
## NOTE: n is number in *each* group
#### Data generation process
Here is the data definition and generation process:
simple_rct <- function(N) {
# data definition for outcome
defS <- defData(varname = "rx", formula = "1;1", dist = "trtAssign")
defS <- defData(defS, varname = "y", formula = "2.4*rx", variance = 64, dist = "normal")
dd <- genData(N, defS)
dd[]
}
dd <- simple_rct(350)
Here is a visualization of the outcome measures by treatment arm.
#### Estimating effect size
A simple linear regression model estimates the effect size:
fit1 <- lm(y ~ rx, data = dd)
tbl_regression(fit1) %>%
modify_footnote(ci ~ NA, abbreviation = TRUE)
| Characteristic | Beta | 95% CI | p-value |
|---|---|---|---|
| rx | 3.2 | 1.6, 4.9 | <0.001 |
#### Confirming power
We can confirm the power by repeatedly generating data sets and fitting models, recording the p-values for each replication.
replicate <- function() {
dd <- simple_rct(350)
fit1 <- lm(y ~ rx, data = dd)
coef(summary(fit1))["rx", "Pr(>|t|)"]
}
p_values <- mclapply(1:1000, function(x) replicate(), mc.cores = 4)
Here is the estimated power based on 1000 replications:
mean(unlist(p_values) < 0.05)
## [1] 0.79
## Parallel cluster randomized trial
If we need to randomize at the site level (i.e., conduct a CRT), we can describe the data generation process as
$Y_{ij} = \alpha + \delta Z_{j} + c_j + s_i$
where $$Y_{ij}$$ is a continuous outcome for subject $$i$$ in site $$j$$. $$Z_{j}$$ is the treatment indicator for site $$j$$. Again, $$\delta$$ is the treatment effect. $$c_j \sim N(0,\sigma_c^2)$$ is a site level effect, and $$s_i \sim N(0, \sigma_s^2)$$ is the subject level effect. The correlation of any two subjects in a cluster is $$\rho$$ (the ICC):
$\rho = \frac{\sigma_c^2}{\sigma_c^2 + \sigma_s^2}$
If we have a pre-specified number ($$n$$) of subjects at each site, we can estimate the sample size required for the CRT by applying the design effect $$1+(n-1)\rho$$ to the sample size of an RCT with the same overall variance. So, if $$\sigma_c^2 + \sigma_s^2 = 64$$, we can inflate the sample size we used in the initial example. With $$\sigma_c^2 = 9.6$$ and $$\sigma_s^2 = 54.4$$, $$\rho = 0.15$$. We anticipate having 30 subjects at each site, so the design effect is
$1 + (30 - 1) \times 0.15 = 5.35$
This means we will need $$5.35 \times 350 \approx 1873$$ total subjects based on the same effect size and power assumptions. Since we anticipate 30 subjects per site, we need $$1873 / 30 \approx 62.4$$ sites - we will round up to the nearest even number and use 64 sites.
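As a quick sanity check (this snippet is mine, not from the original derivation), the same arithmetic in R:

```r
# Quick check of the standard design effect arithmetic
rho <- 9.6 / 64               # ICC implied by the variance assumptions
n <- 30                       # subjects per site
(deff <- 1 + (n - 1) * rho)   # design effect: 5.35
(deff * 350 / n)              # sites needed: 62.4, rounded up to 64
```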
#### Data generation process
simple_crt <- function(nsites, n) {
defC <- defData(varname = "rx", formula = "1;1", dist = "trtAssign")
defC <- defData(defC, varname = "c", formula = "0", variance = 9.6, dist = "normal")
defS <- defDataAdd(varname="y", formula="c + 2.4*rx", variance = 54.4, dist="normal")
# site/cluster level data
dc <- genData(nsites, defC, id = "site")
# individual level data
dd <- genCluster(dc, "site", n, "id")
dd[]
}
dd <- simple_crt(20, 50)
Once again, the sites randomized to the treatment arm are colored red:
#### Estimating effect size
A mixed effects model is used to estimate the effect size. I’m using a larger data set to recover the parameters used in the data generation process:
dd <- simple_crt(200,100)
fit2 <- lmer(y ~ rx + (1|site), data = dd)
tbl_regression(fit2, tidy_fun = broom.mixed::tidy) %>%
modify_footnote(ci ~ NA, abbreviation = TRUE)
| Characteristic | Beta | 95% CI | p-value |
|---|---|---|---|
| rx | 1.2 | 0.21, 2.1 | 0.018 |
| site.sd__(Intercept) | 3.4 | | |
| Residual.sd__Observation | 7.4 | | |
#### Confirming power
Now, I will confirm power using 64 sites with 30 subjects per site, for a total of 1920 subjects (compared with only 350 in the RCT):
replicate <- function() {
dd <- simple_crt(64, 30)
fit2 <- lmer(y ~ rx + (1|site), data = dd)
coef(summary(fit2))["rx", "Pr(>|t|)"]
}
p_values <- mclapply(1:1000, function(x) replicate(), mc.cores = 4)
mean(unlist(p_values) < 0.05)
## [1] 0.8
## CRT with baseline measurement
We paid quite a hefty price moving from an RCT to a CRT in terms of the number of subjects we need to collect data on. If these data are coming from administrative systems, that added burden might not be an issue, but if we need to consent all the subjects and survey them individually, this could be quite burdensome.
We may be able to decrease the required number of clusters (i.e. reduce the design effect) if we can collect baseline measurements of the outcome. The baseline and follow-up measurements can be collected from the same subjects or different subjects, though the impact on the design effect depends on what approach is taken.
$Y_{ijk} = \alpha_0 + \alpha_1 k + \delta_{0} Z_j + \delta_{1}k Z_{j} + c_j + cp_{jk} + s_{ij} + sp_{ijk}$
where $$Y_{ijk}$$ is a continuous outcome measure for individual $$i$$ in site $$j$$ and measurement $$k \in \{0,1\}$$. $$k=0$$ for baseline measurement, and $$k=1$$ for the follow-up. $$Z_{j}$$ is the treatment status of cluster $$j$$, $$Z_{j} \in \{0,1\}.$$ $$\alpha_0$$ is the mean outcome at baseline for subjects in the control clusters, $$\alpha_1$$ is the change from baseline to follow-up in the control arm, $$\delta_{0}$$ is the difference at baseline between control and treatment arms (we would expect this to be $$0$$ in a randomized trial), and $$\delta_{1}$$ is the difference in the change from baseline to follow-up between the two arms. (In a randomized trial, since $$\delta_0$$ should be close to $$0$$, $$\delta_1$$ is the treatment effect.)
The model has cluster-specific and subject-specific random effects. For both, there can be time-invariant effects and time-varying effects. $$c_j \sim N(0,\sigma_c^2)$$ are the time-invariant site-specific effects, and $$cp_{jk} \sim N(0, \sigma_{cp}^2)$$ are the site-specific period (time-varying) effects. At the subject level, there are time-invariant effects $$s_{ij} \sim N(0, \sigma_s^2)$$ and time-varying effects $$sp_{ijk} \sim N(0, \sigma_{sp}^2)$$.
Here is the generic code that will facilitate data generation in this model:
crt_base <- function(effect, nsites, n, s_c, s_cp, s_s, s_sp) {

  defC <- defData(varname = "c", formula = 0, variance = "..s_c")
  defC <- defData(defC, varname = "rx", formula = "1;1", dist = "trtAssign")

  defCP <- defDataAdd(varname = "c.p", formula = 0, variance = "..s_cp")

  defS <- defDataAdd(varname = "s", formula = 0, variance = "..s_s")

  defSP <- defDataAdd(varname = "y",
    formula = "..effect * rx * period + c + c.p + s",
    variance = "..s_sp")

  # site/cluster level data
  dc <- genData(nsites, defC, id = "site")

  # site-by-period data (time-varying cluster effects)
  dcp <- addPeriods(dc, 2, "site")
  dcp <- addColumns(defCP, dcp)
  dcp <- dcp[, .(site, period, c.p, timeID)]

  # individual level data
  ds <- genCluster(dc, "site", n, "id")
  ds <- addColumns(defS, ds)

  # individual-by-period data
  dsp <- addPeriods(ds, 2)
  setnames(dsp, "timeID", "obsID")

  setkey(dsp, site, period)
  setkey(dcp, site, period)

  # merge in the site-period effects, then generate the outcome
  dd <- merge(dsp, dcp)
  dd <- addColumns(defSP, dd)
  setkey(dd, site, id, period)

  dd[]
}
## Design effect
In their paper, Teerenstra et al develop a design effect that takes into account the baseline measurement. Here are a few key quantities that are needed for the calculation:
The correlation of two subject measurements in the same cluster and same time period is the ICC or $$\rho$$, and is:
$\rho = \frac{\sigma_c^2 + \sigma_{cp}^2}{\sigma_c^2 + \sigma_{cp}^2 + \sigma_s^2 + \sigma_{sp}^2}$
In order to estimate design effect, we need two more correlations. The correlation between baseline and follow-up random effects at the cluster level is
$\rho_c = \frac{\sigma_c^2}{\sigma_c^2 + \sigma_{cp}^2}$
and the correlation between baseline and follow-up random effects at the subject level is
$\rho_s = \frac{\sigma_s^2}{\sigma_s^2 + \sigma_{sp}^2}$
A value $$r$$ is used to estimate the design effect, and is defined as
$r = \frac{n\rho\rho_c + (1-\rho)\rho_s}{1 + (n-1)\rho}$
If we are able to collect baseline measurements and our focus is on estimating $$\delta_1$$ from the model, the design effect is slightly modified from before:
$(1 + (n-1)\rho)(2(1-r))$
## Cross-sectional cohorts
We may not be able to collect two measurements for each subject at a site, but if we can collect measurements on two different cohorts, one at baseline before the intervention is implemented, and one cohort in a second period (either after the intervention has been implemented or not, depending on the randomization assignment of the cluster), we might be able to reduce the number of clusters.
In this case, $$\sigma_s^2 = 0$$ and $$\rho_s = 0$$, so the general model reduces to
$Y_{ijk} = \alpha_0 + \alpha_1 k + \delta_{0} Z_j + \delta_{1} k Z_{j} + c_j + cp_{jk} + sp_{ijk}$
#### Data generation process
The parameters for this simulation are $$\delta_1 = 2.4$$, $$\sigma_c^2 = 6.8$$, $$\sigma_{cp}^2 = 2.8$$, $$\sigma_{sp}^2 = 54.4$$. Total variance $$\sigma_c^2 + \sigma_{cp}^2 + \sigma_{sp}^2 = 6.8 + 2.8 + 54.4 = 64$$, as used previously.
dd <- crt_base(effect = 2.4, nsites = 20, n = 30, s_c=6.8, s_cp=2.8, s_s=0, s_sp=54.4)
Here is a visualization of the outcome measures by site and by period, with the sites in the treatment arm colored in red (only in the follow-up period).
#### Estimating effect size
To estimate the effect size we fit a mixed effect model with cluster-specific effects only (both time invariant and time varying).
dd <- crt_base(effect = 2.4, nsites = 200, n = 100, s_c=6.8, s_cp=2.8, s_s=0, s_sp=54.4)
fit3 <- lmer(y ~ period*rx+ (1|timeID:site) + (1 | site), data = dd)
tbl_regression(fit3, tidy_fun = broom.mixed::tidy) %>%
modify_footnote(ci ~ NA, abbreviation = TRUE)
| Characteristic | Beta | 95% CI | p-value |
|---|---|---|---|
| period | -0.03 | -0.52, 0.46 | >0.9 |
| rx | 0.17 | -0.78, 1.1 | 0.7 |
| period * rx | 2.7 | 2.0, 3.4 | <0.001 |
| timeID:site.sd__(Intercept) | 1.6 | | |
| site.sd__(Intercept) | 2.9 | | |
| Residual.sd__Observation | 7.4 | | |
#### Confirming power
Based on the variance assumptions, we can update our design effect:
s_c <- 6.8
s_cp <- 2.8
s_s <- 0
s_sp <- 54.4
rho <- (s_c + s_cp)/(s_c + s_cp + s_s + s_sp)
rho_c <- s_c/(s_c + s_cp)
rho_s <- s_s/(s_s + s_sp)
n <- 30
r <- (n * rho * rho_c + (1-rho) * rho_s) / (1 + (n-1) * rho)
The design effect for the CRT without any baseline measurement was 5.35. With the two-cohort design, the design effect is reduced slightly:
(des_effect <- (1 + (n - 1) * rho) * 2 * (1 - r))
## [1] 4.3
des_effect * 350 / n
## [1] 50
The desired number of sites is over 50, so rounding up to the next even number gives us 52:
replicate <- function() {
dd <- crt_base(2.4, 52, 30, s_c = 6.8, s_cp = 2.8, s_s = 0, s_sp = 54.4)
fit3 <- lmer(y ~ period * rx+ (1|timeID:site) + (1 | site), data = dd)
coef(summary(fit3))["period:rx", "Pr(>|t|)"]
}
p_values <- mclapply(1:1000, function(x) replicate(), mc.cores = 4)
mean(unlist(p_values) < 0.05)
## [1] 0.8
## Repeated measurements
We can reduce the number of clusters further if, instead of measuring one cohort prior to the intervention and another after the intervention, we measure a single cohort twice - once at baseline and once at follow-up. Now we use the full model, which decomposes the subject-level variance into a time-invariant effect ($$s_{ij}$$) and a time-varying effect ($$sp_{ijk}$$):
$Y_{ijk} = \alpha_0 + \alpha_1 k + \delta_{0} Z_j + \delta_{1} k Z_{j} + c_j + cp_{jk} + s_{ij} + sp_{ijk}$
#### Data generation process
These are the parameters: $$\delta_1 = 2.4$$, $$\sigma_c^2 = 6.8$$, $$\sigma_{cp}^2 = 2.8$$, $$\sigma_s^2 = 38,$$ and $$\sigma_{sp}^2 = 16.4$$.
dd <- crt_base(effect=2.4, nsites=20, n=30, s_c=6.8, s_cp=2.8, s_s=38, s_sp=16.4)
Here is what the data look like; each line represents an individual subject at the two time points, baseline and follow-up.
#### Estimating effect size
The mixed effect model includes cluster-specific effects only (both time invariant and time varying), as well as subject level effects. Again, total variance ($$\sigma_c^2 + \sigma_{cp}^2 + \sigma_s^2 + \sigma_{sp}^2$$) is 64.
dd <- crt_base(effect = 2.4, nsites = 200, n = 100,
s_c = 6.8, s_cp = 2.8, s_s = 38, s_sp = 16.4)
fit4 <- lmer(y ~ period*rx + (1 | id:site) + (1|timeID:site) + (1 | site), data = dd)
tbl_regression(fit4, tidy_fun = broom.mixed::tidy) %>%
modify_footnote(ci ~ NA, abbreviation = TRUE)
| Characteristic | Beta | 95% CI | p-value |
|---|---|---|---|
| period | -0.21 | -0.73, 0.31 | 0.4 |
| rx | -0.19 | -1.1, 0.73 | 0.7 |
| period * rx | 2.4 | 1.7, 3.2 | <0.001 |
| id:site.sd__(Intercept) | 6.2 | | |
| timeID:site.sd__(Intercept) | 1.8 | | |
| site.sd__(Intercept) | 2.7 | | |
| Residual.sd__Observation | 4.1 | | |
#### Confirming power
Based on the variance assumptions, we can update our design effect a second time:
s_c <- 6.8
s_cp <- 2.8
s_s <- 38
s_sp <- 16.4
rho <- (s_c + s_cp)/(s_c + s_cp + s_s + s_sp)
rho_c <- s_c/(s_c + s_cp)
rho_s <- s_s/(s_s + s_sp)
n <- 30
r <- (n * rho * rho_c + (1-rho) * rho_s) / (1 + (n-1) * rho)
And again, the design effect (and sample size requirement) is reduced:
(des_effect <- (1 + (n - 1) * rho) * 2 * (1 - r))
## [1] 3.1
des_effect * 350 / n
## [1] 37
The desired number of sites is over 36, so I will round up to 38:
replicate <- function() {
dd <- crt_base(2.4, 38, 30, s_c = 6.8, s_cp = 2.8, s_s = 38, s_sp = 16.4)
fit4 <- lmer(y ~ period*rx + (1 | id:site) + (1|timeID:site) + (1 | site), data = dd)
coef(summary(fit4))["period:rx", "Pr(>|t|)"]
}
p_values <- mclapply(1:1000, function(x) replicate(), mc.cores = 4)
mean(unlist(p_values) < 0.05)
## [1] 0.79
## Repeated measurements - ANCOVA
We may be able to reduce the number of clusters even further by changing the model so that we are comparing follow-up outcomes of the two treatment arms (as opposed to measuring the differences in changes as we just did). This model is
$Y_{ij1} = \alpha_0 + \gamma Y_{ij0} + \delta Z_j + c_j + s_{ij}$
where we have adjusted for baseline measurement $$Y_{ij0}.$$ Even though the estimation model has changed, I am using the exact same data generation process as before, with the same effect size and variance assumptions:
dd <- crt_base(effect = 2.4, nsites = 200, n = 100,
s_c = 6.8, s_cp = 2.8, s_s = 38, s_sp = 16.4)
dobs <- dd[, .(site, rx, id, period, timeID, y)]
dobs <- dcast(dobs, site + rx + id ~ period, value.var = "y")
fit5 <- lmer(`1` ~ `0` + rx + (1 | site), data = dobs)
tbl_regression(fit5, tidy_fun = broom.mixed::tidy) %>%
modify_footnote(ci ~ NA, abbreviation = TRUE)
| Characteristic | Beta | 95% CI | p-value |
|---|---|---|---|
| `0` (baseline score) | 0.70 | 0.69, 0.71 | <0.001 |
| rx | 2.5 | 1.8, 3.1 | <0.001 |
| site.sd__(Intercept) | 2.2 | | |
| Residual.sd__Observation | 5.3 | | |
#### Design effect
Teerenstra et al derived an alternative design effect that is specific to the ANCOVA model:
$(1 + (n-1)\rho) (1-r^2)$
where $$r$$ is the same as before. Since $$2(1-r) - (1-r^2) = (1-r)^2 > 0$$ for $$0 \le r < 1$$, we have $$(1-r^2) < 2(1-r)$$, so this will be a reduction from the earlier design effect.
(des_effect <- (1 + (n - 1) * rho) * (1 - r^2))
## [1] 2.7
des_effect * 350 / n
## [1] 31
The desired number of sites is just over 31, so I will round up to the next even number and use 32.
#### Confirming power
replicate <- function() {
dd <- crt_base(2.4, 32, 30, s_c = 6.8, s_cp = 2.8, s_s = 38, s_sp = 16.4)
dobs <- dd[, .(site, rx, id, period, timeID, y)]
dobs <- dcast(dobs, site + rx + id ~ period, value.var = "y")
fit5 <- lmer(`1` ~ `0` + rx + (1 | site), data = dobs)
coef(summary(fit5))["rx", "Pr(>|t|)"]
}
p_values <- mclapply(1:1000, function(x) replicate(), mc.cores = 4)
mean(unlist(p_values) < 0.05)
## [1] 0.78
## Next steps
These simulations confirmed the design effects derived by Teerenstra et al. In the next post, we will turn to baseline measurements in the context of a stepped wedge design, to see if these results translate to a more complex setting. The design effects themselves have not yet been derived. In the meantime, to get yourself psyched up for what is coming, you can read more generally about stepped wedge designs in several earlier posts on this blog.
Reference:
Teerenstra, Steven, Sandra Eldridge, Maud Graff, Esther de Hoop, and George F. Borm. “A simple sample size formula for analysis of covariance in cluster randomized trials.” Statistics in medicine 31, no. 20 (2012): 2169-2178.
Support:
This work was supported in part by the National Institute on Aging (NIA) of the National Institutes of Health under Award Number U54AG063546, which funds the NIA IMbedded Pragmatic Alzheimer’s Disease and AD-Related Dementias Clinical Trials Collaboratory (NIA IMPACT Collaboratory). The author, a member of the Design and Statistics Core, was the sole writer of this blog post and has no conflicts. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.
https://docs.flexcompute.com/projects/tidy3d/en/stable/notebooks/PhotonicCrystalWaveguidePolarizationFilter.html
# Photonic crystal waveguide polarization filter#
Polarization control is one of the central themes in integrated silicon photonics. Different polarization modes not only allow for more information-carrying channels but also enable a wide range of applications given their different characteristics. For example, waveguide TE modes usually have better confinement and thus they are less prone to sidewall roughness. TM modes, on the other hand, have a larger penetration depth into the top and bottom claddings, which makes them suitable for sensing applications. As a result, integrated silicon photonic filters that selectively transmit or block certain polarization are very useful.
This notebook demonstrates the modeling of a compact TM-pass polarization filter based on a photonic crystal waveguide. The photonic crystal is an air-bridged silicon slab with periodic air holes arranged in a triangular lattice. It is possible to achieve a TM-pass, TE-block device within a frequency range by utilizing bandgap engineering and the index guiding mechanism. The design parameters, adopted from Chandra Prakash and Mrinal Sen, "Optimization of silicon-photonic crystal (PhC) waveguide for a compact and high extinction ratio TM-pass polarization filter", Journal of Applied Physics 127, 023101 (2020), are optimized for the telecom band to achieve roughly -0.5 dB TM transmission and -40 dB TE transmission.
[1]:
import numpy as np
import matplotlib.pyplot as plt
import tidy3d as td
import tidy3d.web as web
from tidy3d.plugins import ModeSolver
## Simulation Setup#
This device is designed to work in a wide frequency range from 1480 nm to 1620 nm.
[2]:
lda0 = 1.55 # central wavelength
freq0 = td.C_0 / lda0 # central frequency
ldas = np.linspace(1.48, 1.62, 100) # wavelength range of interest
freqs = td.C_0 / ldas # frequency range of interest
Since the photonic crystal slab is air-bridged, we only need to define two materials: silicon and air. The frequency dispersion of the silicon refractive index in the frequency range of interest is quite small. Therefore, in this notebook, we model it as non-dispersive and lossless.
[3]:
n_si = 3.47 # silicon refractive index
si = td.Medium(permittivity=n_si**2)
n_air = 1 # air refractive index
air = td.Medium(permittivity=n_air)
For the photonic crystal to work as a filter, the geometric parameters need to be carefully chosen such that the bandgap lies within the frequency range of interest. The design process requires the calculation of band structures. In this notebook, we skip the band structure calculation and only model the optimized device. For band diagram simulation, refer to the photonic crystal slab band structure calculation notebook.
Define the geometric parameters for the photonic crystal as well as the input and output straight waveguides.
[4]:
a = 0.42 # lattice constant
t = 0.75 * a # slab thickness
r = 0.3 * a # radius of air holes
w = 0.73 * a # width of the photonic crystal waveguide section
N_holes = 11 # number of holes in each row
N_rows = 7 # number of rows of holes on each side of the waveguide
L = N_holes * a # length of the photonic crystal waveguide
D = 0.4 # width of the input and output waveguides
inf_eff = 1e3 # effective infinity of the model
To build the device, we define the silicon slab, the input and output straight waveguides, and the air holes. The air holes are defined systematically using a nested for loop. Due to the mirror symmetry of the device with respect to the $$xz$$ plane, we only define the air holes in $$y>0$$. Later, we will define the corresponding symmetry condition in the simulation.
[5]:
# define the silicon slab
si_slab = td.Structure(
geometry=td.Box.from_bounds(
rmin=(-L / 2, -N_rows * np.sqrt(3) * a / 2 - w / 2, 0),
rmax=(L / 2, N_rows * np.sqrt(3) * a / 2 + w / 2, t),
),
medium=si,
)
# define the input and output straight waveguides
si_wg = td.Structure(
geometry=td.Box.from_bounds(
rmin=(-inf_eff, -D / 2, 0),
rmax=(inf_eff, D / 2, t),
),
medium=si,
)
# systematically define air holes
holes = []
for i in range(N_rows):
if i % 2 == 0:
shift = a / 2
N = N_holes
else:
shift = 0
N = N_holes + 1
for j in range(N):
holes.append(
td.Structure(
geometry=td.Cylinder(
center=(
(j - (N_holes) / 2) * a + shift,
(w / 2 + r) + (i) * np.sqrt(3) * a / 2,
t / 2,
),
radius=r,
length=t,
),
medium=air,
)
)
A ModeSource is defined at the input waveguide to launch either the fundamental TE or TM mode. A FluxMonitor is defined at the output waveguide to measure the transmission. In addition, we define a FieldMonitor to visualize the field propagation and scattering in the $$xy$$ plane.
[6]:
# simulation domain size
Lx = 1.5 * L
Ly = 2 * N_rows * a + lda0
Lz = 7 * t
sim_size = (Lx, Ly, Lz)
# define a mode source at the input waveguide
fwidth = 0.5 * (np.max(freqs) - np.min(freqs))
mode_spec = td.ModeSpec(num_modes=1, target_neff=n_si)
mode_source = td.ModeSource(
center=(-Lx / 2 + lda0 / 2, 0, t / 2),
size=(0, 4 * D, 5 * t),
source_time=td.GaussianPulse(freq0=freq0, fwidth=fwidth),
direction="+",
mode_spec=mode_spec,
mode_index=0,
)
# define a flux monitor at the output waveguide
flux_monitor = td.FluxMonitor(
center=(Lx / 2 - lda0 / 2, 0, t / 2),
size=mode_source.size,
freqs=freqs,
name="flux",
)
# define a field monitor in the xy plane
field_monitor = td.FieldMonitor(
center=(0, 0, t / 2),
size=(td.inf, td.inf, 0),
freqs=[freq0],
name="field",
)
For periodic structures, it is better to define a grid that is commensurate with the periodicity. Therefore, we use UniformGrid in the $$x$$ and $$y$$ directions. In the $$z$$ direction, a nonuniform grid can be used.
[7]:
# define grids
steps_per_unit_cell = 20
grid_spec = td.GridSpec(
grid_x=td.UniformGrid(dl=a / steps_per_unit_cell),
grid_y=td.UniformGrid(dl=a / steps_per_unit_cell * np.sqrt(3) / 2),
grid_z=td.AutoGrid(min_steps_per_wvl=steps_per_unit_cell),
)
Since the TE and TM modes share different symmetry with respect to the $$xz$$ plane, we can selectively launch them by setting the appropriate symmetry condition. The simulation for TE incidence is done by setting the symmetry condition to (0,-1,0) while the TM incidence corresponds to (0,1,0).
For this simulation, we set a relatively long run time of 20 ps to ensure the field decays sufficiently such that the simulation result is accurate.
[8]:
# define the te incidence simulation
run_time = 2e-11 # simulation run time
sim_te = td.Simulation(
center=(0, 0, 0),
size=sim_size,
grid_spec=grid_spec,
structures=[si_slab, si_wg] + holes,
sources=[mode_source],
monitors=[flux_monitor, field_monitor],
run_time=run_time,
boundary_spec=td.BoundarySpec.all_sides(boundary=td.PML()),
symmetry=(0, -1, 0),
)
To quickly check if the structures, source, and monitors are correctly defined, use the plot method to visualize the simulation.
[9]:
sim_te.plot(z=t / 2);
[10:00:40] INFO Auto meshing using wavelength 1.5500 defined from grid_spec.py:510
sources.
[9]:
<Axes: title={'center': 'cross section at z=0.16'}, xlabel='x', ylabel='y'>
To further investigate the grids, we overlay the grid on top of the structures and zoom in on a small area. The grid looks sufficiently fine.
[10]:
fig, ax = plt.subplots()
sim_te.plot(z=t / 2, ax=ax)
sim_te.plot_grid(z=t / 2, ax=ax)
ax.set_xlim(-0.5, 0.5)
ax.set_ylim(0, 1);
[10]:
(0.0, 1.0)
Lastly, we use the ModeSolver plugin to visualize the mode profile launched by the mode source. The mode field confirms that we are launching the fundamental TE mode at the input waveguide.
[11]:
# define mode solver
mode_solver = ModeSolver(
simulation=sim_te,
plane=td.Box(center=mode_source.center, size=mode_source.size),
mode_spec=mode_spec,
freqs=[freq0],
)
mode_data = mode_solver.solve()
# visualize mode fields
f, (ax1, ax2, ax3) = plt.subplots(1, 3, tight_layout=True, figsize=(10, 3))
abs(mode_data.Ex.isel(mode_index=0)).plot(x="y", y="z", ax=ax1, cmap="magma")
abs(mode_data.Ey.isel(mode_index=0)).plot(x="y", y="z", ax=ax2, cmap="magma")
abs(mode_data.Ez.isel(mode_index=0)).plot(x="y", y="z", ax=ax3, cmap="magma")
ax1.set_title("|Ex(x, y)|")
ax1.set_aspect("equal")
ax2.set_title("|Ey(x, y)|")
ax2.set_aspect("equal")
ax3.set_title("|Ez(x, y)|")
ax3.set_aspect("equal")
plt.show()
The TM incidence simulation can be made simply by copying the TE simulation while updating the symmetry condition to (0,1,0). Then a simulation batch consisting of both simulations is defined. This way, both simulations run in parallel on the server, which substantially reduces the total turnaround time.
[12]:
# copy the te simulation to make the tm simulation
sim_tm = sim_te.copy(
update={"symmetry": (0, 1, 0)}
)
# define simulation batch
sims = {"TE": sim_te, "TM": sim_tm}
Submit the simulation batch to the server.
[13]:
batch = web.Batch(simulations=sims)
batch_results = batch.run(path_dir="data")
[10:00:42] INFO Using Tidy3D credentials from stored file. auth.py:77
[10:00:43] INFO Authentication successful. auth.py:37
[10:00:44] INFO Created task 'TE' with task_id webapi.py:131
'c2606530-90b2-4b17-8c99-659b68eec1f9'.
INFO Auto meshing using wavelength 1.5500 defined from grid_spec.py:510
sources.
INFO Created task 'TM' with task_id webapi.py:131
'48b7cfec-1766-499d-a581-b6ab22a9a041'.
[10:00:49] Started working on Batch. container.py:383
[10:05:52] Batch complete. container.py:417
## Postprocessing and Result Visualization#
After the batch of simulations is complete, we first visualize the field intensity distributions in both cases. As expected, the TE mode is blocked and the TM mode transmits through the photonic crystal region.
[14]:
# get individual simulation data from batch result
sim_data_te = batch_results["TE"]
sim_data_tm = batch_results["TM"]
# plot the field intensities in the te and tm cases
fig, (ax1, ax2) = plt.subplots(1, 2, tight_layout=True, figsize=(9, 4))
sim_data_te.plot_field("field", "int", ax=ax1, vmin=0, vmax=4000)
sim_data_tm.plot_field("field", "int", ax=ax2, vmin=0, vmax=4000);
[10:05:57] INFO downloading file "output/monitor_data.hdf5" to webapi.py:673
"data/c2606530-90b2-4b17-8c99-659b68eec1f9.hdf5"
[10:05:58] INFO loading SimulationData from webapi.py:472
data/c2606530-90b2-4b17-8c99-659b68eec1f9.hdf5
INFO downloading file "output/monitor_data.hdf5" to webapi.py:673
"data/48b7cfec-1766-499d-a581-b6ab22a9a041.hdf5"
[10:05:59] INFO loading SimulationData from webapi.py:472
data/48b7cfec-1766-499d-a581-b6ab22a9a041.hdf5
INFO Auto meshing using wavelength 1.5500 defined from grid_spec.py:510
sources.
[10:06:00] INFO Auto meshing using wavelength 1.5500 defined from grid_spec.py:510
sources.
[14]:
<Axes: title={'center': 'cross section at z=0.16'}, xlabel='x', ylabel='y'>
To quantify the filter performance, we plot the transmission in both simulations. The result shows a good transmission (about -0.5 dB) for the TM mode and a low transmission (about -40 dB) for the TE mode. That is, the designed filter functions well while being very compact in size.
[15]:
T_te = sim_data_te["flux"].flux
T_tm = sim_data_tm["flux"].flux
# plot the transmissions in the te and tm cases
plt.plot(ldas, 10 * np.log10(T_te), label="TE")
plt.plot(ldas, 10 * np.log10(T_tm), label="TM")
plt.xlim(1.48, 1.62)
plt.ylim(-50, 0)
plt.xlabel("Wavelength ($\mu m$)")
plt.ylabel("Transmission (dB)")
plt.legend();
[15]:
<matplotlib.legend.Legend at 0x7f39e7b51a30>
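As a small follow-up that is not part of the original notebook, we can also report the extinction ratio, i.e., the gap between the TM and TE transmission curves, directly:

```python
# Extinction ratio of the TM-pass filter in dB (a minimal sketch).
ER = 10 * np.log10(T_tm / T_te)            # TM transmission relative to TE
idx = int(np.argmin(np.abs(ldas - lda0)))  # sample closest to 1.55 um
print(f"Extinction ratio at {lda0} um: {float(ER[idx]):.1f} dB")
```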
https://www.physicsforums.com/threads/solve-the-equation.649220/
# Solve the equation
## Homework Statement
$2^{|x+2|}-|2^{x+1}-1|=2^{x+1}+1$
## The Attempt at a Solution
I know this is not a direct equation in quadratic but somehow I have to convert it in that form by assuming something to be another variable. I am supposing $2^x=t$. But that doesn't help me as I cannot eliminate $2^{|x+2|}$
The first thing I would do is set up "cases" to handle the absolute values. $x+2$ is positive for $x > -2$, and $2^{x+1}-1 > 0$ for $x > -1$. So if $x < -2$, both $x+2$ and $2^{x+1}-1$ are negative. If $-2 < x < -1$, $x+2$ is positive but $2^{x+1}-1$ is still negative. If $x > -1$, both $x+2$ and $2^{x+1}-1$ are positive.
Also use the fact that $2^{x+a} = 2^a 2^x$.
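For instance, in the case $x < -2$, both quantities inside the absolute values are negative, so the equation becomes
$$2^{-(x+2)} - \left(1 - 2^{x+1}\right) = 2^{x+1} + 1,$$
which simplifies to $2^{-(x+2)} = 2$, i.e. $x = -3$. Since $-3 < -2$, this case contributes the solution $x = -3$. The other two cases can be worked the same way.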
http://www.ams.org/mathscinet-getitem?mr=1765876
MathSciNet bibliographic data MR1765876 37A30 (28D05 37E10 60F15 60G10 60H25) Ruffino, Paulo R. C. A sampling theorem for rotation numbers of linear processes in ${\bf R}^2$. Random Oper. Stochastic Equations 8 (2000), no. 2, 175–188.
https://learn.saylor.org/course/view.php?id=18§ionid=162
### Unit 6: Maxwell's Equations
At this point in the course, we have developed the mathematical structure for and a general understanding of all of Maxwell's Equations. Now we want to sit back and summarize our findings by identifying what they are, what they mean, and how we can use them.
There are four Maxwell equations that describe all classical electromagnetism. Maxwell's equations take on a particularly simple form when describing the behavior of electric and magnetic fields in regions devoid of matter; that is, in a vacuum. (Note that for most purposes, air is close enough to being a vacuum that the presence of an atmosphere can be ignored.) These are Maxwell's free space equations.
There are four Maxwell free space equations. These include the two flux equations: the electric and magnetic forms of Gauss' law. These state that the electric or magnetic flux through a closed surface is proportional to the electric or magnetic charge enclosed within that surface. Note that in the magnetic case, there are no magnetic charges (also called magnetic monopoles), so the magnetic flux through any closed surface is zero.
The other two free space Maxwell's equations are Faraday's Law of Induction and a modified version of Ampere's Circuital Law. Once again, these electric and magnetic equations have similar formalisms, thereby emphasizing the close relationship of the electric and magnetic fields. Faraday's Law of Induction states that the induced EMF in any closed circuit is proportional to the time rate of change of the magnetic flux through the circuit, while Ampere's Law states that the integrated magnetic field around a closed curve is proportional to the currents passing through a surface bounded by the curve. Maxwell's main contribution (beyond realizing that these four equations provided a complete theory of electromagnetism) was the discovery and description of the displacement current, which is a source of the magnetic field associated with the rate of change of the electric displacement field in a region.
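For reference, one common way to write the four free space equations in integral form (SI units) is:

$$\oint_S \vec{E}\cdot d\vec{A} = \frac{Q_{\text{enc}}}{\varepsilon_0}, \qquad \oint_S \vec{B}\cdot d\vec{A} = 0,$$

$$\oint_C \vec{E}\cdot d\vec{l} = -\frac{d\Phi_B}{dt}, \qquad \oint_C \vec{B}\cdot d\vec{l} = \mu_0 I_{\text{enc}} + \mu_0\varepsilon_0\frac{d\Phi_E}{dt}.$$

The last term in Ampere's law is Maxwell's displacement-current contribution.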
Inside materials, Maxwell's Equations are modified by the electric permittivity and magnetic permeability of the materials, but they remain the basis for the classical model of electromagnetism. In this unit, we will concentrate on Maxwell's Equations as a single theory that unites the half-century of previous work on electromagnetism.
Completing this unit should take you approximately 8 hours.
https://ftp.aimsciences.org/article/doi/10.3934/jgm.2012.4.165
# American Institute of Mathematical Sciences
June 2012, 4(2): 165-180. doi: 10.3934/jgm.2012.4.165
## Dirac pairs
1 Centre de Mathématiques Laurent Schwartz, École Polytechnique, 91128 Palaiseau, France
Received April 2011 Published August 2012
We extend the definition of the Nijenhuis torsion of an endomorphism of a Lie algebroid to that of a relation, and we prove that the torsion of the relation defined by a bi-Hamiltonian structure vanishes. Following Gelfand and Dorfman, we then define Dirac pairs, and we analyze the relationship of this general notion with the various kinds of compatible structures on manifolds, more generally, on Lie algebroids.
Citation: Yvette Kosmann-Schwarzbach. Dirac pairs. Journal of Geometric Mechanics, 2012, 4 (2) : 165-180. doi: 10.3934/jgm.2012.4.165
##### References:
[1] P. Antunes, Poisson quasi-Nijenhuis structures with background, Lett. Math. Phys., 86 (2008), 33-45. doi: 10.1007/s11005-008-0272-5.
[2] A. Barakat, A. De Sole and V. G. Kac, Poisson vertex algebras in the theory of Hamiltonian equations, Japan. J. Math., 4 (2009), 141-252.
[3] J. Carinena, J. Grabowski and G. Marmo, Courant algebroid and Lie bialgebroid contractions, J. Phys. A, 37 (2004), 5189-5202.
[4] T. Courant, Dirac manifolds, Trans. Amer. Math. Soc., 319 (1990), 631-661. doi: 10.1090/S0002-9947-1990-0998124-1.
[5] I. Ya. Dorfman, Dirac structures of integrable evolution equations, Phys. Lett. A, 125 (1987), 240-246. doi: 10.1016/0375-9601(87)90201-5.
[6] Irene Dorfman, "Dirac Structures and Integrability of Nonlinear Evolution Equations," Nonlinear Science: Theory and Applications, John Wiley & Sons, Ltd., Chichester, 1993.
[7] H. Geiges, Symplectic couples on $4$-manifolds, Duke Math. J., 85 (1996), 701-711. doi: 10.1215/S0012-7094-96-08527-0.
[8] I. M. Gel'fand and I. Ja. Dorfman, Hamiltonian operators and algebraic structures associated with them, (Russian) Funktsional. Anal. i Prilozhen., 13 (1979), 13-30; English transl., Funct. Anal. Appl., 13 (1979), 248-262.
[9] I. M. Gel'fand and I. Ja. Dorfman, Schouten bracket and Hamiltonian operators, (Russian) Funktsional. Anal. i Prilozhen., 14 (1980), 71-74; English transl., Funct. Anal. Appl., 14 (1980), 223-226. doi: 10.1007/BF01086188.
[10] Long-Guang He and Bao-Kang Liu, Dirac-Nijenhuis manifolds, Rep. Math. Phys., 53 (2004), 123-142. doi: 10.1016/S0034-4877(04)90008-0.
[11] Y. Kosmann-Schwarzbach, Jacobian quasi-bialgebras and quasi-Poisson Lie groups, in "Mathematical Aspects of Classical Field Theory" (eds. M. Gotay, J. E. Marsden and V. Moncrief) (Seattle, WA, 1991), Contemp. Math., 132, American Mathematical Society, Providence, RI, (1992), 459-489.
[12] Y. Kosmann-Schwarzbach, Poisson and symplectic functions in Lie algebroid theory, in "Higher Structures in Geometry and Physics" (eds. A. Cattaneo, A. Giaquinto and Ping Xu), Progr. Math., 287, Birkhäuser/Springer, New York (2011), 243-268.
[13] Y. Kosmann-Schwarzbach, Nijenhuis structures on Courant algebroids, Bull. Braz. Math. Soc. (N.S.), 42 (2011), 625-649. doi: 10.1007/s00574-011-0032-5.
[14] Y. Kosmann-Schwarzbach and F. Magri, Poisson-Nijenhuis structures, Ann. Inst. H. Poincaré Phys. Théor., 53 (1990), 35-81.
[15] Y. Kosmann-Schwarzbach and V. Rubtsov, Compatible structures on Lie algebroids and Monge-Ampère operators, Acta Appl. Math., 109 (2010), 101-135. doi: 10.1007/s10440-009-9444-2.
[16] A. Kushner, V. Lychagin and V. Rubtsov, "Contact Geometry and Nonlinear Differential Equations," Encyclopedia of Mathematics and its Applications, 101, Cambridge University Press, Cambridge, 2007.
[17] Zhang-Ju Liu, Some remarks on Dirac structures and Poisson reductions, in "Poisson Geometry" (eds. J. Grabowski and P. Urbanski) (Warsaw, 1998), Banach Center Publications, 51, Polish Acad. Sci., Warsaw (2000), 165-173.
[18] Zhang-Ju Liu, A. Weinstein and Ping Xu, Manin triples for Lie bialgebroids, J. Differential Geom., 45 (1997), 547-574.
[19] V. V. Lychagin, V. N. Rubtsov and I. V. Chekalov, A classification of Monge-Ampère equations, Ann. Sci. École Norm. Sup. (4), 26 (1993), 281-308.
[20] D. Roytenberg, Quasi-Lie bialgebroids and twisted Poisson manifolds, Lett. Math. Phys., 61 (2002), 123-137.
[21] Y. Terashima, On Poisson functions, J. Sympl. Geom., 6 (2008), 1-7.
[22] A. Weinstein, A note on the Wehrheim-Woodward category, J. Geom. Mechanics, 3 (2011), 507-515.
[23] Yanbin Yin and Longguang He, Dirac structures on protobialgebroids, Sci. China Ser. A, 49 (2006), 1341-1352. doi: 10.1007/s11425-006-1997-1.
[1] Melvin Leok, Diana Sosa. Dirac structures and Hamilton-Jacobi theory for Lagrangian mechanics on Lie algebroids. Journal of Geometric Mechanics, 2012, 4 (4) : 421-442. doi: 10.3934/jgm.2012.4.421 [2] Javier Pérez Álvarez. Invariant structures on Lie groups. Journal of Geometric Mechanics, 2020, 12 (2) : 141-148. doi: 10.3934/jgm.2020007 [3] Henry O. Jacobs, Hiroaki Yoshimura. Tensor products of Dirac structures and interconnection in Lagrangian mechanics. Journal of Geometric Mechanics, 2014, 6 (1) : 67-98. doi: 10.3934/jgm.2014.6.67 [4] Ünver Çiftçi. Leibniz-Dirac structures and nonconservative systems with constraints. Journal of Geometric Mechanics, 2013, 5 (2) : 167-183. doi: 10.3934/jgm.2013.5.167 [5] Manuel F. Rañada. Quasi-bi-Hamiltonian structures and superintegrability: Study of a Kepler-related family of systems endowed with generalized Runge-Lenz integrals of motion. Journal of Geometric Mechanics, 2021, 13 (2) : 195-208. doi: 10.3934/jgm.2021003 [6] Mohammad Shafiee. The 2-plectic structures induced by the Lie bialgebras. Journal of Geometric Mechanics, 2017, 9 (1) : 83-90. doi: 10.3934/jgm.2017003 [7] Y. A. Li, P. J. Olver. Convergence of solitary-wave solutions in a perturbed bi-Hamiltonian dynamical system I. Compactions and peakons. Discrete & Continuous Dynamical Systems, 1997, 3 (3) : 419-432. doi: 10.3934/dcds.1997.3.419 [8] Guillermo Dávila-Rascón, Yuri Vorobiev. Hamiltonian structures for projectable dynamics on symplectic fiber bundles. Discrete & Continuous Dynamical Systems, 2013, 33 (3) : 1077-1088. doi: 10.3934/dcds.2013.33.1077 [9] Dennis I. Barrett, Rory Biggs, Claudiu C. Remsing, Olga Rossi. Invariant nonholonomic Riemannian structures on three-dimensional Lie groups. Journal of Geometric Mechanics, 2016, 8 (2) : 139-167. doi: 10.3934/jgm.2016001 [10] Rita Ferreira, Elvira Zappale. Bending-torsion moments in thin multi-structures in the context of nonlinear elasticity. Communications on Pure & Applied Analysis, 2020, 19 (3) : 1747-1793. doi: 10.3934/cpaa.2020072 [11] K. C. H. Mackenzie. Drinfel'd doubles and Ehresmann doubles for Lie algebroids and Lie bialgebroids. Electronic Research Announcements, 1998, 4: 74-87. [12] Hassan Najafi Alishah. Conservative replicator and Lotka-Volterra equations in the context of Dirac\big-isotropic structures. Journal of Geometric Mechanics, 2020, 12 (2) : 149-164. doi: 10.3934/jgm.2020008 [13] Y. A. Li, P. J. Olver. Convergence of solitary-wave solutions in a perturbed bi-hamiltonian dynamical system ii. complex analytic behavior and convergence to non-analytic solutions. Discrete & Continuous Dynamical Systems, 1998, 4 (1) : 159-191. doi: 10.3934/dcds.1998.4.159 [14] Pengliang Xu, Xiaomin Tang. Graded post-Lie algebra structures and homogeneous Rota-Baxter operators on the Schrödinger-Virasoro algebra. Electronic Research Archive, 2021, 29 (4) : 2771-2789. doi: 10.3934/era.2021013 [15] Partha Guha, Indranil Mukherjee. Hierarchies and Hamiltonian structures of the Nonlinear Schrödinger family using geometric and spectral techniques. Discrete & Continuous Dynamical Systems - B, 2019, 24 (4) : 1677-1695. doi: 10.3934/dcdsb.2018287 [16] A. Ghose Choudhury, Partha Guha. Chiellini integrability condition, planar isochronous systems and Hamiltonian structures of Liénard equation. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2465-2478. doi: 10.3934/dcdsb.2017126 [17] William D. Kalies, Konstantin Mischaikow, Robert C.A.M. Vandervorst. Lattice structures for attractors I. 
Journal of Computational Dynamics, 2014, 1 (2) : 307-338. doi: 10.3934/jcd.2014.1.307 [18] Paulo Antunes, Joana M. Nunes da Costa. Hypersymplectic structures on Courant algebroids. Journal of Geometric Mechanics, 2015, 7 (3) : 255-280. doi: 10.3934/jgm.2015.7.255 [19] Javier de la Cruz, Michael Kiermaier, Alfred Wassermann, Wolfgang Willems. Algebraic structures of MRD codes. Advances in Mathematics of Communications, 2016, 10 (3) : 499-510. doi: 10.3934/amc.2016021 [20] Francesco Maddalena, Danilo Percivale, Franco Tomarelli. Adhesive flexible material structures. Discrete & Continuous Dynamical Systems - B, 2012, 17 (2) : 553-574. doi: 10.3934/dcdsb.2012.17.553
2020 Impact Factor: 0.857
https://gamedev.stackexchange.com/questions/176550/fps-like-mouse-look-camera-winapi-problem
# FPS-like mouse-look camera WinAPI problem
I have a problem implementing mouse-look camera movement, like in FPS games. For me the common solution is:
1. Process WM_MOUSEMOVE event in WndProc
2. Calculate delta movement from the window's center using event's lParam
3. Rotate camera
4. Return cursor back to window's center using SetCursorPos
The problem is that when SetCursorPos is called, another WM_MOUSEMOVE event is fired, so the camera rotates back.
What is the common way to create this type of camera on the Windows platform (using WinAPI)?
I know that in WM_MOUSEMOVE I can check whether mouse.x == windowCenter.x and, if it is, do nothing, but that's a hack from my point of view. Is there any "non-hacky" way to achieve the goal?
## 2 Answers
You can use GetCursorPos, and use the result from that to calculate how far the mouse has moved from the center, then SetCursorPos to put it back. With this scheme you don't even need to handle WM_MOUSEMOVE messages; just call GetCursorPos each frame.
This, IIRC, is the approach used by Quake and derivatives.
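A minimal per-frame sketch of this approach (RotateCamera is a placeholder for your own camera code, not part of any API):

#include <windows.h>

// Hypothetical hook into your own camera code.
void RotateCamera(int dx, int dy);

// Call once per frame while the window has focus; no WM_MOUSEMOVE handling needed.
void UpdateMouseLook(HWND hwnd)
{
    // Window center in screen coordinates.
    RECT rc;
    GetClientRect(hwnd, &rc);
    POINT center = { (rc.right - rc.left) / 2, (rc.bottom - rc.top) / 2 };
    ClientToScreen(hwnd, &center);

    // Delta accumulated since the cursor was last re-centered.
    POINT cursor;
    GetCursorPos(&cursor);
    int dx = cursor.x - center.x;
    int dy = cursor.y - center.y;

    if (dx != 0 || dy != 0)
        RotateCamera(dx, dy);

    // Warp the cursor back; next frame GetCursorPos measures fresh movement.
    SetCursorPos(center.x, center.y);
}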
• In that case I need additional code to manage the window focus state, too? – Bohdan Bessonov Oct 22 '19 at 22:27
• @BohdanBessonov - correct, but you're probably going to need similar for showing/hiding the cursor as well, so it's not really that big a deal. The other alternative is to use DirectInput - despite it being deprecated - which will handle all of this automatically for you. – Maximus Minimus Oct 23 '19 at 9:13
• Probably, there is also RawInput and maybe XINPUT, will take a look – Bohdan Bessonov Oct 23 '19 at 10:07
You can use some sort of flag to skip WM_MOUSEMOVE handling whenever you are adjusting the cursor position. That may seem a bit hacky too, but that's how this usually gets done.
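A sketch of that flag idea (illustrative only; g_center is a hypothetical precomputed window center in screen coordinates, and the comment below explains why queued events can defeat this):

#include <windows.h>

static bool g_recentering = false;      // set just before we warp the cursor
static POINT g_center = { 400, 300 };   // hypothetical window center in screen coords

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_MOUSEMOVE:
        if (g_recentering)
        {
            g_recentering = false;      // assume this event is our own SetCursorPos echo
            return 0;
        }
        // ... rotate the camera by the delta from g_center here ...
        g_recentering = true;
        SetCursorPos(g_center.x, g_center.y);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}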
• It does not work, since there can be some real mouse events in the queue before the SetCursorPos event is processed; in that case the wrong event will be discarded by the flag – Bohdan Bessonov Oct 23 '19 at 10:06
• True, although, highly unlikely. – badunius Oct 23 '19 at 12:26
http://lists.lyx.org/pipermail/lyx-cvs/2020-October/002035.html
# [LyX/master] DocBook: support for <info> tags in inner sections.
Thibaut Cuvelier tcuvelier at lyx.org
Fri Oct 30 00:30:56 UTC 2020
commit 661c5d256b74b2ca9fa9501a2a6f2e2ef7b6b099
Author: Thibaut Cuvelier <tcuvelier at lyx.org>
Date: Mon Oct 26 03:55:25 2020 +0100
DocBook: support for <info> tags in inner sections.
Previously, this code only worked correctly for the root tag.
---
autotests/export/docbook/svmono_light.xml | 11 +++++---
lib/layouts/stdsections.inc | 3 ++
lib/layouts/svcommon.inc | 2 +
src/output_docbook.cpp | 37 +++++++++++++++++-----------
4 files changed, 34 insertions(+), 19 deletions(-)
diff --git a/autotests/export/docbook/svmono_light.xml b/autotests/export/docbook/svmono_light.xml
index 6b8b689..4a2aa4c 100644
--- a/autotests/export/docbook/svmono_light.xml
+++ b/autotests/export/docbook/svmono_light.xml
@@ -4,21 +4,22 @@
<title>Untitled Document</title>
<chapter xml:id="chap.intro">
<info>
<abstract role='not-printed'>
-<para>Each chapter should be preceded by an abstract (10–15 lines long) that summarizes the content. The abstract will appear <emphasis>online at and be available with unrestricted access. This allows unregistered users to read the abstract as a teaser for the complete chapter. As a general rule the abstracts will not appear in the printed version of your book unless it is the style of your particular book or that of the series to which your book belongs.</emphasis><!-- \indent -->
+<para>Each chapter should be preceded by an abstract (10–15 lines long) that summarizes the content. The abstract will appear <emphasis>online at <link xlink:href="www.SpringerLink.com">www.SpringerLink.com</link> and be available with unrestricted access. This allows unregistered users to read the abstract as a teaser for the complete chapter. As a general rule the abstracts will not appear in the printed version of your book unless it is the style of your particular book or that of the series to which your book belongs.</emphasis><!-- \indent -->
Please use the 'starred' version of the <code>abstract</code> environment for typesetting the text of the online abstracts. Use the plain <code>abstract</code> if the abstract is also to appear in the printed version of the book.</para>
</abstract>
<abstract>
-<para>Each chapter should be preceded by an abstract (10–15 lines long) that summarizes the content. The abstract will appear <emphasis>online at and be available with unrestricted access. This allows unregistered users to read the abstract as a teaser for the complete chapter. As a general rule the abstracts will not appear in the printed version of your book unless it is the style of your particular book or that of the series to which your book belongs.</emphasis><!-- \indent -->
+<para>Each chapter should be preceded by an abstract (10–15 lines long) that summarizes the content. The abstract will appear <emphasis>online at <link xlink:href="www.SpringerLink.com">www.SpringerLink.com</link> and be available with unrestricted access. This allows unregistered users to read the abstract as a teaser for the complete chapter. As a general rule the abstracts will not appear in the printed version of your book unless it is the style of your particular book or that of the series to which your book belongs.</emphasis><!-- \indent -->
Please use the 'starred' version of the <code>abstract</code> environment for typesetting the text of the online abstracts. Use the plain <code>abstract</code> if the abstract is also to appear in the printed version of the book.</para>
</abstract>
</info>
+<para>bla</para>
</section>
@@ -34,7 +35,9 @@
</m:mrow>
</m:math>
</informalequation>
- however, for multiline equations we recommend to use the <emphasis role='sans'>eqnarray</emphasis> environment.
+ however, for multiline equations we recommend to use the <emphasis role='sans'>eqnarray</emphasis> environment<footnote>
+<para>In physics texts please activate the class option <code>vecphys</code> to depict your vectors in <emphasis role='bold'><emphasis>boldface-italic</emphasis> type - as is customary for a wide range of physical subjects.</emphasis></para>
+</footnote>.
<informalequation xml:id="eq.01">
<alt role='tex'>a\times b & = & c\nonumber \\
\vec{a}\cdot\vec{b} & = & c\label{eq:01}
diff --git a/lib/layouts/stdsections.inc b/lib/layouts/stdsections.inc
index ae153e8..4516d07 100644
--- a/lib/layouts/stdsections.inc
+++ b/lib/layouts/stdsections.inc
@@ -41,6 +41,7 @@ Style Part
HTMLTag h1
DocBookTag title
DocBookTagType paragraph
+ DocBookInInfo maybe
DocBookSectionTag part
DocBookForceAbstractTag partintro
End
@@ -77,6 +78,7 @@ Style Chapter
HTMLTag h1
DocBookTag title
DocBookTagType paragraph
+ DocBookInInfo maybe
DocBookSectionTag chapter
End
@@ -111,6 +113,7 @@ Style Section
HTMLTag h2
DocBookTag title
DocBookTagType paragraph
+ DocBookInInfo maybe
DocBookSectionTag section
End
diff --git a/lib/layouts/svcommon.inc b/lib/layouts/svcommon.inc
index 0bc1f36..1cbd2b4 100644
--- a/lib/layouts/svcommon.inc
+++ b/lib/layouts/svcommon.inc
@@ -115,6 +115,7 @@ Style Part
DocBookTag title
DocBookTagType paragraph
DocBookSectionTag part
+ DocBookInInfo maybe
DocBookForceAbstractTag partintro
End
@@ -166,6 +167,7 @@ Style Chapter
Align Left
DocBookTag title
DocBookTagType paragraph
+ DocBookInInfo maybe
DocBookSectionTag chapter
End
diff --git a/src/output_docbook.cpp b/src/output_docbook.cpp
index 23d4db8..a0316ba 100644
--- a/src/output_docbook.cpp
+++ b/src/output_docbook.cpp
@@ -553,6 +553,8 @@ void makeEnvironment(Text const &text,
closeTag(xs, par->layout().docbookiteminnertag(), par->layout().docbookiteminnertagtype());
++p;
+ // Insert a new line after each "paragraph" (i.e. line in the listing), except for the last one.
+ // Otherwise, there would one more new line in the output than in the LyX document.
if (p != pars.end())
xs << xml::CR();
}
@@ -811,12 +813,12 @@ DocBookInfoTag getParagraphsWithInfo(ParagraphList const ¶graphs,
// Skip paragraphs that don't generate anything in DocBook.
Paragraph const & par = paragraphs[cpit];
Layout const &style = par.layout();
- if (hasOnlyNotes(par) || style.docbookininfo() == "never")
+ if (hasOnlyNotes(par))
continue;
- // There should never be any section here. (Just a sanity check: if this fails, this function could end up
- // processing the whole document.)
- if (isLayoutSectioning(par.layout())) {
+ // There should never be any section here, except for the first paragraph (a title can be part of <info>).
+ // (Just a sanity check: if this fails, this function could end up processing the whole document.)
+ if (cpit != bpit && isLayoutSectioning(par.layout())) {
LYXERR0("Assertion failed: section found in potential <info> paragraphs.");
break;
}
@@ -1102,7 +1104,8 @@ void docbookParagraphs(Text const &text,
// Don't output the ID as a DocBook <anchor>.
ourparams.docbook_anchors_to_ignore.emplace(label->screenLabel());
- // Cannot have multiple IDs per tag.
+ // Cannot have multiple IDs per tag. If there is another ID inset in the document, it will
+ // be output as a DocBook anchor.
break;
}
}
@@ -1136,16 +1139,14 @@ void docbookParagraphs(Text const &text,
}
}
- // Generate this paragraph.
- par = makeAny(text, buf, xs, ourparams, par);
-
+ // Generate the <info> tag if a section was just opened.
// Some sections may require abstracts (mostly parts, in books: DocBookForceAbstractTag will not be NONE),
// others can still have an abstract (it must be detected so that it can be output at the right place).
// TODO: docbookforceabstracttag is a bit contrived here, but it does the job. Having another field just for this would be cleaner, but that's just for <part> and <partintro>, so it's probably not worth the effort.
if (isLayoutSectioning(style)) {
// This abstract may be found between the next paragraph and the next title.
pit_type cpit = std::distance(text.paragraphs().begin(), par);
- pit_type ppit = std::get<1>(hasDocumentSectioning(paragraphs, cpit, epit));
+ pit_type ppit = std::get<1>(hasDocumentSectioning(paragraphs, cpit + 1L, epit));
// Generate this abstract (this code corresponds to parts of outputDocBookInfo).
DocBookInfoTag secInfo = getParagraphsWithInfo(paragraphs, cpit, ppit, true,
@@ -1166,9 +1167,9 @@ void docbookParagraphs(Text const &text,
// Output the elements that should go in <info>, before and after the abstract.
for (auto pit : secInfo.shouldBeInInfo) // Typically, the title: these elements are so important and ubiquitous
// that mandating a wrapper like <info> would repel users. Thus, generate them first.
- makeAny(text, buf, xs, runparams, paragraphs.iterator_at(pit));
+ makeAny(text, buf, xs, ourparams, paragraphs.iterator_at(pit));
for (auto pit : secInfo.mustBeInInfo)
- makeAny(text, buf, xs, runparams, paragraphs.iterator_at(pit));
+ makeAny(text, buf, xs, ourparams, paragraphs.iterator_at(pit));
// Deal with the abstract in <info> if it is standard (i.e. its tag is <abstract>).
if (!secInfo.abstract.empty() && hasStandardAbstract) {
@@ -1178,7 +1179,7 @@ void docbookParagraphs(Text const &text,
}
for (auto const &p : secInfo.abstract)
- makeAny(text, buf, xs, runparams, paragraphs.iterator_at(p));
+ makeAny(text, buf, xs, ourparams, paragraphs.iterator_at(p));
if (!secInfo.abstractLayout) {
xs << xml::EndTag("abstract");
@@ -1202,14 +1203,20 @@ void docbookParagraphs(Text const &text,
xs << xml::StartTag(style.docbookforceabstracttag());
xs << xml::CR();
for (auto const &p : secInfo.abstract)
- makeAny(text, buf, xs, runparams, paragraphs.iterator_at(p));
+ makeAny(text, buf, xs, ourparams, paragraphs.iterator_at(p));
xs << xml::EndTag(style.docbookforceabstracttag());
xs << xml::CR();
}
- // Skip all the text that just has been generated.
- par = paragraphs.iterator_at(ppit);
+ // Skip all the text that has just been generated.
+ par = paragraphs.iterator_at(secInfo.epit);
+ } else {
+ // No <info> tag to generate, proceed as for normal paragraphs.
+ par = makeAny(text, buf, xs, ourparams, par);
}
+ } else {
+ // Generate this paragraph, as it has nothing special.
+ par = makeAny(text, buf, xs, ourparams, par);
}
}
https://mail.python.org/pipermail/python-list/2007-July/435604.html
# execute script in certain directory
Gabriel Genellina gagsl-py2 at yahoo.com.ar
Tue Jul 10 01:45:35 CEST 2007
On Mon, 09 Jul 2007 14:09:40 -0300, Alex Popescu
<the.mindstorm.mailinglist at gmail.com> wrote:
> Interesting. I was wondering about the opposite: being in the parent
> dir, how can I run a module from a package. (the current behavior when
> running python dir_name\module.py is to consider the dir_name the
> current dir and this breaks all imports). I am pretty sure this is
> answered somewhere, but I must confess that so far I haven't been able
> to find it :-(.
python dir_name\module.py does NOT change the current dir. It prepends
dir_name to sys.path, if this is what you mean.
The short answer is: don't place standalone scripts inside a package; see
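A quick way to see both effects (a hypothetical check_paths.py placed inside dir_name and run from the parent directory):

# check_paths.py
import os
import sys

print("cwd:", os.getcwd())          # unchanged: the directory you launched python from
print("sys.path[0]:", sys.path[0])  # the script's own directory (dir_name) was prepended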
https://math.stackexchange.com/questions/4365840/surface-area-of-rotation-of-a-circle-around-a-tangent
# Surface area of rotation of a circle around a tangent
Self-studying integral calculus and I got this problem:
The circle $$x^2+y^2=a^2$$ is rotated around a line tangent to the circle. Find the area of the surface of rotation.
There were a few hints given alongside this question, namely: "Set up coordinate axes and a convenient parametrization of the circle. What does the polar graph $$r=2a\sin(\theta)$$ look like?" I understood the first and last hints since under this new coordinate system, the circle's equation becomes: $${x_n}^2+(y_n-a)^2=a^2$$ Which when converted to polar, gives you the last hint. However I was unable to describe this in terms of parameters, so I decided to take the upper semicircle's surface area of revolution going from $$a$$ to $$-a$$ and multiplying that by 2 to account for the lower semicircle. My integral: $$2\int_{-a}^a 2\pi (\sqrt{a^2-x^2}+a)\sqrt{1+\frac{x^2}{a^2-x^2}} dx$$ Upon simplification: $$4\pi\int_{-a}^a a+\frac{a^2}{\sqrt{a^2-x^2}}dx$$ Evaluation leads me to: $$8\pi a^2 + 4\pi^2 a^2$$
However my book (Serge Lang's First Course in Calculus) gives only $$4\pi^2 a^2$$. Where has my logic gone wrong if I am getting an extraneous term $$8\pi a^2$$?
EDIT for clarity on integral setup: I first rearranged for $$y$$, taking the positive square root since I want the upper semicircle for the surface of revolution about the x-axis. I'll double this to account for the lower semicircle. This gives: $$y=\sqrt{a^2-x^2}+a$$ Using the surface of revolution formula with the derivative $$\frac{dy}{dx}=\frac{-x}{\sqrt{a^2-x^2}}$$ Substituting this into the surface-of-revolution integral yields my first integral in this post (with the factor of 2 applied).
• I do not understand what you are doing. Can you please explain your integral setup? If you are rotating about a tangent, you cannot take volume of rotation of a semicircle and multiply by $2$. Volume is a function of the distance from the axis of rotation. Jan 25 at 16:15
• Also the volume of rotation of a circle around any tangent is going to be the same so choose a tangent that is easy to work with Jan 25 at 16:16
• That is the formula for surface area of revolution around the x-axis. I'll add the setup but I figured out my issue now using the parameters. Jan 25 at 16:18
• ok so are you rotating $x^2 + (y-a)^2 = a^2$ around x-axis? Jan 25 at 16:20
• Hint: Theorem of Pappus – robjohn Jan 25 at 16:51
After changing the coordinates, in effect you are rotating $$x^2 + (y-a)^2 = a^2$$ around x-axis.
The circle is $$x^2 + y^2 = 2 ay$$
$$\displaystyle y' = \frac{x}{a-y}$$
$$\displaystyle ds = \sqrt{1 + (y')^2} ~dx = \frac{a}{|y-a|} ~ dx$$
For lower half -
$$y = a - \sqrt{a^2-x^2}$$
So, $$\displaystyle S_1 = 2 \pi a \int_{-a}^a \frac{a - \sqrt{a^2-x^2}}{\sqrt{a^2-x^2}} ~ dx$$
$$= 2 \pi a^2 (\pi - 2)$$
For upper half -
$$y = a + \sqrt{a^2-x^2}$$
So, $$\displaystyle S_2 = 2 \pi a \int_{-a}^a \frac{a + \sqrt{a^2-x^2}}{\sqrt{a^2-x^2}} ~ dx$$
$$= 2 \pi a^2 (\pi + 2)$$
Adding both, $$S = 4 \pi^2 a^2$$
But it is easier in polar coordinates as I mentioned in comments. The circle is,
$$r = 2a \sin\theta, \quad 0 \leq \theta \leq \pi$$
$$\dfrac{dr}{d\theta} = 2a \cos\theta$$
$$\displaystyle ds = \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2} ~ d\theta = 2a ~ d\theta$$
$$y = 2a\sin^2\theta$$
So the integral is,
$$\displaystyle S = 8 \pi a^2 \int_0^{\pi} \sin^2\theta ~ d\theta = 4 \pi^2 a^2$$
In spite of your obfuscating figure, you are asking for the surface area of a torus whose major radius, $$R$$ (from the axis to the center of the cross-section), and minor radius, $$r$$ (that of the cross-section), are the same. This is well known to be $$S=4\pi^2Rr$$ (see, for example, the CRC Mathematical Tables). So in your case, $$S=4\pi^2a^2$$
We can derive this result with Pappus's ($$1^{st}$$) Centroid Theorem, which states that the surface area $$S$$ of a surface of revolution generated by rotating a plane curve $$C$$ about an axis external to $$C$$ and on the same plane is equal to the product of the arc length $$s$$ of $$C$$ and the distance $$d$$ traveled by its geometric centroid. Simply put, $$S=2πRL$$, where $$R$$ is the normal distance of the centroid to the axis of revolution and $$L$$ is the curve length. In your case, $$R=a$$ and $$L$$ is the circumference of the circle, i.e., $$=2\pi a$$, so that $$S=4\pi^2a^2.$$
### Comment on the Question
I believe that this is the standard setup for surface of revolution about the $$x$$-axis $$\int_{-a}^a\overbrace{\quad2\pi y\quad\vphantom{\frac{a}{\sqrt{a^2}}}}^{\substack{\text{account for}\\\text{revolution}}}\overbrace{\frac{a}{\sqrt{a^2-x^2}}}^{\mathrm{d}s/\mathrm{d}x}\,\mathrm{d}x$$ However, the part for the upper arc of the circle does not give the same area as that for the lower arc of the circle, so we need to compute both separately: \begin{align} &\int_{-a}^a2\pi\left(a+\sqrt{a^2-x^2}\right)\frac{a}{\sqrt{a^2-x^2}}\,\mathrm{d}x\tag{upper}\\ &+\int_{-a}^a2\pi\left(a-\sqrt{a^2-x^2}\right)\frac{a}{\sqrt{a^2-x^2}}\,\mathrm{d}x\tag{lower}\\ &=2\pi\int_{-a}^a2a\,\frac{a}{\sqrt{a^2-x^2}}\,\mathrm{d}x\\[6pt] &=4\pi^2a^2 \end{align}
### Other Approaches
Revolution Around the $$\boldsymbol{x}$$-axis
As I had posted before I realized the question was revolving around the $$y$$-axis, if we revolve around the $$x$$-axis, the upper and lower parts of the surface have the same area, so we can just multiply the upper integral by $$2$$ in this case. Thus, the formula is \begin{align} 2\int_0^{2a}2\pi x\,\frac{a}{\sqrt{a^2-(x-a)^2}}\,\mathrm{d}x &=2\int_{-a}^a2\pi(x+a)\,\frac{a}{\sqrt{a^2-x^2}}\,\mathrm{d}x\tag{1a}\\ &=4\pi a^2\int_{-1}^1(x+1)\frac1{\sqrt{1-x^2}}\,\mathrm{d}x\tag{1b}\\ &=4\pi a^2\int_{-\pi/2}^{\pi/2}(\sin(x)+1)\,\mathrm{d}x\tag{1c}\\[9pt] &=4\pi^2a^2\tag{1d} \end{align} Explanation:
$$\text{(1a)}$$: substitute $$x\mapsto x+a$$
$$\text{(1b)}$$: substitute $$x\mapsto ax$$
$$\text{(1c)}$$: substitute $$x\mapsto\sin(x)$$
$$\text{(1d)}$$: integrate
Parametrization
Parametrize the torus as follows: at each point of the circle around the $$z$$-axis, $$a(\cos(\phi),\sin(\phi),0)$$, put a circle perpendicular to this circle: \begin{align} p(\phi,\theta) &=\overbrace{a(\cos(\phi),\sin(\phi),0)}^\text{primary circle}+\overbrace{a(\cos(\phi)\cos(\theta),\sin(\phi)\cos(\theta),\sin(\theta))}^\text{secondary circle around the primary circle}\\ &=a(\cos(\phi)(1+\cos(\theta)),\sin(\phi)(1+\cos(\theta)),\sin(\theta))\tag{2a}\\[6pt] p_1(\phi,\theta) &=a(-\sin(\phi)(1+\cos(\theta)),\cos(\phi)(1+\cos(\theta)),0)\\ &=a(1+\cos(\theta))(-\sin(\phi),\cos(\phi),0)\tag{2b}\\[6pt] p_2(\phi,\theta) &=a(-\cos(\phi)\sin(\theta),-\sin(\phi)\sin(\theta),\cos(\theta))\tag{2c} \end{align} Thus, we get \begin{align} |p_1(\phi,\theta)\times p_2(\phi,\theta)| &=a^2(1+\cos(\theta))\,|(\cos(\theta)\cos(\phi),\cos(\theta)\sin(\phi),\sin(\theta))|\\ &=a^2(1+\cos(\theta))\tag3 \end{align} and we can compute $$\int_0^{2\pi}\int_0^{2\pi}a^2(1+\cos(\theta))\,\mathrm{d}\phi\,\mathrm{d}\theta=4\pi^2a^2\tag4$$ Theorem of Pappus
As I mentioned in a comment, we can apply the Theorem of Pappus: the primary circle has circumference $$2\pi a$$ and the secondary circle has circumference $$2\pi a$$, so the area is $$(2\pi a)(2\pi a)=4\pi^2 a^2\tag5$$
• The author defined the surface area by this: $$S=\int_a^b 2\pi y\sqrt{1+(\frac{dy}{dx})^2}dx$$ Jan 25 at 18:20
• That would be for revolving about the $x$-axis. It would be useful to mention that. This would require two integrals, one for the part $y\le a$ and one for the part $y\ge a$; you can't simply double the one for the part $y\ge a$ (which has a larger surface area than the part $y\le a$). In my answer, I was revolving about the $y$-axis. With this new information, I will update the first part of my answer. – robjohn Jan 25 at 18:30
• Hmm... it was bothering me why symmetry consideration was not possible here. I didn't realize that the upper semicircle would indeed produce a greater surface area than the lower one until I read your comment. Thanks! Jan 25 at 18:35
• You’re welcome. If you rotate around the $x$-axis, the upper and lower parts are the same, so there is a little less work. I’ve added, and expanded, the part I used to have in my answer when I thought you were rotating around the $x$-axis. – robjohn Jan 26 at 20:44
https://epijim.uk/articles/pub-crawl/
# Optimum pub crawl route through all 71 Cambridge pubs
There are 71 pubs in Cambridge, which means plotting the shortest possible route is going to be difficult. If I wanted to start at home and visit all 71 pubs, there would be $$(72-1)! = 8.5\times 10^{101}$$ potential pub crawl routes. That kind of problem is often called the Travelling Salesman Problem, which was a popular English parlour game in the 1800s1. Pub crawls tend to be one-way routes though, so I need to solve for a Hamiltonian path through the pubs.
While there is an R package called TSP devoted to solving these problems, I took a simplistic approach based off a Shiny app I hope to replicate at a later date.
## The route
The plot below is a 30.3km route through all 71 pubs, starting at the Wrestlers and ending at the Lord Byron, that was derived in about 40 seconds via simulated annealing.
## Solving the problem
Simulated annealing is a technique inspired by annealing in metallurgy. In solving my pub crawl problem, the algorithm will initially be skewed toward accepting longer routes, and as it continues to iterate it will slowly 'cool' and become more and more likely to choose a shorter route. This helps ensure the algorithm doesn't get caught in a local optimum early. Sadly - just eyeballing - it appears the plot above is probably a local optimum. It looks like some travel time could be saved if the route visited the Mitre and Baron of Beef from the Pickerel, instead of its current diversion from the Brewhouse. I reran the model multiple times, and it usually found a route between 30 and 36km.
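For reference, the acceptance rule driving that 'cooling' is the standard Metropolis form (the post doesn't spell out its exact schedule, so this is just the textbook version): a shorter candidate route is always accepted, while a longer one is accepted with probability

$$P(\text{accept}) = \begin{cases} 1 & \Delta d \le 0 \\ e^{-\Delta d / T_k} & \Delta d > 0 \end{cases} \qquad T_{k+1} = \gamma\, T_k, \quad 0 < \gamma < 1$$

where $$\Delta d$$ is the change in route length and $$T_k$$ is the temperature at iteration $$k$$.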
The gif below is another run, but it shows the map, the cooling curve, the distance travelled, and a histogram of all the distances recorded across the iterations as the model runs. This example also illustrates how the algorithm can fail. Here it’s pretty obvious that starting at the Lord Byron Inn and ending at the Tally Ho is not going to be the optimum route.
## My favourite pubs
The last gif is 5.6km route through my favourite pubs,
1. The Cambridge Brew House
2. The Eagle Public House
3. The Regal
4. The Flying Pig
5. The Cambridge Blue
6. The Elm Tree
7. Old Spring Public House
8. King Street Run Public House
Once again, a quick visual inspection shows that some travel time could be saved with minor tweaks to the last graph. While I still have long-term plans to make a Cambridge interactive pub crawl app via R (and Shiny), for now the following XKCD plot sums up what this has taught me - I just found a really complicated way to plot a pub crawl route which can be easily beaten by a person, as long as you're planning on visiting fewer than 20-ish pubs.
## My function
I wrapped up the code2 in a function, which can be called from github.
install.packages('devtools')
library('devtools')
source_gist("https://gist.github.com/epijim/8f4be4dae598e479add0")
ggmap is required to run this function.
### Input variables
v_pubs - either a list of pubs c(pub1,pub2) or a dataset with the latitude in the first column and the longitude in the second. If feeding in a dataset, you also need to set cam_pubs=FALSE.
crow_distances - defaults to FALSE. If set to FALSE, and cam_pubs=TRUE (the default), the function will calculate the best route using Google-maps-based distance or walking time. If crow_distances=TRUE, the function will use straight-line distances (taking into account the Earth's curvature). If cam_pubs=FALSE, the function will always use straight-line (as-the-crow-flies) distances.
units - Defaults to "minutes", which makes the function calculate the route based on the Google maps derived walking time. Can also be set to "metres", which will make the function use and report the distance of the crawl in metres based on Google maps directions. This option is only evaluated if crow_distances=FALSE and you are using the inbuilt pub data.
v_location - defaults to "Cambridge, UK". This value is given to ggmap when pulling the base map. Only really needed if feeding in a different dataset. If using a different basemap, v_zoom will allow the zoom on the base map to be set.
listpubs - defaults to FALSE. If set to TRUE the function will print the list of pubs and then exit the function (ignoring all other options and not running the model). I added this as the pub names need to be entered in perfectly into v_pubs for it to work.
### Use
The function, when loaded, will pull data from another gist which has the pubs in it. You can see the names of the pubs by typing jb_pubdistance(listpubs=T).
A typical call to the function would be:
results <- jb_pubdistance(v_pubs=c("The Maypole P.H.","The Eagle Public House",
"Pickerel Inn","Baron Of Beef"))
In the example above, we will get the default format for the results, which is based on what google claims is the default walking time. See Input variables above on how to feed in custom data, or get back the distance in metres in actual walking routes, or as the crow flies.
So by setting <-, we created an object called results. The following results are stored.
• results$distance - the distance of the pub crawl
• results$pubs_inorder - the pubs in order. If using my pubs data, it will give some info on the pubs. If feeding in custom lats and lons, it will be the original dataset in trip order.
• results$temperature - the temperature values used over the iterations
The function will also return a plot showing the route.
1. The Icosian game was a peg based game invented in 1857.
2. This function just gives the final route, not the gifs of how the model was fit.
##### James Black
Epidemiologist and data scientist.
https://mathtuition88.com/2015/08/02/measure-theory-what-does-a-e-almost-everywhere-mean/
## Measure Theory: What does a.e. (almost everywhere) mean
Source: Elements of Integration by Professor Bartle
Students studying Mathematical Analysis, Advanced Calculus, or probability would sooner or later come across the term a.e. or “almost everywhere”.
In layman’s terms, it means that the proposition (in the given context) holds in all cases except on a certain subset which is very small. For instance, if f(x)=0 for all x, and g(x)=0 for all nonzero x but g(0)=1, then the functions f and g are equal almost everywhere.
More formally, a certain proposition holds $\mu$-almost everywhere if there exists a subset $N\in \mathbf{X}$ with $\mu (N)=0$ such that the proposition holds on the complement of N. Here $\mu$ is a measure defined on the measure space $\mathbf{X}$, which is discussed in a previous blog post: What is a Measure.
Two functions $f, g$ are said to be equal $\mu$-almost everywhere when $f(x)=g(x)$ when $x\notin N$, for some $N\in X$ with $\mu (N)=0$. In this case we would often write $f=g$, $\mu$-a.e.
Similarly, this notation can be used in the case of convergence, for example $f=\lim f_n$, $\mu$-a.e.
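For a concrete illustration (a standard example, not taken from Bartle's text): on $[0,1]$ with Lebesgue measure, the functions $f_n(x)=x^n$ converge to the zero function $\mu$-a.e., since $\lim f_n(x)=0$ for every $x\in [0,1)$ and the exceptional set $N=\{1\}$ satisfies $\mu (N)=0$.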
The idea of “almost everywhere” is useful in the theory of integration, as there is a famous Theorem called “Lebesgue criterion for Riemann integrability”.
(From Wikipedia)
A function on a compact interval [a, b] is Riemann integrable if and only if it is bounded and continuous almost everywhere (the set of its points of discontinuity has measure zero, in the sense of Lebesgue measure). This is known as the Lebesgue’s integrability condition or Lebesgue’s criterion for Riemann integrability or the Riemann—Lebesgue theorem.[4] The criterion has nothing to do with the Lebesgue integral. It is due to Lebesgue and uses his measure zero, but makes use of neither Lebesgue’s general measure or integral.
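Two standard examples (again, not from the quoted source) show how sharp this criterion is: Thomae's function, which is discontinuous exactly on $\mathbb{Q}\cap [a,b]$, a set of measure zero, is Riemann integrable, while the Dirichlet function $\mathbf{1}_{\mathbb{Q}}$, which is discontinuous everywhere, is not.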
http://tex.stackexchange.com/questions/172812/what-is-line-parsing-enabled
# What is “%&-line parsing enabled”?
My latex logs begin with
%&-line parsing enabled
What does it mean?
Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. – Jubobs Apr 22 '14 at 12:10
What do you find missing? A minimal working example? (any attempt generates this log message) Or perhaps my question is just too short? – Bach Apr 22 '14 at 12:15
Don't be alarmed. Your question is fine. That message is just our way of welcoming a new user (see this). – Jubobs Apr 22 '14 at 12:17
@Jubobs although the fact that you need to explain that implies that it's an unnecessarily negative opening comment I think. The text block ought to be changed (if it is going to be used). – David Carlisle Apr 22 '14 at 12:37
Some TeX engines (e.g. based on Web2C) have the feature that the format file can also be specified in the first line of the document, e.g.:
%&pdflatex
\documentclass{article}
\begin{document}
Hello World
\end{document}
This specifies the format pdflatex.fmt for the engine pdftex. The engine still needs to be specified, e.g. pdftex, pdflatex, latex, but the format is taken from the first line of the document. Thus the document above can also be compiled with
pdftex test
or
latex test
In both cases the LaTeX format is used and PDF is generated.
From the manual page of TeX:
-parse-first-line
If the first line of the main input file begins with %& parse it to look for a dump name or a -translate-file option.
-no-parse-first-line
Disable parsing of the first line of the main input file.
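For instance, assuming a Web2C-based distribution, first-line parsing can be disabled explicitly when compiling the example above:

pdftex -no-parse-first-line test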
%& parsing is an option which allows one to indicate which TeX variant a particular file requires, e.g., %& eplain
https://mathematica.stackexchange.com/questions/247079/generating-a-weighted-oriented-lattice
# Generating a weighted oriented lattice
I have a periodic lattice, and I would like to associate phase parameters with the links to neighboring sites; these can be thought of as hopping parameters.
Now the choice of these phase parameters is not completely arbitrary. The question is all about the choice of these parameters for a particular lattice.
I illustrate my point with an example. Let us consider for simplicity a square lattice with 25 sites, as shown below.
The above lattice can also be thought of as a graph whose edges connect neighboring nodes. The condition on the phase parameters can be seen by choosing a plaquette (or face of the graph). Let us consider the plaquette formed by lattice sites 1, 2, 9, 8, and associate a parameter to each link (or edge). If a particle hops (or goes) from 1 to 2, it is $$b$$; if it goes from 2 to 9, it is $$c$$; if it goes from 9 to 8, it is $$d$$; and if it goes from 8 back to 1, it is $$a$$. These phase parameters should add up to a value, say $$\alpha$$, i.e. $$\alpha = b + c + d + a$$. Most importantly, the direction (or orientation) of the particle hopping is crucial. If we go from, say, 1 to 8, then the phase parameter is $$-a$$, and so on. This whole idea of orientation is illustrated in the figure below, where we consider clockwise orientation. The system has periodic boundary conditions, shown by the colored boundaries; the same color corresponds to the same boundary. This phase parameter choice should still be respected there, as shown for plaquettes 4-5-16-15 and 7-8-23-22.
Coding part (my logic):
1. I have a matrix $$M_{hop}$$ that generates the above lattice or any lattice of size $$N$$, where $$N$$ is the number of nodes or lattice sites. In the above case, it is 25. Then $$M_{hop}$$ is $$N\times N$$.
2. Then, I can find all the plaquettes or faces of the periodic lattice using FindCycle.
3. Orienting all the cycles (clockwise in the above case) in such a way that the above-discussed condition is met.
4. Then we have a set of equations, one per plaquette in the lattice. They can be solved using FindInstance; since many possible solutions might exist, any one is fine. In the above case, these equations were $$\alpha = b + c + d + a$$, $$\alpha = e + f + g - a$$, $$\alpha = h + i + j - g$$, $$\alpha = k + l + m - i$$, and so on. Thus, the parameters around each plaquette, taken with the right orientation, should add up to $$\alpha$$.
5. Then, I will have a new $$M_{hop}$$, which is a function of $$\alpha$$ only, with no free parameters left.
My MWE:
mhop={{0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0}, {1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0}, {0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0}, {1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0}, {1, 0, 0, 0, 1, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0,
1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0}, {1, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0}, {0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 1}, {0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0,
1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0}, {0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0,
0}, {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0,
1, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1,
0, 0, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 1, 0, 1, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0,
0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 1, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1}, {0, 0, 0,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0,
0}, {0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
1, 0, 1, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 1, 0}};
FindCycle[AdjacencyGraph@mhop, {4}, All]
I have no idea how to take the orientation into account in a general way, meaning for any kind of lattice with any number of sides and any lattice numbering. My guess is to use ConnectedComponents, mostly used for graph problems.
• I have an attempt - I'm a little unsure if you want to use a graph or matrix? If matrix - when you choose a particle to hop, you can multiply the parameter by the difference between the particle site indices ~ if it moves left it will be *-1 Jun 2, 2021 at 21:54
• cool problem!! to be clear, mhop is an adjacency matrix? also, will you ever have non-grid-like lattices? if not, @Teabelly 's implicitly-mentioned matrix encoding, e.g. mhop = {{17, 16, 15, ...}, ...} might offer speedups. (periodicity can be achieved without too much fuss.) Jun 2, 2021 at 23:08
• I have a solution, but it depends on how you want to implement step 3 in your setup, since as you know FindCycle doesn't know "which orientation" to find the plaquette in. That is, if you naively try to turn one of the resulting cycles into an equation, you don't know whether the corresponding sum should be equal to $\alpha$ or $-\alpha$. I think you might need a different data structure that implicitly encodes orientation, or a way of determining the orientation of each cycle. So, this depends on whether you can stick to grid-like lattices or need more generality! Jun 2, 2021 at 23:55
• oh wait. I just realized that for the periodic lattice given, it will be mathematically impossible to find a set of such weights if $\alpha \neq 0$. Consider the sum of the edge weights over all plaquettes. if there are $n$ plaquettes, then this sum should be $n\alpha$. But since for every term in this sum, the negative of the term also appears in the sum (via the other plaquette), this sum must be 0, and so $n\alpha=0$. Unless we're working in an atypical ring, e.g. one of nonzero characteristic, we must have $\alpha=0$. Is $\alpha$ allowed to vary with each plaquette? Jun 3, 2021 at 4:53
• As a lattice field theorist it’s remarkable to watch non-experts begin to redevelop the field from scratch in these comments 😂. @Shamina, do you always have a regular (or at least structured), planar, 2D graph? Don’t use mma to rediscover the grid structure (plaquettes+orientations)—determine that structure given N. The key to making this easy is to number the sites not in a spiral but in an easier order, like left-to-right top-to-bottom. If you’re planning on large-scale simulations that choice will also make your eventual C/FORTRAN MPI communications addressing much simpler. Jun 5, 2021 at 5:31
To generate a graph
graph[n_] := Graph[Flatten[
Table[{Nest[Reverse, {i, j} \[DirectedEdge] {Mod[i + 1, n + 1], j},
Mod[1 + i + j, 2]],
Nest[Reverse, {i, j} \[DirectedEdge] {i, Mod[j + 1, n + 1]},
Mod[i + j, 2]]}, {i, 0, n}, {j, 0, n}], 2],
VertexCoordinates -> Flatten[Table[{i,
j} -> {Cos[2 \[Pi] i/(n + 1)] (2 + Cos[2 \[Pi] j/(n + 1)]),
Sin[2 \[Pi] i/(n + 1)] (2 + Cos[2 \[Pi] j/(n + 1)]),
Sin[2 \[Pi] j/(n + 1)]}, {i, 0, n}, {j, 0, n}], 1]]
limitedsymboltable[l_List] := Transpose@{l,Symbol /@
Join[Alphabet[],StringJoin /@ Partition[Flatten[
Riffle[Alphabet[], #] & /@ Alphabet[]],2]][[;; Length@l]]}
so your example would be something like GraphPlot[graph@3, EdgeLabels -> Rule @@@ limitedsymboltable@EdgeList@graph@3].
By the way, limitedsymboltable is unsafe since what if those symbols are already being used... Also it only supports lists with up to 689 elements; generality is difficult.
Anyway, let's make a function to construct those equations. I'm not sure about FindCycle, so I'll just construct all the plaquettes explicitly
squares[n_] :=
Join @@ Table[{{i, j}, {i, Mod[j + 1, n + 1]}, {Mod[i + 1, n + 1],
Mod[j + 1, n + 1]}, {Mod[i + 1, n + 1], j}}, {i, 0, n}, {j, 0,
n}]
These squares have predictable orientations. Now
sign[graph_, v1_, v2_] :=
Count[EdgeList@graph, v1 \[DirectedEdge] v2] -
Count[EdgeList@graph, v2 \[DirectedEdge] v1]
and
eqns[n_] :=
 With[{g = graph@n, s = squares@n},
  With[{st = limitedsymboltable[List @@@ EdgeList@g]},
   With[{e2s = AssociationThread @@
       Transpose@Join[st, {Reverse@#[[1]], #[[2]]} & /@ st]},
    (* e2s maps an ordered vertex pair {u, v} to its edge symbol; both
       orientations share a symbol, with direction handled by sign[] *)
    {"equations" -> (\[Alpha] ==
         e2s@{#[[1]], #[[2]]} sign[g, #[[1]], #[[2]]] +
          e2s@{#[[2]], #[[3]]} sign[g, #[[2]], #[[3]]] -
          e2s@{#[[3]], #[[4]]} sign[g, #[[3]], #[[4]]] -
          e2s@{#[[4]], #[[1]]} sign[g, #[[4]], #[[1]]] & /@
        squares@n),
     "bidiedgestosymbols" -> e2s}]]]
I must admit, eqns is a bit nasty. Works though: "equations"/.eqns@3 yields
{\[Alpha] == -a + b + c - j, \[Alpha] == c - d - e + l,
\[Alpha] == -e + f + g - n, \[Alpha] == -a + g - h + p,
\[Alpha] == i - j - k + r, \[Alpha] == -k + l + m - t,
\[Alpha] == m - n - o + v, \[Alpha] == i - o + p - x,
\[Alpha] == -q + r + s - z, \[Alpha] == ba + s - t - u,
\[Alpha] == -da - u + v + w, \[Alpha] == fa - q + w - x,
\[Alpha] == -aa + b + y - z, \[Alpha] == -aa + ba + ca - d,
\[Alpha] == ca - da - ea + f, \[Alpha] == -ea + fa - h + y}
Now to solve them for a particular $$\alpha$$
FindInstance["equations"/.#/.\[Alpha]->5,Values["bidiedgestosymbols"/.#]]&@eqns@3
Unfortunately, this gives trivial solutions where most of the edges are zero. We could try to find useful relations with
Reduce["equations"/.#,Append[Values["bidiedgestosymbols"/.#],\[Alpha]]]&@eqns@3
but at this point I think I'm using Mathematica in a slightly strange and perverted way.
I can't resist including a way of numbering vertices in a spiral as you have done:
squarespiral[n_, j_ : -1] := Graph[Range@((2 n - 1)^2),
Join[# \[UndirectedEdge] # + 1 & /@ Range[4 n (n - 1)],
2 + # - 2 \[LeftCeiling]Sqrt[#]\[RightCeiling] +
Mod[\[LeftCeiling]2 Sqrt[#]\[RightCeiling], 2]
\[UndirectedEdge]
2 + # - 2 \[LeftCeiling]Sqrt[#]\[RightCeiling] +
Mod[\[LeftCeiling]2 Sqrt[#]\[RightCeiling], 2] +
2 \[LeftFloor]Sqrt[4 # - 3]\[RightFloor] + 1 & /@
Range[4 (n - 1)^2]][[;; Mod[j, 8 n^2 - 12 n + 5]]]]
This has 1 connected to 2 connected to etc. in a spiral, and then fills in the 'rail road ties' with some not-too-complicated math. Let's animate that explanation by simply incrementing j:
• Thanks a lot for this nice answer! But the square lattice was a very simple example of my actual case; for research reasons I was unable to give my actual mhop. It is a bit more complicated than our beautiful square lattice, but it is still a lattice. I think some of the aspects cannot be straightforwardly generalized, at least the plaquettes, which is why it is easier for me to read them from the mhop directly. Jun 4, 2021 at 17:17
It seems you'll want to generalize the underlying graph anyway, so I'll use a toroidal square lattice for illustration purposes, since it's simple to construct and very similar to your periodic grid (and in particular has 4-cycles defining plaquettes):
ToroidalGraph[n_] := Graph@Flatten@Table[{v[i, j] \[UndirectedEdge] v[i, Mod[j + 1, n, 1]], v[i, j] \[UndirectedEdge] v[Mod[i + 1, n, 1], j]}, {i, n}, {j, n}]
n=5;
g = ToroidalGraph@n
The vertices are named v[i,j] where i is the row number and j is the column number. You could define this function to number them from 1 to n^2, but we don't really need to do that for what follows. In any case, you should be able to drop in here any (undirected) graph of your choice.
We'll be using FindCycle to find 4-cycles defining plaquettes. This function returns paths, i.e. lists of edges, so we need a function to build equations from paths:
BuildEquation[\[Alpha]_][path_] := \[Alpha] == Sum[c[edge], {edge, path}]
This function takes the target value of alpha and a path, and sums the costs of all edges in the path, requiring them to add up to alpha. How can we tell if the edge is traversed in the "positive" or the "negative" direction? Because this is entirely conventional, we'll define c[u \[UndirectedEdge] v] to include a negative sign if u > v:
c[u_ \[UndirectedEdge] v_] /; ! OrderedQ@{u, v} := -c[v \[UndirectedEdge] u]
(Note that our nodes are not numbered, but we can still use OrderedQ to determine if their labels are in a canonical order. Also note that you can replace OrderedQ by any function you want to use to determine where minus signs should appear in your equations, if you have particular preferences.)
The variables in this problem are then the costs of traversing each edge:
vars = c /@ Sort /@ EdgeList@g;
Note that we sort the (undirected) edges to make sure our variable list contains no minus signs (you'll need to be careful here if you replace OrderedQ above!).
We'll get the equations from FindCycle as previously mentioned. In this graph there are n^2 nodes and 2n^2 edges, with each plaquette having 4 edges and each edge contributing to 2 plaquettes, resulting in n^2 plaquettes. For reasons mentioned in the comments, no solution should exist for non-zero alpha, and Mathematica's FindInstance will default to the trivial solution for vanishing alpha, so for illustration purposes we'll take alpha equal to 1 and omit one equation to have a feasible system:
eqs = BuildEquation[1] /@ FindCycle[g, {4}, n^2-1];
Then we get
FindInstance[eqs, vars, Integers]
(*{{c[v[1, 1] \[UndirectedEdge] v[1, 2]] -> 0,
c[v[1, 1] \[UndirectedEdge] v[2, 1]] -> 0,
c[v[1, 2] \[UndirectedEdge] v[1, 3]] -> 0,
c[v[1, 2] \[UndirectedEdge] v[2, 2]] -> 0,
c[v[1, 3] \[UndirectedEdge] v[1, 4]] -> 0,
c[v[1, 3] \[UndirectedEdge] v[2, 3]] -> 0,
c[v[1, 4] \[UndirectedEdge] v[1, 5]] -> 0,
c[v[1, 4] \[UndirectedEdge] v[2, 4]] -> 0,
c[v[1, 1] \[UndirectedEdge] v[1, 5]] -> 0,
c[v[1, 5] \[UndirectedEdge] v[2, 5]] -> 0,
c[v[2, 1] \[UndirectedEdge] v[2, 2]] -> 1,
c[v[2, 1] \[UndirectedEdge] v[3, 1]] -> 0,
c[v[2, 2] \[UndirectedEdge] v[2, 3]] -> 1,
c[v[2, 2] \[UndirectedEdge] v[3, 2]] -> 0,
c[v[2, 3] \[UndirectedEdge] v[2, 4]] -> 1,
c[v[2, 3] \[UndirectedEdge] v[3, 3]] -> 0,
c[v[2, 4] \[UndirectedEdge] v[2, 5]] -> 1,
c[v[2, 4] \[UndirectedEdge] v[3, 4]] -> 0,
c[v[2, 1] \[UndirectedEdge] v[2, 5]] -> 1,
c[v[2, 5] \[UndirectedEdge] v[3, 5]] -> 0,
c[v[3, 1] \[UndirectedEdge] v[3, 2]] -> 2,
c[v[3, 1] \[UndirectedEdge] v[4, 1]] -> 0,
c[v[3, 2] \[UndirectedEdge] v[3, 3]] -> 2,
c[v[3, 2] \[UndirectedEdge] v[4, 2]] -> 0,
c[v[3, 3] \[UndirectedEdge] v[3, 4]] -> 0,
c[v[3, 3] \[UndirectedEdge] v[4, 3]] -> 0,
c[v[3, 4] \[UndirectedEdge] v[3, 5]] -> 0,
c[v[3, 4] \[UndirectedEdge] v[4, 4]] -> 0,
c[v[3, 1] \[UndirectedEdge] v[3, 5]] -> 2,
c[v[3, 5] \[UndirectedEdge] v[4, 5]] -> 0,
c[v[4, 1] \[UndirectedEdge] v[4, 2]] -> 1,
c[v[4, 1] \[UndirectedEdge] v[5, 1]] -> 0,
c[v[4, 2] \[UndirectedEdge] v[4, 3]] -> 1,
c[v[4, 2] \[UndirectedEdge] v[5, 2]] -> 0,
c[v[4, 3] \[UndirectedEdge] v[4, 4]] -> -1,
c[v[4, 3] \[UndirectedEdge] v[5, 3]] -> 0,
c[v[4, 4] \[UndirectedEdge] v[4, 5]] -> 1,
c[v[4, 4] \[UndirectedEdge] v[5, 4]] -> 0,
c[v[4, 1] \[UndirectedEdge] v[4, 5]] -> 1,
c[v[4, 5] \[UndirectedEdge] v[5, 5]] -> 0,
c[v[5, 1] \[UndirectedEdge] v[5, 2]] -> 2,
c[v[1, 1] \[UndirectedEdge] v[5, 1]] -> 0,
c[v[5, 2] \[UndirectedEdge] v[5, 3]] -> -1,
c[v[1, 2] \[UndirectedEdge] v[5, 2]] -> 3,
c[v[5, 3] \[UndirectedEdge] v[5, 4]] -> 0,
c[v[1, 3] \[UndirectedEdge] v[5, 3]] -> 1,
c[v[5, 4] \[UndirectedEdge] v[5, 5]] -> 2,
c[v[1, 4] \[UndirectedEdge] v[5, 4]] -> 2,
c[v[5, 1] \[UndirectedEdge] v[5, 5]] -> 2,
c[v[1, 5] \[UndirectedEdge] v[5, 5]] -> 3}}*)
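As a quick sanity check (optional, not from the original thread), substituting this instance back into the equations should return only True:
eqs /. First@FindInstance[eqs, vars, Integers]
(* {True, True, ..., True} *)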
Hopefully this gets you started in the right direction :-)
• Thanks a lot, Fidel! I took the time to go through your code deeply, trying to understand it better, as it seems to be closest to my requirements. However, I think c[u_ \[UndirectedEdge] v_] cannot be generalized to more general cases? E.g., consider the icosahedron, i.e., g=GraphData["IcosahedralGraph"]; then we do not find any solutions. I guess it is related to the definition of c maybe? Jun 7, 2021 at 19:14
• You're welcome, @Shamina! I believe the definition of c[u_ \[UndirectedEdge] v_] is quite general: at the most abstract level, we are assigning a value to each edge of the graph when traversed in the u->v direction, and its negative when traversed in the v->u direction; we are then writing equations for some paths on the graph, and in each path the edges are traversed in one direction or the other, but there's no other option... Jun 8, 2021 at 7:26
• When changing the graph g, there are two basic ingredients we need to consider, both appearing in the definition of eqs: one is the length of the cycles we are looking for, and the other is the number of equations we want to pose. The natural choice for g=GraphData["IcosahedralGraph"] is to take cycles of length 3, of which there are 20; if we take all 20 equations, however, we'll find no solutions for non-vanishing alpha, so we can drop one equation and take 19, i.e. eqs = BuildEquation[1] /@ FindCycle[g, {3}, 19], and this is enough to find non-trivial solutions... Jun 8, 2021 at 7:32
• Finally, let me point out that there is an intrinsic arbitrariness in the way the equations are built, since in an undirected graph you can reverse any cycle at the cost of a minus sign in the right hand side of your equation. This suggests that non-vanishing values of alpha shouldn't be "physical". For planar graphs you can of course make some arbitrary rule to fix directions, e.g. turn clockwise around each face, but this does not extend to general graphs, and more importantly means there will be no solutions with non-zero alpha (because each edge will appear once in each direction). Jun 8, 2021 at 7:49
• I checked your code with many graphs, it works fabulously! I have one very last question: is it possible to express the solution of the edges in terms of α? Like c[1-2] = k1 α, and so on for each of the edges, where the k1s are some constants after solving in terms of α. Is there a way to do that? Jun 8, 2021 at 19:04
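One way to get that (a sketch building on the definitions above, not from the original thread): rebuild the equations with a symbolic \[Alpha] and let Solve express the dependent edge costs in terms of \[Alpha] and the remaining free ones; setting the free costs to zero then leaves solutions of the pure form c[...] -> k \[Alpha].
eqsSym = BuildEquation[\[Alpha]] /@ FindCycle[g, {4}, n^2 - 1];
solSym = First@Solve[eqsSym, vars] (* linear in \[Alpha]; Solve warns that some costs remain as free parameters *)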
I like to start with the foundations of Wolfram Mathematica and the Wolfram Language. Graphs have been part of this from the very beginning, and there have been a lot of changes up to the most modern versions.
As can be found in the documentation of GridGraph, an option makes a GridGraph directed:
GridGraph[{Subscript[n, 1],Subscript[n, 2],\[Ellipsis],Subscript[n, k]},DirectedEdges->True] gives a directed grid graph.
So your much-appreciated question targets a built-in directly.
I did some research for the question and found "Given a lattice graph without spatial embedding, how to identify the location of nodes in the lattice", which at first sight is closely related.
What you wish to do is change this canonical enumeration into one centered on a (most probably hand-picked) vertex, renumbering the graph in spiraling form. You do not intend to stick to an equilateral graph, but prefer to introduce a, b, c, d as independent form parameters.
The built-in allows you to assign an arbitrary vertex enumeration and a random boundary length to each side of each grid square, as long as they still build a grid:
GridGraph[{5, 5}, VertexLabels -> "Name",
VertexCoordinates ->
Flatten[Table[{j + RandomReal[{-0.1, 0.1}],
5 - i + RandomReal[{-0.1, 0.1}]}, {i, 5}, {j, 5}], 1],
DirectedEdges -> True]
The spiral offered by @adam is based only on edge enumeration. That is a beautiful simplification and leaves to Mathematica the internals required by the set {a, b, c, d} shown in your question's simplifying pictures. This preferentially demands a solution in vertex coordinates.
A notebook on the PrimeSpiral by Eric Weisstein, and the disambiguation page Spirals above it, show the problem of deciding which spiral type is meant if only an example is given. A solution can be found at VertexCoordinates:
spiral[n_] :=
Table[(1/2 (-1)^ # ({1, -1} (Abs[#^ 2 - t] - #) + #^ 2 - t -
Mod[#, 2]) &)[Round[Sqrt[t]]], {t, 0, n}]
and in a pimped version
CycleGraph[50, VertexCoordinates -> spiral[49], DirectedEdges -> True,
VertexLabels -> "Name"]]
Mind that this is a closed (cycle) graph, which keeps the math nicer.
spiralpertubed[n_] :=
Table[(1/2 (-1)^# ({1, -1} (Abs[#^2 - t] - #) + #^2 - t -
Mod[#, 2]) &)[Round[Sqrt[t]]], {t, 0, n}] +
Table[{+RandomReal[{-0.1, 0.1}], +RandomReal[{-0.1, 0.1}]}, {t, 0,
n}]
CycleGraph[50, VertexCoordinates -> spiralpertubed[49],
DirectedEdges -> True, VertexLabels -> "Name"]
This is much more complicated than using just the set of constants, but it already gives an individual shape to each cell (formerly a square), as in your last picture.
The generalization to square grid graphs proceeds via
ss0 = squarespiral@3
vc0 = AbsoluteOptions[ss0, VertexCoordinates][[1, 2]]
squarespiralpertubed[n_, vc1_, j_: - 1] :=
Graph[Range@((2 n - 1)^2),
Join[# \[UndirectedEdge] # + 1 & /@ Range[4 n (n - 1)],
2 + # - 2 \[LeftCeiling]Sqrt[#]\[RightCeiling] +
Mod[\[LeftCeiling]2 Sqrt[#]\[RightCeiling],
2] \[UndirectedEdge]
2 + # - 2 \[LeftCeiling]Sqrt[#]\[RightCeiling] +
Mod[\[LeftCeiling]2 Sqrt[#]\[RightCeiling], 2] +
2 \[LeftFloor]Sqrt[4 # - 3]\[RightFloor] + 1 & /@
Range[4 (n - 1)^2]][[;; Mod[j, 8 n^2 - 12 n + 5]]],
VertexCoordinates -> vc1, VertexLabels -> "Name"]
example:
squarespiralpertubed[3, vc0]
The option VertexCoordinates allows perturbing the square positions flexibly.
An individual color can only be assigned with a static color list like
esf0 = {#, RandomColor[], Thick} & /@ EdgeList[%29]
where %29 refers to the output of squarespiralpertubed above.
squarespiralpertubedg[n_, vc1_, esf0_, jj_: - 1] :=
Graph[Range@((2 n - 1)^2),
Join[# \[UndirectedEdge] # + 1 & /@ Range[4 n (n - 1)],
2 + # - 2 \[LeftCeiling]Sqrt[#]\[RightCeiling] +
Mod[\[LeftCeiling]2 Sqrt[#]\[RightCeiling],
2] \[UndirectedEdge]
2 + # - 2 \[LeftCeiling]Sqrt[#]\[RightCeiling] +
Mod[\[LeftCeiling]2 Sqrt[#]\[RightCeiling], 2] +
2 \[LeftFloor]Sqrt[4 # - 3]\[RightFloor] + 1 & /@
Range[4 (n - 1)^2]][[;; Mod[jj, 8 n^2 - 12 n + 5]]],
VertexCoordinates -> vc1, VertexLabels -> "Name", EdgeStyle -> esf0]
Your mhop matrix must be turned into a graph for the built-in FindCycle; direct use of the matrix is not permissible in Mathematica. A possible path is
AdjacencyGraph[mhop]
Note that mhop is an adjacency matrix, not a valid incidence matrix.
FindCycle[AdjacencyGraph[mhop], {4}, All]
These are all cycles of four vertices. The graph looks very much like the one from @fidel-i-schaposnik! The four-cycles are all given in the pictures of your question. The order of the cycles runs from the outside to the innermost vertex. So the given mhop matrix represents the spiral very well, if not exactly, independent of the vertex coordinates and the individual set {a, b, c, d}.
There is nothing to be visualized for that.
{VertexCount[%3], EdgeCount[%3]}
{25, 50}
The interpretation of the adjacency matrix is that each edge is represented in a binary logical manner, with a 1 if there is an edge between the vertices numbered by the column and row indices. The built-in AdjacencyGraph transforms that matrix back into a graph. For bigger n this becomes very large. There is no crisper way than that given by @adam to describe the vertices in this grid-graph fashion; that is why I started with an improvement of his answer. From AdjacencyMatrix you can connect the grid-graph spiral enumeration to the $$M_{hop}$$ matrix of your question. Your $$\alpha$$ is always 4 because of the edge length a selected by the visualization engine for graphs in the notebooks of Mathematica or the Wolfram Cloud.
Only when absolute edge lengths are given in the graph description does the situation change towards an $$\alpha$$, or even an $$\alpha_{i}$$ for each square or cell. But Mathematica is not that flexible; it takes some effort to produce nice graphs again once absolute lengths are selected and entered. Mathematica has some poor fame for not drawing graphs very accurately, or even to scale.
Some experience can be gathered by searching this Stack Exchange site, for example drawing a graph with specified edge lengths. @halmir calls this behaviour approximate, using EdgeWeight. This works even with symbolics and is well behaved.
As can be seen at WeightedAdjacencyMatrix, even with arbitrarily selected weights Mathematica only shows the adjacency graph, and it is best to write the weights on the edges. Since there are only four edges in each row, a row sum of the WeightedAdjacencyMatrix will be your $$\alpha_{i}$$.
It is no more complicated than this. I leave the deeper impression of WeightedAdjacencyMatrix to the Mathematica documentation. Mathematica presents, in contrast, GraphLayout options; the use of VertexCoordinates is cleaner, visually crisper and more flexible.
I hope this helps, and maybe you are ready for an iterative and interactive step to improve my answer. I hope a fair cohesion and coherence to this site and your question is achieved. Thanks a lot.
To get some impression of how I perceive your question and connect it to previous similar questions from others, work yourself through the answers found by Googling: graphs with absolute edge lengths mathematica, or multidimensional optimization in Mathematica.
• Thanks a lot for your solution and time! I guess you more or less mentioned the important side questions connected to my main question; that is helpful! However, I think it will need a good amount of reading. Also, I knew about some of them, but my understanding is that they are not very much related to my main problem. But I can be wrong. Jun 7, 2021 at 19:20
|
http://mathoverflow.net/questions/78899/if-you-take-the-closure-of-two-smooth-varieties-and-then-take-their-intersectio
|
# If you take the closure of two smooth varieties and then take their intersections, is the singular locus still small?
Let $X, Y \subset \mathbb{P}^N$ be two non-singular algebraic varieties of dimensions $k$ and $l$ that intersect transversally. Is it true that the "dimension" of the variety $\overline{X} \cap \overline{Y} - X\cap Y$ is strictly less than $k+l-N$, which is the dimension of $X\cap Y$ as a complex manifold? What I am worried about is that when you take the closure and then take intersections you may add singular things of very high dimension to $X\cap Y$.
I think it is true that the dimension of $\overline{X\cap Y}- X \cap Y$ is strictly less than $k+l-N$.
-
Ritwik -- Why do your varieties $X$ and $Y$ intersect transversally? Are you using Bertini's theorem? If so, then you can apply Bertini's theorem to the closures of $X$ and $Y$. -- Jason – Jason Starr Oct 23 '11 at 18:50
## 3 Answers
There are already two answers pointing out why your statement cannot hold as stated, so let's see if we can fix it.
Let $X, Y\subseteq \mathbb P^N$ be two irreducible (quasi-projective) algebraic varieties of dimension $k$ and $l$ respectively. Then $\overline X,\overline Y\subseteq \mathbb P^N$ are two closed irreducible algebraic varieties of dimension $k$ and $l$ respectively. By the Projective Dimension Theorem you obtain that
Every irreducible component of the intersection $\overline X\cap\overline Y$ has dimension at least $k+l-N$.
This implies that if your initial $X$ and $Y$ are disjoint, then your desired statement cannot hold.
On the other hand since you assumed that $X$ and $Y$ intersect transversally, basically you only need to worry about the complements, that is, the interesting intersections are $\overline X\cap (\overline Y\setminus Y)$ and $(\overline X\setminus X)\cap \overline Y$.
If you know that these intersections are transversal, then I think what you want follows.
A perhaps interesting consequence of this is that if those intersections are transversal, then $X\cap Y\neq \emptyset$.
-
Your last statement implies that the part after "and" in your second-to-last statement is unnecessary, no? – Will Sawin Oct 24 '11 at 17:49
@Will: Yes, I meant to take that out after I realized this, but obviously forgot. Thanks for pointing it out. – Sándor Kovács Oct 24 '11 at 19:13
I am not sure I understand. If $\overline{X},\overline{Y}$ are two smooth irreducible hypersurfaces and $X=\overline{X}-\overline{X}\cap\overline{Y}$ and similarly for $Y$, then $X,Y$ are smooth with empty intersection and of dimension $N-1$. But the intersection of the closures is just $N-2$.
-
Moreover, one can use that technique to get very far beyond that bound.
Let $Z$ be an $n$-dimensional subspace of $2N$-space. Let $\bar{X}$ be an $n+1$-dimensional subspace including $Z$, and let $\bar{Y}$ be another $n+1$-dimensional subspace including $Z$. Then apply Mohan's trick to create an $X$ and $Y$ that intersect transversely, or, rather, not at all. Then the formula fails severely, as $n$ is much larger than $2$.
-
|
https://www.batoi.com/dev21/how-use-php-array-function-array_diff-multidimensional-array-60881123d3306
|
The array function array_diff() in PHP compares the values of two or more arrays and returns the differences between them. The function returns an array from the first array with values that are not present in the second array or more arrays that come next.
This function was introduced in PHP version 4.0.1 and it is still present in PHP version 7.
Syntax of array_diff function
array_diff(array1, array2, array3, ..., arrayN)
PHP Manual Reference: https://www.php.net/manual/en/function.array-diff.php
How to use this function?
Create two indexed array Subject 1 and Subject 2 as below:
$aSubject1 = array("PHP", "Ajax", "jQuery", "CSS", "jSon");$aSubject2 = array("PHP", "Ajax", "jQuery");
$aSubject3 = array("Mysql", "Javascript", "jSon");$aDifferenceArray = array_diff($aSubject1,$aSubject2,$aSubject3); Output: ( [3] => CSS ) How to use this function in a multidimensional array? $aSubject1 = array(
array('subject'=>"PHP"),
array('subject'=>"Ajax"),
array('subject'=>"jQuery"),
array('subject'=>"CSS"),
array('subject'=>"jSon")
);
$aSubject2 = array(
    array('subject'=>"PHP"),
    array('subject'=>"Ajax"),
    array('subject'=>"jQuery")
);
$aDifferenceArray = array_diff($aSubject1, $aSubject2);
Output:
(
)
In the above case, the function returns a blank array. This function only checks the values of two arrays and returns the difference. But in a multidimensional array, it cannot find the values.
So, we have to extract the plain values from the two arrays first, and then apply the array_diff() function. Let us define two variables of an array data type.
$aVal1 = array();
$aVal2 = array();
foreach($aSubject1 as $aSubject)
{
    $aVal1[] = $aSubject['subject'];
}
This code will fetch all the values of array $aSubject1 and store them into the array $aVal1. The same code is applied to $aSubject2.
foreach($aSubject2 as $aSubject)
{
    $aVal2[] = $aSubject['subject'];
}
Now apply the array_diff() function with the two parameters $aVal1 and $aVal2.
$aDifferenceArray = array_diff($aVal1, $aVal2);
Output:
Array
(
[3] => CSS
[4] => jSon
)
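Alternatively (a sketch of another common approach, not from the original article): serialize each sub-array so that array_diff() can compare whole rows as strings, then unserialize the rows that survive.
// Compare whole rows by serializing them first, then restore the arrays.
$aDifferenceArray = array_map(
    'unserialize',
    array_diff(
        array_map('serialize', $aSubject1),
        array_map('serialize', $aSubject2)
    )
);
print_r($aDifferenceArray);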
|
https://www.physicsforums.com/threads/strange-police-behavior.437244/
|
# Strange police behavior
1. ### Jack21222
772
I hope you guys can give me some insight into this situation.
So, I was driving home from my parents' house tonight (just past midnight on Monday night) and I'm nearly the only person on the highway. Two state troopers pass me, one by one, within a 2-minute (2-mile, traveling 60 mph) span. About one mile after the last one passes me, and a quarter mile ahead of me, they turn on their lights and block both travel lanes while a third blocks the shoulder (I don't recall a 3rd passing me, he might have already been on the shoulder). I slow down and drift towards the blockade at about 20 miles an hour. I don't know if they are closing the road, or if they're setting up a roadblock, or if they want me to pull over or what.
As I get near the cars, expecting them to tell me to pull over (or take a detour,) the two blocking the travel lanes turn off their lights and drive away. The one on the shoulder keeps his lights on. The road was blocked for approximately 30 seconds from what I could tell.
There is a new law in Maryland where if an emergency vehicle has its lights on on the shoulder, you have to move over at least one lane away, so I do that, pass the officer, and then get back into my lane. That officer stays on the side of the road with the lights on as I get off on my exit.
Were the police just bored and wanted to mess with me? Did they get orders to block the road and change their mind? Were they just checking on the officer on the side of the road, and then decided to keep going?
I'm just really confused by their behavior.
2. ### G037H3
326
Yes, the police were ordered to mess with you so you would start this thread...
3. ### Jack21222
772
Your sarcasm is noted. I was informed that police officers are human beings by my roommate. Humans sometimes get bored and do illogical things to amuse themselves. Perhaps she was mistaken, and they're not humans after all.
4. ### Upisoft
349
What about they were ordered to test if people know the new law.
5. ### Jack21222
772
That would only require the one on the shoulder. Closing the Baltimore Beltway for 30 seconds seems like overkill, even if I was the only other car on the road.
It's interesting they did it about ~300 feet in front of an exit. I wonder if they were hoping I'd make an awkward move and take the exit, and use that as an excuse of me being suspicious.
6. ### Pengwuino
7,118
This is pretty unlikely, but I may have been a part of something similar, though there was a cause. A CHP (California Highway Patrol) officer drove ahead of a group of cars we were in, put his lights on, and slowed down to about 20 mph on the freeway. Everyone kinda went up to him but no one passed him as he got slower and slower. We got off the freeway because we were like "this is odd....". We turned on the radio and found out that a few minutes ahead there was a massive accident. Turns out the CHP was trying to bring traffic to a slowdown before it hit the accident scene.
Maybe this is related?
7. ### Jack21222
772
Unlikely, as there was no traffic. The only cars I saw in 3 miles were the troopers.
As I'm puzzling through this, I've narrowed it down to 2 likely possibilities.
1) The other two just wanted to check on the officer on the shoulder, so they lined up to talk through their windows. When they saw me approaching, they decided to stop blocking the roadway and continue on.
2) Neither officer that passed me found a reason to pull me over, so they wanted to see if I'd panic into error by setting up a weird situation. Perhaps they were hoping to catch a drunk driver heading home after Monday Night Football.
Beyond that, I can't think of anything.
8. ### cronxeh
1,232
Oftentimes when an emergency vehicle turns on their lights and then turns them off abruptly, it's due to a call being cancelled. In the case of the police, they usually have no desire to mess with you unless you fit into one of the categories of statistically probable individuals to commit crime (you can draw your own conclusions as to what that means). Cops in NYC would often stop both lanes just to chat with each other. It happens in Far Rockaway on Beach 116th Street like every single day, and on the Brooklyn Bridge when an officer camping the bridge in the RMP from the 84th pct meets another officer and they have a chat for a few, that service lane is blocked for a good 2-3 minutes. Sometimes they have their lights on and it's somewhat official business, but most of the time it's just recipe swapping.
Funny story about recipes btw. They used to do that over the MDT/KDT system, but then someone got wind of that.
9. ### Jack21222
772
I wonder if that could mean a driver just past midnight after Monday Night Football. One could argue it's statistically probable that I had been out drinking.
10. ### cronxeh
1,232
Oh definitely. If you didn't slow down, if you were swerving, or if you forgot about that new law that was in effect and passed too close - all those would be great just causes to pull you over. Not to mention if your face looked flushed, or any number of good reasons.
11. ### airborne18
144
They were checking to see if you were drunk.
12. ### mugaliens
595
I've passed slow law enforcement several times on the freeway when everyone slows way down. I just make dang sure I'm not speeding.
If they had had their lights on, though, I wouldn't have passed them.
### Staff: Mentor
Perhaps there was something in the road ahead that was a hazard, and they needed to slow traffic to allow an officer to remove it?
14. ### Jack21222
772
I saw none of them exit or re-enter their vehicle, but I suppose it could have happened without my noticing.
15. ### leroyjenkens
601
I couldn't put up with that. I would make calls and write letters all day long if I had to if that was happening where I live.
16. ### cronxeh
1,232
:rofl:
Do you know that millions of dollars are not collected because cops/firemen/court officers/etc. park illegally and the traffic enforcement ("brownies") don't write them tickets? That's a $45 fine for parking at an expired meter and a $150 charge for parking in a no-standing-anytime zone (not to mention $200+ for a tow!), every day, in every borough. http://nyc.uncivilservants.org/ is a website dedicated to documenting just those things. They use placards to park anywhere, not just around their precincts or firehouses. That's nepotism and it's not going away anytime soon.
17. ### Danger
9,879
My first, and most logical, thought about the situation is that there was a roadblock in place to catch a fleeing suspect, and he was spotted elsewhere in time for them to pull out of your way.
18. ### Gear300
This reminds me of something funny. I was driving along in the right lane and the traffic was pretty thick. All of a sudden, a set of three cop cars one after the other in the lane next to me turned their sirens on. The leading car started to sort of nudge at the guy in front of me, signaling him to slow down and stop, which is what he did (causing me and the people behind me to stop). The cops then switched lanes in front of him and immediately swerved into a Dunkin Donuts.
|
https://www.dpmms.cam.ac.uk/person/eg558
|
# Department of Pure Mathematics and Mathematical Statistics
Research Interests: Statistical mechanics, Schramm-Loewner evolution, random planar maps, Liouville quantum gravity, random walk in random environment
## Publications
The Tutte Embedding of the Poisson–Voronoi Tessellation of the Brownian Disk Converges to $\sqrt{8/3}$-Liouville Quantum Gravity
E Gwynne, J Miller, S Sheffield
– Communications in Mathematical Physics
(2020)
374,
735
A distance exponent for Liouville quantum gravity
E Gwynne, N Holden, X Sun
– Probability Theory and Related Fields
(2019)
173,
931
Scaling limits for the critical Fortuin–Kasteleyn model on a random planar map I: Cone times
E Gwynne, C Mao, X Sun
– Annales de l'Institut Henri Poincaré, Probabilités et Statistiques
(2019)
55,
1
Almost sure multifractal spectrum of Schramm–Loewner evolution
E Gwynne, J Miller, X Sun
– Duke Mathematical Journal
(2018)
167,
1099
Brownian motion correlation in the peanosphere for $\kappa > 8$
E Gwynne, N Holden, J Miller, X Sun
– Annales de l'institut Henri Poincare (B) Probability and Statistics
(2017)
53,
1866
Scaling limits for the critical Fortuin-Kasteleyn model on a random planar map II: local estimates and empty reduced word exponent
E Gwynne, X Sun
– Electronic Journal of Probability
(2017)
22,
1
Scaling limit of the uniform infinite half-plane quadrangulation in the Gromov-Hausdorff-Prokhorov-uniform topology
E Gwynne, J Miller
– Electronic Journal of Probability
(2017)
22,
84
Connectivity properties of the adjacency graph of SLE$_\kappa$ bubbles for $\kappa \in (4,8)$
E Gwynne, J Pfeffer
– Annals of Probability
Convergence of the self-avoiding walk on random quadrangulations to SLE$_{8/3}$ on $\sqrt{8/3}$-Liouville quantum gravity
E Gwynne, J Miller
– Annales Scientifiques de l'Ecole Normale Superieure
Dimension transformation formula for conformal maps into the complement of an SLE curve
E Gwynne, N Holden, J Miller
– Probability Theory and Related Fields
|
https://brilliant.org/problems/a-simple-sum/
|
# A Simple Sum
Logic Level 2
$\begin{array}{r} XXXX \\ YYYY \\ + ZZZZ \\ \hline YXXXZ \end{array}$
In the sum above, each letter represents a digit (0,1,2,3,4,5,6,7,8,9).
Find the value of $$Z$$.
By the way, $$X$$, $$Y$$ and $$Z$$ are not equal.
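One way to see it (a worked sketch): since $$XXXX = 1111X$$ (and similarly for the other rows), the sum says that $$1111(X+Y+Z)$$ equals the five-digit number $$YXXXZ$$. Among the five-digit multiples of $$1111$$ that match the pattern $$YXXXZ$$, only $$1111 \times 18 = 19998$$ has its digits consistent with the multiplier, since $$9 + 1 + 8 = 18$$. Hence $$X = 9$$, $$Y = 1$$, $$Z = 8$$.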
|
http://www.computer.org/csdl/trans/ts/1996/08/e0552-abs.html
|
Issue No.08 - August (1996 vol.22)
pp: 552-562
ABSTRACT
Abstract—Predicates appear in both the specification and implementation of a program. One approach to software testing, referred to as predicate testing, is to require certain types of tests for a predicate. In this paper, three fault-based testing criteria are defined for compound predicates, which are predicates with one or more AND/OR operators. BOR (boolean operator) testing requires a set of tests to guarantee the detection of (single or multiple) boolean operator faults, including incorrect AND/OR operators and missing/extra NOT operators. BRO (boolean and relational operator) testing requires a set of tests to guarantee the detection of boolean operator faults and relational operator faults (i.e., incorrect relational operators). BRE (boolean and relational expression) testing requires a set of tests to guarantee the detection of boolean operator faults, relational operator faults, and a type of fault involving arithmetical expressions. It is shown that for a compound predicate with n, n > 0, AND/OR operators, at most n + 2 constraints are needed for BOR testing and at most 2 * n + 3 constraints for BRO or BRE testing, where each constraint specifies a restriction on the value of each boolean variable or relational expression in the predicate. Algorithms for generating a minimum set of constraints for BOR, BRO, and BRE testing of a compound predicate are given, and the feasibility problem for the generated constraints is discussed. For boolean expressions that contain multiple occurrences of some boolean variables, how to combine BOR testing with the meaningful impact strategy developed by Weyuker, Goradia, and Singh [21] is briefly described.
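For a quick illustration of these bounds (an example constructed here, not taken from the paper): the compound predicate (a && b) || (x < y) has n = 2 AND/OR operators, so BOR testing needs at most n + 2 = 4 constraints, while BRO or BRE testing needs at most 2 * n + 3 = 7.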
INDEX TERMS
Software testing, predicate testing, fault-based testing, boolean operator faults, relational operator faults, off-by-$\epsilon$ faults.
CITATION
Kuo-Chung Tai, "Theory of Fault-Based Predicate Testing for Computer Programs", IEEE Transactions on Software Engineering, vol.22, no. 8, pp. 552-562, August 1996, doi:10.1109/32.536956
|
https://physics.stackexchange.com/questions/549224/oscillation-on-angled-rails-diff-equation/549232
|
# Oscillation on Angled Rails (Diff Equation)
This problem was taken from David Morin's Introduction to Classical Mechanics
My attempt at solving the problem:
First, I labeled all the relevant forces acting only on one of the particles of mass $$m$$, which were gravity and the force of the spring acting on said mass.
The forces contributing to the movement of the object along the rails were:
$$F_{\text g}=mg\cos(\theta) \\ F_{\text{spring}}=-k(l-l_i)$$ $$l$$ denotes the length of the spring at any given moment while $$l_i$$ is a constant that represents the initial length of the spring in its equilibrium. $$x = 0$$ at the point where the two rails meet and $$x$$ denotes the distance along the rail to the particle $$m$$. Now I shall proceed to solve the differential equation for this motion. First, I would like to invoke the law of sines to relate the length of the spring and the distance $$x$$. Since the triangle bounded by the spring is isosceles, the two identical angles would measure $$\frac{\pi}{2}-\theta$$
$$\frac{l}{\sin(2\theta)}=\frac{x}{\sin(\frac{\pi}{2}-\theta)} \\ l=\frac{2x\sin(\theta)\cos(\theta)}{\cos(\theta)} \\ l=2x\sin(\theta)$$ Now, we will move onto the differential equation. We must take the force of the spring in the direction of the rail, so we have to multiply it by cosine. $$x$$ is the current distance along the rail while $$x_i$$ is a constant that represents the initial distance of the masses from the bottom: $$\sum F=m\ddot{x}=-mg\cos\theta - 2k\sin(\theta)(x-x_i)\cos(\frac{\pi}{2}-\theta) \\ m\ddot{x} + 2kx\sin^2(\theta) = 2kx_i\sin^2(\theta) - mg\cos(\theta)$$
Now, I don't know whether I should continue to solve it like an inhomogeneous differential equation, because I feel like I'm over-complicating this just to solve for the frequency. Also, the only "variables" here are $$\ddot{x}$$ and $$x$$. Everything else is constant, including the trig functions. Any help on how to move forward on this problem, or another way of solving it, would be highly appreciated. Thank you
• Isn't the last equation is the equation for simple harmonic motion? May 4, 2020 at 18:46
• @sslucifer, I forgot to add the cosine component of the force of the spring and I edited it now. But, yes, it should be an SMH differential equation. The thing is, I don't know if I am approaching this problem in the right way
– LVST
May 4, 2020 at 18:56
• Seems like it's the correct approach (well, you can also use the Lagrangian approach, but I think that will eventually lead up to $F=ma$); use $x(t)=A\sin(\omega t)+B\cos(\omega t)$ for the further solution. May 4, 2020 at 19:00
• Ok, but since the right side of the equation contains a $\sin^2\theta$, does your solution $x(t)=A\sin(\omega t)+B\cos(\omega t)$ still apply?
– LVST
May 4, 2020 at 19:09
• Alright, thanks for the clarification. I will try that. If you want to write down what you just said as an answer, I could check it and you could get some reputation points or whatever :) @sslucifer
– LVST
May 4, 2020 at 19:22
The last equation is just an inhomogeneous differential equation for simple harmonic motion. So use $$x(t)=x_{\text{inhom}}(t)+A\sin(\omega t)+B\cos(\omega t)$$
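Completing the calculation from the last equation above (a sketch): the constant right-hand side only shifts the equilibrium, so the particular solution and the angular frequency read off directly as $$x_{\text{inhom}} = x_i - \frac{mg\cos(\theta)}{2k\sin^{2}(\theta)}, \qquad \omega = \sin(\theta)\sqrt{\frac{2k}{m}}.$$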
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=42&t=53291
|
## double bonds
$sp, sp^{2}, sp^{3}, dsp^{3}, d^{2}sp^{3}$
annikaying
Posts: 94
Joined: Sat Sep 14, 2019 12:16 am
### double bonds
How do double bonds affect hybridization?
lauraxie2e
Posts: 108
Joined: Fri Aug 09, 2019 12:17 am
### Re: double bonds
I don't believe double bonds affect hybridization, because it is based on regions of electron density.
asannajust_1J
Posts: 105
Joined: Wed Sep 11, 2019 12:16 am
Been upvoted: 1 time
### Re: double bonds
The second bond would form as a result of an unhybridized orbital.
Khushboo_3D
Posts: 60
Joined: Wed Sep 18, 2019 12:19 am
### Re: double bonds
The second bond formed as a result of an unhybridized orbital would be a pi- bond.
Sara Richmond 2K
Posts: 110
Joined: Fri Aug 30, 2019 12:16 am
### Re: double bonds
Double bonds do not affect hybridization because a double bond still represents a single location of electron density. An easy way to determine the type of hybridization is to use steric numbers.
Steric number = number of bonded atoms + number of lone pairs.
Notice that the multiplicity of the bonds (single vs. double vs. triple) is not included in this calculation.
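For example, the carbon atom in CO2 has 2 bonded atoms and no lone pairs, so its steric number is 2 and it is sp hybridized, even though both of its bonds are double bonds.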
kim 2I
Posts: 105
Joined: Thu Jul 25, 2019 12:17 am
### Re: double bonds
Pi-bonds are formed with unhybridized p-orbitals, so I'm sure hybridization and double bonds don't go together. On the other hand, sigma-bonds are usually formed with sp or sp^2 hybrid orbitals.
Joelle 3L
Posts: 50
Joined: Sat Jul 20, 2019 12:16 am
### Re: double bonds
When focusing on hybridization, you look at the regions of electron density. Therefore, a single, double, or triple bond between atoms would all be considered one electron density region. On the other hand, lone pairs would be considered as well in hybridization.
emwoodc
Posts: 45
Joined: Fri Aug 09, 2019 12:16 am
### Re: double bonds
a double bond would be a sigma bond plus a pi bond
|
https://www.ias.ac.in/listing/bibliography/pmsc/Peter_Becker-Kern
|
• Peter Becker-Kern
Articles written in Proceedings – Mathematical Sciences
• Explicit Representation of Roots on 𝑝-Adic Solenoids and Non-Uniqueness of Embeddability into Rational One-Parameter Subgroups
This note generalizes known results concerning the existence of roots and embedding one-parameter subgroups on 𝑝-adic solenoids. An explicit representation of the roots leads to the construction of two distinct rational embedding one-parameter subgroups. The results contribute to enlighten the group structure of solenoids and to point out difficulties arising in the context of the embedding problem in probability theory. As a consequence, the uniqueness of embedding of infinitely divisible probability measures on 𝑝-adic solenoids is solved under a certain natural condition.
|
https://www.siyavula.com/read/science/grade-10/quantitative-aspects-of-chemical-change/19-quantitative-aspects-of-chemical-change-06
|
# End of chapter exercises
## Quantitative aspects of chemical change
Textbook Exercise 19.8
Write only the word/term for each of the following descriptions:
1. the mass of one mole of a substance
2. the number of particles in one mole of a substance
Solution not yet available
$$\text{5}$$ $$\text{g}$$ of magnesium chloride is formed as the product of a chemical reaction. Select the true statement from the answers below:
1. $$\text{0,08}$$ moles of magnesium chloride are formed in the reaction
2. the number of atoms of $$\text{Cl}$$ in the product is $$\text{0,6022} \times \text{10}^{\text{23}}$$
3. the number of atoms of $$\text{Mg}$$ is $$\text{0,05}$$
4. the atomic ratio of $$\text{Mg}$$ atoms to $$\text{Cl}$$ atoms in the product is $$1:1$$
Solution not yet available
2 moles of oxygen gas react with hydrogen. What is the mass of oxygen in the reactants?
1. $$\text{32}$$ $$\text{g}$$
2. $$\text{0,125}$$ $$\text{g}$$
3. $$\text{64}$$ $$\text{g}$$
4. $$\text{0,063}$$ $$\text{g}$$
Solution not yet available
In the compound potassium sulphate ($$\text{K}_{2}\text{SO}_{4}$$), oxygen makes up $$x\%$$ of the mass of the compound. $$x = ?$$
1. $$\text{36,8}$$
2. $$\text{9,2}$$
3. 4
4. $$\text{18,3}$$
Solution not yet available
The concentration of a $$\text{150}$$ $$\text{cm^{3}}$$ solution, containing $$\text{5}$$ $$\text{g}$$ of $$\text{NaCl}$$ is:
1. $$\text{0,09}$$ $$\text{mol·dm^{-3}}$$
2. $$\text{5,7} \times \text{10}^{-\text{4}}$$ $$\text{mol·dm^{-3}}$$
3. $$\text{0,57}$$ $$\text{mol·dm^{-3}}$$
4. $$\text{0,03}$$ $$\text{mol·dm^{-3}}$$
Solution not yet available
Calculate the number of moles in:
1. $$\text{5}$$ $$\text{g}$$ of methane ($$\text{CH}_{4}$$)
2. $$\text{3,4}$$ $$\text{g}$$ of hydrochloric acid
3. $$\text{6,2}$$ $$\text{g}$$ of potassium permanganate ($$\text{KMnO}_{4}$$)
4. $$\text{4}$$ $$\text{g}$$ of neon
5. $$\text{9,6}$$ $$\text{kg}$$ of titanium tetrachloride ($$\text{TiCl}_{4}$$)
Solution not yet available
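(A sketch of the method for the first item, not an official solution: $$n = \frac{m}{M}$$, and with $$M(\text{CH}_{4}) \approx \text{16,04}$$ $$\text{g·mol^{-1}}$$ this gives $$n = \frac{\text{5}}{\text{16,04}} \approx \text{0,31}$$ $$\text{mol}$$.)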
Calculate the mass of:
1. $$\text{0,2}$$ $$\text{mol}$$ of potassium hydroxide ($$\text{KOH}$$)
2. $$\text{0,47}$$ $$\text{mol}$$ of nitrogen dioxide
3. $$\text{5,2}$$ $$\text{mol}$$ of helium
4. $$\text{0,05}$$ $$\text{mol}$$ of copper (II) chloride ($$\text{CuCl}_{2}$$)
5. $$\text{31,31} \times \text{10}^{\text{23}}$$ molecules of carbon monoxide (CO)
Solution not yet available
Calculate the percentage that each element contributes to the overall mass of:
1. Chloro-benzene ($$\text{C}_{6}\text{H}_{5}\text{Cl}$$)
2. Lithium hydroxide ($$\text{LiOH}$$)
Solution not yet available
CFC's (chlorofluorocarbons) are one of the gases that contribute to the depletion of the ozone layer. A chemist analysed a CFC and found that it contained $$\text{58,64}\%$$ chlorine, $$\text{31,43}\%$$ fluorine and $$\text{9,93}\%$$ carbon. What is the empirical formula?
Solution not yet available
$$\text{14}$$ $$\text{g}$$ of nitrogen combines with oxygen to form $$\text{46}$$ $$\text{g}$$ of a nitrogen oxide. Use this information to work out the formula of the oxide.
Solution not yet available
Iodine can exist as one of three oxides ($$\text{I}_{2}\text{O}_{4}$$; $$\text{I}_{2}\text{O}_{5}$$; $$\text{I}_{4}\text{O}_{9}$$). A chemist has produced one of these oxides and wishes to know which one they have. If he started with $$\text{508}$$ $$\text{g}$$ of iodine and formed $$\text{652}$$ $$\text{g}$$ of the oxide, which oxide has he produced?
Solution not yet available
A fluorinated hydrocarbon (a hydrocarbon is a chemical compound containing hydrogen and carbon) was analysed and found to contain $$\text{8,57}\%$$ $$\text{H}$$, $$\text{51,05}\%$$ $$\text{C}$$ and $$\text{40,38}\%$$ $$\text{F}$$.
1. What is its empirical formula?
2. What is the molecular formula if the molar mass is $$\text{94,1}$$ $$\text{g·mol^{-1}}$$?
Solution not yet available
Copper sulphate crystals often include water. A chemist is trying to determine the number of moles of water in the copper sulphate crystals. She weighs out $$\text{3}$$ $$\text{g}$$ of copper sulphate and heats this. After heating, she finds that the mass is $$\text{1,9}$$ $$\text{g}$$. What is the number of moles of water in the crystals? (Copper sulphate is represented by $$\text{CuSO}_{4}.\text{xH}_{2}\text{O}$$).
Solution not yet available
$$\text{300}$$ $$\text{cm^{3}}$$ of a $$\text{0,1}$$ $$\text{mol·dm^{-3}}$$ solution of sulphuric acid is added to $$\text{200}$$ $$\text{cm^{3}}$$ of a $$\text{0,5}$$ $$\text{mol·dm^{-3}}$$ solution of sodium hydroxide.
1. Write down a balanced equation for the reaction which takes place when these two solutions are mixed.
2. Calculate the number of moles of sulphuric acid which were added to the sodium hydroxide solution.
3. Is the number of moles of sulphuric acid enough to fully neutralise the sodium hydroxide solution? Support your answer by showing all relevant calculations.
Solution not yet available
A learner is asked to make $$\text{200}$$ $$\text{cm^{3}}$$ of sodium hydroxide ($$\text{NaOH}$$) solution of concentration $$\text{0,5}$$ $$\text{mol·dm^{-3}}$$.
1. Determine the mass of sodium hydroxide pellets he needs to use to do this.
2. Using an accurate balance the learner accurately measures the correct mass of the NaOH pellets. To the pellets he now adds exactly $$\text{200}$$ $$\text{cm^{3}}$$ of pure water. Will his solution have the correct concentration? Explain your answer.
3. The learner then takes $$\text{300}$$ $$\text{cm^{3}}$$ of a $$\text{0,1}$$ $$\text{mol·dm^{-3}}$$ solution of sulphuric acid ($$\text{H}_{2}\text{SO}_{4}$$) and adds it to $$\text{200}$$ $$\text{cm^{3}}$$ of a $$\text{0,5}$$ $$\text{mol·dm^{-3}}$$ solution of $$\text{NaOH}$$ at $$\text{25}$$ $$\text{℃}$$.
4. Write down a balanced equation for the reaction which takes place when these two solutions are mixed.
5. Calculate the number of moles of $$\text{H}_{2}\text{SO}_{4}$$ which were added to the $$\text{NaOH}$$ solution.
Solution not yet available
$$\text{96,2}$$ $$\text{g}$$ sulphur reacts with an unknown quantity of zinc according to the following equation: $$\text{Zn} + \text{S} \rightarrow \text{ZnS}$$
1. What mass of zinc will you need for the reaction, if all the sulphur is to be used up?
2. Calculate the theoretical yield for this reaction.
3. It is found that $$\text{275}$$ $$\text{g}$$ of zinc sulphide was produced. Calculate the % yield.
Solution not yet available
Calcium chloride reacts with carbonic acid to produce calcium carbonate and hydrochloric acid according to the following equation:
$\text{CaCl}_{2} + \text{H}_{2}\text{CO}_{3} \rightarrow \text{CaCO}_{3} + 2\text{HCl}$
If you want to produce $$\text{10}$$ $$\text{g}$$ of calcium carbonate through this chemical reaction, what quantity (in g) of calcium chloride will you need at the start of the reaction?
Solution not yet available
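(A sketch of the method, not an official solution: $$n(\text{CaCO}_{3}) = \frac{\text{10}}{\text{100,09}} \approx \text{0,1}$$ $$\text{mol}$$; the 1:1 mole ratio then gives $$n(\text{CaCl}_{2}) \approx \text{0,1}$$ $$\text{mol}$$, so $$m = nM \approx \text{0,1} \times \text{110,98} \approx \text{11,1}$$ $$\text{g}$$.)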
|
http://math.stackexchange.com/questions/275120/a-prime-poset-of-ideals
|
# A prime poset of ideals
Let $A$ be a ring (commutative unital), and $\mathcal I$ be a nonempty family of proper ideals of $A$.
I will say that $\mathcal I$ has property $\dagger$ if for any $\mathfrak a\in\mathcal I$ and any $xy\in \mathfrak a$, one of $\mathfrak a+(x),\mathfrak a+(y)$ is in $\mathcal I$.
In particular, any maximal (w.r.t. inclusion) element of $\mathcal I$ is prime.
Does $\dagger$ (or maybe $\dagger$ + hypothesis of Zorn's lemma) have a name (a prime family, perhaps, as I suggest in the title)? As a side question, are there some interesting criteria for checking that a given family of ideals has property $\dagger$?
It seems to me that posets with property $\dagger$ are rather abundant in commutative algebra (e.g. in proof of Cohen's characterization of Noetherian rings), but I've yet to see $\dagger$ discussed on its own, or any name for it.
-
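Recall the terminology of Lam and Reyes ("A Prime Ideal Principle in Commutative Algebra"): a family $\mathcal{F}$ of ideals with $A \in \mathcal{F}$ is called an Ako family if for every ideal $I$ and all $x, y \in A$, $I+(x), I+(y) \in \mathcal{F}$ implies $I+(xy) \in \mathcal{F}$.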
Now I will show that a (possibly empty) family $\mathcal I$ satisfies your property $\dagger$ if and only if the complement $\mathcal F = \mathcal I^c$ (taken in the set of all ideals of $A$) is an Ako family. (Recall that $\mathcal F$ is Ako if $A \in \mathcal F$ and, whenever $I+(x), I+(y) \in \mathcal F$, also $I+(xy) \in \mathcal F$. Notice that your requirement that $\mathcal I$ consist of proper ideals ensures that $A \in \mathcal I^c = \mathcal F$.)
First, suppose that $\mathcal I$ satisfies $\dagger$. To prove $\mathcal F = \mathcal I^c$ is Ako, let $I$ be an ideal and $x,y \in A$ such that $I+(x), I+(y) \in \mathcal F$. Assume for contradiction that the ideal $\mathfrak a = I+(xy)$ is not in $\mathcal F$, so that $\mathfrak{a} \in \mathcal{I}$. By property $\dagger$, one of $\mathfrak{a}+(x) = I + (xy) + (x) = I+(x)$ or $\mathfrak{a}+(y) = I+(xy)+(y) = I+(y)$ is an element of $\mathcal I = \mathcal{F}^c$, contradicting the assumptions. Thus $\mathcal{F}$ is an Ako family.
Conversely, suppose that $\mathcal F = \mathcal I^c$ is an Ako family. To prove that $\mathcal I$ satisfies $\dagger$, let $\mathfrak a \in \mathcal I$ and let $x,y \in A$ be elements with $xy \in \mathfrak a$. Assume for contradiction that neither of $\mathfrak a+(x), \mathfrak a+(y)$ is an element of $\mathcal I$. Then $\mathfrak a+(x), \mathfrak a+(y) \in \mathcal F$, and the Ako property implies that $\mathfrak a+(xy) \in \mathcal F$. But $xy \in \mathfrak a$, so $\mathfrak a = \mathfrak a+(xy) \in \mathcal F = \mathcal I^c$, a contradiction.
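Not part of the original answer: a small brute-force sketch that checks property $\dagger$ for the family of all proper ideals of $\mathbb{Z}/n\mathbb{Z}$ (where every ideal is generated by a divisor of $n$). That this family satisfies $\dagger$ recovers the classical fact that its maximal elements, the maximal ideals, are prime.

```python
from math import gcd

n = 12
full_ring = frozenset(range(n))

def ideal(g):
    """The ideal of Z_n generated by g, i.e. all multiples of gcd(g, n)."""
    d = gcd(g, n)          # gcd(0, n) == n, so ideal(0) == {0}
    return frozenset(range(0, n, d))

family = {ideal(g) for g in range(n)} - {full_ring}   # all proper ideals

def gen(a):
    """A generator of the ideal a: its smallest nonzero element (0 for {0})."""
    return min(a - {0}, default=0)

def plus(a, x):
    """The ideal a + (x), generated by gcd of the two generators."""
    return ideal(gcd(gen(a), x))

def has_dagger(fam):
    for a in fam:
        for x in range(n):
            for y in range(n):
                if (x * y) % n in a and plus(a, x) not in fam and plus(a, y) not in fam:
                    return False
    return True

print(has_dagger(family))   # True
```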
|
2015-04-28 19:07:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9619320034980774, "perplexity": 122.96667997689592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661916.33/warc/CC-MAIN-20150417045741-00163-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://en.wikipedia.org/wiki/Deterministic_finite-state_machine
|
# Deterministic finite automaton
An example of a deterministic finite automaton that accepts only binary numbers that are multiples of 3. The state S0 is both the start state and an accept state. For example, the string "1001" leads to the state sequence S0, S1, S2, S1, S0, and is hence accepted.
In the theory of computation, a branch of theoretical computer science, a deterministic finite automaton (DFA)—also known as deterministic finite acceptor (DFA), deterministic finite-state machine (DFSM), or deterministic finite-state automaton (DFSA)—is a finite-state machine that accepts or rejects a given string of symbols, by running through a state sequence uniquely determined by the string.[1] Deterministic refers to the uniqueness of the computation run. In search of the simplest models to capture finite-state machines, Warren McCulloch and Walter Pitts were among the first researchers to introduce a concept similar to finite automata in 1943.[2][3]
The figure illustrates a deterministic finite automaton using a state diagram. In this example automaton, there are three states: S0, S1, and S2 (denoted graphically by circles). The automaton takes a finite sequence of 0s and 1s as input. For each state, there is a transition arrow leading out to a next state for both 0 and 1. Upon reading a symbol, a DFA jumps deterministically from one state to another by following the transition arrow. For example, if the automaton is currently in state S0 and the current input symbol is 1, then it deterministically jumps to state S1. A DFA has a start state (denoted graphically by an arrow coming in from nowhere) where computations begin, and a set of accept states (denoted graphically by a double circle) which help define when a computation is successful.
A DFA is defined as an abstract mathematical concept, but is often implemented in hardware and software for solving various specific problems such as lexical analysis and pattern matching. For example, a DFA can model software that decides whether or not online user input such as email addresses are syntactically valid.[4]
DFAs have been generalized to nondeterministic finite automata (NFA) which may have several arrows of the same label starting from a state. Using the powerset construction method, every NFA can be translated to a DFA that recognizes the same language. DFAs, and NFAs as well, recognize exactly the set of regular languages.[1]
## Formal definition
A deterministic finite automaton M is a 5-tuple, (Q, Σ, δ, q0, F), consisting of
• a finite set of states Q
• a finite set of input symbols called the alphabet Σ
• a transition function δ : Q × Σ → Q
• an initial or start state ${\displaystyle q_{0}\in Q}$
• a set of accept states ${\displaystyle F\subseteq Q}$
Let w = a1a2…an be a string over the alphabet Σ. The automaton M accepts the string w if a sequence of states, r0, r1, …, rn, exists in Q with the following conditions:
1. r0 = q0
2. ri+1 = δ(ri, ai+1), for i = 0, …, n − 1
3. ${\displaystyle r_{n}\in F}$.
In words, the first condition says that the machine starts in the start state q0. The second condition says that given each character of string w, the machine will transition from state to state according to the transition function δ. The last condition says that the machine accepts w if the last input of w causes the machine to halt in one of the accepting states. Otherwise, it is said that the automaton rejects the string. The set of strings that M accepts is the language recognized by M and this language is denoted by L(M).
A deterministic finite automaton without accept states and without a starting state is known as a transition system or semiautomaton.
For a more comprehensive introduction to the formal definition, see automata theory.
## Complete and incomplete
According to the above definition, deterministic finite automata are always complete: they define from each state a transition for each input symbol.
While this is the most common definition, some authors use the term deterministic finite automaton for a slightly different notion: an automaton that defines at most one transition for each state and each input symbol; the transition function is allowed to be partial.[5] When no transition is defined, such an automaton halts.
## Example
The following example is of a DFA M, with a binary alphabet, which requires that the input contains an even number of 0s.
The state diagram for M
M = (Q, Σ, δ, q0, F) where
• Q = {S1, S2}
• Σ = {0, 1}
• q0 = S1
• F = {S1}, and
• δ is given by the following state-transition table:

|    | 0  | 1  |
|----|----|----|
| S1 | S2 | S1 |
| S2 | S1 | S2 |
The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted.
The language recognized by M is the regular language given by the regular expression (1*) (0 (1*) 0 (1*))*, where * is the Kleene star, e.g., 1* denotes any number (possibly zero) of consecutive ones.
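The following sketch (not part of the article) simulates this particular M in Python; the dictionary encoding of δ is an illustrative choice:

```python
# The example DFA M: accepts binary strings with an even number of 0s.
delta = {("S1", "0"): "S2", ("S1", "1"): "S1",
         ("S2", "0"): "S1", ("S2", "1"): "S2"}
start, accept = "S1", {"S1"}

def accepts(w: str) -> bool:
    state = start
    for symbol in w:
        state = delta[(state, symbol)]   # deterministic: exactly one move per symbol
    return state in accept

print(accepts("100"))   # True  (two 0s)
print(accepts("10"))    # False (one 0)
```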
## Closure properties
The upper left automaton recognizes the language of all binary strings containing at least one occurrence of "00". The lower right automaton recognizes all binary strings with an even number of "1". The lower left automaton is obtained as product of the former two, it recognizes the intersection of both languages.
If DFAs recognize the languages that are obtained by applying an operation on the DFA-recognizable languages, then DFAs are said to be closed under the operation. The DFAs are closed under the following operations:
• union
• intersection
• complement
• set difference
• concatenation
• Kleene star
• reversal
For each operation, an optimal construction with respect to the number of states has been determined in state complexity research. Since DFAs are equivalent to nondeterministic finite automata (NFA), these closures may also be proved using closure properties of NFA.
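As an illustration of closure under intersection, here is a hypothetical product-construction sketch (not the article's code) that runs two DFAs in lockstep and accepts exactly when both do:

```python
def intersect(d1, d2):
    """Product construction: each DFA is (delta, start, accept_set)."""
    (t1, s1, a1), (t2, s2, a2) = d1, d2

    def accepts(w):
        p, q = s1, s2
        for c in w:
            p, q = t1[(p, c)], t2[(q, c)]   # advance both machines together
        return p in a1 and q in a2          # accept iff both accept

    return accepts

# Even number of 0s ...
even0 = ({("S1", "0"): "S2", ("S1", "1"): "S1",
          ("S2", "0"): "S1", ("S2", "1"): "S2"}, "S1", {"S1"})
# ... intersected with an even number of 1s.
even1 = ({("A", "0"): "A", ("A", "1"): "B",
          ("B", "0"): "B", ("B", "1"): "A"}, "A", {"A"})

both = intersect(even0, even1)
print(both("0110"), both("010"))   # True False
```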
## As a transition monoid
A run of a given DFA can be seen as a sequence of compositions of a very general formulation of the transition function with itself. Here we construct that function.
For a given input symbol ${\displaystyle a\in \Sigma }$, one may construct a transition function ${\displaystyle \delta _{a}:Q\rightarrow Q}$ by defining ${\displaystyle \delta _{a}(q)=\delta (q,a)}$ for all ${\displaystyle q\in Q}$. (This trick is called currying.) From this perspective, ${\displaystyle \delta _{a}}$ "acts" on a state in Q to yield another state. One may then consider the result of function composition repeatedly applied to the various functions ${\displaystyle \delta _{a}}$, ${\displaystyle \delta _{b}}$, and so on. Given a pair of letters ${\displaystyle a,b\in \Sigma }$, one may define a new function ${\displaystyle {\widehat {\delta }}_{ab}=\delta _{a}\circ \delta _{b}}$, where ${\displaystyle \circ }$ denotes function composition.
Clearly, this process may be recursively continued, giving the following recursive definition of ${\displaystyle {\widehat {\delta }}:Q\times \Sigma ^{\star }\rightarrow Q}$:
${\displaystyle {\widehat {\delta }}(q,\epsilon )=q}$, where ${\displaystyle \epsilon }$ is the empty string and
${\displaystyle {\widehat {\delta }}(q,wa)=\delta _{a}({\widehat {\delta }}(q,w))}$, where ${\displaystyle w\in \Sigma ^{*},a\in \Sigma }$ and ${\displaystyle q\in Q}$.
${\displaystyle {\widehat {\delta }}}$ is defined for all words ${\displaystyle w\in \Sigma ^{*}}$. A run of the DFA is a sequence of compositions of ${\displaystyle {\widehat {\delta }}}$ with itself.
Repeated function composition forms a monoid. For the transition functions, this monoid is known as the transition monoid, or sometimes the transformation semigroup. The construction can also be reversed: given a ${\displaystyle {\widehat {\delta }}}$, one can reconstruct a ${\displaystyle \delta }$, and so the two descriptions are equivalent.
## Local automata
A local automaton is a DFA, not necessarily complete, for which all edges with the same label lead to a single vertex. Local automata accept the class of local languages, those for which membership of a word in the language is determined by a "sliding window" of length two on the word.[7][8]
A Myhill graph over an alphabet A is a directed graph with vertex set A and subsets of vertices labelled "start" and "finish". The language accepted by a Myhill graph is the set of directed paths from a start vertex to a finish vertex: the graph thus acts as an automaton.[7] The class of languages accepted by Myhill graphs is the class of local languages.[9]
## Random
When the start state and accept states are ignored, a DFA of n states and an alphabet of size k can be seen as a digraph of n vertices in which all vertices have k out-arcs labeled 1, …, k (a k-out digraph). It is known that when k ≥ 2 is a fixed integer, with high probability, the largest strongly connected component (SCC) in such a k-out digraph chosen uniformly at random is of linear size and it can be reached by all vertices.[10] It has also been proven that if k is allowed to increase as n increases, then the whole digraph has a phase transition for strong connectivity similar to Erdős–Rényi model for connectivity.[11]
In a random DFA, the maximum number of vertices reachable from one vertex is very close to the number of vertices in the largest SCC with high probability.[10][12] This is also true for the largest induced sub-digraph of minimum in-degree one, which can be seen as a directed version of 1-core.[11]
## Advantages and disadvantages

DFAs are one of the most practical models of computation, since there is a trivial linear-time, constant-space, online algorithm to simulate a DFA on a stream of input. Also, there are efficient algorithms to find a DFA recognizing:
• the complement of the language recognized by a given DFA.
• the union/intersection of the languages recognized by two given DFAs.
Because DFAs can be reduced to a canonical form (minimal DFAs), there are also efficient algorithms to determine:
• whether a DFA accepts any strings (Emptiness Problem)
• whether a DFA accepts all strings (Universality Problem)
• whether two DFAs recognize the same language (Equality Problem)
• whether the language recognized by a DFA is included in the language recognized by a second DFA (Inclusion Problem)
• the DFA with a minimum number of states for a particular regular language (Minimization Problem)
DFAs are equivalent in computing power to nondeterministic finite automata (NFAs). This is because, firstly, any DFA is also an NFA, so an NFA can do what a DFA can do. Conversely, given an NFA, the powerset construction builds a DFA that recognizes the same language, although the DFA may have an exponentially larger number of states than the NFA.[13][14] However, even though NFAs are computationally equivalent to DFAs, the above-mentioned problems are not necessarily solved efficiently for NFAs. The non-universality problem for NFAs is PSPACE-complete, since there are small NFAs whose shortest rejecting word is of exponential size. A DFA is universal if and only if all states are final states, but this does not hold for NFAs. The Equality, Inclusion and Minimization Problems are also PSPACE-complete, since they require forming the complement of an NFA, which results in an exponential blow-up of size.[15]
On the other hand, finite-state automata are of strictly limited power in the languages they can recognize; many simple languages, including any problem that requires more than constant space to solve, cannot be recognized by a DFA. The classic example of a simply described language that no DFA can recognize is bracket or Dyck language, i.e., the language that consists of properly paired brackets such as word "(()())". Intuitively, no DFA can recognize the Dyck language because DFAs are not capable of counting: a DFA-like automaton needs to have a state to represent any possible number of "currently open" parentheses, meaning it would need an unbounded number of states. Another simpler example is the language consisting of strings of the form anbn for some finite but arbitrary number of a's, followed by an equal number of b's.[16]
## DFA identification from labeled words
Given a set of positive words ${\displaystyle S^{+}\subset \Sigma ^{*}}$ and a set of negative words ${\displaystyle S^{-}\subset \Sigma ^{*}}$, one can construct a DFA that accepts all words from ${\displaystyle S^{+}}$ and rejects all words from ${\displaystyle S^{-}}$; this problem is called DFA identification (synthesis, learning). While some DFA consistent with the labeled words can be constructed in linear time, the problem of identifying a DFA with the minimal number of states is NP-complete.[17] The first algorithm for minimal DFA identification was proposed by Trakhtenbrot and Barzdin[18] and is called the TB-algorithm. However, the TB-algorithm assumes that all words over ${\displaystyle \Sigma }$ up to a given length are contained in ${\displaystyle S^{+}\cup S^{-}}$.
Later, K. Lang proposed an extension of the TB-algorithm that does not use any assumptions about ${\displaystyle S^{+}}$ and ${\displaystyle S^{-}}$: the Traxbar algorithm.[19] However, Traxbar does not guarantee the minimality of the constructed DFA. In his work,[17] E. M. Gold also proposed a heuristic algorithm for minimal DFA identification. Gold's algorithm assumes that ${\displaystyle S^{+}}$ and ${\displaystyle S^{-}}$ contain a characteristic set of the regular language; otherwise, the constructed DFA will be inconsistent either with ${\displaystyle S^{+}}$ or ${\displaystyle S^{-}}$. Other notable DFA identification algorithms include the RPNI algorithm,[20] the Blue-Fringe evidence-driven state-merging algorithm,[21] and Windowed-EDSM.[22] Another research direction is the application of evolutionary algorithms: the smart state labeling evolutionary algorithm[23] made it possible to solve a modified DFA identification problem in which the training data (sets ${\displaystyle S^{+}}$ and ${\displaystyle S^{-}}$) is noisy in the sense that some words are attributed to wrong classes.
Yet another step forward is due to the application of SAT solvers by Marijn J. H. Heule and S. Verwer: the minimal DFA identification problem is reduced to deciding the satisfiability of a Boolean formula.[24] The main idea is to build an augmented prefix-tree acceptor (a trie containing all input words with corresponding labels) from the input sets and reduce the problem of finding a DFA with ${\displaystyle C}$ states to coloring the tree vertices with ${\displaystyle C}$ colors in such a way that when vertices with one color are merged into one state, the generated automaton is deterministic and complies with ${\displaystyle S^{+}}$ and ${\displaystyle S^{-}}$. Though this approach allows finding the minimal DFA, it suffers from an exponential blow-up of execution time when the size of the input data increases. Therefore, Heule and Verwer's initial algorithm was later augmented with several steps of the EDSM algorithm prior to SAT solver execution: the DFASAT algorithm.[25] This reduces the search space of the problem, but loses the minimality guarantee. Another way of reducing the search space was proposed in[26] by means of new symmetry-breaking predicates based on the breadth-first search algorithm: the sought DFA's states are constrained to be numbered according to the BFS algorithm launched from the initial state. This approach reduces the search space by up to ${\displaystyle C!}$ by eliminating isomorphic automata.
## Notes
1. ^ a b
2. ^
3. ^
4. ^ Gouda, Prabhakar, Application of Finite automata
5. ^ Mogensen, Torben Ægidius (2011). "Lexical Analysis". Introduction to Compiler Design. Undergraduate Topics in Computer Science. London: Springer. p. 12. doi:10.1007/978-0-85729-829-4_1. ISBN 978-0-85729-828-7.
6. ^ John E. Hopcroft and Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Reading/MA: Addison-Wesley. ISBN 0-201-02988-X.
7. ^ a b Lawson (2004) p.129
8. ^ Sakarovitch (2009) p.228
9. ^ Lawson (2004) p.128
10. ^ a b Grusho, A. A. (1973). "Limit distributions of certain characteristics of random automaton graphs". Mathematical Notes of the Academy of Sciences of the USSR. 4: 633–637. doi:10.1007/BF01095785. S2CID 121723743.
11. ^ a b Cai, Xing Shi; Devroye, Luc (October 2017). "The graph structure of a deterministic automaton chosen at random". Random Structures & Algorithms. 51 (3): 428–458. arXiv:1504.06238. doi:10.1002/rsa.20707. S2CID 13013344.
12. ^ Carayol, Arnaud; Nicaud, Cyril (February 2012). Distribution of the number of accessible states in a random deterministic automaton. STACS'12 (29th Symposium on Theoretical Aspects of Computer Science). Vol. 14. Paris, France. pp. 194–205.
13. ^ Sakarovitch (2009) p.105
14. ^ Lawson (2004) p.63
15. ^
16. ^ Lawson (2004) p.46
17. ^ a b Gold, E. M. (1978). "Complexity of Automaton Identification from Given Data". Information and Control. 37 (3): 302–320. doi:10.1016/S0019-9958(78)90562-4.
18. ^ De Vries, A. (28 June 2014). Finite Automata: Behavior and Synthesis. ISBN 9781483297293.
19. ^ Lang, Kevin J. (1992). "Random DFA's can be approximately learned from sparse uniform examples". Proceedings of the fifth annual workshop on Computational learning theory - COLT '92. pp. 45–52. doi:10.1145/130385.130390. ISBN 089791497X. S2CID 7480497.
20. ^ Oncina, J.; García, P. (1992). "Inferring Regular Languages in Polynomial Updated Time". Pattern Recognition and Image Analysis. Series in Machine Perception and Artificial Intelligence. Vol. 1. pp. 49–61. doi:10.1142/9789812797902_0004. ISBN 978-981-02-0881-3.
21. ^ Lang, Kevin J.; Pearlmutter, Barak A.; Price, Rodney A. (1998). "Results of the Abbadingo one DFA learning competition and a new evidence-driven state merging algorithm". Grammatical Inference (PDF). Lecture Notes in Computer Science. Vol. 1433. pp. 1–12. doi:10.1007/BFb0054059. ISBN 978-3-540-64776-8.
22. ^
23. ^ Lucas, S.M.; Reynolds, T.J. (2005). "Learning deterministic finite automata with a smart state labeling evolutionary algorithm". IEEE Transactions on Pattern Analysis and Machine Intelligence. 27 (7): 1063–1074. doi:10.1109/TPAMI.2005.143. PMID 16013754. S2CID 14062047.
24. ^ Heule, M. J. H. (2010). Exact DFA Identification Using SAT Solvers. Grammatical Inference: Theoretical Results and Applications. ICGI 2010. Lecture Notes in Computer Science. Vol. 6339. pp. 66–79. doi:10.1007/978-3-642-15488-1_7.
25. ^ Heule, Marijn J. H.; Verwer, Sicco (2013). "Software model synthesis using satisfiability solvers". Empirical Software Engineering. 18 (4): 825–856. doi:10.1007/s10664-012-9222-z. hdl:2066/103766. S2CID 17865020.
26. ^ Ulyantsev, Vladimir; Zakirzyanov, Ilya; Shalyto, Anatoly (2015). "BFS-Based Symmetry Breaking Predicates for DFA Identification". Language and Automata Theory and Applications. Lecture Notes in Computer Science. Vol. 8977. pp. 611–622. doi:10.1007/978-3-319-15579-1_48. ISBN 978-3-319-15578-4.
|
2022-05-23 22:30:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 43, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7484131455421448, "perplexity": 845.079071674289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00240.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/a-charge-of-876mc-is-placed-at-each-corner-of-a-square-0420m-on-a-side-a-determine-the-mag-q3578765
|
## Find the Magnitude of the Force on Each Charge
A charge of 8.76 mC is placed at each corner of a square 0.420 m on a side. (A) Determine the magnitude of the force on each charge. (B) Determine the direction of the force on each charge: (1) from the center of the square towards the charge, (2) from the charge towards the center of the square, or (3) another direction.
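An illustrative superposition sketch (not Chegg's official solution): place the target charge at the origin and sum the Coulomb forces from the other three corners, taking the stated mC at face value.

```python
import numpy as np

k, q, a = 8.99e9, 8.76e-3, 0.420      # N·m²/C², C (8.76 mC), m

target = np.array([0.0, 0.0])         # the corner we analyse
others = [np.array([a, 0.0]), np.array([0.0, a]), np.array([a, a])]

F = np.zeros(2)
for pos in others:
    r = target - pos                  # repulsive: force points from source to target
    F += k * q**2 * r / np.linalg.norm(r)**3

print(f"|F| = {np.linalg.norm(F):.3g} N")   # ≈ 7.5e6 N
# The net force lies along the diagonal, pointing away from the square's
# center: direction (1), from the center of the square towards the charge.
```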
|
2013-05-25 19:29:57
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8408920168876648, "perplexity": 253.00804578809525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706121989/warc/CC-MAIN-20130516120841-00053-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://kb.osu.edu/dspace/handle/1811/14029
|
# ROTATIONAL SPECTRUM AND STRUCTURE OF THE 1-ALKENES $[H_{2}C=CH(C_{n}H_{2n+1}) n=3-6]$
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/14029
Title: ROTATIONAL SPECTRUM AND STRUCTURE OF THE 1-ALKENES $[H_{2}C=CH(C_{n}H_{2n+1}) n=3-6]$
Creators: Lugez, Catherine L.; Suenram, R. D.
Issue Date: 1997
Publisher: Ohio State University
Abstract: The microwave spectra of the two conformers, skew and cis, of 1-pentene, 1-hexene, 1-heptene and 1-octene have been observed and assigned for the first time using a pulsed-beam Fabry-Perot cavity microwave spectrometer. The a-, b- and c-type transitions were observed, and the ground-state rotational constants were determined for the different species. Information on the equilibrium geometry of these molecules was extracted from the values obtained for the rotational constants.
Description: Author Institution: Optical Technology Division, National Institute of Standards and Technology
URI: http://hdl.handle.net/1811/14029
Other Identifiers: 1997-TB-04
|
2017-08-18 18:13:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6318599581718445, "perplexity": 6657.903138518826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105086.81/warc/CC-MAIN-20170818175604-20170818195604-00099.warc.gz"}
|
http://www.komal.hu/lap/2002-ang/p3432.e.shtml
|
Mathematical and Physical Journal
for High Schools
Issued by the MATFUND Foundation
# English Issue, December 2002
## Solutions of problems for physics P
P. 3432. Identical metal spheres are placed into the vertices of a regular tetrahedron. The spheres do not touch. When a single sphere (A) is given a charge of 20 nC it reaches the same potential as when A and another sphere are given 15 nC each. What equal charge should be given to A and to two other spheres, and what equal charge to all four spheres so that the potential of sphere A is always the same?
(6 points)
Submitted by: Bihary Zsolt, Irvine, California
Solution. If, in a certain tetrahedral arrangement, the electric charges are in equilibrium on the metal spheres, then multiplying all charges proportionally (say $\lambda$ times) leaves the equilibrium unchanged, while the original values of the field strengths and potentials increase $\lambda$ times. It is also true that if we add together two equilibrium arrangements, the result is also an equilibrium arrangement, in which the field strengths and potentials at every point are the sums of the original vectors or scalars. These two characteristics can together be described as the principle of superposition.
Let us term the first arrangement (with only one charged sphere) I, and the second (where A and another sphere are charged) II. Rotate arrangement II by 120° about the line connecting the centre of sphere A and the centre of the tetrahedron (call this arrangement III), and rotate it by -120° (call this arrangement IV).
After this, let us take the superposition of the $\lambda_1$-fold value of arrangement I and the $\lambda_2$-fold values of arrangements II and III, and choose the coefficients so that three spheres carry the same charge and the potential of sphere A is exactly the same ($U$) as in arrangement I. These requirements are fulfilled if
$20\lambda_1 + 15\lambda_2 + 15\lambda_2 = 15\lambda_2$, and $\lambda_1 U + \lambda_2 U + \lambda_2 U = U$.
The solution of the above equation system is $\lambda_1=-\frac{3}{5}$, $\lambda_2=\frac{4}{5}$, and accordingly the three charged spheres will have a charge of 12 nC each.
Similarly, taking the proper superposition of arrangement I and arrangement II+III+IV, an arrangement can be obtained where all four spheres have the same charge and the potential of A is U. In this case the charge on each sphere is 10 nC.
Based on the paper of Pápai Tivadar
(11th form student of Dráva Völgye Secondary School, Barcs)
Note. If the distance between the spheres were much greater than their radii, their electrostatic field could be obtained in the point-charge approximation. With the numbers given this is not feasible: the sizes of the spheres and their distances are commensurable, and therefore the spheres polarize each other. The calculation of the charge distributions and the electric field is very difficult, but fortunately it is not needed if we follow the above considerations.
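A quick numerical check of the two superpositions (an illustrative sketch only; the linear systems encode "equal charges" and "potential of A equals U"):

```python
import numpy as np

b = np.array([0.0, 1.0])
# Three equal charges: 20*l1 + 30*l2 = 15*l2  and  l1 + 2*l2 = 1.
l1, l2 = np.linalg.solve(np.array([[20.0, 15.0], [1.0, 2.0]]), b)
print(l1, l2, 15 * l2)     # -0.6  0.8  12.0 -> 12 nC on each of three spheres

# Four equal charges: 20*m1 + 45*m2 = 15*m2  and  m1 + 3*m2 = 1.
m1, m2 = np.linalg.solve(np.array([[20.0, 30.0], [1.0, 3.0]]), b)
print(15 * m2)             # 10.0 -> 10 nC on each of the four spheres
```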
|
2018-01-19 05:23:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6471883654594421, "perplexity": 679.2505667395345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887746.35/warc/CC-MAIN-20180119045937-20180119065937-00265.warc.gz"}
|
https://questioncove.com/updates/4d4a273c1eddb764c310b129
|
Mathematics
OpenStudy (anonymous):
x^(3/2) - 27 = 0. Does anyone understand how to find the real solution of this equation?
OpenStudy (anonymous):
@Tracy 0505 Take the log of the equation, so you would get log x^(3/2) = log 27 (I took the 27 over). Then apply the rule of logs so you get (3/2) log x = log 27. Thus x = e^((2/3) log 27)
OpenStudy (anonymous):
Thanks I think it helps a little.
OpenStudy (anonymous):
slightly simpler x^3/2 =27 (x^3/3)^2/3)=x=27^2/3 don't need to use logs in this case
OpenStudy (anonymous):
typo - (x^3/2)^2/3)=x=27^2/3
OpenStudy (anonymous):
You take the reciprocal and it cancels on the left; on the right, what do you do with it?
OpenStudy (anonymous):
27 cube-rooted, then squared?
OpenStudy (anonymous):
By the rules of powers, (x^(3/2))^(2/3) = x^((3/2)*(2/3)) = x^1 = x, and 27^(2/3) = (cube root of 27) squared = 3^2 = 9
OpenStudy (anonymous):
Thanks so much for your time John.
By the way, @Desha, while the log approach is right, remember that the base of log is, by default, 10. So you either $$x = 10^{\frac{2}{3}\log_{10} 27}$$ Or $$x = e^{\frac{2}{3}\ln 27}$$ Where ln is the natural log, $$log_e$$
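A one-line numerical check of the thread's conclusion (illustrative only):

```python
# The real solution of x**(3/2) = 27 is x = 27**(2/3) = 9.
x = 27 ** (2 / 3)
print(x, x ** 1.5)   # 9.0 and 27.0, up to floating-point rounding
```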
|
2021-04-23 04:47:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7191222906112671, "perplexity": 8925.11978825258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00558.warc.gz"}
|
https://www.findfilo.com/chemistry-question-answers/payload-is-defined-as-the-difference-between-the-mmbd
|
Payload is defined as the difference between the mass of displaced air and the mass of the balloon. Calculate the payload when a balloon of radius 10 m and mass 100 kg is filled with helium at 1.66 bar at 27 °C. (Density of air = 1.2 kg m⁻³ and R = 0.083 bar dm³ K⁻¹ mol⁻¹.)
Solution: The volume of the balloon is $V = \frac{4}{3}\pi r^{3}$.
The radius of the balloon is 10 m.
Hence, the volume of the balloon is $\frac{4}{3}\pi (10)^{3} \approx 4.19 \times 10^{3}\ \text{m}^{3}$.
The mass of displaced air is obtained from the product of volume and density. It is $4.19 \times 10^{3}\ \text{m}^{3} \times 1.2\ \text{kg m}^{-3} \approx 5027$ kg.
The number of moles of gas present is $n = \frac{PV}{RT} = \frac{1.66 \times 4.19 \times 10^{6}}{0.083 \times 300} \approx 2.79 \times 10^{5}$ mol.
Note: Here, the unit of volume is changed from $\text{m}^{3}$ to $\text{dm}^{3}$ ($1\ \text{m}^{3} = 10^{3}\ \text{dm}^{3}$) to match the units of R.
Mass of helium present is obtained by multiplying the number of moles with the molar mass ($4\ \text{g mol}^{-1}$). It is $\approx 1117$ kg.
The mass of the filled balloon is the sum of the mass of the empty balloon and the mass of He. It is $\approx 100 + 1117 = 1217$ kg.
Payload = mass of displaced air $-$ mass of balloon $\approx 5027 - 1217 \approx 3810$ kg.
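An illustrative recomputation of the whole calculation (values rounded as above):

```python
import math

r, m_balloon = 10.0, 100.0          # m, kg
P, T, R = 1.66, 300.0, 0.083        # bar, K, bar·dm³·K⁻¹·mol⁻¹
rho_air, M_He = 1.2, 4e-3           # kg·m⁻³, kg·mol⁻¹

V = 4 / 3 * math.pi * r**3          # balloon volume, m³
m_air = rho_air * V                 # mass of displaced air, kg
n_He = P * (V * 1e3) / (R * T)      # ideal gas law; V converted to dm³
m_filled = m_balloon + n_He * M_He
print(f"payload = {m_air - m_filled:.0f} kg")   # ≈ 3810 kg
```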
|
2021-06-17 20:10:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612209558486938, "perplexity": 706.1510222989657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487633444.37/warc/CC-MAIN-20210617192319-20210617222319-00386.warc.gz"}
|
https://quantumcomputing.stackexchange.com/questions/9120/how-to-explain-that-i-get-a-value-lower-than-the-smallest-possible-through-minim/9137
|
# How to explain that I get a value lower than the smallest possible through minimization procedure in VQE?
As far as I know, after minimization I should obtain a value satisfying $$E_{0}\le \frac{\langle \psi (\theta)|H|\psi (\theta)\rangle}{\langle \psi (\theta)|\psi (\theta)\rangle}$$, where $$E_{0}$$ is the ground-state eigenvalue of the Hamiltonian $$H$$. Sometimes the algorithm gives a value close to $$E_{0}$$, but far more often I get values lower than that.
I use a hardware-efficient ansatz for initial-state generation.
Hamiltonian consists of Pauli-strings $$H=\sum_{ijkl}\sigma_i\sigma_j\sigma_k\sigma_l$$.
For parameters optimization I use "COBYLA" and "Nelder-Mead" methods.
Could it be that the ansatz produce a state space which is not large enough?
• hard to know the exact reason without knowing the details. It's probably due to numerical errors if you get values lower than $E_0$ but not by much. Isolate the state found by the optimizer that gives the problem and try computing the expectation values in different ways.
– glS
Dec 9 '19 at 17:12
• I tried running the algorithm with a smaller number of Pauli strings in the Hamiltonian, and found that the simpler the Hamiltonian, the more correct the result. Could you explain what you mean by "Isolate the state", please? Dec 9 '19 at 19:14
• The most likely reason in practice (assuming your implementation is error-free) is noise. I'll write a more detailed response tomorrow. Dec 10 '19 at 4:39
• I mean that if the algorithm is finding the value of $\theta$ corresponding to what it thinks is the minimum, you can investigate directly what is going wrong in the numerics using that value
– glS
Dec 10 '19 at 21:07
The proof of the variational theorem (the theorem that the ground state energy is the lowest possible energy you can get from $$\frac{\langle \psi|H|\psi\rangle}{ \langle \psi | \psi \rangle}$$) is very simple: https://en.wikipedia.org/wiki/Variational_method_(quantum_mechanics)
If you get a lower energy, it means you don't actually have $$\frac{\langle \psi|H|\psi\rangle}{ \langle \psi | \psi \rangle}$$. For example, if the quantum hardware doesn't give you $$H|\psi\rangle$$ but instead gives you $$H|\psi\rangle + \epsilon |\psi\rangle$$, where $$\epsilon$$ is some non-zero error, then when you plug everything into the proof of the variational theorem you may find that you are no longer guaranteed to always get energies equal to or higher than the ground-state energy.
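A toy numerical illustration (not from the thread) of how finite-shot noise alone produces estimates below $$E_0$$, even for the exact ground state:

```python
import numpy as np

rng = np.random.default_rng(0)

E0 = -1.0      # exact ground-state energy of a toy Hamiltonian
shots = 1000
for trial in range(5):
    # Pretend each shot measures an eigenvalue; even for the exact ground
    # state, a finite-shot estimator only scatters around E0.
    samples = E0 + rng.normal(0.0, 0.5, size=shots)
    print(f"trial {trial}: estimated energy = {samples.mean():+.4f}")
# Roughly half of such estimates land below E0 = -1.0. The variational bound
# constrains the exact expectation value, not a noisy estimate of it.
```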
|
2021-10-17 04:15:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7864699363708496, "perplexity": 358.7549433626711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00648.warc.gz"}
|
https://gmatclub.com/forum/there-is-a-sequence-a-n-where-n-is-a-positive-integer-such-that-a-n-232200.html
|
# There is a sequence A(n) where n is a positive integer such that A(n+1
Intern
Joined: 05 Jun 2016
Posts: 18
GMAT 1: 760 Q51 V41
There is a sequence A(n) where n is a positive integer such that A(n+1 [#permalink]
There is a sequence A(n) where n is a positive integer such that A(n+1) = 10 + 0.5A(n). Which of the following is closest to A(1,000)?
A. 15
B. 18
C. 20
D. 25
E. 50
Originally posted by hwang327 on 14 Jan 2017, 14:21.
Last edited by Bunuel on 15 Jan 2017, 01:32, edited 1 time in total.
Renamed the topic and edited the question.
Senior Manager
Joined: 13 Oct 2016
Posts: 367
GPA: 3.98
Re: There is a sequence A(n) where n is a positive integer such that A(n+1 [#permalink]
15 Jan 2017, 01:00
hwang327 wrote:
There is a sequence A(n) where n is a positive integer such that A(n+1) = 10 + 0.5A(n). Which of the following is closest to A(1,000)?
A. 15
B. 18
C. 20
D. 25
E. 50
Any explanations would be great.
Source: MathRevolution
Hi
This is combination of geometric and arithmetic sequences.
n>0, min n = 1.
$$a_2 = 10 + \frac{1}{2}a_1$$
$$a_3 = 10 + \frac{1}{2}a_2 = 10 + \frac{1}{2}(10 + \frac{1}{2}a_1) = 10 + 5 + \frac{1}{4}a_1$$
$$a_4 = 10 + \frac{1}{2}a_3 = 10 + \frac{1}{2}(10 + 5 + \frac{1}{4}a_1) = 10 + 5 + \frac{5}{2} + \frac{1}{8}a_1$$
$$a_5 = 10 + \frac{1}{2}a_4 = 10 + 5 + \frac{5}{2} + \frac{5}{4} + \frac{1}{16}a_1$$
...
$$a_n = 10 + 5 + \frac{5}{2} + \frac{5}{4} + ... + \frac{5}{2^{n-3}} + \frac{a_1}{2^{n-1}}$$
When n=1000 our fraction $$\frac{1}{2^{999}}$$ is close to 0.
$$5 + \frac{5}{2} + \frac{5}{4} + ... + \frac{5}{2^{997}}$$ Since n is large (1000), we can apply the logic of an infinite geometric series with |r| < 1.
$$S = \frac{5}{1-1/2} = 5*2 = 10$$
$$a_{1000} ≈ 10 + 10 + 0 = 20$$
Hope this helps
Regards
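A quick numerical check of this limit (an illustrative sketch, independent of the starting value $$A_1$$):

```python
# Iterate A(n+1) = 10 + 0.5*A(n); the fixed point is A = 20 for any A(1).
for a1 in (0.0, 10.0, 100.0):
    a = a1
    for _ in range(999):          # from A(1) up to A(1000)
        a = 10 + 0.5 * a
    print(a1, "->", a)            # every start converges to 20.0
```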
Math Expert
Joined: 02 Aug 2009
Posts: 6961
There is a sequence A(n) where n is a positive integer such that A(n+1 [#permalink]
15 Jan 2017, 04:49
hwang327 wrote:
There is a sequence A(n) where n is a positive integer such that A(n+1) = 10 + 0.5A(n). Which of the following is closest to A(1,000)?
A. 15
B. 18
C. 20
D. 25
E. 50
Hi,
A point before the solution..
The Q is flawed in that there is no value of $$A_1$$ given.
Solution..
$$A_1=10 \text{ (say)}, \; A_2=10+0.5\cdot 10=10+5, \; A_3=10+0.5\cdot A_2=10+5+2.5=10+\frac{10}{2}+\frac{10}{4}, \; \ldots$$
So n = 1,000 can be treated as an infinite series.
Ans $$=\frac{a}{1-r}=\frac{10}{1-1/2}=\frac{10}{1/2}=20$$
Intern
Joined: 30 Jan 2016
Posts: 7
Re: There is a sequence A(n) where n is a positive integer such that A(n+1 [#permalink]
16 Jan 2017, 17:03
chetan2u wrote:
hwang327 wrote:
There is a sequence A(n) where n is a positive integer such that A(n+1) = 10 + 0.5A(n). Which of the following is closest to A(1,000)?
A. 15
B. 18
C. 20
D. 25
E. 50
Hi,
A point before the solution..
The Q is flawed in that there is no value of $$A_1$$ given.
Solution..
$$A_1=10, A_2=10+0.5*10=10+5, A_3=10+5+0.5*5=10+5+0.25=10+10/2+10/4+.....$$
So 1000 can be taken as infinite series..
Ans =$$\frac{a}{(1-r)}=10/(1-1/2)=10/(1/2)=20$$
Can you please explain to me how you got to the 10/(1-1/2) part in the last equation? I can not seem to trace the origin of the 1/2 part and why that expression is the divisor of 10.
Thank you.
Manager
Joined: 22 May 2015
Posts: 106
Re: There is a sequence A(n) where n is a positive integer such that A(n+1 [#permalink]
16 Jan 2017, 18:24
Given : A(n+1) = 10 + A(n)/2
A(2) = 10+ A(1)/2
A(3) = 10+ A(2)/2 = 15+A(1)/4
A(4) = 10+ A(3)/2 = 17.5+A(1)/8
A(5) = 10+ A(4)/2 = 18.75+A(1)/16
A(6) = 10+A(5)/2 = 19.375+A(1)/32
a(7) = 19.6875 + a(1)/64
So A(1000) = 19.999... + A(1)/2^999; the second term can be ignored, and the closest answer is 20.
Intern
Joined: 30 Jan 2016
Posts: 7
Re: There is a sequence A(n) where n is a positive integer such that A(n+1 [#permalink]
19 Jan 2017, 14:58
I see now: throughout all the equations you carry A(1) along, and in the end its contribution is negligible. Thanks a million!
|
2018-10-20 06:26:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8118096590042114, "perplexity": 3429.829206457486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512592.60/warc/CC-MAIN-20181020055317-20181020080817-00488.warc.gz"}
|
https://sdss-marvin.readthedocs.io/en/2.5.0/tools/deprecated/images.html
|
# Image Utilities
Warning
Deprecated since version 2.3.0: These utility functions have been deprecated. Use Utility Functions instead.
If you want to grab the postage stamp PNG cutout images of the MaNGA galaxies, Marvin currently provides a few ways of doing so:
• By PlateID: Returns a list of images of galaxies observed on a given plate.
• By Target List: Returns a list of images from an input list of targets.
• By Random Chance: Returns a random set of images within a given MPL.
All image utilities behave in the same way. Each function can be used in one of three ways:
• To navigate and retrieve paths to images in your local SAS.
• To retrieve URL paths to image locations on the Utah SAS.
• To download images from the Utah SAS into your local SAS.
Each function accepts three optional keyword arguments which determine what it returns. All of the Marvin Image utility functions use sdss_access under the hood, and build paths using a combination of the environment variables SAS_BASE_DIR, MANGA_SPECTRO_REDUX, and an internal rsync REMOTE_BASE. These keywords simply toggle how to construct those paths and/or download.
• mode:
The Marvin config mode being used. Defaults to the marvin.config.mode. When in local mode, Marvin navigates paths/images in your local SAS filesystem. When in remote mode, Marvin calls Utah to retrieve image lists there. When in auto mode, the functions default to remote mode. Use of local mode must be explicitly set.
• as_url:
A boolean that, when set to True, converts the paths into URL paths. Default is False. When in local mode, paths get converted to the SAS url https://data.sdss.org/sas/. When in remote mode, paths get converted into an rsync path https://[email protected]/sas/. When False, the functions generate paths based on your MANGA_SPECTRO_REDUX.
• download:
A boolean that, when set to True, downloads all the images into your local SAS. Only works in remote mode. Attempting to download in local mode will result in a stern warning!
See Image Utilities for the reference to the basic utility functions we provide.
A secret fourth way of downloading images is via downloadList. See Via Explicit Call in Downloading Objects and downloadList()
Once you have downloaded images into your local SAS, you can easily display them using the showImage utility function, described below. Or if you’d rather not download them, showImage also works remotely. See Displaying Images.
## Common Usage
The two most common uses will be to download images from Utah to your local system, and to get paths to your local images. See the sections below for full, and specific, examples of all uses.
• syntax to download: image_utility_name(input, mode='remote', download=True)
• syntax to search locally: image_utility_name(input, mode='local')
## By Target List
getImagesByList returns a list of image paths from a given input list of ids. Ids can be either plateifus, or manga-ids.
from marvin.utils.general.images import getImagesByList
# make a list of targets, can be plateifus or manga-ids
plateifus = ['8485-1901', '7443-12701']
# download the images for the targets in my list from Utah into my local SAS
images = getImagesByList(plateifus, mode='remote', download=True)

# search my local SAS filesystem for images in the input list
images = getImagesByList(plateifus, mode='local')
print(images)
['/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/1901.png',
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/12701.png']
# convert my local file image paths into the SAS URL paths
images = getImagesByList(plateifus, mode='local', as_url=True)
print(images)
['https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/1901.png',
'https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/12701.png']
## By PlateID
getImagesByPlate returns a list of image paths from a given plateid
from marvin.utils.general.images import getImagesByPlate
plate = 8485
# download the images for plate 8485 from Utah into my local SAS
images = getImagesByPlate(plate, mode='remote', download=True)

# search my local SAS filesystem for images connected to plate 8485
# these are my local images
images = getImagesByPlate(plate, mode='local')
print(images)
['/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12701.png',
....
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9102.png']
# convert my local file image paths into the SAS URL paths
images = getImagesByPlate(plate, mode='local', as_url=True)
print(images)
['https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12701.png',
....
'https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9102.png']
# generate paths for the image files (located on the Utah SAS) for plate 8485
# these are images located at Utah but generated with my local SAS_BASE_DIR (notice the thumbnails)
images = getImagesByPlate(plate, mode='remote')
print(images)
['/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12701.png',
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12701_thumb.png',
....
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9102.png',
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9102_thumb.png']
# generate rsync paths for the image files (located on Utah SAS) for plate 8485
images = getImagesByPlate(plate, mode='remote', as_url=True)
print(images)
['https://[email protected]/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12701.png',
'https://[email protected]/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12701_thumb.png'
....
'https://[email protected]/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9102.png',
'https://[email protected]/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9102_thumb.png']
## By Random Chance
getRandomImages returns a list of random images for a given MPL. The default number returned is 10.
from marvin.utils.general.images import getRandomImages
# return 3 random images from my local SAS filesystem
images = getRandomImages(num=3, mode='local')
print(images)
['/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9101.png',
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/1902.png',
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/3702.png']
# get the URLs for 5 random images
images = getRandomImages(num=5, mode='local', as_url=True)
print(images)
['https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12704.png',
'https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/3701.png',
'https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/6101.png',
'https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/12701.png',
'https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/6103.png']
## Displaying Images
Once you have downloaded IFU PNG images into your local SAS using any of the above utility functions, you may display them using the showImage utility function. This function quickly and coarsely opens and displays an image as a PIL Image object (using the Python Imaging Library, PIL/Pillow, package). The image will be displayed and the image object is also returned. Once the image object is returned, you can manipulate the image as you see fit.
When acting in mode=local, showImage will attempt to locate the image file from your local SAS. In mode=remote, showImage will attempt to grab the requested image file from the Utah SAS. In mode=auto, local mode is tried first, then remote mode.
See showImage() for the API reference.
from marvin.utils.general.images import showImage
# let's open the image for plateifu 8485-1901
image = showImage(plateifu='8485-1901')
print(image)
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=562x562 at 0x1142CE390>
# this file was opened locally via mode = auto
print(image.filename)
/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/1901.png
# open an image remotely via mode = auto
image = showImage(plateifu='7495-1902')
WARNING: Local mode failed. Trying remote.
print(image.filename)
https://data.sdss.org/sas/mangawork/manga/spectro/redux/v2_0_1/7495/stack/images/1902.png
# open an image via a path
# first get some paths to some local images
imagepaths = getRandomImages(num=3, mode='local')
print(imagepaths)
['/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/8485/stack/images/9101.png',
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/1902.png',
'/Users/Brian/Work/sdss/sas/mangawork/manga/spectro/redux/v2_0_1/7443/stack/images/3702.png']
# showImage only acts on one path a time
image = showImage(path=imagepaths[0])
# retrieve the image object so I can manipulate it, but don't show the image
image = showImage(plateifu='8485-1901', show_image=False)
# show the image without returning the image object
image = showImage(plateifu='8485-1901', return_image=False)
print(image)
None
End of line!
http://dkjo.cbeu.pw/gamess-vs-gaussian.html
# GAMESS vs Gaussian
GAMESS and Gaussian are two widely used electronic-structure packages, and comparisons between them come up constantly. The General Atomic and Molecular Electronic Structure System (GAMESS-UK) is a computer software program for computational chemistry. The original code split in 1981 into GAMESS-UK and GAMESS (US) variants, which now differ significantly. GAMESS (US) source code is available as source-available freeware, but is not open-source software, due to license restrictions.
Gaussian is an important tool for many chemists, but it has also been a center of controversy. Gaussian 16 was released early in 2017, and a binary compatible with the AVX2 extended instruction set has been made available. Typical benchmarking goals include scaling results for common models/methods in Gaussian 09 and scaling behaviour on different cluster systems.
A recurring technical point in the comparison is the basis set. Plane waves vs Gaussian basis sets, seen from the plane-wave side:
Advantages:
- independent of the nuclei positions (good for forces)
- no BSSE
- one parameter controls the basis set size
- orthogonal
- numerical efficiency through use of FFT
Disadvantages:
- large number of basis set elements needed
- necessary use of pseudo-potentials
- loss of ...
Several third-party tools interoperate with both packages. Open Babel converts coordinate formats; among its Z-matrix formats, zmt is a Gaussian-style Z-matrix and zmtmpc is a MOPAC-style Z-matrix. Chemcraft reads multi-step Gaussian jobs and presents them as a list of expanding nodes, each node representing an individual job in the file. MOLDEN can calculate electron density surfaces and electrostatic potential surfaces from the information in the output files of Gaussian or Firefly (PC GAMESS) calculations; Molden reads all the required information from the GAMESS/Gaussian output file. The Gaussian Interface for HyperChem automatically prepares a Gaussian input file for a molecular system constructed with HyperChem's modeling functions.
To reproduce Gaussian's B3LYP functional in Firefly/PC GAMESS, specify "DFT=B3LYP1". For thermochemistry at a temperature (e.g., 550 K) that is not in the default list, supply that temperature as an additional command-line argument to the thermo Perl script.
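To make the input-format difference concrete, here is a minimal sketch (my own illustration, not taken from either package's documentation) that writes the same RHF/STO-3G water single point as a Gaussian deck and as a GAMESS (US) deck; the geometry values and output filenames are arbitrary examples.
# write_decks.py -- emit equivalent Gaussian and GAMESS (US) inputs
gaussian_input = """\
# HF/STO-3G

water single point

0 1
O  0.0000  0.0000  0.1173
H  0.0000  0.7572 -0.4692
H  0.0000 -0.7572 -0.4692

"""

gamess_input = """\
 $CONTRL SCFTYP=RHF RUNTYP=ENERGY $END
 $BASIS  GBASIS=STO NGAUSS=3 $END
 $DATA
water single point
C1
O 8.0  0.0000  0.0000  0.1173
H 1.0  0.0000  0.7572 -0.4692
H 1.0  0.0000 -0.7572 -0.4692
 $END
"""

# filenames are arbitrary examples
with open("water.gjf", "w") as f:   # Gaussian job file
    f.write(gaussian_input)
with open("water.inp", "w") as f:   # GAMESS input deck
    f.write(gamess_input)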
https://gmatclub.com/forum/if-z-n-1-what-is-the-value-of-z-1-n-is-a-nonzero-integer-33912.html?fl=similar
If z^n=1, what is the value of z? 1. n is a nonzero integer : DS Archive
# If z^n=1, what is the value of z? 1. n is a nonzero integer
GMATT73 (22 Aug 2006, 05:09):
If z^n=1, what is the value of z?
1. n is a nonzero integer
2. z>0
Edited for clarity Thanx Gmatornot! Sorry for the confusion.
Last edited by GMATT73 on 22 Aug 2006, 07:30, edited 2 times in total.
Professor (22 Aug 2006, 05:18):
GMATT73 wrote:
If z^n=1, what is the value of z?
1. z is a nonzero integer
2. z>0
1. z could be -1, or 1 with n is not equal to 0. z could also be 2 with n=0.
2. z could be 1 (n=1,2,3,4,5,6.............), 2 (n=0), 3(n=0), 4(n=0), 5(n=0) and so on..
so E.
GMATT73 (22 Aug 2006, 05:36):
Professor wrote:
GMATT73 wrote:
If z^n=1, what is the value of z?
1. z is a nonzero integer
2. z>0
1. z could be -1, or 1 with n is not equal to 0. z could also be 2 with n=0.
2. z could be 1 (n=1,2,3,4,5,6.............), 2 (n=0), 3(n=0), 4(n=0), 5(n=0) and so on..
so E.
Looks like you overlooked a step professor.
fresinha12 (22 Aug 2006, 06:17):
this is E...
we don't know what n is; if n=0... then any value of z will yield 1...
GMATT73 (22 Aug 2006, 06:20):
fresinha12 wrote:
this is E...
we don't know what n is; if n=0... then any value of z will yield 1...
The OA and OE from the OG11 don't see it that way...
GMATT73 (22 Aug 2006, 07:25):
fresinha12 wrote:
just what does OG 11 say?
GMATT73 wrote:
fresinha12 wrote:
this is E...
we don't know what n is; if n=0... then any value of z will yield 1...
The OA and OE from the OG11 don't see it that way...
OE verbatim from the OG:
(1) From this it is known that n is a non-zero integer, and since z^n=1, it follows that z is either 1 or -1...... NOT sufficient.
(2) While it is known from this that z is positive, if n were 0, any positive value of z would satisfy z^n = 1... NOT sufficient
Taken together limits z to 1.
(C)
Trixy eh? I finally got it after keying in the OE..
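For the record, the OE's case analysis written out as a worked equation (real z, same logic as above, nothing new):
\[
\begin{aligned}
&\text{(1) } n\in\mathbb{Z},\ n\neq 0,\ z^n=1 \;\Rightarrow\; |z|=1 \;\Rightarrow\; z=1 \text{ or } z=-1 && \text{not sufficient}\\
&\text{(2) } z>0\text{: if } n=0,\text{ then } z^n=1 \text{ for every } z>0 && \text{not sufficient}\\
&\text{(1)+(2) } z>0 \text{ and } n\neq 0 \;\Rightarrow\; z=1 && \text{sufficient} \Rightarrow \text{(C)}
\end{aligned}
\]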
gmatornot (22 Aug 2006, 07:26):
There is a typo here...
If z^n=1, what is the value of z?
1. "z" should be "n": n is a nonzero integer
2. z>0
Q126 page 288
Professor (22 Aug 2006, 10:42):
gmatornot wrote:
There is a typo here...
If z^n=1, what is the value of z?
1. "z" should be "n": n is a nonzero integer
2. z>0
Q126 page 288
nowadays we are getting more questions with typos.
one more here too: http://www.gmatclub.com/phpbb/viewtopic ... ht=#232035
GMATT73 wrote:
If z^n=1, what is the value of z?
1. n is a nonzero integer
2. z>0
Edited for clarity Thanx Gmatornot! Sorry for the confusion.
gmatornot (22 Aug 2006, 10:44):
No problem GMATT73.. looks like you are really studying hard for the test! Hope you crack it....
GMATT73 (22 Aug 2006, 19:20):
gmatornot wrote:
No problem GMATT73.. looks like you are really studying hard for the test! Hope you crack it....
Thanks for the encouragement gmatornot. Actually, all I want to do is break 600. No super high expectations because I am not a wizard at Q. I've just been consistently doing problems from this website and textbooks (OG and Kaplan) while making/reviewing a categorized error log. My last score was a 590 (Q40 V30), not quite enough to get into my target school (600 mean), so it's time to fight for that last inch! See you in the forum.
Matt
https://answers.ros.org/answers/107581/revisions/
# Revision history
Hello chao,
I'm not 100% sure, but my guess is that you get so many scan messages that you cannot publish anything (your node processes the callback function all the time).
To verify that, use the command:
rostopic echo /cmd_vel
in your console. If you don't see any messages published to that topic, then it is obvious that you cannot publish.
Try doing this:
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <geometry_msgs/Twist.h>
ros::Publisher cmd_vel;
void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan_msg)
{
/** this is to verify whether i am able to obtain the data i need*/
ROS_INFO("I see: %f", scan_msg->angle_min);
ROS_INFO("I see: %f", scan_msg->time_increment);
for (int y=150; y<=155; y++)
{
ROS_INFO("I see: %f", scan_msg->ranges[y]);
}
for (int y=150; y<=1406; y++)
{
geometry_msgs::Twist move_cmd;
move_cmd.angular.z = 0.2;
move_cmd.linear.x = 0.5;
cmd_vel.publish(move_cmd);
}
}
int main(int argc, char **argv)
{
ros::init(argc, argv, "wall_listener");
ros::NodeHandle n;
// the publisher must be advertised before the callback can use it
cmd_vel = n.advertise<geometry_msgs::Twist>("cmd_vel", 1000);
ros::Subscriber sub = n.subscribe("/scan", 1000, scanCallback);
ros::spin();
return 0;
}
This is not the best programming practice and the code still needs a lot of organizing, but it makes more sense to me and hopefully you can understand the logic more easily this way. Now it says in your callback function that if you get a message from your laser, then publish cmd_vel commands.
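As a quick extra check, rostopic hz /scan will report how often the laser topic actually publishes; a very high rate supports the theory that the callback keeps the node busy.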
https://tex.stackexchange.com/questions/257764/siunitx-error-invalid-number-invalid-numerical-input-e
siunitx error: “invalid-number” Invalid numerical input 'e'
Well, I've been working on some tables, and for the number alignment I normally use the siunitx package, since Mico helped me with this question.
But with these new tables I've run into an error I don't understand how to solve.
siunitx error: "invalid-number" Invalid numerical input 'e'.
For immediate help type H <return>. \end{tabularx}
I found these questions about it: question 1 and question 2. Unfortunately it isn't the same case, and I haven't found clues leading me to the solution.
My MWE is:
\documentclass[fontsize=10pt,paper=letter,headings=small,bibliography=totoc,DIV=9,headsepline=true,titlepage=on]{scrartcl}
\usepackage[spanish,mexico]{babel}
\usepackage{xspace}
\usepackage{xkeyval}
\usepackage{array,multirow,multicol,rotating,tabularx,ragged2e,booktabs}
%\newcolumntype{Y}{>{\RaggedRight\arraybackslash\hspace{0pt}}X}
\newcolumntype{Y}{>{\RaggedRight\arraybackslash}X}
%\newcolumntype{C}{>{\centering\arraybackslash\hspace{0pt}}X}
\usepackage{rotating} % Paquete para rotar objetos flotantes
\usepackage{colortbl} % Paquete pata colorear tablas
\usepackage[per-mode=symbol]{siunitx} % Paquete para insertar unidades
\sisetup{
output-decimal-marker = {.},
group-minimum-digits = 4,
range-units = brackets,
list-final-separator = { \translate{and} },
list-pair-separator = { \translate{and} },
range-phrase = { \translate{to (numerical range)} },
}
\ExplSyntaxOn
\providetranslation [ to = Spanish ]
{ to~(numerical~range) } { a } % substitute the right word here
\ExplSyntaxOff
\begin{document}
\begin{table}[htbp]
\centering
\caption{Mercado de energía eléctrica en Norteamérica}
\label{tab:emna}
\begin{tabularx}{\linewidth}{@{}lYrYrYrYr @{}}
\toprule
País & Producción [\si{\giga\watt\hour}] & Fecha & Consumo [\si{\giga\watt\hour}] & Fecha & Exportaciones [\si{\giga\watt\hour}] & Fecha & Importaciones [\si{\giga\watt\hour}] & Fecha \\
\midrule
Canadá & 612000 & 2007 & 530000 & 2006 & 50120 & 2007 & 19660 & 2007 \\
Estados Unidos & 4167000 & 2007 & 3892000 & 2007 & 20140 & 2007 & 51400 & 2007 \\
México & 243300 & 2007 & 202000 & 2007 & 1278 & 2007 & 482.2 & 2007 \\
\bottomrule
\end{tabularx}
\end{table}
\end{document}
I tried to use the S column type in the middle and right columns, but I can't because of the error mentioned before. I tried using simply S without success, and later S[table-format=5.0], but that didn't work either. What's wrong with my tables?
Update
Although both answers were very interesting and useful, I'm afraid my problem persists. I can't add columns of type S to my tables, and I need them.
Now I add a table that currently shows the same problem; I used the column type Y in the meantime, but the result hasn't been satisfactory.
I guess one of the packages in my preamble is responsible; I'll see if I can detect it, because the MWE seems to work smoothly.
• If I take the code currently here, adjust to S columns and escape the column headers by adding brace groups then all is fine. Can you edit in a MWE that actually does show the issue, otherwise it will be impossible to solve. – Joseph Wright Aug 26 '15 at 6:11
• @JosephWright I work on it, for the moment the only option would be to place my full preamble, which would not be a MWE. So later (now I need to sleep a while), I'll try to see if I find a package that causes the error and update the MWE. Thank you. – Aradnix Aug 26 '15 at 7:44
• @Mico I tried to update the question because the error remains, instead of open a new one that later will be closed because is duplicated or simply considered as off-topic. I simply changed the table for the new one with the same error. The preamble is still the same than before. – Aradnix Aug 26 '15 at 7:45
• @Mico I apologize for the trouble I've generated. It's the 1st time I make a bounty and, from previous experience, I decided to do it this way instead of creating a new question. The reason was that it was never solved the problem as I have indicated. However, the 2 answers I received previously were very useful and I think that erasing is not a good idea. Finally are 2 very good suggestions that someone else could see and possibly use. In my opinion, the change was not radical in updating the question, all I did was change the table where a new issue that I asked the question again appeared. – Aradnix Aug 26 '15 at 20:42
• @Aradnix 'Don't use tabularx', or at least 'Don't expect siunitx columns to mess about with spacing'. They are designed to be as far as possible the size of the content. – Joseph Wright Aug 26 '15 at 21:13
(Re-wrote the answer after the OP changed the table in the MWE.)
The following solution lets you use the S column type for the four "GWh" columns and lets you use a tabularx environment (to assure that the width of the table is equal to \linewidth). The trick -- such as it is -- consists of using S for the numbers and C (a centered version of X) for the headers.
You'll observe that I've reorganized the table's header. Your original setup requires line-breaks for all four important header words -- Producción, Consumo, Exportaciones, and Importaciones. I think it's better to avoid (as much as possible) the hyphenation of such words. I left the square brackets around the GWh headers; however, they may not be needed.
(To simplify and streamline the preamble code, I've also removed all packages that don't appear to be essential to generating the table.)
\documentclass[fontsize=10pt,paper=letter,headings=small,bibliography=totoc,DIV=9,headsepline=true,titlepage=on]{scrartcl}
\usepackage[spanish,mexico]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{tabularx,booktabs}
\newcolumntype{C}{>{\centering\arraybackslash}X} % centered version of "X" column type
\newcommand\mc[1]{\multicolumn{2}{@{}C@{}}{#1}} % shortcut macro
\usepackage{siunitx} % Paquete para insertar unidades
\sisetup{
per-mode = symbol,
output-decimal-marker = {.},
group-minimum-digits = 4,
range-units = brackets,
list-final-separator = { \translate{and} },
list-pair-separator = { \translate{and} },
range-phrase = { \translate{to (numerical range)} },
}
\ExplSyntaxOn
\providetranslation [ to = Spanish ]
{ to~(numerical~range) } { a } % substitute the right word here
\ExplSyntaxOff
\begin{document}
\begin{table}
\caption{Mercado de energía eléctrica en Norteamérica}
\label{tab:emna}
\begin{tabularx}{\linewidth}{@{} l
*{2}{S[table-format=7.0]r}
S[table-format=5.0]r
S[table-format=5.1]r @{}}
\toprule
País & \mc{Producción} & \mc{Consumo} & \mc{Exportaciones} & \mc{Importaciones} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(l){8-9}
& [\si{\giga\watt\hour}] & Fecha & [\si{\giga\watt\hour}] & Fecha
& [\si{\giga\watt\hour}] & Fecha & [\si{\giga\watt\hour}] & Fecha \\
\midrule
Canadá & 612000 & 2007 & 530000 & 2006 & 50120 & 2007 & 19660 & 2007 \\
Estados Unidos & 4167000 & 2007 & 3892000 & 2007 & 20140 & 2007 & 51400 & 2007 \\
México & 243300 & 2007 & 202000 & 2007 & 1278 & 2007 & 482.2 & 2007 \\
\bottomrule
\end{tabularx}
\end{table}
\end{document}
Addendum: Here's the same table, but without the reorganization of the header material. The code is the same as above, except that a Y column type is used for four of the header cells.
....
\newcolumntype{Y}{>{\hspace{0pt}\RaggedRight\arraybackslash}X} % allow hyphenation
....
\begin{table}[htbp]
\setlength\tabcolsep{4pt}
\caption{Mercado de energía eléctrica en Norteamérica}
\label{tab:emna}
\begin{tabularx}{\linewidth}{@{}l
*{2}{S[table-format=7.0]r}
S[table-format=5.0]r
S[table-format=5.1]r @{}}
\toprule
País
& \multicolumn{1}{Y}{Producción [\si{\giga\watt\hour}]} & Fecha
& \multicolumn{1}{Y}{Consumo [\si{\giga\watt\hour}]} & Fecha
& \multicolumn{1}{Y}{Exportaciones [\si{\giga\watt\hour}]} & Fecha
& \multicolumn{1}{Y}{Importaciones [\si{\giga\watt\hour}]} & Fecha \\
\midrule
....
You don't need tabularx; the stock tabular* will do. I just abbreviated "Estados Unidos" into "EUA" so as to better fit the table in the available space.
Note that non-numerical input in S columns should be braced; this way, siunitx will not try to interpret the text as a number, which is the reason for the error message in the "Exportaciones" cell.
\documentclass[
fontsize=10pt,
paper=letter,
bibliography=totoc,
DIV=9,
titlepage=on
]{scrartcl}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[spanish,mexico]{babel}
\usepackage{booktabs}
\usepackage[per-mode=symbol]{siunitx} % Paquete para insertar unidades
\sisetup{
output-decimal-marker = {.},
group-minimum-digits = 4,
range-units = brackets,
list-final-separator = { \translate{and} },
list-pair-separator = { \translate{and} },
range-phrase = { \translate{to (numerical range)} },
}
\begin{document}
\begin{table}[htbp]
\centering
\caption{Mercado de energía eléctrica en Norteamérica}
\label{tab:emna}
\setlength{\tabcolsep}{1pt}% just a minimum
\begin{tabular*}{\linewidth}{
@{\extracolsep{\fill}}
l
S[table-format=7.0]
c
S[table-format=7.0]
c
S[table-format=5.0]
c
S[table-format=5.1]
c
@{}
}
\toprule
País & {Producción} & Fecha
& {Consumo} & Fecha
& {Exportaciones} & Fecha
& {Importaciones} & Fecha \\
& {(\si{\giga\watt\hour})} &
& {(\si{\giga\watt\hour})} &
& {(\si{\giga\watt\hour})} &
& {(\si{\giga\watt\hour})} & \\
\midrule
Canadá & 612000 & 2007 & 530000 & 2006 & 50120 & 2007 & 19660 & 2007 \\
EUA & 4167000 & 2007 & 3892000 & 2007 & 20140 & 2007 & 51400 & 2007 \\
México & 243300 & 2007 & 202000 & 2007 & 1278 & 2007 & 482.2 & 2007 \\
\bottomrule
\end{tabular*}
\end{table}
\end{document}
• Thanks for the advice and the very interesting comment about the S column type. I'll try to put the headers into braces henceforth, and I'll see if this trick solves the error in the other tables where I had the same problem. – Aradnix Aug 26 '15 at 21:00
This code works. I took the opportunity to improve your table: I don't think you really need a tabularx environment, so I replaced the Y column with a plain l. I also made the column heads two-lined where I thought it necessary, with the makecell package:
\documentclass[fontsize=10pt, paper=letter, headings=small, bibliography=totoc, DIV=9, headsepline=true, titlepage=on]{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage[spanish,mexico]{babel}
\usepackage{xspace}
\usepackage{xkeyval}
\usepackage{array,multirow,multicol,rotating,tabularx,ragged2e,booktabs}
\usepackage{makecell}
\usepackage{rotating} % Paquete para rotar objetos flotantes
\usepackage{colortbl} % Paquete pata colorear tablas
\usepackage[per-mode=symbol]{siunitx} % Paquete para insertar unidades
\sisetup{
output-decimal-marker = {.},
group-minimum-digits = 4,
range-units = brackets,
list-final-separator = { \translate{and} },
list-pair-separator = { \translate{and} },
range-phrase = { \translate{to (numerical range)} },
}
\ExplSyntaxOn
\providetranslation [ to = Spanish ]
{ to~(numerical~range) } { a } % substitute the right word here
\ExplSyntaxOff
\begin{document}
\begin{table}[htb]
\centering
\caption{Reservas y Recursos Prospectivos}
\toprule
\midrule
Convencional & 20589 & 18222 \\
Aguas Someras & 11374 & 7472 \\
Sureste & 11238 & 7472 \\
Norte & 136 & \\
Terrestre & 8818 & 5913 \\
Sur & 4379 & 5371 \\
Chicontepec & 3556 & \\
Burgos & 425 & \\
Resto Norte & 459 & 542 \\
Aguas Profundas & 397 & 4837 \\
Perdido & & 3013 \\
Holok-Han & 397 & 1824 \\
No Convencional & & 5225 \\
\midrule
Total & 20589 & 23447 \\
\bottomrule
\end{tabular}
\end{table}
\end{document}
• Thanks for the answer and the improvement of my table. I understand that I can avoid the usage of tabularx but unfortunately is not the same case with another tables. In all cases my problem is the error in siunitx that do not let me compile using the S columns in my tables like this one and others. – Aradnix Jul 31 '15 at 5:03
• Didn't you forget, in the S columns, to enclose the non-numeric cells between brackets? – Bernard Jul 31 '15 at 8:54
• Nope, I reviewed it a few times; that's the reason why I asked this. – Aradnix Jul 31 '15 at 21:39
• Could you post an example of non compiling table with the S qualifier (btw, does this one compile for you?)? – Bernard Jul 31 '15 at 22:14
• No, I tried with your example, but if I leave the cells in the table empty, my document crashes. If I put a dash it compiles, but with errors because of that. – Aradnix Aug 1 '15 at 5:54
First of all, in order to answer your question (fix your error): you have used a non-numerical cell in an S column; the offending cell reads (or begins with) "e". Just put this cell into curly braces.
Now in general and for your table: Do not use tabularx with numerical data. As your table is too big for the \linewidth, just reduce the width manually.
This would look like this:
% arara: pdflatex
\documentclass{scrartcl}% class line lost in extraction; scrartcl assumed
\usepackage[spanish,mexico]{babel}
\usepackage[utf8]{inputenc}
\usepackage{booktabs}
\usepackage{caption}
%\usepackage{rotating} % do not load that twice
\usepackage{siunitx}
\sisetup{group-minimum-digits = 4}
\begin{document}
\begin{table}[htbp]
\centering
\tabcolsep=1.33ex
\caption{Mercado de energía eléctrica en Norteamérica}
\label{tab:emna}
\begin{tabular}{@{}lS[table-format=7.0]cS[table-format=7.0]cS[table-format=5.0]cS[table-format=5.1]c@{}}
\toprule
País & {Producción} & Fecha & {Consumo} & Fecha & {Exptciones} & Fecha & {Imptciones} & Fecha \\
& {en \si{\giga\watt\hour}} & & {en \si{\giga\watt\hour}} & & {en \si{\giga\watt\hour}} & & {en \si{\giga\watt\hour}} & \\
\midrule
Canadá & 612000 & 2007 & 530000 & 2006 & 50120 & 2007 & 19660 & 2007 \\
EEUU & 4167000 & 2007 & 3892000 & 2007 & 20140 & 2007 & 51400 & 2007 \\
México & 243300 & 2007 & 202000 & 2007 & 1278 & 2007 & 482.2 & 2007 \\
\bottomrule
\end{tabular}
\end{table}
\end{document}
If you really want to use tabularx, you will find answers here: How to use siunitx and tabularx together?
Do not use brackets around units. That is wrong.
• Thanks for the suggestions. From what I see, and from @egreg's response and yours, what I need is to put the S column headers between curly braces. How do you suggest manually adjusting the space in this long table? Usually I use p{width}, but I don't know if there is a better way. – Aradnix Aug 26 '15 at 21:04
• @Aradnix It's the line \tabcolsep=1.33ex. Reduce or increase this value to your needs. The package showframe can help a lot here. For some automatism you should go with table* as egreg did. Do not smash the table too much. It would be better to have two of them or a sideways table or some other trick. – LaRiFaRi Aug 26 '15 at 22:13
• Is the acronym "EEUU" standard in (Mexican) Spanish? – Mico Aug 27 '15 at 4:32
• @Mico there are at least two different systems for creating acronyms. One duplicates letters to mark the plural, as in EEUU; the other uses the first letter of each word regardless of plurality, as in EUA. In the end it is a question of style, but both are correct. – Aradnix Aug 27 '15 at 4:55
• @Aradnix - Thanks for this explanation. I wasn't familiar with this system. – Mico Aug 27 '15 at 4:59
https://indico.tlabs.ac.za/event/109/timetable/?view=standard
# First Pan-African Astro-Particle and Collider Physics Workshop
Europe/Zurich
Description
The intent of the workshop is to provide a forum for exchange of information among African researchers in the fields of Astro-Particle and Collider Particle Physics. Prominent members of the International community will be invited to give overview presentations. Students and junior researchers will be given an opportunity to present.
The workshop will be fully virtual
The following topics are included:
Anomalies in Particle Physics
Astrophysical probes of physics beyond the Standard Model
Direct and indirect dark matter searches
Heavy Ion collisions
High-energy emission in astrophysical objects
Physics of the Standard Model
Physics beyond the Standard Model
Particle acceleration in astrophysical sources
All times are given in Central European Time
Plenary sessions (morning sessions)
https://cern.zoom.us/j/69378710193?pwd=cU1CTkptcEx4Ky83Y2JOeWk4R3lsUT09
Parallel sessions I, III and V (afternoon sessions):
https://cern.zoom.us/j/62178053285?pwd=dXpEQzZHMXV4Q1FuRnJNL1BUQWRoUT09
Parallel sessions II, IV and VI (afternoon sessions):
https://cern.zoom.us/j/65763090163?pwd=RUFBb1FDd1MrQXdQeE1YUTZaYWlYZz09
Participants
• Aatifa BARGACH
• ABDELFATAH CHARROUK
• Abdelilah Moussa
• Abdelkrim ZEGHARI
• Abdellah Tnourji
• abderrahim bouhouch
• Abdien Yousief
• Abdul W. Khanday
• Abdullah Mohamed
• Abhaya Kumar Swain
• Ableman Garwe
• Abner Wamocha
• Abraham Kibai
• Abubakar Mastour
• Abunie Zerihun
• Ahmed Ahmed
• Ahmed Ali
• Ahmed Eddymaoui
• Ahmed Elsayed
• Ahmed Faqihi
• Ahmed Nazef
• Aicha Chouiba
• Aimane CHEIKH
• Akintunde Oludayo
• Ali El Moussaouy
• Ali Hassan
• Ali Kakai
• ALIOU DIOUF
• Allison Felix Hughes
• Amani Besma BOUASLA
• Amare Abebe
• Aminata Diop
• Amine Ahriche
• Amira Khaled
• Amr El-Zant
• Anass Hminat
• Andreas Crivellin
• Andrew Chen
• Andriniaina Narindra Rasoanaivo
• ANIRBAN SAHA
• Ankur Chaubey
• Anna Franckowiak
• Antonio Renecle
• Anza-Tshilidzi Mulaudzi
• Arbia El abboubi
• Arlindo Fernando Cuco
• Armando Matavele
• Asmaa Shalaby
• Asogwa Moses
• Asrate Gaulle
• Athenkosi Siyalo
• Avijit k Ganguly
• Aya Beshr
• Aya El Boustani
• AYOUB BOUHMOUCHE
• Ayoub HMISSOU
• Ayoub Lakhal
• Aziza Zendour
• Azwinndini Muronga
• Azzeddine BENHAMIDA
• Bassim Taki
• Belen Gavela
• BEN TOUMI MOHAMMED YAAQOUB
• Benjamin Lieberman
• BENJAMIN WEKESA
• Bensenani Samia
• Betty Kibirige
• Bhuti Nkosi
• Bhuvaneshwari Kashi
• Blessed Ngwenya
• Blessing Mvana Nhlozi
• BOUBEKRAOUI MALIKA
• BOUNOUAS Ilham
• Boutaina Benhmimou
• Brahim Aitbenchikh
• Brahim AITOUAZGHOUR
• Brahim Ech-chykry
• BRIAN MANASI
• Bryan Thebeemang
• Bívar Chavango
• Carimo Lazima
• Carolina Sitoe
• Case Rijsdijk
• Chaimaâ Boualam
• Charles J. Fourie
• CHAYMAA Rouiouih
• Chaymae Darne
• Cheikh SENE
• CHERIEF HOURIA
• Chiara Aimè
• Chicouche Moncife
• Chinonso Onah
• CHRISTINE ODHIAMBO
• Claire Uhuru
• Cláudio Paulo
• Craig Rudolph
• Dandi Nemera
• Daniel Musyoka
• David Christian MBAH
• Davidson Rabetiarivony
• Debajit Bose
• Deborah Mmboga
• Deepak Kar
• Deeshani Mitra
• Dimakatso Maheso
• Doha Taj
• Donia Gamal
• DYSMAS KIBET
• El Abassi Abderrazaq
• El Hassan LAGHMICH
• EL Hassan Messaoudi
• Elias Malwa
• Eman Elgizawi
• Eman Reda
• Enoque Malate
• Ephraim Muli
• Ernest Mbili
• Esther Simuli
• Etiovaldo Fanequiço
• Eugene Idogbe
• Ezequias Nelson Miocha
• Ezzahni noura
• fabio happacher
• FAICAL BARZI
• Faissal BAHMANE
• Faith Mutunga
• FALY DIEYE
• Fanomezantsoa Arlivah ANDRIANTSARAFARA
• Fanomezantsoa RAZAFIMAHATRATRA
• Farida Ahmed
• Farnaz Kazi
• Fatima Abd Alrahman
• Fatima Bendebba
• Fatima-Zohra EL-HAFED
• Feraol Fana Dirirsa
• Fernando Carrió Argos
• Finn Stevenson
• FIRDOUS HAIDAR
• Francesco Capozzi
• Francisco Fenias Macucule
• Frank Miguel Ndloze
• Gabrijela Zaharijas
• Ganinshuti Pierre Damien KABYARE
• Gaogalalwe Mokgatitswane
• Geoff Beck
• GEOFRY AWUOR
• George Awuonda
• GHIZLANE EZ-ZOBAYR
• Gori Otieno
• Gregory Hillhouse
• Guilhermina Libanga
• Hafca Barhousse
• Hafiz Ahmed Ibrahim Mohamed
• Hajar BELMAHI
• Hamid Miskaoui
• hamza abouabid
• Hana Benslama
• Hanan SAIDI
• Hanane Riani
• Happy Sibusiso Ndlovu
• Hassan Abdalla
• Hassan Assalmi
• Hassan JALAL
• HASSANE HAMDAOUI
• Hassnae El Jarrari
• Hayam Yassin
• Henriette Oloo
• Hicham Outkou
• hind mensour
• Humphry Tlou
• Hélder Abílio Matsinhe
• Ibrahim Yagoub
• Ikrame Aamer
• Isobel Kolbe
• Jaco Brink
• Jacobus Diener
• Jamal OU AALI
• Jamiu Rabiu
• Jan Kisiel
• Javeria Makda
• Jean Du Plessis
• Jean Paul Latyr Faye
• JILALLI LOULIJAT
• Jocelino Nhabetse
• Joe Mburu
• Joseph Omojola
• Joseph Waweru
• Josephine Awuor
• Joshua Attih
• Joshua Choma
• JOSHUA PONDO
• Jothika R
• Júlia Pedro Nhanala
• Kaouthar BOUSBAA
• Karien du Plessis
• Karim Sobh Marey
• KARIMA MESSAOUDI
• Kartik Bhide
• Katharina Brand
• Kawtar El Bouzaidi
• Keitumetse Kekana
• Kenny KALE SAYI
• Kgomotso Monnakgotla
• KHAOULA EL AAOUITA
• KHAWLA ANOUARI
• kossi ATTITCHOU
• Kumera Assefa Tucho
• Kumera Tucho
• Lalenthra Fisher
• LAMAAOUNE MUSTAPHA
• Latifa El aissaoui
• Laura Buonincontri
• Lawrence Nyakango
• Lerato Baloyi
• Leïla Haegel
• Lisper Mwai
• Lungisani Phakathi
• Maciej Soltynski
• Magda Abd El Wahab
• Mai Elsawy
• Malak Ait Tamlihat
• Mame Gor Lo
• Manahil Mohammed Yousif Abdalla
• Mantile Leslie Lekala
• Mariama BALDE
• Mariama Ndiaye
• Marina Romani Shalabi Jenidi
• Mark Hertzberg
• Markus Boettcher
• Marouane Benhassi
• Marouane habib Heraiz
• Mary Wakuhi
• Maryem El hayany
• MATHEW MATHIA
• Matthew Fu
• Maximin Anicet RAVELONIAINA
• Mayar Magdy
• Mbark Berrouj
• Mbayang Gueye
• Mehdi Hajji
• Mendoso Manhinda
• Menilto Jeronimo
• Mercy Matsete
• Meriem Bendahman
• Meriem Djouala
• Merna Ibrahim
• Michael Backes
• MICHAEL NDIWA
• Michael Sarkis
• Michelle Bark
• MIHIR HEMANTKUMAR PATEL
• Milton Tembe
• Moamen Saleh
• mohamed abdelouahab
• Mohamed Ahmed Mohamed
• MOHAMED AIT SALAH
• Mohamed Amin Loualidi
• Mohamed Belfkir
• Mohamed BENALI
• mohamed dahmani della
• Mohamed Gouighri
• Mohamed Hassan
• Mohamed JAKHA
• Mohamed Krab
• Mohamed MISKAOUI
• MOHAMED OUALAID
• Mohamed Ouchemhou
• Mohamed OUHAMMOU
• Mohammed Boukidi
• Mohammed Bouta
• Mohammed Charkaoui
• Mohammed Chenini
• Mohammed EL QESSOUAR
• Mohammed ELAOUNI
• Mohammed Omer Khojali
• Mona Saeed
• Monica Barnard
• Mor DIOP
• MOROGO ALBRIGHT
• Mostafa Bousder
• Mostafa Mansour
• Mouhssine Majdoul
• Mounir Lahlali
• Mrunal Korwar
• Mubarak Abdallah
• Muhammed Atef
• Mukesh Kumar
• Mulweli Patience Mukwevho
• Mussa Abdala
• Mustapha Assalmi
• MUSTAPHA BIYABI
• mustapha chaoui
• Mustapha IDERAWUMI
• Mustapha OUCHEN
• Mwezi Koni
• NAKACH Farouk
• naoufal elaisati
• Natasha Lavis
• Nathan Boyles
• Ndeye Ndiaya Diaw
• NDOYE Fallou
• Neil McCauley
• Newton Nyathi
• Nidhi Tripathi
• Nihal Brahimi
• Nkosiphendule Njara
• Noe Jambo
• Nogaye NDIAYE
• Nouhaila INNAN
• Obiero Omondi
• Ochieng Agao
• Odírcia Zita
• Oluk Philip
• Omar ASMI
• Omar CHAHBOUN
• Omar EL BOUNAGUI
• Omena Idolor
• Onesimo Mtintsilana
• Osvaldo Quissico
• Otaiba Bahar
• Othmane Mouane
• Othmane Zehouani
• Ouachani Abderrahim
• oumaima kanibou
• Oumar Ndiaye
• Ousmane Ndour
• Oussama Elkhiar
• Papa macoumba Faye
• paul obasanjo
• Pearl Malete
• Pedro Laice
• Percy Cáceres
• Peter Jenni
• Phuti Rapheeha
• Piyush Joshi
• qiyu sha
• Quentin King
• R Jothika
• Rachid Ahl Laamara
• Rachid BENBRIK
• Rachid Mazini
• Rachit Sharma
• Rafik Er-Rabit
• RAHUL KUMAR
• Rajaa Cherkaoui El Moursli
• Rajae SAMMANI
• Rajeev Singh
• Rani osama Abdalaziz
• Rasmita Timalsina
• Reda Attallah
• Reem Mohamed
• Reham El-Kholy
• Ricardo Francisco Fariello
• Rihab Bamaarouf
• Ritchasse Mateus Malhango
• Rogério Langa
• Rugaya Ali
• Ryan Mckenzie
• sabrine el asri
• safaa mazzou
• safae tariq
• SAID EDDAHMANI
• Said Elakhal
• Said Mouslih
• Sakina Boudissa
• Salah-Eddine Dahbi
• Saleh Qutub
• Salma Sylla MBAYE
• Salwa Mohamed
• Samira Elghaayda
• Samuel Maunde
• samuel Musau
• Samwel Nandwa
• Sanae Ezzarqtouni
• Sanae Samsam
• Sanele Scelo Gumede
• SARAH ALI MOHAMED BASHEER
• Sarah White
• Sebastião António Uane Vilanculos
• Seblu Humne
• Selaiman Ridouani
• Shabeeb Alalawi
• Shafeeq Rahman Thottoli
• Shahd Yassen
• Shahinda Abd Almotagally
• SHASHANK PRAKASH
• Shell-may Liao
• Shimaa AbuZeid
• Shoaib Munir
• Shreesh Sahai
• Sigrid Shilunga
• Siham Kalli
• Simon Gichohi
• Simão Artur Zunguze
• Sinenhlanhla Sikhosana
• Soebur Razzaque
• Sokhna Mbaye
• Srimoy Bhattacharya
• Stefano Profumo
• Stephan Saul Namburete
• Stephen Karanja
• Sthabile Kolwa
• Susrestha Paul
• Tahany Abdelhameid
• TAHIR TOGHRAI
• Talemwa Kaheru
• Tamas Gal
• Tarig Saeed
• Tejinder Virdee
• Thomson Mucavela
• Thulani Jili
• Thuso Mathaha
• Tilahun Diriba
• Timothy Govenor
• Toivo Samuel Mabote
• Tshianeo Priscilla Nevhufumba
• Ulrich Goerlach
• Unicia Fernando Vilanculo
• VICTOR OKUTHE
• Victoria Samboco
• Vindesio Njagi
• Virgínia Bila
• Wafa BOURAI
• Wandile Nzuza
• Waqar Ali
• Wazha German
• Will Horowitz
• Wilson Obiero
• Xifeng RUAN
• Yahia KARKORI
• yahya mekaoui
• yahya Tayalati
• Yaquan Fang
• Yassine Benali
• YASSINE DAKIR
• Yassine El Ghazali
• Yassine Hamouda
• YASSINE MOUTAWAKIL
• Yassine Rahmani
• Yassir El ghazi
• Yilak Alemu Abbo
• Yonas Etafa Tasisa
• Younes Belmoussa
• youssef Maazaoui
• Youssra Boujakhrout
• Zakaria Bouafia
• Zakaria Boutakka
• Zakaria Dahbi
• Zeinab abdelrazik
• Zeinab Morsy
• Zouaoui Fatma
• Zouleikha Sahnoun
• Monday, 21 March
• 09:00–12:00
Plenary Session I
Convener: Markus Boettcher (North-West University)
• 09:00
Introduction 5m
Speakers: Prof. Bruce Mellado-Garcia (iThemba LABS, Wits), Prof. Yahya Tayalati (Faculty of Sciences, Mohammed V University, Rabat)
• 09:05
Welcome from the Ministry of Higher Education, Scientific Research and Innovation of Morocco 10m
Speaker: Prof. Mohammed Tahiri (Department of Higher Education of Morocco)
• 09:15
Welcome from the Ministry of Higher Education, Science and Innovation of South Africa 10m
Speaker: Prof. Yonah Seleti (Department of Science and Innovation of South Africa)
• 09:25
Welcome from the Network of African Science Academies 5m
Speaker: Prof. Norbert Hounkonnou (Network of African Science Academies)
• 09:30
The discovery of the Higgs boson at the LHC 30m
Speaker: Prof. Tejinder Virdee (Imperial College London)
• 10:00
Dark matter in the Universe 30m
Speaker: Prof. Francoise Combes (Collège de France et Observatoire de Paris)
• 10:30
Coffee Break 30m
• 11:00
Neutrinos and the Invisible Universe 30m
• 11:30
HyperK status and prospects 30m
Speaker: Prof. Francesca Di Lodovico (King's College London)
• 12:30–13:30
Virtual visit of the CMS experiment
Convener: Dr Shimaa AbuZeid (Ain Shams University and INFN)
• 13:30–14:30
Virtual Visit of the ATLAS experiment
Conveners: Hassnae El Jarrari (Universite Mohammed V (MA)), Rachid Mazini (Institute of Physics, Academia Sinica Taiwan)
• 14:30–18:00
Parallel Session I, Astro-Particle
Conveners: Prof. Claudio Paulo (Universidade Eduardo Mondlane) , Prof. Hassan Abdalla (Omdurman Islamic University)
• 14:30
T2K Status and Plans 15m
T2K is a long-baseline experiment providing world-leading measurements of the parameters governing neutrino oscillation. T2K data enable a first $3\sigma$ exclusion for some intervals of the CP-violating phase $\delta_{CP}$ and precision measurements of the atmospheric parameters $\Delta m^{2}_{32}$ and $\sin^2(\theta_{23})$. T2K exploits a beam of muon neutrinos and antineutrinos at the Japan Proton Accelerator Research Complex (J-PARC) and measures oscillations by comparing neutrino rates and spectra at a near detector complex, located at J-PARC, and at the water-Cherenkov detector Super-Kamiokande, located 295 km away. The T2K beam will be upgraded with increased power in 2022, and an upgrade of the ND280 near detector, located 2.5 degrees off-axis, is being assembled to exploit the increased statistics. Moreover, the Super-Kamiokande detector was loaded with 0.01% gadolinium in 2020, enabling enhanced neutron tagging. In preparation for the exploitation of such data, the T2K collaboration is working on an updated oscillation analysis to improve the control of systematic uncertainties. A new beam tuning has been developed, based on an improved NA61/SHINE measurement on a copy of the T2K target and including a refined modeling of the beam-line materials. New selections at ND280, with proton and photon tagging, and at Super-Kamiokande, extending pion tagging to muon neutrino samples, have been developed. After reviewing the latest measurements of oscillation parameters, the status of these new developments and the plan to deploy the beam and ND280 upgrades will be presented.
Speaker: Neil McCauley (University of Liverpool)
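As a quick numerical illustration of the oscillation formula behind these measurements (a minimal two-flavour sketch in Python; the parameter values are indicative round numbers, not T2K results):

    import numpy as np

    # Two-flavour approximation of the muon-neutrino survival probability:
    # P(nu_mu -> nu_mu) = 1 - sin^2(2*theta_23) * sin^2(1.267 * dm2 * L / E),
    # with dm2 in eV^2, baseline L in km and neutrino energy E in GeV.
    def p_survival(E_GeV, L_km=295.0, dm2_eV2=2.5e-3, sin2_2theta23=1.0):
        return 1.0 - sin2_2theta23 * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

    # The off-axis beam peaks near 0.6 GeV, close to the first oscillation maximum.
    print(p_survival(0.6))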
• 14:45
KM3NeT: Status and perspectives for neutrino astronomy from the MeV to the PeV 15m
KM3NeT is a multi-purpose neutrino observatory currently being deployed at the bottom of the Mediterranean Sea. It consists of two detectors: ORCA and ARCA (for Oscillation and Astroparticle Research with Cosmics in the Abyss). ARCA will instrument 1 Gton of seawater, with the primary goal of detecting cosmic neutrinos with energies between several tens of GeV and PeV. Due to its position in the Northern Hemisphere, ARCA will provide an optimal view of the Southern sky including the Galactic Center. ARCA currently has 8 detection units fully operating out of an eventual planned total of 230. ORCA is a smaller (~ few Mtons) and denser array, optimized for the detection of atmospheric neutrinos in the 1 - 100 GeV range. It can also study low-energy neutrino astronomy, such as MeV-scale core-collapse supernovae. ORCA currently has 10 detection units fully operating out of an eventual planned total of 115. I will report on the current status and recent discoveries of ARCA and ORCA as well as a timeline for future developments.
Speaker: Andrew Chen (University of the Witwatersrand)
• 15:00
Correlation between IceCube neutrinos and X-ray flaring blazars 15m
Gamma-ray bright blazars are beginning to emerge as a very plausible source of at least some of the very-high-energy neutrinos detected by IceCube. Most searches for a correlation between blazars and neutrino events have so far focused on gamma-ray flaring blazars, motivated by the fact that very-high-energy gamma-rays are co-produced with neutrinos if neutrinos are produced through photo-pion interactions of relativistic protons with dense target photon fields. However, the same target photon fields also act as a source of gamma-gamma opacity, leading to the development of electromagnetic cascades. The energy of the co-produced photons is therefore more likely to emerge in the soft gamma-ray to X-ray regime instead of high-energy and very-high-energy gamma-rays. We are therefore conducting a systematic search for a correlation between IceCube Gold and Bronze alerts and X-ray flaring blazars, utilizing the Swift-XRT blazar monitoring program. First preliminary results of this search will be presented.
Speakers: Matthew Fu (Bishop Watterson High School), Timothy Govenor (Bishop Watterson High School), Quentin King (Bishop Watterson High School)
• 15:15
Searching for new physics during gravitational waves propagation 15m
The direct detection of gravitational waves opened an unprecedented channel to probe fundamental physics. Proposed extensions of our current theories predict a dispersion of gravitational waves during their propagation, leading to a modification of the signals observed by ground-based interferometers compared to the predictions of general relativity. In this talk, I present several analyses probing different alternative models of gravitation with various observables. Using multimessenger events consisting of gravitational waves and their electromagnetic counterpart, the speed of gravity is measured by comparing the arrival times of the two signals, while extra dimensions and scalar-tensor theories are constrained from the comparison of the luminosity distance inferred independently from each signal. Relying only on gravitational wave signals, a large class of proposed theories, including those predicting a massive graviton, predict a frequency-dependent dispersion of the gravitational waves breaking local CPT and/or Lorentz symmetry. Constraints on the modified dispersion relation and on effective field theory coefficients are obtained from the analysis of the third LIGO-Virgo detection catalog.
Speaker: Leïla Haegel (APC Laboratory (Uni.Paris / CNRS))
• 15:30
Search for Magnetic Monopoles with ten years of ANTARES data 15m
This work presents an updated search for magnetic monopoles using data taken with the ANTARES neutrino telescope over a period of 10 years (January 2008 to December 2017). According to some grand unification theories, magnetic monopoles were created during the phase of symmetry breaking in the early Universe and accelerated by galactic magnetic fields. As a consequence of their high energy, they could cross the Earth and emit a significant signal in a Cherenkov-based telescope like ANTARES, for appropriate mass and velocity ranges. This analysis uses a run-by-run simulation strategy, as well as a new simulation of magnetic monopoles taking into account the Kasama, Yang and Goldhaber model for their cross section with matter. The results obtained for relativistic magnetic monopoles with $\beta = v/c \geq 0.55$, where $v$ is the magnetic monopole velocity and $c$ the speed of light in vacuum, will be presented.
Speaker: Jihad Boumaaza (University Mohamed V in Rabat)
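For orientation, direct Cherenkov emission in seawater (refractive index $n \approx 1.35$, a typical assumed value) requires the monopole velocity to exceed the threshold

$$\beta_{\rm th} = \frac{1}{n} \approx 0.74,$$

so the lower part of the analysed range, $0.55 \leq \beta \lesssim 0.74$, relies mainly on indirect light from the $\delta$-ray electrons knocked out along the monopole's path.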
• 15:45
Search for nuclearites in nine years of ANTARES data 15m
Nuclearites are hypothetical heavy particles composed of roughly equal proportions of up, down and strange quarks. These particles lose their energy through atomic collisions and induce visible light in transparent media through black-body radiation from a shock wave.
ANTARES is a neutrino telescope operating at a depth of 2475 m in the Mediterranean Sea. Nuclearites with masses $\geq 4 \times 10^{13}$ GeV/c$^2$ are able to generate a sufficient amount of visible light to be detected, while nuclearites with masses $\leq 10^{22}$ GeV/$c^2$ are not able to cross the Earth's diameter. In this analysis, we therefore consider a down-going flux of nuclearites with masses ranging from $4 \times 10^{13}$ to $10^{16}$ GeV/c$^2$ penetrating into the Earth with galactic velocities ($\beta=10^{-3}$).
Speaker: Mohammed Bouta (Mohamed First University in Oujda)
• 16:00
Coffee Break 30m
• 16:30
Solar constraints on captured electrophilic dark matter 15m
Dark matter captured through interactions with electrons inside the Sun may annihilate via a long-lived mediator to produce observable gamma-ray signals. We utilize solar gamma-ray flux measurements from the Fermi Large Area Telescope and the High Altitude Water Cherenkov observatory to put bounds on the dark matter-electron scattering cross-section. We find that our limits are four to six orders of magnitude stronger than existing limits for dark matter masses ranging from the GeV to the PeV scale.
Speaker: Debajit Bose (IIT Kharagpur)
• 16:45
Thermal production of early dark matter from van der Waals fluid 15m
We present a new paradigm for the production of scalar dark matter (DM) particles in the early Universe. We show the appearance of a new quadratic potential after inflation, resulting from the stabilization of scalar field particles; in this case the mass of the field increases and it becomes a dark matter candidate. We adopt a van der Waals equation of state for DM, which leads to the Boltzmann equation and the DM number density. We establish the correspondence between the thermodynamic variables needed to describe simple systems and those of the van der Waals gas. In particular, we obtain the relationship between the DM cross-section and the redshift. Finally, we discuss the local stability of dark matter through the heat capacity.
Speaker: Dr M. Bousder (MOHAMMED V UNIVERSITY IN RABAT)
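For reference, the Boltzmann equation for the DM number density $n$ invoked above takes the standard form

$$\frac{dn}{dt} + 3Hn = -\langle\sigma v\rangle\left(n^{2} - n_{\rm eq}^{2}\right),$$

where $H$ is the Hubble rate, $\langle\sigma v\rangle$ the thermally averaged annihilation cross-section and $n_{\rm eq}$ the equilibrium number density; the van der Waals corrections enter through the equation of state.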
• 17:00
Deflection angle of light rays by accelerating black holes with cosmological constant 15m
Using the Gauss-Bonnet formalism, the deflection angle of light rays by accelerating black holes is computed and investigated. The effect of the acceleration parameter is inspected, and the influence of the cosmological constant is also discussed.
Speaker: Hajar BELMAHI (University Mohammed V)
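For reference, in the Gauss-Bonnet (Gibbons-Werner) approach the weak deflection angle follows from the Gaussian optical curvature $\mathcal{K}$ integrated over a suitable domain $D$ outside the lens,

$$\hat{\alpha} = -\iint_{D} \mathcal{K}\, dS,$$

so the acceleration parameter and the cosmological constant affect the result through their contributions to $\mathcal{K}$.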
• 17:15
Dark Matter Direct and Indirect Detection 15m
Dark matter is an essential ingredient for understanding the composition of the universe. Since it cannot be made of any of the usual Standard Model particles, the construction of particle-physics models for dark matter has become a huge industry, accelerated recently by many studies. The techniques needed to detect the different signatures of dark matter fall into two major categories: direct and indirect detection. This work is intended to provide a brief review of dark matter for the newcomer to the subject, beginning with a discussion of the astrophysical evidence for dark matter. The standard weakly interacting massive particle (WIMP) scenario and detection techniques are then reviewed, and some alternatives (axions and sterile neutrinos) are mentioned.
Speaker: Houria Cherief (PhD student)
• 17:30
Thermodynamic of black holes in a cavity from shadow formalism 15m
Using the Hamilton-Jacobi formalism, we investigate the shadow behaviors of black holes in a cavity. We approach such behaviors through the thermodynamic quantities. Among other results, we establish a possible interplay between the thermodynamic and shadow aspects of such black hole solutions.
Speaker: Dr Mohamed BENALI (Département de Physique, Equipe des Sciences de la matière et du rayonnement, ESMaR)
• 17:45
MeerKAT and dark matter 15m
Radio indirect detection has evolved into a promising approach to probe the nature of dark matter, and this will only be enhanced by the construction of the full SKA. In the meantime, MeerKAT's potential as a dark matter detector has largely been ignored. In this work we present simulations of the sensitivity of MeerKAT to diffuse radio emission and apply them to the dwarf galaxy Reticulum II to determine the potential of MeerKAT to probe the WIMP parameter space. We demonstrate that, by leveraging its angular resolution, MeerKAT has the potential to produce constraints tighter than Fermi-LAT results in dwarf spheroidal galaxies.
Speaker: Geoff Beck (University of the Witwatersrand)
• 14:30–18:25
Parallel Session II: Theory
Conveners: Prof. Abdelilah Moussa (University Mohamed I Oujda), Prof. Abdesslam Arhrib (Université Abdelmalek Essaadi, FSTT, B. 416, Tangier, Morocco)
• 14:30
Big Science for National and Regional Unity 25m
Big science is characterized by long-term multilateral engagements and large-scale instruments used to address fundamental questions in science. Projects in big science require huge funding and extensive collaborations at the regional and international levels. Experiences elsewhere, for example in Europe and the Middle East, have shown that in addition to technological developments, big science brings communities of people together to address common scientific goals. Africa will be hosting the Square Kilometre Array (SKA) project, the world's largest array of radio telescopes, to be operated in Africa and Australia. South Africa is one of the founder members of the global SKA organization, and there are eight SKA partner countries in Africa. The SKA and the AVN (African VLBI Network) present Africa with a great opportunity for scientists in the region to work together with world scientists. This is a unique opportunity to use big science as a means to attain regional cohesion and unity. This paper focuses on the following: how big science has contributed to unity in the European and Middle-Eastern regions; the experiences from particle physics research at the European Organization for Nuclear Research (CERN); the potential for big science to enhance national and regional unity in Africa; and the way forward for Africa.
• 14:55
Overview on laser-assisted decay processes 15m
This work focuses on the controversial debate that has arisen over the last two decades about the possibility that an electromagnetic field affects the lifetime or decay width of an unstable particle. In this presentation, we highlight the possible effect of the electromagnetic field on particle decays through the theoretical study of some decay processes, such as those of the pion and of the intermediate vector bosons $W$ and $Z$, in the presence of an electromagnetic field. Expressions for the decay width and lifetime in the presence of the field have been derived in the framework of the standard electroweak model. The numerical results obtained are presented and discussed.
Speaker: Mohamed JAKHA (Sultan Moulay Slimane University, Polydisciplinary Faculty, Beni Mellal, Morocco)
• 15:10
Laser-assisted processes beyond the standard model 15m
In this work, we have theoretically studied neutral Higgs pair production in the Two-Higgs Doublet Model (THDM) in the presence of a circularly polarized laser field. The laser-assisted differential cross section is derived in the centre-of-mass frame at leading order, including the $Z$-exchange diagram. The total cross section is computed numerically by integrating the differential cross section over the solid angle $d\Omega$. Two benchmark points are discussed for the THDM parameters. In a first step, we analyze the total cross section of $e^+e^- \to h^0A^0$ by considering $H^0$ as the standard-model-like Higgs boson. Then, the process $e^+e^- \to H^0A^0$ is studied by taking $h^0$ as the Higgs boson of the standard model. For both benchmark points, the laser-assisted total cross section of the studied processes depends on the masses of the produced neutral Higgs bosons, the centre-of-mass energy and the laser field parameters. In addition, the maximum cross section occurs at higher centre-of-mass energy for the process $e^+e^- \to H^0A^0$ compared to that of $e^+e^- \to h^0A^0$.
Speaker: Mr Mohamed OUHAMMOU (sultan moulay slimane university)
• 15:25
Influence of the laser field on electron–muon-neutrino scattering 15m
In view of the great contribution of neutrino-electron scattering to the deep understanding of electroweak interactions, we focus in this paper on the study of the elastic scattering of a muon neutrino by an electron ($e^- \nu_\mu \to e^- \nu_\mu$) in the presence of a circularly polarized electromagnetic field. We perform our theoretical calculation within the framework of Fermi theory, using the exact wave functions of charged particles in an electromagnetic field. The expression of the differential cross section (DCS) for this process is obtained analytically in the absence and presence of the laser field. The effect of the field strength and frequency on the exchange of photons as well as on the DCS is presented and analyzed.
Keywords: laser-assisted, cross section, electroweak interaction
Speaker: Mrs sabrine el asri (sultan molay slimane beni mellal)
• 15:40
On 6D N=(1,0) Supergravity 15m
The main quest of modern physics is to describe all four elementary interactions within the same framework. Our inability to incorporate gravity as a renormalizable quantum field theory is a major motivation for physics beyond the standard model; the most remarkable progress we have made towards understanding quantum gravity is through local supersymmetry: supergravity. We contribute to outlining the most necessary consistency conditions for any quantum gravity theory, essentially the anomaly considerations, the moduli space considerations, the BPS space considerations and some geometric conditions, all within the framework of 6D supergravity theories, due to their successful landscape analysis.
Speaker: Rajae Sammani (LPHE-MS, Science faculty, Mohammed V University in Rabat, Morocco.)
• 15:55
Asymptotic Grand Unification 15m
We explicitly test the asymptotic grand unification of a minimal 5-dimensional model with SO(10) gauge theory compactified on an $S^{1}/Z_{2}\times Z^{\prime}_{2}$ orbifold. We consider that all the matter fields propagate in the bulk and show that the gauge couplings asymptotically run to a unified fixed point in the UV. However, the Yukawa couplings will typically hit a Landau pole before the GUT scale in this class of $SO(10)$ models.
Speaker: Dr Mohammed Omer Khojali (Department of Physics, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa)
• 16:10
Coffee Break 30m
• 16:40
Tensor Network Theory 15m
We introduce some basic definitions and concepts of tensor networks. We show that tensor networks can be used to represent quantum many-body states, explaining MPS (Matrix Product States) in 1D and PEPS (Projected Entangled Pair States) in 2D systems, as well as the generalizations to thermal states and operators. The quantum entanglement properties of tensor network states, including the area law of entanglement entropy, are also discussed. Finally, we present several special tensor networks that can be exactly contracted, and demonstrate the difficulty of contracting tensor networks in the general case.
Speaker: Youssef EL MAADI (Mohammed V University in Rabat)
• 16:55
Flavor changing neutral current in the flipped 341 model 15m
We present a new chiral, gauge-anomaly-free flipped 341 model in which the lepton families are arranged in different SU(4) gauge group representations, leading to a nonuniversal coupling with the heavy neutral gauge bosons $Z^{\prime}$ and $Z^{\prime\prime}$ of the model. The resulting flavor-changing neutral current in the leptonic sector is discussed, and bounds on some of the flavor-changing parameters are derived using recent experimental data on rare muon decays.
Speaker: Meriem Djouala (Laboratoire de physique Mathématiques et Subatomique, Frères Mentouri university Constantine 1-Algeria)
• 17:10
Modular Flavour Symmetries in magnetized toroidal orbifolds 15m
Among the major problems in particle physics are the origin of the flavour structure of the quarks and leptons, the number of generations, and the mass hierarchies and mixing angles. One candidate for the origin of the flavour structure may be found in higher-dimensional theories such as superstrings: certain compactifications of superstrings lead to non-abelian discrete flavour symmetries. In this contribution, we consider a 6D supersymmetric gauge theory compactified on the toroidal orbifold $T^2/Z_2$ with non-trivial magnetic flux to investigate modular flavour symmetry. The example of the flavour symmetry $S_4$ is given, and other aspects are also described.
Speaker: Mohamad Amegroud (LPHE-Modeling and Simulation, Faculty of Sciences, University Mohammed V in Rabat, Morocco.)
• 17:25
Scattering amplitude and its soft decomposition 15m
In pure scattering theory, the universality of the soft limit has been studied for a long time. In this talk we review the property of the soft limit that relates an n-point amplitude to an (n-1)-point amplitude. We show how this property can be used to decompose amplitudes into different complementary soft channels. The existence of such a decomposition provides a new way to understand how to construct amplitudes solely from their soft limits.
Speaker: Dr Andriniaina Narindra Rasoanaivo (Ecole Normale Supérieure Université d'Antananarivo)
• 17:40
$T_{QQ}$ -like states from QCD Laplace sum rules and Double ratio of sum rules 15m
Motivated by the recent LHCb discovery of an exotic hadron at 3878 MeV, interpreted as a $J^P = 1^+$ $T_{cc}$ tetraquark state, we improve in this work the existing results from QCD Spectral Sum Rules (QSSR) at lowest order (LO) by combining the mass determinations from the ratio R of inverse Laplace sum rules (LSR) with the double ratio of sum rules (DRSR). In doing so, we start by improving the previous mass and coupling of the X(3872), which are then used as input in the DRSR method. We extend our analysis to the SU(3)-breaking $T_{cc\bar{s}\bar{u}}$ state and to the bottom sector.
Speaker: Davidson Rabetiarivony (Institute of High Energy Physics of Madagascar, University of Antananarivo)
• 17:55
On the quantum geometry of gravity 15m
The quantum algebra of observables of particles in a homogeneous space, from the bicrossed product model $\mathcal{C}[x]\blacktriangleright\joinrel\mathrel{\triangleleft}\mathcal{C}[p]$, forms a Hopf algebra $A(+,\mu,\eta,\Delta,\epsilon)$. Quantum mechanics is formulated algebraically while gravity is more geometric. Quantum geometry, which is a noncommutative geometry, together with Hopf algebras gives us access to an algebraic language for gravity. The duality of Hopf algebras with von Neumann algebras (Hopf duality), which relates observables and states, gives a quantization of gravity if one can show that the noncommutativity of the coproduct $\Delta$ curves the phase space.
Keywords: quantum gravity, quantum group, Hopf algebra
Speaker: Mr Fanomezantsoa RAZAFIMAHATRATRA (University of Antananarivo)
• 18:10
NLO Scattering in $\phi^4$ Theory: Finite System Size Corrections 15m
Previously, an equation of state for the relativistic hydrodynamics encountered in heavy-ion collisions at the LHC has been calculated using lattice QCD methods. This leads to a prediction of very low viscosity, due to the trace anomaly. Finite-system corrections to this trace anomaly could challenge this calculation, since the lattice QCD calculation was performed in an effectively infinite system. To verify this trace anomaly it is beneficial to add the finite-system corrections that will be encountered. We construct a massive $\phi^4$ theory while imposing periodic boundary conditions on n of the 3 spatial dimensions. $2\rightarrow2$ NLO scattering is then computed, while analytically making sure the optical theorem holds, to ensure unitarity remains intact despite the pathological nature of the finite system. In order to develop a solid mathematical basis that will carry forward into the thermal field theory context, some small- and large-argument analysis (in terms of the incoming energy as well as the length scales of the finite dimensions) is performed on the s, t and u channels separately. Finally, the finite-size corrections to the total cross section, running coupling and effective coupling are explored numerically, in order to estimate the size of such finite-system corrections in massive field theories.
The size of these effects appears to depend very sensitively on the length scales of the finite dimensions, the number of finite dimensions, the energy of the scattering, and the size of the renormalized coupling. For parameters comparable to what is found for QCD at the LHC it is unclear whether the corrections would be detectable. Due to the pathological nature of the system it is also found that there are energies at which the total cross section becomes infinite when there are 2 finite dimensions, and that the cross section is infinite for all physical energies when all three spatial dimensions are finite. This makes interpretation difficult, and suggests the need to consider scattering happening in a finite time-span. It does, however, suggest that a fuller treatment of finite-system time-independent QCD may reveal detectable finite-system effects, possibly challenging or confirming the low viscosity of the relativistic quark-gluon plasma generated in heavy-ion collisions, as calculated as a consequence of the numerically calculated lattice QCD equation of state.
Speaker: Mr Jean Du Plessis (Stellenbosch University)
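The basic technical ingredient of such a finite-system calculation is that, in each periodic spatial dimension of length $L$, loop momenta become quantized, so the corresponding integrals collapse to discrete sums (a standard substitution, written here schematically):

$$\int \frac{dp}{2\pi}\, f(p) \;\longrightarrow\; \frac{1}{L}\sum_{n=-\infty}^{\infty} f(p_{n}), \qquad p_{n} = \frac{2\pi n}{L}.$$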
• Tuesday, 22 March
• 09:00–12:30
Plenary Session II
Convener: Prof. Rajaa Cherkaoui El Moursli (Mohammed V University in Rabat)
• 09:00
Formation and the Evolution of Large-Scale Structure in the Universe 30m
Speaker: Prof. Jeltema Tesla (University of California, Santa Cruz)
• 09:30
Searching for Dark Matter Scattering, on Earth and in the Stars 30m
Speaker: Prof. Nicole Bell (University of Melbourne)
• 10:00
Indirect DM search and Physics Beyond the Standard Model 30m
Speaker: Prof. Gabrijela Zaharijas (Center for Astrophysics and Cosmology University of Nova Gorica)
• 10:30
Coffee Break 30m
• 11:00
Physics beyond the Standard Model 30m
• 11:30
Extended Scalar Sectors and new Physics Beyond the Standard Model 30m
• 12:00
Anomalies in Particle Physics 30m
Speaker: Prof. Andreas Crivellin (PSI and University of Zurich)
• 14:00–18:00
Parallel Session III, Astro-Particle
Conveners: Andrew Chen (University of the Witwatersrand), Prof. Mourad Telmini (University of Tunis El Manar)
• 14:00
Lorentz Invariance Violation tests in astroparticle physics 15m
At energies approaching the Planck energy scale of $10^{19}$ GeV, several quantum-gravity theories predict that familiar concepts such as Lorentz symmetry can be broken. Such extreme energies are currently unreachable by experiments on Earth, but for photons traveling over cosmological distances the accumulated deviations from Lorentz symmetry may be measurable using the Cherenkov Telescope Array (CTA). Current and future generations of gamma-ray experiments are therefore expected to improve our understanding of fundamental physics.
Speaker: Hassan Abdalla (Omdurman Islamic University - Sudan)
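A commonly used benchmark for such tests (a leading-order, linear-in-energy form, quoted here for illustration and neglecting cosmological-expansion factors) is the arrival-time delay between two photons with energy difference $\Delta E$ emitted simultaneously by a source at distance $D$,

$$\Delta t \simeq \pm\, \frac{\Delta E}{E_{\rm QG}}\, \frac{D}{c},$$

so higher photon energies and larger source distances enhance the sensitivity to the quantum-gravity scale $E_{\rm QG}$.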
• 14:15
The lambda hyperon and the hyperon puzzle 15m
Neutron stars provide unique conditions to study cold dense nuclear matter at extreme densities. Under these extreme conditions additional hadronic degrees of freedom are expected to be populated, including hyperons. This talk will focus on the influence of hyperons on the neutron star equation of state. In particular, the contribution of the lambda hyperon will be discussed, as a first approximation to describing exotic neutron star equations of state. In the system under consideration, the strong nuclear force is described by the exchange of mesons, and relativistic mean field theory is applied to study dense nuclear matter. As expected, the inclusion of the lambda hyperon softens the neutron star equation of state (EoS). A softer EoS reduces the maximum mass attainable by a neutron star modeled with such an EoS. While hyperons are certainly not unexpected in high-density systems, their presence seems to be contradicted by observations of high-mass neutron stars. This contradiction is known as the ``hyperon puzzle''. The expected influx of observational data from massive new radio telescopes like the Square Kilometre Array (SKA) will provide observations that can support and refine theoretical models of nuclear matter. Therefore, the study of hyperonic matter is not only relevant to nuclear theory, but also locally to Botswana as an African partner country of the SKA.
Speaker: Wazha German
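For context, the link between an EoS and the maximum neutron star mass is made through the Tolman-Oppenheimer-Volkoff equation of hydrostatic equilibrium, quoted here in its standard form with $\varepsilon$ the energy density:

$$\frac{dP}{dr} = -\,\frac{G\left[\varepsilon(r)+P(r)\right]\left[m(r)+4\pi r^{3}P(r)/c^{2}\right]}{c^{2}r^{2}\left[1-2Gm(r)/(c^{2}r)\right]}.$$

A softer EoS (lower pressure at a given energy density) supports less mass against gravity, which is why the appearance of the lambda hyperon lowers the maximum mass.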
• 14:30
Thermodynamics of magnetised dense neutron-rich matter 15m
A neutron star is one of the possible end states of a massive star. It is compressed by gravity and stabilized by nuclear degeneracy pressure. Despite its name, the composition of these objects is not exactly known; however, from the inferred densities, neutrons will most likely compose a significant fraction of the star's interior. While all neutron stars are expected to have a magnetic field, some neutron stars (``magnetars'') are much more highly magnetised than others: the inferred magnetar surface magnetic field is between $10^{14}$ and $10^{15}$ gauss. While neutron stars are macroscopic objects, due to the extreme values of the stars' energy, pressure and magnetic field, the thermodynamics on the microscopic scale can be imprinted on the star's large-scale behaviour. This talk will focus on describing the thermodynamics of magnetised dense neutron and neutron-rich matter and its equation of state, and will explore the conditions for a possible ferromagnetic state, contributions from the magnetised vacuum, as well as possible observational implications thereof for neutron stars.
Speaker: Dr Jacobus Diener (Botswana International University of Science and Technology)
• 14:45
Thermodynamic analysis of the BTZ black hole in f(R) gravity 15m
The classical Einstein equations in 2+1 dimensions have a black hole solution with a negative cosmological constant; its solutions are asymptotically anti-de Sitter rather than asymptotically flat. In the context of f(R) gravity, we investigate the thermodynamics of non-rotating Bañados-Teitelboim-Zanelli (BTZ) black holes. The Lagrangian is modified by the non-rotating BTZ black hole metric and, in turn, the associated area law of entropy is modified too. In addition, the heat capacity and the evaporation time are examined.
Speaker: Asmaa Shalaby (Benha University)
• 15:00
ALP-Photon interaction in magnetized environment of a compact star 15m
Spin-zero, very light bosons such as the scalar (dilaton) and the pseudoscalar (axion) are collectively grouped under the term axion-like particle (ALP). Dilatons are postulated in extended theories of the standard model of particles in connection with the scale invariance of the field theory, while axions were introduced to resolve the $U_{A}(1)$ anomaly in quantum field theory. ALPs also appear in higher-dimensional theories: as KK particles in Kaluza-Klein theory, moduli in string theory and chameleons in cosmology.
ALPs hold a special place amongst the possible candidates for dark matter; their detection and identification have therefore become part of the central theme of particle-detector projects. The direct experimental detection of these particles in ground-based laboratories is still beyond the reach of the existing sensitivity of detectors. However, recent advances in their indirect detection, searching for the imprints of their interactions with non-thermal photons coming through the magnetosphere of compact stars, motivate investigations in that direction. Similar investigations have previously been carried out by several groups [1]-[5]; our investigation, however, includes another non-trivial aspect that has not been effectively considered in such studies, namely the background dependence of the mixing dynamics of these particles (dilaton/axion) with electromagnetic radiation.
In this work we focus on evaluating a statistically good signal strength for spectro-polarimetric variables, such as the ellipticity angle, the linear polarization angle and the degree of linear polarization of photons that have interacted with ALPs, using the Stokes parameters.
It is shown that the obtained magnitudes of these variables fall into the detectable range of current detectors, which would be helpful in designing future detectors. In addition, we have also looked at the implications of this dimension-five interaction for explaining the anomalous behaviour of the luminosity-time relation of stars like Betelgeuse.
Bibliography
[1] J. P. Conlon and M. C. David Marsh, Excess Astrophysical Photons from a 0.1-1 keV Cosmic Axion Background, Phys. Rev. Lett. 111, 151301 (2013).
[2] L. Maiani, R. Petronzio and E. Zavattini, Effects of nearly massless, spin-zero particles on light propagation in a magnetic field, Phys. Lett. B 175, 359 (1986).
[3] G. Raffelt and L. Stodolsky, Mixing of the photon with low-mass particles, Phys. Rev. D 37, 1237-1249 (1988).
[4] N. J. Craig and S. Raby, Modulino dark matter and the INTEGRAL 511 keV line, arXiv:0908.1842v2.
[5] P. Sikivie, Invisible Axion Search Methods, Rev. Mod. Phys. 93, 015004 (2021).
Speaker: Ankur Chaubey (Banaras Hindu University, Varanasi, India)
• 15:15
Exploring the Impact of Magnetic field on Core-Collapse Supernova Neutrino Light Curves Detection. 15m
The time profile of neutrino emission from core-collapse supernovae contains unique information about the dynamics of the collapsing stars and the behavior of particles in dense environments. The observation of neutrinos from the SN1987A supernova, in the Large Magellanic Cloud, marked the beginning of neutrino astronomy. To date, no other supernova neutrino observation has been made. It is therefore essential to investigate the impact of supernova properties on the neutrino light curves expected in current and future experiments. In this contribution, we study the effect of the magnetic field on neutrino observations. For certain massive supernovae, strong magnetic fields are expected to change the star's collapse rate, and thus modulate neutrino production. Here, we consider the impact of different magnetic field topologies on the neutrino light curves which would be observed at the KM3NeT, DUNE and DarkSide experiments. We identify areas of complementarity between these three experiments and discuss how to combine their observations to allow discrimination between different supernova models.
Speaker: Meriem Bendahman (Faculty of Sciences, Mohammed V University, Rabat - Laboratoire Astroparticules et Cosmologie, Université de Paris, Paris)
• 15:30
Coffee Break 30m
• 16:00
Study of muon-induced background in Double Chooz neutrino oscillation experiment. 15m
The Double Chooz experiment is a reactor antineutrino disappearance experiment located on the site of the Chooz nuclear power plant in the Ardennes region of France. The principal aim of the experiment is a high-precision measurement of the oscillation amplitude $\sin^2 2\theta_{13}$ of the antineutrinos emitted from the two reactor cores of the Chooz power plant. The robustness and accuracy of this measurement depend strongly on a precise knowledge of the rates and spectral shapes of the backgrounds that contaminate the antineutrino selection over the expected neutrino oscillation region. We study the muon-induced background in the Double Chooz experiment. Indeed, cosmic muons crossing the detectors or interacting in the neighborhood constitute the main source of background events encountered in Double Chooz. Dedicated identification techniques have been developed to tag each of these backgrounds and, consequently, the associated spectral shapes and rates have been determined. The values obtained in our work serve as inputs in the final fit from which the $\theta_{13}$ value is extracted. The latest measurement released by the Double Chooz collaboration is $\sin^2 2\theta_{13} = 0.119 \pm 0.016$.
Speaker: Dr Kenny KALE SAYI
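As a back-of-the-envelope illustration of the disappearance signal behind this measurement (a two-flavour Python sketch; the baseline and $\Delta m^{2}_{31}$ values below are assumed round numbers, not Double Chooz inputs):

    import numpy as np

    # Short-baseline reactor antineutrino survival probability (two-flavour form):
    # P = 1 - sin^2(2*theta_13) * sin^2(1.267 * dm2_31 * L / E),
    # with dm2_31 in eV^2, baseline L in km and energy E in GeV.
    def p_ee(E_MeV, L_km=1.05, sin2_2theta13=0.119, dm2_31_eV2=2.5e-3):
        E_GeV = E_MeV * 1e-3
        return 1.0 - sin2_2theta13 * np.sin(1.267 * dm2_31_eV2 * L_km / E_GeV) ** 2

    print(p_ee(4.0))  # a typical reactor antineutrino energy of ~4 MeV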
• 16:15
Probing 2HDM+S with MeerKAT Galaxy Cluster Legacy Survey data 15m
Dark matter is believed to constitute the majority of the matter content of the universe, but virtually nothing is known about its nature. Physical properties of a candidate particle can be probed via indirect detection by observing the decay and/or annihilation products. While this has previously been done primarily through gamma-ray studies, the increased sensitivity of new radio interferometers means that searches in the radio band are the new frontrunners. MeerKAT's high sensitivity, ranging from 3 $\mu$Jy beam$^{-1}$ for an 8 arcsecond beam to 10 $\mu$Jy beam$^{-1}$ for a 15 arcsecond beam, makes it a prime candidate for radio dark matter searches. Using MeerKAT Galaxy Cluster Legacy Survey (MGCLS) data to obtain the diffuse synchrotron emission within galaxy clusters, we are able to probe the properties of a dark matter model. In this work we consider both generic WIMP annihilation channels as well as the 2HDM+S model. The latter was developed to explain various anomalies observed in Large Hadron Collider (LHC) data from runs 1 and 2. The use of public MeerKAT data allows us to present the first WIMP dark matter constraints produced using this instrument.
Speaker: Natasha Lavis (University of the Witwatersrand)
• 16:30
Geant4 Monte Carlo simulation and measurement of gamma-ray attenuation in concrete 15m
Speaker: Mr Joshua Pondo (Kenyatta University)
• 16:45
A Study to the Mass Effect due to Variation of Particle Type on the Femtoscopic Correlation Using Therminator2 Event Generator 15m
Studying the femtoscopic correlations of elementary particles produced in heavy-ion collisions provides an identification of the particles' space-time characteristics after the collision, in addition to a determination of how strongly particles interact. In this study, I present a femtoscopic analysis of identically charged particles to check the effect of mass on the correlation function, using THERMINATOR2 to generate events for proton-lead collisions at a center-of-mass energy of 5.02 TeV.
Speaker: Mr Muhammad Ibrahim Abdulhamid Elsayed (Faculty of Science, Tanta University)
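For reference, the femtoscopic correlation function used in such analyses is commonly built as the ratio

$$C(q) = \frac{A(q)}{B(q)},$$

where $A(q)$ is the distribution of the relative momentum $q$ for pairs taken from the same event and $B(q)$ is a reference distribution built from mixed events; the width of the correlation peak is inversely related to the source size, which is one place where the particle-mass dependence can enter.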
• 17:00
Recoil Kinematics in Radiative Energy Loss 15m
We investigate the behaviour of particle emission spectra in the large-$x$ region following a rigorous implementation of the kinematic constraints in the simpler framework of a scalar field theory. We find that the small-$x$ kinematic constraints in the simpler theory are identical to those implemented in sophisticated QCD-based energy loss models, but that the exact large-$x$ kinematics are more complicated than those implemented in those same QCD-based energy loss models. We compute the multiplicity distributions for various values of the parent parton energy and see that our spectra respect energy conservation by smoothly vanishing outside the classically allowed 0 < $x$ < 1 region. We repeat the calculation for the emission of a spin-1 particle and similarly observe that the spectra have support strictly within kinematically allowed regions.
Speaker: Antonio Renecle (University of Cape Town)
• 17:15
Azimuthal decorrelation between jets at all orders in QCD hard processes 15m
We study the azimuthal decorrelation $\Delta \phi$ for di-jet production, which promises to reveal important information on perturbative and non-perturbative QCD dynamics. This observable has been measured by the H1 collaboration employing the $E_{t}$-weighted recombination scheme, whereby our observable is continuously global and sensitive to soft and/or collinear emissions in the back-to-back region, giving rise to single and double logarithms. We now wish to employ the four-vector recombination scheme (E-scheme), which makes our observable fall into the category of non-global QCD observables. The resummation hence becomes highly non-trivial due to the presence of non-global and/or clustering logarithms when the jets are defined using the $k_{t}$ and anti-$k_{t}$ clustering procedures. In the present work we carry out this resummation to next-to-leading-logarithmic accuracy, including the non-global and clustering logarithms involved, for DIS at HERA.
Speaker: Hana Benslama (university of Batna1)
• 17:30
Are Jets Narrowed or Broadened in e+A SIDIS? 15m
We compute the in-medium jet broadening to leading order in energy in the opacity expansion. At leading order in $\alpha_s$ the elastic energy loss gives a jet broadening that grows with $\ln E$. The next-to-leading order in $\alpha_s$ result is a jet narrowing, due to destructive LPM interference effects, that grows with $\ln^2 E$. We find that in the opacity expansion the jet broadening asymptotics are---unlike for the mean energy loss---extremely sensitive to the correct treatment of the finite kinematics of the problem; integrating over all emitted gluon transverse momenta leads to a prediction of jet broadening rather than narrowing. We compare the asymptotics from the opacity expansion to a recent twist-4 derivation and find a qualitative disagreement: the twist-4 derivation predicts a jet broadening rather than a narrowing. Comparison with current jet measurements cannot distinguish between the broadening or narrowing predictions. We comment on the origin of the difference between the opacity expansion and twist-4 results.
Speaker: Will Horowitz (University of Cape Town)
• 14:00–18:00
Parallel Session IV, Collider
Conveners: Prof. Rachid Benbrik (Cadi Ayyad University, Marrakech), Dr Shoaib Munir (East African Institute for Fundamental Research (EAIFR, Kigali))
• 14:00
Light charged Higgs boson in $H^\pm h$ associated production at the LHC 15m
In this work, we investigate the production of charged Higgs boson via $pp \to H^\pm h$ at the LHC in the Two-Higgs Doublet Model (2HDM) Type-I. By focusing on the case where $H$ is identified as the observed Higgs boson of mass 125 GeV, we study the aforementioned Higgs boson production channel and explore their bosonic decays, namely $H^\pm \to W^\pm h$ and $H^\pm \to W^\pm A$, which can reach a sizeable Branching Ratio (BR) and often dominate over the fermionic decays in the theoretically and experimentally viable parameter space. In this regard, we demonstrate that the production process $pp \to H^\pm h$ followed by $H^\pm \to W^\pm h$ and/or $H^\pm \to W^\pm A$ could well be the most promising discovery channel for light $H^\pm$ at the LHC.
Speaker: Mohamed Krab (Sultan Moulay Slimane University)
• 14:15
New charged Higgs boson discovery channel at the LHC 15m
The ATLAS and CMS experiments have an ambitious search program for charged Higgs bosons. The two main searches for $H^\pm$ at the LHC have traditionally been performed in the $\tau \nu$ and $t b$ decay channels, as they provide the opportunity to probe complementary regions of the Minimal Supersymmetric Standard Model (MSSM) parameter space. Charged Higgs bosons may also decay to light quarks, $H^\pm \to cs/cb$, which represent an additional probe for the mass range below $m_t$. In this work, we focus on $H^\pm \to \mu \nu$ as an alternative channel in the context of the two Higgs doublet model type-III. We explore the prospects of looking for a $pp\to tb H^\pm$, $H^\pm\to\mu \nu$ signal at the LHC. Such a scenario appears in 2HDM type-III, where the couplings of the charged Higgs to $\mu\nu$ are enhanced. Almost all experimental searches relying on the production and decay of the charged Higgs are taken into account. We show that for such a scenario the above signal is dominant over most of the parameter space, and $H^\pm \to \mu\nu$ can be an excellent complementary search. Benchmark points are proposed for further Monte Carlo analysis.
• 14:30
Full next-to-leading-order corrections to the Higgs strahlung process from electron–positron collisions in the Inert Higgs Doublet Model 15m
We present the cross section of the Higgs-strahlung process, $e^+ e^- \to h Z^0$, at full next-to-leading order in the Inert Higgs Doublet Model (IHDM) at the future Higgs factories. We systematically calculated both weak and QED corrections, using FeynArts/FormCalc to compute the weak and one-loop virtual corrections and Feynman Diagram Calculation (FDC) to evaluate the real photon emission. We evaluated the contribution of the new physics to the radiative corrections in this process for three typical collision energies of future electron-positron colliders: 250 GeV, 500 GeV and 1 TeV, taking into account the theoretical and experimental constraints. We found sizeable deviations of the IHDM radiative corrections from the Standard Model NLO values; those deviations are within the detection potential of the future Higgs factories. In the light of these results, we suggest three interesting benchmark points of the IHDM for future Higgs facilities.
Speaker: hamza abouabid (Université AbdelMalek Essaadi, Tangier, Morocco)
• 14:45
Neutrino masses in the left-right symmetric model 15m
We address the question of the smallness of neutrino masses in the left-right symmetric model (LRSM). The result is very appealing, as the LRSM leads to the celebrated seesaw mechanism, which ensures small neutrino masses.
In addition, the LRSM may have new particles at the TeV scale giving a dominant contribution to neutrinoless double beta ($0\nu\beta\beta$) decay, which can be reached by the future ton-scale experiments.
Speaker: Mustapha OUCHEN (Mohammed V UNIVERSITY, Faculty of Science RABAT)
• 15:00
Leptogenesis, fermion masses and mixings in a flavored SUSY SU(5) GUT 15m
We propose a highly predictive 4D SU(5) GUT with a $D_{4}$ flavor symmetry to study fermion masses and mixings. The Yukawa matrices of quarks and charged leptons are obtained after integrating out heavy messenger fields from renormalizable superpotentials, while neutrino masses originate from the type I seesaw mechanism. The group-theoretical factors from 24- and 45-dimensional Higgs fields lead to ratios between the Yukawa couplings in agreement with data, while the dangerous proton decay operators are highly suppressed. By performing a numerical fit, we find that the model captures accurately the mixing angles, the Yukawa couplings and the CP phase of the quark sector at the GUT scale. The neutrino masses are generated at leading order with the prediction of trimaximal mixing, while an additional effective operator is required to account for the baryon asymmetry of the universe (BAU). An analytical and numerical study of the BAU via the leptogenesis mechanism is performed, where strong correlations between the parameters of the neutrino sector and the observed BAU are obtained.
Speaker: Dr Mohamed Amin Loualidi (LPHE-MS, Faculty of Science, Mohammed V University in Rabat)
• 15:15
On 't Hooft lines and Lax operators of $SO_{2N}$ type 15m
The four-dimensional Chern-Simons topological gauge theory represents a rich framework allowing one to study two-dimensional integrable systems using line and surface defects and Feynman diagram computations. Relying on this "Gauge/Bethe ansatz" correspondence, one can recover interesting results of the integrable models and generate new ones without reference to the traditional algebraic techniques. For example, the study of the intrinsic properties of interacting Wilson and 't Hooft line defects in the 4D CS theory yields the oscillator realisation of the Lax operator verifying the RLL equation of integrability. This study focuses on the 4D CS theory with invariance given by the $SO_{2N}$ gauge group, which allows one to construct the Lax operator associated to the QQ representation of an XXX spin chain with $so_{2N}$ symmetry. This also allows one to interpret the oscillator degrees of freedom in terms of algebra decompositions and field bundle charges.
Speaker: Ms Youssra Boujakhrout (LPHE-MS, Science Faculty, Mohammed V University in Rabat, Morocco)
• 15:30
Coffee Break 30m
• 16:00
Weinberg's factor from helicity constraint 15m
Scattering amplitudes connect theoretical descriptions to experimental predictions. Low-energy terms of the scattering amplitude tend to factorize from the high-energy part; different methods have already been established to understand the mechanism of such factorization, notably Weinberg's theorem. With regard to the Weinberg soft factor, calculations have already shown that this factor has a universal character. In this talk, we show that it is possible to calculate this factor independently of the scattering amplitude, based on the Wigner constraint. We also show that such a constraint leads to a system of partial differential equations that simplifies the construction of the Weinberg soft factor for the case of one or two particles.
Speaker: Fanomezantsoa Arlivah ANDRIANTSARAFARA
• 16:15
The Mu2e experiment 15m
The Mu2e experiment at Fermi National Accelerator Laboratory (Batavia, Illinois, USA) searches for the charged-lepton-flavor-violating neutrinoless conversion of a negative muon into an electron in the field of an aluminum nucleus. The dynamics of such a process is well modelled by a two-body decay, resulting in a mono-energetic electron with energy slightly below the muon rest mass (104.967 MeV). Mu2e will reach a single-event sensitivity of about $3\times10^{-17}$, which corresponds to a four-orders-of-magnitude improvement with respect to the current best limit. We will describe the physics motivation, the underlying experimental technique and the status of the experiment's construction.
Speaker: fabio happacher (infn)
• 16:30
The Production of a Singlet Scalar at Future e+ e- Colliders 15m
Motivated by the multi-lepton anomalies, a search for narrow resonances with $S\rightarrow\gamma\gamma, Z\gamma$ in association with light jets, $b$-jets or missing transverse energy was reported in arXiv:2109.02650. The maximum local (global) significance is achieved for $m_S=151.5$\,GeV with 5.1$\sigma$ (4.8$\sigma$). In this paper we compute the production cross-section of this scalar candidate in $e^+e^-$ collisions by assuming that the couplings to electroweak bosons are loop-induced. We find that the cross-section could be large enough for $S$ to be detected at future $e^+e^-$ colliders. The leading production mechanism is $e^+e^-\rightarrow Z^{\star}\rightarrow S\gamma$, which offers the opportunity of isolating $S$ through the missing-mass method.
Speaker: Mr Anza-Tshilidzi Mulaudzi
• 16:45
Explaining a class of multi-lepton excesses at the LHC with a heavy pseudo-scalar of a 2HDM+$S$ model 15m
The Standard Model (SM) of particle physics is complete after the discovery of a Higgs-like boson at the Large Hadron Collider (LHC) by the ATLAS and CMS collaborations. Although its measured properties are compatible with those predicted by the SM, this does not exclude the possible existence of additional scalar bosons as long as their mixing with the SM Higgs is small. In fact, in recent years the so-called "multi-lepton anomalies" have emerged as deviations from the SM predictions in several analyses of multi-lepton final states from ATLAS and CMS. These excesses are reasonably well described by a 2HDM+$S$ model, where the mass of the heavy scalar is $m_H\approx 270$\,GeV and the mass of the singlet scalar is $m_S\approx 150$\,GeV. In this talk I will concentrate on describing a new class of multi-lepton excesses that can be explained with the CP-odd particle of the same 2HDM+$S$ model. We have considered the dominant decays of the heavy scalar, $H\rightarrow Sh,SS$, and looked at various multi-lepton final states to explain the excess. With this motivation, a candidate for a scalar resonance has been reported with a mass of 151.5\,GeV by looking at the existing SM Higgs searches in the $\gamma \gamma$ and $Z \gamma$ channels with associated leptons, di-jets, $b$-jets and missing energy. There are a number of small excesses in searches at the LHC for heavy (pseudo)scalars in the mass range 400-600\,GeV, which we assume to be the heavy pseudo-scalar of the 2HDM+$S$ model. In the region of the parameter space that explains the multi-lepton excesses, the leading decays of the heavy pseudo-scalar are $A\rightarrow ZH,t\overline{t}$, producing four-top and four-lepton final states. Here we discuss the multi-lepton final state in conjunction with the multi-lepton excesses observed at the LHC.
Speaker: Dr Abhaya Kumar Swain (School of Physics and Institute for Collider Particle Physics, University of the Witwatersrand, Johannesburg, Wits 2050, South Africa.)
• 17:00
Searches for new physics using the top-quark pair invariant mass distribution in proton-proton collisions at √s=13 TeV 15m
A search for new heavy particles that decay into top-quark pairs is performed in proton-proton collisions at the LHC at a center-of-mass energy of 13 TeV, using data collected by the ATLAS experiment between 2015 and 2018. Events consistent with top-quark pair production are selected by requiring a single isolated charged lepton, missing transverse momentum and jet activity compatible with a hadronic top-quark decay. Jets identified as likely to contain $b$-hadrons are required, to reduce the background from other Standard Model (SM) processes. The observed invariant mass spectrum of the candidate top-quark pairs is investigated to search for any significant deviation from the SM background expectation.
• 17:15
A comparative study of the ratio between $t\overline{t}\gamma$ and $t\overline{t}$ in the $e\mu$ channel at 13 TeV using the ATLAS detector 15m
With the goal of increasing the precision of NLO QCD predictions for the $pp\rightarrow t\overline{t}\gamma$ process in the di-lepton top-quark decay channel, we present a study of the ratio of top-quark pair production in association with a photon to inclusive top-quark pair production. Fully realistic LO and NLO computations for $t\overline{t}\gamma$ and $t\overline{t}$ production are employed. Events with exactly one electron and one muon, and at least two jets, one of them $b$-tagged, are selected. Multiple observables are compared with Monte Carlo simulations and with leading-order and next-to-leading-order theoretical calculations. The variables include photon kinematic variables, the angular separation between the two leptons, and angular variables relating the photon and the leptons.
Speaker: Thuso Mathaha (University of the Witwatersrand)
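One of the angular variables mentioned above is the lepton-lepton separation $\Delta R = \sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}$; a minimal Python sketch of its computation (the example values are arbitrary):

    import math

    # Angular separation Delta R between two objects given (eta, phi);
    # the azimuthal difference is wrapped into (-pi, pi].
    def delta_r(eta1, phi1, eta2, phi2):
        dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
        deta = eta1 - eta2
        return math.hypot(deta, dphi)

    print(delta_r(0.5, 0.1, -0.3, 2.9))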
• 17:30
Jet substructure and boosted top quark jet tagging 15m
We discuss various jet taggers that identify boosted hadronic top-quark jets. These tagging approaches mainly use jet algorithms to reconstruct the kinematics of fat jets (i.e. jets that include heavy particles) by analyzing their subjet constituents. We also review the currently available experimental results as well as the crucial QCD aspects, with reliable theoretical and algorithmic backgrounds, that are useful for developing and enhancing these taggers.
Speaker: Azzeddine Benhamida (University of Oran 1 Ahmed ben bella)
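For reference, the sequential-recombination jet algorithms underlying these taggers are defined by the generalized-$k_t$ distance measures

$$d_{ij} = \min\left(k_{t,i}^{2p},\, k_{t,j}^{2p}\right)\frac{\Delta R_{ij}^{2}}{R^{2}}, \qquad d_{iB} = k_{t,i}^{2p},$$

with $p=1$ for the $k_t$ algorithm, $p=0$ for Cambridge/Aachen and $p=-1$ for anti-$k_t$; fat-jet taggers typically cluster with a large radius parameter $R$ and then analyse the subjet structure inside the resulting jets.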
• 17:45
Measurements of W boson properties at √s = 5 and 13 TeV with the ATLAS detector at the LHC. 15m
After the discovery at the Super Proton Synchrotron (SPS) at CERN of the W and Z bosons, the particles responsible for weak interactions, efforts have been geared towards measuring their properties. A precise measurement of the W boson properties remains a major test for the validation of the Standard Model.
In this presentation, the measurements of the W boson transverse momentum $p^{T}_{W}$ and of the differential cross sections are described, using the low pile-up data sets collected with a low number of interactions per bunch crossing by the ATLAS detector in 2017 and 2018.
• $\textbf{Measurement of the transverse momentum distribution}$: One of the most important theoretical sources of uncertainty in the measurement of the W-boson mass is the extrapolation of the $p^{T}$ distribution from the Z boson to the W boson ($\approx$ 6 MeV); a direct measurement of $p^{T}_{W}$ would avoid such an extrapolation and the corresponding theoretical modelling uncertainty.
• $\textbf{Measurement of the differential cross sections}$: The measurement of the differential cross sections for the $W$ boson provides stringent tests of QCD, and is crucial for a deep understanding and modelling of QCD interactions. Also, the rapidity dependence of W boson production in the Drell-Yan process provides constraints on the parton distribution functions (PDFs), which are currently the dominant uncertainty source in the W mass measurement (9.2 MeV).
source: https://tel.archives-ouvertes.fr/tel-03224873
Speaker: Hicham Atmani
• Wednesday, 23 March
• 09:00–12:30
Plenary Session III
Convener: Prof. Soebur Razzaque (University of Johannesburg)
• 09:00
A theoretical review of astroparticles 30m
Speaker: Prof. Stefano Profumo (University of California, Santa Cruz)
• 09:30
Future Colliders 45m
Speakers: Prof. Xinchou Lou (Institute of High Energy Physics, Beijing), Prof. Yaquan Fang (Institute of High Energy Physics)
• 10:15
Coffee Break 30m
• 10:45
Search for Dark Matter with the ATLAS detector at the LHC 30m
Speaker: Prof. Rachid Mazini (Institute of Physics, Academia Sinica Taiwan)
• 11:15
Optical observations of gamma-ray binaries with SALT 30m
Speaker: Prof. Brian van Soelen (University of the Free State)
• 11:45
Multi-messenger Astronomy with high-energy Neutrinos 30m
Speaker: Prof. Anna Franckowiak (Ruhr-University Bochum)
• 12:15
Very-high-energy neutrino production in jetted active galactic nuclei 15m
As the number of tentative associations of very-high-energy neutrinos detected by IceCube with jet-dominated AGN is increasing, the development of theoretical models for neutrino production in AGN jets is also advancing rapidly. This talk will provide a review of the basic physics constraints for VHE neutrino production in AGN jets as well as applications to recent tentative neutrino-blazar associations.
Speaker: Markus Boettcher (North-West University)
• 14:00–18:25
Parallel Session V, Collider - Experiment
Conveners: Nadir Hashim (Kenyatta University), Shimaa AbuZeid (Ain Shams University - Egypt and INFN)
• 14:00
Search for Higgs boson pair production in the two bottom quarks plus two photons final state in $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector 15m
Since the discovery of the Higgs boson in 2012, most of its properties, such as its mass, spin, production cross-sections and couplings to fermions and bosons, have been measured. However, the trilinear self-coupling $\lambda_{HHH}$ of the Higgs boson has not been measured yet. This parameter controls the shape of the Higgs potential, which explains the importance of its measurement. A deviation from its Standard Model (SM) predicted value would indicate new physics beyond the SM (BSM); deviations are quantified through the $\kappa_{\lambda}$ modifier. At the LHC, it is measured through the rate of the rare Higgs boson pair production (HH) process, which is the only direct way to access it. This process is mainly produced at the LHC via gluon-gluon fusion (ggF), through the destructive interference of two Feynman diagrams involving quark loops and the triple Higgs boson self-interaction. At the LHC centre-of-mass energy of 13 TeV, the cross-section of Higgs boson pair production is $31.05_{-5.0\%}^{+2.2\%}$ fb, as predicted by the SM. This low cross-section could be enhanced by the presence of BSM physics (non-resonant and resonant), hence the motivation to explore the search for double Higgs production.
This presentation will focus on the recently published search for Higgs boson pair production in the two-bottom-quarks-plus-two-photons final state with the 2015-2018 data recorded by the ATLAS detector (https://arxiv.org/pdf/2112.11876.pdf). This search sets observed (expected) upper limits on the HH cross-section of 4.2 (5.7) times the SM expectation. The observed (expected) constraints on the Higgs boson trilinear modifier $\kappa_{\lambda}$ are determined to be [-1.5, 6.7] ([-2.4, 7.7]) at 95% confidence level. The search also explores the resonant production of double Higgs ($pp\to X \to HH$) and sets limits on its cross-section as a function of $m_{X}$. The observed (expected) limits on the cross-section of $pp\to X \to HH$ range from 610 fb to 47 fb (360 fb to 43 fb) over the constrained mass range.
In this presentation, both the searches for resonant and non-resonant double Higgs production will be detailed, in addition to a comparison with other searches for Higgs pair production in other final states and using data collected in 2015-2016.
Speaker: Mohamed Belfkir (UAEU)
• 14:15
Higgs CP measurement with EFT model in lepton collider 15m
In the Circular Electron Positron Collider (CEPC), a measurement of the Higgs charge and parity (\textit{CP}) mixing through $e^{+} e^{-} \rightarrow Z H \rightarrow \mu^{+} \mu^{-} H(\rightarrow b \bar{b} / c \bar{c} / g g)$ process is presented, considering a scenario of analyzing $5.6\ a b^{-1}$ $e^{+} e^{-}$ collision data with the center-of-mass energy of $240\ \mathrm{GeV}$.
In this work, values of the CP-mixing parameter greater (less) than $5.40 \times 10^{-2}$ ($-5.52\times 10^{-2}$) are excluded at the $95\%$ confidence level.
This study demonstrates the potential of precise measurement of the hadronic final states of the Higgs boson decay at the CEPC, and will provide key information to look for the \textit{CP}-odd Higgs.
Speaker: Qiyu Sha (Institute of High Energy Physics, Chinese Academy of Sciences)
• 14:30
Dark photon searches with the ATLAS detector at the LHC 15m
Many extensions of the Standard Model (SM) introduce a hidden or dark sector (DS) to provide candidates for dark matter in the universe and an explanation for astrophysical observations such as the positron excess observed in the cosmic radiation flux. This hidden sector could arise from an additional U(1)d gauge symmetry. ATLAS has searched for the gauge boson of the DS, which could be a massless or massive dark photon that either kinetically mixes with the SM photon or couples to the Higgs sector via some mediators. If dark photons decay in turn to SM particles with a significant branching ratio, we could either observe measurable deviations in some particular Higgs boson decay channels or new exotic signatures that would be accessible at Large Hadron Collider (LHC) energies. An overview of searches for dark photon signals with the ATLAS detector will be presented, with particular emphasis on some SM Higgs decay channels.
Speaker: Hassnae El Jarrari (Universite Mohammed V (MA))
• 14:45
CP-even Heavy Higgs boson at HL-LHC 15m
We investigate the possibility of observing a heavy Higgs boson ($H$) within the context of the type-I Two-Higgs-Doublet Model (2HDM). Our study focuses on $gg \rightarrow H \rightarrow hh \rightarrow b\bar{b} ZZ \rightarrow b\bar{b}4\mu$ for $H$ production and decay. The study assumes a dataset of 3000 $fb^{-1}$ of proton-proton collisions at $\sqrt{s} = 14$ TeV at the High Luminosity Large Hadron Collider (HL-LHC). Based on scans over the parameter space, we consider two promising benchmark points for this analysis. Signal and background samples are produced using Monte Carlo (MC) simulation, where the detector response is based on the CMS detector Phase-II Upgrade. We find that the mass distributions of our signal are consistent with those obtained by a previous experimental study of the HHbb4l channel, which investigated the Higgs self-coupling using the full Run 2 data of the CMS detector with $\sqrt{s} = 13$ TeV and $L_{int}= 137\ fb^{-1}$.
Speaker: Ms Aya I. Beshr (Physics Department, Faculty of Women for Arts, Science and Education, Ain Shams University, Heliopolis, Cairo 11757, Egypt)
• 15:00
Charged Higgs boson production via pp → H±bj at the LHC 15m
Charged Higgs searches can serve to probe new physics at the LHC. In this study, we focus on the associated production of the charged Higgs boson with a bottom quark and a jet in the 2HDM type-I as a promising mode for a light H±, i.e. $m_{H^{\pm}} < m_{t}$. We consider both situations where h(H) is the SM-like Higgs boson discovered with a mass near 125 GeV and investigate the bosonic decays, such as H± → W±h and/or H± → W±A. We explore the possible signals at the LHC taking into account the theoretical and experimental constraints. As a result, we find that, over a substantial region of the 2HDM-I parameter space, the signal qbW + 2b/2τ/2γ could serve as a promising and alternative channel for discovering the H± states at the LHC.
• 15:15
The off-shell Higgs production and measurement of its decay width with the ATLAS experiment 15m
The measurement of the off-shell Higgs production and its decay width is performed in the Higgs decay channels of $ZZ\rightarrow 4\ell$ and $ZZ\rightarrow 2\ell2\nu$. The measurement uses Monte Carlo samples at a centre-of-mass energy of 13 TeV, produced according to the ATLAS detector configurations with an integrated luminosity of 139 fb$^{-1}$. The results are presented as an expected upper limit on the off-shell Higgs signal strength at 95% confidence levels (CLs). In addition, the $ZZ$ off-shell and on-shell combined results are shown.
Speaker: Mr Abdualazem Mohammed (University of the Witwatersrand)
• 15:30
Minimum bias simulation of parasitic collisions 15m
Parasitic collisions are proton-proton collisions that happen offset from the nominal ATLAS interaction point. With a 25 ns bunch spacing, the bunches can have parasitic encounters at z = n × 3.75 m, with n < 7. Using MC simulations, it is possible to observe the distributions of key variables (from tracks and energy deposits) for such events at various distances. The task consisted of generating minimum bias MC samples, applying a z offset to reproduce the effect, simulating the ATLAS detector response in release 21, and reconstructing the observables based on muon segments, jet topology and Pixel clusters.
Speakers: Sanae Ezzarqtouni (Universite Hassan II, Ain Chock (MA)) , Prof. Driss Benchekroun
• 15:45
Coffee Break 30m
• 16:15
The Spin Physics Detector at NICA 15m
The Spin Physics Detector (SPD) is planned to run at the NICA collider that is currently under construction at JINR (Dubna). The main goal of SPD is to study the spin structure and other spin-related phenomena of the nucleon. SPD will operate with polarized proton-proton, deuteron-deuteron, and proton-deuteron collisions at energies up to $\sqrt{s} = 27$ GeV and luminosity up to $10^{32}$ cm$^{-2}$ s$^{-1}$. The experiment setup is planned to be a universal multipurpose $4\pi$ detector. Possible SPD studies with unpolarized proton and deuteron beams, at the first stage of NICA operation, are also being investigated.
Speaker: Dr Reham El-Kholy (Astronomy Department, Faculty of Science, Cairo University, Giza 12613, Egypt)
• 16:30
Higgs boson couplings at muon collider 15m
Muon collisions at multi-TeV centre-of-mass energies are ideal for studying Higgs boson properties. Precise measurements of its couplings to fermions and bosons will be allowed by the high production rates that can be reached at these energies. Furthermore, the double Higgs boson production rate could be sufficiently high to directly measure the parameters of the trilinear self-coupling, giving access to the determination of the Higgs potential.
In this presentation an overview of the results that have been obtained so far on Higgs couplings by studying the $\mu^+ \mu^- \to H \nu \bar{\nu}$ and $\mu^+ \mu^- \to H H\nu \bar{\nu}$ processes at $\sqrt{s}$ of 3 TeV will be given. All these studies have been performed by fully simulating the signal and physics background samples and by evaluating the effects of the beam-induced background on the detector performances.
Evaluations of the sensitivity to the Higgs boson couplings, together with the most recent results on the uncertainty of the double Higgs production cross section and the trilinear Higgs self-coupling, will be presented and discussed.
Speaker: Laura Buonincontri
• 16:45
The search for resonances with topological requirements with the Zγ final state at the LHC 15m
Machine learning techniques have been improving rapidly, and this has seen their application grow within the high-energy particle physics space. In this work, we propose the use of deep neural networks based on fully supervised learning to search for heavy resonances at the electroweak scale with topological requirements. This study is carried out in both inclusive and exclusive regions of the phase space, tailored for specific production modes. The technique is well suited to collider searches due to its ability to learn more complex functions, and it is evaluated in the Zγ final state using Monte Carlo simulated signal samples for 139 $\text{fb}^{-1}$ of integrated luminosity for Run 2, collected at the LHC. This approach is complemented with semi-supervised learning and used to calculate the limit on the production of a Higgs-like boson decaying to Zγ where the significance of the signal is maximal.
Speaker: Nalamotse Joshua Choma (Wits University)
• 17:00
The Use of a Variational Autoencoder in the Search for Resonances at the LHC 15m
The Standard Model (SM) of particle physics was completed by the discovery of the Higgs boson in 2012 by the ATLAS and CMS collaborations. However, the SM is not able to explain a number of phenomena and anomalies in the data. These discrepancies with the SM motivate the search for new bosons. In this paper, searches for new bosons are performed by looking for $Z\gamma$ resonances in $pp \to H \to Z\gamma$ fast-simulation events.
This research makes use of a neural network, more specifically a Variational Autoencoder (VAE), in the search for new bosons. The functionality of a VAE to be trained as both a generative model and a classification model makes the architecture an attractive option for the search. The VAE is used as a generative model to increase the amount of Zgamma fast simulation Monte Carlo data whilst simultaneously being used to classify samples containing injected signal events that differ from the Monte Carlo data on which the model was trained.
Both the generative capability and the classification capability of a single trained VAE model are evaluated. The evaluation of the generative capability is done by assessing how similar the input distributions are to the generated distributions, as well as how similar the correlations between individual input variables are to the correlations between individual generated variables.
The classification capability is evaluated by assessing how well the model is able to separate samples with various types and quantities of injected signal events versus samples containing only background events.
Speaker: Finn Stevenson (University of the Witwatersrand, CERN)
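As a rough illustration of the architecture this abstract describes, below is a minimal VAE sketch in PyTorch. The feature count, layer widths and latent dimension are assumptions for illustration, not the authors' actual model; training on real Zγ features would use the usual optimizer loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE for tabular event features (sizes are illustrative)."""
    def __init__(self, n_features: int = 8, n_latent: int = 2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, n_latent)
        self.logvar = nn.Linear(32, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Generation: decode latent samples drawn from the prior.
model = VAE()
with torch.no_grad():
    synthetic = model.dec(torch.randn(1000, 2))
```

The same trained model can then serve the classification use described above, for example by flagging samples whose reconstruction-loss distribution deviates from the background-only training data.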
• 17:15
Searches for heavy scalar resonance through hadronic jet reconstruction at electron-proton colliders 15m
A search for the $CP$-even scalar $H$ in a SM + real singlet scalar field $\phi_{H}$ model is presented. A proposed high-energy Future Circular Hadron-Electron Collider (FCC-LHeC) would provide sufficient energy in a clean environment to probe the heavy scalar $H$ resonance, $m_{H} \approx$ 270 GeV, in deep inelastic scattering (DIS) charged current (CC) and neutral current (NC) processes.
Here we investigate the decay of the heavy Higgs-like scalar $H \to WW^{*}$ in DIS electron-proton collisions with an integrated luminosity of 1.0 ab$^{-1}$ and a centre-of-mass energy of $\sqrt{s}= 1.3(1.8)$ TeV at the FCC-LHeC.
We estimate the likelihood of detecting a resonance signal of $H$ from its final state jets by imposing cut based and machine learning optimization methods to select candidate jet pairs and reconstruct the mass of $H$.
Speaker: Elias Malwa (University of the Witwatersrand)
• 17:30
The use of GANs in the search for new resonances at the LHC using semi-supervised machine learning techniques 15m
In the search for new physics, beyond the standard model, the use of semi-supervised machine learning techniques provides a methodology to extract signal processes while minimizing potential biases caused by prior understanding. When using semi-supervised techniques in the training of machine learning models, over-training can lead to background events incorrectly being labeled as signal events. The extent of false signals generated must therefore be quantified before semi-supervised techniques can be used in resonance searches.
In searches for resonances within a given mass range, the significance of observing a local excess of events must consider the probability of observing the excess elsewhere within the range. This is known as the “look elsewhere effect” and must be controlled for in resonance searches. The semi-supervised technique has additional “look elsewhere effects” which need to be calculated. Generative adversarial networks are used in conjunction with Monte Carlo event generation to produce scalable datasets while minimizing inefficiencies in event weighting. The Wasserstein GAN with gradient penalty is evaluated in the expansion and un-weighting of Zγ Monte Carlo data in order to calculate the “look elsewhere effect” within the semi-supervised studies.
Speaker: Benjamin Lieberman (University of Witwatersrand)
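For reference, a minimal sketch of the gradient-penalty term of the Wasserstein GAN mentioned above, in PyTorch; the toy critic network and data shapes are illustrative assumptions, not the authors' setup.

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: mean of (||grad_x critic(x)||_2 - 1)^2,
    evaluated at points interpolated between real and generated events."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Example with a toy linear critic on 4-feature events:
critic = torch.nn.Linear(4, 1)
real = torch.randn(64, 4)
fake = torch.randn(64, 4)
print(gradient_penalty(critic, real, fake).item())
```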
• 17:45
Search for Higgs boson pair production in the bbWW* channel with the ATLAS detector 15m
A search for resonant Higgs boson pair production, where one Higgs boson decays to $b\bar{b}$ and the other to $WW^{*}$, is performed using the full Run 2 data of proton-proton collisions collected at a centre-of-mass energy of 13 TeV with the ATLAS detector. The trilinear coupling leads to non-resonant pair production of Higgs bosons, where an off-shell Higgs decays to a pair of Higgs bosons. Physics beyond the SM can manifest in the resonant production of new particles that decay into a pair of SM Higgs bosons. This study is potentially sensitive to cases where the decaying particle is a scalar, as in the MSSM and 2HDM models, or a spin-2 graviton, as in Randall–Sundrum models.
Speaker: Mourad Hidaoui (Ibn-Tofail University Faculty of Sciences (MA))
• 14:00 18:10
Parallel Session VI, Instrumentation
Conveners: Betty Kibirige (University of Zululand) , Prof. Mohamed Gouighri (Ibn Tofail University)
• 14:00
A New Monte-Carlo Code System for Particle Transport 15m
Particles Through Matter (or PTM for short) is a new Monte-Carlo C++ code system under development by us. PTM is intended to be a general-purpose Monte-Carlo code, simulating all types of particles and their interactions with matter. The current version is still at an early stage of development, although a minimal set of electromagnetic interactions is already implemented, covering a wide energy range from low to high energies (at least the collider energy scale). For electrons/positrons, a minimal package of physical processes is implemented, e.g., energy loss, bremsstrahlung, ionization, Coulomb scattering (single and multiple) and, for the positron, annihilation. For photons, the photoelectric effect, Rayleigh and Compton scattering and pair production are implemented with different models. The PENELOPE option is implemented for both electrons/positrons and photons, alongside the standard option. Optical photons and their processes are implemented as well, enabling simulations of even complex optical systems, e.g., refractive and reflective telescopes. Fresnel lenses, which present complex surface shapes, are taken into account. Further, a minimal functioning package for neutrino propagation and interaction (roughly implemented) through matter, including matter effects, is provided, with a three-active-neutrino scheme and a three-active-plus-one-sterile-neutrino scheme. More details about the design of the code, with some validation tests, will be presented and discussed in this contribution.
• 14:15
The Fast Simulation Chain in the ATLAS experiment 15m
The ATLAS experiment at the Large Hadron Collider relies on very large samples of simulated events, which are required in the majority of physics analyses and performance studies in the ATLAS physics program. Producing such a huge number of simulated events using the Geant4 framework strains the available CPU resources. The challenge is that in the high-luminosity phase of the LHC, the average number of proton-proton collisions per bunch crossing will increase to about 200, which will have a severe impact on ATLAS computing resources. To meet the simulated sample statistics requirements, ATLAS is developing faster alternatives to the algorithms used in the standard sample production chain. This document describes the new tools for the fast simulation chain that have been developed by ATLAS and shows their physics performance.
Speaker: Mr Brahim Aitbenchikh (Universite Hassan II, Ain Chock (MA))
• 14:30
Simulation of Monte-Carlo events at the LHC using a Generative model based on Kernel Density Estimation 15m
We develop a machine learning-based generative model, using scikit-learn, to generate a list of particle four-momenta from Large Hadron Collider (LHC) proton-proton collisions. This method estimates the kernel density of the data using a Gaussian kernel and then generates additional samples from this distribution. As an example application, we demonstrate the ability of this approach to reproduce a set of kinematic features used in the search for new resonances decaying to Z(ll)γ final states at the LHC. This generative model takes the pre-processed Zγ events and generates sample data with accurate statistics, mimicking the original distributions and achieving better performance with respect to standard Monte-Carlo event generators.
Speaker: Mrs Nidhi Tripathi (School of Physics, Institute for Collider Particle Physics, University of the Witwatersrand)
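A minimal sketch of the scikit-learn workflow this abstract describes: fit a Gaussian kernel density estimate to the event features, then sample new events from it. The array shapes, bandwidth and event counts below are illustrative placeholders, not the authors' actual configuration.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Placeholder for pre-processed Z-gamma events: one row per event,
# one column per kinematic feature (e.g. pT, eta, phi, invariant mass).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 4))

# Fit a Gaussian KDE; in a real study the bandwidth would be tuned,
# e.g. by cross-validation.
kde = KernelDensity(kernel="gaussian", bandwidth=0.2).fit(X)

# Generate additional synthetic events from the estimated density.
X_generated = kde.sample(n_samples=50_000, random_state=0)
print(X_generated.shape)  # (50000, 4)
```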
• 14:45
Upgrades of the ATLAS muon spectrometer with new small-diameter drift tube chambers 15m
The goals of the upgrades of the ATLAS Muon Spectrometer with new small-diameter Muon Drift Tube chambers (so-called sMDT) are to make room for installing new triple Resistive Plate Chambers (tRPC), which increase the trigger efficiency in the inner barrel muon region, and to improve the rate capability of the muon chambers in the high-background regions expected for the HL-LHC project. As a pilot project for the full replacement of the MDT chambers in the small azimuthal sectors of the barrel inner layer (so-called BIS1-6) by new sMDT-RPC detectors in Long Shutdown 3 (LS3), eight new small-diameter (15 mm) Muon Drift Tube chambers (so-called sMDT BIS7A) were installed in Long Shutdown 2 (LS2) in the transition region between the barrel and endcap of the muon spectrometer, 1 < |η| < 1.3. The author will present an overview of the installation and read-out electronics of the new sMDT BIS7A chambers, their cavern commissioning status and their performance.
Speaker: Ali El Moussaouy (Universite Hassan II, Ain Chock (MA))
• 15:00
The ATLAS Inner Detector trigger design and performance during Run 2 data taking from the 13 TeV LHC collisions 15m
The ATLAS Inner Detector (ID) trigger is a crucial component of the ATLAS trigger system and plays a pivotal role in the high-quality reconstruction of physics objects: electron, muon, tau and b-jet candidates. These objects are fundamental for physics studies and analyses at ATLAS. The ATLAS ID trigger was redesigned during the 2013-2015 shutdown, which provided the opportunity to improve its performance for Run 2 data taking from the 13 TeV Large Hadron Collider (LHC) collisions. The design and performance of the ATLAS ID trigger during Run 2 data taking are discussed, as well as plans and developments during the 2019-2021 shutdown for the start of Run 3 and beyond. The results presented here illustrate the superb performance of the ATLAS ID trigger, even under the extreme conditions of proton-proton interactions per bunch crossing (pile-up) of Run 2 data taking at 13 TeV.
Speaker: Soufiane Zerradi (Hassan II university, faculty of science Ain Chock)
• 15:15
Response of gap/crack scintillators of the Tile Calorimeter of the ATLAS detector to isolated muons from $W\rightarrow \mu\nu$ events. 15m
The ATLAS Tile Calorimeter is a hadronic sampling calorimeter that plays a major role in jet energy scale measurements. Accurate reconstruction of jets plays a vital role in precision measurements of the Standard Model and in searches for physics beyond the Standard Model. The jet energy scale is measured assuming uniformity of response in the azimuthal direction of both the Liquid Argon and Tile calorimeters. In this study, the response of the gap/crack scintillators of the Tile calorimeter is measured using isolated muons from $W\rightarrow \mu\nu$ events. The response of the scintillating cells is quantified by measuring the amount of energy deposited per unit length in both data and Monte Carlo simulation, to evaluate the response uniformity in the azimuthal direction.
Speaker: Phuti Rapheeha (Wits University)
• 15:30
Coffee Break 30m
• 16:00
Simulation of CMS resistive plate chamber (RPC) performance under different conditions 15m
The resistive plate chamber (RPC) is a fast gaseous detector that provides a muon trigger system in parallel with the drift tubes and cathode strip chambers in the CMS experiment. It consists of two parallel plates, a positively charged anode and a negatively charged cathode, both made of a very high resistivity plastic material and separated by a gas volume. It is used in many high-energy physics experiments due to its simple design and construction, good time resolution, high efficiency, and low-cost production.
In this research, we aimed to find the ideal operating conditions of the CMS RPCs using Garfield++ as the simulation software. We studied the effect of temperature on various RPC parameters. The electron transport parameters, such as the drift velocity, Townsend coefficient and diffusion coefficient, were computed under different temperatures and gas mixtures using MAGBOLTZ, while the primary ionization number and energy loss were studied using HEED. We used the nearly exact Boundary Element Method (neBEM) solver to calculate the weighting field and electric field. Finally, we applied Ramo's theorem to calculate the induced signal.
The simulation results showed that temperature affects RPC performance. As the temperature increased, the drift velocity, Townsend coefficient and amplitude of the induced signal increased.
Speaker: Ms Tahany Abdelhameid (physics Department, Faculty of Science, Helwan University)
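For reference, Ramo's theorem invoked in the last step above gives the current induced on a readout electrode by a drifting charge $q$ with velocity $\vec{v}$ in terms of the weighting field $\vec{E}_w$ (the field computed with that electrode at unit potential and all others grounded); up to sign convention,

$$i(t) = -\,q\,\vec{v}(t)\cdot\vec{E}_w\big(\vec{r}(t)\big).$$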
• 16:15
The development of Strontium-90 Tile scanning table for TileCal at the ATLAS experiment 15m
During the Phase I upgrade of the Tile Calorimeter of the ATLAS experiment, the characterization and qualification of assembled E3 and E4 scintillator counters (Crack) was conducted through manual scans using a strontium-90 radioactive source and a small scanbox containing a photomultiplier tube. The Crack counter, clear optical fiber cable and connections were exposed, making the transmitted scintillation light vulnerable to contamination by external light. This necessitated the development of an automated scanning system and an appropriately sized scanbox to house all components. The one-coordinate positioning system of the scanner is driven by a powerful 103H5210-5240 bipolar stepper motor. The motor is controlled by an X-NUCLEO-IHM02A1 two-axis stepper motor driver expansion board based on the L6470 component, which is plugged onto an Arduino Uno R3 microcontroller to enable correct functionality. The boards are accessible via a ttyACM0 serial port using a Universal Serial Bus cable connection and software that controls the movement and data acquisition. The new scanning box will be employed after Run 3 of the Large Hadron Collider.
Speaker: Gaogalalwe Mokgatitswane (University of the Witwatersrand)
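As an illustration of the control path described above (stepper driver boards on an Arduino Uno, reachable through the ttyACM0 serial port), here is a minimal Python sketch using pySerial. The baud rate and command strings are hypothetical placeholders, since the firmware protocol is not given in the abstract.

```python
import serial

# Open the Arduino's USB-CDC serial port mentioned in the abstract.
with serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=2) as port:
    # Hypothetical command: step the carriage to a given position (mm).
    port.write(b"MOVE 10.0\n")
    reply = port.readline()  # read one line of acknowledgement, if any
    print(reply.decode(errors="replace"))
```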
• 16:30
Extraction and analysis of the ATLAS Tile Calorimeter Low Voltage Power Supplies Temperature Data 10m
Tile-in-One (TiO) is a plugin-based system for assessing the quality of data and conditions for the ATLAS Tile Calorimeter. TiO is a collection of small, independent web tools called plugins, designed to make it easier for a user to evaluate Tile Calorimeter (TileCal) data. The TiO platform aims to integrate individual TileCal web tools into a single common platform sharing the same computing infrastructure and access to common services and data, as the old interfaces are slowly falling behind and are harder and harder to maintain. The TiO web platform should allow large flexibility and ease of maintenance, so that it is friendly to plugin developers as well. The Detector Control System (DCS) provides temperature data through a dedicated interface called DDV. Based on the possibility of querying those data, a new TiO plugin is being developed under the following strategy: CentOS 8 was installed inside a virtual box to easily access the CERN internal network; the DDV tool is used to query the Tile DCS temperature data, which are subsequently transformed into a form suitable for the visualization library; and the visualization tool allows the user to interact with the plots. Currently, the biggest focus is on finding an intuitive way to display not only the status of one particular module, but of the whole detector as well.
Speaker: Lungisani Phakathi (UNIZULU & iThemba Lab)
• 16:40
Extracting and Analysing Data from Detector Control Systems at the ATLAS Experiment for Bad Channelling of High-Voltage and Low-Voltage Power Supplies. 10m
Tile-in-One (TiO) is a web platform that combines all web-based offline data quality tools of the ATLAS Tile Calorimeter in one web interface. The system is implemented as a series of small web applications with a main gateway; the applications are called plugins. Plugins run in their own separate virtual machines to avoid interference and increase platform stability. The aim of this project is to extract data from the Detector Control System (DCS) of the ATLAS experiment and use the TiO web platform for visualization and analysis of the data, in order to observe the behaviour of the high-voltage and low-voltage power supplies. The data were extracted on the DDV server in the form of a text file and then converted to a comma-separated values (csv) file in order to be visualized as plots using the plotly.js library. Detailed results of the data analysis will be further discussed.
Speaker: Sanele Scelo Sanele
• 16:50
A Burn-in test station for the ATLAS Phase-II Tile-calorimeter low-voltage power supply transformer-coupled buck converters 15m
The upgrade of the ATLAS hadronic tile-calorimeter (TileCal) Low-Voltage Power Supply (LVPS) falls under the high-luminosity LHC upgrade project. This presentation provides a detailed overview of the development of a Burn-in test station for use on an upgraded LVPS component known as a Brick. These Bricks are radiation-hard transformer-coupled buck converters that step down bulk 200$\,$V DC power to the 10$\,$V DC required by the on-detector electronics. To ensure the reliability of the Bricks once installed within TileCal, a Burn-in test station has been designed and built. The Burn-in station implements a Burn-in procedure on eight Bricks simultaneously. The procedure subjects the Bricks to sub-optimal operating conditions that stimulate failure mechanisms within the Bricks. As a result, components that would fail prematurely within TileCal instead fail within the Burn-in station, allowing for their replacement and subsequently improving the reliability of the Brick population. The Burn-in station is of a fully custom design in both its hardware and software. The development of the test station will be explored in detail, with the presentation culminating in a discussion of preliminary Burn-in results.
Speaker: Ryan Mckenzie (University Of the Witwatersrand)
• 17:05
Single Event Effects qualification of candidate components for the ATLAS Tile Calorimeter Phase-II Upgrade Low Voltage power supply Bricks 15m
The ATLAS detector is set to undergo a significant upgrade termed the "Phase-II" Upgrade. Irradiation campaigns have been carried out in a variety of European facilities to select radiation-hard candidates for the upgraded version of the transformer-coupled buck converter (Brick). This talk primarily focuses on the exposure of selected active components (power MOSFETs, MOSFET drivers and isolation amplifiers) to a high-energy proton beam at the Proton Irradiation Facility at PSI. A full-scale production of nearly 2048 finger Low Voltage power supply Bricks, with an identical output voltage, is set to be undertaken in 2022. The Low Voltage power supply (LVPS) Brick design, which powers the TileCal front-end electronics, is currently being finalized. The tested single-batch components were selected among candidates suitable to survive the full radiation tolerance in preparation for the HL-LHC. A detailed compilation of the SEE results obtained, along with the relevant set-up and observations, will be discussed.
Speaker: Mr Edward Nkadimeng (University of the Witwatersrand)
• 17:20
The geometry description of High Granularity Timing Detector with XML-based format 15m
The main purpose of the ATLAS experiment is to study the proton-proton collisions from the Large Hadron Collider (LHC) in order to exploit the full discovery potential of the LHC. ATLAS' exploration uses precision measurement to push the frontiers of knowledge by seeking answers to fundamental questions.
A new phase called the High-Luminosity LHC (Run 4) will start operation in mid-2026 and aims to deliver an integrated luminosity of up to 4000 fb$^{-1}$. To meet the quest for high-precision measurements in a high-luminosity environment, a new subsystem called the High Granularity Timing Detector (HGTD) will be installed to mitigate the pile-up effect by providing timing information. It will aid track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. Low Gain Avalanche Detector (LGAD) sensors will be used to meet these challenging needs.
For the HGTD description, the ATLAS collaboration is moving towards the use of an XML-based format for defining this subdetector. This work aims to describe the HGTD geometry using this format and then to integrate it into the ATLAS software and simulation infrastructure.
Speaker: Selaiman Ridouani (University Mohammed First in Oujda)
• 17:35
Detector performance and physics reach at a Muon Collider 15m
A muon collider is very promising for the future of high energy physics and is becoming a realistic option. It combines the high precision of electron-positron machines, with a low level of beamstrahlung and synchrotron radiation, and the high centre-of-mass energy and luminosity of hadron colliders. Beams with an intensity of the order of 10$^{12}$ muons per bunch are necessary to obtain the desired luminosity, which entails a very high rate of muon decays. Among the technological challenges, the treatment of the beam-induced background is one of the most critical issues for the detector design.
This contribution will present the detector performance for collider machines working at centre-of-mass energies up to 3 TeV, discussing, in particular, the strategies studied to mitigate the effect of the Beam-induced Background. Moreover, the reach of the most representative physics processes will also be discussed.
Speaker: Chiara Aimè (Istituto Nazionale di Fisica Nucleare - Università di Pavia)
• 17:50
Thermal Performance of Developed Carbon Nanotubes and Nanospheres Based Thermal Interface Materials for Heat Dissipation Applications. 15m
In this study, the incorporation of 0D and 1D carbon nanomaterials in a commercial thermal interface material is reported to enhance the heat transfer of electronic devices. The investigated thermal interface materials were fabricated following a protocol based on sonication of the carbon nanomaterials and the thermal compound in acetone at 55 °C. In order to test the applicability of the fabricated thermal interface materials, a setup was designed to simulate the operating conditions of standard electronic components. The experimental setup monitored the heat dissipation and transmission to the heat sink and allowed the acquisition of the data by means of LabVIEW software. The role of the incorporated carbon nanomaterials was studied by varying their mass fraction in the thermal interface materials in a range between 0 and 10%. The largest heat transfer is reported for thermal interface materials containing 1% of carbon nanomaterials, corresponding to a temperature drop of 2 °C. In addition, the thermal resistance $R_{th}$ of the thermal interface materials was characterised using the ASTM D5470 approach. The reproducibility and reliability of the reported results were demonstrated as part of the study. These measurements are found to be in accordance with the test-stand results. The new thermal interface material was tested in low-voltage power electronics and a temperature drop of over 5 °C was observed. The use of these new thermal interface materials as part of the current upgrade of the ATLAS detector at CERN will have a beneficial impact, such as protecting the electronics from overheating and extending their life span.
Speakers: Othmane Mouane (University of Witwatersrand) , Edward Nkadimeng (University of the Witwatersrand)
https://www.physicsforums.com/threads/integration-and-trigonometricfunctions.464104/
# Integration and trigonometric functions
howsockgothap
## Homework Statement
Integrate: 3sec3x(3sec3x+2tan3x)dx
## The Attempt at a Solution
Ok I just multiplied out and obviously got the integrals of both to be tan3x +c and 6sec3x+c
My question is does that make my final answer tan3x+6sec3x +c or +2c?
I swear this is the last integration question I will ask. For awhile.
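For reference, expanding the integrand and integrating term by term (a sketch of the standard approach):

$$\int 3\sec 3x\,(3\sec 3x + 2\tan 3x)\,dx = \int \left(9\sec^2 3x + 6\sec 3x\tan 3x\right)dx = 3\tan 3x + 2\sec 3x + C.$$

The two arbitrary constants from the separate terms combine into the single constant $C$, so the final answer carries one $+C$, not $+2c$.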
http://pclinuxosbrasil.com.br/npvqxb/generalized-least-squares-autocorrelation-1b0189
Neudecker, H. (1977), “Bounds for the Bias of the Least Squares Estimator of Ď� 2 in Case of a First-Order Autoregressive Process (positive autocorrelation),” Econometrica, 45: … With either positive or negative autocorrelation, least squares parameter estimates are usually not as efficient as generalized least squares parameter estimates. The model used is Gaussian, and the tool performs ordinary least squares regression. Aula Dei Experimental Station, CSIC, Campus de Aula Dei, PO Box 202, 50080 Zaragoza, Spain Some most common are (a) Include dummy variable in the data. The slope parameter .4843 (cell K18) serves as the estimate of ρ. δ2 (cell N5) is calculated by the formula =M5-M4*J$9. The Rainfall′ for 2000 (cell Q4) is calculated by the formula =B4*SQRT(1-$J$9). So having explained all that, lets now generate a variogram plot and to formally assess spatial autocorrelation. This form of OLS regression is shown in Figure 3. If had used the Prais-Winsten transformation for 2000, then we would have obtained regression coefficients 16.347, .9853, .7878 and standard errors of 10.558, .1633, .3271. S. Beguería. Var(ui) = Ď�i Ď�ωi 2= 2. Autocorrelation may be the result of misspecification such as choosing the wrong functional form. Autocorrelation may be the result of misspecification such as choosing the wrong functional form. Here as there A comparison of simultaneous autoregressive and generalized least squares models for dealing with spatial autocorrelation. Generalized Least Squares Estimation If we correctly specify the form of the variance, then there exists a more e¢ cient estimator (Generalized Least Squares, GLS) than OLS. The DW test statistic varies from 0 to 4, with values between 0 and 2 indicating positive autocorrelation, 2 indicating zero autocorrelation, and values between 2 and 4 indicating negative autocorrelation. Linked. The OLS estimator of is b= (X0X) 1X0y. We now demonstrate the. 14-5/59 Part 14: Generalized Regression Implications of GR Assumptions The assumption that Var[ ] = 2I is used to derive the result Var[b] = 2(X X)-1.If it is not true, then the use of s2(X X)-1 to estimate Var[b] is inappropriate. Example 1: Use the FGLS approach to correct autocorrelation for Example 1 of Durbin-Watson Test (the data and calculation of residuals and Durbin-Watson’s d are repeated in Figure 1). 46 5 Heteroscedasticity and Autocorrelation 5.3.2 Feasible Generalized Least Squares To be able to implement the GLS estimator we need to know the matrix Ω. Coefficients: generalized least squares Panels: heteroskedastic with cross-sectional correlation Correlation: no autocorrelation Estimated covariances = 15 Number of obs = 100 Estimated autocorrelations = 0 Number of groups = 5 Estimated coefficients = 3 Time periods = 20 Wald chi2(2) = 1285.19 Prob > chi2 = 0.0000 Variable: y R-squared: 0.996 Model: GLSAR Adj. Since we are using an estimate of Ï, the approach used is known as the feasible generalized least squares (FGLS) or estimated generalized least squares (EGLS). for all j > 0, then this equation can be expressed as the generalized difference equation: This equation satisfies all the OLS assumptions and so an estimate of the parameters β0â², β1, …, βk can be found using the standard OLS approach provided we know the value of Ï. .8151 (cell V18) is the regression coefficient for Rainfall′ but also for Rainfall, and .4128 (cell V19) is the regression coefficient for Temp′ and also for Temp. 
A common used formula in time-series settings is Ω(Ï)= Questions and Answers on Heteroskedasticity, Autocorrelation and Generalized Least Squares L. Magee Fall, 2008 |||||{1. The assumption was also used to derive the t and F test statistics, so they must be revised as well. The GLS approach to linear regression requires that we know the value of the correlation coefficient ρ. 46 5 Heteroscedasticity and Autocorrelation 5.3.2 Feasible Generalized Least Squares To be able to implement the GLS estimator we need to know the matrix Ω. The estimators have good properties in large samples. The model used is Gaussian, and the tool performs ordinary least squares regression. GLS is also called “ Aitken ’ s estimator, ” … E[εiεi+h] â 0 where h â 0. Functional magnetic resonance imaging (fMRI) time series analysis and statistical inferences about the effect of a cognitive task on the regional cere… The generalized least squares estimator of β in (1) is [10] Multiplying both sides of the second equation by Ï and subtracting it from the first equation yields, Note that εi â Ïεi-1 = δi, and if we set. See statsmodels.tools.add_constant. The presence of fixed effects complicates implementation of GLS as estimating the fixed effects will typically render standard estimators of the covariance parameters necessary for obtaining feasible GLS estimates inconsistent. The OLS estimator of is b= (X0X) 1X0y. Generalized least squares. We can use the Prais-Winsten transformation to obtain a first observation, namely, Everything you need to perform real statistical analysis using Excel .. … … .. © Real Statistics 2020, Even when autocorrelation is present the OLS coefficients are unbiased, but they are not necessarily the estimates of the population coefficients that have the smallest variance. Figure 5 – FGLS regression including Prais-Winsten estimate. Although the results with and without the estimate for 2000 are quite different, this is probably due to the small sample, and won’t always be the case. We now demonstrate the generalized least squares (GLS) method for estimating the ⦠Coefficients: generalized least squares Panels: heteroskedastic with cross-sectional correlation Correlation: no autocorrelation Estimated covariances = 15 Number of obs = 100 Estimated autocorrelations = 0 Number of groups = 5 Estimated coefficients = 3 Time periods = 20 Wald chi2(2) = 1285.19 Prob > chi2 = 0.0000 Parameters endog array_like. Generalized least squares (GLS) is a method for fitting coefficients of explanatory variables that help to predict the outcomes of a dependent random variable. and ρ = .637 as calculated in Figure 1. GLSAR Regression Results ===== Dep. Highlighting the range Q4:S4 and pressing, The linear regression methods described above (both the iterative and non-iterative versions) can also be applied to, Multinomial and Ordinal Logistic Regression, Linear Algebra and Advanced Matrix Topics, GLS Method for Addressing Autocorrelation, Method of Least Squares for Multiple Regression, Multiple Regression with Logarithmic Transformations, Testing the significance of extra variables on the model, Statistical Power and Sample Size for Multiple Regression, Confidence intervals of effect size and power for regression, Least Absolute Deviation (LAD) Regression. The assumption was also used to derive the t and F ⦠GLSAR Regression Results ===== Dep. 
This example is of spatial autocorrelation, using the Mercer & ⦠This generalized least-squares (GLS) transformation involves “generalized differencing” or “quasi-differencing.” Starting with an equation such as Eq. by Marco Taboga, PhD. The δ residuals are shown in column N. E.g. Aula Dei Experimental Station, CSIC, Campus de Aula Dei, PO Box 202, 50080 Zaragoza, Spain The generalized least squares (GLS) estimator of the coefficients of a linear regression is a generalization of the ordinary least squares (OLS) estimator. Hypothesis tests, such as the Ljung-Box Q-test, are equally ineffective in discovering the autocorrelation … Economic time series often ... We ï¬rst consider the consequences for the least squares estimator of the more ... Estimators in this setting are some form of generalized least squares or maximum likelihood which is developed in Chapter 14. (1) , the analyst lags the equation back one period in time and multiplies it by Ď�, the first-order autoregressive parameter for the errors [see Eq. The DW test statistic varies from 0 to 4, with values between 0 and 2 indicating positive autocorrelation, 2 indicating zero autocorrelation, and values between 2 and 4 indicating negative autocorrelation. Suppose instead that var e s2S where s2 is unknown but S is known Ĺ in other words we know the correlation and relative variance between the errors but we don’t know the absolute scale. Then, = Ω Ω = It is one of the best methods to estimate regression models with auto correlate disturbances and test for serial correlation (Here Serial correlation and auto correlate are same things). In fact, the method used is more general than weighted least squares. A nobs x k array where nobs is the number of observations and k is the number of regressors. Figure 1 – Estimating ρ from Durbin-Watson d. We estimate ρ from the sample correlation r (cell J9) using the formula =1-J4/2. As with temporal autocorrelation, it is best to switch from using the lm() function to using the Generalized least Squares (GLS: gls()) function from the nlme package. In these cases, correcting the specification is one possible way to deal with autocorrelation. An example of the former is Weighted Least Squares Estimation and an example of the later is Feasible GLS (FGLS). Similarly, the standard errors of the FGLS regression coefficients are 2.644, .0398, .0807 instead of the incorrect values 3.785, .0683, .1427. Even when autocorrelation is present the OLS coefficients are unbiased, but they are not necessarily the estimates of the population coefficients that have the smallest variance. Suppose the true model is: Y i = β 0 + β 1 X i +u i, Var (u ijX) = Ď�2i. Since we are using an estimate of ρ, the approach used is known as the feasible generalized least squares (FGLS) or estimated generalized least squares (EGLS). Figure 4 – Estimating ρ via linear regression. 2.1 A Heteroscedastic Disturbance [[1.00000e+00 8.30000e+01 2.34289e+05 2.35600e+03 1.59000e+03 1.07608e+05 1.94700e+03] [1.00000e+00 8.85000e+01 2.59426e+05 2.32500e+03 1.45600e+03 1.08632e+05 1.94800e+03] [1.00000e+00 8.82000e+01 2.58054e+05 3.68200e+03 1.61600e+03 1.09773e+05 1.94900e+03] [1.00000e+00 8.95000e+01 2.84599e+05 3.35100e+03 1.65000e+03 1.10929e+05 1.95000e+03] … Browse other questions tagged regression autocorrelation generalized-least-squares or ask your own question. Suppose that the population linear regression model is, Now suppose that all the linear regression assumptions hold, except that there is autocorrelation, i.e. 
See Cochrane-Orcutt Regression for more details, Observation: Until now we have assumed first-order autocorrelation, which is defined by what is called a first-order autoregressive AR(1) process, namely, The linear regression methods described above (both the iterative and non-iterative versions) can also be applied to p-order autoregressive AR(p) processes, namely, Everything you need to perform real statistical analysis using Excel .. … … .. © Real Statistics 2020, We now calculate the generalized difference equation as defined in, We place the formula =B5-$J$9*B4 in cell Q5, highlight the range Q5:S14 and press, which is implemented using the sample residuals, This time we perform linear regression without an intercept using H5:H14 as the, This time, we show the calculations using the Prais-Winsten transformation for the year 2000. 5. "Generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model. In the presence of spherical errors, the generalized least squares estimator can … ARIMAX model's exogenous components? Suppose we know exactly the form of heteroskedasticity. (a) First, suppose that you allow for heteroskedasticity in , but assume there is no autocorre- FEASIBLE METHODS. Multiplying both sides of the second equation by, This equation satisfies all the OLS assumptions and so an estimate of the parameters, Note that we lose one sample element when we utilize this difference approach since y, Multinomial and Ordinal Logistic Regression, Linear Algebra and Advanced Matrix Topics, Method of Least Squares for Multiple Regression, Multiple Regression with Logarithmic Transformations, Testing the significance of extra variables on the model, Statistical Power and Sample Size for Multiple Regression, Confidence intervals of effect size and power for regression, Least Absolute Deviation (LAD) Regression. Chapter 5 Generalized Least Squares 5.1 The general case Until now we have assumed that var e s2I but it can happen that the errors have non-constant variance or are correlated. Time-Series Regression and Generalized Least Squares in R* An Appendix to An R Companion to Applied Regression, third edition John Fox & Sanford Weisberg last revision: 2018-09-26 ... autocorrelation function, and an autocorrelation function with a single nonzero spike at lag 1. vec(y)=Xvec(β)+vec(ε) Generalized least squares allows this approach to be generalized to give the maximum likelihood … OLS yield the maximum likelihood in a vector β, assuming the parameters have equal variance and are uncorrelated, in a noise ε - homoscedastic. 14-5/59 Part 14: Generalized Regression Implications of GR Assumptions The assumption that Var[ ] = 2I is used to derive the result Var[b] = 2(X X)-1.If it is not true, then the use of s2(X X)-1 to estimate Var[b] is inappropriate. generalized least squares (FGLS). The FGLS standard errors are generally higher than the originally calculated OLS standard errors, although this is not always the case, as we can see from this example. In statistics, Generalized Least Squares (GLS) is one of the most popular methods for estimating unknown coefficients of a linear regression model when the independent variable is correlating with the residuals.Ordinary Least Squares (OLS) method only estimates the parameters in linear regression model. which is implemented using the sample residuals ei to find an estimate for ρ using OLS regression. As its name suggests, GLS includes ordinary least squares (OLS) as a special case. 
See also Note that we lose one sample element when we utilize this difference approach since y1 and the x1j have no predecessors. A 1-d endogenous response variable. Since, I estimate aggregate-level outcomes as a function of individual characteristics, this will generate autocorrelation and underestimation of standard errors. To solve that problem, I thus need to estimate the parameters using the generalized least squares method. Leading examples motivating nonscalar variance-covariance matrices include heteroskedasticity and first-order autoregressive serial correlation. 9 10 1Aula Dei Experimental Station, CSIC, Campus de Aula Dei, P.O. We now demonstrate the generalized least squares (GLS) method for estimating the regression coefficients with the smallest variance. 14.5.4 - Generalized Least Squares Weighted least squares can also be used to reduce autocorrelation by choosing an appropriate weighting matrix. 14.5.4 - Generalized Least Squares Weighted least squares can also be used to reduce autocorrelation by choosing an appropriate weighting matrix. These assumptions are the same made in the Gauss-Markov theorem in order to prove that OLS is BLUE, except for ⦠OLS, CO, PW and generalized least squares estimation (GLS) using the true value of the autocorrelation coefficient. This time, we show the calculations using the Prais-Winsten transformation for the year 2000. vec(y)=Xvec(β)+vec(ε) Generalized least squares allows this approach to be generalized to give the maximum likelihood ⦠Σ or estimate Σ empirically. This does not, however, mean that either method performed particularly well. For both heteroskedasticity and autocorrelation there are two approaches to dealing with the problem. BINARY â The dependent_variable represents presence or absence. Neudecker, H. (1977), âBounds for the Bias of the Least Squares Estimator of Ï 2 in Case of a First-Order Autoregressive Process (positive autocorrelation),â Econometrica, ⦠With either positive or negative autocorrelation, least squares parameter estimates are usually not as efficient as generalized least squares parameter estimates. 1 1 2 3 A COMPARISON OF SIMULTANEOUS AUTOREGRESSIVE AND 4 GENERALIZED LEAST SQUARES MODELS FOR DEALING WITH 5 SPATIAL AUTOCORRELATION 6 7 8 BEGUERIA1*, S. and PUEYO2, 3, Y. Generalized least squares (GLS) estimates the coefficients of a multiple linear regression model and their covariance matrix in the presence of nonspherical innovations with known covariance matrix. Consider a regression model y= X + , where it is assumed that E( jX) = 0 and E( 0jX) = . We assume that: 1. has full rank; 2. ; 3. , where is a symmetric positive definite matrix. An intercept is not included by default and should be added by the user. The result is shown on the right side of Figure 3. The dependent variable. We can also estimate ρ by using the linear regression model. STATISTICAL ISSUES. Also, it seeks to minimize the sum of the squares of the differences between the … A generalized spatial two stage least squares procedure for estimating a spatial autoregressive model with autoregressive disturbances. In other words, u ~ (0, Ď� 2 I n) is relaxed so that u ~ (0, Ď� 2 Ω) where Ω is a positive definite matrix of dimension (n × n).First Ω is assumed known and the BLUE for β is derived. A generalized least squares estimator (GLS estimator) for the vector of the regression coefficients, β, can be be determined with the help of a specification of the ... ϲ, and the autocorrelation coefficient Ï ... 
Generalized least squares (GLS) handles correlated residuals in the same way that the weighted least squares method handles heteroscedasticity. Under pure heteroskedasticity, the variances $\sigma_{nn}$ differ across observations n = 1, …, N but the covariances $\sigma_{mn}$, m ≠ n, all equal zero; under autocorrelation the off-diagonal covariances are nonzero as well. OLS yields the maximum likelihood estimate of the coefficient vector β when the noise ε is homoscedastic and uncorrelated; otherwise the error covariance matrix Σ must either be known exactly or be estimated empirically (feasible GLS, FGLS).
Now suppose that all the linear regression assumptions hold, except that there is autocorrelation, i.e. $\varepsilon_t = \rho \varepsilon_{t-1} + u_t$, where ρ is the first-order autocorrelation coefficient. The sample autocorrelation coefficient r is the correlation between the sample estimates of the residuals e_1, e_2, …, e_{n-1} and e_2, e_3, …, e_n, where $e_t = y_t - \hat{y}_t$ are the residuals from the ordinary least squares fit; ρ can also be estimated using the Durbin-Watson coefficient.
The ordinary least squares estimator of β is $b = (X'X)^{-1}X'y$, while the generalized least squares estimator is $\hat{\beta}_{GLS} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$, so generalized least squares yields more efficient estimates than OLS whenever Σ ≠ σ²I. A classic consumption-function example introduced autocorrelation and showed that the least squares estimator no longer dominates. As before, though, the autocorrelation can appear to be obscured by the heteroscedasticity, so both should be checked.
We now calculate the generalized difference equation as defined in the GLS Method for Addressing Autocorrelation. The Rainfall′ for 2000 (cell Q4) is calculated by the formula =B4*SQRT(1-$J$9); highlighting the range Q4:S4 and pressing Ctrl-R fills in the other values for 2000. We see from Figure 2 that, as expected, the δ are more random than the ε residuals, since presumably the autocorrelation has been eliminated or at least reduced. The Intercept coefficient has to be modified, as shown in cell V21 using the formula =V17/(1-J9). Figure 3 shows an FGLS regression using Durbin-Watson to estimate ρ; the GLSAR model in statsmodels implements the same iterative idea. For large samples this is not a problem, but it can be a problem with small samples. Simulation results suggest that the Prais-Winsten (PW) and Cochrane-Orcutt (CO) methods perform similarly when testing hypotheses, but in certain cases CO outperforms PW. The Hildreth-Lu method (Hildreth and Lu, 1960) uses nonlinear least squares to jointly estimate the parameters with an AR(1) model, but it omits the first transformed residual from the sum of squares. Weighted least squares can also be used to reduce autocorrelation by choosing an appropriate weighting matrix. For more details, see Judge et al. (1985, Chapter 8) and the SAS/ETS 15.1 User's Guide. We should also explore the usual suite of model diagnostics.
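As a concrete illustration of the feasible-GLS idea, here is a minimal Python sketch using the GLSAR model mentioned above (from statsmodels); the simulated data and the values ρ = 0.6, intercept 1, slope 2 are illustrative assumptions, not numbers from this text.
import numpy as np
import statsmodels.api as sm
# simulate a regression with AR(1) errors (illustrative values)
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()   # rho = 0.6
y = 1.0 + 2.0 * x + e
# GLSAR alternates between estimating rho from the residuals and
# re-fitting by GLS, in the spirit of the Cochrane-Orcutt procedure
X = sm.add_constant(x)
model = sm.GLSAR(y, X, rho=1)              # rho=1 requests one AR lag
results = model.iterative_fit(maxiter=10)
print(results.params)                      # estimates corrected for AR(1)
print(model.rho)                           # estimated autocorrelation coefficient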
|
2021-11-27 05:23:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319262862205505, "perplexity": 997.3900430438894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358118.13/warc/CC-MAIN-20211127043716-20211127073716-00104.warc.gz"}
|
http://electricalacademia.com/basic-electrical/parallel-rlc-circuit-analysis/
|
Home / Basic Electrical / Parallel RLC Circuit Analysis
# Parallel RLC Circuit Analysis
When resistance, inductance, and capacitance are connected in parallel, the circuit is called a parallel RLC circuit. In the parallel RLC circuit shown in the figure below, the supply voltage is common to all components.
One of the most important second-order circuits is the parallel RLC circuit of figure 1 (a).
Fig.1 (a): Parallel RLC Circuit
We shall assume that at t=0 there is an initial inductor current,
$\begin{matrix} i(0)={{I}_{o}} & \cdots & (1) \\\end{matrix}$
And an initial capacitor voltage,
$\begin{matrix} v(0)={{V}_{o}} & \cdots & (2) \\\end{matrix}$
And analyze the circuit by finding v for t>0.
The single nodal equation that is necessary is given by
$\begin{matrix} \frac{v}{R}+\frac{1}{L}\int\limits_{0}^{t}{v}\,dt+{{I}_{o}}+C\frac{dv}{dt}={{i}_{g}} & \cdots & (3) \\\end{matrix}$
Which is an integral-differential equation that becomes, upon differentiation,
$\begin{matrix} C\frac{{{d}^{2}}v}{d{{t}^{2}}}+\frac{1}{R}\frac{dv}{dt}+\frac{1}{L}v=\frac{d{{i}_{g}}}{dt} & {} & {} \\\end{matrix}$
To find the natural response we make the right member zero, resulting in
$\begin{matrix} C\frac{{{d}^{2}}v}{d{{t}^{2}}}+\frac{1}{R}\frac{dv}{dt}+\frac{1}{L}v=0 & \cdots & (4) \\\end{matrix}$
Fig.1 (b): Parallel RLC Circuit without Source
This result follows also from killing the current source, as in figure 1 (b), and writing the nodal equation. From (4) the characteristic equation is
$C{{s}^{2}}+\frac{1}{R}s+\frac{1}{L}=0$
From which the natural frequencies are
$\begin{matrix} {{s}_{1,2}}=-\frac{1}{2RC}\pm \sqrt{{{\left( \frac{1}{2RC} \right)}^{2}}-\frac{1}{LC}} & \cdots & (5) \\\end{matrix}$
As in the general second-order case, there are three types of responses, depending on the sign of the discriminant, $\left( \frac{1}{2RC} \right)^{2}-\frac{1}{LC}$, in (5). We shall now look briefly at these three cases. For simplicity we will take $i_g = 0$ and consider the source-free case of figure 1 (b). The forced response is then zero and the natural response is the complete response.
## Overdamped case
If the discriminant is positive, that is:
$\left( \frac{1}{2RC} \right)^{2}-\frac{1}{LC}>0$
Or equivalently,
$\begin{matrix} L>4{{R}^{2}}C & \cdots & (6) \\\end{matrix}$
Then the natural frequencies of (5) are real and distinct negative numbers, and we have the overdamped case,
Over-damped Case
$\begin{matrix} v={{A}_{1}}{{e}^{{{s}_{1}}t}}+{{A}_{2}}{{e}^{{{s}_{2}}t}} & \cdots & (7) \\\end{matrix}$
From the initial conditions and (3) evaluated at t=0+, we obtain
$\begin{matrix} \frac{dv({{0}^{+}})}{dt}=-\frac{{{V}_{o}}+R{{I}_{o}}}{RC} & \cdots & (8) \\\end{matrix}$
Which together with (2) can be used to determine the arbitrary constants.
As an example, suppose R=1Ω, L=4/3H, C=1/4F, Vo=2V, and Io=-3A. Then by (5) we have${{s}_{1,2}}=-1,-3$, and hence
$\begin{matrix} v={{A}_{1}}{{e}^{-t}}+{{A}_{2}}{{e}^{-3t}} & {} & {} \\\end{matrix}$
Also, by (2) and (8) we have
\begin{align} & v(0)=2V \\ & \frac{dv(0+)}{dt}=4{}^{V}/{}_{s} \\\end{align}
Which may be used to obtain A1=5 and A2=-3, and thus
$\begin{matrix} v=5{{e}^{-t}}-3{{e}^{-3t}} & {} & {} \\\end{matrix}$
This overdamped case is easily sketched, as shown by the solid line of figure 2, by sketching the two components and adding them graphically.
Fig.2: Sketch of an Overdamped Response
The reason for the term overdamped may be seen from the absence of oscillations. The element values are such as to “damp out” any oscillatory tendencies. It is, of course, possible for the response to change signs once, depending on the initial conditions.
## Underdamped Case
If the discriminant in (5) is negative, that is:
$\begin{matrix} L<4{{R}^{2}}C & \cdots & (9) \\\end{matrix}$
Then we have the underdamped case, where the natural frequencies are complex, and the response contains sine and cosine, which of course are oscillatory type functions. In this case, it is convenient to define a resonant frequency,
$\begin{matrix} {{\omega }_{o}}=\frac{1}{\sqrt{LC}} & \cdots & (10) \\\end{matrix}$
A damping coefficient,
$\begin{matrix} \alpha =\frac{1}{2RC} & \cdots & (11) \\\end{matrix}$
And a damped frequency,
$\begin{matrix} {{\omega }_{d}}=\sqrt{\omega _{o}^{2}-{{\alpha }^{2}}} & \cdots & (12) \\\end{matrix}$
Each of these has units of "per second", since radians and nepers are themselves dimensionless. The resonant and damped frequencies are expressed in radians per second (rad/s) and the damping coefficient in nepers per second (Np/s).
Using these definitions, the natural frequencies, by (5), are
${{s}_{1,2}}=-\alpha \pm j{{\omega }_{d}}$
And therefore the response is
Under-damped Case
$\begin{matrix} v={{e}^{-\alpha t}}({{A}_{1}}\cos {{\omega }_{d}}t+{{A}_{2}}\sin {{\omega }_{d}}t) & \cdots & (13) \\\end{matrix}$
Which is oscillatory in nature, as expected.
As an example, suppose R=5Ω, L=1H, C=1/10F, Vo=0, and Io=-3/2A. Then we have
$\begin{matrix} v={{e}^{-t}}({{A}_{1}}\cos 3t+{{A}_{2}}\sin 3t) & {} & {} \\\end{matrix}$
From the initial conditions we have
\begin{align} & v(0)=0V \\ & \frac{dv(0+)}{dt}=15{}^{V}/{}_{s} \\\end{align}
From which A1=0 and A2=5. Therefore the underdamped response is
$v=5{{e}^{-t}}\sin 3t$
The response is readily sketched if it is observed that since $\sin 3t$ varies between +1 and -1, $v$ must be a sinusoid that varies between $5e^{-t}$ and $-5e^{-t}$. The response is shown in figure 3, where it may be seen that it is oscillatory in nature. The response goes through zero at the points where the sinusoid is zero, which is determined, in general, by the damped frequency $\omega_d$.
Fig.3: Sketch of an Underdamped Response
## Critically damped case
When the discriminant in (5) is zero, we have the critically damped case, for which
$\begin{matrix} L=4{{R}^{2}}C & \cdots & (14) \\\end{matrix}$
In this case, the natural frequencies are real and equal, given by
$\begin{matrix} {{s}_{1,2}}=-\alpha ,-\alpha & \cdots & (15) \\\end{matrix}$
Where α is given by (11). The response is then
Critically-damped Case
$\begin{matrix} v=({{A}_{1}}+{{A}_{2}}t){{e}^{-\alpha t}} & \cdots & (16) \\\end{matrix}$
As an example, suppose R=1Ω, L=1H, C=1/4F, Vo=0, and Io=-1A. Then we have
$\begin{matrix} v=({{A}_{1}}+{{A}_{2}}t){{e}^{-2t}} & {} & {} \\\end{matrix}$
From the initial conditions we have
\begin{align} & v(0)=0V \\ & \frac{dv(0+)}{dt}=4{}^{V}/{}_{s} \\\end{align}
From which A1=0 and A2=4. Therefore the critically damped response is
$v=4t{{e}^{-2t}}$
This is easily sketched by plotting $4t$ and $e^{-2t}$ and multiplying the two together. The result is shown in figure 4.
Fig.4: Sketch of a critically damped Response
For every case in the parallel RLC circuit, the steady-state value of the natural response is zero, because each term in the response contains a factor of $e^{at}$, where $a<0$.
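To make the case analysis concrete, the following short Python sketch (an illustration added here, not part of the original article) computes $\alpha$ and $\omega_o$, classifies the damping, and reproduces the natural frequencies $s_{1,2}=-1,-3$ of the overdamped example above.
import math
def natural_frequencies(R, L, C):
    # classify the source-free parallel RLC response and return s1, s2
    alpha = 1.0 / (2.0 * R * C)        # damping coefficient, Np/s
    w0 = 1.0 / math.sqrt(L * C)        # resonant frequency, rad/s
    disc = alpha ** 2 - w0 ** 2        # discriminant from equation (5)
    if disc > 0:                       # L > 4 R^2 C: overdamped
        root = math.sqrt(disc)
        return "overdamped", (-alpha + root, -alpha - root)
    if disc == 0:                      # L = 4 R^2 C: critically damped
        return "critically damped", (-alpha, -alpha)
    wd = math.sqrt(-disc)              # damped frequency, rad/s
    return "underdamped", (complex(-alpha, wd), complex(-alpha, -wd))
print(natural_frequencies(1.0, 4.0/3.0, 0.25))  # overdamped, s approximately (-1.0, -3.0)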
|
2018-12-13 17:49:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 7, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9664682149887085, "perplexity": 1386.9846317979445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825029.40/warc/CC-MAIN-20181213171808-20181213193308-00527.warc.gz"}
|
https://socratic.org/questions/592f771211ef6b1eee7921b9
|
# Question #921b9
$P b {\left(N {O}_{3}\right)}_{2} + 2 C {l}^{-} = P b C {l}_{2} + 2 N {O}_{3}^{-}$
Chlorine doesn't exist as a single atom; it exists as $C {l}^{-}$, an anion that is quite stable. It can react with the cations $A {g}^{+} , H {g}_{2}^{2 +}$, and partially with $P {b}^{2 +}$ in the cold, to form insoluble compounds, while the salts of the nitrate ion are all soluble:
$P b {\left(N {O}_{3}\right)}_{2} + 2 C {l}^{-} = P b C {l}_{2} + 2 N {O}_{3}^{-}$
The molecule $C {l}_{2}$, instead, is a fair oxidant, but it reacts with neither $P {b}^{2 +}$ nor the nitrate ion.
|
2020-09-25 10:44:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45203498005867004, "perplexity": 3688.4742315240915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400223922.43/warc/CC-MAIN-20200925084428-20200925114428-00517.warc.gz"}
|
https://proofwiki.org/wiki/Primitive_of_Root_of_a_squared_minus_x_squared
|
# Primitive of Root of a squared minus x squared
$\ds \int \sqrt {a^2 - x^2} \rd x = \frac {x \sqrt {a^2 - x^2} } 2 + \frac {a^2} 2 \arcsin \frac x a + C$
$\ds \int \sqrt {a^2 - x^2} \rd x = \frac {x \sqrt {a^2 - x^2} } 2 - \frac {a^2} 2 \arccos \frac x a + C$
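Either form can be verified by differentiating the right-hand side; for the first,
$\ds \frac {\rd} {\rd x} \paren {\frac {x \sqrt {a^2 - x^2} } 2 + \frac {a^2} 2 \arcsin \frac x a} = \frac {\sqrt {a^2 - x^2} } 2 - \frac {x^2} {2 \sqrt {a^2 - x^2} } + \frac {a^2} {2 \sqrt {a^2 - x^2} } = \sqrt {a^2 - x^2}$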
|
2023-02-08 19:53:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698591232299805, "perplexity": 774.1297926289777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00330.warc.gz"}
|
http://gasstationwithoutpumps.wordpress.com/tag/fet/
|
# Gas station without pumps
## 2014 March 5
### Sixteenth day: Arduino demo
Filed under: freshman design seminar,Pressure gauge — gasstationwithoutpumps @ 20:57
Today’s class in the freshman design seminar went well. I started by returning the drafts of the design reports and giving some generic feedback. I realized on reading the reports that I had not given a good explanation of what I meant by describing the components of the system—two of the groups had given me long parts lists on the first page of their reports, something that would only really be appropriate in an appendix. I explained that what I wanted was what the main blocks in the block diagram were, and that they should use the block diagram to organize their report, writing a page for each block. I also suggested that they use the block diagram to partition the project among the group members, with each group member working on a different component, then getting back together to reconcile any discrepancies. Note that this is much more like real engineering group work than the usual K–12 group project, which is usually done most efficiently by turning the whole project over to the most competent member of the group.
After the feedback on design reports, I offered the students a chance to get a demo of building an Arduino program with sensing and motor control. This was a completely extemporaneous demo—I had gathered a number of possibly useful components, but had not tested anything ahead of time nor even figured out what order to do the demo in. I asked the students if they wanted me to start with sensing or control—they asked for the motor control first.
I started by pulling a motor out of box of motors I had gotten when the elementary school my wife works at cleaned out their closets. I told the students that I had no idea what the spec of the motor were, but since it came from an elementary school, it probably ran on 3v batteries. I tested the motor by hooking it up first to the 3.3v, then to the 5v power on my Arduino Uno. It spun just fine on 3.3v, but squealed a bit on 5v, so we decided to run it on 3.3v.
I then pulled out the Sainsmart 4-relay board that I had bought some time ago but never used. I explained how a relay worked, what single-pole double-throw meant, and normally open (NO) and normally closed (NC) contacts. I used the board unpowered with the NC contacts to spin the motor, then moved the wire over to the NO contacts to turn the motor off. I then hooked up power to the board and tried connecting input IN1 to power to activate the relay. Nothing happened. I then tried connecting IN1 to ground, and the relay clicked and the motor spun. The inputs to the Sainsmart board are active low, which I explained to the students (though I did not use the terminology “active low”—perhaps I should have). I did make a point of establishing that the relay provides very good isolation between the control logic and the circuitry being controlled—you can hook up AC power from the walls to the relay contacts without interfering with the logic circuitry.
Having established that the relay worked, the next step was to get the class (as a group) to write an Arduino program to control the motor using the relay. With me taking notes on the whiteboard, they quickly came up with the pinMode command for the setup, the digitalWrite and delay for the loop, and with only a tiny bit of prompting with a second digitalWrite and delay to turn the motor back off. They even realized the need to have different delays for the on and off, so we could tell whether we had the polarity right on the control. Here is the program we came up with:
#define RELAY_PIN (3)
void setup()
{ pinMode(RELAY_PIN, OUTPUT);
}
void loop()
{ digitalWrite(RELAY_PIN,LOW);   // turn motor ON via relay (or off via transistor)
  delay(1000);                   // on for 1 second
  digitalWrite(RELAY_PIN,HIGH);  // turn motor OFF via relay (or on via transistor)
  delay(3000);                   // off for 3 seconds
}
I typed the code in and downloaded it to the Arduino Uno, and it worked as expected. (It would be nice if the Arduino IDE would allow me to increase the font size, like almost every other program I use, so that students could have read the projection of what I was typing better.)
I then offered the students a choice of going on to sensing or looking at pulse-width modulation for proportional control. They wanted PWM. I explained why PWM is not really doable with relays (the relays are too slow, and chattering them would wear them out after a while). I did not have the specs on the relay handy, but I just looked up the specs for the SRD-05VDC-SL-C relays on the board: They have a mechanical life of 10,000,000 cycles, but an electrical life of only 100,000 cycles. The relay takes about 7msec to make a contact and about 3msec to break a contact, so they can't be operated much faster than about 60 times a second, which could wear them out in as little as half an hour.
So instead of a relay, I suggested an nFET (Field-Effect Transistor). I gave them a circuit with one side of the motor connected to 3.3V, the other to the drain of an nFET, with the source connected to ground. I explained that the voltage between the gate and the source (VGS) controlled whether the transistor was on or off, and that putting 5v on the gate would turn it on fairly well. I then got out an AOI518 nFET and stuck it in my breadboard, explaining the orientation to allow using the other holes to connect to the source, gate, and drain.
I mentioned that different FETs have their pins in different orders, so one has to look up the pinout on the data sheet. I pulled up the AOI518 data sheet, which has on the first page “RDS(ON) (at VGS = 4.5V) < 11.9mΩ”. I explained that if we were putting a whole amp through the FET (we’re not doing anywhere near that much current), the voltage drop would be 11.9mV, so the power dissipated in the transistor would be only 11.9mW, not enough to get it warm. I mentioned that more current would result in more power being dissipated (I²R), and that the FETs could get quite warm. I passed around my other breadboard which has six melted holes from FETs getting quite hot when I was trying to debug the class-D amplifier design. The students were surprised that the FETs still worked after getting that hot (I must admit that I was also).
I hooked up the AOI518 nFET using double-headed male header pins and female jumper cables, and the motor alternated on for 3 seconds, off for one second. We now had the transistor controlling the motor, so it was time to switch to PWM. I went to the Arduino reference page and looked around for PWM, finding it on analogWrite(). I clicked that link and we looked at the page, seeing that analog Write was like digitalWrite, except that we could put in a value from 0 to 255 that controlled what fraction of the time the pin was high.
I edited the code, changing the first digitalWrite() to analogWrite(nFET_GATE_PIN, 255), and commenting out the rest of the loop. We downloaded that, and it turned the motor on, as expected. I then tried writing 128, which still turned the motor on, but perhaps not as strongly (hard to tell with no load). Writing 50 resulted in the motor not starting. Writing 100 let the motor run if I started it by hand, but wouldn’t start the motor from a dead stop. I used this opportunity to point out that controlling the motor was not linear—1/5th didn’t run at 1/5th speed, but wouldn’t run the motor at all.
Next we switched over to doing sensors (with only 10 minutes left in the class). I got out the pressure sensor and instrumentation amp from the circuits course and hooked it up. The screwdriver I had packed in the box had too large a blade for the 0.1″ screw terminals, but luckily the tiny screwdriver on my Swiss Army knife (tucked away in the corkscrew) was small enough. After hooking up the pressure sensor to A0, I downloaded the Arduino Data Logger to the Uno, and started it from a terminal window. I set the triggering to every 100msec (which probably should be the default for the data logger), the input to A0, and convert to volts. I then demoed the pressure sensor by blowing into or sucking on the plastic tube hooked up to the sensor. With the low-gain output from the amplifier, the output swung about 0.5 v either way from the 2.5v center. Moving the A0 wire over to the high-gain output of the amplifier gave a more visible signal. I also turned off the “convert to volts” to show the students the values actually read by the Arduino (511 and 512, the middle of the range from 0 to 1023).
Because the class was over at that point, I offered to stay for another 10 minutes to show them how to use the pressure sensor to control the motor. One or two students had other classes to run to, but most stayed. I then wrote a program that would normally have the motor off, but would turn it full on if I got the pressure reading up to 512+255 and would turn it on partway (using PWM) between 512 and 512+255. I made several typos when entering the program (including messing up the braces and putting in an extraneous semicolon), but on the third compilation it downloaded successfully and controlled the motor as expected.
One student asked why the motor was off when I wasn’t blowing into the tube, so I explained about 512 being the pressure reading when nothing was happening (neither blowing into the tube nor sucking on it). I changed the zero point for the motor to a pressure reading of 300, so that the motor was normally most of the way on, but could be turned off by sucking on the tube. Here is the program we ended up with
#define nFET_GATE_PIN (3)
void setup()
{ pinMode(nFET_GATE_PIN, OUTPUT);
  pinMode(A0, INPUT);
}
void loop()
{ int pressure = analogRead(A0);     // read the pressure sensor on A0
  if (pressure < 300)
  { digitalWrite(nFET_GATE_PIN,LOW);   // turn motor off
  }
  else
  { if (pressure > 300+255)
    { digitalWrite(nFET_GATE_PIN,HIGH);  // turn motor on full
    }
    else
    { analogWrite(nFET_GATE_PIN,pressure-300);  // turn motor partway on
    }
  }
}
Note: this code is not an example of brilliant programming style. I can see several things that I would have done differently if I had had time to think about the code, but for this blog it is more useful to show the actual artifact that was developed in the demo, even if it makes me cringe a little.
Overall, I thought that the demo went well, despite being completely extemporaneous. Running over by 10 minutes might have been avoidable, but only by omitting something useful (like the feedback on the design reports). The demo itself lasted about 70 minutes, making the whole class run 80 minutes instead of 70. I think I compressed the demo about as much as was feasible for the level the students were at.
Based on how the students developed the first motor-control program quickly in class, I think that some of them are beginning to get some of the main ideas of programming: explicit instructions and sequential ordering. Because we were out of time by the point I got to using conditionals, I did not get a chance to probe their understanding there.
## 2014 February 22
### Diode-connected nFET characterisitics
Filed under: Circuits course,Data acquisition — gasstationwithoutpumps @ 19:20
Test circuit for determining I-vs-V curves for a diode-connected nFET. The shunt resistor R2 was chosen from 0.5Ω to 680kΩ, and R3 was selected to keep E23 above 0 (0.5Ω to 150Ω).
In More mess in the FET modeling lab, I showed I-vs-V plots for NTD5867NL nFETs, both with a fixed power supply and load resistor, and diode connected (Vgs=Vds). But this year, the NTD5867NL FETs were not available from Digikey, so we are getting AOI518 nFETs instead. I decided to try characterizing these with the KL25Z board. If I power the test off the KL25Z board’s 3.3v supply, I can take fairly high currents, as the board uses an NCP1117ST33T3G LDO regulator, which the spec sheet claims can deliver up to 1A (800mA, if we limit the dropout to 1.2V). I’m only limited by the USB current limit (500mA), to keep the laptop from shutting off the USB port.
I used essentially the same circuit for testing a diode-connected AOI518 nFET as I used for testing the Schottky diodes, but I did not put a capacitor across the FET. (Well, initially I left the 4.7µF capacitor there, but I was noticing changing values that looked like RC charging when I was testing at small currents, so I removed the capacitor.)
Because the 3.3v supply droops if too much current is taken from it, I used the internal 1V bandgap reference to determine the scaling of the analog-to-digital converter on each reading. The voltage VDS is (E20-E21)/(BANDGAP), and the current IDS is (E22-E23)/(R2*BANDGAP).
Voltage vs current for diode-connected nFET. The model that fits the data (above 1µA) is that of subthreshold conduction, even when the current is over 100mA.
I get a very good fit to the data (above 1µA) with the subthreshold conduction model (essentially the same as a junction diode, but using n VT instead of VT, where n is determined by the size and shape of the FET). The value of n for this FET seems to be around 830mV/26mV = 32. The circuit models I’ve seen on the web seem to claim that I should be using a saturation-current model for a diode-connected FET, but that model doesn’t fit the data at all.
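The same subthreshold fit can be reproduced in a few lines of Python (the post used gnuplot's fit; the scipy version below runs on synthetic stand-in data rather than the logged measurements, and the constants are illustrative).
import numpy as np
from scipy.optimize import curve_fit
VT = 0.026                                  # thermal voltage near room temperature, volts
# synthetic stand-in for the logged (V_DS, I_DS) points, generated with n = 32
rng = np.random.default_rng(1)
v_meas = np.linspace(0.4, 1.2, 50)
i_meas = 1e-9 * np.exp(v_meas / (32 * VT)) * rng.lognormal(0.0, 0.05, 50)
def log_model(v, log_is, n):
    # fit log(I) = log(I_s) + V/(n*VT) so each decade of current weighs equally
    return log_is + v / (n * VT)
popt, _ = curve_fit(log_model, v_meas, np.log(i_meas), p0=(-20.0, 30.0))
print(f"n = {popt[1]:.1f}, I_s = {np.exp(popt[0]):.2g} A")   # n should come out near 32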
There is a very clear thermal shift in the curve for the high-current tests. As the transistor warms up the current increases for a given voltage. This is equivalent to the threshold voltage Vthr dropping with temperature. This is consistent with the data sheet, which shows a lower threshold voltage but higher on-resistance (at 10A) at 125° C than at 25° C.
I’m not seeing any evidence of the weird negative resistance that I saw on the NTD5867NL nFETs. (I tried checking the NTD5867NL nFET with the same testing setup as for the AOI518, and it definitely still shows weird behavior between 10 and 30 mA.)
Because large nFETs are often used to switch inductive loads (motors, loudspeakers, inductors in switching regulators, …), they incorporate a “flyback” diode in the FET. Normally, this diode is back-biased and does not conduct, but if an inductive load needs a current and there are no transistors that are on to provide the current, the diode conducts and keeps the output voltage from going too far below ground.
nMOS and pMOS transistors with flyback diodes. If both transistors are off, but the inductor L1 still wants current, it has to come through one of the flyback diodes D1 or D2. They keep the output voltage from going too far outside the rails.
I characterized the flyback diode on the AOI518 nFET the same way as before, now connecting the gate and the source to the higher voltage, and the drain to the lower voltage.
Below about 0.66 V, the flyback diode has a fairly normal exponential current with voltage, but above that it seems to have a linear relationship between current and voltage, with a dynamic resistance of about 180mΩ.
The red points with the 0.5Ω shunt go up to an amp, which warms the FET enough to change its characteristics—the lower set of points are the warmer set.
I can also use the measurements of the flyback diode with the ½Ω shunt to characterize the LDO voltage regulator on the Freedom KL25Z board:
For currents up to 400mA, the LDO voltage regulator behaves like a 3.332 V source in series with a 55mΩ resistor.
The data sheet claims that there should only be a 10mV drop in voltage for an 800mA current, and I’m seeing a 290mV drop. The extra drop is not from the LDO misbehaving, but from the USB voltage dropping—one is only supposed to take up to 500mA from a USB supply and the MacBook Pro apparently has a soft knee at 500mA, rather than an abrupt shutoff. I suspect that if I took the full amp for very long, the laptop would shut down the USB port, as it does if the USB 5V is accidentally shorted.
## 2013 March 2
### Rethinking the power-amp lab yet again
Filed under: Circuits course — gasstationwithoutpumps @ 14:05
In Rethinking the power-amp lab again, I described the failure of my class-D power amp on Thursday afternoon, and the dilemma I faced in whether to revise the lab or cut it from the course. Some of the students favored cutting it (they’re hitting end-of-quarter crunch time), but I know that they would not learn any of the material that lab is intended to help them understand without actually doing the lab. Also, if I had one less lab report to evaluate them on, I’d probably have to add a final exam, which I don’t think they would like any better.
So Thursday night I woke up around 3 a.m. and thought about ways to salvage the power-amp lab. First, I had to figure out what differences there were between the circuit I had working at home, and the same board, with slightly different power-supply options in the lab on campus. The only real difference was the voltages for the power-amp stage (the comparator and power FETs). At home, I was using a 6.6V power supply—the same one I was using for the pre-amp stage, so that the FET sources were at 0V and 6.6V (and the loudspeaker had a DC bias that is undesirable). On campus, the preamp had a +6V supply, and the comparator and FET sources were at ±4V, but I wanted to be able to go to ±7V or even ±9V.
I came up with some ideas for potential solutions.
1. The first was a fairly trivial one: just have them use a single 6V power-supply the way I had, and not worry about the DC offset to the loudspeaker. I don’t like this solution very much, because it doesn’t provide much power to the speaker (about 0.5W), and has that DC bias.
2. A minor variant of the first idea is to add a large series capacitor to the loudspeaker. Our big electrolytics are 470µF, and we’d have high-pass filtering with a corner frequency of about 66Hz, which is ok for the small speakers (whose resonant frequency is around 155Hz). The 16V limit for the electrolytic capacitor is no problem, since the pre-amp, using an MCP6004 chip, shouldn’t really be run with more than 6V.
3. Another minor variant eliminates the series capacitor by using a ±3V supply, with the preamp powered from -3V and +3V. This has the advantage of eliminating the need for a virtual ground in the pre-amp, as well as powering the loudspeaker cleanly. But we are still limited to 0.5W.
4. The second major solution was to change the output stage from having a simple cMOS inverter (with the pFET and nFET gates connected together) to using a separate comparator to drive each of the FET gates. By changing the pull-up resistors on the open-collector outputs of the comparators, I could adjust the rise and fall rates. By using a very small pull-up, the voltage would rise rapidly, fall slowly, and not get very low. By using a larger pull-up, the voltage would fall rapidly, rise slowly, and get quite low. Since I don’t want the nFET and pFET on at the same time, I want to turn the FETs off quickly, but on slower, so that one is off before the other turns on. That led me to a design with a large pull-up resistor for the nFET gate and a small one for the pFET gate. (“Large” and “small” are relative terms here—mine were within a factor of 10 of each other, since they are constrained by how much current the comparator can sink and by how fast we want the gate voltages to change.)
On Friday morning, I went into the lab and tried out potential solutions 1, 3, and 4. (Since we have a dual supply handy there is no reason not to use it.)
Since solution 1 is essentially identical to what I had debugged at home (but with a 6V supply instead of a 6.6V supply) it worked fine. Splitting the supply into a +3V and a -3V supply with the loudspeaker connected to the middle eliminated the DC bias on the speaker without changing anything else, so it worked fine also.
It took me a while to debug the design with the separated comparators, mainly because I had forgotten to allow for one very important constraint: the inputs to the comparators have to be between the power rails of the comparator. With the preamp powered from 0V and 6V, and the comparators powered at ±3V, that constraint was violated. I upped the voltage for the power stage to ±6V and the comparators worked ok. I did have to fuss around a bit with the pull-up resistor for the nFET, since we need to make sure that the comparator will have an output low voltage < 1V above the bottom power rail. That means that the size of the pull-up for the nFET needs to be based on the power-supply voltage and the current sunk by the comparator (which we should assume is around 5–6mA if the output voltage is below 1V, up to 7.5mA if we can tolerate a larger output voltage from the comparator). The pull-up for the pFET can be much smaller (so that the pFET turns off quickly), but that means that the pFET gate does not go close to the lower power rail. Still, the pFET works fine as long as the gate is at least 3V below the upper power rail, so we can get away with a fairly small resistor. I probably should play around with the resistors some more on Monday, so that I can give the students better guidance on how to design them based on experience, and not just theory.
The separate-comparators option is the closest to a real power-amp design, and is the one I think I’ll write up a lab-handout addendum for this weekend. I’ll try to get that done and an EKG lab handout, and make a final decision about whether to drop the power-amp lab (adding a final instead) or keep it.
I’ve definitely rejected the idea of a bipolar current gain before the FET gates (too complex for a 1-week lab) and pretty much rejected the idea of class-A amplifier (both the DC offset and the heating of the FET are problematic). I will think about switching to teaching a bipolar class-AB power amp next year, though, instead of a class-D.
## 2013 February 28
### Rethinking the power-amp lab again
Filed under: Circuits course — gasstationwithoutpumps @ 22:30
I took the breadboard that had been working for me at home for the class-D power amp to the lab today, and tried getting it to work in the way that I expected the students to do it, with 3 power supplies.
It failed miserably.
Even after tinkering with the circuit a bit, the FETs kept getting hot (indicating that I was not successfully having only one on at a time). I’ve already released the handout for the power amp lab, but the students will not be able to get the amplifier working from those instructions. I have several choices facing me:
• Get rid of the power-amp lab entirely, and take 2 weeks for the EKG lab. Originally, I had planned 2 weeks for the EKG lab, as it is slightly more difficult than the pressure sensor lab, but the difference in difficulty is not 2-to-1.
• Modify the class-D lab to use a single power supply, as I’ve been doing at home. I think that the problem I was facing was that the larger voltages of the dual supply made the overlap range where both FETs were on much larger, and the simple cMOS-inverter output stage could not be driven fast enough to pass rapidly through the range. It may be enough to use two 3V supplies, with everything except the loudspeaker running from +3V to -3V, and the other end of the loudspeaker at 0V. I’d be limited to the voltage range of the MCP6004 chip, which is 6V (the absolute max is 7V, and I’ve been running them at 6.6V at home without much trouble, though I’ve probably shortened their lifetime a lot). That would limit the power to the loudspeaker to around 0.6W, which is still a lot more than the op amps can deliver. Dual 3V supplies (and no extra 6V supply) would be a simpler design than what I have in the handout, and it should be very close to what I debugged at home. I should probably try it out in the lab tomorrow.
• Use bipolar transistors to drive the FET gates with more current so that they switch faster. We haven’t talked about bipolar transistors (except very, very cursorily in the context of the phototransistor, and the lab reports indicated that only one or two people had followed through to understand how the phototransistor works).
• Give up on class-D and do a simpler class-A amplifier with the loudspeaker as the load resistance. This is not the right way to use the loudspeaker, since it will have a large DC bias (pushing the cone out or pulling it in, rather than having it rest in the middle), but is a very simple circuit, and can use negative feedback from the loudspeaker to correct for any nonlinearity in the circuit. It is also horribly inefficient, and whatever FET we use is almost sure to get warm. We could solder a heat sink onto the transistors, if needed, but that adds a different sort of complexity to the lab.
• Do a class-A amplifier with a power bipolar transistor.
I don’t really like any of these solutions, but I’ll have to pick one this weekend and write it up for the students. If we start the EKG lab a week earlier than planned, I’ll have to try building an EKG amplifier on the protoboard this weekend, to make sure that it works well enough, and get the handout for it written. Doing a lower-voltage class-D amplifier would require the least modification to the handout. Bipolar transistors would require not only acquiring the transistors, but debugging the lab with bipolars (and I might want to switch back to a class-AB amplifier if I use bipolars).
## 2013 February 16
### Teaching students to build and use models
Filed under: Uncategorized — gasstationwithoutpumps @ 11:45
In a comment on her post Student Thinking About Abstracting, Mylène says
What frustrates me and disorients my students is that those justifications are never discussed, and even the fact that this is a model is omitted. To further “simplify” (obscure) the situation, most discussions of the matter don’t distinguish between two ideas: “the model has a change in behavior at 0.7V,” vs. “the physical system has a change in behavior at 0.7V.” Finally, the chapter starts with the most abstracted model (1st diode approximation) and ends with the less abstracted (3rd diode approximation).
On getting students to understand models: I agree that this is a huge problem. I’ve been trying various techniques and can’t claim to have found a silver bullet.
One thing I tried in class yesterday (disguised as a gnuplot tutorial) was to build up a model a little at a time to match measured data. I was trying to build an equivalent-circuit model for a loudspeaker, so I started by gathering data (rms voltage measurements across the loudspeaker and across a series resistor at different frequencies) and plotting magnitude of impedance vs. frequency from the data, then building the model a component at a time. Before doing the modeling, we had spent some time looking at the behavior of building-block circuits (R+C, R||C, R+L, R||L, C||L, C||L||R) using gnuplot, so I could ask them things like “how can we model the impedance increasing with frequency above about 1kHz?” We could then immediately modify the model and plot the results. Once things were close, we could use gnuplot’s “fit” command to tweak the parameters.
We didn’t start with “loudspeakers are …”, though we did start with one of the specs—that this was an 8Ω loudspeaker—for our first model. I didn’t even point out to the students that the frequency of main resonance peak is given as a spec on the data sheet. The data sheet gives it at 191Hz, while our measured data show 148Hz (more than 22% off, while factory tolerances for the resonant frequency are usually ±15%). They also give the voice coil inductance as 0.44mH, while our model gets 35µH, a factor of 12.6 difference! And they give the Qes of the resonance peak as 3.52, while our model of the R||L||C for the peak has $Q_{es}=R\sqrt{C/L} = 5.71$.
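Here is a sketch of that component-at-a-time impedance model in Python (the class built it in gnuplot; the component values below are placeholders that put the resonance near 150 Hz, not the fitted parameters from the lab).
import numpy as np
def z_parallel_rlc(f, R, L, C):
    # impedance of R || L || C at frequency f
    w = 2 * np.pi * f
    return 1.0 / (1.0 / R + 1j * w * C + 1.0 / (1j * w * L))
def z_speaker(f, R_dc=8.0, L_coil=35e-6, R_res=50.0, L_res=10e-3, C_res=110e-6):
    # series resistance + voice-coil inductance + a parallel RLC resonance
    w = 2 * np.pi * f
    return R_dc + 1j * w * L_coil + z_parallel_rlc(f, R_res, L_res, C_res)
f = np.logspace(1, 4, 400)                 # 10 Hz to 10 kHz
zmag = np.abs(z_speaker(f))
print(f[np.argmax(zmag)], zmag.max())      # resonance frequency and peak |Z|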
Maybe the inductance difference can be explained by the standard measurement for the voice-coil inductance being made at 1kHz for the Theile-Small parameters, while I fitted for a wider frequency range and added an extra 112µH inductor in parallel with a 32Ω resistor to bump up the impedance around 10kHz. Or maybe my fitting is a really bogus way to get the inductance, since I’m only looking at the amplitude and not the phase of the signal, and non-linear resistance could throw things off. Or maybe the Parts-Express people mis-measured or had a typo—I have no idea what measurements they made to get the parameters they report, or maybe these loudspeakers were so cheap because they didn’t meet the specs, though they are certainly good enough for our lab.
I think that one could do the same sort of model-building with diodes (the part whose models Mylène’s students were confusing with reality): start by measuring the I-vs-V characteristics. The setup I used to get a lot of data points with the Arduino for characterizing the FET in an electret mic might be a good one for them to use, though the unipolar ADC in the Arduino might be more challenging for characterizing diodes. Then try fitting different curve families to the data. Forget about physics for explaining how the diodes work, but concentrate on finding simple models that fit the data. For example, the FET models we used for the mic are not quite the standard ones, since there is a clear slope in the saturation region, and it doesn’t match the channel-length modulation model—but it can be fit with some simple curves.
Of course, I gave up on some modeling before even having the students collect data themselves—the power FETs they are using are incredibly messy, having threshold voltages that shift a lot as the transistors warm up and having an undocumented negative dynamic resistance region when diode-connected.
So it is important that their attempts to build models be of phenomena that are relatively easy to model, but they should build and fit the models (with some guidance) rather than just be handed them. I made the mistake of handing them models to fit for the electret mic lab and for the electrode lab. They not only didn’t understand the models, but they didn’t understand how to do the fitting.
I’m planning next year to do the model-building/gnuplot tutorial much earlier in the quarter, before they do the electrode labs, so that they can build the electrode models with some understanding. I’ll need to rearrange some other material, to do inductors much sooner, if I plan to use the loudspeaker data again. I may want to rearrange the labs a lot next year, since all of my first three labs involved model fitting, and the students weren’t ready for it. It may be better to move the sampling lab (which is currently lab 6) into the beginning, so that students can learn to use the Arduino in a simpler lab. As currently written, though, that lab calls for designing a high-pass filter for DC level shifting and a low-pass filter for removing aliasing, neither of which are suitable for a first-week lab in circuits.
Scheduling the labs and the classes is difficult. Fitting in all the topics they need before each lab is a tricky jigsaw problem, particularly when I discover them having problems with topics that I assumed they knew or could pick up quickly. Sigh, some stuff in the first week or two of lab is probably going to have to be “magic” as they’ve learned so little in physics classes that I can’t count on them having any useful lab or modeling skills when they come into the class. I just have to decide which things I’m willing to give them, rather than having them do for themselves.
Currently, I’m leaning toward having every lab have a design component, and to have them build models for important concepts, but I’m willing to give them a model for thermistor behavior that they just have to fit the parameters for. The design in the first two labs this year is very light (selecting a resistor value), but the measuring and model fitting is pretty heavy. The electrode lab has no design currently, but a lot of measuring and model fitting. I think I underestimated the relative difficulty of model fitting and design for these students, and may need to move the model fitting later in the quarter. I don’t think I can start with RC filters in the first week though, as they need voltage dividers, complex numbers, sinusoids, and complex impedance—probably at least 4 classes worth of material. Maybe by week three, though.
|
2014-03-08 11:28:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48915669322013855, "perplexity": 1589.5906270502685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654390/warc/CC-MAIN-20140305060734-00029-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0CCC
|
Lemma 33.35.7. Let $p > 0$ be a prime number. Let $S$ be a scheme in characteristic $p$. Let $X$ be a scheme over $S$. Then $\Omega _{X/S} = \Omega _{X/X^{(p)}}$.
Proof. This translates into the following algebra fact. Let $A \to B$ be a homomorphism of rings of characteristic $p$. Set $B' = B \otimes _{A, F_ A} A$ and consider the ring map $F_{B/A} : B' \to B$, $b \otimes a \mapsto b^ pa$. Then our assertion is that $\Omega _{B/A} = \Omega _{B/B'}$. This is true because $\text{d}(b^ pa) = 0$ if $\text{d} : B \to \Omega _{B/A}$ is the universal derivation and hence $\text{d}$ is a $B'$-derivation. $\square$
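To spell out the key step: $\text{d}$ is $A$-linear, so $\text{d}(b^p a) = a \, \text{d}(b^p) = p a b^{p-1} \text{d}(b) = 0$ since $p = 0$ in $B$; hence $\text{d}$ vanishes on the image of $F_{B/A}$ and is a $B'$-derivation, as claimed.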
|
2021-04-22 11:35:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.7964448928833008, "perplexity": 478.6656968194272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00634.warc.gz"}
|
http://aliquote.org/memos/2013/09/03/some-random-geeky-notes
|
# Some random geeky notes
2013-09-03
Here are some random geeky notes that have accumulated over the past few months on my desk.
There are too many articles I have read and have to read to provide a semblance of a summary here. Regarding books, my reading list is growing as well. Nevertheless, I've been happy with Serious Stats, by Thom Baguley, and Statistics Applied to Clinical Studies, by Cleophas and Zwinderman (bought on http://www.springer.com) as a complement to Statistics Applied to Clinical Trials by the same authors. I also enjoyed Introduction to Psychometric Theory, by Raykov and Marcoulides, since I was looking for a book relying on the Mplus software.
R 3.0 was released early this year. See also David Smith's post on the Revolutions blog. I'm still using the 2.15.2 version, partly because I have a lot of work in progress and also because I haven't found any decent way to manage my old packages directory with both versions. I suppose at some point I will have to update everything, but that will have to wait for the moment.
I just updated to TeX 2013, I bought Stata 13, and I'm playing more and more with Julia. For literate programming, I will probably be happy with dexy, and will also try the Stata filter.
I authored (80%) a new course on the use of statistical software (R and Stata, as far as I was concerned) in medical research. It took me more than 150 hours to produce about 450 pages of slides, exercises and solutions, errata and handouts. What I've learned is that it is not writing code or designing a LaTeX template, or even learning some Stata, that takes the most time: it is all about finding some good data set!
I was supposed to attend the JSM meeting this year. Unfortunately, I couldn't make it, so I followed some of the #jsm2013 tweets. I heard about Nate Silver's talk, much as I followed the 'Data Science' trend in recent months.
The long-awaited Applied Predictive Modeling by Max Kuhn and Kjell Johnson is now out. There's also an R package. I haven't had time to buy and read the book yet, but it is just a matter of time.
There was a nice article about Bioconductor in PLoS Comp Bio: Software for Computing and Annotating Genomic Ranges. There was also a great tutorial at UseR! 2013 on the Analysis and Comprehension of High-Throughput Genomic Data.(a)
Now, there are really great tools to build interactive HTML slides, namely Slidify and RStudio presentations, available in the development preview of RStudio. I know Chris Fonnesbeck used landslide for his great Bios301 course (Introduction to Statistical Computing) in the Department of Biostatistics at VU.(b) I wish it didn't foul my web browser history, but that's probably something I can manage in the future.
Other miscellanies: I noticed that Apple just updated Java to version 7 (from Oracle)--what they called Java for OS X 2013. It probably occurred during an update that I allowed, although I haven't updated anything for a year or so. I believe I hadn't noticed this change before because my Clojure install just works fine and it has been a long time since I last needed Java Web Start or the Java compiler. It is still possible to revert those changes. Another funny thing is that I can still use my registered QuickTime Pro 7 software, despite QuickTime X being the default on OS X 10.7 and higher. Well, enough of my uphill complaints.
### Notes
(a) Their annual report looks great too!
(b) And now there's Bios366 (Advanced Statistical Computing) using Python.
|
2017-10-17 01:45:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23171071708202362, "perplexity": 2854.956931993011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820556.7/warc/CC-MAIN-20171017013608-20171017033608-00099.warc.gz"}
|
https://mathoverflow.net/questions/351424/conjectures-and-open-problems-in-representation-theory
|
# Conjectures and open problems in representation theory [closed]
Are there very famous open problems or conjectures in representation theory, or in enumerative geometry, like the volume conjecture in topology?
• Do you want representation theory problems or enumerative geometry problems? – Ben McKay Jan 29 at 12:50
• Dear Ben, one of them or both will be ok. Since I am interesting in both fields and they have some connections to each other. Thank you so much for your advice! – Khanh Nguyen Jan 29 at 12:55
• I guess that for representation theory you want to be more specific since different people mean different things when they say "Representation Theory". For some open problems in the Representation theory of finite dimensional algebras and quivers, see e.g. math.uni-bonn.de/people/schroer/fd-problems.html. – Julian Kuelshammer Jan 29 at 13:10
• The work of J.M. Landsberg on matrix multiplication draws from representation theory and complex algebraic geometry, but maybe not much enumerative geometry, and is driven by conjectures from computer science about the speed with which computers can multiply matrices. – Ben McKay Jan 29 at 14:37
• I'm kind of amazed this question is still open, given how strict math overflow usually is. Regardless of what happens, just a bit of advice Khanh: this question is way too broad. Even if your question was faithful to the title, it would be far too broad. The only real answer is yes, there are many conjectures and open problems in representation theory. The more thought you put into your question, the better answers you will get. – Andy Sanders Jan 29 at 18:29
There are many open, and seemingly deep, conjectures in modular representation theory (or block theory) in connection with enumerating representation-theoretic invariants: a start of a list might be: Brauer's $$k(B)$$-problem, the Alperin-McKay Conjecture, the Alperin Weight Conjecture, Dade's conjectures, and the Isaacs-Navarro conjecture. Gabriel Navarro has several recent survey papers discussing these and other conjectures.
|
2020-02-20 21:42:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6145482659339905, "perplexity": 441.1763696843349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145282.57/warc/CC-MAIN-20200220193228-20200220223228-00429.warc.gz"}
|
https://cs.stackexchange.com/questions/122130/why-does-the-substitution-x-fy-y-z-work-this-way/122131
|
# Why does the substitution {x/f(y), y/z} work this way?
There is an example of applying a substitution to an expression, and I am having a problem with it. Let $$\theta = \{ x/f(y), y/z \}$$, and $$E=p(x,y,g(z))$$, then $$E\theta = p( f(y),z,g(z) )$$.
Why is $$y/z$$ not applied to $$E$$ after using $$x/f(y)$$, so that the answer would be $$E\theta = p( f(z),z,g(z) )$$?
Because that's not how substitution is defined.
Seriously, there isn't much more to it than that. In some situations (such as applying a single step of a collection of rewriting rules), having the ability to substitute "each variable once, all at once" like this is important for the correctness of the definition. So substitution is defined so that there is a way to write these definitions correctly.
In other situations (such as the output of a unification algorithm), authors require that substitutions be idempotent, i.e. applying the substitution a second time does not change the result. This prohibits substitutions such as the one in the question, thereby avoiding the issue.
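For concreteness, here is a minimal Python sketch of the "each variable once, all at once" behaviour; the term representation (strings for variables and constants, tuples for compound terms) and the function name are illustrative, not taken from any particular library:

```python
# Simultaneous (single-pass) substitution on first-order terms.
# A term is either a string (variable/constant) or a tuple (functor, arg1, ...).

def apply_subst(term, subst):
    """Replace every variable simultaneously; images are NOT substituted again."""
    if isinstance(term, str):
        # A bound variable is replaced by its image exactly once.
        return subst.get(term, term)
    functor, *args = term
    return (functor, *(apply_subst(a, subst) for a in args))

theta = {"x": ("f", "y"), "y": "z"}
E = ("p", "x", "y", ("g", "z"))
print(apply_subst(E, theta))  # ('p', ('f', 'y'), 'z', ('g', 'z'))  i.e. p(f(y), z, g(z))
```

Applying theta a second time would turn the y inside f(y) into z, giving p(f(z), z, g(z)); that is exactly the sequential reading the question asked about, and why idempotence is a meaningful extra requirement.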
|
2021-03-04 00:07:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650633335113525, "perplexity": 236.09211053422467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367949.58/warc/CC-MAIN-20210303230849-20210304020849-00263.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/particle-nature-electromagnetic-radiation-planck-s-quantum-theory-electrons-are-emitted-zero-velocity-metal-surface-when-it-exposed-radiation-wavelength-6800-calculate-threshold-frequency-work-function-w0-metal_10599
|
# Electrons Are Emitted with Zero Velocity from a Metal Surface When It is Exposed to Radiation of Wavelength 6800 Å. Calculate Threshold Frequency (ν0) and Work Function (W0) of the Metal. - CBSE (Science) Class 11 - Chemistry
Concept: Particle Nature of Electromagnetic Radiation: Planck's Quantum Theory
#### Question
Electrons are emitted with zero velocity from a metal surface when it is exposed to radiation of wavelength 6800 Å. Calculate the threshold frequency (ν0) and work function (W0) of the metal.
#### Solution
Threshold wavelength of the radiation: $\lambda_0 = 6800\ \text{Å} = 6800 \times 10^{-10}\ \text{m}$

Threshold frequency ($\nu_0$) of the metal:

$\nu_0 = \dfrac{c}{\lambda_0} = \dfrac{3 \times 10^{8}\ \text{m s}^{-1}}{6.8 \times 10^{-7}\ \text{m}} = 4.41 \times 10^{14}\ \text{s}^{-1}$

Thus, the threshold frequency ($\nu_0$) of the metal is $4.41 \times 10^{14}\ \text{s}^{-1}$.

Hence, the work function ($W_0$) of the metal is

$W_0 = h\nu_0 = (6.626 \times 10^{-34}\ \text{J s}) \times (4.41 \times 10^{14}\ \text{s}^{-1}) = 2.922 \times 10^{-19}\ \text{J}$
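The arithmetic is easy to verify with a few lines of Python (a minimal check; h and c are the standard constants used above):

```python
# Numeric check of the solution above.
h = 6.626e-34       # Planck constant, J s
c = 3.0e8           # speed of light, m/s
lam0 = 6800e-10     # threshold wavelength, m

nu0 = c / lam0      # threshold frequency, s^-1
W0 = h * nu0        # work function, J

print(f"nu0 = {nu0:.3e} s^-1")  # nu0 = 4.412e+14 s^-1
print(f"W0  = {W0:.3e} J")      # W0  = 2.923e-19 J
```

(The small difference from 2.922 × 10⁻¹⁹ J in the worked solution comes from rounding ν₀ to three significant figures before multiplying.)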
|
2019-12-07 04:37:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5698584914207458, "perplexity": 4184.9737536913635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540495263.57/warc/CC-MAIN-20191207032404-20191207060404-00056.warc.gz"}
|
https://fr.mathworks.com/help/map/ref/gshhs.html
|
# gshhs
Read Global Self-Consistent Hierarchical High-Resolution Geography (GSHHG) data
## Syntax
``S = gshhs(filename)``
``S = gshhs(filename,latlim,lonlim)``
``indexFilename = gshhs(filename,"createindex")``
## Description
`S = gshhs(filename)` reads Global Self-Consistent Hierarchical High-Resolution Geography (GSHHG) vector data from a file.

`S = gshhs(filename,latlim,lonlim)` reads data within the latitude and longitude limits specified by `latlim` and `lonlim`.
`indexFilename = gshhs(filename,"createindex")` creates an index file called `indexFilename` that enables the `gshhs` function to more quickly read subsets of large data sets. Once you create the index file, the `gshhs` function uses it to access data by location. This syntax does not read the GSHHG data. To read the data after creating the index file, use the `gshhs` function again.
## Examples
Extract a file containing coarse GSHHG data from a GNU zipped file. Read the file into the workspace as a geographic data structure array.
```
filename = gunzip("gshhs_c.b.gz");
S = gshhs(filename{1});
```
Verify that all elements of the structure array represent polygons.
`isequal(S.Geometry,"Polygon")`
```
ans =
  logical
   1
```
Query the number of polygons in the structure array.
`length(S)`
```ans = 1866 ```
Extract the latitude and longitude coordinates of the polygons from the structure array. Then, display the data using lines on a world map.
```
lat = [S.Lat];
lon = [S.Lon];
figure
worldmap world
geoshow(lat,lon,"DisplayType","line")
```
Find the polygons corresponding to land areas (`levels == 1`) and lakes (`levels == 2`). Create a map of Europe with the land areas in green and the lakes in blue.
```
levels = [S.Level];
land = (levels == 1);
lake = (levels == 2);
figure
worldmap europe
geoshow(S(land),"FaceColor","#d2e9b8")
geoshow(S(lake),"FaceColor","b")
```
Extract a file containing coarse GSHHG data from a GNU zipped file. Create an index for the data.
```
filename = gunzip("gshhs_c.b.gz");
indexFilename = gshhs(filename{1},"createindex");
```
Read data for a region surrounding Africa into the workspace as a geographic data structure array. The `gshhs` function uses the index to more quickly read the data.
```
latlim = [-40 40];
lonlim = [-20 55];
S = gshhs(filename{1},latlim,lonlim);
```
Display the data on a world map. To display the lakes and islands within the land areas, sort the structure array in descending order according to the `Level` field.
```
[~,ix] = sort([S.Level],"descend");
S = S(ix);
figure
worldmap(latlim,lonlim)
geoshow(S,"FaceColor","#d2e9b8")
setm(gca,"FFaceColor","#9dd7ee")
```
## Input Arguments
Name of the GSHHG file, specified as a character vector or a string scalar.
The filename must have one of these forms:
• `"gshhs_x.b"`
• `"wdb_borders_x.b"`
• `"wdb_rivers_x.b"`
`x` must be `c`, `l`, `i`, `h`, or `f`. These letters correspond to the resolution of the file.
Data Types: `char` | `string`
Latitude limits, specified as an empty vector (`[]`) or a two-element vector in units of degrees.
When you specify `latlim` as an empty vector, the `gshhs` function reads data within the latitude limits `[-90 90]`.
When you specify `latlim` as a two-element vector, the value of `latlim(1)` must be less than the value of `latlim(2)`.
Data Types: `double`
Longitude limits, specified as an empty vector (`[]`) or a two-element vector in units of degrees.
When you specify `lonlim` as an empty vector, the `gshhs` function reads data within the longitude limits `[-180 195]`.
When you specify `lonlim` as a two-element vector, the value of `lonlim(1)` must be less than the value of `lonlim(2)`.
Data Types: `double`
## Output Arguments
Geographic data structure, returned as a structure array with these fields:
Field
Description
`Geometry`
Geometric type, returned as `'Line'` or `'Polygon'`.
`BoundingBox`
Bounding box, returned as a 2-by-2 matrix of the form `[minLon minLat; maxLon maxLat]`. The values `minLon` and `minLat` indicate the minimum longitude and latitude, respectively. The values `maxLon` and `maxLat` indicate the maximum longitude and latitude, respectively.
`Lon`
Longitude coordinates, returned as a numeric vector.
`Lat`
Latitude coordinates, returned as a numeric vector.
`South`
Southern latitude boundary, returned as a numeric scalar.
`North`
Northern latitude boundary, returned as a numeric scalar.
`West`
Western longitude boundary, returned as a numeric scalar.
`East`
Eastern longitude boundary, returned as a numeric scalar.
`Area`
Area of the polygon in square kilometers, returned as a numeric scalar.
`Level`
Level in topological hierarchy, returned as an integer in the range [1, 4].
`LevelString`
Level in topological hierarchy, returned as `'land'`, `'lake'`, `'island_in_lake'`, `'pond_in_island_in_lake'`, or `''`.
When you read the WDB rivers and borders data sets, the `LevelString` field is empty.
`NumPoints`
Number of points in the polygon, returned as a nonnegative integer.
`FormatVersion`
Format version of the data file, returned as one of these values:
• A positive integer — Indicates version 3 or later.
• Empty — Indicates version 1 or 2.
`Source`
Data source, returned as one of these values:
• `'WDBII'` — CIA World Data Bank II
• `'WVS'` — World Vector Shorelines
`CrossesGreenwich`
Indicator for the polygon crossing the prime meridian, returned as `1` when the polygon crosses the prime meridian and `0` otherwise.
`GSHHS_ID`
Unique polygon ID, returned as a nonnegative integer.
When the value of `FormatVersion` is at least `7` (release 2.0 and later), the structure array contains these additional fields.
Field
Description
`RiverLake`
Indicator for a river-lake, returned as `1` when a polygon is the fat part of a major river and the value of `Level` is `2`, and `0` otherwise.
`AreaFull`
Area of the original full-resolution polygon, returned as a numeric scalar in units of 1/10 km².
`Container`
ID of the container polygon, returned as a nonnegative integer or `-1`. A value of `-1` indicates that the polygon does not have a container (as in, the value of `Level` is `1`).
`Ancestor`
ID of the ancestor full-resolution polygon, returned as a nonnegative integer or `-1`. A value of `-1` indicates that the polygon does not have an ancestor.
When the value of `FormatVersion` is at least `9` (release 2.2 and later), the structure array contains this additional field.
Field
Description
`CrossesDateline`
Indicator for the polygon crossing the dateline, returned as `1` when the polygon crosses the dateline and `0` otherwise.
Name of the index file, returned as a character vector.
The index file has the same name as the GSHHG data file, but with the extension `i` instead of `b`. The function writes the file in the same folder as `filename`.
### Global Self-Consistent Hierarchical High-Resolution Geography
Global Self-Consistent Hierarchical High-Resolution Geography (GSHHG) is a database created by Paul Wessel of the University of Hawaii and Walter H.F. Smith of the National Oceanic and Atmospheric Administration (NOAA) Geosciences Lab. This database includes coastlines, major rivers, and lakes. You can find GSHHG data in various resolutions from the Shoreline / Coastline Resources page on the NOAA website.
GSHHG is formerly known as Global Self-Consistent Hierarchical High-Resolution Shorelines (GSHHS).
## Tips
• Mapping Toolbox™ contains the file `gshhs_c.b` within the GNU zipped file `gshhs_c.b.gz`. The file contains the coarse data set for version 3 (release 1.3).
• When you read data within specified limits, the `gshhs` function does not clip data that is partially within the limits. To clip the data and maintain polygon topology, use the `maptrimp` function and specify the limits as the `Lat` and `Lon` fields contained in `S`.
• The `gshhs` function supports files up to version 15 (releases 1.1 through 2.3.6). The function can also read newer versions, provided they use the same header format as releases 2.0 and 2.1.
## Version History
Introduced before R2006a
|
2023-03-29 01:24:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5381408333778381, "perplexity": 3842.4534077112207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00699.warc.gz"}
|
https://math.stackexchange.com/questions/2490681/volume-of-the-solid-in-the-first-octant-bounded-by-the-cylinder-z-9-y2
|
# Volume of the solid in the first octant bounded by the cylinder $z=9-y^2$
Find the volume of the solid in the first octant bounded by the cylinder $z=9-y^2$ and the plane $x=2$
Can I solve this problem using triple integrals in the following way
$$\int_0^2\int_0^{\sqrt{9-z}}\int_0^{9-y^2}1 \, dzdydx$$
I'm currently studying double integrals in my course but I'm not entirely sure how to attack the problem that way. Doing a bit of research I found a problem about a solid prism with similar bounds. I was wondering if I could solve the problem with triple integrals and if so would it be a better option than with double?
I'm not looking for a solution just to let you know. Thanks
To evaluate the volume of this solid, a triple iterated integral is fine. Note that the solid is given by $$\{(x,y,z)\in\mathbb{R}^3\;:\; 0\leq x\leq 2,\; 0\leq y,\; 0\leq z\leq 9-y^2\}.$$ So your upper integration limit for $y$ is not correct (and according to the order of integration, in any case it should not depend on $z$). What is the right upper limit for $y$?
• Right! if I let $z=0$ I get $y=\pm 3$. Since $y \ge 0$ that gives the bounds as $$0 \le y \le 3$$ Oct 26 '17 at 13:43
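For the record, once that limit is fixed the evaluation is quick:

$$V = \int_0^2\int_0^3\int_0^{9-y^2} 1\,dz\,dy\,dx = \int_0^2\int_0^3 (9-y^2)\,dy\,dx = \int_0^2 \left[\,9y - \frac{y^3}{3}\,\right]_0^3 dx = \int_0^2 18\,dx = 36.$$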
|
2021-09-26 03:14:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9225676655769348, "perplexity": 48.10405226537313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00077.warc.gz"}
|
http://poshhg.codeplex.com/
|
Inspired by the Posh-Git project (http://github.com/dahlbyk/posh-git)
Posh-hg provides a custom prompt and tab expansion when using Mercurial from within a Windows PowerShell command line.
For more details, see this post on Jeremy's blog
The source code is available on github
To install posh-hg, download the latest source either from github or on CodePlex
. path\to\posh-hg\profile.example.ps1
Alternatively, you can copy the contents of the profile.example.ps1 file in to your profile.ps1
The prompt is customisable. For example, you could have a multi-line prompt:
$global:HgPromptSettings.BeforeText = ' on '
$global:HgPromptSettings.BeforeForegroundColor = [ConsoleColor]::White
$global:HgPromptSettings.AfterText = "`nhg"
$global:HgPromptSettings.BeforeTagText = ' at '
|
2016-06-25 23:02:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18004150688648224, "perplexity": 7612.7411206889965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00114-ip-10-164-35-72.ec2.internal.warc.gz"}
|
http://www.reference.com/browse/Shuffled+deck
|
# Probability distribution
In probability theory and statistics, a probability distribution identifies either the probability of each value of a random variable (when the variable is discrete), or the probability of the value falling within a particular interval (when the variable is continuous). The probability distribution thus describes the range of possible values that a random variable can attain and the probability that the value of the random variable is within any (measurable) subset of that range.
When the random variable takes values in the set of real numbers, the probability distribution is completely described by the cumulative distribution function, whose value at each real x is the probability that the random variable is smaller than or equal to x.
The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, etc.); almost all measurements are made with some intrinsic error; in physics many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate.
There are various probability distributions that show up in different applications. One of the more important ones is the normal distribution, which is also known as the Gaussian distribution or the bell curve and approximates many naturally occurring distributions. The toss of a fair coin yields another familiar distribution, where the possible values are heads or tails, each with probability 1/2.
## Rigorous definitions
In probability theory, every random variable may be attributed to a function defined on a state space equipped with a probability distribution that assigns a probability to every subset (more precisely every measurable subset) of its state space in such a way that the probability axioms are satisfied. That is, probability distributions are probability measures defined over a state space instead of the sample space. A random variable then defines a probability measure on the sample space by assigning a subset of the sample space the probability of its inverse image in the state space. In other words the probability distribution of a random variable is the push forward measure of the probability distribution on the state space.
In other words, given a random variable $X\colon \Omega \rightarrow Y$ between a probability space $(\Omega, \mathcal{F}, P)$, the sample space, and a measurable space $(Y, \Sigma)$, called the state space, a probability distribution on $(Y, \Sigma)$ is a probability measure $X_{*}P\colon \Sigma \rightarrow [0,1]$ on the state space, where $X_{*}P$ is the push forward measure of $P$.
### Probability distributions of real-valued random variables
Because a probability distribution Pr on the real line is determined by the probability of being in a half-open interval, $\Pr(a, b]$, the probability distribution of a real-valued random variable X is completely characterized by its cumulative distribution function:

$F(x) = \Pr[X \le x] \qquad \forall x \in \mathbb{R}.$
#### Discrete probability distribution
A probability distribution is called discrete if its cumulative distribution function only increases in jumps. More precisely, a probability distribution is discrete if there is a finite or countable set whose probability is 1.
For many familiar discrete distributions, the set of possible values is topologically discrete in the sense that all its points are isolated points. But, there are discrete distributions for which this countable set is dense on the real line.
Discrete distributions are characterized by a probability mass function, $p$, such that

$\Pr[X = x] = p(x).$
#### Continuous probability distribution
By one convention, a probability distribution is called continuous if its cumulative distribution function is continuous, which means that it belongs to a random variable X for which $\Pr[X = x] = 0$ for all $x$ in $\mathbb{R}$.
Another convention reserves the term continuous probability distribution for absolutely continuous distributions. These distributions can be characterized by a probability density function: a non-negative Lebesgue integrable function $f$ defined on the real numbers such that
$F(x) = \Pr[X \le x] = \int_{-\infty}^{x} f(t)\,dt$
Discrete distributions and some continuous distributions (like the devil's staircase) do not admit such a density.
### Terminology
The support of a distribution is the smallest closed interval/set whose complement has probability zero.
The probability density function of the sum of two independent random variables is the convolution of each of their density functions.
The probability density function of the difference of two random variables is the cross-correlation of each of their density functions.
A discrete random variable is a random variable whose probability distribution is discrete. Similarly, a continuous random variable is a random variable whose probability distribution is continuous.
## List of important probability distributions
Certain random variables occur very often in probability theory, in some cases due to their application to many natural and physical processes, and in some cases due to theoretical reasons such as the central limit theorem, the Poisson limit theorem, or properties such as memorylessness or other characterizations. Their distributions therefore have gained special importance in probability theory.
### Discrete distributions
#### With finite support
• The Bernoulli distribution, which takes value 1 with probability p and value 0 with probability q = 1 − p.
• The Rademacher distribution, which takes value 1 with probability 1/2 and value −1 with probability 1/2.
• The binomial distribution describes the number of successes in a series of independent Yes/No experiments.
• The degenerate distribution at x0, where X is certain to take the value x0. This does not look random, but it satisfies the definition of random variable. It is useful because it puts deterministic variables and random variables in the same formalism.
• The discrete uniform distribution, where all elements of a finite set are equally likely. This is supposed to be the distribution of a balanced coin, an unbiased die, a casino roulette or a well-shuffled deck of playing cards. Also, one can use measurements of quantum states to generate uniform random variables. All these are "physical" or "mechanical" devices, subject to design flaws or perturbations, so the uniform distribution is only an approximation of their behaviour. In digital computers, pseudo-random number generators are used to produce a statistically random discrete uniform distribution; a minimal sampling sketch in Python follows this list.
• The hypergeometric distribution, which describes the number of successes in the first m of a series of n Yes/No experiments, if the total number of successes is known.
• Zipf's law or the Zipf distribution. A discrete power-law distribution, the most famous example of which is the description of the frequency of words in the English language.
• The Zipf-Mandelbrot law is a discrete power law distribution which is a generalization of the Zipf distribution.
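Picking up the discrete uniform example above, here is a minimal Python sketch (standard library only, names illustrative) that simulates a fair six-sided die with a pseudo-random generator and checks the empirical frequencies:

```python
# Simulate a discrete uniform distribution on {1, ..., 6} (a fair die)
# and compare empirical frequencies with the theoretical 1/6.
import random
from collections import Counter

random.seed(0)                                  # reproducible pseudo-randomness
rolls = [random.randint(1, 6) for _ in range(60_000)]
freqs = Counter(rolls)

for face in range(1, 7):
    print(face, freqs[face] / len(rolls))       # each value close to 1/6 ~= 0.1667
```

As noted above, this is only statistically uniform: the generator is deterministic, so the output merely approximates the ideal distribution.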
### Joint distributions
For any set of independent random variables the probability density function of their joint distribution is the product of their individual density functions.
|
2013-05-24 17:27:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881086111068726, "perplexity": 206.70142395477683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704818711/warc/CC-MAIN-20130516114658-00077-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://fomq.glitterclub.it/lightgbm-parameter-tuning-python.html
|
# Lightgbm Parameter Tuning Python
17) as VotingClassifier. Evaluating XGBoost and LightGBM. 7 is under development. Make sure to change the kernel to "Python (reco)". as LightGBM and. analyticsvidhya. However, for a brief recap, gradient boosting improves model performance by first developing an initial model called the base learner using whatever algorithm of your choice (linear, tree, etc. This document gives a basic walkthrough of LightGBM python package. According to research by Microsoft professionals on the comparison of these two algorithms, LightGBM proved to be a step ahead of XGBoost. Python has changed in some significant ways since I first wrote my "fast python" page in about 1996, which means that some of the orderings will have changed. This is a guide on parameter tuning in gradient boosting algorithm using Python to adjust bias variance trade-off in predictive modeling. LightGBM on Spark (Scala / Python / R) The parameter tuning tools. To use only with multiclass objectives. 01$(change gamma to. Seeing as XGBoost is used by many Kaggle competition winners, it is worth having a look at CatBoost! Contents. For example: random forests theoretically use feature selection but effectively may not, support vector machines use L2 regularization etc. notes about machine learning. Learning From Other Solutions 3. So let's first start with. The RLOF is a fast local optical flow approach described in and similar to the pyramidal iterative Lucas-Kanade method as proposed by. That way, each optimizer will use its default parameters Then you can select which optimizer was the best, and set optimizer=, then move on to tuning optimizer_params, with arguments specific to the optimizer you selected; CatBoost: Can't find similar Experiments for CatBoost?. Tune Parameters for the Leaf-wise (Best-first) Tree LightGBM uses the leaf-wise tree growth algorithm, while many other popular tools use depth-wise tree growth. The single most important reason for the popularity of Python in the field of AI and Machine Learning is the fact that Python provides 1000s of inbuilt libraries that have in-built functions and methods to easily carry out data analysis, processing, wrangling, modeling and so on. local machine, remote servers and cloud). 000 rows, as it tends to overfit for smaller datasets. 803, and the Top 10% Recall reached 46. It will help you bolster your understanding of boosting in general and parameter tuning for GBM. Note that this is but a sampling of available Python automated machine learning tools available. trees' was held constant at a value of 1200 Tuning parameter 'interaction. use "pylightgbm" python package binding to run this code. Hyperparameter tuning takes advantage of the processing infrastructure of Google Cloud Platform to test different hyperparameter configurations when training your model. Here is an example of Hyperparameter tuning with RandomizedSearchCV: GridSearchCV can be computationally expensive, especially if you are searching over a large hyperparameter space and dealing with multiple hyperparameters. The AUCs of prediction reached 0. What really is Hyperopt? From the site:. We performed machine learning experiments across six different datasets. Here an example python recipe to use it:. matplotlib - Plotting library. liquidsvm/liquidsvm. A recent study o f Kim and. Even after all of your hard work, you may have chosen the wrong classifier to begin with. Automated the Hyperparameter Tuning using Bayesian Optimization. 
Yes, H2O can use cross-validation for parameter tuning if early stopping is enabled (stopping_rounds>0). Given generated features and labels, we regard the prediction as a regression problem. On the other hand, we also need some execution time on feature selection itself, as the RFE search space is of size , where is the number of predictors. parameter_space dict. Python and its libraries like NumPy, SciPy, Scikit-Learn, Matplotlib are used in data science and data analysis. By Ieva Zarina, Software Developer, Nordigen. list of index vectors used for splits into training and validation sets. If you are interested in using the EnsembleClassifier, please note that it is now also available through scikit learn (>0. (which might end up being inter-stellar cosmic networks!. as LightGBM and. If you have been using GBM as a ‘black box’ till now, maybe it’s time for you to open it and see, how it actually works!. A particular implementation of gradient boosting, XGBoost, is consistently used to win machine learning competitions on Kaggle. This lead me to not be able to properly figure out what the optimal parameters for the model are. We use Python to build custom extensions to the Jupyter server that allows us to manage tasks like logging, archiving, publishing, and cloning notebooks on behalf of our users. class: center, middle ### W4995 Applied Machine Learning # (Gradient) Boosting, Calibration 02/20/19 Andreas C. Common hyperparameter tuning techniques such as GridSearch and Random Search roam the full space of available parameter values in an isolated way without paying attention to past results. Package 'rBayesianOptimization' September 14, 2016 Type Package Title Bayesian Optimization of Hyperparameters Version 1. 6459 when applied to the test set, whilst balanced accuracy was 0. Download Open Datasets on 1000s of Projects + Share Projects on One Platform. XGBoost Parameters (official guide) 精彩博文: XGBoost浅入浅出——wepon xgboost: 速度快效果好的boosting模型 Complete Guide to Parameter Tuning in XGBoost (with codes in Python) XGBoost Plotting API以及GBDT组合特征实践. A particular implementation of gradient boosting, XGBoost, is consistently used to win machine learning competitions on Kaggle. In this article, we are going to build a Support Vector Machine Classifier using R programming language. 6, and Python 3. Tune is a library for hyperparameter tuning at any scale. ai @arnocandel SLAC ICFA 02/28/18. compared to state-of-the-art algorithms for hyper-parameter tuning. These are the Python-based Machine Learning tools. LightGBM Vs XGBoost. 1BestCsharp blog 5,758,416 views. 01 in the codes above) the algorithm will converge at 42nd iteration. List of other helpful links. Hyper-Parameter Optimisation (HPO) Don't get panic when you see the long list of parameters. compared to state-of-the-art algorithms for hyper-parameter tuning. It offers some different parameters but most of them are very similar to their XGBoost counterparts. View ZHENG PAN’S profile on LinkedIn, the world's largest professional community. 6) ☑ Support for Conda ☑ Install R and Python libraries directly from Dataiku’s interface ☑ Open environment to install any R or Python libraries ☑ Manage packages dependencies and create reproducible environments Scale code execution. Happily, all of the code samples in the book run with Python 3. 
"Institute Merit Scholarship 2018-19" - Recipient of the 'Institute Merit Scholarship' for being Rank 1 in the branch and securing the highest cumulative Semester Performance Index (SPI) during the academic year 2017-18. The hyperparameters I tuned with this method are: colsample_bytree - Also called feature fraction, it is the fraction of features to consider while building a single gradient boosted tree. In this guide, learn how to define various configuration settings of your automated machine learning experiments with the Azure Machine Learning SDK. LightGBM on Spark (Scala / Python / R) The parameter tuning tools. To me, LightGBM straight out of the box is easier to set up, and iterate. If you want to break into competitive data science, then this course is for you! Participating in predictive modelling competitions can help you gain practical experience, improve and harness your data modelling skills in various domains such as credit, insurance, marketing, natural language processing, sales' forecasting and computer vision to name a few. Inspired by awesome-php. LightGBM, etc. 5 may be of interest to scientific programmers. Boosting Hyper-Parameter Settings Python libraries are used to deploy the boosting trees (GBoost, XGBoost, and LightGBM) (Ke et al. Browse other questions tagged python-3. Using Pandas-TD, you can fetch aggregated data from Treasure Data and move it into pandas. I ran an ensemble but found better performance by using only LightGBM. CatBoost is a machine learning method based on gradient boosting over decision trees. … We do that by specifying parameter. Consultez le profil complet sur LinkedIn et découvrez les relations de Olga, ainsi que des emplois dans des entreprises similaires. This guide uses Ubuntu 16. Hands-On Machine Learning for Algorithmic Trading is for data analysts, data scientists, and Python developers, as well as investment analysts and portfolio managers working within the finance and investment industry. Tuning for imbalanced data. NIPS2017読み会 LightGBM: A Highly Efficient Gradient Boosting Decision T… Overview of tree algorithms from decision tree to xgboost. To model decision tree classifier we used the information gain, and gini index split criteria. And on the right half of the slide you will see somehow loosely corresponding parameter names from LightGBM. Tuning may be done for individual Estimators such as LogisticRegression, or for entire Pipelines. LightGBM是微软推出的一款开源boosting工具,现在已经成为各类机器学习竞赛常用的一大利器。不过由于LightGBM是c++编写的,并且其预测功能的主要使用方式是命令行调用处理批量数据,比较 博文 来自: lyg5623的专栏. Hyperparameter optimization is a big part of deep learning. Far0n's framework for Kaggle competitions "kaggletils" 28 Jupyter Notebook tips, tricks and shortcuts; Advanced features II. In the remainder of today’s tutorial, I’ll be demonstrating how to tune k-NN hyperparameters for the Dogs vs. A particular implementation of gradient boosting, XGBoost, is consistently used to win machine learning competitions on Kaggle. save_period. Automatic tuning of Random Forest Parameters. LightGBM on Spark (Scala / Python / R) The parameter tuning tools. 大事なパラメタとその意味を調査. See the complete profile on LinkedIn and discover ZHENG’S connections and jobs at similar companies. Python Wrapper for MLJAR API. Explore Popular Topics Like Government, Sports, Medicine, Fintech, Food, More. Set Up a Python Virtual Environment. また機械学習ネタです。 機械学習の醍醐味である予測モデル作製において勾配ブースティング(Gradient Boosting)について今回は勉強したいと思います。. 
NIPS2017読み会 LightGBM: A Highly Efficient Gradient Boosting Decision T… Overview of tree algorithms from decision tree to xgboost. Package 'xgboost' August 1, 2019 Type Package Title Extreme Gradient Boosting Version 0. We build our models in XGBoost (we also tried LightGBM) and apply parameters tuning (we write auto-tuning scripts, available here). Here is an example of Hyperparameter tuning with RandomizedSearchCV: GridSearchCV can be computationally expensive, especially if you are searching over a large hyperparameter space and dealing with multiple hyperparameters. list of index vectors used for splits into training and validation sets. Tuned the parameters to improve the score. The scripts in this guide are written in Python 3, but should also work on Python 2. Open LightGBM github and see instructions. The optimal ROC selected was 0. R Script with Plot Python Script Obviously the convergence is slow, and we can adjust this by tuning the learning-rate parameter, for example if we try to increase it into$\gamma=. Flexible Data Ingestion. If linear regression was a Toyota Camry, then gradient boosting would be a UH-60 Blackhawk Helicopter. What is Hyperopt-sklearn? Finding the right classifier to use for your data can be hard. In the benchmarks Yandex provides, CatBoost outperforms XGBoost and LightGBM. ParameterGrid (param_grid) [source] ¶. With Tune, you can launch a multi-node distributed hyperparameter sweep in less than 10 lines of code. The RLOF is a fast local optical flow approach described in and similar to the pyramidal iterative Lucas-Kanade method as proposed by. • Generated simulation data from ten different settings, obtained a better tuning parameter combinations for both XGBoost and Random Forest by using GridSearchCV function in python scikit-learn. 1-line anon bash big-data big-data-viz C data-science econ econometrics editorial hacking HBase hive hql infosec java javascript linux lists machine-learning macro micro mssql MySQL nosql padb passwords postgres programming python quick-tip r ruby SAS sec security sql statistics stats sys-admin tsql usability useable-sec web-design windows. A Beginner's Guide to Python Machine Learning and Data Science Frameworks. Python API Tune Parameters for the Leaf-wise (Best-first) Tree LightGBM uses the leaf-wise tree growth algorithm, while many other popular tools use depth-wise tree growth. Practical XGBoost in Python - 2. Python scikit-learn package provides the GridSearchCV class that can simplify the task for machine learning practitioners. The two libraries have similar parameters and we'll use names from XGBoost. … We do that by specifying parameter. About LightGBM(LGBM) Microsoft謹製Gradient Boosting Decision Tree(GBDT)アルゴリズム 2016年に登場し、Kaggleなどで猛威を振るう → 「速い, 精度良い , メモリ食わない」というメリット 現在はPython , Rのパッケージが存在 4. Tags: Machine Learning, Scientific, GBM. Browse other questions tagged python-3. 92 AUC score. Normally, cross validation is used to support hyper-parameters tuning that splits the data set to training set for learner training and the validation set. From very simple random grid search to Bayesian Optimisation to genetic algorithms. In that case, cross-validation is used to automatically tune the optimal number of epochs for Deep Learning or the number of trees for DRF/GBM. local machine, remote servers and cloud). XGBoost Documentation¶. d) How to implement Grid search & Random search hyper parameters tuning in Python. about various hyper-parameters that can be tuned in XGBoost to improve model's performance. 
word2vec and others such methods are cool and good but they require some fine-tuning and don't always work out. XGBoost Parameter Tuning How not to do grid search (3 * 2 * 15 * 3 = 270 models): 15. More specifically you will learn: what Boosting is and how XGBoost operates. I'm guessing there is some variables that you think you are setting but you're really not. Tune Parameters for the Leaf-wise (Best-first) Tree¶ LightGBM uses the leaf-wise tree growth algorithm, while many other popular tools use depth-wise tree growth. essential tuning parameter to achieve desired performance. 大事なパラメタとその意味を調査. We will not discuss the details here, but there are advanced options for hyperopt that require distributed computing using MongoDB, hence the pymongo import. To analyze the sensitivity of XGBoost, LightGBM and CatBoost to their hyper-parameters on a fixed hyper-parameter set, we use a distributed grid-search framework. Detailed tutorial on Winning Tips on Machine Learning Competitions by Kazanova, Current Kaggle #3 to improve your understanding of Machine Learning. hyperopt - Distributed Asynchronous Hyperparameter Optimization in Python. We consider best iteration for predictions on test set. Machine Learning in Python Course. Here is what my model got after training for 10000 steps with default train. http://xyclade. 04 in the examples. Explore Popular Topics Like Government, Sports, Medicine, Fintech, Food, More. , preprocessing, feature engineering, feature selection, model building and Hyperparameter tuning. Complete Guide to Parameter Tuning in Gradient Boosting (GBM) in Python. According to research by Microsoft professionals on the comparison of these two algorithms, LightGBM proved to be a step ahead of XGBoost. Can be used to iterate over parameter value combinations with the Python built-in function iter. Python codes 9 from the scikit-learn were used in this paper to run the ML methods, lightGBM and catboost 10 packages were installed in Python, and Stata was used to run the MLE model. pandas - Data structures built on top of numpy. best_params_” to have the GridSearchCV give me the optimal hyperparameters. This page contains parameters tuning guides for different scenarios. In the benchmarks Yandex provides, CatBoost outperforms XGBoost and LightGBM. According to (M. Python Wrapper for MLJAR API. 000 rows, as it tends to overfit for smaller datasets. This means that the same max_depth parameter can result in trees with vastly different levels of complexity depending on the growth strategy. com/blog/2016/02/complete-guide-parameter-tuning-gradient-boosting-gbm-python/ XGBoost 应该如何调参:https://www. notes about machine learning. I'm guessing there is some variables that you think you are setting but you're really not. Practical XGBoost in Python - 2. View ZHENG PAN’S profile on LinkedIn, the world's largest professional community. Little improvement was there when early_stopping_rounds was used. Awesome Data Science with Python. io/MachineLearning/ Logistic Regression Vs Decision Trees Vs SVM. Type of kernel. Tuning by means of these techniques can become a time-consuming challenge especially with large parameters. There entires in these lists are arguable. This should help you better understand the choices I am making to start off our first grid search. Happily, all of the code samples in the book run with Python 3. The latest stable release of Python is version 3. Inside the python macro, there is a snippet of random search code for you to. 
The core functions in XGBoost are implemented in C++, thus it is easy to share models among different interfaces. I have achieved a F1 score of 0. 大事なパラメタとその意味を調査. LightGBM - the high performance machine learning library - for Ruby. The scripts in this guide are written in Python 3, but should also work on Python 2. We performed machine learning experiments across six different datasets. 缺失模块。 1、请确保node版本大于6. 07/17/2019; 6 minutes to read; In this article. Parameters can be set both in config file and command line. A radial basis function kernel SVM that consists of penalty parameter C and kernel degree γ was considered in this study. When it is TRUE, it means the larger the evaluation score the better. If you want to contribute to this list (please do), send me a pull request or contact me @josephmisiti. Machine Learning for Developers. And on the right half of the slide you will see somehow loosely corresponding parameter names from LightGBM. Catboost is a gradient boosting library that was released by Yandex. stop callback. as well as some sort of grid/random search for parameter tuning. In this situation, trees added early are significant and trees added late are unimportant. The aim of hyper-parameter tuning is to search for the hyper-parameter settings that maximize the cross-validated accuracy. I want to give LightGBM a shot but am struggling with how to do the hyperparameter tuning and feed a grid of parameters into something like GridSearchCV (Python) and call the “. CAE Related Projects. 총 86개의 parameter에 대한 다음과 같은 내용이 정리되어 있고, 원하는 filter로 parameter를 선택해서 볼 수도 있습니다. x machine-learning lightgbm boosting or ask your own question. Hyper parameter tuning for lightgbm. If one parameter appears in both command line and config file, LightGBM will use the parameter in command line. Yes, H2O can use cross-validation for parameter tuning if early stopping is enabled (stopping_rounds>0). Although the LightGBM was the fastest algorithm, it also gained the lowest out of three GBM models. In this Applied Machine Learning Recipe, you will learn: How to tune parameters in R: Automatic tuning of Random Forest Parameters. We’ve waxed lyrical about the benefits of hackathons on many occasions – testing theories within a collaborative sprint – learning new things whilst trying to apply them to a real-world situation in the safety of a non-customer facing environment. you can use # to comment. According to research by Microsoft professionals on the comparison of these two algorithms, LightGBM proved to be a step ahead of XGBoost. It also learns to enable dropout after a few trials, and it seems to favor small networks (2 hidden layers with 256 units), probably because bigger networks might over fit the data. This affects both the training speed and the resulting quality. For windows, you will need to compiule with visual-studio (download + install can be done in < 1 hour) 2. liquidsvm/liquidsvm. Today we are very happy to release the new capabilities for the Azure Machine Learning service. best_params_" to have the GridSearchCV give me the optimal hyperparameters. catboost - CatBoost is an open-source gradient boosting on decision trees library with categorical features support out of the box for Python, R #opensource. Technical Skills: ★ Python (8 years), C++(5 years), bash, ★ Pandas, Pytorch, SKLearn, XGB, LightGBM, Catboost, keras, etc ★ Deep Learning, Computer Vision, Data Science, Machine learning. 
It is heuristic algorithm created from combination of: not-so-random approach; and hill-climbing; The approach is not-so-random because each algorithm has a defined set of hyper-parameters that usually works. XGBoost Parameter Tuning RandomizedSearchCV and GridSearchCV to the rescue. BSON is from the pymongo module. capper: Learns the maximum value for each of the columns_to_cap and used that as the cap for those columns. If one parameter appears in both command line and config file, LightGBM will use the parameter from the command line. Changes to the data preparation include scaling, cleaning, selection, compressing, expanding, interactions, categorical encoding, sampling, and generating. 大事なパラメタとその意味を調査. You can use # to comment. For example, the iterations parameter has the following synonyms: num_boost_round, n_estimators, num_trees. Machine Learning For Physicists… or ”Facility needs - or chances - seen from the other side” Arno Candel CTO H2O. All algorithms can be parallelized in two ways, using:. We’ve waxed lyrical about the benefits of hackathons on many occasions – testing theories within a collaborative sprint – learning new things whilst trying to apply them to a real-world situation in the safety of a non-customer facing environment. For windows, you will need to compiule with visual-studio (download + install can be done in < 1 hour) 2. Using Pandas-TD, you can fetch aggregated data from Treasure Data and move it into pandas. R Script with Plot Python Script Obviously the convergence is slow, and we can adjust this by tuning the learning-rate parameter, for example if we try to increase it into \$\gamma=. However if your categorical variable happens to be ordinal then you can and should represent it with increasing numbers (for example “cold” becomes 0, “mild” becomes 1, and “hot” becomes 2). However, the leaf-wise growth may be over-fitting if not used with the appropriate parameters. If no cell is tagged with parameters, the injected cell will be inserted at the top of the notebook. More than 5000 participants joined the competition but only a few could figure out ways to work on a large data set in limited memory. After creating a pandas DataFrame, you can visualize your data, and build a model with your favorite Python machine learning libraries such as scikit-learn, XGBoost, LightGBM, and TensorFlow. A dictionary containing each parameter and its distribution. stop callback. Now it is time to implement a gradient boosting model on the Titanic disaster dataset. Here is what my model got after training for 10000 steps with default train. It also learns to enable dropout after a few trials, and it seems to favor small networks (2 hidden layers with 256 units), probably because bigger networks might over fit the data. 2 2、在博客根目录(注意不是yilia根目录)执行以下命令: npm i hexo-generator-json-content --save 3、在根目录_config. The following is a basic list of model types or relevant characteristics. you can use # to comment. c) How to implement different Classification Algorithms using Bagging, Boosting, Random Forest, XGBoost, Neural Network, LightGBM, Decition Tree etc. There are many ways of imputing missing data - we could delete those rows, set the values to 0, etc. I used scikit-learn’s Parameter Grid to systematically search through hyperparameter values for the LightGBM model. Either way, this will neutralize the missing fields with a common value, and allow the models that can’t handle them normally to function (gbm can handle NAs but glmnet. ZHENG has 4 jobs listed on their profile. 
What really is Hyperopt? It is a Python library for hyper-parameter optimization (the quotation from its site did not survive extraction). And in the morning I had my results. To better understand what is going on, let's compute the gradients of the loss function for a single pair with respect to a single parameter; in the search space, each dictionary key is the name of a parameter. The simplest definition of hyper-parameters is that they are a special type of parameter that cannot be inferred from the data, and the concept is important because these values directly influence the overall performance of ML algorithms. I like to think of tuning as finding the best settings for a machine learning model. Normally, cross-validation supports hyper-parameter tuning: it splits the data set into a training set for learner training and a validation set for scoring. The Python package scikit-learn comes with an automated implementation of grid search with cross-validation (and, since version 0.17, a VotingClassifier for ensembling). Python and its libraries, like NumPy, SciPy, scikit-learn, and Matplotlib, are widely used in data science and data analysis; the scripts in this guide are written in Python 3, but should also work on Python 2.
To understand the parameters, we should first understand, at least at a very high level, how XGBoost and LightGBM work. One thing that can be confusing is the difference between xgboost, lightGBM, and gradient boosting decision trees (GBDTs) in general. LightGBM uses the leaf-wise (best-first) tree growth algorithm, while many other popular tools, such as XGBoost, use depth-wise tree growth; compared with depth-wise growth, the leaf-wise algorithm can converge much faster. The two libraries have similar parameters, several of which have aliases, and when using config files one line can contain only one parameter. For LightGBM, application (default=regression) admits many alternatives, including different regression loss functions, binary (binary classification), and multiclass for classification; boosting (default=gbdt, standard decision-tree boosting) has the alternatives rf (RandomForest), goss, and dart, where DART [1] is an interesting alternative. I had the opportunity to start using the xgboost machine learning algorithm: it is fast and shows good results. XGBoost is a hometown hero for Seattle data analysts, having come out of a dissertation at the University of Washington. Fine-tuning certain model parameters is all the better, but that is not the goal of this study.
If linear regression was a Toyota Camry, then gradient boosting would be a UH-60 Blackhawk Helicopter. The data contains 492 frauds out of 284,807 transactions. The 3 best (in speed, memory footprint, and accuracy) open-source implementations for GBMs are xgboost, h2o, and lightgbm (see benchmarks). Folks know that gradient-boosted trees generally perform better than a random forest, although there is a price for that: GBTs have a few hyper-parameters to tune, while random forest is practically tuning-free. Even after all of your hard work, you may have chosen the wrong classifier to begin with. One approach is to let each optimizer run with its default parameters first, select which optimizer was the best, and only then move on to tuning that optimizer's own parameters. Tuning hyper-parameters with grid search is one common but time-consuming task; there are now also a number of Python libraries that make implementing Bayesian hyper-parameter tuning simple for any machine learning model, and Tune supports any deep learning framework, including PyTorch, TensorFlow, and Keras. For text features, the vectorizer has a lot of parameters, the most significant of which include ngram_range, which I specify in the code as (1, 3).
A thorough hyper-parameter tuning process will first explore the structural parameters, finding the most effective number of rounds at an initial high learning rate, then seek the best tree-specific and regularisation parameters, and finally re-train the model with a lower learning rate and a higher number of rounds. I want to give LightGBM a shot but am struggling with how to do the hyper-parameter tuning: feed a grid of parameters into something like GridSearchCV (Python) and call ".best_params_" to have GridSearchCV give me the optimal hyper-parameters.
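Below is a minimal sketch of that GridSearchCV-over-LightGBM workflow (assuming lightgbm and scikit-learn are installed; the dataset, grid values, and scoring metric are illustrative, not taken from the original text):
# Illustrative sketch: grid search over LightGBM hyper-parameters.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

param_grid = {
    "num_leaves": [15, 31, 63],      # the main lever for leaf-wise growth
    "learning_rate": [0.05, 0.1],
    "n_estimators": [100, 200],
}

search = GridSearchCV(LGBMClassifier(), param_grid, cv=3, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_, search.best_score_)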
|
2019-12-13 13:11:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21138399839401245, "perplexity": 3140.613798763488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540555616.2/warc/CC-MAIN-20191213122716-20191213150716-00301.warc.gz"}
|
https://codereview.stackexchange.com/questions/277186/printing-subarrays/277304
|
# Printing subarrays ⚡
I was trying to print all subarrays of an array in quadratic time. Here is my code:
#include <iostream>
#include <vector>
int main()
{
std::vector<int> v;
int N;
std::cin >> N; //size of array
for (int i = 0; i < N; i++) {
int x;
std::cin >> x;
v.push_back(x);
}
int j = v.size() - 1;
std::vector<int> l;
do {
l.push_back(v[j]);
if (j == 0) {
int k = l.size() - 1;
do {
std::cout << l[k] << " ";
if (k == 0) {
l.pop_back();
if (l.empty()) {
break;
}
std::cout << '\n';
k = l.size() - 1;
} else {
k--;
}
} while (k >= 0);
v.pop_back();
if (v.empty()) {
std::cout << '\n';
break;
}
std::cout << '\n';
j = v.size() - 1;
} else {
j--;
}
} while (j >= 0);
}
What is the time complexity of this code? Is it more efficient than O(n^3)?
• Why do you need v if you already have a? Jun 9 at 20:14
• This method of printing subarrays is a little faster than the general method of printing subarrays using three nested loops. Jun 10 at 8:32
• I was asking why you need 2 vectors, v and a? They are both exactly the same, or have I missed something? Jun 10 at 10:14
• I have edited my code according to your comment. Can you tell what the time complexity of my code is? Jun 10 at 11:34
• Any restrictions on subarrays? Like contiguity? Jun 10 at 12:51
The code you have written looks very obfuscated. For example, the outer do-loop does two things: it copies elements from v to l (in reverse order), and when it finished doing that, the next iteration will start the inner loop that prints a subarray (reversing it again), and then it resets j so the outer loop will start from scratch with a v that has one less element. The loop indices also go backwards. All variable names are only a single letter long. This makes the code very hard to read, not just for others but also for yourself in the future.
Use variable names that clearly indicate what the variable is used for. They don't have to be overly long; concise names are preferred over verbose ones. Only use one-letter names for very common things, like i for a loop index, x/y/z for coordinates, or n for a count of things.
Rewrite the loops so it becomes much more clear what is going on. Make use of the fact that you can copy whole std::vectors in one go, and create helper functions where appropriate. So for example:
static void print(const std::vector<int>& array) {
for (auto& item: array) {
std::cout << item << ' ';
}
std::cout << '\n';
}
...
std::vector<int> array;
...
while (!array.empty()) {
auto subarray = array;
while (!subarray.empty()) {
print(subarray);
subarray.erase(subarray.begin()); // pop_front()
}
array.pop_back();
}
# Efficiency
Is it more efficient than O(n^3)?
Not unless it has a bug. If you print out all the possible contiguous subarrays of a given array of length $N$, you are printing on the order of $O(N^3)$ elements. The question is then: is it less efficient than $O(N^3)$? This means looking carefully at hidden costs from manipulating the std::vectors, as not all operations on them are $O(1)$. There is, in fact, some extra cost to filling the vector l the first time, as it needs to reallocate memory multiple times and copy elements. You can avoid that by calling l.reserve(N) before entering the outer do-loop, but overall this does not influence the total complexity.
Note that while your code might have the best possible time complexity, that does not mean it is the most efficient way to do this. In particular, a lot of time is spent copying v into l. You don't need to do that; you can write your code such that you only need the original array a, and just print its elements in the right order.
• DO NOT edit your code after posting. Read the rules! Jun 11 at 5:25
There is also an recursive solution:
#include <iostream>
#include <string> // needed for std::string and std::to_string
#include <vector>
using namespace std;
void printSubArray(const vector<int>& input, int currIndex){
string result("");
int length=input.size();
for (int i = currIndex; i <length ; i++){
result+=to_string(input[i]) + " ";
cout<<result<< '\n';
}
if(currIndex<length-1){
printSubArray(input, currIndex+1);
}
}
int main()
{
vector<int> v;
int N;
cin >> N;
for (int i = 0; i < N; i++) {
int x;
cin >> x;
v.push_back(x);
}
printSubArray(v, 0);
return 0;
}
This can also be turned into a solution with 2 loops, so the complexity should be O(n^2). Also, since every array of size n has (n^2+n)/2 subarrays, this is the number of arrays to be printed, and the complexity is O(n^2).
• The number of loops does not necessarily correspond with the complexity. Regardless, It can't be $O(n^2)$ since you need to print $O(n^3)$ elements to cover all possible contiguous subarrays. Jun 11 at 21:32
• @G. Sliepen No, 2 loops; if you want, you can rewrite this recursive solution as a solution with 2 loops. Jun 11 at 21:38
• @G. Sliepen No, this is exactly the working recursive solution available on the internet; I have just written it in C++. Jun 11 at 21:42
• Ah, but the string result can contain multiple numbers. Printing a string takes time that is proportional to its length, so you have to take that into account for the time complexity as well. Jun 11 at 22:22
• All possible subarrays can be printed using one loop. Jun 12 at 1:40
All possible subarrays can be printed using one loop:
#include <iostream>
#include <vector>
int main()
{
std::vector<int> array;
int N;
std::cin >> N; //size of array
for (int i = 0; i < N; i++) {
int x;
std::cin >> x;
array.push_back(x);
}
int i = array.size() - 1;
std::vector<int> subarray=array;
do {
std::cout << array[array.size()-i-1] << " "; // index into array: subarray only tracks how many elements remain, and indexing subarray here would read past its end after the first pop_back()
if (i == 0) {
subarray.pop_back();
if (subarray.empty()) {
array.pop_back();
subarray=array;
}
std::cout << '\n';
i = subarray.size() - 1;
} else {
i--;
}
} while (i >= 0);
}
• This is not a review of the code in the original question. And again, time complexity is not the same as the number of loops. By trying to cram as much as possible into one loop, you have created hard to read code. Jun 13 at 17:20
|
2022-08-19 15:32:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2832357585430145, "perplexity": 2157.034538680405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573699.52/warc/CC-MAIN-20220819131019-20220819161019-00172.warc.gz"}
|
https://forum.bionicturtle.com/threads/p1-ch16-hull-4-22.23185/
|
# P1.Ch16 Hull 4.22
#### thanhtam92
##### Active Member
Hull.4.22:
A 5-year bond with a yield of 11% (continuously compounded) pays an 8% coupon at the end of
each year.
What is the bond’s price?
What is bond’s duration?
Instead of setting up the CF table as shown in the Answer, I am trying to use my calculator to calculate the price given a small change in yield, and apply the formula D = (-ΔB/B)/Δy. However, I did not get 4.256 as the duration, or even the right bond price.
I am using HP12C and the setup is below
n = 5, i = 11%, PMT = 8, FV = 100, PV = 92.165
n = 5, i = 10.8%, PMT = 8, FV = 100, PV = 92.81
D = (-(92.165-92.81)/92.165)/0.2% = 3.49 years
Can someone please help me to point out where I am getting this wrong?
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi @thanhtam92 Your prices assume annual compound frequency, so to use your approach we'd want to translate the 11.0% (with CC) into exp(11%)-1 = 11.628% per annum with annual compound frequency. Note that we can confirm: n = 5, i = 11.628%, PMT = 8, FV = 100 and → CPT 86.80. Then, if we use a shock of 20 bps, Δy = 0.20%, the effective duration is given by:
($87.466 − $86.143)/(0.2% × 2) × 1/$86.801 = 3.8127 years, is what I get.
Okay, but that is effective duration, which approximates modified duration. But Hull's solution is for Mac duration (aka, weighted average maturity) such that 3.8127 × (1 + 11.628%) = 4.256 years ... and we have reconciled to his answer. It's instructive; we learn at least three things:
• different compound frequencies can be resolved if we translate
• effective duration is an approximation of modified duration (see other forum conversations: it retrieves a secant that approximates the tangent; the tangent's slope is the "true" dollar duration)
• we can translate back and forth between modified and Mac duration with mod D = Mac D/(1 + y/k). I hope that's helpful,
#### thanhtam92
##### Active Member
thanks a lot @David Harper CFA FRM. I was not aware that we have to convert to a discrete compounding frequency to use the calculator. And in the exam, would it clarify which duration we should calculate? Or do we just assume it is Mac duration unless the security has an embedded option?
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi @thanhtam92 You don't necessarily need to translate between compound frequencies. Hull's problem 4.22 can be (easily) solved for the answer he seeks ($86.80), but you can't use the TVM worksheet (N, I/Y, PV, PMT, and FV) because those keys presume discrete periods; I was just responding to your approach using the TVM. Re: duration: yes, the exam should be specific, although (i) often the context implies the duration wanted and (ii) often the difference is not material; we've really pushed a lot of feedback up on these points. GARP has gotten a lot better about choices (A, B, C, D) that are not proximate to each other, such that a typical duration Q&A won't "break" based on the Mac/modified duration choice. Most of our applications (e.g., estimate price change given yield shock) want the modified duration, where the effective duration is a reliable approximation (of the modified duration). In exam questions, the appearance of Mac duration (aka, weighted average maturity) is typically out of convenience, when the question wants the duration of a T-year zero-coupon bond. That's popular because we know the Mac duration is T.0 years and the modified duration is T/(1+y/k). But the question has the burden to be specific. Thanks,
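A minimal Python sketch (not from the thread) that reproduces Hull's numbers directly from the cash flows under continuous compounding:
# Hull 4.22: 5-year bond, 8% annual coupon, 11% continuously compounded yield.
import math

y = 0.11
cashflows = [(t, 8.0) for t in range(1, 5)] + [(5, 108.0)]  # coupons + face at t=5

price = sum(cf * math.exp(-y * t) for t, cf in cashflows)
mac_duration = sum(t * cf * math.exp(-y * t) for t, cf in cashflows) / price

print(f"{price:.2f}")         # 86.80
print(f"{mac_duration:.3f}")  # 4.256 years (Macaulay duration)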
|
2022-06-30 23:08:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5852785110473633, "perplexity": 3704.4782700790424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00663.warc.gz"}
|
http://www.absoluteastronomy.com/topics/Differential_(mathematics)
|
# Differential (mathematics)
In mathematics, the term differential has several meanings.
## Basic notions
• In calculus, the differential represents a change in the linearization of a function.
• In traditional approaches to calculus, the differentials (e.g. dx, dy, dt, etc.) are interpreted as infinitesimals. Although infinitesimals are difficult to give a precise definition, there are several ways to make sense of them rigorously.
• The differential is another name for the Jacobian matrix of partial derivatives of a function from Rn to Rm (especially when this matrix is viewed as a linear map).
• More generally, the differential or pushforward refers to the derivative of a map between smooth manifolds and the pushforward operations it defines. The differential is also used to define the dual concept of pullback.
• Stochastic calculus provides a notion of stochastic differential and an associated calculus for stochastic processes.
• The integrator in a Stieltjes integral is represented as the differential of a function. Formally, the differential appearing under the integral behaves exactly as a differential: thus, the integration by substitution and integration by parts formulae for the Stieltjes integral correspond, respectively, to the chain rule and product rule for the differential.
## Differential geometry
The notion of a differential motivates several concepts in differential geometry (and differential topology).
• Differential forms provide a framework which accommodates multiplication and differentiation of differentials.
• The exterior derivative is a notion of differentiation of differential forms which generalizes the differential of a function (which is a differential 1-form).
• Pullback is, in particular, a geometric name for the chain rule for composing a map between manifolds with a differential form on the target manifold.
• Covariant derivatives or differentials provide a general notion for differentiating vector fields and tensor fields on a manifold, or, more generally, sections of a vector bundle: see Connection (vector bundle). This ultimately leads to the general concept of a connection.
## Algebraic geometry
Differentials are also important in algebraic geometry, and there are several important notions.
• Abelian differentials usually refer to differential one-forms on an algebraic curve or Riemann surface.
• Quadratic differentials (which behave like "squares" of abelian differentials) are also important in the theory of Riemann surfaces.
• Kähler differentials provide a general notion of differential in algebraic geometry.
## Other meanings
The term differential has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex, the maps (or coboundary operators) d_i are often called differentials. Dually, the boundary operators in a chain complex are sometimes called codifferentials.
The properties of the differential also motivate the algebraic notions of a derivation and a differential algebra.
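For concreteness, the calculus notion in the first bullet can be written out explicitly; this LaTeX sketch is added for illustration and is not part of the original article:
% Differential of y = f(x): the principal (linear) part of the change in f.
dy = f'(x)\,dx
% Total differential of f(x, y), matching the Jacobian/linear-map view above:
df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy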
|
2018-11-14 03:23:53
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.844385027885437, "perplexity": 461.98902883349575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741578.24/warc/CC-MAIN-20181114020650-20181114042650-00339.warc.gz"}
|
https://www.physicsforums.com/threads/calculating-total-charge.404218/
|
# Homework Help: Calculating total charge
1. May 18, 2010
### kaiser0792
1. The problem statement, all variables and given/known data
The current at the terminals of an ideal basic circuit element is
i = 0, t < 0;
i = 20e^(-5000t) A, t $$\geq$$ 0.
Calculate the total charge (in microcoulombs) entering the element at its upper terminal.
2. Relevant equations
3. The attempt at a solution: I'm just starting a Circuit Analysis course next week and I'm looking ahead in the text, trying to hit the ground running. There are no sample problems that even give me a starting place?
2. May 18, 2010
### rock.freak667
Current can be expressed as the rate of flow of charge, so that i = dQ/dt.
So you can integrate over time and get the total charge. Though why is it t < 0?
3. May 18, 2010
### kaiser0792
I suppose that is just a way of saying that there was no current flowing before the reference time, t = 0.
4. May 18, 2010
### kaiser0792
I tried integrating and came up with -0.04e^(-5000t) + C coulombs.
The answer is supposed to be 4000 microcoulombs. My integration is a little rusty. Help?
5. May 19, 2010
### rock.freak667
The -0.04 should be -0.004, and remember your time is t ≥ 0, so you are really integrating from 0 to ∞ and need to compute
$$\left[ -0.004e^{-5000t} \right]_0 ^{\infty}$$
6. May 19, 2010
### kaiser0792
Thanks for the help. I knew the integral to be solved and the limits of integration; what I was missing was the negative exponent of e. You helped me, thank you.
7. May 19, 2010
### kaiser0792
Thanks rock.freak, I was overlooking the negative exponent of "e" when I was integrating.
You helped me, thanks. Sometimes you just need to bounce it off someone else.
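A quick check of the integral with SymPy (a sketch added for illustration; it is not part of the original thread):
import sympy as sp

t = sp.symbols('t', nonnegative=True)
i = 20 * sp.exp(-5000 * t)          # current in amperes for t >= 0
Q = sp.integrate(i, (t, 0, sp.oo))  # total charge in coulombs

print(Q)            # 1/250, i.e. 0.004 C
print(Q * 10**6)    # 4000 microcoulombs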
|
2018-12-15 14:30:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6433384418487549, "perplexity": 958.4925209125549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.91/warc/CC-MAIN-20181215131038-20181215153038-00599.warc.gz"}
|
https://mathematica.stackexchange.com/questions/268046/algorithm-permutations-signature
|
# Algorithm: Permutations & Signature
Theoretical side
Simple example: If I have two sets $$A_1=\{1,3\} ,A_2=\{2,3\}$$
and Permutations[{1, 2, 3}]= $$\left( \begin{array}{ccc} 1 & 2 & 3 \\ 1 & 3 & 2 \\ 2 & 1 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 2 \\ 3 & 2 & 1 \\ \end{array} \right)$$
Solution steps
1- Choose the rows whose sets $$A_1=\{1,3\} ,A_2=\{2,3\}$$ are partial and then count the number of rows $$D_2=\{1,3,2\},\{3,1,2\},\{2,3,1\},\{3,2,1\}$$
$$S_2=Count[D_2]/Count[Permutations[\{1, 2, 3\}]]=4/6$$
2- Delete $$D_2$$ from $$Permutations[\{1, 2, 3\}]$$ and choose from the triple order whose sets $$A_1=\{1,3\} ,A_2=\{2,3\}$$ are partial and then count the number of rows
$$D_3=\{1,2,3\},\{2,1,3\}$$
$$S_3=Count[D_3]/Count[Permutations[\{1, 2, 3\}]]=2/6$$
How can you write code that achieves this algorithm and maintains globality?
# Edit
sets are partial: the set is contained within the permutation, up to the given order
$$D_2$$:Such that
$$A_1=\{1,3\}\subset\{1,3,-\}$$ take $$\{1,3,-\}$$ $$A_1=\{1,3\}\subset\{3,1,-\}$$ take $$\{3,1,-\}$$ $$A_2=\{2,3\}\subset\{2,3,-\}$$ take $$\{2,3,-\}$$ $$A_2=\{2,3\}\subset\{3,2,-\}$$ take $$\{3,2,-\}$$
Then; We obtain $$D_2=\{1,3,2\},\{3,1,2\},\{2,3,1\},\{3,2,1\}$$
$$D_3$$:Such that
$$A_1=\{1,3\}\subset\{1,2,3\}$$ take $$\{1,2,3\}$$ $$A_2=\{2,3\}\subset\{2,1,3\}$$ take $$\{2,1,3\}$$
Then; We obtain $$D_3=\{1,2,3\},\{2,1,3\}$$
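A minimal Python sketch of the procedure as described (an illustration only, since the question asks for Mathematica code; it assumes "A_i is partial in a prefix of length k" means the first k entries of the permutation, taken as a set, contain A_i):
from itertools import permutations
from fractions import Fraction

universe = (1, 2, 3)
A = [{1, 3}, {2, 3}]

perms = list(permutations(universe))
total = len(perms)

remaining = perms[:]
for k in range(2, len(universe) + 1):
    # D_k: still-unclaimed permutations whose length-k prefix covers some A_i
    D_k = [p for p in remaining if any(a <= set(p[:k]) for a in A)]
    remaining = [p for p in remaining if p not in D_k]
    print(k, D_k, Fraction(len(D_k), total))
# k=2 yields four rows (S_2 = 4/6) and k=3 the remaining two (S_3 = 2/6);
# Fraction reduces these to 2/3 and 1/3.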
• Please clarify what you mean by "sets are partial". May 10 at 0:34
• @CarlWoll sets are partial: The set is contained within the second order $D_2$; $A_1=\{1,3\}\subset\{1,3,-\}$ or $A_1=\{1,3\}\subset\{3,1,-\}$ And, The set is contained within the thirds order $D_3$; $A_1=\{1,3\}\subset\{1,2,3\}$ May 10 at 0:46
|
2022-07-01 05:04:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791652321815491, "perplexity": 1214.563736425524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103920118.49/warc/CC-MAIN-20220701034437-20220701064437-00058.warc.gz"}
|
https://www.gamedev.net/blogs/entry/2261925-the-no-oop-back-to-basics-experiment/
|
# The No-OOP back to basics experiment
I'm about to embark on a journey where I let go of the typical way I program and try something radically different as an experiment. I've been programming for a little over 15 years now and I have grown tremendously, but recently I've felt like I haven't been as productive as I was in the past. This feeling has become stronger watching Casey Muratori code live on stream for his Handmade Hero project. The man is just non-stop pumping out code like there's no tomorrow, with seemingly no effort. What's funny is that I feel I was the same way when I was less experienced, but nowadays most of my time is spent thinking about what should go where, how objects should interact, what should be private or protected, whether I should have a move constructor or delete the copy and assignment constructors, etc. What I want to find out for myself with this experiment is whether I can be more productive in a large project over a reasonably long time following a new, or rather old, way of programming.
Every experiment needs an hypothesis!
I'm not going to be all scientific about this, but I have a hypothesis. As you're learning to program, you're unfamiliar with a lot of concepts and you make a lot of mistakes that you learn from. Often you don't really know what the correct solution is, while feeling like there is such a thing (there rarely is :(). You scour the internet and find all sorts of patterns and paradigms; you love them and start applying them everywhere. If they don't fit the problem, you make them fit, god damn it, silver bullets everywhere. Eventually you come to your senses and start applying things where they fit instead of forcing them onto everything. You're experienced now, know what to do and when to do it; you make fewer silly mistakes and you only go "fuck, did I write that?" 9 out of 10 times now. Mistakenly, you've attributed this success to the use of OOP, not the experience you've gained. That's what this experiment is about.
What I'll be doing and what not
I will program a complete game (the one I've wanted to make for so long now!) without the use of OOP and some typical C++ features.
If you're wondering what my definition of OOP is, I honestly don't know... or care much, but these are some of the things I'll be doing differently:
• Solve the problem at hand in the most simple possible way, no generalizations until they're actually needed.
• No more combining data with procedures, so no member functions.
• No more templates, I want my compile- and turnaround times to be as fast a possible.
• Start making use of some macros. I've been very opposed to using them before, but I will use them sparingly where they make sense.
• Everything is public, yikes!
• No constructors, destructors, copy/assign/move constructors. But what about RAII!? Well I did say radically different, so I'm going to give this a shot.
That's enough to give most programmers a heart attack. But I wonder: how many have actually tried programming in a different way after they've become experienced, and how many just got set in their ways? I love programming and I want to keep growing at it; that means sometimes you'll have to visit the no-fly zones, just to test whether your presumptions are (still) correct.
Interesting experiment. Speaking personally, years ago I was forced back to using C after many years of C++ and I found the only major issue for me was a lack of destructors for resource management. I ended up writing an outer and inner version of any function this affected, the outer to allocate resources, call the inner then free the resources. This allowed me to just return from the inner in the way I was used to in C++ but was a lot of boilerplate overhead. I'm sure in most ways you will find your approach very liberating. We all tend to get caught in over-engineering.
Thanks for sharing Aardvajk, how long did you have to use C for? I've already run into a case where I wanted to use a destructor, but I realized that it would only be used once and that I'd only have to manually call a shutdown procedure once. Normally I'd be wary of this as future me or someone else might forget that manual call, but I'm trying to let go of that and see how much of an actual problem it will become. Over-engineering is the deadlock equivalent of my brain :P
I have been using C for quite a while, mostly programming hobby-type stuff in C and ASM on really tiny devices with little to no processing power or memory: microcontrollers. The thing is, with C you don't necessarily need a shutdown procedure; it all depends on what you are doing. The one thing I tell everyone that is going to do things the non-OOP way, the C way, is the golden rule: what allocates the memory frees the memory. If you remember this, things become very smooth and less over-engineered. So if you have a function that allocates memory, say for a structure, make sure you have a function that frees the memory of that structure. What you get is a very smooth, flowchart-like experience. It also makes potential memory leaks easy to fix if you forget to call your function that frees up the memory you allocated. When I code in C I usually dedicate specific code files to specific tasks as well. So if you are creating a structure or a linked list or whatever, have a file that is dedicated to operating on that data, which would include manipulation, creation, and freeing of the memory. It makes for some smooth organization and prevents your code from cluttering up one file.
I will agree it can seem like a lot of boilerplate, but in reality I much prefer the non-object-oriented style of functional and procedural languages. It just makes more sense and, I feel, leads to better code design, as there are fewer over-engineering pitfalls you can corner yourself into by getting carried away. Much more linear and understandable.
I look forward to your results. I suspect you'll be fine in the shoot from the hip style because you're far more skilled than your 15 year ago self. As the project continues, things might get a little tangled. Fixing bugs or adding new features might be a little tougher, but that's a problem for future self to deal with. :)
What game are you testing this hypothesis with?
Thank you for the replies, lovely to see others showing interest.
Much more linear and understandable.
Yeah I agree with that. I often see deep inheritance trees in other code bases where methods are overloaded on various levels and it's a hell to figure out and work with.
What game are you testing this hypothesis with?
I'm making an online multiplayer arena-based action RPG, that's a mouthful :P. I have some experience with multiplayer games and I've written most of the fundamentals already, but I'm rewriting those now. It'll be fun to see how that turns out.
I'll be posting some of my experiences in the near future. I've already switched from CamelCase with the first letter capitalized to first letter lower case because that was quite handy for functions that construct an object so I can do something like:
Address address = address(...);
Which you'd normally be able to do with constructors, but now initialization is separated from allocation.
[quote name="Mussi" timestamp="1460581329"]Thanks for sharing Aardvajk, how long did you have to use C for? I've already run into a case where I wanted to use a destructor, but I realized that it would only be used once and that I'd only have to manually call a shutodwn procedure once. Normally I'd be wary of this as future me or someone else might forget that manual call, but I'm trying to let go of that and see how much of an actual problem it will become. Over-engineering is the deadlock equivalent of my brain :P[/quote] It was many years ago before home internet was standard and I only had an old PowerC compiler on a floppy disk. Kids today probably wouldn't even understand that sentence :). Wasn't for long to be fair. Had some interesting bugs when I forgot to prototype some functions returning pointers and the compiler was converting them to integers at the call point (shudder). A lot of the criticisms of C++ seem to me to be more criticisms of bad OOP. I find C++ to be very flexible in terms of programming paradigms. I fell into the bad OOP trap when I started with C++ and it took many years of experience to claw my way back out again.
I'm definitely more productive without OOP baggage. There's practically nothing of value in it. It's a cargo cult we're more or less forced into by trendy language designers and OS/library developers. C++ is not a good language (IMHO), it's merely a path of least resistance, like PHP for webdev. In those languages most of my code does not look very OO-ish.
I've been using C++ for a few years (for the first time since the 1990s, when it really sucked), basically just the "C with Objects" subset. Everything "public"; some macros; no Templates, no Exceptions. Initially I had a small inheritance hierarchy for sprite types but that was a mistake; it's all compositional now, sort of "Data Oriented Design" style. I'm also trying to avoid STL now; too many surprises, crazy iterator syntax, and horrible performance for Strings especially (but I do use them at startup and level loading, just not in gameplay). I do use methods where they make sense (a function clearly belonging to an object) but I don't think they're "good OOP", they're just one of the few organizational devices available in C++.
Solve the problem at hand in the most simple possible way, no generalizations until they're actually needed.
Started doing that about 10 years ago, reduces thinking about the future, but adds more work when making an extension. In all, I think it was a good move.
No more combining data with procedures, so no member functions.
OOP is not about member functions. You can write OO code in C too. Just use the first parameter as "this" variable which is a pointer to a structure, et voila object functions, just written differently, syntactically.
I don't think it's a bad pattern: your struct has meaning, and having a function to operate on a struct is a natural next step. The typical OO way of having member functions just makes this marriage more explicit.
No more templates, I want my compile- and turnaround times to be as fast a possible.
Templates have been invented to replace #define macros that contained code fragments. I would say a template is better than writing the same code 20 times, or doing #define macro expansion magic.
However, I would also argue that you have very few places with repetitive code, unless you over-engineer or over-generalize. Typical places are with standard containers as lists and sets, but those are in STL already.
Start making use of some macros. I've been very opposed to using them before, but I will use them sparingly where they make sense.
Never used macros much, as finding errors in them is a mess. I do use lots of static inline functions though, which the compiler then hopefully merges.
Everything is public, yikes!
Have been doing that for a lot of years too. If you use the rule "everybody can look at anything, but respect its owner", it works nicely (until you start using threads :p )
You'll come to see getters and setters mostly as useless clutter :p
No constructors, destructors, copy/assign/move constructors. But what about RAII!? Well I did say radically different, so I'm going to give this a shot.
Ha, fun :)
I started coding C again after 20 years of OO, and oh boy, it really takes getting used to that level of programming again :)
You need to write a lot more code than you would in OO.
I am less sure it counts as "more productive". Sure, you type more lines of code, so it feels as if you're making more progress, I can see that. On the other hand, all the thinking you do when you are more experienced is because you're more focused on what you need to make, rather than just randomly trying. You don't just code; you also make sure it works in the context you want, and you try to ensure it won't blow up if you change or extend the program tomorrow.
Arguably the latter time can be reduced, if you don't have plans in that direction.
In all, a good experiment, I think. At the very least you'll improve your understanding of what OO offers you, and maybe a few other things as well. good luck!
Thanks for sharing your experiences everyone.
OOP is not about member functions. You can write OO code in C too. Just use the first parameter as "this" variable which is a pointer to a structure, et voila object functions, just written differently, syntactically.
I agree, it's not about member functions. Conversely, you can write non OO code with the use of member functions. I don't think passing the first parameter as the this pointer, or even using the this pointer counts as OO. It's a bit of a vague term, so I try to avoid using it most of the time.
Never used macros much, as finding errors in them is a mess.
Yep I've seen this happen. I'm using some single line macros now, they're actually quite nice and can make things clearer and less error prone.
You'll come to see getters and setters mostly as useless clutter :P
I rarely use those so not much change for me there :P, but I am used to making things that belong to an object's internal state protected/private. So far I haven't encountered any problems with having everything public, but I haven't nearly written enough code yet to draw any conclusions.
You need to write a lot more code than you'd would in OO.
I am less sure it counts as "more productive".
This is an interesting point. On its own merit, I wouldn't count it as being more productive either, but what if feeling that you are actually has a positive influence on your productivity? I don't know, and I probably won't be able to tell even after having conducted my experiment. I don't even know if it'll result in more code.
|
2018-07-23 02:59:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3592822551727295, "perplexity": 1228.0578971745429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594790.48/warc/CC-MAIN-20180723012644-20180723032644-00088.warc.gz"}
|
http://lyrics.wikia.com/wiki/Idlewild:A_Distant_History
|
# Idlewild:A Distant History Lyrics
A Distant History
This song is by Idlewild and appears on the single You Held the World in Your Arms (2002) and on the album A Distant History: Rarities 1997-2007 (2007).
This fascination
where did it come from?
what does it mean
but you know, you'll never see
i overheard this conversation
as it started to belong to me
its only where your
waiting to see it through
and not understanding why i had to
be reminded that i'll tell you when i
waiting for longer than i had to
or could you just tell me now
couldn't you tell me, now?
even more money spent
nothing left to go round
even more money spent
nothing left to go round
all these descriptions
where did they come from?
do they describe what you mean
perfectly?
and how can i explain to you
how i know that
it's a natural order of the things that i believe
its only where your
waiting to see it through
not understanding why i had to
be reminded that i'll tell you when i
waiting for longer than i had to
or could you just tell me now?
couldn't you tell me, now?
even more money spent
nothing left to go round
even more money spent
nothing left to go round
nothing left to go round
nothing left to go round
nothing left to go round
la la la la la la la la la
la la la la la
la la la la la la la la la
la la la la la
la la la la la la la la la
la la la la la
la la la la la la la la la
la la la la la
la la la la la la la la la
la la la la la
la la la la la la la la la
la la la la la
la la la la la la la la la
la la la la la
la la la la la la la la la
la la la la la
Written by: Idlewild. Lyrics licensed by LyricFind.
|
2017-05-25 05:11:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8799680471420288, "perplexity": 13535.723912116022}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607998.27/warc/CC-MAIN-20170525044605-20170525064605-00412.warc.gz"}
|
http://stackoverflow.com/questions/16525117/how-do-i-add-my-project-to-sys-path-on-windows-trying-to-use-disqus-export-p/16525370
|
# How do I add my project to 'sys.path' on windows? (trying to use disqus_export.py)
I'm trying to use the command disqus_export.py from 'django-disqus' to export my comments from django.contrib.comments to disqus.
When I use disqus_export.py in my outer project folder (where manage.py is), I get the following:
Traceback (most recent call last):
  File "C:\Python27\Lib\site-packages\disqus\management\commands\disqus_export.py", line 5, in <module>
    from django.contrib import comments
  File "C:\Python27\lib\site-packages\django\contrib\comments\__init__.py", line 4, in <module>
    from django.contrib.comments.models import Comment
  File "C:\Python27\lib\site-packages\django\contrib\comments\models.py", line 1, in <module>
    from django.contrib.auth.models import User
  File "C:\Python27\lib\site-packages\django\contrib\auth\models.py", line 5, in <module>
    from django.db import models
  File "C:\Python27\lib\site-packages\django\db\__init__.py", line 11, in <module>
    if DEFAULT_DB_ALIAS not in settings.DATABASES:
  File "C:\Python27\lib\site-packages\django\utils\functional.py", line 184, in inner
    self._setup()
  File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 40, in _setup
    raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE)
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
As per the response in another similar question: "Check this: python manage.py shell, then import sys, then sys.path. Is the project directory on that path? Exit out. Enter the regular Python shell, python. Then import sys, sys.path. Is the project directory on that path?"
I did that and found that my project directory was returned by the first call but not the latter. However, the commenter who gave this instruction did not say what to do next, as the OP understood what he must do from there.
I assume I have to add my project directory to the latter sys.path, but I don't know how, so I'm hoping someone here can help me out.
-
sys.path is just a list. The following does what you'd expect:
sys.path.append('/path/to/project')
Alternatively, you could set the PYTHONPATH environment variable to your project directory (or edit it to include it, if it already exists).
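For example, a sketch covering both pieces of this particular error (the project path is the one mentioned in the comments below; 'mysite.settings' is a hypothetical module name, so substitute your project's own):
# Make the project importable for this interpreter session only.
import sys
sys.path.append(r'C:\Documents and Settings\Miles\Projects\mysite')

# The traceback actually complains about DJANGO_SETTINGS_MODULE, so that
# environment variable must be set too, before Django settings are touched.
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')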
-
Maybe I'm misunderstanding here. I went to 'Environment Variables' within Windows' 'System Properties' and created a 'System variable' called 'PYTHONPATH' with the value 'C:\Documents and Settings\Miles\Projects\mysite'. Nothing noticeable changed. I've previously only used the default 'path' 'System variable' so that I can run scripts/programs from the Windows 'Command Prompt', and I've created 'System variable's called 'PYTHON_HOME'(value:'\path\to\python27') and 'projects' in order to use them within the default 'path' 'System variable' like '%VARIABLE_NAME%\path\to\script_folder;'. – Miles Bardan May 14 at 12:56
I also, first, used 'sys.path.append('C:\Documents and Settings\Miles\Projects\mysite')', which worked within that python shell instance, but I don't know how to save the changes('sys.path.save()' and 'sys.path.write()' didn't work, for example...). I try searching the answers for these questions but I really can't find anything that helps me. – Miles Bardan May 14 at 12:57
You can't (really) make permanent changes to sys.path from within Python; if you're going that route, you should add the sys.path.append line to every source file that should know about it. I can't really help you with environment variables in Windows, because I haven't really touched that OS in about a decade. It sounds like you did every right, though. Could you verify that you can indeed see the PYTHONPATH variable from within a command prompt (echo %PYTHONPATH%)? – Cairnarvon May 14 at 17:15
Yes, I can verify that I can echo %PYTHONPATH% from within the command prompt. I can also now see it on the sys.path from within python. Thanks very much for your assistance, it has been much appreciated. Unfortunately I am still getting the same error though, so back to the drawing board on that one... – Miles Bardan May 15 at 13:07
|
2013-12-06 02:08:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42814597487449646, "perplexity": 5235.2809040902475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163049020/warc/CC-MAIN-20131204131729-00063-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://learnzillion.com/resources/72203-rewrite-an-expression-to-understand-how-the-quantities-are-related-7-ee-a-2
|
# Rewrite an expression to understand how the quantities are related (7.EE.A.2)
Understand that rewriting an expression in different forms in a problem context can shed light on the problem and how the quantities in it are related. For example, a + 0.05a = 1.05a means that "increase by 5%" is the same as "multiply by 1.05".
|
2019-12-07 10:01:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110566973686218, "perplexity": 563.5036407554514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540497022.38/warc/CC-MAIN-20191207082632-20191207110632-00046.warc.gz"}
|
https://www.physicsforums.com/threads/differential-equations-mixing-problem.155795/
|
# Differential equations - mixing problem
A room containing 1000 cubic feet of air is originally free of carbon monoxide. Beginning at time t=0 cigarette smoke containing 4 percent carbon monoxide is blown into the room at 0.1 ft^3/min, and the well-circulated mixture leaves the room at the same rate. Find the time when the concentration of carbon monoxide in the room reaches 0.012 percent.
rate = rate in - rate out ?
the 4% carbon monoxide part is really throwing me off. i don't exactly know what to do with this number. but other than that, i should be alright.
CO enters : 0.04 x 0.1 OR 0.1, i'm not sure whether to factor in the 4% CO
CO leaves : 0.04 x 0.1 S(t)/1000 OR 0.1 s(t)/1000
i need to figure that part out.. after that, everything i can handle. can someone please help me with the 4% carbon monoxide part
okay so i tried working it out with the 0.04 factored in as shown above... and at the end i end up taking ln of a negative number, so i guess that is wrong.
so now i am doing it without the 0.04. if i do it without the 0.04, then i do not understand the concentration part. it says to find the time when conc is 0.012 %, so set c(t)=0.00012 OR should i be factoring in 4% CO somewhere still. i am really confused
Pyrrhus
Homework Helper
Here's a hint: make M(t) be the quantity of carbon monoxide in the room at any time, and then the concentration is given by C(t) = M(t)/1000.
i have used, s(t) = amount of CO
c(t)=s(t)/1000
so i have rate in as: 0.04 x 0.1
rate out: 0.1 s(t)/1000
i have modified my rate in and rate out from when i first posted the eq'n as i don't believe i have to add the 4% CO factor twice. tell me what you think now.
i have solved this ALL the way to plugging in C(t)
i plugged in c(t) as 1.2 x 10^-4
my eq'n is
c(t) = 4x 10^-10 (1-e^(1.0x10^-4)t)
so pluggin in c(t)
1.2 x 10^-4 = 4x 10^-10 (1-e^(1.0x10^-4)t)
solving for t
300000 = 1-e^(1.0x10^-4)t
299999 = -e^(1.0x10^-4)t
so.. once again. i am stuck.
here is all my work.
rate in: 0.04 x 0.1
rate out: 0.1 x s(t)/1000
s(0) = 0
s'(t) = 0.004 - 1.0x10^-4 s(t)
s'(t) + 1.0x10^-4 s(t) = 0.004
solving diff eq'n:
a(t) = 1.0x10^-4, b(t)=0.004
using formula: u(t) = exp(integ(a(t)dt))
u(t) = exp(integ(1.0x10^-4 dt))
u(t)=exp(1.0x10^-4 t)
using formula: d/dt (u(t) s(t) ) = u(t)b(t)
d/dt ( (e^(1.0x10^-4 t )) s(t) ) = (e^(1.0x10^-4 t)) x 0.004
(e^(1.0x10^-4 t )) s(t) = integ (0.004(e^(1.0x10^-4 t))dt)
(e^(1.0x10^-4 t )) s(t) = 0.004(1.0x10^-4)(e^(1.0x10^-4 t) + C
s(t) = [(4x10^-7) (e^(1.0x10^-4 t)) + C]/(e^(1.0x10^-4 t))]
s(t) = 4x10^-7 + Ce^(1.0x10^-4 t)
sub s(0)=0
0 = 4x10^-7 + C
C= -4x10^-7
so,
s(t)=(4x10^-7)(1-e^(1x10^-4 t))
c(t) = s(t)/1000
c(t) = (4x10^-10)(1-e^(1x10^-4 t))
find t when c(t) = 1.2 x 10^-4
1.2x10^-4 = (4x10^-10)(1-e^(1x10^-4 t))
300000 = 1-e^(1x10^-4 t)
299999 = -e^(1x10^-4 t)
stuck. any suggestions or any wrong steps???
Pyrrhus
Homework Helper
Use natural logarithm.
i end up ln-ing a negative..
can anyone find the mistake?
i have gotten some help from the instructor: he said the percentage is a percentage of the volume, so i guess that is the part that is wrong??? i'm not sure what he means by that.
Pyrrhus
Homework Helper
can anyone find the mistake?
i have gotten some help from the instructor: he said the percentage is a percentage of the volume, so i guess that is the part that is wrong??? i'm not sure what he means by that.
$$\frac{dm(t)}{dt} = 0.1 * 0.04 - 0.1 \frac{m(t)}{1000}$$
Maybe you will have less mistakes in your calculations if you solved it like this.
$$c(t) = \frac{m(t)}{1000}$$
$$1000\frac{dc(t)}{dt} = 0.1 * 0.04 - 0.1 \frac{1000c(t)}{1000}$$
Pyrrhus
Homework Helper
here is all my work.
(e^(1.0x10^-4 t )) s(t) = 0.004(1.0x10^-4)(e^(1.0x10^-4 t) + C
s(t) = [(4x10^-7) (e^(1.0x10^-4 t)) + C]/(e^(1.0x10^-4 t))]
s(t) = 4x10^-7 + Ce^(-1.0x10^-4 t)
The mistake lies in these steps. You forgot the negative.
Last edited:
i'm working it out your way now; the negative i forgot to type in, but i had it on paper, and it doesn't really make a difference, as when i am plugging in s(0)=0, C is still the same regardless of that.
Pyrrhus
Homework Helper
i'm working it out your way now; the negative i forgot to type in, but i had it on paper, and it doesn't really make a difference, as when i am plugging in s(0)=0, C is still the same regardless of that.
It DOES make a difference for the logarithm.
okay
so here is my work:
starting from pluggin in the concentration:
s(t) = (4x10^-7) - (4x10^-7)e^(-1.0x10^-4 t)
C(t) = s(t)/1000
c(t) = (4x10^-7)(1-e^(-1.0x10^-4 t)) / 1000
pluggin in c(t) = 1.2x10^-4
1.2x10^-4 = 4x10^-10(1-e^(-1.0x10^-4 t))
300000 = 1-e^(-1.0x10^-4 t)
299999 = -e^(-1.0x10^-4 t)
ln 299999 = ln -e^(-1.0x10^-4 t)
i am stuck again..
at the same place
Pyrrhus
Homework Helper
Well i decided to do the problem to see why you weren't getting it right. Anyway, i see another mistake: your integration is wrong. Btw, i get 30.05 as the answer. Is t in minutes? It looks awfully fast if it were in seconds.
$$\int e^{kx} dx = \frac{1}{k} e^{kx} + C$$
Last edited:
oh my god , thank you so much.
I have checked over my work so many times and still did not catch that!
reworking my solution as we speak and hopefully i get the same answer as you.
i got the same answer as you, thanks very much.
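For anyone checking the numbers, here is a quick numerical sketch in Python (my own, not part of the thread):
import math
rate_in = 0.1 * 0.04       # ft^3/min of CO entering the room
k = 0.1 / 1000.0           # outflow rate constant, 1/min
s_inf = rate_in / k        # steady-state CO volume = 40 ft^3
# c(t) = (s_inf/1000)*(1 - exp(-k*t)); solve c(t) = 0.00012 for t
t = -math.log(1 - 0.00012 * 1000.0 / s_inf) / k
print(t)                   # ~30.05, so t is in minutes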
|
2019-12-09 04:20:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6807335615158081, "perplexity": 2589.6738424388673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517557.43/warc/CC-MAIN-20191209041847-20191209065847-00159.warc.gz"}
|
https://www.physicsforums.com/threads/solving-an-equation-with-respect-to-y-where-y-is-twice-hard.734965/
|
# Solving an equation with respect to y, where y is twice [hard]
1. Jan 26, 2014
### Science4ver
1. The problem statement, all variables and given/known data
Given the following equation
X(y) = y/(t*sqrt(b^2+y^2)) - 1/p
How would I go solving that equation X(y) = 0 with respect to y?
3. The attempt at a solution
I can choose a common denominator called p*t*(b^2+y^2)
But I when end up with
(y*p - t*p*sqrt(y^2+b^2))/(p*t*(b^2+y^2)) = 0
How would you guys suggest I proceed from here in order to isolate y?
2. Jan 26, 2014
### Dick
Now that you have it over a common denominator, if f(y)/g(y)=0, then f(y)=0.
3. Jan 26, 2014
### Science4ver
That implies that I need to solve the equation
(y*p - t*p*(sqrt(y^2+b^2)) = 0
Which allows me to arrive at the solution (using my graphical calculator)
y = -b*p*sqrt(-1/(p^2-1))
But solution is suppose to be:
y = b/(sqrt(p/t +1) * sqrt(p/t-1))
I can't quite comprehend which step I need to use to arrive at that solution, because to the best of my knowledge there aren't any variables I can substitute in order to arrive at it.
Any ideas?
4. Jan 26, 2014
### Dick
You aren't going to get there by being sloppy and relying on a calculator.
There's an extra 'p' in that equation.
Extra 'p' here also, and what happened to the 't'? I think you should fix those up and figure out how your calculator found that expression before continuing.
5. Jan 26, 2014
### Science4ver
You are right. I see now the equation I am supposed to solve is y/(t*sqrt(y^2+b^2)) - 1 = 0
Is it correct now?
6. Jan 26, 2014
### Dick
No, the p is gone. Your first step of putting it over a common denominator is wrong. It happened when you expressed the term 1/p with that common denominator.
Last edited: Jan 26, 2014
7. Jan 26, 2014
### Science4ver
So I am lost here :( What would you suggest I do as a first step with the original equation? I can add 1/p to both sides, I see that, and next clear the denominator on both sides. But what next?
8. Jan 26, 2014
### Dick
The strategy is to do what you were doing. Express X(y) over a common denominator. But do it right.
9. Jan 26, 2014
### Science4ver
I get that now, but the only common denominator my pea-size brain can deduce is :)
p*t*(b^2+y^2)
But what I get from what you are saying is that denominator is wrong?
10. Jan 26, 2014
### Dick
I would use p*t*sqrt(b^2+y^2) as a common denominator. Actually that's what I thought you were using and just forgetting the 'sqrt' part. If you have two fractions a/b-c/d you can always use b*d as a common denominator. What do you get expressing a/b-c/d over a common denominator?
11. Jan 26, 2014
### Science4ver
Okay, glad I am not totally stupid then :)
So anyway, if I take the original equation: y/(t*sqrt(b^2+y^2)) - 1/p
and use the common denominator p*t*sqrt(b^2+y^2)
I arrive at
py/(p*t*sqrt(b^2+y^2)) - (t*sqrt(b^2+y^2))/(p*t*sqrt(b^2+y^2)) =
y/(t*sqrt(b^2+y^2)) - 1 = 0 ?
But if that's the right course of action I still have y twice, as y^2 and y, and no matter what I do I can't arrive at the result that
y = b/(sqrt(p/t +1) * sqrt(p/t-1)) :( So I understand I must be doing something wrong and there is possibly a term which can be rewritten to arrive at the right result for y, but I can't see which one :(
12. Jan 26, 2014
### Dick
Sorry, I've been assuming that some of the things you were writing were typos, when they are actually algebra mistakes. I revised the post you quoted. I'm asking you to show me how to put a/b-c/d over a common denominator.
13. Jan 26, 2014
### haruspex
You confused me by dropping the '0' at the end of the first of those two lines.
You have dropped a p in getting to the last line.
Don't collect everything over on the left like that. Leave it as py = (t*sqrt(b^2+y^2)).
Your next step is to get rid of the square root. How can you do that?
14. Jan 26, 2014
### Science4ver
But regarding the other expression, which I have now attempted 10 times: if I take my equation
X(y) = y/(t*sqrt(b^2+y^2)) - 1/p = 0
and use the common denominator p*t*(b^2+y^2)
I end up with the expression:
py/(pt*sqrt(b^2+y^2)) - (t*(b^2+y^2))/(p*t*(b^2+y^2)) = ?
I must have messed up somewhere :( ?
15. Jan 26, 2014
### Dick
a/b-c/d=(ad-bc)/bd is correct. You are messing up the second part. Actually, looking at it, it's not messed up - but you don't have a common denominator. Just apply the correct pattern with a=y, b=t*sqrt(b^2+y^2), c=1 and d=p.
Last edited: Jan 26, 2014
16. Jan 26, 2014
### Ray Vickson
$$\frac{y}{t\sqrt{b^2+y^2}}-\frac{1}{p} = 0 \Rightarrow \frac{y}{t\sqrt{b^2+y^2}} = \frac{1}{p}$$
Square both sides to get
$$\frac{y^2}{t^2(b^2+y^2)} = \frac{1}{p^2}$$
Solve for $y^2$; in other words, let $y^2 = z$ and solve for $z$ from the simple equation
$$\frac{z}{b^2 t^2 + t^2 z} = \frac{1}{p^2}$$
17. Jan 26, 2014
### Dick
Yes, I know there is an easier way to attack this. You can also do it by putting everything over a common denominator and going from there. I was trying to diagnose what was going so wrong with the OP's attempt to put things over a common denominator. Sometimes you have to do that. Here, it's optional.
18. Jan 27, 2014
### Science4ver
thanks, and then I should be able to arrive at the solution?
But if I solve the rewritten equation with respect to z I get z=-b^2*t^2/(t^2-p^2)?
I am no closer to y = b/(sqrt(p/t +1) * sqrt(p/t-1)), which is supposed to be the solution to the original equation.
Last edited: Jan 27, 2014
19. Jan 27, 2014
### haruspex
Yes you are... almost there in fact. Just a little more juggling.
20. Jan 27, 2014
### Science4ver
I got it now :D
Last edited: Jan 27, 2014
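For the record, a short sympy sketch checking the closed form (hypothetical, not part of the thread; it assumes p > t so the solution is real):
import sympy as sp
y, t, b, p = sp.symbols('y t b p', positive=True)
sol = sp.solve(sp.Eq(y / (t * sp.sqrt(b**2 + y**2)), 1 / p), y)
print(sol)  # expect [b*t/sqrt(p**2 - t**2)], which equals b/(sqrt(p/t + 1)*sqrt(p/t - 1))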
|
2017-10-16 22:32:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8134244680404663, "perplexity": 1369.0448887976474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820466.2/warc/CC-MAIN-20171016214209-20171016234209-00144.warc.gz"}
|
http://mathoverflow.net/questions/152081/does-base-extension-reflect-the-property-of-being-isomorphic
|
# Does base extension reflect the property of being isomorphic?
Let $L/K$ be a (separable?) field extension, let $A$ be a finite dimensional algebra over $K$, and let $M$ and $N$ be two $A$-modules. Let $A' = L \otimes_K A$ be the algebra given by extension of scalars, and let $M' = L \otimes_K M$ and $N' = L \otimes_K N$ be the $A'$-modules given by extension of scalars.
Does $M' \cong N'$ (as $A'$-modules) imply that $M \cong N$ (as $A$-modules)?
(This question is obviously related. Note that just as for that question it is easy to see that base extension reflects isomorphisms in the sense that if a map $f: M \rightarrow N$ has the property that $f' : M' \rightarrow N'$ is an isomorphism then $f$ is an isomorphism. This is asking about the more subtle question of whether it reflects the property of being isomorphic.)
I apologize if this is standard (I have a sinking suspicion that I've seen a theorem along these lines before), but I haven't been able to find it. There's a straightforward proof in the semisimple setting, but I have made no progress in the non-semisimple setting.
-
This has come up before mathoverflow.net/questions/28469/hilbert-90-for-algebras – David Speyer Dec 20 '13 at 17:08
I hope I'm not misunderstanding the question. Here goes:
We'll show that if $M,N$ are finite-dimensional over $K$, then they are isomorphic over $K$.
Think of the linear space $X=\mathrm{Hom}_{A}(M,N)$ as a variety over $K$. Inside $X$ look at the $K$-subvariety $X'$ of maps that are not isomorphisms $M \rightarrow N$. Now $X' \neq X$, because there is an $L$-point of $X$ not in $X'$. Therefore, over an infinite field $K$, there will certainly exist a $K$-point of $X$ that doesn't lie in the proper subvariety $X'$.
If $K$ is finite: $M,N$ are both $K$-forms of the same module $M'$ over $L$. The $L$-automorphisms of $M'$ are a connected group, because they amount to the complement of the hypersurface $X'$ inside the linear space $X$. So its Galois cohomology vanishes, thus the same conclusion.
-
I don't think it uses commutativity anywhere (?) We are just identifying $\mathrm{Hom}_A(M,N)$ with the linear subspace of $\mathrm{Hom}_K(M,N)$ which commutes with $A$. – Edgardo Dec 16 '13 at 23:54
Sorry, I was confused. – Noah Snyder Dec 17 '13 at 0:13
In the infinite field case, you're using the separable hypothesis, right? – Ben Wieland Dec 17 '13 at 0:42
I don't think so. It comes down to this: Take a polynomial $f \in K[x_1, \dots, x_n]$. If there exists $(a_1, \dots, a_n) \in L^n$ such that $f(a_1, \dots, a_n) \neq 0$, then also there exists $(b_1, \dots, b_n) \in K^n$ with $f(b_1, \dots, b_n) \neq 0$. The existence of $a_i$ means that $f$ is not identically vanishing. – Edgardo Dec 17 '13 at 0:44
Sorry, here it is: Modules that become isomorphic to $M$ over the algebraic closure of our finite field $K$ are classified by $H^1(G, \mathrm{Aut}(M'))$, where $M'$ is $M$ base-changed to the algebraic closure, and $G$ is the absolute Galois group of $K$. Now $\mathrm{Aut}(M')$ is the set of $\bar{K}$-points of a connected algebraic $K$-group, namely, the automorphism group of $M$ (considered as a $K$-variety). There is a theorem of Lang and Steinberg that says $H^1$ always vanishes in this setting. – Edgardo Dec 17 '13 at 2:57
Here's a counterexample to the same statement for infinite dimensional algebras:
Take $K=\mathbb{R}$, $L=\mathbb{C}$, $A=\mathbb{R}[x,y]/(x^2+y^2-1)$. Then $A$ is a Dedekind domain with class group cyclic of order 2, and $A'=A\otimes\mathbb{C}$ is a PID. We can take $M$ and $N$ to be non-isomorphic projective rank 1 modules over $A$, which both necessarily become free after tensoring with $\mathbb{C}$.
Explicitly, we can take $M=A$, $N=(x,y-1)\subset A$.
-
This algebra isn't finite dimensional, though. – Dag Oskar Madsen Dec 16 '13 at 21:34
I interpreted the question as asking for an algebra of finite Krull dimension. Perhaps I misunderstood. – Julian Rosen Dec 16 '13 at 21:41
I meant finite dimensional over the field. But it's helpful to see an infinite dimensional example too, so leave this up. – Noah Snyder Dec 16 '13 at 21:44
|
2016-02-11 06:58:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9375924468040466, "perplexity": 200.43882036736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161718.0/warc/CC-MAIN-20160205193921-00249-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://natron.readthedocs.io/en/v2.3.15/plugins/eu.cimg.CImgMatrix5x5.html
|
# Matrix5x5 node
This documentation is for version 1.0 of Matrix5x5 (eu.cimg.CImgMatrix5x5).
## Description
Compute the convolution of the input image with the specified matrix.
This works by multiplying each surrounding pixel of the input image with the corresponding matrix coefficient (the current pixel is at the center of the matrix), and summing up the results.
For example, the matrix
[-1 -1 -1]
[-1  8 -1]
[-1 -1 -1]
produces an edge detection filter (which is an approximation of the Laplacian filter) by multiplying the center pixel by 8 and the surrounding pixels by -1, and then adding the nine values together to calculate the new value of the center pixel.
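As an illustration only (plain NumPy/SciPy, not the CImg implementation used by the plugin):
import numpy as np
from scipy.ndimage import convolve
laplacian = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)
image = np.random.rand(64, 64)       # stand-in for one channel of the input
edges = convolve(image, laplacian)   # each output pixel is the weighted sum of its 3x3 neighborhood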
Uses the CImg library.
CImg is a free, open-source library distributed under the CeCILL-C (close to the GNU LGPL) or CeCILL (compatible with the GNU GPL) licenses. It can be used in commercial applications (see http://cimg.eu).
## Inputs

| Input | Description | Optional |
| --- | --- | --- |
| Source | | No |
## Controls

| Parameter / script name | Type | Default | Function |
| --- | --- | --- | --- |
| matrix51 | Double | 0 | Matrix coefficient. |
| matrix52 | Double | 0 | Matrix coefficient. |
| matrix53 | Double | 0 | Matrix coefficient. |
| matrix54 | Double | 0 | Matrix coefficient. |
| matrix55 | Double | 0 | Matrix coefficient. |
| matrix41 | Double | 0 | Matrix coefficient. |
| matrix42 | Double | 0 | Matrix coefficient. |
| matrix43 | Double | 0 | Matrix coefficient. |
| matrix44 | Double | 0 | Matrix coefficient. |
| matrix45 | Double | 0 | Matrix coefficient. |
| matrix31 | Double | 0 | Matrix coefficient. |
| matrix32 | Double | 0 | Matrix coefficient. |
| matrix33 | Double | 0 | Matrix coefficient. |
| matrix34 | Double | 0 | Matrix coefficient. |
| matrix35 | Double | 0 | Matrix coefficient. |
| matrix21 | Double | 0 | Matrix coefficient. |
| matrix22 | Double | 0 | Matrix coefficient. |
| matrix23 | Double | 0 | Matrix coefficient. |
| matrix24 | Double | 0 | Matrix coefficient. |
| matrix25 | Double | 0 | Matrix coefficient. |
| matrix11 | Double | 0 | Matrix coefficient. |
| matrix12 | Double | 0 | Matrix coefficient. |
| matrix13 | Double | 0 | Matrix coefficient. |
| matrix14 | Double | 0 | Matrix coefficient. |
| matrix15 | Double | 0 | Matrix coefficient. |
| Normalize / normalize | Boolean | Off | Normalize the matrix coefficients so that their sum is 1. |
| (Un)premult / premult | Boolean | Off | Divide the image by the alpha channel before processing, and re-multiply it afterwards. Use if the input images are premultiplied. |
| Invert Mask / maskInvert | Boolean | Off | When checked, the effect is fully applied where the mask is 0. |
| Mix / mix | Double | 1 | Mix factor between the original and the transformed image. |
|
2021-06-20 09:24:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5532465577125549, "perplexity": 12033.072104816316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487660269.75/warc/CC-MAIN-20210620084505-20210620114505-00072.warc.gz"}
|
https://peytondmurray.github.io/coding/embedding-html5-video-minimal-mistakes/
|
# Embedding HTML5 video in Github Pages
A while back, I wrote a pair of longer posts about beautifully rendering scientific data using blender. For that data I generated a number of short video clips showing how the magnetic moments in a material reorient themselves when an external magnetic field is applied. Initially I had a lot of trouble embedding the video in the blog; uploading to YouTube and embedding from there is bad because there's already native support for embedded videos with HTML5 - why should I have to muck about with YouTube to display a simple video? Also, nobody likes it when another video starts playing without being asked when the first is done, along with a bunch of recommendations on what to watch next.
It took me a little while to figure out, but in short, FFMPEG is the ideal tool for the job. Equally important is the choice of compression, giving up as little as possible in terms of quality in exchange for smaller file size. I found the best compromise to be the following:
ffmpeg -i input.mp4 -c:v libx264 -preset slow -pix_fmt yuv420p -an output.mp4
Let’s understand the arguments:
1. -i input.mp4: Specify the input file with -i.
2. -c:v is short for -vcodec, which itself is an alias for -codec:v. libx264, used to encode H.264 video, was the most common choice when I first published this post, and as I understand it there wasn't yet widespread support for H.265 at the time. However, H.265 offers potentially large savings (up to ~50%!!) in terms of filesize for comparable video quality. As long as it's supported, use libx265 (see the example command after this list).
3. -preset slow: This argument controls how quickly the video is encoded; the slower the encoding, the larger the compression ratio is. If the amount of time you spend encoding the video is no object, use veryslow. If, on the other hand, you’re in a hurry to publish the video, use ultrafast. The compression ratio will be worse, but the video encoding will be quick.
4. -pix_fmt yuv420p: Correctly specifying the pixel format, which controls how color image data is encoded, was critical for getting the embedded video to appear. yuv420p worked for me, but YMMV here - I'm pretty sure other options should work, but you need to make sure that the input video has a pixel format compatible with the output.
5. -an: Don’t encode any audio in the output file. If you need audio, use -c:a <whatever audio codec>.
6. output.mp4: At the end, specify the output file.
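For example, assuming your build of ffmpeg supports it, the H.265 version mentioned in point 2 should just be a codec swap (untested sketch):
ffmpeg -i input.mp4 -c:v libx265 -preset slow -pix_fmt yuv420p -an output.mp4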
Finally, embed the resulting output in your webpage:
<video autoplay="autoplay" loop="loop" width="800" height="450" codecs="h264" controls>
<source src="output.mp4" type="video/mp4">
</video>
And that’s it! If I’ve missed anything here, leave a comment below and I’ll incorporate it into the discussion here.
|
2020-08-12 11:32:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3337593376636505, "perplexity": 2235.507889907141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738892.21/warc/CC-MAIN-20200812112531-20200812142531-00269.warc.gz"}
|
http://math.stackexchange.com/questions/193606/detect-when-a-point-belongs-to-a-bounding-box-with-distances/209908
|
Detect when a point belongs to a bounding box with distances
I have a box with known bounding coordinates (latitudes and longitudes): latN, latS, lonW, lonE.
I have a mystery point P with unknown coordinates. The only data available is the distance from P to any point p. dist(p,P).
I need a function that tells me whether this point is inside or outside the box.
-
So you have no knowledge of the coordinates of $P$, but you always have the distance to $P$ available? – rschwieb Sep 11 '12 at 12:35
Ah, I think I now understand. You know the bounding latitudes and longitudes of a box. You have some mystery point $P$, that might be inside or outside the box, and you want to determine which. The only other data you have is the distance of $P$ from any point $p$. Is this right? – Matt Pressland Sep 11 '12 at 12:37
Yes that's it. It's a magic point! – Jordi Planadecursach Sep 11 '12 at 14:16
Is this supposed to be on a sphere, or on a flat plane? If on a sphere what is the definition of dist? – Maesumi Oct 9 '12 at 10:32
It is just a flat plane – Jordi Planadecursach Oct 9 '12 at 10:50
Once we know the coordinates of P, the problem reduces to a well-known one. To get the coordinates of $P=(x, y)$ we take three measurements. I have moved the coordinate system onto one corner and named the objects accordingly, with $a$ the horizontal side and $b$ the vertical side. In addition, we are going to find the coordinates of P in polar notation, with the distance $r$ and angle $\theta$.
Measuring $\vec{AP}$ we get the distance $r$. Measuring $\vec{BP}$ gives us the cosine and measuring $\vec{DP}$ gives us the sine, by means of the law of cosines.
$$\cos\theta = \frac{r^2+a^2-d_{BP}^2}{2 a r}$$ $$\sin\theta = \frac{r^2+b^2-d_{DP}^2}{2 b r}$$
So the location of P is
$$x = r \cos\theta = d_{AP} \frac{r^2+a^2-d_{BP}^2}{2 a r}$$ $$y = r \sin\theta = d_{AP} \frac{r^2+b^2-d_{DP}^2}{2 b r}$$
The point is inside if $x>=0$ and $x<=a$ and $y>=0$ and $y<=b$.
I have checked this with the following C# code
using System;

public static class Program
{
    static void Main(string[] args)
    {
        double a = 2;   // horizontal side
        double b = 1;   // vertical side
        Point A = new Point(0, 0);
        Point B = new Point(a, 0);
        Point D = new Point(0, b);
        for (double x = -5; x <= 5; x += 0.5)
        {
            for (double y = -5; y <= 5; y += 0.5)
            {
                Point P = new Point(x, y);
                double d_AP = A.DistanceTo(P);
                double d_BP = B.DistanceTo(P);
                double d_DP = D.DistanceTo(P);
                double r = d_AP;
                if (r == 0) continue;   // P coincides with corner A; trivially inside
                // law of cosines gives the direction of P as seen from A
                double cos = (a * a + r * r - d_BP * d_BP) / (2 * a * r);
                double sin = (b * b + r * r - d_DP * d_DP) / (2 * b * r);
                double x_P = r * cos;
                double y_P = r * sin;
                P.inside = (x_P >= 0 && x_P <= a) && (y_P >= 0 && y_P <= b);
                // verify that the reconstructed point matches the original
                Point Q = new Point(x_P, y_P);
                if (P.DistanceTo(Q) > 1e-6)
                {
                    Console.WriteLine("({0},{1}) - ({2},{3})", x, y, x_P, y_P);
                }
            }
        }
    }
}

public struct Point
{
    double x, y;   // private, not visible
    public bool inside;

    public Point(double x, double y)
    {
        this.x = x;
        this.y = y;
        this.inside = false;
    }

    public double DistanceTo(Point p)
    {
        return Math.Sqrt((p.x - x) * (p.x - x) + (p.y - y) * (p.y - y));
    }
}
and it all checks out ok. No need to check for signs and quadrants. It all works out cleanly.
-
There's a geometric way to do it, too.
Draw a rectangle and a point inside it. You can make four oriented triangles by connecting the point to the diagonals, and then uniformly drawing arrows on the edges so that they all "flow clockwise" or "flow counterclockwise".
Now if you imagine dragging the point over an edge, you'll see that three of the triangles retain their orientation, but the one whose edge has been crossed will flip orientation. If you go over a corner instead, two of the four triangles will flip orientation.
So, by uniformly representing these four triangles with vectors, you just have to check their cross products to see if they are all the same sign. If two disagree with the other two, you know the point lies on a diagonal outside. If one disagrees with the rest, it lies outside of one of the edges. If they all agree, the point is inside.
Here's the setup. Suppose you are given $(a_1,a_2),(b_1,b_2),(c_1,c_2),(d_1,d_2)$ as the corners of the rectangle (in cyclic order, say, clockwise), and $(e_1,e_2)$ is the point.
One triangle will have vector edges (in this order) $(a_1,a_2)-(e_1,e_2)$ and $(b_1,b_2)-(a_1,a_2)$. The other three will be: $(b_1,b_2)-(e_1,e_2)$ and $(c_1,c_2)-(b_1,b_2)$; $(c_1,c_2)-(e_1,e_2)$ and $(d_1,d_2)-(c_1,c_2)$; $(d_1,d_2)-(e_1,e_2)$ and $(a_1,a_2)-(d_1,d_2)$.
They've all been set up so that if the point is inside, then taking the cross product of the first vector with the second in any of these four pairs, they will all have the same sign (all positive or all negative).
Given any four points in cyclic order like this, you can fill out the four equations and check all their cross products.
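A small Python sketch of this test (my own naming, not from the answer):
def same_orientation(corners, e):
    # corners: [(a1,a2),(b1,b2),(c1,c2),(d1,d2)] in cyclic order; e: the query point
    signs = []
    for i in range(4):
        p, q = corners[i], corners[(i + 1) % 4]
        u = (p[0] - e[0], p[1] - e[1])            # first vector: corner minus query point
        v = (q[0] - p[0], q[1] - p[1])            # second vector: the edge
        signs.append(u[0] * v[1] - u[1] * v[0])   # z-component of the cross product
    # all same sign -> inside; one flip -> outside an edge; two flips -> outside past a corner
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)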
-
I realize it seems like a complex setup, but if you are given say 1000 rectangles, and you want to know if a given point is inside or outside of each of them, then a program does this pretty efficiently. Sorry if it doesn't fit your tools at hand: it's hard to match our answers to your resources, sometimes. – rschwieb Sep 10 '12 at 15:45
having e1,e2 is as easy as latS<e2<latN & lonW<e1<lonE, no need for cross product, am I missing anything? – Jordi Planadecursach Sep 10 '12 at 19:51
@JordiPlanadecursach I might agree, but the author mentions something about "lat" and "lon" not being available. Since I only have a vague notion of what both of you are thinking of, I just avoided it altogether. – rschwieb Sep 10 '12 at 20:13
I really like this, but my understanding is that we don't have $(e_1,e_2)$. We have some reference point $P$ that we do know the coordinates of, and we know the distance from $p=(e_1,e_2)$ to $P$, but not the values of $e_1$ and $e_2$. (I think - I'm still not completely sure I haven't misunderstood the question.) – Matt Pressland Sep 11 '12 at 10:20
@MattPressland Yeah, I agree. Let's see if we can get the OP to improve the question. – rschwieb Sep 11 '12 at 12:33
As I understand your question, it is somewhat similar to this: How to check if a point is inside a rectangle. I see that my answer (while it received no votes) may be correct and may be simply applied in your case.
-
To locate my answer, please do a find on my name after you have clicked the link. – Emmad Kareem Oct 4 '12 at 23:37
One (far-from-optimal, but straightforward) way to do it: since you have the box's coordinates available, you also presumably have the four corners of the box available. Now, knowing $P$'s distances from the two points along one edge lets you narrow down the point's location to one of two points $i_1$ and $i_2$ (since the two circles $|P-p_1|=r_1$ and $|P-p_2|=r_2$ have at most two intersection points). Similarly, knowing $P$'s distance from the two points along the other edge narrows down its location to one of two points $i_3$ and $i_4$. (If you really want, I can flesh this out with the equations for those two points - but it's straightforward to find them yourself; it's just a bit of algebra or alternately a bit of geometry) Now, the kicker: the two sets of two points can't be the same, because $i_2$ is the reflection of $i_1$ about the edge $p_1p_2$ and $i_3$ is the reflection of $i_4$ about a different edge $p_3p_4$. This means that you can compare to determine which of the points $i_1,i_2$ is identical to which point $i_3,i_4$ and that's enough to get your point precisely. Once you have that, the standard point-in-rectangle tests should suffice.
Note that for algorithmic reasons, once you have $i_1$ and $i_2$ you may want to perform the point-in-rectangle test on both of them; they can't both be in the rectangle, but it's possible that they're both out of the rectangle, in which case you don't have to bother going on and finding $i_3$ and $i_4$.
-
The distance measurement from any point gives you a circle around that point as a locus of possible positions of $P$. Make any such measurement from a point $A$. If the question is not settled after this (i.e. if the circle crosses the boundary of the rectangle), make a measurement from any other point $B$. The two intersecting circles leave only two possibilities for the location of $P$. If you are lucky, both options are inside or both outside the rectangle and we are done. Otherwise, a third measurement taken from any point $C$ not collinear with $A$ and $B$ will settle the question of the exact position of $P$ (and after that we easily see if $P$ is inside or not).
One may wish to choose the first point $A$ in an "optimal" fashion such that the probability of a definite answer is maximized. While this requires knowledge about some a priori distribution of where $P$ might be, the center of the rectangle seems like a good idea. The result is inconclusive only if the measured distance is between half the smallest side of the rectangle and half the diagonal of the rectangle.
-
Measure distances from the four corners of the rectangle.
Consider the four triangles formed from P and the four sides of the rectangle.
Apply the Law of Cosines to measure the angles of the four triangles next to the sides of the rectangle.
If any is larger than 90 degrees then you are outside. Otherwise you are inside.
EDIT 1:
Suppose your rectangle is $EFGH$ where the sides are $d(EF)=d(HG)=x$ and $d(FG)=d(EH)=y$.
Let the distance of $P$ from $E,F,G,H$ be $e,f,g,h$ respectively.
Now by Law of Cosines applied to angle $EHP=\alpha$ we have $\cos (\alpha) = (y^2+h^2-e^2)/(2yh)$. If the angle is larger than $90^\circ$ then $y^2+h^2-e^2<0$. We need to apply a similar check 8 times. If any of the expressions is negative then we are outside. Else we are inside (or on boundary). So you have to check the sign of 8 expressions.
If any of the following
$y^2+h^2-e^2$, $y^2+e^2-h^2$, $x^2+e^2-f^2$, $x^2+f^2-e^2$, $y^2+f^2-g^2$, $y^2+g^2-f^2$, $x^2+g^2-h^2$, $x^2+h^2-g^2$
is negative $P$ is outside.
EDIT 2:
In case efficiency is a consideration: to test that $P$ is inside, or on the border, checking with respect to 3 sides will do. So it takes 6 inequalities. For example
$y^2+h^2-e^2\ge 0$, and $y^2+e^2-h^2\ge 0$, and
$x^2+e^2-f^2\ge 0$, and $x^2+f^2-e^2\ge 0$, and
$y^2+f^2-g^2\ge 0$, and $y^2+g^2-f^2\ge 0$.
EDIT 3:
A similar approach will work for checking with respect to a convex polygon.
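A compact Python sketch of the 6-inequality version (the names are mine):
def inside_by_corners(x_side, y_side, e, f, g, h):
    # e, f, g, h: measured distances from P to corners E, F, G, H;
    # x_side = d(EF) = d(HG), y_side = d(FG) = d(EH)
    x2, y2 = x_side ** 2, y_side ** 2
    checks = (y2 + h * h - e * e, y2 + e * e - h * h,
              x2 + e * e - f * f, x2 + f * f - e * e,
              y2 + f * f - g * g, y2 + g * g - f * f)
    return all(c >= 0 for c in checks)   # True: inside or on the boundary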
-
can you translate that into a boolean equation ? – Jordi Planadecursach Oct 8 '12 at 20:33
I added a formulation. – Maesumi Oct 8 '12 at 22:17
You have the coordinates of the four corners of the box. I'll imagine that the box is situated so its sides are horizontal or vertical (this could easily be obtained by a rotation if the box wasn't originally like this). So you can also get the coordinates of:
$p_M$, the center of the box (inside the box),
$p_L$, the point directly to the left of the center so that the left side of the box bisects the line joining $p_M$ to $p_L$,
$p_R$, the point directly to the right of the center so the right side of the box bisects the line joining $p_M$ to $p_R$,
two further points $p_A$ and $p_B$ respectively above and below the box, so that the top/bottom of the box bisects the line joining $p_M$ to $p_A$ or $p_B$.
Now for your mystery point P you find
$t_M=d(p_M,P)$, $t_L=d(p_L,P)$, and similarly for $t_R,t_A,t_B$.
Now use the fact that the mystery point $P$ is on the (half plane to the) left of the right side iff it is closer to the point $p_M$ than it is to $p_R$, i.e. iff $t_M>t_R$. Similarly $P$ is to the right of the left side iff $t_M>t_L$, and $P$ is below the top of the box iff $t_M>t_A$, and finally $P$ is above the bottom of the box iff $t_M>t_B$.
We now have that p lies in the box iff each of the quantities $t_M-t_*$ is positive, where * is one of the four symbols $L,R,A,B$. For the function, define
$f(\mathrm{distances})=\min(t_M-t_L,\,t_M-t_R,\,t_M-t_A,\,t_M-t_B)$. Then $f>0$ iff $P$ lies in the interior of the box, while $f\ge 0$ iff $P$ lies in the interior or on the boundary of the box.
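As a sketch, the same test in Python (assuming a callable dist_to_P that returns the measured distance from a given point to P; the names are mine):
def inside_box(x_min, x_max, y_min, y_max, dist_to_P):
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0   # p_M, the center
    w, h = x_max - x_min, y_max - y_min
    t_M = dist_to_P((cx, cy))
    # p_L, p_R, p_A, p_B: reflections of the center across the four sides
    probes = [(cx - w, cy), (cx + w, cy), (cx, cy + h), (cx, cy - h)]
    return min(t_M - dist_to_P(q) for q in probes) >= 0   # >= 0: inside or on the boundary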
-
|
2014-10-25 17:56:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8345448970794678, "perplexity": 362.6472387269605}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648891.34/warc/CC-MAIN-20141024030048-00260-ip-10-16-133-185.ec2.internal.warc.gz"}
|