| column | type | min length | max length |
|---|---|---|---|
| url | string | 14 | 2.42k |
| text | string | 100 | 1.02M |
| date | string | 19 | 19 |
| metadata | string | 1.06k | 1.1k |
https://bastian.rieck.me/blog/posts/2017/aleph_homology_2_manifolds/
# Using Aleph to calculate the homology of 2-manifolds

Tags: programming, projects, research

In a previous article, I discussed the distribution of Betti numbers of triangulations of 2-manifolds. This article now discusses the code. Coincidentally, it also demonstrates how to use Aleph, my (forthcoming) library for exploring persistent homology and its uses. Aleph is header-only, so it can be readily deployed in your own projects. In the following, I am assuming that you have installed Aleph, following the official instructions.

There are several tutorials available that cover different parts of the persistent homology calculation process. For our purpose, viz. the calculation of homology of triangulated spaces, no large machinery is needed. We start with some includes and using directives:

    #include <aleph/topology/io/LexicographicTriangulation.hh>

    #include <aleph/persistentHomology/Calculation.hh>

    #include <aleph/topology/Simplex.hh>
    #include <aleph/topology/SimplicialComplex.hh>

    #include <iostream>
    #include <string>
    #include <vector>

    using DataType          = bool;
    using VertexType        = unsigned short;

    using Simplex           = aleph::topology::Simplex<DataType, VertexType>;
    using SimplicialComplex = aleph::topology::SimplicialComplex<Simplex>;

Nothing fancy happened so far. We set up a simplex class, which is the basic data type in most applications, as well as a simplicial complex, which is the basic storage class for multiple simplices. The only interesting thing is our choice of DataType and VertexType. Since our simplices need not store any additional data except for their vertices, we select bool as the data type in order to make the class smaller. This could potentially be achieved with EBCO as well, but I have not yet had time to test it adequately. In addition to the data type, we use unsigned short as the vertex type—the triangulations that we want to analyse only feature 9 or 10 vertices, so unsigned short is the best solution for storing vertex identifiers.

Next, we need some I/O code to read a lexicographic triangulation:

    int main( int argc, char* argv[] )
    {
      if( argc <= 1 )
        return -1;

      std::string filename = argv[1];

      std::vector<SimplicialComplex> simplicialComplexes;

      // Read all triangulations stored in the input file. The exact call form
      // of the reader is assumed here; the text below only states that
      // LexicographicTriangulationReader is being used.
      aleph::topology::io::LexicographicTriangulationReader reader;
      reader( filename, simplicialComplexes );

      for( auto&& K : simplicialComplexes )
      {
        K.createMissingFaces();
        K.sort();
      }
    }

Here, we used the LexicographicTriangulationReader, a reader class that supports reading files in the format defined by Frank H. Lutz. However, this format only provides the top-level simplices of a triangulation. Hence, for a 2-manifold, only the 2-simplices are specified. For calculating homology groups, however, all simplices are required. Luckily, the SimplicialComplex class offers a method for just this purpose. By calling createMissingFaces(), all missing faces of the simplicial complex are calculated and added to the simplicial complex. Afterwards, we use sort() to sort simplices in lexicographical order. This order is required to ensure that the homology groups can be calculated correctly—the calculation routines assume that the complex is being "filtrated", so faces need to precede co-faces.
Having created and stored a list of simplicial complexes, we may now finally calculate their homology groups by adding the following code after the last for-loop:

    for( auto&& K : simplicialComplexes )
    {
      bool dualize                    = true;
      bool includeAllUnpairedCreators = true;

      auto diagrams
        = aleph::calculatePersistenceDiagrams( K,
                                               dualize,
                                               includeAllUnpairedCreators );
    }

This code calls calculatePersistenceDiagrams(), which is usually employed to calculate, well, a set of persistence diagrams. The two flags dualize and includeAllUnpairedCreators also deserve some explanation. The first flag only instructs the convenience function as to whether the boundary matrix that is required for (persistent) homology calculations should be dualized or not. Dualization was shown to result in faster computations, so of course we want to use it as well. The second parameter depends on our particular application scenario. Normally, the persistent homology calculation ignores all creator simplices in the top dimension. The reason for this is simple: if we expand a neighbourhood graph to a Vietoris–Rips complex, the top-level simplices are an artefact of the expansion process. Most of these simplices cannot be paired with higher-dimensional simplices, hence they will appear to create a large number of high-dimensional holes. The persistent homology calculation convenience function hence ignores those simplices for the creation of persistence diagrams by default. For our application, however, keeping those simplices is actually the desired behaviour—the top-level simplices are well-defined and an integral part of the triangulation.

As a last step, we want to print the signature of Betti numbers for the given simplicial complex. This is easily achieved by adding a nested loop at the end of the for-loop:

    for( auto&& diagram : diagrams )
    {
      auto d = diagram.dimension();
      std::cout << "b_" << d << " = " << diagram.betti() << "\n";
    }

This will give us all non-zero Betti numbers of the given triangulation. The output format is obviously not optimal—in a research setting, I like to use JSON for creating human-readable and machine-readable results. If you are interested in how this may look, please take a look at a more involved tool for dealing with triangulations. Be aware, however, that this tool is still a work in progress.

This concludes our little foray into Aleph. I hope that I piqued your interest! By the way, I am always looking for contributors…
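As a quick sanity check, consider a triangulation of the 2-sphere, whose Betti numbers are $b_0 = 1$, $b_1 = 0$, and $b_2 = 1$. Since the loop above only reports non-zero Betti numbers, its output would look something like this:

```
b_0 = 1
b_2 = 1
```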
2021-09-19 10:46:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40026581287384033, "perplexity": 1226.5486336001934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00692.warc.gz"}
https://www.cut-the-knot.org/triangle/Morley/MorleyFalse.shtml
# Morley's Theorem: A Proof That Needs Fixing

Morley's theorem asserts that in the diagram below $\Delta A'B'C'$ is equilateral, whatever $\Delta ABC.$ The proof below is taken verbatim from a recent book on problem-solving.

Let $\angle BAC=3\alpha,$ $\angle ABC=3\beta,$ $\angle ACB=3\gamma.$ Let $B',$ $T,$ $S$ be the points of intersection of the trisectors. If we assume that $2\alpha +2\gamma\le 60^{\circ},$ $2\alpha +2\beta\le 60^{\circ},$ and $2\beta +2\gamma\le 60^{\circ},$ then $4\alpha +4\beta+4\gamma\le 180^{\circ},$ i.e., $\alpha +\beta+\gamma\le 45^{\circ},$ which implies $3\alpha +3\beta+3\gamma\le 135^{\circ},$ or, equivalently, $180^{\circ}\le 135^{\circ},$ a contradiction. We conclude that one of the sums $2\alpha +2\gamma,$ $2\alpha +2\beta,$ $2\beta +2\gamma$ should exceed $60^{\circ}.$

Suppose $2\alpha +2\gamma\gt 60^{\circ},$ then $\angle ABC\lt 90^{\circ}.$ Let $B'B_{1}\perp AC$ and $B'B_{3}\perp AT.$ Then, since every point of the angle bisector is equidistant from both sides of the angle, we get $B'B_{3}=2B'D=2B'B_{1}.$ Similarly, if we consider $B'B_{2}\perp CS,$ we obtain $B'B_{2}=2B'B_{1},$ and thus $B'B_{2}=B'B_{3}.$ We also observe that $\angle B_{3}B'B_{2}=2\alpha+2\gamma\gt 60^{\circ}.$

Consider the points $A',$ $C'$ on the semi-straight lines $AT$ and $CS,$ respectively, so that triangle $A'B'C'$ is isosceles. We obviously have $\angle A'B'B_{3}=\angle C'B'B_{2}=s,$ therefore $2s=2\alpha +2\gamma - 60^{\circ}.$ Observe that $A'B_{3}=A'B'$ (due to symmetry) and $C'B_{2}=C'B'.$ It follows that $s=\alpha +\gamma - 30^{\circ}$ and $2h+2\alpha +2\gamma=180^{\circ},$ hence $h=\angle B_{2}B_{3}B'=\angle B'B_{2}B_{3}.$ Consequently, we have $h=90^{\circ}-\alpha -\gamma,$ thus

\begin{align} h - s &= 120^{\circ}-2\alpha -2\gamma\\ &= 120^{\circ}-\frac{2}{3}(3\alpha +3\gamma)\\ &= 120^{\circ}-\frac{2}{3}(180^{\circ}-3\beta)\\ &= 2\beta, \end{align}

and thus $u=2\beta,$ where $u=\angle B_{2}B_{3}A'=\angle SB_{2}B_{3}.$ At this point, we observe that $B_{2}A'=A'C'=C'B_{2}$ because of the isosceles triangle $A'B'C',$ and thus

\begin{align} \angle B_{3}C'B_{2} &= 180^{\circ}-u-\frac{u}{2}\\ &= 180^{\circ}-\frac{3u}{2}\\ &= 180^{\circ}-3\beta. \end{align}

It follows that the quadrilateral $B_{3}C'B_{2}$ is inscribed in a circle, and similarly, we obtain that the quadrilateral $BB_{2}A'B_{3}$ can be inscribed in a circle, as well. Consequently the straight lines $BA',$ $BC'$ trisect $\angle ABC,$ hence $T=A'$ and $S=C'.$

### References

1. S. E. Louridas, M. Th. Rassias, Problem-Solving and Selected Topics in Euclidean Geometry, Springer, 2013, pp. 65-68
2021-08-03 11:11:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999536275863647, "perplexity": 536.6558516277859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154457.66/warc/CC-MAIN-20210803092648-20210803122648-00456.warc.gz"}
https://indico.desy.de/event/27991/contributions/102423/
# ICRC 2021

Jul 12 – 23, 2021, Online, Europe/Berlin timezone

## The depth of the shower maximum of air showers measured with AERA

Jul 13, 2021, 6:00 PM, 1h 30m, 03

Talk, CRI | Cosmic Ray Indirect

Bjarni Pont

### Description

The Auger Engineering Radio Array (AERA) is currently the largest array of radio antennas for the detection of cosmic rays, spanning an area of $17$ km$^2$ with 153 radio antennas, measuring in the energy range around the transition from galactic to extra-galactic origin. It measures the radio emission of extensive air showers produced by cosmic rays, in the $30-80$ MHz band. The cosmic-ray mass composition is a crucial piece of information in determining the sources of cosmic rays and their acceleration mechanisms. The composition can be determined with a likelihood analysis that compares the measured radio-emission footprint on the ground to an ensemble of footprints from CORSIKA/CoREAS Monte-Carlo air shower simulations. These simulations are also used to determine the resolution of the method and to validate the reconstruction by identifying and correcting for systematics. We will present the method for the reconstruction of the depth of the shower maximum, compare our results to the independent fluorescence detector reconstruction measured on an event-by-event basis, and show the results of the cosmic-ray mass composition reconstruction with AERA in the energy range from $10^{17.5}$ to $10^{19}$ eV for data taken over the past seven years.

### Keywords

Radio; AERA; Pierre Auger; Xmax; mass composition; cosmic rays; depth of shower; composition

Collaboration: Auger

Experimental Results
2022-08-19 10:07:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27415964007377625, "perplexity": 2453.8532233708515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573667.83/warc/CC-MAIN-20220819100644-20220819130644-00026.warc.gz"}
http://ccx.openaircraft.com/ccx-doc/ccx_2.17/doc/ccx/node336.html
## *NETWORK MPC

Keyword type: model definition

With this option, an equation between variables in a network (total temperature and total pressure at the end nodes of a network element, mass flow in the middle node) can be created. The corresponding degrees of freedom are:

• total temperature: 0
• mass flow: 1
• total pressure: 2

The use of *NETWORK MPC requires the coding of subroutines networkmpc_lhs.f and networkmpc_rhs.f by the user. In these routines the user defines the MPC (linear or nonlinear) using the information entered underneath *NETWORK MPC. The syntax is identical to *EQUATION except for an additional parameter TYPE specifying the type of MPC. Using this type the user can distinguish between different kinds of MPC in the networkmpc_lhs.f and networkmpc_rhs.f subroutines.

For instance, suppose the user wants to define a network MPC of the form:

$$f := a\,p_{t}(\mathrm{node}_1) + b\,p_{t}^{2}(\mathrm{node}_2) = 0 \qquad (812)$$

specifying that the total pressure in node 1 should be $(-b/a)$ times the square of the total pressure in node 2. There are 2 degrees of freedom involved: dof 2 in node 1 and dof 2 in node 2. Underneath *NETWORK MPC the user defines the coefficients and degrees of freedom of the terms involved:

    *NETWORK MPC,TYPE=QUADRATIC
    2
    node1,2,a,node2,2,b

All this information including the type of the MPC is transferred to the networkmpc_lhs.f and networkmpc_rhs.f subroutines. In networkmpc_rhs.f the user has to code the calculation of $-f$, in networkmpc_lhs.f the calculation of the derivative of $f$ w.r.t. each degree of freedom occurring in the MPC. This has been done for TYPE=QUADRATIC and the reader is referred to the source code and example networkmpc.inp for further details.
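For the QUADRATIC example above, the quantities the two routines must supply work out directly from the definition of $f$:

$$-f = -\bigl(a\,p_{t}(\mathrm{node}_1) + b\,p_{t}^{2}(\mathrm{node}_2)\bigr), \qquad \frac{\partial f}{\partial p_{t}(\mathrm{node}_1)} = a, \qquad \frac{\partial f}{\partial p_{t}(\mathrm{node}_2)} = 2\,b\,p_{t}(\mathrm{node}_2).$$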
2021-12-05 02:16:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8232724666595459, "perplexity": 1582.139747770022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363134.25/warc/CC-MAIN-20211205005314-20211205035314-00152.warc.gz"}
http://mathhelpforum.com/pre-calculus/134964-cross-product.html
# Math Help - the cross product

1. ## the cross product

i wasn't @ school the day we learned the cross product. heres my first question, just help me get this one so i can complete the worksheet please!!! <-8,13,-2> X <-2,19,1>

2. Originally Posted by needmathhelptoujours

i wasn't @ school the day we learned the cross product. heres my first question, just help me get this one so i can complete the worksheet please!!! <-8,13,-2> X <-2,19,1>

Do you know how to find determinants? There isn't really anything fancy to understand, the cross product is just a definition. If "a" and "b" are the vectors:

$a=\langle a_1, a_2, a_3\rangle$
$b=\langle b_1, b_2, b_3\rangle$

The cross-product is the vector defined by:

$a \times b=\langle a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1\rangle$

This is difficult to remember, it's best if you know how to find determinants. http://en.wikipedia.org/wiki/Cross_product If you scroll down to "matrix" notation, you'll see how the determinant definition makes 3-by-3 cross products a little easier to remember.
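Applying that definition to the vectors in the question gives, for instance:

$$\langle -8, 13, -2\rangle \times \langle -2, 19, 1\rangle = \langle 13\cdot 1-(-2)\cdot 19,\;\; (-2)\cdot(-2)-(-8)\cdot 1,\;\; (-8)\cdot 19-13\cdot(-2)\rangle = \langle 51,\, 12,\, -126\rangle.$$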
2014-12-27 09:37:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6138222813606262, "perplexity": 1035.5865311196926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447551951.106/warc/CC-MAIN-20141224185911-00093-ip-10-231-17-201.ec2.internal.warc.gz"}
https://forum.bebac.at/forum_entry.php?id=22793&order=time
## installation issue for bear since 2022 [🇷 for BE/BA]

Dear Yung-jin,

your sources are beyond me and therefore my question might be stupid. Why do you need CairoDevice? pdf() should do as well, and for bitmaps e.g. png(type = "cairo", ...). Of course, RGtk2 is another pot of tea.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked.
2023-02-06 03:02:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2330709844827652, "perplexity": 14487.649013664459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500303.56/warc/CC-MAIN-20230206015710-20230206045710-00768.warc.gz"}
https://findingmyname.com/new-kilogram/
# New Kilogram!

What news! There's a new kilogram on the block! Le Grand K is being replaced by defining Planck's constant to be $$h=6.626\ 070\ 15\times10^{-34}\ {\rm J\ s}.$$ The kilogram is hiding in the "J", the joule, which is a compound unit $${\rm kg}\,\frac{{\rm m}^2}{{\rm s}^2}.$$ It isn't official until May 2019, but it nails down another universal constant!

It's perhaps less exciting, but just as important, that other constants are also being defined. The newly fixed values are those of $h$, $e$, $k$, and $N_A$:

| Constant | Value |
|---|---|
| $\Delta\nu_{\rm Cs}$ | $9\ 192\ 631\ 770\ {\rm Hz}$ |
| $c$ | $299\ 792\ 458\ \frac{\rm m}{\rm s}$ |
| $h$ | $6.626\ 070\ 15\times10^{-34}\ {\rm J\ s}$ |
| $e$ | $1.602\ 176\ 634\times10^{-19}\ {\rm C}$ |
| $k$ | $1.380\ 649\times10^{-23}\ \frac{\rm J}{\rm K}$ |
| $N_A$ | $6.022\ 140\ 76\times10^{23}\ \frac{1}{\rm mol}$ |
| $K_{cd}$ | $683\ \frac{\rm lm}{\rm W}$ |

This ties the definitions of all seven SI units to universal constants:

| Unit | Definition |
|---|---|
| second (s) | $\frac{1}{\Delta\nu_{\rm Cs}}$ |
| meter (m) | $\frac{c}{\Delta\nu_{\rm Cs}}$ |
| kilogram (kg) | $\frac{c^2}{h\cdot\Delta\nu_{\rm Cs}}$ |
| ampere (A) | $\frac{1}{e\cdot\Delta\nu_{\rm Cs}}$ |
| kelvin (K) | $\frac{k}{h\,\Delta\nu_{\rm Cs}}$ |
| mole (mol) | $N_A$ |
| candela (cd) | $\frac{1}{h\cdot\Delta\nu_{\rm Cs}^2\cdot K_{cd}}$ |

The compound units used in the definitions of the constants are a shorthand.

Another aspect that is not getting much press is the downside of defining the kilogram and Avogadro's number: 1 mole of $^{12}$C no longer has an exact mass of 0.012 kg. Now the molar mass of $^{12}$C is $$0.012\,000\,000\,0(45)\,\frac{\rm kg}{\rm mol}.$$ Perhaps in the future these definitions can be reintegrated. Technically, they are still related by $$N_A=\frac{\alpha^2 M({\rm e}^-)c}{2R_\infty h},$$ but we aren't exactly certain about the values of $\alpha$, $M({\rm e}^-)$, or $R_\infty$, so there's still work to be done! Avogadro's number may not move, but we may figure out these guys yet!

While these changes are official, they don't become "the definition" until 20 May 2019, so you've still got some time to adjust your voltmeter by 0.00001% or so.

A tous les temps, à tous les peuples.
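Concretely, the first paragraph already carries the whole definition of the mass unit: since the joule is ${\rm kg\,m^2\,s^{-2}}$, fixing $h$ exactly pins the kilogram to Planck's constant (with the metre and second already defined via $c$ and $\Delta\nu_{\rm Cs}$). Written out, this amounts to something like

$$1\,{\rm kg} = \frac{h}{6.626\,070\,15\times10^{-34}\,{\rm m^2\,s^{-1}}}.$$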
2018-12-10 21:50:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6595796346664429, "perplexity": 1893.3289442517196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823445.39/warc/CC-MAIN-20181210212544-20181210234044-00446.warc.gz"}
https://readpaper.com/paper/4734289627758739457
# Online and Dynamic Algorithms for Geometric Set Cover and Hitting Set

Mar 2023

Set cover and hitting set are fundamental problems in combinatorial optimization which are well-studied in the offline, online, and dynamic settings. We study the geometric versions of these problems and present new online and dynamic algorithms for them. In the online version of set cover (resp. hitting set), $m$ sets (resp. $n$ points) are given, and $n$ points (resp. $m$ sets) arrive online, one-by-one. In the dynamic versions, points (resp. sets) can arrive as well as depart. Our goal is to maintain a set cover (resp. hitting set), minimizing the size of the computed solution.

For online set cover for (axis-parallel) squares of arbitrary sizes, we present a tight $O(\log n)$-competitive algorithm. In the same setting for hitting set, we provide a tight $O(\log N)$-competitive algorithm, assuming that all points have integral coordinates in $[0,N)^{2}$. No online algorithm had been known for either of these settings, not even for unit squares (apart from the known online algorithms for arbitrary set systems).

For both dynamic set cover and hitting set with $d$-dimensional hyperrectangles, we obtain $(\log m)^{O(d)}$-approximation algorithms with $(\log m)^{O(d)}$ worst-case update time. This partially answers an open question posed by Chan et al. [SODA'22]. Previously, no dynamic algorithms with polylogarithmic update time were known even in the setting of squares (for either of these problems). Our main technical contributions are an *extended quad-tree* approach and a *frequency reduction* technique that reduces geometric set cover instances to instances of general set cover with bounded frequency.
2023-03-20 13:14:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7274482250213623, "perplexity": 3410.2428671627263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00782.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1008/1/f/a/433/1/
# Properties Label 1008.1.f.a.433.1 Level $1008$ Weight $1$ Character 1008.433 Self dual yes Analytic conductor $0.503$ Analytic rank $0$ Dimension $1$ Projective image $D_{2}$ CM/RM discs -3, -7, 21 Inner twists $4$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$1008 = 2^{4} \cdot 3^{2} \cdot 7$$ Weight: $$k$$ $$=$$ $$1$$ Character orbit: $$[\chi]$$ $$=$$ 1008.f (of order $$2$$, degree $$1$$, not minimal) ## Newform invariants Self dual: yes Analytic conductor: $$0.503057532734$$ Analytic rank: $$0$$ Dimension: $$1$$ Coefficient field: $$\mathbb{Q}$$ Coefficient ring: $$\mathbb{Z}$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 63) Projective image $$D_{2}$$ Projective field Galois closure of $$\Q(\sqrt{-3}, \sqrt{-7})$$ Artin image $D_4$ Artin field Galois closure of 4.0.3024.2 ## Embedding invariants Embedding label 433.1 Character $$\chi$$ $$=$$ 1008.43 ## $q$-expansion $$f(q)$$ $$=$$ $$q+1.00000 q^{7} +O(q^{10})$$ $$q+1.00000 q^{7} +1.00000 q^{25} -2.00000 q^{37} +2.00000 q^{43} +1.00000 q^{49} -2.00000 q^{67} -2.00000 q^{79} +O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1008\mathbb{Z}\right)^\times$$. $$n$$ $$127$$ $$577$$ $$757$$ $$785$$ $$\chi(n)$$ $$1$$ $$-1$$ $$1$$ $$1$$ ## Coefficient data For each $$n$$ we display the coefficients of the $$q$$-expansion $$a_n$$, the Satake parameters $$\alpha_p$$, and the Satake angles $$\theta_p = \textrm{Arg}(\alpha_p)$$. Display $$a_p$$ with $$p$$ up to: 50 250 1000 Display $$a_n$$ with $$n$$ up to: 50 250 1000 $$n$$ $$a_n$$ $$a_n / n^{(k-1)/2}$$ $$\alpha_n$$ $$\theta_n$$ $$p$$ $$a_p$$ $$a_p / p^{(k-1)/2}$$ $$\alpha_p$$ $$\theta_p$$ $$2$$ 0 0 $$3$$ 0 0 $$4$$ 0 0 $$5$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$6$$ 0 0 $$7$$ 1.00000 1.00000 $$8$$ 0 0 $$9$$ 0 0 $$10$$ 0 0 $$11$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$12$$ 0 0 $$13$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$14$$ 0 0 $$15$$ 0 0 $$16$$ 0 0 $$17$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$18$$ 0 0 $$19$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$20$$ 0 0 $$21$$ 0 0 $$22$$ 0 0 $$23$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$24$$ 0 0 $$25$$ 1.00000 1.00000 $$26$$ 0 0 $$27$$ 0 0 $$28$$ 0 0 $$29$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$30$$ 0 0 $$31$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$32$$ 0 0 $$33$$ 0 0 $$34$$ 0 0 $$35$$ 0 0 $$36$$ 0 0 $$37$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$38$$ 0 0 $$39$$ 0 0 $$40$$ 0 0 $$41$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$42$$ 0 0 $$43$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$44$$ 0 0 $$45$$ 0 0 $$46$$ 0 0 $$47$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$48$$ 0 0 $$49$$ 1.00000 1.00000 $$50$$ 0 0 $$51$$ 0 0 $$52$$ 0 0 $$53$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$54$$ 0 0 $$55$$ 0 0 $$56$$ 0 0 $$57$$ 0 0 $$58$$ 0 0 $$59$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$60$$ 0 0 $$61$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$62$$ 0 0 $$63$$ 0 0 $$64$$ 0 0 $$65$$ 0 0 $$66$$ 0 0 $$67$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$68$$ 0 0 $$69$$ 0 0 $$70$$ 0 0 $$71$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$72$$ 0 0 $$73$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$74$$ 0 0 $$75$$ 0 0 $$76$$ 0 0 $$77$$ 0 0 $$78$$ 0 0 $$79$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$80$$ 0 0 $$81$$ 0 0 $$82$$ 0 0 $$83$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$84$$ 0 0 $$85$$ 0 0 $$86$$ 0 0 $$87$$ 0 0 $$88$$ 0 0 $$89$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$90$$ 0 0 $$91$$ 0 0 $$92$$ 0 0 $$93$$ 0 0 $$94$$ 0 0 $$95$$ 0 0 $$96$$ 0 0 $$97$$ 0 0 
1.00000 $$0$$ −1.00000 $$\pi$$ $$98$$ 0 0 $$99$$ 0 0 $$100$$ 0 0 $$101$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$102$$ 0 0 $$103$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$104$$ 0 0 $$105$$ 0 0 $$106$$ 0 0 $$107$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$108$$ 0 0 $$109$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$110$$ 0 0 $$111$$ 0 0 $$112$$ 0 0 $$113$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$114$$ 0 0 $$115$$ 0 0 $$116$$ 0 0 $$117$$ 0 0 $$118$$ 0 0 $$119$$ 0 0 $$120$$ 0 0 $$121$$ −1.00000 −1.00000 $$122$$ 0 0 $$123$$ 0 0 $$124$$ 0 0 $$125$$ 0 0 $$126$$ 0 0 $$127$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$128$$ 0 0 $$129$$ 0 0 $$130$$ 0 0 $$131$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$132$$ 0 0 $$133$$ 0 0 $$134$$ 0 0 $$135$$ 0 0 $$136$$ 0 0 $$137$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$138$$ 0 0 $$139$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$140$$ 0 0 $$141$$ 0 0 $$142$$ 0 0 $$143$$ 0 0 $$144$$ 0 0 $$145$$ 0 0 $$146$$ 0 0 $$147$$ 0 0 $$148$$ 0 0 $$149$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$150$$ 0 0 $$151$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$152$$ 0 0 $$153$$ 0 0 $$154$$ 0 0 $$155$$ 0 0 $$156$$ 0 0 $$157$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$158$$ 0 0 $$159$$ 0 0 $$160$$ 0 0 $$161$$ 0 0 $$162$$ 0 0 $$163$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$164$$ 0 0 $$165$$ 0 0 $$166$$ 0 0 $$167$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$168$$ 0 0 $$169$$ 1.00000 1.00000 $$170$$ 0 0 $$171$$ 0 0 $$172$$ 0 0 $$173$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$174$$ 0 0 $$175$$ 1.00000 1.00000 $$176$$ 0 0 $$177$$ 0 0 $$178$$ 0 0 $$179$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$180$$ 0 0 $$181$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$182$$ 0 0 $$183$$ 0 0 $$184$$ 0 0 $$185$$ 0 0 $$186$$ 0 0 $$187$$ 0 0 $$188$$ 0 0 $$189$$ 0 0 $$190$$ 0 0 $$191$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$192$$ 0 0 $$193$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$194$$ 0 0 $$195$$ 0 0 $$196$$ 0 0 $$197$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$198$$ 0 0 $$199$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$200$$ 0 0 $$201$$ 0 0 $$202$$ 0 0 $$203$$ 0 0 $$204$$ 0 0 $$205$$ 0 0 $$206$$ 0 0 $$207$$ 0 0 $$208$$ 0 0 $$209$$ 0 0 $$210$$ 0 0 $$211$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$212$$ 0 0 $$213$$ 0 0 $$214$$ 0 0 $$215$$ 0 0 $$216$$ 0 0 $$217$$ 0 0 $$218$$ 0 0 $$219$$ 0 0 $$220$$ 0 0 $$221$$ 0 0 $$222$$ 0 0 $$223$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$224$$ 0 0 $$225$$ 0 0 $$226$$ 0 0 $$227$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$228$$ 0 0 $$229$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$230$$ 0 0 $$231$$ 0 0 $$232$$ 0 0 $$233$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$234$$ 0 0 $$235$$ 0 0 $$236$$ 0 0 $$237$$ 0 0 $$238$$ 0 0 $$239$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$240$$ 0 0 $$241$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$242$$ 0 0 $$243$$ 0 0 $$244$$ 0 0 $$245$$ 0 0 $$246$$ 0 0 $$247$$ 0 0 $$248$$ 0 0 $$249$$ 0 0 $$250$$ 0 0 $$251$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$252$$ 0 0 $$253$$ 0 0 $$254$$ 0 0 $$255$$ 0 0 $$256$$ 0 0 $$257$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$258$$ 0 0 $$259$$ −2.00000 −2.00000 $$260$$ 0 0 $$261$$ 0 0 $$262$$ 0 0 $$263$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$264$$ 0 0 $$265$$ 0 0 $$266$$ 0 0 $$267$$ 0 0 $$268$$ 0 0 $$269$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$270$$ 0 0 $$271$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$272$$ 0 0 $$273$$ 0 0 $$274$$ 0 0 $$275$$ 0 0 $$276$$ 0 0 $$277$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$278$$ 0 0 $$279$$ 0 0 $$280$$ 0 0 $$281$$ 0 0 1.00000i 
$$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$282$$ 0 0 $$283$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$284$$ 0 0 $$285$$ 0 0 $$286$$ 0 0 $$287$$ 0 0 $$288$$ 0 0 $$289$$ 1.00000 1.00000 $$290$$ 0 0 $$291$$ 0 0 $$292$$ 0 0 $$293$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$294$$ 0 0 $$295$$ 0 0 $$296$$ 0 0 $$297$$ 0 0 $$298$$ 0 0 $$299$$ 0 0 $$300$$ 0 0 $$301$$ 2.00000 2.00000 $$302$$ 0 0 $$303$$ 0 0 $$304$$ 0 0 $$305$$ 0 0 $$306$$ 0 0 $$307$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$308$$ 0 0 $$309$$ 0 0 $$310$$ 0 0 $$311$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$312$$ 0 0 $$313$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$314$$ 0 0 $$315$$ 0 0 $$316$$ 0 0 $$317$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$318$$ 0 0 $$319$$ 0 0 $$320$$ 0 0 $$321$$ 0 0 $$322$$ 0 0 $$323$$ 0 0 $$324$$ 0 0 $$325$$ 0 0 $$326$$ 0 0 $$327$$ 0 0 $$328$$ 0 0 $$329$$ 0 0 $$330$$ 0 0 $$331$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$332$$ 0 0 $$333$$ 0 0 $$334$$ 0 0 $$335$$ 0 0 $$336$$ 0 0 $$337$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$338$$ 0 0 $$339$$ 0 0 $$340$$ 0 0 $$341$$ 0 0 $$342$$ 0 0 $$343$$ 1.00000 1.00000 $$344$$ 0 0 $$345$$ 0 0 $$346$$ 0 0 $$347$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$348$$ 0 0 $$349$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$350$$ 0 0 $$351$$ 0 0 $$352$$ 0 0 $$353$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$354$$ 0 0 $$355$$ 0 0 $$356$$ 0 0 $$357$$ 0 0 $$358$$ 0 0 $$359$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$360$$ 0 0 $$361$$ 1.00000 1.00000 $$362$$ 0 0 $$363$$ 0 0 $$364$$ 0 0 $$365$$ 0 0 $$366$$ 0 0 $$367$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$368$$ 0 0 $$369$$ 0 0 $$370$$ 0 0 $$371$$ 0 0 $$372$$ 0 0 $$373$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$374$$ 0 0 $$375$$ 0 0 $$376$$ 0 0 $$377$$ 0 0 $$378$$ 0 0 $$379$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$380$$ 0 0 $$381$$ 0 0 $$382$$ 0 0 $$383$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$384$$ 0 0 $$385$$ 0 0 $$386$$ 0 0 $$387$$ 0 0 $$388$$ 0 0 $$389$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$390$$ 0 0 $$391$$ 0 0 $$392$$ 0 0 $$393$$ 0 0 $$394$$ 0 0 $$395$$ 0 0 $$396$$ 0 0 $$397$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$398$$ 0 0 $$399$$ 0 0 $$400$$ 0 0 $$401$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$402$$ 0 0 $$403$$ 0 0 $$404$$ 0 0 $$405$$ 0 0 $$406$$ 0 0 $$407$$ 0 0 $$408$$ 0 0 $$409$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$410$$ 0 0 $$411$$ 0 0 $$412$$ 0 0 $$413$$ 0 0 $$414$$ 0 0 $$415$$ 0 0 $$416$$ 0 0 $$417$$ 0 0 $$418$$ 0 0 $$419$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$420$$ 0 0 $$421$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$422$$ 0 0 $$423$$ 0 0 $$424$$ 0 0 $$425$$ 0 0 $$426$$ 0 0 $$427$$ 0 0 $$428$$ 0 0 $$429$$ 0 0 $$430$$ 0 0 $$431$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$432$$ 0 0 $$433$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$434$$ 0 0 $$435$$ 0 0 $$436$$ 0 0 $$437$$ 0 0 $$438$$ 0 0 $$439$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$440$$ 0 0 $$441$$ 0 0 $$442$$ 0 0 $$443$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$444$$ 0 0 $$445$$ 0 0 $$446$$ 0 0 $$447$$ 0 0 $$448$$ 0 0 $$449$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$450$$ 0 0 $$451$$ 0 0 $$452$$ 0 0 $$453$$ 0 0 $$454$$ 0 0 $$455$$ 0 0 $$456$$ 0 0 $$457$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$458$$ 0 0 $$459$$ 0 0 $$460$$ 0 0 $$461$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$462$$ 0 0 $$463$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$464$$ 0 0 $$465$$ 0 0 $$466$$ 0 0 $$467$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$468$$ 0 0 $$469$$ −2.00000 −2.00000 $$470$$ 0 0 $$471$$ 0 0 $$472$$ 0 0 $$473$$ 0 0 $$474$$ 0 0 $$475$$ 0 0 
$$476$$ 0 0 $$477$$ 0 0 $$478$$ 0 0 $$479$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$480$$ 0 0 $$481$$ 0 0 $$482$$ 0 0 $$483$$ 0 0 $$484$$ 0 0 $$485$$ 0 0 $$486$$ 0 0 $$487$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$488$$ 0 0 $$489$$ 0 0 $$490$$ 0 0 $$491$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$492$$ 0 0 $$493$$ 0 0 $$494$$ 0 0 $$495$$ 0 0 $$496$$ 0 0 $$497$$ 0 0 $$498$$ 0 0 $$499$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$500$$ 0 0 $$501$$ 0 0 $$502$$ 0 0 $$503$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$504$$ 0 0 $$505$$ 0 0 $$506$$ 0 0 $$507$$ 0 0 $$508$$ 0 0 $$509$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$510$$ 0 0 $$511$$ 0 0 $$512$$ 0 0 $$513$$ 0 0 $$514$$ 0 0 $$515$$ 0 0 $$516$$ 0 0 $$517$$ 0 0 $$518$$ 0 0 $$519$$ 0 0 $$520$$ 0 0 $$521$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$522$$ 0 0 $$523$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$524$$ 0 0 $$525$$ 0 0 $$526$$ 0 0 $$527$$ 0 0 $$528$$ 0 0 $$529$$ −1.00000 −1.00000 $$530$$ 0 0 $$531$$ 0 0 $$532$$ 0 0 $$533$$ 0 0 $$534$$ 0 0 $$535$$ 0 0 $$536$$ 0 0 $$537$$ 0 0 $$538$$ 0 0 $$539$$ 0 0 $$540$$ 0 0 $$541$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$542$$ 0 0 $$543$$ 0 0 $$544$$ 0 0 $$545$$ 0 0 $$546$$ 0 0 $$547$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$548$$ 0 0 $$549$$ 0 0 $$550$$ 0 0 $$551$$ 0 0 $$552$$ 0 0 $$553$$ −2.00000 −2.00000 $$554$$ 0 0 $$555$$ 0 0 $$556$$ 0 0 $$557$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$558$$ 0 0 $$559$$ 0 0 $$560$$ 0 0 $$561$$ 0 0 $$562$$ 0 0 $$563$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$564$$ 0 0 $$565$$ 0 0 $$566$$ 0 0 $$567$$ 0 0 $$568$$ 0 0 $$569$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$570$$ 0 0 $$571$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$572$$ 0 0 $$573$$ 0 0 $$574$$ 0 0 $$575$$ 0 0 $$576$$ 0 0 $$577$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$578$$ 0 0 $$579$$ 0 0 $$580$$ 0 0 $$581$$ 0 0 $$582$$ 0 0 $$583$$ 0 0 $$584$$ 0 0 $$585$$ 0 0 $$586$$ 0 0 $$587$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$588$$ 0 0 $$589$$ 0 0 $$590$$ 0 0 $$591$$ 0 0 $$592$$ 0 0 $$593$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$594$$ 0 0 $$595$$ 0 0 $$596$$ 0 0 $$597$$ 0 0 $$598$$ 0 0 $$599$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$600$$ 0 0 $$601$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$602$$ 0 0 $$603$$ 0 0 $$604$$ 0 0 $$605$$ 0 0 $$606$$ 0 0 $$607$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$608$$ 0 0 $$609$$ 0 0 $$610$$ 0 0 $$611$$ 0 0 $$612$$ 0 0 $$613$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$614$$ 0 0 $$615$$ 0 0 $$616$$ 0 0 $$617$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$618$$ 0 0 $$619$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$620$$ 0 0 $$621$$ 0 0 $$622$$ 0 0 $$623$$ 0 0 $$624$$ 0 0 $$625$$ 1.00000 1.00000 $$626$$ 0 0 $$627$$ 0 0 $$628$$ 0 0 $$629$$ 0 0 $$630$$ 0 0 $$631$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$632$$ 0 0 $$633$$ 0 0 $$634$$ 0 0 $$635$$ 0 0 $$636$$ 0 0 $$637$$ 0 0 $$638$$ 0 0 $$639$$ 0 0 $$640$$ 0 0 $$641$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$642$$ 0 0 $$643$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$644$$ 0 0 $$645$$ 0 0 $$646$$ 0 0 $$647$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$648$$ 0 0 $$649$$ 0 0 $$650$$ 0 0 $$651$$ 0 0 $$652$$ 0 0 $$653$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$654$$ 0 0 $$655$$ 0 0 $$656$$ 0 0 $$657$$ 0 0 $$658$$ 0 0 $$659$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$660$$ 0 0 $$661$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$662$$ 0 0 $$663$$ 0 0 $$664$$ 0 0 $$665$$ 0 0 $$666$$ 0 0 $$667$$ 0 0 $$668$$ 0 0 $$669$$ 0 0 $$670$$ 0 0 $$671$$ 0 0 $$672$$ 0 0 $$673$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 
$$\pi$$ $$674$$ 0 0 $$675$$ 0 0 $$676$$ 0 0 $$677$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$678$$ 0 0 $$679$$ 0 0 $$680$$ 0 0 $$681$$ 0 0 $$682$$ 0 0 $$683$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$684$$ 0 0 $$685$$ 0 0 $$686$$ 0 0 $$687$$ 0 0 $$688$$ 0 0 $$689$$ 0 0 $$690$$ 0 0 $$691$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$692$$ 0 0 $$693$$ 0 0 $$694$$ 0 0 $$695$$ 0 0 $$696$$ 0 0 $$697$$ 0 0 $$698$$ 0 0 $$699$$ 0 0 $$700$$ 0 0 $$701$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$702$$ 0 0 $$703$$ 0 0 $$704$$ 0 0 $$705$$ 0 0 $$706$$ 0 0 $$707$$ 0 0 $$708$$ 0 0 $$709$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$710$$ 0 0 $$711$$ 0 0 $$712$$ 0 0 $$713$$ 0 0 $$714$$ 0 0 $$715$$ 0 0 $$716$$ 0 0 $$717$$ 0 0 $$718$$ 0 0 $$719$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$720$$ 0 0 $$721$$ 0 0 $$722$$ 0 0 $$723$$ 0 0 $$724$$ 0 0 $$725$$ 0 0 $$726$$ 0 0 $$727$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$728$$ 0 0 $$729$$ 0 0 $$730$$ 0 0 $$731$$ 0 0 $$732$$ 0 0 $$733$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$734$$ 0 0 $$735$$ 0 0 $$736$$ 0 0 $$737$$ 0 0 $$738$$ 0 0 $$739$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$740$$ 0 0 $$741$$ 0 0 $$742$$ 0 0 $$743$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$744$$ 0 0 $$745$$ 0 0 $$746$$ 0 0 $$747$$ 0 0 $$748$$ 0 0 $$749$$ 0 0 $$750$$ 0 0 $$751$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$752$$ 0 0 $$753$$ 0 0 $$754$$ 0 0 $$755$$ 0 0 $$756$$ 0 0 $$757$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$758$$ 0 0 $$759$$ 0 0 $$760$$ 0 0 $$761$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$762$$ 0 0 $$763$$ −2.00000 −2.00000 $$764$$ 0 0 $$765$$ 0 0 $$766$$ 0 0 $$767$$ 0 0 $$768$$ 0 0 $$769$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$770$$ 0 0 $$771$$ 0 0 $$772$$ 0 0 $$773$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$774$$ 0 0 $$775$$ 0 0 $$776$$ 0 0 $$777$$ 0 0 $$778$$ 0 0 $$779$$ 0 0 $$780$$ 0 0 $$781$$ 0 0 $$782$$ 0 0 $$783$$ 0 0 $$784$$ 0 0 $$785$$ 0 0 $$786$$ 0 0 $$787$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$788$$ 0 0 $$789$$ 0 0 $$790$$ 0 0 $$791$$ 0 0 $$792$$ 0 0 $$793$$ 0 0 $$794$$ 0 0 $$795$$ 0 0 $$796$$ 0 0 $$797$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$798$$ 0 0 $$799$$ 0 0 $$800$$ 0 0 $$801$$ 0 0 $$802$$ 0 0 $$803$$ 0 0 $$804$$ 0 0 $$805$$ 0 0 $$806$$ 0 0 $$807$$ 0 0 $$808$$ 0 0 $$809$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$810$$ 0 0 $$811$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$812$$ 0 0 $$813$$ 0 0 $$814$$ 0 0 $$815$$ 0 0 $$816$$ 0 0 $$817$$ 0 0 $$818$$ 0 0 $$819$$ 0 0 $$820$$ 0 0 $$821$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$822$$ 0 0 $$823$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$824$$ 0 0 $$825$$ 0 0 $$826$$ 0 0 $$827$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$828$$ 0 0 $$829$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$830$$ 0 0 $$831$$ 0 0 $$832$$ 0 0 $$833$$ 0 0 $$834$$ 0 0 $$835$$ 0 0 $$836$$ 0 0 $$837$$ 0 0 $$838$$ 0 0 $$839$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$840$$ 0 0 $$841$$ −1.00000 −1.00000 $$842$$ 0 0 $$843$$ 0 0 $$844$$ 0 0 $$845$$ 0 0 $$846$$ 0 0 $$847$$ −1.00000 −1.00000 $$848$$ 0 0 $$849$$ 0 0 $$850$$ 0 0 $$851$$ 0 0 $$852$$ 0 0 $$853$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$854$$ 0 0 $$855$$ 0 0 $$856$$ 0 0 $$857$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$858$$ 0 0 $$859$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$860$$ 0 0 $$861$$ 0 0 $$862$$ 0 0 $$863$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$864$$ 0 0 $$865$$ 0 0 $$866$$ 0 0 $$867$$ 0 0 $$868$$ 0 0 $$869$$ 0 0 $$870$$ 0 0 $$871$$ 0 0 $$872$$ 0 0 $$873$$ 0 0 $$874$$ 0 0 $$875$$ 0 0 $$876$$ 0 0 $$877$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ 
$$878$$ 0 0 $$879$$ 0 0 $$880$$ 0 0 $$881$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$882$$ 0 0 $$883$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$884$$ 0 0 $$885$$ 0 0 $$886$$ 0 0 $$887$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$888$$ 0 0 $$889$$ −2.00000 −2.00000 $$890$$ 0 0 $$891$$ 0 0 $$892$$ 0 0 $$893$$ 0 0 $$894$$ 0 0 $$895$$ 0 0 $$896$$ 0 0 $$897$$ 0 0 $$898$$ 0 0 $$899$$ 0 0 $$900$$ 0 0 $$901$$ 0 0 $$902$$ 0 0 $$903$$ 0 0 $$904$$ 0 0 $$905$$ 0 0 $$906$$ 0 0 $$907$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$908$$ 0 0 $$909$$ 0 0 $$910$$ 0 0 $$911$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$912$$ 0 0 $$913$$ 0 0 $$914$$ 0 0 $$915$$ 0 0 $$916$$ 0 0 $$917$$ 0 0 $$918$$ 0 0 $$919$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$920$$ 0 0 $$921$$ 0 0 $$922$$ 0 0 $$923$$ 0 0 $$924$$ 0 0 $$925$$ −2.00000 −2.00000 $$926$$ 0 0 $$927$$ 0 0 $$928$$ 0 0 $$929$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$930$$ 0 0 $$931$$ 0 0 $$932$$ 0 0 $$933$$ 0 0 $$934$$ 0 0 $$935$$ 0 0 $$936$$ 0 0 $$937$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$938$$ 0 0 $$939$$ 0 0 $$940$$ 0 0 $$941$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$942$$ 0 0 $$943$$ 0 0 $$944$$ 0 0 $$945$$ 0 0 $$946$$ 0 0 $$947$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$948$$ 0 0 $$949$$ 0 0 $$950$$ 0 0 $$951$$ 0 0 $$952$$ 0 0 $$953$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$954$$ 0 0 $$955$$ 0 0 $$956$$ 0 0 $$957$$ 0 0 $$958$$ 0 0 $$959$$ 0 0 $$960$$ 0 0 $$961$$ 1.00000 1.00000 $$962$$ 0 0 $$963$$ 0 0 $$964$$ 0 0 $$965$$ 0 0 $$966$$ 0 0 $$967$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$968$$ 0 0 $$969$$ 0 0 $$970$$ 0 0 $$971$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$972$$ 0 0 $$973$$ 0 0 $$974$$ 0 0 $$975$$ 0 0 $$976$$ 0 0 $$977$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$978$$ 0 0 $$979$$ 0 0 $$980$$ 0 0 $$981$$ 0 0 $$982$$ 0 0 $$983$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$984$$ 0 0 $$985$$ 0 0 $$986$$ 0 0 $$987$$ 0 0 $$988$$ 0 0 $$989$$ 0 0 $$990$$ 0 0 $$991$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$992$$ 0 0 $$993$$ 0 0 $$994$$ 0 0 $$995$$ 0 0 $$996$$ 0 0 $$997$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$998$$ 0 0 $$999$$ 0 0 Display $$a_p$$ with $$p$$ up to: 50 250 1000 Display $$a_n$$ with $$n$$ up to: 50 250 1000 ## Twists By twisting character Char Parity Ord Type Twist Min Dim 1.1 even 1 trivial 1008.1.f.a.433.1 1 3.2 odd 2 CM 1008.1.f.a.433.1 1 4.3 odd 2 63.1.d.a.55.1 1 7.6 odd 2 CM 1008.1.f.a.433.1 1 12.11 even 2 63.1.d.a.55.1 1 20.3 even 4 1575.1.e.b.874.2 2 20.7 even 4 1575.1.e.b.874.1 2 20.19 odd 2 1575.1.h.b.1126.1 1 21.20 even 2 RM 1008.1.f.a.433.1 1 28.3 even 6 441.1.m.a.19.1 2 28.11 odd 6 441.1.m.a.19.1 2 28.19 even 6 441.1.m.a.325.1 2 28.23 odd 6 441.1.m.a.325.1 2 28.27 even 2 63.1.d.a.55.1 1 36.7 odd 6 567.1.l.b.433.1 2 36.11 even 6 567.1.l.b.433.1 2 36.23 even 6 567.1.l.b.55.1 2 36.31 odd 6 567.1.l.b.55.1 2 60.23 odd 4 1575.1.e.b.874.2 2 60.47 odd 4 1575.1.e.b.874.1 2 60.59 even 2 1575.1.h.b.1126.1 1 84.11 even 6 441.1.m.a.19.1 2 84.23 even 6 441.1.m.a.325.1 2 84.47 odd 6 441.1.m.a.325.1 2 84.59 odd 6 441.1.m.a.19.1 2 84.83 odd 2 63.1.d.a.55.1 1 140.27 odd 4 1575.1.e.b.874.1 2 140.83 odd 4 1575.1.e.b.874.2 2 140.139 even 2 1575.1.h.b.1126.1 1 252.11 even 6 3969.1.t.c.3106.1 2 252.23 even 6 3969.1.t.c.2971.1 2 252.31 even 6 3969.1.k.b.460.1 2 252.47 odd 6 3969.1.k.b.1648.1 2 252.59 odd 6 3969.1.k.b.460.1 2 252.67 odd 6 3969.1.k.b.460.1 2 252.79 odd 6 3969.1.k.b.1648.1 2 252.83 odd 6 567.1.l.b.433.1 2 252.95 even 6 3969.1.k.b.460.1 2 252.103 even 6 3969.1.t.c.2971.1 2 252.115 even 6 3969.1.t.c.3106.1 2 252.131 odd 
6 3969.1.t.c.2971.1 2 252.139 even 6 567.1.l.b.55.1 2 252.151 odd 6 3969.1.t.c.3106.1 2 252.167 odd 6 567.1.l.b.55.1 2 252.187 even 6 3969.1.k.b.1648.1 2 252.191 even 6 3969.1.k.b.1648.1 2 252.223 even 6 567.1.l.b.433.1 2 252.227 odd 6 3969.1.t.c.3106.1 2 252.247 odd 6 3969.1.t.c.2971.1 2 420.83 even 4 1575.1.e.b.874.2 2 420.167 even 4 1575.1.e.b.874.1 2 420.419 odd 2 1575.1.h.b.1126.1 1 By twisted newform Twist Min Dim Char Parity Ord Type 63.1.d.a.55.1 1 4.3 odd 2 63.1.d.a.55.1 1 12.11 even 2 63.1.d.a.55.1 1 28.27 even 2 63.1.d.a.55.1 1 84.83 odd 2 441.1.m.a.19.1 2 28.3 even 6 441.1.m.a.19.1 2 28.11 odd 6 441.1.m.a.19.1 2 84.11 even 6 441.1.m.a.19.1 2 84.59 odd 6 441.1.m.a.325.1 2 28.19 even 6 441.1.m.a.325.1 2 28.23 odd 6 441.1.m.a.325.1 2 84.23 even 6 441.1.m.a.325.1 2 84.47 odd 6 567.1.l.b.55.1 2 36.23 even 6 567.1.l.b.55.1 2 36.31 odd 6 567.1.l.b.55.1 2 252.139 even 6 567.1.l.b.55.1 2 252.167 odd 6 567.1.l.b.433.1 2 36.7 odd 6 567.1.l.b.433.1 2 36.11 even 6 567.1.l.b.433.1 2 252.83 odd 6 567.1.l.b.433.1 2 252.223 even 6 1008.1.f.a.433.1 1 1.1 even 1 trivial 1008.1.f.a.433.1 1 3.2 odd 2 CM 1008.1.f.a.433.1 1 7.6 odd 2 CM 1008.1.f.a.433.1 1 21.20 even 2 RM 1575.1.e.b.874.1 2 20.7 even 4 1575.1.e.b.874.1 2 60.47 odd 4 1575.1.e.b.874.1 2 140.27 odd 4 1575.1.e.b.874.1 2 420.167 even 4 1575.1.e.b.874.2 2 20.3 even 4 1575.1.e.b.874.2 2 60.23 odd 4 1575.1.e.b.874.2 2 140.83 odd 4 1575.1.e.b.874.2 2 420.83 even 4 1575.1.h.b.1126.1 1 20.19 odd 2 1575.1.h.b.1126.1 1 60.59 even 2 1575.1.h.b.1126.1 1 140.139 even 2 1575.1.h.b.1126.1 1 420.419 odd 2 3969.1.k.b.460.1 2 252.31 even 6 3969.1.k.b.460.1 2 252.59 odd 6 3969.1.k.b.460.1 2 252.67 odd 6 3969.1.k.b.460.1 2 252.95 even 6 3969.1.k.b.1648.1 2 252.47 odd 6 3969.1.k.b.1648.1 2 252.79 odd 6 3969.1.k.b.1648.1 2 252.187 even 6 3969.1.k.b.1648.1 2 252.191 even 6 3969.1.t.c.2971.1 2 252.23 even 6 3969.1.t.c.2971.1 2 252.103 even 6 3969.1.t.c.2971.1 2 252.131 odd 6 3969.1.t.c.2971.1 2 252.247 odd 6 3969.1.t.c.3106.1 2 252.11 even 6 3969.1.t.c.3106.1 2 252.115 even 6 3969.1.t.c.3106.1 2 252.151 odd 6 3969.1.t.c.3106.1 2 252.227 odd 6
2020-10-22 18:32:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9437578320503235, "perplexity": 6582.079341334432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880014.26/warc/CC-MAIN-20201022170349-20201022200349-00097.warc.gz"}
http://hepnp.ihep.ac.cn/article/id/0206db2c-bf5b-44f2-ac85-7f8eb34d0923
# Fusion reactions around the barrier for Be+238U

Bo Mei, Dimiter L. Balabanski, Wei Hua, Yu-Hu Zhang, Xiao-Hong Zhou, Cen-Xi Yuan and Jun Su, Chinese Physics C

Corresponding author: Wei Hua, [email protected]

• 1. Sino-French Institute of Nuclear Engineering and Technology, Sun Yat-sen University, Zhuhai 519082, China
• 2. Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
• 3. Extreme Light Infrastructure Nuclear Physics, “Horia Hulubei” National R&D Institute for Physics and Nuclear Engineering, Strada Reactorului 30, 077125 Bucharest Magurele, Romania

Abstract: Fusion-evaporation cross sections of $^{238}$U($^{9}$Be, 5n)$^{242}$Cm are measured over a wide energy range around the Coulomb barrier. These measured cross sections are compared with model calculations using two codes, namely HIVAP2 and KEWPIE2. HIVAP2 calculations overestimate the measured fusion-evaporation cross sections by a factor of approximately 3. In KEWPIE2 calculations, two approaches, namely the Wentzel-Kramers-Brillouin (WKB) approximation and the empirical barrier-distribution (EBD) method, are used for the capture probability; both of them properly describe the measured cross sections. Additionally, fusion cross sections of $^{7,9}$Be+$^{238}$U measured in two experiments are applied to constrain model calculations further through three codes, i.e., HIVAP2, KEWPIE2, and CCFULL. Parameters in these codes are also examined by comparison with measured fusion cross sections. All the comparisons indicate that the KEWPIE2 calculations using the WKB approximation agree well with the measured cross sections of both fusion reactions $^{7,9}$Be+$^{238}$U and the fusion-evaporation reaction $^{238}$U($^{9}$Be, 5n)$^{242}$Cm. Calculations using the fusion code CCFULL are also in good agreement with the measured fusion cross sections of $^{7,9}$Be+$^{238}$U.
2021-02-27 21:52:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5963935256004333, "perplexity": 10321.509376845399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00227.warc.gz"}
http://mattpap.github.io/scipy-2011-tutorial/html/printers.html
# Setting up and using printers¶

Computations are at the heart of symbolic mathematics systems, but very often presentation and visualization of results or intermediate steps is also very important, for example for sharing results. SymPy implements a very generic and flexible framework for implementing printers of mathematical expressions, Python’s data types and data structures, and foreign types.

## Built-in printers¶

There are many ways in which expressions can be printed in SymPy.

### Standard¶

This is what str(expression) returns and it looks like this: >>> print x**2 x**2 >>> print 1/x 1/x >>> print Integral(x**2, x) Integral(x**2, x) Note that str() is by design not aware of global configuration, so if you for example run bin/isympy -o grlex, str() will ignore this. There is another function sstr() that takes global configuration into account.

### Low-level¶

Due to the internal implementation of Python, SymPy can’t use repr() for generating a low-level textual representation of expressions. To get this kind of representation you have to use srepr(): >>> srepr(x**2) Pow(Symbol('x'), Integer(2)) >>> srepr(1/x) Pow(Symbol('x'), Integer(-1)) >>> srepr(Integral(x**2, x)) Integral(Pow(Symbol('x'), Integer(2)), Tuple(Symbol('x'))) repr() gives the same result as str(): >>> repr(x**2) x**2 Note that repr() is also not aware of global configuration.

### Pretty printing¶

This is a nice 2D ASCII-art printing produced by pprint(): >>> pprint(x**2, use_unicode=False) 2 x >>> pprint(1/x, use_unicode=False) 1 - x >>> pprint(Integral(x**2, x), use_unicode=False) / | | 2 | x dx | / It also has support for the Unicode character set, which makes shapes look much more natural than in the ASCII-art case: >>> pprint(Integral(x**2, x), use_unicode=True) ⌠ ⎮ 2 ⎮ x dx ⌡ By default pprint() tries to figure out the better of Unicode and ASCII-art for generating output. If Unicode is supported, then this will be the default. Otherwise it falls back to ASCII art. The user can select the desired character set by setting the use_unicode option in pprint().

### Python printing¶

>>> print python(x**2) x = Symbol('x') e = x**2 >>> print python(1/x) x = Symbol('x') e = 1/x >>> print python(Integral(x**2, x)) x = Symbol('x') e = Integral(x**2, x)

### LaTeX printing¶

>>> latex(x**2) x^{2} >>> latex(x**2, mode='inline') $x^{2}$ >>> latex(x**2, mode='equation') $$x^{2}$$ >>> latex(x**2, mode='equation*') \begin{equation*}x^{2}\end{equation*} >>> latex(1/x) \frac{1}{x} >>> latex(Integral(x**2, x)) \int x^{2}\,dx

### MathML printing¶

>>> from sympy.printing.mathml import mathml >>> from sympy import Integral, latex >>> from sympy.abc import x >>> print mathml(x**2) <apply><power/><ci>x</ci><cn>2</cn></apply> >>> print mathml(1/x) <apply><power/><ci>x</ci><cn>-1</cn></apply>

### Printing with Pyglet¶

This allows for printing expressions in a separate GUI window. Issue: >>> preview(x**2 + Integral(x**2, x) + 1/x) and a Pyglet window with the LaTeX-rendered expression will pop up:

## Setting up printers¶

By default SymPy uses the str()/sstr() printer. Other printers can be used explicitly, as in the examples in the subsections above. This is efficient only when printing at most a few times with a non-standard printer. To make Python use a different printer than the default one, the typical approach is to modify sys.displayhook: >>> 1/x 1/x >>> import sys >>> oldhook = sys.displayhook >>> sys.displayhook = pprint >>> 1/x 1 ─ x >>> sys.displayhook = oldhook Alternatively one can use SymPy’s function init_printing().
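A minimal sketch of the init_printing() route (an editorial example; keyword arguments accepted by init_printing() have varied a little between SymPy versions):

```python
from sympy import Integral, Symbol, init_printing, pprint

x = Symbol('x')

# In an interactive session (isympy, plain Python or IPython), this makes every
# displayed result go through the pretty printer:
init_printing(use_unicode=True)

# In a plain script there is no display hook, so call pprint() directly:
pprint(Integral(x**2, x), use_unicode=True)
```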
init_printing() works only for the pretty printer, but it is the fastest way to set up this type of printer.

## Customizing built-in printers¶

Suppose we dislike how certain classes of expressions are printed. One such issue may be pretty printing of polynomials (instances of the Poly class), in which case PrettyPrinter simply doesn’t have support for printing polynomials and falls back to StrPrinter: >>> Poly(x**2 + 1) Poly(x**2 + 1, x, domain='ZZ') One way to add support for pretty printing polynomials is to extend the pretty printer’s class and implement a _print_Poly method. We would choose this approach if we wanted this to be a permanent change in SymPy. We will choose a different way and subclass PrettyPrinter and implement _print_Poly in the new class. Let’s call the new pretty printer PolyPrettyPrinter. Its implementation looks like this: from sympy.printing.pretty.pretty import PrettyPrinter from sympy.printing.pretty.stringpict import prettyForm class PolyPrettyPrinter(PrettyPrinter): """This printer prints polynomials nicely. """ def _print_Poly(self, poly): expr = poly.as_expr() gens = list(poly.gens) domain = poly.get_domain() pform_tail = self._print_seq([expr] + gens + [domain], '(', ')') pform = prettyForm(*pform_tail.left('Poly')) return pform def pretty_poly(expr, **settings): """Pretty-print polynomials nicely. """ p = PolyPrettyPrinter(settings) s = p.doprint(expr) return s Using pretty_poly() allows us to print polynomials in 2D and Unicode: >>> pretty_poly(Poly(x**2 + 1)) ⎛ 2 ⎞ Poly⎝x + 1, x, ℤ⎠ We can use techniques from the previous section to make this new pretty printer the default for all inputs. 1. Following the implementation of PolyPrettyPrinter, add a printer for Lambda which would use mapping notation (arrow) instead of lambda calculus-like notation. (solution) 2. Following the way Poly is printed by the str() printer, make PolyPrettyPrinter print the domain including the domain= string. (solution)

## Implementing printers from scratch¶

SymPy implements a variety of printers, and often extending the existing ones is sufficient, e.g. to optimize them for a certain problem domain or specific mathematical notation. However, we can also add completely new ones, for example to allow printing SymPy’s expressions with the syntax of other symbolic mathematics systems. Suppose we would like to translate SymPy’s expressions to Mathematica syntax. As of version 0.7.1, SymPy doesn’t implement such a printer, so we get to do it right now. Adding a new printer basically boils down to adding a new class, let’s say MathematicaPrinter, which derives from Printer and implements _print_* methods for all kinds of expressions we want to support. In this particular example we would like to be able to translate: • numbers • symbols • functions • exponentiation and compositions of all of those. A prototype implementation is as follows: from sympy.printing.printer import Printer from sympy.printing.precedence import precedence class MathematicaPrinter(Printer): """Print SymPy's expressions using Mathematica syntax. 
""" printmethod = "_mathematica" _default_settings = {} _translation_table = { 'asin': 'ArcSin', } def parenthesize(self, item, level): printed = self._print(item) if precedence(item) <= level: return "(%s)" % printed else: return printed def emptyPrinter(self, expr): return str(expr) def _print_Pow(self, expr): prec = precedence(expr) if expr.exp == -1: return '1/%s' % (self.parenthesize(expr.base, prec)) else: return '%s^%s' % (self.parenthesize(expr.base, prec), self.parenthesize(expr.exp, prec)) def _print_Function(self, expr): name = expr.func.__name__ args = ", ".join([ self._print(arg) for arg in expr.args ]) if expr.func.nargs is not None: try: name = self._translation_table[name] except KeyError: name = name.capitalize() return "%s[%s]" % (name, args) def mathematica(expr, **settings): """Transform an expression to a string with Mathematica syntax. """ p = MathematicaPrinter(settings) s = p.doprint(expr) return s Before we explain this code, let’s see what it can do: >>> mathematica(S(1)/2) 1/2 >>> mathematica(x) x >>> mathematica(x**2) x^2 >>> mathematica(f(x)) f[x] >>> mathematica(sin(x)) Sin[x] >>> mathematica(asin(x)) ArcSin[x] >>> mathematica(sin(x**2)) Sin[x^2] >>> mathematica(sin(x**(S(1)/2))) Sin[x^(1/2)] However, as we didn’t include support for Add, this doesn’t work: >>> mathematica(x**2 + 1) x**2 + 1 and very many other classes of expressions are printed improperly. If we need support for a particular class, we have to add another _print_* method to MathematicaPrinter. For example, to make the above example work, we have to implement _print_Add. 1. Make Mathematica printer correctly print $$\pi$$. (solution) 2. Add support for Add and Mul to Mathematica printer. In the case of products, allow both explicit and implied multiplication, and allow users to choose desired behavior by parametrization of Mathematica printer. (solution) ## Code generation¶ Besides printing of mathematical expressions, SymPy also implements Fortran and C code generation. The simplest way to proceed is to use codegen() which takes a tuple consisting of function name and an expression, or a list of tuples of this kind, language in which it will generate code (C for C programming language and F95 for Fortran, and file name: >>> from sympy.utilities.codegen import codegen >>> print codegen(("chebyshevt_20", chebyshevt(20, x)), "F95", "file")[0][1] !****************************************************************************** !* Code generated with sympy 0.7.1 * !* * !* * !* This file is part of 'project' * !****************************************************************************** REAL*8 function chebyshevt_20(x) implicit none REAL*8, intent(in) :: x chebyshevt_20 = 524288*x**20 - 2621440*x**18 + 5570560*x**16 - 6553600*x & **14 + 4659200*x**12 - 2050048*x**10 + 549120*x**8 - 84480*x**6 + & 6600*x**4 - 200*x**2 + 1 end function In this example we generated Fortran code for function chebyshevt_20 which allows use to evaluate Chebyshev polynomial of the first kind of degree 20. Almost the same way one can generate C code for this expression. 1. Generate C code for chebyshevt(20, x). 2. Make SymPy generate one file of Fortran or/and C code that would contain definitions of functions that would allow us to evaluate each of the first ten Chebyshev polynomials of the first kind.
2022-01-19 13:31:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.20051319897174835, "perplexity": 10192.399294021816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301341.12/warc/CC-MAIN-20220119125003-20220119155003-00363.warc.gz"}
https://math.stackexchange.com/questions/1722368/could-relational-operators-be-used-to-construct-formal-theory-of-natural-numbers
Could relational operators be used to construct formal theory of natural numbers which is “stronger” than Peano Axioms? This is a beginner's question about foundational construction of (alternative?) number theory. The notion of mathematical equality is closely related to logico-philosophical notion of 'Law of Identity', and Peano axioms start from defining equality (axioms 1-4 according to wiki article) with a certain logical apparatus, which presupposes law of identity, and only then proceed to "Successor function" of additive property. The basic intuition here is that it could and imho would be more natural to derive mathematical equivalence from more basic "countable" or "ordinal" relations, formally relational operators < and >, than from formal presupposition of Law of Identity which basically gives "=" as given axiom. Deriving equality from "neither less nor more", e.g. if not not equal "<>" then equal "=" seems better connected with universals of counting and ordinality in natural languages of the world (AFAIK at least ordered finite set 'one', 'two', 'three', 'many'/'more' can be found in all languages) than the Peano approach. Wiki article on relational operators indeed states that "relational operators can be designed to have logical equivalence, such that they can all be defined in terms of one another". If I'm not mistaken, the "cardinal" aspect of natural numbers, which arithmetic functions seemingly requires, could then be derived from relational equivalence in consistent manner. and cardinal numbers as relational identities would be subcategory of more fundamental ordinal relations < and >, and further relational operators derived from those. I won't attempt to fully formalize this idea here, but leave the task open for any takers. I don't yet know how this idea relates e.g. to Skolem's approach, Löwenheim–Skolem theorem and Skolem's paradox, ie., how dependent those are from the logical apparatus and set theoretical approach used, and do their results extend to mathematics more generally, and welcome all contributions. Your plan is not crazy, and is actually pretty close to how set theory is often (but not always) formalized, in a logical language where $\in$ is the only primitive predicate, and equality is a defined concept: "$x=y$" is an abbreviation for "$\forall z(z\in x\leftrightarrow z\in y)$". This approach is not as popular for arithmetic though. I have no doubt something similar could be made to work for arithmetic if one sat down and did the necessary footwork, but I'm less convinced that it would really buy us anything. One possible reason why it is so is that for first-order Peano arithmetic it is not enough to have $0$ and the successor function; we also need addition and multiplication as primitives -- and as far as I can see, having a primitive ordering would not relieve us of this. And it is pretty important for reasoning about addition and multiplication that they're functions, which means that if their inputs are the same, the output will also be the same. This requires us to have some notion of "the same" before we can state even the most basic properties of our primitive notions here. (This is not a completely airtight rationale, because we might state that addition and multiplication are both increasing in each argument, and get an axiom that is slightly stronger than simply saying that they are functions -- but on the other hand "stronger axiom" doesn't always translate to "better" in foundational contexts). 
If we go to the second-order Peano axioms, we don't need addition and multiplication to be primitive anymore -- but on the other hand I don't see that we can progress in a sane way without explicit axioms that state that all predicates and functions respect the usual equality rules: $$\forall P \forall x \forall y \bigl( x<y \lor y<x \lor (P(x)\leftrightarrow P(y)) \bigr)$$ $$\forall F \forall x \forall y \bigl( F(x)<F(y) \lor F(y)<F(x) \to x<y \lor y < x \bigr)$$ (was well as variants of these for all other arities) -- and these properties do not feel like they really flow naturally from the concept of comparing sizes; they are much easier to justify by appealing to an idea of "being the same". A more philosophical objection: Even if you can construct an argument that "more/less" is a more fundamental property of number than "the same" is, consider that logic comes before number. And our entire tradition of symbolic logic is deeply entrenched with concepts of "the same". In particular, when we use a variable letter in different places in a formula, it is implicit that those two instances will represent the same thing -- except to the extent that we use quantifiers to explicitly modulate that expectation. It would seem arbitrary not to allow the formulas themselves to speak explicitly about "sameness" as a primitive concept, when "sameness" is already so fundamental to how to interpret formulas intuitively. • Given that Gödel showed with considerable success that logic does not come before number, I'm hoping that also Wittgenstein can offer some help to think with more clarity about the "gödlematical" foundational crisis. As for 'same', already Plato has thorough discussion of that concept and it's dynamical and codependent relation with other related "supercategories" in Sophist. Perhaps the concept of "sameness" is more primitive to analytical school that all of philosophy, and you are correct to stress that much of the weight of history of philosophy is at stake. – Santeri Satama Apr 21 '16 at 0:51
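A compact formalization of the two definitions of equality discussed in this thread (an editorial sketch; in the first line "<" is assumed to be a total order on the numbers, and in the second "∈" is the primitive membership relation of set theory):

$$x = y \;:\Longleftrightarrow\; \neg(x < y) \land \neg(y < x)$$

$$x = y \;:\Longleftrightarrow\; \forall z\,(z \in x \leftrightarrow z \in y)$$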
2019-06-18 09:34:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8047709465026855, "perplexity": 695.5400163316}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998708.41/warc/CC-MAIN-20190618083336-20190618105336-00189.warc.gz"}
https://bioinformatics.stackexchange.com/questions/2971/can-we-use-non-base-called-fast5-files-in-poretools
# Can we use non base-called fast5 files in poretools?

I ran MinKNOW on the MinION without the live base-calling option. We know that Metrichor and Albacore can perform base-calling after this process. However, I have not done any base-calling yet. My question is: Is it possible to use the fast5 files directly with poretools to extract a fastq file, without any previous base-calling step? I tried it and I get an empty fastq file. My reads directory only contains the fast5 folder and I run: poretools fastq fast5/ > output.fastq Any ideas why I get an empty file? What is the difference between base-called fast5 and non-base-called fast5 files? Now I am trying to do base-calling with Albacore to see if I get a fastq file.

• You’ve edited the title - I should add that poretools can work with non base-called FAST5 files with some of its options, but not the fastq option. Some of the metadata options will function. – Scot Dec 5 '17 at 4:54

No, poretools does not do basecalling. The poretools fastq command can be used to extract the FASTQ information from a basecalled FAST5 file (produced via MinKNOW live-basecalling or albacore). Alternatively, both of these basecallers can export a FASTQ file directly if desired.
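The practical difference is whether the HDF5 container already holds a basecalling analysis group. A minimal sketch for checking this yourself (an editorial example; it assumes h5py is installed and the single-read FAST5 layout of that era, where basecallers write results under /Analyses/Basecall_* groups, whose exact names vary by basecaller and version):

```python
import sys
import h5py  # FAST5 files are ordinary HDF5 containers


def has_basecalls(path):
    """Return True if the FAST5 file already contains a basecalling analysis."""
    with h5py.File(path, "r") as f:
        analyses = f.get("Analyses")
        if analyses is None:
            return False  # raw signal only; nothing for `poretools fastq` to extract
        return any(name.startswith("Basecall") for name in analyses)


if __name__ == "__main__":
    for fast5 in sys.argv[1:]:
        print("%s: %s" % (fast5, "basecalled" if has_basecalls(fast5) else "raw only"))
```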
2022-01-23 19:18:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3726421296596527, "perplexity": 6033.881024361619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00469.warc.gz"}
https://arsiv.cclub.metu.edu.tr/problem/problemforcengiz/
## Problem for Cengiz View as PDF Points: 1 Time limit: 1.0s Memory limit: 256M Problem types Fahri has an interesting problem for Cengiz. There is an array of $$N$$ elements $$a_1, a_2, ..., a_N$$. Count the number of intervals $$[l, r]$$ such that • $$1 ≤ l ≤ r ≤ N$$, • $$a_l + a_{l+1} + ... + a_{r-1} + a_r < K$$. Could you help Cengiz to solve this problem? Input The first line contains integer $$N$$. The next line contains $$N$$ integers $$a_1, a_2,... a_N$$ separated with single spaces. The following line contains integer $$K$$. Output Print the number of intervals. Constraints • $$1 ≤ N ≤ 2 · 10^5$$ • $$-10^9 ≤ a_i ≤ 10^9$$ • $$−10^9 ≤ K ≤ 10^9$$ Samples Input(stdin) 4 1 1 1 2 3 Output(stdout) 6 Notes While solving the sample Cengiz counts six intervals: $$[1, 1], [2, 2], [3, 3], [4, 4], [1, 2], [2, 3].$$
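The problem page gives no editorial, so here is a sketch of one standard O(N log N) approach (an editorial addition, not the official solution): count pairs of prefix sums with prefix[r] − prefix[j] < K using a Fenwick tree over the coordinate-compressed prefix values. It reproduces the sample answer of 6, but is untuned for the 1.0 s limit.

```python
import sys
from bisect import bisect_right


def solve():
    data = sys.stdin.read().split()
    n = int(data[0])
    a = list(map(int, data[1:1 + n]))
    k = int(data[1 + n])

    # prefix[i] = a_1 + ... + a_i, with prefix[0] = 0
    prefix = [0] * (n + 1)
    for i in range(n):
        prefix[i + 1] = prefix[i] + a[i]

    # Coordinate-compress the prefix sums so they can index a Fenwick tree.
    sorted_vals = sorted(set(prefix))
    fen = [0] * (len(sorted_vals) + 1)

    def update(pos):            # add 1 at 1-based position pos
        while pos < len(fen):
            fen[pos] += 1
            pos += pos & -pos

    def query(pos):             # sum of positions 1..pos
        s = 0
        while pos > 0:
            s += fen[pos]
            pos -= pos & -pos
        return s

    answer = 0
    inserted = 0
    for r in range(n + 1):
        if r > 0:
            # Count previously inserted prefix[j] (j < r) with prefix[j] > prefix[r] - K:
            # all inserted values minus those <= prefix[r] - K.
            not_greater = query(bisect_right(sorted_vals, prefix[r] - k))
            answer += inserted - not_greater
        update(bisect_right(sorted_vals, prefix[r]))  # make prefix[r] available for later r
        inserted += 1

    print(answer)


solve()
```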
2022-06-30 04:01:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23999620974063873, "perplexity": 4953.763015874685}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103661137.41/warc/CC-MAIN-20220630031950-20220630061950-00577.warc.gz"}
https://www.nature.com/articles/s41563-022-01403-1?error=cookies_not_supported&code=d326fd84-b1f5-420d-9990-1619da52ab01
A magnetic continuum in the cobalt-based honeycomb magnet BaCo2(AsO4)2

Abstract

Quantum spin liquids (QSLs) are topologically ordered states of matter that host fractionalized excitations. A particular route towards a QSL is via strongly bond-dependent interactions on the hexagonal lattice. A number of Ru- and Ir-based candidate Kitaev QSL materials have been pursued, but all have appreciable non-Kitaev interactions. Using time-domain terahertz spectroscopy, we observed a broad magnetic continuum over a wide range of temperatures and fields in the honeycomb cobalt-based magnet BaCo2(AsO4)2, which has been proposed to be a more ideal version of a Kitaev QSL. Applying an in-plane magnetic field of ~0.5 T suppresses the magnetic order, and at higher fields, applying the field gives rise to a spin-polarized state. Under a 4 T magnetic field that was oriented principally out of plane, a broad magnetic continuum was observed that may be consistent with a field-induced QSL. Our results indicate BaCo2(AsO4)2 is a promising QSL candidate.

Data availability

The data that support the findings of this study are present in the paper and/or in the Supplementary Information, and are deposited in the Zenodo repository: https://doi.org/10.5281/zenodo.7026702. Additional data related to the paper are available from the corresponding author upon reasonable request.

Acknowledgements

This research was supported as part of the Institute for Quantum Matter, an Energy Frontier Research Center funded by the US Department of Energy’s Basic Energy Sciences programme under DE-SC0019331. N.P.A. had additional support from the Quantum Materials programme at the Canadian Institute for Advanced Research. We thank P. Chauhan and A. Legros for critical comments on this manuscript and H.-Y. Kee, G. Khaliullin and H. Liu for helpful conversations.

Author contributions

X.Z. performed the terahertz experiments and analysed the data. R.Z. and R.J.C. grew the single crystals. Y.X. and N.D. performed the Raman spectroscopy. T.H. and C.B. performed the magnetization experiments. X.Z. and N.P.A. prepared the first draft, and all authors contributed to writing the manuscript.

Corresponding author: Correspondence to N. P. Armitage.

Competing interests: The authors declare no competing interests.

Supplementary Information: Supplementary Figs. 1–16.

Zhang, X., Xu, Y., Halloran, T. et al. A magnetic continuum in the cobalt-based honeycomb magnet BaCo2(AsO4)2. Nat. Mater. (2022). https://doi.org/10.1038/s41563-022-01403-1
2022-12-08 21:26:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8236673474311829, "perplexity": 8138.615111573153}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711360.27/warc/CC-MAIN-20221208183130-20221208213130-00097.warc.gz"}
http://cms.math.ca/cmb/kw/parabolic%20Littlewood-Paley%20operator
Search results Search: All articles in the CMB digital archive with keyword parabolic Littlewood-Paley operator Results 1 - 1 of 1

1. CMB 2009 (vol 52 pp. 521) Chen, Yanping; Ding, Yong The Parabolic Littlewood--Paley Operator with Hardy Space Kernels In this paper, we give the $L^p$ boundedness for a class of parabolic Littlewood--Paley $g$-functions whose kernel function $\Omega$ is in the Hardy space $H^1(S^{n-1})$. Keywords: parabolic Littlewood-Paley operator, Hardy space, rough kernel Categories: 42B20, 42B25
2017-01-21 23:43:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6166907548904419, "perplexity": 7828.871005789036}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00197-ip-10-171-10-70.ec2.internal.warc.gz"}
https://emj.bmj.com/content/18/1/65
Article 5. An introduction to estimation—2: from z to t 1. P Driscoll, 2. F Lecky 1. Accident and Emergency Department, Hope Hospital, Salford M6 8HD, UK 1. Correspondence to: Mr Driscoll, Consultant in Accident and Emergency Medicine (pdriscoll{at}hope.srht.nwest.nhs.uk)

## Objectives

• Comparing a large sample with a population with unknown standard deviation • Using a large sample to estimate a population's probability value • Comparing a small sample with a population with unknown standard deviation In covering these objectives we will introduce the following terms: • Estimated standard error of the mean • Degrees of freedom • t statistic

## Introduction

In the previous article we found that it was possible to estimate the probability of getting an element greater than or equal to a particular value (X) in a population with the known parameters, mean (μ) and standard deviation (σ).1 In these cases the z statistic is calculated to locate the position of X in a standard normal distribution, where: $z = (X - \mu)/\sigma$. A similar process can be used when dealing with sample means. If a sufficient number of samples have been taken, and their means plotted, then they begin to take up a normal distribution. It can be shown mathematically that the mean of this distribution (μx) is the same as the population mean (μ). Furthermore, the standard deviation of the distribution is equal to σ/√n, where n is the number of cases in the sample. This is known as the standard error of the mean (SEM). To estimate the probability of getting a value greater than or equal to a particular sample mean (x), in a population with a known mean (μ) and standard deviation (σ), we again calculate the z statistic. However, as we are dealing with the distribution of the means, we use the SEM rather than the population's standard deviation: $z = (\bar{x} - \mu)/\mathrm{SEM} = (\bar{x} - \mu)/(\sigma/\sqrt{n})$. You will have noticed that both of these calculations are dependent upon knowing the population's mean and standard deviation. In clinical and experimental practice this is rarely the case. However, we know that the best single estimate we have for the parameter μ is our sample mean.1 Unfortunately the same does not apply to the sample's standard deviation. We get around this problem by using the estimated standard error of the mean.

## Estimated standard error of the mean

If we simply substituted the sample's standard deviation for σ to determine the SEM, we would end up with an underestimation of its true value. To overcome this we use an estimation of the population's standard deviation (s) using the following formula: $s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}}$, where: s is the estimate of the population standard deviation based upon the sample data n is the number in the sample n−1 is called the degrees of freedom Key point When a formula is dealing with descriptive statistics the degrees of freedom are equal to n. In contrast, when they are dealing with inferential statistics the degrees of freedom are smaller (for example, n−1). This is to compensate for the formula's tendency to underestimate the parameter being derived from the statistic. To test our understanding so far, consider Egbert Everard's continuing assessment of staff in the Emergency Department of Deathstar General. Egbert selects five female night nurses at random and weighs them (50 kg, 60 kg, 60 kg, 60 kg, 70 kg). What is Egbert's best estimate of the population mean and standard deviation based upon this sample?
The best estimate of the population mean is the sample mean: Estimated population mean = (sum of all measures/n) =300/5 = 60.0 kg The best estimation of the population's standard deviation is s where: $s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}} = \sqrt{\dfrac{(-10)^2 + 0^2 + 0^2 + 0^2 + 10^2}{5 - 1}} = \sqrt{50} \approx 7.1 \text{ kg}$ Key point We use s as a substitute for σ when trying to estimate the chances of getting a particular sample mean in a population with an unknown standard deviation. In these cases, rather than using the SEM: SEM = σ/√n we use the estimated SEM: ESEM = s/√n

## Comparing a large sample mean with a population with unknown SD

The ESEM will provide a close approximation of the SEM if the sample size is 100 or greater. Consequently, using the method described in the previous article,1 it is possible to determine: • The chance of getting a sample mean greater than or equal to a particular value • The value of a sample mean with a particular chance of occurring • The chance of getting a sample mean between two particular values

### THE CHANCE OF GETTING A SAMPLE MEAN GREATER THAN OR EQUAL TO A PARTICULAR VALUE

To demonstrate this consider the following example. Egbert wonders whether the female staff in the Emergency Directorate are as unfit as their male counterparts. To test this he measures the weight in 100 randomly selected female medical and nursing staff. The sample mean is 60 kg and s is 20 kg. From actuarial tables he finds that the mean weight for fit females is 55 kg, but the standard deviation is unknown. Egbert therefore wants to know what is the chance of getting a mean weight equal or greater to 60 kg from a sample that is still part of a normal fit female population? The sample size is large enough to allow the normal probability distribution to be used even though the standard deviation of the population is not known. The z statistic for this weight is therefore: z = (sample mean − population mean)/ESEM Where the ESEM is = s/√n = 20/√100 = 2 Therefore the z statistic is: (60 − 55)/2 = 2.5 Using the z statistic table, the area between z = 0 and z = 2.5 is 0.4938. Therefore the probability of getting a z value greater than or equal to 2.5 is: 0.5−0.4938 = 0.0062. Consequently the chances of a similar sample of fit women having a mean weight greater than or equal to 60 kg is 0.0062 or 0.62%.

### THE VALUE OF A SAMPLE MEAN WITH A PARTICULAR CHANCE OF OCCURRING

Using the same process described in article 4, Egbert can also determine what sample mean demarcates the top 2.5% of the population.1 1. Convert the 2.5% to the proportion 0.025 2. Determine the proportion of a standard normal distribution curve from the midline to 0.025. This is equal to 0.5–0.025 = 0.475 3. Convert the proportion 0.475 to a z statistic. Using the z statistic tables, 0.475 gives a z statistic of 1.96. 4. Using this value for z, determine the sample mean. Remembering that: z = (sample mean − population mean)/ESEM 1.96 = (sample mean − 55)/2 Therefore the element value is 3.92 + 55 = 59 kg (rounded up). Consequently there is a 2.5% chance that a randomly selected sample of 100 fit women would have a mean weight of 59 kg or greater.

### THE CHANCE OF GETTING A SAMPLE MEAN BETWEEN TWO PARTICULAR VALUES

Looking at the middle of the population, Egbert then wants to know what range of means from similar samples would demarcate the middle 95% of the population of fit women. As the upper 2.5% has already been calculated, Egbert calculates the value for the lower 2.5%. 
Using the same system as shown above he finds the lower 2.5% is: $55 - (1.96 \times 2) = 51 \text{ kg (rounded down)}$ Consequently the middle 95% of random samples of 100 fit women would have a mean weight between 51 to 59 kg. This range of values is known as the 95% confidence interval.1 In other words we are 95% confident that a random sample of 100 fit women from this population would have a mean weight between 51 and 59 kg. Therefore, provided the sample is large enough, the z statistic can be used to calculate confidence intervals when the population's standard deviation is not known. In these cases the confidence interval is equal to the sample mean plus/minus the z statistic appropriate for the level of confidence (zo) multiplied by the ESEM: $\text{confidence interval} = \bar{x} \pm (z_o \times \mathrm{ESEM})$ Key points • Provided the sample is big enough the ESEM can be used as a close approximation of the SEM. • It is therefore possible in these circumstances to determine the CI of the estimation of the population's mean (μ) when the population standard deviation (σ) is not known • Confidence interval = sample mean +/- (zo × ESEM) where zo is the z statistic for the appropriate level of confidence

## Estimating a population's probability values from a large sample

We have seen in the above example that it is possible to determine the confidence interval of the estimation of the population's mean when σ is not known. It is also possible to do the same thing with respect to determining the confidence interval for the population's probability (P) using the sample's probability value (p). This is because the binomial probability distribution becomes approximately normal in shape when the sample is large. Consequently the z statistic can again be used to determine confidence intervals. To demonstrate this consider the following example. In the midst of his study on emergency staff, Egbert has been asked by his consultant to determine the proportion of patients who are covered for tetanus. Ever keen to help, he teams up with Dr Endora Lonely, an SpR in the Emergency Department at the neighbouring hospital St Heartsinc. Together they survey 700 patients at random and find 550 have adequate immunisation against tetanus infection. The probability (p) of adequate tetanus immunisation is therefore: $p = 550/700 = 0.786$ This represents the proportion of adequate tetanus immunisation in the sample. Egbert therefore now needs to estimate what the proportion would be in a population of similar patients (P). As with the situation described previously, the best estimate for the population's probability is p—that is, 0.786. What is now needed is the confidence interval of this estimation. The formula for calculating the confidence interval for P from a large sample is: $\text{confidence interval} = p \pm z_o\sqrt{\dfrac{pq}{n}}$ where: • zo is the z statistic appropriate for the confidence interval • p is the probability we are concerned with (that is, tetanus covered) • q is the probability we are not concerned with (that is, not tetanus covered) This formula assumes that the sample is large and that the smaller of the two groups must have at least 10 cases. As these both apply in this example, Egbert calculates the 95% confidence interval to be: $0.786 \pm 1.96\sqrt{\dfrac{0.786 \times 0.214}{700}} = 0.786 \pm 0.030$ He therefore reports to his consultant that the proportion of patients adequately immunised against tetanus is 0.79 with a 95% confidence interval of 0.76 to 0.82.

## Comparing a small sample mean with a population with unknown SD

In clinical practice we commonly deal with sample sizes smaller than 100 from populations with unknown standard deviations. 
When dealing with such samples to make inferences about the population it is no longer valid to use the z statistic. To over come these difficulties, W S Gossett derived a replacement known as the t statistic. Statistics trivia (2) Gossett carried out his work while working in the Guinness Brewery in Dublin. It was based upon samples taken from a population made up of the heights of 3000 criminals. At the time the company would not allow employees to publish their own work. He therefore had to have his findings printed under the pseudonym “Student” in 1908. Hence the name “Student's t distribution” and “Student's t test”. ### THE t STATISTIC The t statistic is derived in a similar fashion to the z statistic: t = (sample mean − population mean)/ESEM Consequently the t statistic is the number of estimated SEM a particular sample mean lies above or below the population mean. ### t DISTRIBUTION CURVES The t statistic tables show the area under the curve between a particular t value and the tip of the tail (fig 1). Along the horizontal axis is the t value. These are equivalent to the SD seen with the normal distribution plots. Therefore the same principle applies regarding set areas under the curve representing particular probabilities. Figure 1 Extract of the t table. The first column lists the degrees of freedom (n − 1). The remaining columns give the probabilities (P) for t to exceed the values listed. Symmetry is used for negative t values. The curve from which the z statistics were derived remains constant irrespective of the number in the sample. Consequently z values of mean +/- 1.96 will always mark out the middle 95% of the population. In contrast, the t statistics vary with sample size because the shape of the distribution changes. It is always symmetrical but with small sample sizes the curve is flatter and has longer “tails”. This is a result of the variation in ESEM as the sample size changes. With larger samples the t distribution becomes indistinguishable from a normal distribution. Consequently in these cases the z and t statistic values are the same. Therefore, a relevant question at this stage is how small does the sample need to be before the use of the t statistic is necessary. There is no definite answer because it depends upon several factors, including the distribution of the data. For example, when the data are normally distributed the z statistic can be used when there is as little as 30 subjects in the sample. In general however, it is recommended that the t test should be used when dealing with ESEM derived from samples sizes that are less than 100.2 Key points • As the ESEM varies with sample size, the t statistic value will also vary with sample size • Smaller samples have the biggest differences between the z and t statistics • As the sample size increases the t distribution takes on a normal distribution ### USING THE t TABLE As there is a family of t distribution curves, depending upon the sample size, the t table does not look initially like the z statistic table (fig 1). However, each line of the table represents the equivalent of a whole z table for a particular sample size. The left column deals with the size of the sample. It is labelled the “degrees of freedom” rather than sample number because, for mathematical reasons, we need to use a value one less than the number in the sample. For example, the t statistics for a sample of 15 would be found along the line whose degree of freedom was 14. 
Therefore, for this sample size, 2.5% of the total area under the curve lies between a t value of +2.145 to the right tail tip. As described above, the t statistic allows estimations of the population's standard error of the mean to be made from the sample data. This enables you to determine, in a population with an unknown standard deviation: • The chance of getting a sample mean greater than or equal to a particular value • The value of a sample mean with a particular chance of occurring • The chance of getting a sample mean between two particular values

### THE CHANCE OF GETTING A SAMPLE MEAN GREATER THAN OR EQUAL TO A PARTICULAR VALUE

To demonstrate this, consider Egbert's result when he measured the resting heart rate in all the 25 male members of the department. He found the sample mean to be 70/minute with an s of 16. The population mean for fit men was found to be 60/minute. Therefore: $\mathrm{ESEM} = s/\sqrt{n} = 16/\sqrt{25} = 3.2$ Without knowing the population's SEM, Egbert must use the ESEM to determine the chance of getting a resting heart rate equal or greater to 70/minute if his department's men were part of a fit male population. The t statistic for this resting heart rate is: (sample mean − population mean)/ESEM Therefore the t statistic is: $(70 - 60)/3.2 = 3.13$ Using the t statistic table, for a sample size of 25, the area between t = 3.13 and the tip of the tail is less than 0.005. Consequently the chances of a sample having a resting heart rate greater than or equal to 70/minute in a fit male population is less than 0.5%. Key points • When a sample is less than 100, the t statistic should be used (rather than z) when making inferences about populations that are based upon the ESEM • You must use ESEM in these cases even if the population's standard deviation is known

### THE VALUE OF SAMPLE MEAN WITH A PARTICULAR CHANCE OF OCCURRING

Using the ESEM, Egbert then determines what random, 25 male person sample mean demarcates the top 2.5% of the population of fit men. This is carried out in a similar manner to before, but this time using the t statistic. Using the t statistic table, the proportion 0.025 is equal to 2.064. This value for t can then be used to determine the sample mean by remembering that: t = (sample mean − population mean)/ESEM Therefore: 2.064 = (sample mean − 60)/3.2 Consequently the sample mean is 6.6 + 60 = 67/minute (rounded up) Therefore 2.5% of random samples of 25 fit men would have a mean resting heart rate of 67/minute or greater.

### THE CHANCE OF GETTING A SAMPLE MEAN BETWEEN TWO PARTICULAR VALUES

Again with the ESEM, Egbert can determine the value of the sample means demarcating the middle 95% of the population of fit men. Using the same system, the lower 2.5% of the curve is demarcated by the t statistic −2.064 for a sample size of 25. With this value for t, the sample mean can be determined by: −2.064 = (sample mean − 60)/3.2 Therefore the element value is −6.6 + 60 = 53/minute (rounded down). It follows that the middle 95% of random samples of 25 fit men would have a mean resting heart rate between 53 to 67/minute. This also represents the 95% confidence interval—that is, we are 95% confident that a random sample of 25 men from this population would have a mean resting heart rate between 53 and 67 beats/minute. The t statistic can therefore be used to calculate confidence intervals. 
When using data from a sample, the confidence intervals are equal to the sample mean plus/minus the t statistic appropriate for the level of confidence (to) multiplied by the ESEM: $\text{confidence interval} = \bar{x} \pm (t_o \times \mathrm{ESEM})$ Key points • The t statistic enables the CI of the estimation of the population's mean (μ) to be determined when σ is not known • When using t to establish a confidence interval the population is assumed to be normally distributed • Confidence interval = sample mean +/− (to × ESEM) where to is the t statistic for the appropriate level of confidence • As a rough guide, the t statistic for the 95% confidence interval is usually around 2 Therefore, as an approximation, the true mean will lie within a range 2 ESEM above and below the sample mean.

## Summary

Provided the sample size is large enough (that is, n greater than 100), the z statistic can be used to determine the confidence interval estimation of the population mean even when the σ is not known. In these cases the estimation of the standard error of the mean is used. The z statistic is also valid when determining the population's proportion based upon a large sample. However, when dealing with smaller samples, the z statistic is replaced by the t statistic. This makes it possible to estimate, in a population with an unknown standard deviation: • The probability of getting a sample mean greater than or equal to a particular value • The value of a sample mean with a particular probability of occurring • The probability of getting a sample mean between two particular values The confidence interval for the estimation of the population mean can also be determined using the t statistic.

## Quiz

1. A sample of five patients with fractured necks of femurs was studied. The trolley waiting times were: 1, 2, 2, 2, 3 hours respectively. What is the best estimate for the population's mean and estimated standard deviation of the mean? 2. The systolic blood pressure (SBP) is measured in 144 randomly selected, elderly (over 70 years) male patients, presenting to Deathstar's Emergency Department. The mean SBP is 140 mm Hg and s = 30 mm Hg. What is the 95% confidence interval for the mean SBP for this population of patients? 3. Egbert and Endora are asked to determine the proportion of asthmatics who had their inhaler technique assessed before discharge from their emergency departments. After a year's study 160 were appropriately assessed out of a random sample of 200 asthmatics. What is the 99% confidence interval for the proportion assessed in the population? 4. Egbert is interested in the total cholesterol concentrations of patients presenting with chest pain. He finds the mean concentration is 8.1 mmol/l in a sample of 25 randomly selected patients. s is calculated to be 2.5 mmol/l. Assuming that the population is normally distributed, what is the 95% confidence interval for the population's mean cholesterol level? 5. One for you to try on your own. Endora repeats Egbert's resting heart rate study with 16 female nurses in the Emergency Department of St Heartsinc. 60, 66, 66, 62, 68, 70, 70, 70, 72, 72, 76, 76, 78, 78, 80, 80 beats/minute What is the 95% confidence interval for the population's mean resting heart rate?

Answers

1. The best estimate of the population mean is the sample mean: Estimated population mean = (sum of all measures/n) =10/5 = 2 hours The ESEM is s/√n, where: $s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}} = \sqrt{\dfrac{1 + 0 + 0 + 0 + 1}{4}} = 0.707$ therefore: ESEM = 0.707/√5 = 0.32 hours (approximately) 2. The best estimate for the population's mean SBP is the sample mean (that is, 140 mm Hg). 
The SEM of the population is not known, but the estimated standard error of the mean can be calculated:

ESEM = s/√n = 30/√144 = 2.5 mm Hg

As the sample is over 100, it is reasonable to assume the z statistic will be valid. Therefore the confidence interval for the estimated mean is: sample mean +/− (z₀ × ESEM). z₀ for a 95% confidence interval is 1.96. Therefore:

95% CI = 140 ± (1.96 × 2.5) = 140 ± 4.9, that is, 135.1 to 144.9 mm Hg

3. Again, the best estimation of the population's proportion is the sample's proportion. This is:

160/200 = 0.8 (80%)

As the smallest group is greater than 10, and the sample is large, it is valid to determine the confidence interval for this estimated proportion by the following formula:

Confidence interval = proportion ± z₀ × √(proportion × (1 − proportion)/n)

For a 99% confidence interval z₀ is 2.58. Therefore:

99% CI = 0.8 ± 2.58 × √(0.8 × 0.2/200) = 0.8 ± (2.58 × 0.028) = 0.8 ± 0.07, that is, 0.73 to 0.87 (73% to 87%)

4. The best estimation for the population mean is the sample mean—that is, 8.1 mmol/l. We do not know the standard deviation of the population but it is possible to calculate an estimation of the standard error of the mean:

ESEM = s/√n = 2.5/√25 = 0.5 mmol/l

The sample size is less than 100 but we know the population is normally distributed. Consequently the confidence intervals should be determined using the t statistic:

Confidence interval = sample mean ± (t₀ × ESEM)

In this case we are interested in the 95% confidence intervals. The sample size is 25, which means the degrees of freedom = 24. Using the t table this gives a t₀ of 2.064. Therefore:

95% CI = 8.1 ± (2.064 × 0.5) = 8.1 ± 1.0, that is, 7.1 to 9.1 mmol/l

## Acknowledgments

The authors would like to thank Sally Hollis, Jim Wardrope and Iram Butt for their invaluable suggestions.
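For readers who want to verify the arithmetic, here is a short sketch (Python with scipy, using only the values stated in the questions) that reproduces answers 2–4:

    import numpy as np
    from scipy import stats

    # Answer 2: 95% CI for the mean SBP (large sample, z statistic)
    esem = 30 / np.sqrt(144)                   # 2.5 mm Hg
    z95 = stats.norm.ppf(0.975)                # 1.96
    print(140 - z95 * esem, 140 + z95 * esem)  # ~135.1 to ~144.9 mm Hg

    # Answer 3: 99% CI for the proportion assessed
    p, n = 160 / 200, 200
    se = np.sqrt(p * (1 - p) / n)              # ~0.028
    z99 = stats.norm.ppf(0.995)                # ~2.58
    print(p - z99 * se, p + z99 * se)          # ~0.73 to ~0.87

    # Answer 4: 95% CI for the mean cholesterol (small sample, t statistic)
    esem = 2.5 / np.sqrt(25)                   # 0.5 mmol/l
    t95 = stats.t.ppf(0.975, df=24)            # ~2.064
    print(8.1 - t95 * esem, 8.1 + t95 * esem)  # ~7.1 to ~9.1 mmol/l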
2018-07-22 06:48:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 66, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8049296140670776, "perplexity": 694.9844682970498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00566.warc.gz"}
https://flask-monitoringdashboard.readthedocs.io/en/latest/functionality.html
# Detailed Functionality

The functionality of the Dashboard is divided into two main components: data collection and data visualization. You can find detailed information about both components below.

## 1. Data Collection

The amount of data collected by the Dashboard varies for each endpoint of the monitored Flask application, depending on the monitoring level selected. To select the monitoring level of your endpoints, you have to do the following (assuming you have successfully configured the Dashboard as described in the configuration page):

2. Go to the Overview tab in the left menu: http://localhost:5000/dashboard/overview
3. Select the endpoints that you want to monitor.
4. Select the desired monitoring level.

A summary of the monitoring levels is provided next. Note that every level keeps all the features of the level below, in addition to bringing its own new features, as represented in the diagram below.

### Monitoring Level 0 - Disabled

When the monitoring level is set to 0, the Dashboard does not monitor anything about the performance of the endpoint. The only data that is stored is when the endpoint was last requested.

### Monitoring Level 1 - Performance and Utilization Monitoring

When the monitoring level is set to 1, the Dashboard collects performance (as in response time) and utilization information for every request coming to that endpoint. The following data is recorded:

• Duration: the duration of processing that request.
• Time_requested: the timestamp of when the request was made.
• Version_requested: the version of the Flask application at the moment when the request arrived. This can either be retrieved via the VERSION value, or via the GIT value. If both are configured, the GIT value is used.
• group_by: An option to group the collected results. As most Flask applications have some kind of user management, this variable can be used to track the performance between different users. It is configured using the following command:

        def get_user_id():
            return 1234  # replace with a function to retrieve the id of the
                         # user within a request.

        dashboard.config.group_by = get_user_id
        # Note that the function itself is passed, not the result of the function.

  Thus, it becomes:

        from flask import Flask
        import flask_monitoringdashboard as dashboard  # import added so the snippet runs on its own

        app = Flask(__name__)  # added: the app object used below

        dashboard.config.init_from(file='/<path to file>/config.cfg')

        def get_user_id():
            return '1234'  # replace with a function to retrieve the id of the
                           # user within a request.

        dashboard.config.group_by = get_user_id

        dashboard.bind(app)

        @app.route('/')
        def index():
            return 'Hello World!'

        if __name__ == '__main__':
            app.run(debug=True)

  The group_by-function must be a function that either returns a primitive (bool, bytes, float, int, str), or a function, or a tuple/list. Below is a list with a few valid examples:

  | Code | Result |
  | --- | --- |
  | dashboard.config.group_by = lambda: 3 | 3 |
  | dashboard.config.group_by = lambda: ('User', 3) | (User,3) |
  | dashboard.config.group_by = lambda: lambda: 3 | 3 |
  | dashboard.config.group_by = ('User', lambda: 3) | (User,3) |

• IP: The IP-address from which the request is made. The IP is retrieved by the following code:

        from flask import request

### Monitoring Level 2 - Outliers

When the monitoring level is set to 2, the Dashboard collects extra information about slow requests. It is useful to investigate why certain requests take way longer to process than other requests. If this is the case, a request is seen as an outlier.
Mathematically, a request is considered an outlier if its execution takes a certain number of times longer than the average duration for requests coming to the same endpoint:

$$\text{duration}_{\text{outlier}} > \text{duration}_{\text{average}} \times \text{constant}$$

where $\text{duration}_{\text{average}}$ is the average execution time per endpoint, and $\text{constant}$ is given in the configuration by OUTLIER_DETECTION_CONSTANT (its default value is 2.5).

For such an outlier request, the Dashboard additionally collects:

• The stack trace in which it got stuck.
• The percentage of the CPUs that are in use.
• The current amount of memory that is used.
• Request values.
• Request environment.

The data that is collected from outliers can be seen by the following procedure:

1. Go to the Dashboard Overview: http://localhost:5000/measurements/overview
2. Click the endpoint for which you want to see the Outlier information.
3. Go to the Outliers tab: http://localhost:5000/dashboard/endpoint/:endpoint_id:/outliers

### Monitoring Level 3 - Profiler

When the monitoring level is set to 3, the Dashboard performs a statistical profiling of all the requests coming to that endpoint. What this means is that another thread will be launched in parallel with the one processing the request; it will periodically sample the processing thread and analyze its current stack trace. Using this information, the Dashboard will infer how long every function call inside the endpoint code takes to execute.

The profiler is one of the most powerful features of the Dashboard, pointing to where your optimization efforts should be directed, one level of abstraction lower than the performance monitoring of Level 1. To access this information, you have to:

1. Go to the Overview tab in the left menu: http://localhost:5000/dashboard/overview
2. Select an endpoint for which the monitoring level is or was at some point at least 2.
3. Go to the Profiler tab: http://localhost:5000/dashboard/endpoint/:endpoint_id:/profiler
4. Go to the Grouped Profiler tab: http://localhost:5000/dashboard/endpoint/:endpoint_id:/grouped-profiler

The Profiler tab shows all individual profiled requests of an endpoint in the form of an execution tree. Each code line is displayed along with its execution time and its share of the total execution time of the request.

The Grouped Profiler tab shows the merged execution of up to the 100 most recent profiled requests of an endpoint. This is displayed both as a table and as a Sunburst graph. The table shows, for each code line, information about the Hits (i.e. how many times it has been executed), the average execution time and standard deviation, and also the total execution time.

## 2. Data Visualization

The Dashboard shows the collected data by means of two levels of abstraction: application-wide and endpoint-specific.

### Application

Visualizations showing aggregated data of all the endpoints (with monitoring level at least 1) in the application can be found under the Dashboard menu:

1. Overview: A table with all the endpoints that are being monitored (or have been monitored in the past). This table provides information about when the endpoint was last requested, how often it is requested and what the current monitoring level is. Each endpoint can be clicked to access the Endpoint-specific visualizations.
2. Hourly API Utilization: This graph provides information for each hour of the day of how often the endpoint is being requested. In this graph it is possible to detect popular hours during the day.
3. Multi Version API Utilization: This graph provides information about the distribution of the utilization of the requests per version. That is, how often (in percentages) a certain endpoint is requested in a certain version.
4. Daily API Utilization: This graph provides a row of information per day. In this graph, you can find whether the total number of requests grows over days.
5. API Performance: This graph provides a row of information per endpoint. In that row, you can find all the requests for that endpoint. This provides information about whether certain endpoints perform better (in terms of execution time) than other endpoints.
6. Reporting: A more experimental feature which aims to automatically detect and report changes in performance for various intervals (e.g. today vs. yesterday, this week vs. last week, etc.).

### Endpoint

For each endpoint in the Overview page, you can click on the endpoint to get more details. This provides the following information (all information below is specific to a single endpoint):

1. Hourly API Utilization: The same hourly load as explained in (2) above, but this time it is focused on the data of that particular endpoint only.
2. User-Focused Multi-Version Performance: A circle plot with the average execution time per user per version. Thus, this graph consists of 3 dimensions (execution time, users, versions). A larger circle represents a higher execution time.
3. IP-Focused Multi-Version Performance: The same type of plot as 'User-Focused Multi-Version Performance', but now the users are replaced by IP addresses.
4. Per-Version Performance: A horizontal box plot with the execution times for a specific version. This graph is equivalent to (4), but now it is focused on the data of that particular endpoint only.
5. Per-User Performance: A horizontal box plot with the execution time per user. In this graph, it is possible to detect if there is a difference in the execution time between users.
6. Profiler: A tree with the execution path for all requests.
7. Grouped Profiler: A tree with the combined execution paths for all (at most 100) requests of this endpoint.
8. Outliers: The extra information collected on outlier requests.

Just as no two applications are the same, we understand that monitoring requirements differ for every use case. While all the above visualizations are included by default in the FMD and answer a wide range of questions posed by the typical web application developer, you can also create your own custom visualizations tailored to your needs. You might wish to know how the number of unique users, the size of your database, or the total number of endpoints have evolved over time. This is now easy to visualize using FMD. An example of a custom graph is shown below. FMD will execute on_the_minute() every minute at the second 01 and the graph will appear in the Custom graphs menu.

    import datetime             # added: needed for datetime.datetime.now() below
    from random import random   # added: needed for random() below

    def on_the_minute():
        print(f"On the minute: {datetime.datetime.now()}")
        return int(random() * 100 // 10)

    minute_schedule = {'second': 00}

    dashboard.add_graph("On Half Minute", on_the_minute, "cron", **minute_schedule)

Note the "cron" argument to add_graph. Just like in the case of the unix cron utility you can use more complex schedules. For example, if you want to collect the data every day at midnight you would use:

    midnight_schedule = {'month': "*",
                         'day': "*",
                         'hour': 23,
                         'minute': 59,
                         'second': 00}

Besides cron, there's also the "interval" schedule type, which is exemplified in the following snippet:
{datetime.datetime.now()}") return int(random() * 100 // 10) every_ten_seconds_schedule = {'seconds': 10} dashboard.add_graph("Every 10 Seconds", every_ten_seconds, "interval", **every_ten_seconds_schedule) Note that not all fields in the schedule dictionary are required, only the non-zero / non-star ones. Also, note that in the “cron” graph types you use singular names (e.g. second) while in the “interval” you use plurals (e.g. seconds). Finally, the implementation of the scheduler in the FMD is based on the appscheduler.schedulers.Background schedulers about which you can read more in the corresponding documentation page.
2021-10-25 19:46:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21213175356388092, "perplexity": 1982.9912379715906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00133.warc.gz"}
https://tex.stackexchange.com/questions/461420/align-nested-items-in-separate-frames-in-beamer
# Align nested items in separate frames in beamer

I am working in beamer. I have used an itemize list in one slide, and I want to continue the indent of the nested list from the first slide on the next slide. Note that I do not want to use allowframebreaks. How can I achieve this? The situation is shown in the figure below. From the figure, you can see that I need to align "nested thing 2" with "blah" in the itemize environment.

MWE:

    \begin{frame}{First frame}
      \begin{itemize}
        \item something
        \item some other thing
        \begin{itemize}
          \item[$\hookrightarrow$] nested thing
          \item[$\hookrightarrow$] nested thing 2
        \end{itemize}
      \end{itemize}
    \end{frame}

    \begin{frame}{Second frame}
      \begin{itemize}
        \begin{itemize}
          \item[$\hookrightarrow$] blah
          \item[$\hookrightarrow$] blah blah
        \end{itemize}
      \end{itemize}
    \end{frame}

However, I get an error along with the output: "Something's wrong--perhaps a missing \item". Is there any workaround, such as using \setlength, etc.?

You could add an empty dummy \item to the top-level list (the error appears because the inner itemize produces material before the outer itemize has seen any \item):

    \documentclass{beamer}
    \begin{document}

    \begin{frame}{First frame}
      \begin{itemize}
        \item something
        \item some other thing
        \begin{itemize}
          \item[$\hookrightarrow$] nested thing
          \item[$\hookrightarrow$] nested thing 2
        \end{itemize}
      \end{itemize}
    \end{frame}

    \begin{frame}{Second frame}
      \begin{itemize}
        \item[]
        \begin{itemize}
          \item[$\hookrightarrow$] blah
          \item[$\hookrightarrow$] blah blah
        \end{itemize}
      \end{itemize}
    \end{frame}

    \end{document}
2019-10-19 04:37:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7687329053878784, "perplexity": 5714.144010988088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986688826.38/warc/CC-MAIN-20191019040458-20191019063958-00376.warc.gz"}
http://accessmedicine.mhmedical.com/Content.aspx?bookId=348&sectionId=40381705
Chapter 222

The hyperosmolar hyperglycemic state (HHS) is characterized by progressive hyperglycemia and hyperosmolarity, typically found in a debilitated patient with poorly controlled or undiagnosed type II diabetes mellitus (DM), limited access to water, and, commonly, a precipitating medical event. In view of its frequent association with concurrent illnesses and its prevalence in debilitated patients, mortality estimates for HHS are significantly higher than those for diabetic ketoacidosis (DKA). Readers are likely to encounter a host of terms used to describe this disease state, which may include hyperosmotic, non-ketotic, hyperglycemic, and coma; the syndrome does not necessarily include ketosis or coma. This chapter uses the terminology adopted by the American Diabetes Association, "Hyperosmolar Hyperglycemic State (HHS)."

The basic epidemiology of diabetes is discussed in Chapter 218, Type 1 Diabetes Mellitus, and Chapter 219, Type 2 Diabetes Mellitus. Prevalence rates for type 2 diabetes are estimated to be doubling every 10 years in developed countries worldwide, with the prevalence rate in the U.S. for those >60 years old estimated at 20.9%.2 Over the past few decades, with advances in monitoring, treatment, and education, mortality rates from hyperglycemic crises appear to have declined by half, although they remain unacceptably high.3 In view of its frequent association with concurrent illnesses and its prevalence in debilitated patients, mortality estimates for HHS are significantly higher than those for DKA, which is estimated at 2.4%.4

The basic pathophysiology of DM is discussed in Chapter 218, Type 1 Diabetes Mellitus, and Chapter 219, Type 2 Diabetes Mellitus. The development of HHS is attributed to three main factors: (1) insulin resistance or deficiency, or both; (2) increased hepatic gluconeogenesis and glycogenolysis; and (3) osmotic diuresis and dehydration followed by impaired renal excretion of glucose. In a patient with type 2 DM, physiologic stresses combined with inadequate water intake in an environment of insulin resistance or deficiency lead to HHS. Insulin resistance is the condition in which normal amounts of insulin are inadequate to produce a normal insulin response from fat, muscle, and liver cells. Insulin deficiency is the secretion of less insulin than necessary. Regardless of whether the state is insufficiency or resistance, the result is impaired peripheral utilization of glucose, an increase in hepatic glucose production, and hyperglycemia.

As serum glucose concentration increases, an osmotic gradient develops, attracting water from the intracellular space into the intravascular compartment. This initial increase in intravascular volume is accompanied by a temporary increase in the glomerular filtration rate. As serum glucose concentration rises above 180 milligrams/dL, the capacity of the kidneys to reabsorb glucose is exceeded, and glucosuria and a profound osmotic diuresis occur. Patients with easy access to water are often able to prevent profound volume depletion by replacing fluid losses with large free water intake. If this water requirement is not met (as may occur in a nonambulatory nursing home patient), profound volume depletion occurs. During osmotic ...
2016-08-25 06:09:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24142348766326904, "perplexity": 6881.807766707114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292944.18/warc/CC-MAIN-20160823195812-00152-ip-10-153-172-175.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/93274/unpublished-work-of-wielandt
# Unpublished work of Wielandt

Wielandt wrote a paper titled "Remarks on diagonable matrices". According to Mathematische Werke - Mathematical Works: Linear Algebra and Analysis by Helmut Wielandt, Hans Schneider, Bertram Huppert (Editor), page 260, this paper of Wielandt's remained unpublished (at least from the 1950s to the 1980s). Does anyone have a copy of it, or an idea of the proof for non-defective pencils?

The main theorem states that for $A,B \in \mathcal{M}_n(\mathbb C)$, if in the pencil $\lambda A+ \mu B$ all matrices are diagonalizable ($\forall \lambda, \mu \in \mathbb{C}$), then $AB=BA$. Motzkin and Taussky proved that result (MR0086781 (19,242c)) using algebraic geometry; Kato proved it differently (MR1335452 (96a:47025)), using the theory of complex functions of one variable. Wielandt seemed to have given another proof, hence my request. Thanks

-

Could you state precisely the theorem you're looking for? – Deane Yang Apr 6 '12 at 7:19

@Deane, I updated the post with the theorem and references. Thanks – Portland Apr 6 '12 at 15:01

Just curious.. is there an analogous result for the case $AB = -BA$? – J.A Apr 6 '12 at 17:55

Wielandt's notebooks were TeXed and published online at www4.math.tu-berlin.de/numerik/mt/Wielandt/index_en.html, but now the page seems to be down (at least from my browser). If the page is not back up by Tuesday, I can ask for information in person --- I work in the same department. Today and Monday are bank holiday days in Germany so the admins are not there for sure, sorry. – Federico Poloni Apr 6 '12 at 17:57

Are you looking for Wielandt's proof or just a proof that uses only linear algebra? Have you already looked at Gantmacher's books on the theory of matrices? – Deane Yang Apr 11 '12 at 8:30

Now that the server is back up, I am posting this as a real answer. With some work, you might be able to find the proof in Wielandt's notebooks, which were TeXed and put online here. The TeX source files are also published, so you can download them and use an automated search tool. Nevertheless, there's lots of material there, so it is not an easy task if you have no idea which period to look at.

- Perfect, thank you Federico! – Portland Apr 12 '12 at 16:26

The Motzkin–Taussky theorem in fact states an equivalence between the two propositions you gave. If you understand French, here is a paper presenting a problem (starting on page 16 and running to page 22) whose purpose is to prove that theorem. http://agreg.org/Rapports/rapport2009.pdf have fun!

-
2014-07-26 19:38:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7664197683334351, "perplexity": 844.932048877273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997904391.23/warc/CC-MAIN-20140722025824-00225-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mchamber.org.mk/(S(nrvs3j55qnkvzt45jnk0k4e3))/images/imgHandler.ashx?width=2000&image=KIFWB6-logo.jpg
2020-06-05 06:44:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936184644699097, "perplexity": 156.09763319626862}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348493151.92/warc/CC-MAIN-20200605045722-20200605075722-00296.warc.gz"}
https://www.atmos-chem-phys.net/19/2001/2019/
Atmos. Chem. Phys., 19, 2001–2013, 2019
https://doi.org/10.5194/acp-19-2001-2019

Research article | 14 Feb 2019

# Rate constant and secondary organic aerosol formation from the gas-phase reaction of eugenol with hydroxyl radicals

Changgeng Liu1,2, Yongchun Liu3, Tianzeng Chen1,5, Jun Liu1,5, and Hong He1,4,5

• 1State Key Joint Laboratory of Environment Simulation and Pollution Control, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China
• 2School of Biological and Chemical Engineering, Panzhihua University, Panzhihua 617000, China
• 3Beijing Advanced Innovation Center for Soft Matter Science and Engineering, Beijing University of Chemical Technology, Beijing 100029, China
• 4Center for Excellence in Regional Atmospheric Environment, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen 361021, China
• 5University of Chinese Academy of Sciences, Beijing 100049, China

Correspondence: Yongchun Liu ([email protected]) and Hong He ([email protected])

Abstract

Methoxyphenols are an important organic component of wood-burning emissions and considered to be potential precursors of secondary organic aerosol (SOA). In this work, the rate constant and SOA formation potential for the OH-initiated reaction of 4-allyl-2-methoxyphenol (eugenol) were investigated for the first time in an oxidation flow reactor (OFR). The rate constant was $(8.01 \pm 0.40)\times 10^{-11}$ cm3 molecule−1 s−1, determined by the relative rate method. The SOA yield first increased and then decreased as a function of OH exposure and was also dependent on eugenol concentration. The maximum SOA yields (0.11–0.31) obtained at different eugenol concentrations could be expressed well by a one-product model. The carbon oxidation state (OSC) increased linearly and significantly as OH exposure rose, indicating that a high oxidation degree was achieved for SOA. In addition, the presence of SO2 (0–198 ppbv) and NO2 (0–109 ppbv) was conducive to increasing SOA yield, for which the maximum enhancement values were 38.6 % and 19.2 %, respectively. The N∕C ratio (0.032–0.043) indicated that NO2 participated in the OH-initiated reaction, subsequently forming organic nitrates. The results could be helpful for further understanding the SOA formation potential from the atmospheric oxidation of methoxyphenols and the atmospheric aging process of smoke plumes from biomass burning emissions.

1 Introduction

Wood combustion is a major contributor to atmospheric fine particulate matter (PM) (Bruns et al., 2016), which could contribute approximately 10 %–50 % of the total organic fraction of atmospheric aerosols (Schauer and Cass, 2000). In some regions with cold climates, woodsmoke-associated aerosols are estimated to account for more than 70 % of PM2.5 in winter (Jeong et al., 2008; Ward et al., 2006). Recently, the significant potential for secondary organic aerosol (SOA) formation from woodsmoke emissions has been reported (Bruns et al., 2016; Gilardoni et al., 2016; Tiitta et al., 2016; Ciarelli et al., 2017; Ding et al., 2017).
In addition, the organic compounds derived from wood combustion and their oxidation products may contribute significantly to global warming due to their light-absorbing properties (Chen and Bond, 2010). It has been reported that woodsmoke particles are predominant in the inhalable size range (Bari et al., 2010) and their extracts are mutagenic (Kleindienst et al., 1986). Exposure to woodsmoke can result in adverse health effects such as acute respiratory infections, tuberculosis, lung cancer, and cataracts (Bolling et al., 2009). Therefore, wood combustion has multifaceted impacts on climate, air quality, and human health. Methoxyphenols produced by lignin pyrolysis are potential tracers for woodsmoke, and their emission rates are in the range of 900–4200 mg kg−1 wood (Schauer et al., 2001; Simpson et al., 2005; Nolte et al., 2001). The highest level of methoxyphenols in the atmosphere always appears during a woodsmoke-dominated period, with observed values up to several mg m−3 (Schauer and Cass, 2000; Schauer et al., 2001; Simpson et al., 2005). Methoxyphenols are semi-volatile aromatic compounds with low molecular weight, and many of them are found to mainly exist in the gas phase at typical ambient temperature (Schauer et al., 2001; Simpson et al., 2005). Thus, methoxyphenols can be chemically transformed through gas-phase reactions with atmospheric oxidants (Coeur-Tourneur et al., 2010a; Lauraguais et al., 2012, 2014a, b, 2015, 2016; Yang et al., 2016; Zhang et al., 2016; El Zein et al., 2015). The corresponding rate constants control their effectiveness as stable tracers for wood combustion and atmospheric lifetimes. In recent years, the rate constants for the gas-phase reactions of some methoxyphenols with hydroxyl (OH) radicals (Coeur-Tourneur et al., 2010a; Lauraguais et al., 2012, 2014b, 2015), nitrate (NO3) radicals (Lauraguais et al., 2016; Yang et al., 2016; Zhang et al., 2016), chlorine atoms (Cl) (Lauraguais et al., 2014a), and ozone (O3) (El Zein et al., 2015) have been determined. Some studies have indicated significant SOA formation from 2,6-dimethoxyphenol (syringol) and 2-methoxyphenol (guaiacol) with respect to their reactions with OH radicals (Sun et al., 2010; Lauraguais et al., 2012, 2014b; Ahmad et al., 2017; Yee et al., 2013; Ofner et al., 2011). Although biomass burning emissions have been indicated to have great SOA formation potential via atmospheric oxidation (Bruns et al., 2016; Gilardoni et al., 2016; Li et al., 2017; Ciarelli et al., 2017; Ding et al., 2017), SOA formation and growth from methoxyphenols are still poorly understood. Besides, the observed SOA levels in the atmosphere cannot be explained well by the present knowledge of SOA formation, which reflects the fact that a large number of precursors are not taken into account in the SOA formation reactions included in atmospheric models (Lauraguais et al., 2012). 4-Allyl-2-methoxyphenol (eugenol) is a typical methoxyphenol produced by lignin pyrolysis with a branched alkene group. It is widely detected in the atmosphere with a concentration of the order of ng m−3, which is comparable to that of other methoxyphenols (e.g., guaiacol and syringol) (Schauer et al., 2001; Simpson et al., 2005; Bari et al., 2009). Its average emission concentration and factor in beech burning are 0.032 µg m−3 and 1.52 µg g−1 PM, respectively, which are both higher than those (0.016 µg m−3 and 0.762 µg g−1 PM) of guaiacol (Bari et al., 2009). 
It has even been detected in human urine after exposure to woodsmoke (Dills et al., 2006). Eugenol has been observed to mainly distribute in the gas phase in woodsmoke emissions (Schauer et al., 2001), and its gas–particle partition coefficient is lower than 0.01 (Zhang et al., 2016), thus indicating the importance of its gas-phase reactions in the atmosphere. For this reason, the aim of this work was to determine the rate constant and explore the SOA formation potential for eugenol in gas-phase reactions with OH radicals using an oxidation flow reactor (OFR). In addition, the effects of SO2 and NO2 on SOA formation were investigated. To our knowledge, this work represents the first determination of the rate constant and SOA yield for the gas-phase reaction of eugenol with OH radicals.

2 Experimental section

The detailed schematic description of the experimental system used in this work is shown in Figs. S1 and S2 in the Supplement. The gas-phase reactions were conducted in the OFR, a detailed description of which has been presented elsewhere (Liu et al., 2014b). Before entering into the OFR, gas-phase species were mixed thoroughly in the mixing tube. The reaction time in the OFR was 26.7 s, calculated according to the illuminated volume (0.89 L) and the total flow rate (2 L min−1). OH radicals were generated by the photolysis of O3 in the presence of water vapor using a 254 nm UV lamp (Jelight Co., Inc.), and their formation reactions have been described elsewhere (Zhang et al., 2017). The concentration of OH radicals was governed by O3 concentration and relative humidity (RH). O3 concentration was controlled by changing the unshaded length of a 185 nm UV lamp (Jelight Co., Inc.). O3 with a concentration of 0.94–9.11 ppmv in the OFR was produced by passing zero air through an O3 generator (model 610-220, Jelight Co., Inc.), which was used to produce OH radicals. RH and temperature in the OFR were 44.0±2.0 % and 301±1 K, respectively, measured at the outlet of the OFR. The steady-state concentrations of OH radicals were determined using SO2 as the reference compound in separate calibration experiments. It is a widely used method for calculating OH exposure in the OFR, but it could not sufficiently describe the potential OH suppression caused by the added external OH reactivity (Zhang et al., 2017; Lambe et al., 2015; Simonen et al., 2017; Li et al., 2015; Peng et al., 2015, 2016). The decay of SO2 from its reaction with OH radicals ($9\times 10^{-13}$ cm3 molecule−1 s−1) (Davis et al., 1979) was measured by an SO2 analyzer (model 43i, Thermo Fisher Scientific Inc.). The concentration of OH radicals ([OH]) in this work ranged from approximately $4.5\times 10^{9}$ to $4.7\times 10^{10}$ molecules cm−3, and the corresponding OH exposures were in the range of (1.21–12.55)$\times 10^{11}$ molecules cm−3 s, or approximately 0.93 to 9.68 days of equivalent atmospheric exposure, calculated using a typical [OH] of $1.5\times 10^{6}$ molecules cm−3 in the atmosphere (Mao et al., 2009). An Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) was applied to perform online measurements of the chemical composition of particles and the non-refractory submicron aerosol mass (DeCarlo et al., 2006). The size distribution and concentration of particles were monitored by a scanning mobility particle sizer (SMPS) consisting of a differential mobility analyzer (DMA) (model 3082, TSI Inc.) and a condensation particle counter (CPC) (model 3776, TSI Inc.).
Assuming that the particles are spherical and nonporous, the average effective particle density was calculated to be 1.5 g cm−3 from $\rho = d_{\mathrm{va}}/d_{\mathrm{m}}$ (DeCarlo et al., 2004), where $d_{\mathrm{va}}$ is the mean vacuum aerodynamic diameter measured by the HR-ToF-AMS and $d_{\mathrm{m}}$ is the mean volume-weighted mobility diameter measured by the SMPS. The particle sizing of the HR-ToF-AMS was calibrated using NH4NO3 particles with diameters between 60 and 700 nm selected by a DMA. The particle mass concentration measured by the HR-ToF-AMS was corrected with the SMPS data using the same method as Gordon et al. (2014). Eugenol and the reference compounds were measured by a high-resolution proton-transfer-reaction time-of-flight mass spectrometer (HR-ToF-PTRMS) (Ionicon Analytik GmbH). More experimental details are described in the Supplement.

3 Results and discussion

## 3.1 Rate constant

The possible effect of O3 on the decay of eugenol and the reference compounds was investigated in this work. As shown in Fig. S3, their concentrations were not affected by O3, and no SOA formation was observed by the SMPS and HR-ToF-AMS. In addition, to assess possible photolysis of eugenol and the reference compounds under the 254 nm UV light in the OFR, comparative experiments were conducted with the UV lamp turned on and off while eugenol and the reference compounds were introduced into the OFR. The normalized mass spectra of eugenol and the reference compounds in the dark and in the light are shown in Fig. S4. No significant decay (< 5 %) by photolysis was observed, so photolysis could be neglected. According to the results reported by Peng et al. (2016), the photolysis of phenol and 1,3,5-trimethylbenzene can be ignored when the ratio of 254 nm exposure to OH exposure is lower than $1\times10^{6}$ cm s−1, a condition also met by the values in this work ($1.6\times10^{2}$ to $1.7\times10^{3}$ cm s−1). In addition, the initial concentration of eugenol was determined with the UV lamp turned on. Therefore, the effect of photolysis could be neglected in this work, although it cannot be ruled out that photolysis under UV irradiation might influence the evolution of the oxidation products.

The rate constant for the gas-phase reaction of eugenol with OH radicals was determined by the relative rate method, which can be expressed by the following equation (Coeur-Tourneur et al., 2010a; Yang et al., 2016; Zhang et al., 2016):

$$\ln\left(C_{\mathrm{E}0}/C_{\mathrm{E}t}\right) = \left(k_{\mathrm{E}}/k_{\mathrm{R}}\right)\ln\left(C_{\mathrm{R}0}/C_{\mathrm{R}t}\right), \tag{1}$$

where $C_{\mathrm{E}0}$ and $C_{\mathrm{E}t}$ are the initial and real-time concentrations of eugenol, respectively, and $k_{\mathrm{E}}$ is the rate constant of the eugenol reaction with OH radicals. $C_{\mathrm{R}0}$ and $C_{\mathrm{R}t}$ are the initial and real-time concentrations of the reference compound, respectively, and $k_{\mathrm{R}}$ is the rate constant of the reference compound with OH radicals, with values for m-xylene and 1,3,5-trimethylbenzene of $2.20\times10^{-11}$ and $5.67\times10^{-11}$ cm3 molecule−1 s−1, respectively (Kramp and Paulson, 1998; Coeur-Tourneur et al., 2010a).

Table 1. Rate constant for the gas-phase reaction of eugenol with OH radicals and the associated atmospheric lifetime. a Units of 10−11 cm3 molecule−1 s−1. b Atmospheric lifetime in hours,
calculated as $\tau_{\mathrm{OH}} = 1/(k_{\mathrm{E}}[\mathrm{OH}])$, assuming a 24 h average $[\mathrm{OH}] = 1.5\times10^{6}$ molecules cm−3 (Mao et al., 2009). c Calculated using the US EPA AOP WIN model (US EPA, 2012).

The data obtained from the reactions were plotted in the form of Eq. (1) and were well fitted by linear regression ($R^2 > 0.97$; Fig. 1). A summary of the slopes and the rate constants is given in Table 1. The errors in $k_{\mathrm{E}}/k_{\mathrm{R}}$ are the standard deviations from the linear regression analysis and do not include the uncertainty in the rate constants of the reference compounds. The rate constants are $(7.54\pm0.28)\times10^{-11}$ and $(8.47\pm0.51)\times10^{-11}$ cm3 molecule−1 s−1 when using 1,3,5-trimethylbenzene and m-xylene as reference compounds, respectively. According to the US EPA AOP WIN model, which is based on the structure–activity relationship (SAR) (US EPA, 2012), the rate constant is calculated to be $6.50\times10^{-11}$ cm3 molecule−1 s−1 (Table 1), lower than that obtained in this work. Inaccurate predictions of the AOP WIN model have also been observed for other multifunctional organics, owing to an imperfect representation of the electronic effects of the different functional groups on reactivity (Coeur-Tourneur et al., 2010a; Lauraguais et al., 2012). In addition, differences between density functional theory (DFT) calculations and laboratory studies have also been observed; for example, the DFT-predicted rate constant of 2-methoxyphenol with OH radicals ($12.19\times10^{-11}$ cm3 molecule−1 s−1) is higher than the laboratory value ($7.53\times10^{-11}$ cm3 molecule−1 s−1) (Coeur-Tourneur et al., 2010a; Priya and Lakshmipathi, 2017). This suggests that it is necessary to determine the rate constants of multifunctional organics through laboratory experiments.

The rate constant determined in this work can be used to calculate the atmospheric lifetime of eugenol with respect to its reaction with OH radicals. Assuming a typical 24 h average [OH] of $1.5\times10^{6}$ molecules cm−3 (Mao et al., 2009), the corresponding lifetime of eugenol is $2.31\pm0.12$ h for an average rate constant of $(8.01\pm0.40)\times10^{-11}$ cm3 molecule−1 s−1. This short lifetime indicates that eugenol is too reactive to be used as a stable tracer for woodsmoke emissions, and it also implies that eugenol may be rapidly converted from the gas phase to secondary aerosol during atmospheric transport.

Figure 1. Relative rate plots for the gas-phase reaction of OH radicals with eugenol.

The rate constant obtained in this work is about 2 orders of magnitude larger than that for the reaction of eugenol with NO3 radicals ($1.6\times10^{-13}$ cm3 molecule−1 s−1) (Zhang et al., 2016), which suggests that the OH-initiated reaction of eugenol might be its main chemical transformation pathway in the atmosphere.
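The relative rate analysis reduces to simple arithmetic once the regression slopes are known. The sketch below is only an illustration: the slopes are back-calculated from the rate constants reported above rather than taken from the measured regressions in Table 1. It multiplies each slope by the corresponding reference rate constant (Eq. 1) and converts the mean rate constant into the atmospheric lifetime.

```python
# Relative-rate bookkeeping: k_E = slope * k_R (Eq. 1), then tau_OH = 1 / (k_E [OH]).
# The slopes below are back-calculated from the reported rate constants and are
# therefore illustrative rather than the measured regression slopes.

K_REF = {  # cm3 molecule-1 s-1 (Kramp and Paulson, 1998; Coeur-Tourneur et al., 2010a)
    "m-xylene": 2.20e-11,
    "1,3,5-trimethylbenzene": 5.67e-11,
}
SLOPES = {  # k_E / k_R from ln(C_E0/C_Et) vs. ln(C_R0/C_Rt)
    "m-xylene": 3.85,
    "1,3,5-trimethylbenzene": 1.33,
}
OH_AMBIENT = 1.5e6  # molecules cm-3, 24 h average (Mao et al., 2009)

k_values = [SLOPES[ref] * K_REF[ref] for ref in K_REF]
k_mean = sum(k_values) / len(k_values)
tau_hours = 1.0 / (k_mean * OH_AMBIENT) / 3600.0

for ref, k in zip(K_REF, k_values):
    print(f"k_E via {ref:>22s}: {k:.2e} cm3 molecule-1 s-1")
print(f"mean k_E : {k_mean:.2e} cm3 molecule-1 s-1")
print(f"tau_OH   : {tau_hours:.2f} h")
```

The two products reproduce the reported rate constants, and the mean gives back the average value of about $8.01\times10^{-11}$ cm3 molecule−1 s−1 and the 2.31 h lifetime quoted above.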
The rate constants of the OH-initiated reactions of guaiacol, 2,6-dimethylphenol, and syringol are $7.53\times10^{-11}$, $6.70\times10^{-11}$, and $9.66\times10^{-11}$ cm3 molecule−1 s−1, respectively (Coeur-Tourneur et al., 2010a; Thuner et al., 2004; Lauraguais et al., 2012), whereas the corresponding values calculated with the US EPA AOP WIN model (US EPA, 2012) are $2.98\times10^{-11}$, $5.04\times10^{-11}$, and $16.51\times10^{-11}$ cm3 molecule−1 s−1. These discrepancies again indicate that the rate constants of multifunctional organics need to be determined by laboratory experiments. The reactivity of eugenol toward OH radicals is slightly higher than that of guaiacol and 2,6-dimethylphenol, but slightly lower than that of syringol. The presence of two methoxyl groups (–O–CH3) in syringol activates the electrophilic addition of OH radicals to the benzene ring by donating electron density through the resonance effect (Lauraguais et al., 2016), and the activation effect of the methoxyl group is much larger than that of alkyl groups (McMurry, 2004). In a recent study, the reported energy barrier for NO3 electrophilic addition to eugenol was about 2-fold higher than that for 4-ethylguaiacol, indicating that the activation effect of the allyl group (–CH2CH=CH2) is lower than that of the ethyl group (–CH2CH3) (Zhang et al., 2016). These results are consistent with the activation effects of the substituents toward the electrophilic addition of OH radicals (McMurry, 2004).

## 3.2 Effects of eugenol concentration and OH exposure on SOA formation

In this work, a series of experiments was conducted in the OFR with different eugenol concentrations. The SOA yield was determined as the ratio of the SOA mass concentration (Mo, µg m−3) to the reacted eugenol concentration ([eugenol], µg m−3) (Kang et al., 2007). The experimental conditions and maximum SOA yields are listed in Table 2. The wall loss of aerosol particles in the OFR can be ignored according to our previous results (Liu et al., 2014a).

Figure S5 shows plots of the SOA yield versus OH exposure at different eugenol concentrations. Higher concentrations resulted in larger amounts of condensable products and consequently higher SOA yields (Lauraguais et al., 2012). The SOA mass also directly influences gas–particle partitioning, because the SOA serves as the absorbing medium for the oxidation products, and a higher SOA mass generally results in a higher SOA yield (Lauraguais et al., 2012, 2014b). In the OFR, the SOA yield in all cases first increased and then decreased as a function of OH exposure (Fig. S5). This trend is commonly observed in studies conducted in OFRs and potential aerosol mass (PAM) reactors (Lambe et al., 2015; Ortega et al., 2016; Palm et al., 2016, 2018; Simonen et al., 2017). In this work, according to the OFR exposure estimator (v2.3) developed by Jimenez's group on the basis of the estimation equations reported in previous work (Li et al., 2015; Peng et al., 2015, 2016), the maximum reduction of OH exposure caused by eugenol in the OFR was approximately 90 %; the detailed calculation is given in the Supplement. Although the OH suppression by eugenol in the OFR was not precisely quantified, OH radicals were still expected to be the main oxidant, given the fast rate constant of eugenol toward OH radicals obtained in this work.
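The yield bookkeeping used in this subsection can be made explicit with a short sketch. The Python fragment below is a hedged illustration rather than the actual data reduction: it converts an SMPS volume concentration into a mass concentration using the effective density of 1.5 g cm−3 derived in the experimental section, and then forms the yield Y = Mo/Δ[eugenol]. The input values are placeholders, not entries from Table 2.

```python
RHO_EFF = 1.5  # g cm-3, effective particle density estimated from d_va / d_m

def soa_mass_from_smps(volume_um3_cm3, rho_g_cm3=RHO_EFF):
    """SOA mass concentration in ug m-3 from an SMPS volume concentration.

    1 um3 cm-3 multiplied by a density in g cm-3 equals 1 ug m-3, so the
    conversion is a straight product.
    """
    return volume_um3_cm3 * rho_g_cm3

def soa_yield(m_o_ug_m3, reacted_eugenol_ug_m3):
    """SOA yield Y = Mo / Delta[eugenol]."""
    return m_o_ug_m3 / reacted_eugenol_ug_m3

# Illustrative numbers only (placeholders, not values from Table 2):
m_o = soa_mass_from_smps(30.0)  # 30 um3 cm-3 of particle volume
y = soa_yield(m_o, reacted_eugenol_ug_m3=250.0)
print(f"Mo = {m_o:.1f} ug m-3, yield = {y:.2f}")
```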
The decrease in SOA yield at high OH exposure possibly results from C–C bond scission of gas-phase species by further oxidation, or from heterogeneous reactions involving OH radicals, both of which generate large numbers of fragmented molecules that subsequently volatilize out of the aerosol particles (Lambe et al., 2015; Ortega et al., 2016; Simonen et al., 2017).

Table 2. Experimental conditions and results for SOA formation. a Initial eugenol concentrations. b Reacted eugenol concentrations. c SOA concentrations. d Maximum SOA yields. e OH exposure corresponding to the maximum SOA yield. f Atmospheric aging time corresponding to the maximum SOA yield, calculated using the typical atmospheric [OH] adopted in this work ($1.5\times10^{6}$ molecules cm−3) (Mao et al., 2009).

The SOA yield can be described with a widely used semi-empirical model based on the absorptive gas–particle partitioning of semi-volatile products, in which the overall SOA yield (Y) is given by Odum et al. (1996):

$$Y = \sum_i M_{\mathrm{o}}\,\frac{\alpha_i K_{\mathrm{om},i}}{1 + K_{\mathrm{om},i} M_{\mathrm{o}}}, \tag{2}$$

where $\alpha_i$ is the mass-based stoichiometric coefficient for the reaction producing the semi-volatile product i, $K_{\mathrm{om},i}$ is the gas–particle partitioning equilibrium constant, and $M_{\mathrm{o}}$ is the total aerosol mass concentration.

Figure 2. Maximum SOA yields as a function of the SOA mass concentration (Mo) formed from OH reactions at different eugenol concentrations. The solid line was fitted to the experimental data using a one-product model. The values of αi and Kom,i used to generate the solid line are 0.36±0.02 and 0.013±0.002, respectively.

The SOA yield data in Table 2 can be plotted in the form of Eq. (2) to obtain the yield curve for eugenol (Fig. 2). The simulation of the experimental data indicated that a one-product model reproduces the data accurately ($R^2 = 0.98$), while the use of two or more products in the model did not significantly improve the fit. Odum et al. (1996) reported that SOA yield data from the oxidation of aromatic compounds could be fitted well using a two-product model; however, a one-product model has also proved sufficient for describing the SOA yields from the oxidation of aromatics, including methoxyphenols (Coeur-Tourneur et al., 2010b; Lauraguais et al., 2012, 2014b). The success of the one-product model in this work likely indicates that the products in the SOA have similar values of αi and Kom,i, i.e., that the obtained αi (0.36±0.02) and Kom,i (0.013±0.002 m3 µg−1) represent average values (a short numerical check of Eq. 2 with these parameters is given below). In this work, because the product composition of the SOA was not determined, the volatility basis set (VBS) approach was not applied to simulate the SOA yields.

Figure S6 shows a plot of the SOA mass concentration (Mo) versus the reacted eugenol concentration ([eugenol]). Its slope, obtained by linear least-squares fitting, was 0.37, which is very close to the αi value (0.36). This suggests that the low-volatility products formed in the reaction partition almost completely into the particle phase according to the partitioning model (Lauraguais et al., 2012, 2014b); in other words, the SOA yield was approximately an upper limit for eugenol oxidation in the OFR. In view of the residence time used in this work, this result appears to contradict the recommendation of a longer residence time made by Ahlberg et al.
(2017), who found that the condensation of low-volatility species onto SOA in an OFR is often kinetically limited at low mass concentrations. In our recent experiments (not yet published), the SOA yields for guaiacol oxidation by OH radicals obtained under experimental conditions similar to those of this work were comparable to those obtained in chamber studies conducted at low RH (Fig. S7) (Lauraguais et al., 2014b; Yee et al., 2013). This suggests that kinetic limitations on SOA condensation might not be important for the OH-initiated oxidation of methoxyphenols in this system.

Elemental ratios (H/C and O/C) provide insight into SOA composition and into the chemical processes that accompany aging (Bruns et al., 2015). As shown in Fig. 3, the O/C ratio of the SOA increased and the H/C ratio decreased with increasing OH exposure, because oxygen-containing functional groups were formed in the oxidation products. In addition, the organic mass fractions at m/z 44 (CO2+) and m/z 43 (mostly C2H3O+), denoted f44 and f43, respectively, also provide information about the nature of the SOA. Figure S8 shows the evolution of f44 and f43 versus OH exposure at low (272 µg m−3) and high (1328 µg m−3) eugenol concentrations. The values of f44 were much higher than those of f43 and increased significantly as a function of OH exposure, indicating that the SOA formed in the experiments became more oxidized. The f44 value in this work ranged up to 0.26, consistent with the values higher than 0.25 observed for ambient low-volatility OA (LV-OA) (Ng et al., 2010).

Figure 3. OSC, H/C, and O/C vs. OH exposure for SOA formed at two eugenol concentrations (272 and 1328 µg m−3).

The average carbon oxidation state (OSC) proposed by Kroll et al. (2011) is considered a more accurate indicator of the oxidation degree of atmospheric organic species than the O/C ratio alone, because it takes into account the saturation level of the carbon atoms in the SOA. It is defined as $\mathrm{OS_C} = 2\,\mathrm{O/C} - \mathrm{H/C}$ (Kroll et al., 2011) and was calculated from the elemental composition of the SOA measured by the HR-ToF-AMS. In this work, the OSC values obtained at low (272 µg m−3) and high (1328 µg m−3) eugenol concentrations were compared. As shown in Fig. 3, the OSC values at the low concentration (0.035–1.78) were much larger than those at the high concentration (0.0036–1.09) and increased linearly ($R^2 > 0.96$) with OH exposure over $(1.21\text{–}12.55)\times10^{11}$ molecules cm−3 s. These results were supported by the evolution of the SOA mass spectra obtained by the HR-ToF-AMS at the same eugenol concentrations (Fig. S9). Similar trends have been observed in smog chambers and PAM reactors (Simonen et al., 2017; Ortega et al., 2016). The OSC value in this work extended as high as 1.78, in good agreement with values of up to 1.9 observed for ambient LV-OA (Kroll et al., 2011). Recently, Ortega et al. (2016) reported that the OSC of SOA formed from ambient air in an OFR ranged up to 2.0, and Simonen et al. (2017) determined a high OSC (> 1.1) for SOA formed from the OH-initiated reaction of toluene in a PAM reactor at an OH exposure of $1.2\times10^{12}$ molecules cm−3 s. In general, the OSC values for PAM reactors are higher than those for smog chambers because the OH exposure in a PAM reactor is about 1–3 orders of magnitude higher than in a smog chamber (Simonen et al., 2017; Ortega et al., 2016; Lambe et al., 2015).
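Returning briefly to the yield parameterization in Eq. (2): the fitted one-product curve can be evaluated directly. The sketch below is only a consistency check, using illustrative Mo values rather than the Table 2 data: with αi = 0.36 and Kom,i = 0.013 m3 µg−1, it yields values of roughly 0.10–0.31 for Mo between a few tens and a few hundred µg m−3, bracketing the maximum SOA yields of 0.11–0.31 reported in this work.

```python
# One-product absorptive-partitioning curve, Eq. (2) with a single product:
#     Y(Mo) = Mo * alpha * K_om / (1 + K_om * Mo)
# Parameters are the fitted values quoted for Fig. 2; the Mo grid is illustrative.

ALPHA = 0.36  # mass-based stoichiometric coefficient
K_OM = 0.013  # m3 ug-1, gas-particle partitioning equilibrium constant

def one_product_yield(m_o):
    """Overall SOA yield for an aerosol mass concentration Mo (ug m-3)."""
    return m_o * ALPHA * K_OM / (1.0 + K_OM * m_o)

for m_o in (30.0, 100.0, 300.0, 500.0):
    print(f"Mo = {m_o:6.1f} ug m-3  ->  Y = {one_product_yield(m_o):.2f}")
```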
A higher OSC value indicates a greater degree of aging, in which the SOA components are further processed by heterogeneous oxidation: oxygen is added to, and hydrogen removed from, the particle-phase molecules, so that OSC increases even though SOA mass is lost overall (Ortega et al., 2016).

## 3.3 Effect of SO2 on SOA formation

As shown in Fig. 4, the presence of SO2 favored SOA formation, and the sulfate concentration increased linearly ($R^2 = 0.99$) as a function of OH exposure. The maximum SOA yield enhancement of 38.6 % was obtained at an OH exposure of $5.41\times10^{11}$ molecules cm−3 s, and the enhancement then decreased with further increases in OH exposure, possibly because of the fragmented molecules formed through the oxidation of gas-phase species at high OH exposure (Lambe et al., 2015; Ortega et al., 2016; Simonen et al., 2017). The SOA yield and the sulfate concentration both increased linearly ($R^2 > 0.97$) as the SO2 concentration increased from 0 to 198 ppbv at an OH exposure of $1.21\times10^{11}$ molecules cm−3 s (Fig. S10). Compared with the SOA yield obtained in the absence of SO2 (0.049), the yield obtained in the presence of 198 ppbv SO2 (0.066) was enhanced by 34.7 %. In previous studies, Kleindienst et al. (2006) reported that the SOA yield from α-pinene photooxidation increased by 40 % in the presence of 252 ppbv SO2, and T. Liu et al. (2016) recently found that the SOA yield from 5 h of photochemical aging of gasoline vehicle exhaust was enhanced by 60 %–200 % in the presence of ∼150 ppbv SO2.

Figure 4. Evolution of the enhanced SOA yield and sulfate formation as a function of OH exposure in the presence of 41 ppbv SO2 at an average eugenol concentration of 273 µg m−3.

As shown in Figs. 4 and S10, the increase in sulfate concentration was favorable for SOA formation. In this system it is difficult to completely remove trace NH3, and thus the sulfate formed was a mixture of sulfuric acid (H2SO4) and a small amount of ammonium sulfate ((NH4)2SO4). The in situ particle acidity, expressed as the H+ concentration ([H+], 40.23–648.39 nmol m−3), was calculated with the AIM-II model for the H+–NH4+–SO42−–NO3−–H2O system (http://www.aim.env.uea.ac.uk/aim/model2/model2a.php, last access: 18 June 2018; T. Liu et al., 2016). A detailed description of the calculation method has been presented elsewhere (T. Liu et al., 2016). The elevated sulfate concentration in the particle phase with increasing SO2 concentration and OH exposure was an important reason for the enhanced SOA yields (Kleindienst et al., 2006; T. Liu et al., 2016). Cao and Jang (2007) showed that the SOA yields from the oxidation of toluene and 1,3,5-trimethylbenzene increased by 14 %–36 % in the presence of acidic seeds ([H+] of 240–860 nmol m−3) compared with those obtained with non-acidic seeds. Similar effects of particle acidity on SOA yields have been reported in other studies (Kleindienst et al., 2006; T. Liu et al., 2016; Jaoui et al., 2008; Xu et al., 2016). However, Ng et al. (2007b) found that particle acidity had a negligible effect on SOA yields from the photooxidation of aromatics, possibly because of the low RH (∼5 %) used in their work; the water content of the aerosol plays an essential role in acidity effects (Cao and Jang, 2007).
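Before turning to the chemistry behind these acidity effects, note that the enhancement percentages quoted above follow directly from the yields measured with and without SO2; the minimal sketch below simply makes that arithmetic explicit using the numbers given in the text.

```python
def relative_enhancement(y_with, y_without):
    """Relative SOA yield enhancement in percent."""
    return 100.0 * (y_with - y_without) / y_without

# Yields at an OH exposure of 1.21e11 molecules cm-3 s (values quoted in the text):
print(f"{relative_enhancement(0.066, 0.049):.1f} % enhancement with 198 ppbv SO2")
```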
Under acidic conditions, the gas-phase oxidation products of eugenol that partition into the particle phase can be further oxidized into low-volatility products or form oligomers through acid-catalyzed heterogeneous reactions, thereby enhancing the SOA yields (Cao and Jang, 2007; Jaoui et al., 2008; T. Liu et al., 2016; Xu et al., 2016). In addition, the sulfate formed not only serves as a substrate for product condensation and likely participates in new particle formation (NPF) (Jaoui et al., 2008; Wang et al., 2016), but also enhances the surface area of the particles and thus facilitates heterogeneous reactions on the aerosol (Xu et al., 2016). These roles of sulfate are also favorable for increasing SOA yields. Recently, Friedman et al. (2016) indicated that SO2 can participate in, and perturb, the oxidation of α- and β-pinene in an OFR, but this possible effect could be neglected in this work owing to the relatively high RH and the negligible S/C ratio observed by the HR-ToF-AMS.

## 3.4 Effect of NO2 on SOA formation

It is well known that a high NOx concentration almost always plays a negative role in NPF and SOA formation, because the reaction of NO with RO2 radicals forms more volatile products than the reaction of HO2 with RO2 radicals (Sarrafzadeh et al., 2016). Previous studies have reported that nitro-substituted compounds are the main products in SOA formed from the OH-initiated reactions of phenolic precursors, including methoxyphenols, in the presence of NOx (Ahmad et al., 2017; Finewax et al., 2018; Lauraguais et al., 2012, 2014b). The effect of NO2 on SOA formation from eugenol oxidation by OH radicals was therefore investigated. As shown in Fig. 5, the nitrate concentration measured by the HR-ToF-AMS increased as a function of OH exposure in the presence of 40 ppbv NO2, but it was much lower than the sulfate concentration (Fig. 4), even though the rate constant of NO2 with OH radicals is larger than that of SO2 (Atkinson et al., 1976; Davis et al., 1979). A possible explanation is that the HNO3 formed remained mainly in the gas phase, as the relatively high temperature (301±1 K) was not favorable for the partitioning of gaseous HNO3 into the particle phase (Wang et al., 2016); the temperature range for the greatest loss of nitrate has been reported to be 293–298 K (Keck and Wittmaack, 2005).

As illustrated in Fig. 5, the SOA yield enhancement and the N/C ratio both first increased and then decreased with rising OH exposure. An increase in NO2 concentration (40–109 ppbv) was beneficial to the SOA yields (0.053–0.062), the N/C ratio (0.032–0.041), and nitrate formation (4.29–6.30 µg m−3) (Fig. S11). The maximum SOA yield enhancement in the presence of 40 ppbv NO2 (19.17 %) was lower than that in the presence of 41 ppbv SO2. For most aromatic precursors, the addition of ppbv levels of NO2 should have a negligible effect on SOA formation, because the rate constants of phenoxy radicals with O2 and NO2 are of the order of approximately $10^{-16}$ and $10^{-11}$ cm3 molecule−1 s−1, respectively (Atkinson and Arey, 2003). However, for phenolic precursors, only about 0.5 ppbv of NO2 is sufficient to compete with O2 for reaction with phenoxy radicals (Finewax et al., 2018). Therefore, the enhancement effect of NO2 on SOA formation might be specific to phenols and methoxyphenols rather than applying to aromatic precursors in general.
Figure 5. Evolution of the enhanced SOA yields, nitrate formation, and the N/C ratio as a function of OH exposure in the presence of 40 ppbv NO2 at an average eugenol concentration of 273 µg m−3.

It is noteworthy that the N/C ratio was in the range of 0.032–0.043, suggesting that NO2 participated in the OH-initiated reaction of eugenol through addition to the phenoxy radical (Peng and Jimenez, 2017). Recently, Hunter et al. (2014) found that NO2 participated in the OH reactions of cyclic alkanes, with N/C ratios in the range of 0.031–0.064, higher than those obtained in this work. Nitro-substituted compounds have been reported to be the main products of the OH reactions of guaiacol and syringol in the presence of NO2 (Lauraguais et al., 2014b; Ahmad et al., 2017). N-containing products might also be formed through reactions involving NO3 radicals, which could be generated by the reaction between NO2 and O3 in this system (Atkinson, 1991). Using the box model (Peng et al., 2015) and the maximum O3 concentration in this work (9.11 ppmv), the maximum NO3 exposure was calculated to be approximately $1.7\times10^{11}$ molecules cm−3 s. Because the rate constant of eugenol with NO3 radicals ($1.6\times10^{-13}$ cm3 molecule−1 s−1) is about 2 orders of magnitude lower than the rate constant of eugenol with OH radicals obtained in this work (Zhang et al., 2016), the contribution of NO3 radicals to the decay of eugenol was insignificant. Owing to their relatively low volatility, N-containing products can contribute to SOA formation (Duporté et al., 2016; J. Liu et al., 2016). In addition, a higher NO2/NO ratio favors the formation of nitro-substituted products, which are potentially involved in NPF and SOA growth (Pereira et al., 2015). Ng et al. (2007a) also indicated that NOx can be beneficial to SOA formation from sesquiterpenes, owing to the formation of low-volatility organic nitrates and the isomerization of large alkoxy radicals, which results in less volatile products. The decrease in the N/C ratio at high OH exposure suggests that more volatile products were generated through the oxidation of particle-phase species by OH radicals.

The NO+/NO2+ ratios measured by the HR-ToF-AMS are widely used to distinguish inorganic from organic nitrates. The NO+/NO2+ ratios for inorganic nitrates have been reported to be in the range of 1.08–2.81 (Farmer et al., 2010; Sato et al., 2010); in this work, the ratio for the ammonium nitrate calibration sample, as determined by the HR-ToF-AMS, ranged from 2.06 to 2.54. In contrast, the NO+/NO2+ ratios during the oxidation of eugenol in the presence of 40 ppbv NO2 were 3.98–6.09, higher than those for inorganic nitrates and consistent with those for organic nitrates (3.82–5.84) from the photooxidation of aromatics (Sato et al., 2010). According to the method described by Fry et al. (2013) (shown in the Supplement), the fraction of organic nitrate was calculated to be in the range of 25.64 % to 82.05 % using the NO+/NO2+ ratios (3.98–6.09) obtained at different OH exposures. These results are comparable to those reported in earlier studies (Liu et al., 2015; Hunter et al., 2014). Liu et al. (2015) reported that N-containing organic mass contributed 31.5±4.4 % of the total SOA derived from m-xylene oxidation by OH radicals, and Hunter et al.
(2014) estimated the organic nitrate yields of SOA formed in the OH-initiated reactions of acyclic, monocyclic, and polycyclic alkanes to be 31 %–64 %. The range obtained in this work should be regarded as an upper limit, owing to the possible C–C bond scission of gas- and particle-phase organics oxidized at high OH exposure. In addition, the maximum nitrate yield for a single reaction step is expected to be approximately 30 % (Ziemann and Atkinson, 2012), which suggests that multiple reaction steps are involved.

## 3.5 Atmospheric implications

Biomass burning not only serves as a major contributor of atmospheric primary organic aerosol (POA), but also has great SOA formation potential through atmospheric oxidation (Bruns et al., 2016; Gilardoni et al., 2016; Li et al., 2017; Ciarelli et al., 2017; Ding et al., 2017). Recent studies have indicated that SOA formed from biomass burning plays an important role in haze pollution in China (Li et al., 2017; Ding et al., 2017), and residential combustion (mainly wood burning) could contribute approximately 60 %–70 % of wintertime SOA formation at the European scale (Ciarelli et al., 2017). In addition, methoxyphenols are among the major components of OA from biomass burning (Bruns et al., 2016; Schauer and Cass, 2000). Based on our results and those of previous studies (Sun et al., 2010; Lauraguais et al., 2012, 2014b; Ahmad et al., 2017; Yee et al., 2013; Ofner et al., 2011), more attention should be paid to SOA formation from the OH oxidation of biomass burning emissions and its subsequent effect on haze evolution, especially in China, where biomass burning is widespread and the daytime average [OH] in the ambient atmosphere is high ($5.2\times10^{6}$ to $7.5\times10^{6}$ molecules cm−3) (Yang et al., 2017). Meanwhile, the potential contributions of SO2 and NO2 to SOA formation should also be taken into account, because the concentrations of NOx and SO2 can reach 200 ppbv in the severely polluted atmosphere in China (Li et al., 2017). Although the eugenol concentrations in this work are higher than those in the ambient atmosphere, the results could provide new information on SOA formation from the atmospheric oxidation of methoxyphenols and might be useful for SOA modeling, especially for air quality simulations of regions experiencing severe pollution by fine particulate matter.

N-containing products formed from the oxidation of methoxyphenols could contribute to the water-soluble organics in SOA (Lauraguais et al., 2014b; Yang et al., 2016; Zhang et al., 2016), which have been widely detected in atmospheric humic-like substances (HULIS) (Wang et al., 2017). Owing to their surface-active and UV-light-absorbing properties, HULIS can influence the formation of cloud condensation nuclei (CCN), the solar radiation balance, and photochemical processes in the atmosphere (Wang et al., 2017). In addition, the formation of oligomers in the particle phase via the OH-initiated reactions of methoxyphenols, which has been observed in the aqueous oxidation of phenolic species (Yu et al., 2014), might also enhance light absorption in the UV–visible region. The high reactivity of methoxyphenols toward atmospheric radicals suggests that the SOA formed from their oxidation reaches a relatively high oxidation level, which in turn leads to SOA with strong optical absorption and hygroscopicity (Lambe et al., 2013; Massoli et al., 2010).
Therefore, the SOA formed from the reactions of methoxyphenols with atmospheric oxidants might have important effects on air quality and climate. In addition, the experimental results from this study could help to further the understanding of the atmospheric aging of smoke plumes from biomass burning emissions.

4 Conclusions

For the first time, the rate constant and the SOA formation of the gas-phase reaction of eugenol with OH radicals were investigated in an OFR. The second-order rate constant of eugenol with OH radicals, measured by the relative rate method, is $(8.01\pm0.40)\times10^{-11}$ cm3 molecule−1 s−1, and the corresponding atmospheric lifetime is $2.31\pm0.12$ h. In addition, significant SOA formation from eugenol oxidation by OH radicals was observed. The maximum SOA yields (0.11–0.31) obtained at different eugenol concentrations could be described well by a one-product model. The SOA yield depended on OH exposure and eugenol concentration; it first increased and then decreased as a function of OH exposure, possibly owing to C–C bond scission of gas-phase species by further oxidation or to heterogeneous reactions involving OH radicals. The OSC and the O/C ratio both increased significantly as a function of OH exposure, indicating that the SOA became more oxidized. The presence of SO2 and NO2 increased the SOA yield, with maximum yield enhancements of 38.6 % and 19.2 %, respectively. The observed N/C ratio of the SOA was in the range of 0.032–0.043, indicating that NO2 participated in the OH-initiated reaction of eugenol and consequently produced organic nitrates. These experimental results should help to further the understanding of the atmospheric chemical behavior of eugenol and of its SOA formation potential from OH oxidation.

Data availability. The experimental data are available upon request to the corresponding authors.

Supplement.

Author contributions. CL, YL, and HH designed the research and wrote the paper. CL, TC, and JL performed the experiments. CL, YL, TC, JL, and HH carried out the data analysis. All authors contributed to the final paper.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. This work was financially supported by the National Key R&D Program of China (2016YFC0202700), the National Natural Science Foundation of China (21607088 and 41877306), a China Postdoctoral Science Foundation funded project (2017M620071), and the Applied Basic Research Project of the Science and Technology Department of Sichuan Province (2018JY0303). Yongchun Liu would like to thank the Beijing University of Chemical Technology for financial support. The authors would also like to acknowledge the experimental help provided by Xiaolei Bao from the Hebei Provincial Academy of Environmental Sciences, Shijiazhuang, China.

Edited by: Rainer Volkamer
Reviewed by: two anonymous referees

References

Ahlberg, E., Falk, J., Eriksson, A., Holst, T., Brune, W. H., Kristensson, A., Roldin, P., and Svenningsson, B.: Secondary organic aerosol from VOC mixtures in an oxidation flow reactor, Atmos. Environ., 161, 210–220, https://doi.org/10.1016/j.atmosenv.2017.05.005, 2017.

Ahmad, W., Coeur, C., Tomas, A., Fagniez, T., Brubach, J.-B., and Cuisset, A.: Infrared spectroscopy of secondary organic aerosol precursors and investigation of the hygroscopicity of SOA formed from the OH reaction with guaiacol and syringol, Appl.
Opt., 56, 116–122, https://doi.org/10.1364/ao.56.00e116, 2017. Atkinson, R., Perry, R. A., and Pitts, J. N.: Rate constants for the reactions of the OH radicals with NO2 (M = Ar and N2) and SO2 (M = Ar), J. Chem. Phys., 65, 306–310, https://doi.org/10.1063/1.432770, 1976. Atkinson, R.: Kinetics and mechanisms of the gas-phase reactions of the NO3 radical with organic compounds, J. Phys. Chem. Ref. Data, 20, 459–507, https://doi.org/10.1063/1.555887, 1991. Atkinson, R. and Arey, J.: Atmospheric degradation of volatile organic compounds, Chem. Rev., 103, 4605–4638, https://doi.org/10.1021/cr0206420, 2003. Bari, M. A., Baumbach, G., Kuch, B., and Scheffknecht, G.: Wood smoke as a source of particle-phase organic compounds in residential areas, Atmos. Environ., 43, 4722–4732, https://doi.org/10.1016/j.atmosenv.2008.09.006, 2009. Bari, M. A., Baumbach, G., Kuch, B., and Scheffknecht, G.: Temporal variation and impact of wood smoke pollution on a residential area in southern Germany, Atmos. Environ., 44, 3823–3832, https://doi.org/10.1016/j.atmosenv.2010.06.031, 2010. Bolling, A. K., Pagels, J., Yttri, K. E., Barregard, L., Sallsten, G., Schwarze, P. E., and Boman, C.: Health effects of residential wood smoke particles: the importance of combustion conditions and physicochemical particle properties, Part. Fibre Toxicol., 6, 29, https://doi.org/10.1186/1743-8977-6-29, 2009. Bruns, E. A., El Haddad, I., Keller, A., Klein, F., Kumar, N. K., Pieber, S. M., Corbin, J. C., Slowik, J. G., Brune, W. H., Baltensperger, U., and Prévôt, A. S. H.: Inter-comparison of laboratory smog chamber and flow reactor systems on organic aerosol yield and composition, Atmos. Meas. Tech., 8, 2315–2332, https://doi.org/10.5194/amt-8-2315-2015, 2015. Bruns, E. A., El Haddad, I., Slowik, J. G., Kilic, D., Klein, F., Baltensperger, U., and Prevot, A. S. H.: Identification of significant precursor gases of secondary organic aerosols from residential wood combustion, Sci. Rep., 6., 27881, https://doi.org/10.1038/srep27881, 2016. Cao, G. and Jang, M.: Effects of particle acidity and UV light on secondary organic aerosol formation from oxidation of aromatics in the absence of NOx, Atmos. Environ., 41, 7603–7613, https://doi.org/10.1016/j.atmosenv.2007.05.034, 2007. Chen, Y. and Bond, T. C.: Light absorption by organic carbon from wood combustion, Atmos. Chem. Phys., 10, 1773–1787, https://doi.org/10.5194/acp-10-1773-2010, 2010. Ciarelli, G., Aksoyoglu, S., El Haddad, I., Bruns, E. A., Crippa, M., Poulain, L., Äijälä, M., Carbone, S., Freney, E., O'Dowd, C., Baltensperger, U., and Prévôt, A. S. H.: Modelling winter organic aerosol at the European scale with CAMx: evaluation and source apportionment with a VBS parameterization based on novel wood burning smog chamber experiments, Atmos. Chem. Phys., 17, 7653–7669, https://doi.org/10.5194/acp-17-7653-2017, 2017. Coeur-Tourneur, C., Cassez, A., and Wenger, J. C.: Rate Coefficients for the gas-phase reaction of hydroxyl radicals with 2-methoxyphenol (guaiacol) and related compounds, J. Phys. Chem. A, 114, 11645–11650, https://doi.org/10.1021/jp1071023, 2010a. Coeur-Tourneur, C., Foulon, V., and Lareal, M.: Determination of aerosol yields from 3-methylcatechol and 4-methylcatechol ozonolysis in a simulation chamber, Atmos. Environ., 44, 852–857, https://doi.org/10.1016/j.atmosenv.2009.11.027, 2010b. Davis, D. D., Ravishankara, A. R., and Fischer, S.: SO2 oxidation via the hydroxyl radical: Atmospheric fate of HSOx radicals, Geophys. Res. 
Lett., 6, 113–116, https://doi.org/10.1029/GL006i002p00113, 1979. DeCarlo, P. F., Slowik, J. G., Worsnop, D. R., Davidovits, P., and Jimenez, J. L.: Particle morphology and density characterization by combined mobility and aerodynamic diameter measurements, Part 1: Theory, Aerosol Sci. Technol., 38, 1185–1205, https://doi.org/10.1080/027868290903907, 2004. DeCarlo, P. F., Kimmel, J. R., Trimborn, A., Northway, M. J., Jayne, J. T., Aiken, A. C., Gonin, M., Fuhrer, K., Horvath, T., Docherty, K. S., Worsnop, D. R., and Jimenez, J. L.: Field-deployable, high-resolution, time-of-flight aerosol mass spectrometer, Anal. Chem., 78, 8281–8289, https://doi.org/10.1021/ac061249n, 2006. Dills, R. L., Paulsen, M., Ahmad, J., Kalman, D. A., Elias, F. N., and Simpson, C. D.: Evaluation of urinary methoxyphenols as biomarkers of woodsmoke exposure, Environ. Sci. Technol., 40, 2163–2170, https://doi.org/10.1021/es051886f, 2006. Ding, X., Zhang, Y.-Q., He, Q.-F., Yu, Q.-Q., Wang, J.-Q., Shen, R.-Q., Song, W., Wang, Y.-S., and Wang, X.-M.: Significant increase of aromatics-derived secondary organic aerosol during fall to winter in China, Environ. Sci. Technol., 51, 7432–7441, https://doi.org/10.1021/acs.est.6b06408, 2017. Duporté, G., Parshintsev, J., Barreira, L. M. F., Hartonen, K., Kulmala, M., and Riekkola, M.-L.: Nitrogen-containing low volatile compounds from pinonaldehyde-dimethylamine reaction in the atmosphere: A laboratory and field study, Environ. Sci. Technol., 50, 4693–4700, https://doi.org/10.1021/acs.est.6b00270, 2016. El Zein, A., Coeur, C., Obeid, E., Lauraguais, A., and Fagniez, T.: Reaction kinetics of catechol (1,2-benzenediol) and guaiacol (2-methoxyphenol) with ozone, J. Phys. Chem. A, 119, 6759–6765, https://doi.org/10.1021/acs.jpca.5b00174, 2015. Farmer, D. K., Matsunaga, A., Docherty, K. S., Surratt, J. D., Seinfeld, J. H., Ziemann, P. J., and Jimenez, J. L.: Response of an aerosol mass spectrometer to organonitrates and organosulfates and implications for atmospheric chemistry, P. Natl. Acad. Sci. USA, 107, 6670–6675, https://doi.org/10.1073/pnas.0912340107, 2010. Finewax, Z., de Gouw, J. A., and Ziemann, P. J.: Identification and quantification of 4-nitrocatechol formed from OH and NO3 radical-initiated reactions of catechol in air in the presence of NOx: Implications for secondary organic aerosol formation from biomass burning, Environ. Sci. Technol., 52, 1981–1989, https://doi.org/10.1021/acs.est.7b05864, 2018. Friedman, B., Brophy, P., Brune, W. H., and Farmer, D. K.: Anthropogenic sulfur perturbations on biogenic oxidation: SO2 additions impact gas-phase OH oxidation products of alpha- and beta-pinene, Environ. Sci. Technol., 50, 1269–1279, https://doi.org/10.1021/acs.est.5b05010, 2016. Fry, J. L., Draper, D. C., Zarzana, K. J., Campuzano-Jost, P., Day, D. A., Jimenez, J. L., Brown, S. S., Cohen, R. C., Kaser, L., Hansel, A., Cappellin, L., Karl, T., Hodzic Roux, A., Turnipseed, A., Cantrell, C., Lefer, B. L., and Grossberg, N.: Observations of gas- and aerosol-phase organic nitrates at BEACHON-RoMBAS 2011, Atmos. Chem. Phys., 13, 8585–8605, https://doi.org/10.5194/acp-13-8585-2013, 2013. Gilardoni, S., Massoli, P., Paglione, M., Giulianelli, L., Carbone, C., Rinaldi, M., Decesari, S., Sandrini, S., Costabile, F., Gobbi, G. P., Pietrogrande, M. C., Visentin, M., Scotto, F., Fuzzi, S., and Facchini, M. C.: Direct observation of aqueous secondary organic aerosol from biomass-burning emissions, P. Natl. Acad. Sci. 
USA, 113, 10013–10018, https://doi.org/10.1073/pnas.1602212113, 2016. Gordon, T. D., Presto, A. A., Nguyen, N. T., Robertson, W. H., Na, K., Sahay, K. N., Zhang, M., Maddox, C., Rieger, P., Chattopadhyay, S., Maldonado, H., Maricq, M. M., and Robinson, A. L.: Secondary organic aerosol production from diesel vehicle exhaust: impact of aftertreatment, fuel chemistry and driving cycle, Atmos. Chem. Phys., 14, 4643–4659, https://doi.org/10.5194/acp-14-4643-2014, 2014. http://www.aim.env.uea.ac.uk/aim/model2/model2a.php, last access: 18 June 2018. Hunter, J. F., Carrasquillo, A. J., Daumit, K. E., and Kroll, J. H.: Secondary organic aerosol formation from acyclic, monocyclic, and polycyclic alkanes, Environ. Sci. Technol., 48, 10227–10234, https://doi.org/10.1021/es502674s, 2014. Jaoui, M., Edney, E. O., Kleindienst, T. E., Lewandowski, M., Offenberg, J. H., Surratt, J. D., and Seinfeld, J. H.: Formation of secondary organic aerosol from irradiated alpha-pinene/toluene/NOx mixtures and the effect of isoprene and sulfur dioxide, J. Geophys. Res.-Atmos., 113, D09303, https://doi.org/10.1029/2007jd009426, 2008. Jeong, C.-H., Evans, G. J., Dann, T., Graham, M., Herod, D., Dabek-Zlotorzynska, E., Mathieu, D., Ding, L., and Wang, D.: Influence of biomass burning on wintertime fine particulate matter: Source contribution at a valley site in rural British Columbia, Atmos. Environ., 42, 3684–3699, https://doi.org/10.1016/j.atmosenv.2008.01.006, 2008. Kang, E., Root, M. J., Toohey, D. W., and Brune, W. H.: Introducing the concept of Potential Aerosol Mass (PAM), Atmos. Chem. Phys., 7, 5727–5744, https://doi.org/10.5194/acp-7-5727-2007, 2007. Keck, L. and Wittmaack, K.: Effect of filter type and temperature on volatilisation losses from ammonium salts in aerosol matter, Atmos. Environ., 39, 4093–4100, https://doi.org/10.1016/j.atmosenv.2005.03.029, 2005. Kleindienst, T. E., Shepson, P. B., Edney, E. O., Claxton, L. D., and Cupitt, L. T.: Wood smoke: Measurement of the mutagenic activities of its gas- and particulate-phase photooxidation products, Environ. Sci. Technol., 20, 493–501, https://doi.org/10.1021/es00147a009, 1986. Kleindienst, T. E., Edney, E. O., Lewandowski, M., Offenberg, J. H., and Jaoui, M.: Secondary organic carbon and aerosol yields from the irradiations of isoprene and alpha-pinene in the presence of NOx and SO2, Environ. Sci. Technol., 40, 3807–3812, https://doi.org/10.1021/es052446r, 2006. Kramp, F. and Paulson, S. E.: On the uncertainties in the rate coefficients for OH reactions with hydrocarbons, and the rate coefficients of the 1,3,5-trimethylbenzene and m-xylene reactions with OH radicals in the gas phase, J. Phys. Chem. A, 102, 2685–2690, https://doi.org/10.1021/jp973289o, 1998. Kroll, J. H., Donahue, N. M., Jimenez, J. L., Kessler, S. H., Canagaratna, M. R., Wilson, K. R., Altieri, K. E., Mazzoleni, L. R., Wozniak, A. S., Bluhm, H., Mysak, E. R., Smith, J. D., Kolb, C. E., and Worsnop, D. R.: Carbon oxidation state as a metric for describing the chemistry of atmospheric organic aerosol, Nat. Chem., 3, 133–139, https://doi.org/10.1038/nchem.948, 2011. Lambe, A. T., Cappa, C. D., Massoli, P., Onasch, T. B., Forestieri, S. D., Martin, A. T., Cummings, M. J., Croasdale, D. R., Brune, W. H., Worsnop, D. R., and Davidovits, P.: Relationship between oxidation level and optical properties of secondary organic aerosol, Environ. Sci. Technol., 47, 6349–6357, https://doi.org/10.1021/es401043j, 2013. Lambe, A. T., Chhabra, P. S., Onasch, T. B., Brune, W. H., Hunter, J. F., Kroll, J. 
H., Cummings, M. J., Brogan, J. F., Parmar, Y., Worsnop, D. R., Kolb, C. E., and Davidovits, P.: Effect of oxidant concentration, exposure time, and seed particles on secondary organic aerosol chemical composition and yield, Atmos. Chem. Phys., 15, 3063–3075, https://doi.org/10.5194/acp-15-3063-2015, 2015. Lauraguais, A., Coeur-Tourneur, C., Cassez, A., and Seydi, A.: Rate constant and secondary organic aerosol yields for the gas-phase reaction of hydroxyl radicals with syringol (2,6-dimethoxyphenol), Atmos. Environ., 55, 43–48, https://doi.org/10.1016/j.atmosenv.2012.02.027, 2012. Lauraguais, A., Bejan, I., Barnes, I., Wiesen, P., Coeur-Tourneur, C., and Cassez, A.: Rate coefficients for the gas-phase reaction of chlorine atoms with a series of methoxylated aromatic compounds, J. Phys. Chem. A, 118, 1777–1784, https://doi.org/10.1021/jp4114877, 2014a. Lauraguais, A., Coeur-Tourneur, C., Cassez, A., Deboudt, K., Fourmentin, M., and Choel, M.: Atmospheric reactivity of hydroxyl radicals with guaiacol (2-methoxyphenol), a biomass burning emitted compound: Secondary organic aerosol formation and gas-phase oxidation products, Atmos. Environ., 86, 155–163, https://doi.org/10.1016/j.atmosenv.2013.11.074, 2014b. Lauraguais, A., Bejan, I., Barnes, I., Wiesen, P., and Coeur, C.: Rate coefficients for the gas-phase reactions of hydroxyl radicals with a series of methoxylated aromatic compounds, J. Phys. Chem. A, 119, 6179–6187, https://doi.org/10.1021/acs.jpca.5b03232, 2015. Lauraguais, A., El Zein, A., Coeur, C., Obeid, E., Cassez, A., Rayez, M.-T., and Rayez, J.-C.: Kinetic study of the gas-phase reactions of nitrate radicals with methoxyphenol compounds: Experimental and theoretical approaches, J. Phys. Chem. A, 120, 2691–2699, https://doi.org/10.1021/acs.jpca.6b02729, 2016. Li, H., Zhang, Q., Zhang, Q., Chen, C., Wang, L., Wei, Z., Zhou, S., Parworth, C., Zheng, B., Canonaco, F., Prévôt, A. S. H., Chen, P., Zhang, H., Wallington, T. J., and He, K.: Wintertime aerosol chemistry and haze evolution in an extremely polluted city of the North China Plain: significant contribution from coal and biomass combustion, Atmos. Chem. Phys., 17, 4751–4768, https://doi.org/10.5194/acp-17-4751-2017, 2017. Li, R., Palm, B. B., Ortega, A. M., Hlywiak, J., Hu, W., Peng, Z., Day, D. A., Knote, C., Brune, W. H., de Gouw, J. A., and Jimenez, J. L.: Modeling the radical chemistry in an oxidation flow reactor: Radical formation and recycling, sensitivities, and the OH exposure estimation equation, J. Phys. Chem. A, 119, 4418–4432, https://doi.org/10.1021/jp509534k, 2015. Liu, J., Lin, P., Laskin, A., Laskin, J., Kathmann, S. M., Wise, M., Caylor, R., Imholt, F., Selimovic, V., and Shilling, J. E.: Optical properties and aging of light-absorbing secondary organic aerosol, Atmos. Chem. Phys., 16, 12815–12827, https://doi.org/10.5194/acp-16-12815-2016, 2016. Liu, T., Wang, X., Hu, Q., Deng, W., Zhang, Y., Ding, X., Fu, X., Bernard, F., Zhang, Z., Lu, S., He, Q., Bi, X., Chen, J., Sun, Y., Yu, J., Peng, P., Sheng, G., and Fu, J.: Formation of secondary aerosols from gasoline vehicle exhaust when mixing with SO2, Atmos. Chem. Phys., 16, 675-689, https://doi.org/10.5194/acp-16-675-2016, 2016. Liu, Y., Huang, L., Li, S.-M., Harner, T., and Liggio, J.: OH-initiated heterogeneous oxidation of tris-2-butoxyethyl phosphate: implications for its fate in the atmosphere, Atmos. Chem. Phys., 14, 12195–12207, https://doi.org/10.5194/acp-14-12195-2014, 2014a. 
Liu, Y., Liggio, J., Harner, T., Jantunen, L., Shoeib, M., and Li, S.-M.: Heterogeneous OH initiated oxidation: A possible explanation for the persistence of organophosphate flame retardants in air, Environ. Sci. Technol., 48, 1041–1048, https://doi.org/10.1021/es404515k, 2014b. Liu, Y., Liggio, J., Staebler, R., and Li, S.-M.: Reactive uptake of ammonia to secondary organic aerosols: kinetics of organonitrogen formation, Atmos. Chem. Phys., 15, 13569–13584, https://doi.org/10.5194/acp-15-13569-2015, 2015. Mao, J., Ren, X., Brune, W. H., Olson, J. R., Crawford, J. H., Fried, A., Huey, L. G., Cohen, R. C., Heikes, B., Singh, H. B., Blake, D. R., Sachse, G. W., Diskin, G. S., Hall, S. R., and Shetter, R. E.: Airborne measurement of OH reactivity during INTEX-B, Atmos. Chem. Phys., 9, 163–173, https://doi.org/10.5194/acp-9-163-2009, 2009. Massoli, P., Lambe, A. T., Ahern, A. T., Williams, L. R., Ehn, M., Mikkila, J., Canagaratna, M. R., Brune, W. H., Onasch, T. B., Jayne, J. T., Petaja, T., Kulmala, M., Laaksonen, A., Kolb, C. E., Davidovits, P., and Worsnop, D. R.: Relationship between aerosol oxidation level and hygroscopic properties of laboratory generated secondary organic aerosol (SOA) particles, Geophys. Res. Lett., 37, L24801, https://doi.org/10.1029/2010gl045258, 2010. McMurry, J. E.: Organic Chemistry, 6th edn., Brooks/Cole, Belmont, CA, 2004. Ng, N. L., Chhabra, P. S., Chan, A. W. H., Surratt, J. D., Kroll, J. H., Kwan, A. J., McCabe, D. C., Wennberg, P. O., Sorooshian, A., Murphy, S. M., Dalleska, N. F., Flagan, R. C., and Seinfeld, J. H.: Effect of NOx level on secondary organic aerosol (SOA) formation from the photooxidation of terpenes, Atmos. Chem. Phys., 7, 5159–5174, https://doi.org/10.5194/acp-7-5159-2007, 2007a. Ng, N. L., Kroll, J. H., Chan, A. W. H., Chhabra, P. S., Flagan, R. C., and Seinfeld, J. H.: Secondary organic aerosol formation from m-xylene, toluene, and benzene, Atmos. Chem. Phys., 7, 3909–3922, https://doi.org/10.5194/acp-7-3909-2007, 2007b. Ng, N. L., Canagaratna, M. R., Zhang, Q., Jimenez, J. L., Tian, J., Ulbrich, I. M., Kroll, J. H., Docherty, K. S., Chhabra, P. S., Bahreini, R., Murphy, S. M., Seinfeld, J. H., Hildebrandt, L., Donahue, N. M., DeCarlo, P. F., Lanz, V. A., Prévôt, A. S. H., Dinar, E., Rudich, Y., and Worsnop, D. R.: Organic aerosol components observed in Northern Hemispheric datasets from Aerosol Mass Spectrometry, Atmos. Chem. Phys., 10, 4625–4641, https://doi.org/10.5194/acp-10-4625-2010, 2010. Nolte, C. G., Schauer, J. J., Cass, G. R., and Simoneit, B. R. T.: Highly polar organic compounds present in wood smoke and in the ambient atmosphere, Environ. Sci. Technol., 35, 1912–1919, https://doi.org/10.1021/es001420r, 2001. Odum, J. R., Hoffmann, T., Bowman, F., Collins, D., Flagan, R. C., and Seinfeld, J. H.: Gas/particle partitioning and secondary organic aerosol yields, Environ. Sci. Technol., 30, 2580–2585, https://doi.org/10.1021/es950943, 1996. Ofner, J., Krüger, H.-U., Grothe, H., Schmitt-Kopplin, P., Whitmore, K., and Zetzsch, C.: Physico-chemical characterization of SOA derived from catechol and guaiacol – a model substance for the aromatic fraction of atmospheric HULIS, Atmos. Chem. Phys., 11, 1–15, https://doi.org/10.5194/acp-11-1-2011, 2011. Ortega, A. M., Hayes, P. L., Peng, Z., Palm, B. B., Hu, W., Day, D. A., Li, R., Cubison, M. J., Brune, W. H., Graus, M., Warneke, C., Gilman, J. B., Kuster, W. C., de Gouw, J., Gutiérrez-Montes, C., and Jimenez, J. 
2020-07-12 17:01:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8928171396255493, "perplexity": 11884.413690767004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138752.92/warc/CC-MAIN-20200712144738-20200712174738-00340.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/university-calculus-early-transcendentals-3rd-edition/chapter-2-section-2-3-the-precise-definition-of-a-limit-exercises-page-77/50
## University Calculus: Early Transcendentals (3rd Edition)

We want to show that $$\lim_{x\to0}x^2\sin\frac{1}{x}=0.$$ By the precise definition of a limit, we must prove that for every $\epsilon\gt0$ there exists a $\delta\gt0$ such that for all $x$ $$0\lt|x|\lt\delta\Rightarrow|f(x)|\lt\epsilon.$$

1) For all $x\neq0$, $$\left|\sin\frac{1}{x}\right|\le1,$$ and therefore $$\left|x^2\sin\frac{1}{x}\right|\le|x^2|=x^2.$$

2) Let $\epsilon\gt0$ be arbitrary and choose $\delta=\min(1,\epsilon)$. There are only two cases:
- If $\epsilon\le1$, then $\delta=\epsilon\le1$, so $\delta^2\le\delta=\epsilon$.
- If $\epsilon\gt1$, then $\delta=1\lt\epsilon$, so $\delta^2=1\lt\epsilon$.

In either case $\delta^2\le\epsilon$. Now, for all $x$ with $0\lt|x|\lt\delta$, $$|f(x)|=\left|x^2\sin\frac{1}{x}\right|\le x^2\lt\delta^2\le\epsilon.$$

This completes the proof.
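As a quick numerical check (an added illustration, not part of the textbook's solution), take a specific tolerance:

$$\epsilon=0.04 \;\Rightarrow\; \delta=\min(1,0.04)=0.04,$$
$$0\lt|x|\lt0.04 \;\Rightarrow\; \left|x^2\sin\tfrac{1}{x}\right|\le x^2\lt(0.04)^2=0.0016\lt0.04=\epsilon.$$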
2019-12-11 08:07:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970074474811554, "perplexity": 55.76725746136363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00060.warc.gz"}
https://vitalflux.com/neural-network-back-propagation-python-examples/
# Backpropagation Algorithm in Neural Network: Examples

In this post, you will learn about the concepts of the neural network backpropagation algorithm, along with Python examples. As a data scientist, it is very important to learn the concepts of the backpropagation algorithm if you want to get good at deep learning models, because backpropagation is the key to learning the weights at the different layers of a deep neural network.

## What's the Backpropagation Algorithm?

The backpropagation algorithm is a well-known procedure for training neural networks. In general, backpropagation works by propagating error signals backwards through the network, from the output layer back to the input layer. This process provides the information needed to adjust the weights of the connections between neurons in order to minimize the overall error. More precisely, backpropagation propagates the gradients of the final output with respect to the output of each node (in each layer), in the backward direction, right up to the input-layer nodes. All that the backpropagation algorithm computes is the gradients of the weights and biases; remember that the training of neural networks is done using optimizers, NOT by backpropagation alone.

The primary goal of learning in a neural network is to determine how the weights and biases in every layer should change in order to minimize the objective (cost) function for each record in the training data set. Instead of writing the final output as an explicit function of the weights and biases of every layer and taking partial derivatives with respect to each of them directly, backpropagation propagates the gradients in the backward direction and determines the gradients of the weights and biases in every layer using the chain rule.

The backpropagation algorithm can be summarized in a few simple steps:

• First, the predicted output of the neural network is compared to the actual desired output. This produces an error signal.
• Next, this error signal is propagated backwards through the network. That is, it is multiplied by the weights of the connections between neurons and passed back to the previous layer.
• The neuron weights and biases are then updated according to this error signal. In general, weights are increased if they contribute to reducing the error, and decreased if they contribute to increasing the error.
• This process is then repeated for each layer of the neural network until the error is minimized.

The main question behind calculating the gradients of a neural network with respect to a cost function C is the following: how should the weights and biases in every layer change (increase or decrease) so that the network's output minimizes the cost function? Backpropagation determines the direction in which each of the weights and biases needs to change to minimize the cost function.

Let's understand the backpropagation algorithm using the following simplistic neural network with one input layer, one hidden layer and one output layer. For ease of understanding, we take the activation function to be the identity function; in real-world problems, the most commonly used activation functions are the sigmoid function, ReLU (or variants of ReLU) and tanh.

Let's examine the above neural network.
• There are three layers in the network: input, hidden, and output.
• There are two input variables (features) in the input layer, three nodes in the hidden layer, and one node in the output layer.
• At each node, the activation function is applied to the weighted sum of that node's inputs to calculate the activation value.
• The output of each node in the hidden and output layers is therefore obtained by applying the activation function to the weighted sum of the inputs to that node.

Mathematically, the above neural network can be represented as follows. To train the network on a dataset, the task is to determine the optimal values of all the weights and biases, denoted by w and b. These optimal values are found by minimizing a loss function. For regression problems, the most common loss function is the ordinary least squares loss (the squared difference between the observed value and the network output). For classification problems, the most common loss function is the cross-entropy loss.

To minimize the loss function, gradient descent is applied to the loss with respect to every weight and bias, with the gradients supplied by the backpropagation algorithm. The idea is to update the weights and biases of every layer so that the loss function decreases after every iteration. Backpropagation determines the gradients of the weights and biases with respect to the final output (cost) value of the network. Once the gradients are found, the weights and biases are updated using a gradient-based technique such as stochastic gradient descent. When the updates are computed from small batches of training data, the technique is called mini-batch gradient descent.

Training the neural network shown in the diagram above means learning the most optimal values of the following weights and biases in the two layers:

• $$\Large w^1_{11}, w^1_{12}, w^1_{21}, w^1_{22}, w^1_{31}, w^1_{32}, b_1$$ for the first layer
• $$\Large w^2_{11}, w^2_{12}, w^2_{13}, b_2$$ for the second layer.

The optimal values of these weights and biases are learned from their gradients (partial derivatives) using an optimization technique such as stochastic gradient descent, and the gradients themselves are found with the backpropagation algorithm. If the cost function value is C, the gradients that need to be determined are:

$$\Large \frac{\partial C}{\partial w^1_{11}}, \frac{\partial C}{\partial w^1_{12}}, \frac{\partial C}{\partial w^1_{21}}, \frac{\partial C}{\partial w^1_{22}}, \frac{\partial C}{\partial w^1_{31}}, \frac{\partial C}{\partial w^1_{32}}, \frac{\partial C}{\partial b_{1}}$$

$$\Large \frac{\partial C}{\partial w^2_{11}}, \frac{\partial C}{\partial w^2_{12}}, \frac{\partial C}{\partial w^2_{13}}, \frac{\partial C}{\partial b_{2}}$$

Let's see how the backpropagation algorithm determines each of these gradients via the chain rule:

$$\Large \frac{\partial C}{\partial w^2_{11}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial w^2_{11}}$$
$$\Large \frac{\partial C}{\partial w^2_{12}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial w^2_{12}}$$

$$\Large \frac{\partial C}{\partial w^2_{13}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial w^2_{13}}$$

$$\Large \frac{\partial C}{\partial w^1_{11}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial a^2_1}\frac{\partial a^2_1}{\partial Z^2_1}\frac{\partial Z^2_1}{\partial w^1_{11}}$$

$$\Large \frac{\partial C}{\partial w^1_{12}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial a^2_1}\frac{\partial a^2_1}{\partial Z^2_1}\frac{\partial Z^2_1}{\partial w^1_{12}}$$

$$\Large \frac{\partial C}{\partial w^1_{21}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial a^2_2}\frac{\partial a^2_2}{\partial Z^2_2}\frac{\partial Z^2_2}{\partial w^1_{21}}$$

$$\Large \frac{\partial C}{\partial w^1_{22}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial a^2_2}\frac{\partial a^2_2}{\partial Z^2_2}\frac{\partial Z^2_2}{\partial w^1_{22}}$$

$$\Large \frac{\partial C}{\partial w^1_{31}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial a^2_3}\frac{\partial a^2_3}{\partial Z^2_3}\frac{\partial Z^2_3}{\partial w^1_{31}}$$

$$\Large \frac{\partial C}{\partial w^1_{32}} = \frac{\partial C}{\partial a^3_1}\frac{\partial a^3_1}{\partial Z^3_1}\frac{\partial Z^3_1}{\partial a^2_3}\frac{\partial a^2_3}{\partial Z^2_3}\frac{\partial Z^2_3}{\partial w^1_{32}}$$

The equations above describe how the value of the cost function C changes as the respective weights in the different layers change. In other words, they give the gradients of the weights (and, analogously, the biases) with respect to the cost function value C. Note how the chain rule is applied at every step when calculating gradients with the backpropagation algorithm. You may want to check this post for access to some good articles and videos on the backpropagation algorithm: Top Tutorials – Neural Network Back Propagation Algorithm.

### Learning Weights & Biases using the Back Propagation Algorithm

The equations below show how the weights and biases of a specific layer are updated once the gradients have been determined. The letter l denotes the layer, and learningRate is the step size of the update. A minimal Python sketch of this forward/backward pass and update loop is given after the conclusions below.

$$\large w^l = w^l - learningRate * \frac{\partial C}{\partial w^l}$$

$$\large b^l = b^l - learningRate * \frac{\partial C}{\partial b^l}$$

## Conclusions

That's all for this overview of the backpropagation algorithm used in neural networks. If you would like to know more, or have any questions, please let me know in the comments below and I will do my best to answer them. Have a great day!
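To make the update rule concrete, here is a minimal sketch of one gradient-descent step for the 2-3-1 network described above, assuming the identity activation and a squared-error loss as discussed in the text. This is an added illustration rather than the article's original code, and all variable names (W1, b1, W2, b2, learning_rate) are illustrative.

```python
import numpy as np

# Minimal sketch: one forward/backward pass for a 2-3-1 network with
# identity activations and a squared-error loss, for a single sample.
rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])          # two input features
y = np.array([0.3])                # target output

W1 = rng.normal(size=(3, 2))       # first-layer weights (3 hidden nodes)
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3))       # second-layer weights (1 output node)
b2 = np.zeros(1)
learning_rate = 0.01

# Forward pass (identity activation: a = Z)
Z2 = W1 @ x + b1                   # hidden pre-activations
a2 = Z2
Z3 = W2 @ a2 + b2                  # output pre-activation
a3 = Z3

# Cost: C = 0.5 * (a3 - y)^2
dC_da3 = a3 - y                    # dC/da3
dC_dZ3 = dC_da3 * 1.0              # identity activation => da3/dZ3 = 1

# Second-layer gradients (dC/dw2_1j = dC/dZ3 * a2_j, dC/db2 = dC/dZ3)
dC_dW2 = np.outer(dC_dZ3, a2)
dC_db2 = dC_dZ3

# Backpropagate to the hidden layer via the chain rule
dC_da2 = W2.T @ dC_dZ3
dC_dZ2 = dC_da2 * 1.0              # identity activation again
dC_dW1 = np.outer(dC_dZ2, x)
dC_db1 = dC_dZ2

# Gradient-descent update: w^l <- w^l - learning_rate * dC/dw^l
W2 -= learning_rate * dC_dW2
b2 -= learning_rate * dC_db2
W1 -= learning_rate * dC_dW1
b1 -= learning_rate * dC_db1
```

Repeating this loop over the training set, or over small batches of it, is what the article refers to as stochastic or mini-batch gradient descent.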
2022-12-05 18:48:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6354838013648987, "perplexity": 287.91255605424067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00822.warc.gz"}
https://math.stackexchange.com/questions/6371/lottery-probability
# Lottery probability

In the UK the lottery uses numbers $1$ to $49$ and a total of six numbers are picked. It has been said many times that there is as much chance of the numbers $1, 2, 3, 4, 5, 6$ being picked as any other random combination. My question is this: let's say that the first $3$ numbers to come out are $1, 2$ and $3$. What are the chances of a number between $1$ and $10$ coming out next vs. a number between $11$ and $20$? There are obviously fewer numbers between $1$ and $10$ now that we have already lost $1$ to $3$, so surely a number between $11$ and $20$ is more likely? In which case, a lottery selection of $1,2,3,4,5,6$ is less likely than $2,12,21,28,32,47$, for example...

• That is true, but a priori to the first three drawings the probability is the same. Once you have drawn three balls it becomes a conditional probability. Oct 9 '10 at 14:10
• If you consider 'numbers with digit 2 in them', by your logic, 1,2,3,4,5,6 has higher probability than 2,12,21,28,32,42... Oct 9 '10 at 14:28
• On a related note, if you want to maximise your expected winnings (and still participate) then you are better served by choosing combinations of numbers unlikely to be chosen by others, so as to minimise your chances of sharing your possible winnings with others. Dec 14 '17 at 5:25

The chance of the next number being between 1 and 10 is $\frac{7}{46}$ (seven of the remaining 46 balls), as opposed to the probability of it being between 11 and 20, which is $\frac{10}{46}$. So, among other things, it is less likely that a lottery ticket will have only numbers between 1 and 10, as opposed to numbers between 1 and 20. However, that does not mean that a given ticket with numbers between 1 and 10 is less likely than a given ticket with numbers between 1 and 20. The fact that there are more tickets with numbers between 1 and 20 exactly cancels with the higher chance of getting such a ticket.
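To make the "exact cancellation" at the end of the answer concrete, here is a short Python sketch. It is an added illustration, assuming the standard 6-from-49 draw without replacement, and is not part of the original question or answer.

```python
from math import comb
from fractions import Fraction

# Every unordered 6-number ticket from 1..49 is equally likely.
total = comb(49, 6)                            # 13,983,816 possible tickets
p_any_specific_ticket = Fraction(1, total)     # same for 1-6 as for 2,12,21,28,32,47

# Probability that ALL six drawn numbers land in a given range:
p_all_in_1_to_10 = Fraction(comb(10, 6), total)   # 210 qualifying tickets
p_all_in_1_to_20 = Fraction(comb(20, 6), total)   # 38,760 qualifying tickets

# The event "ticket lies entirely in 1..20" is more likely only because it
# contains more tickets; each individual ticket still has probability 1/total.
print(p_any_specific_ticket)
print(p_all_in_1_to_10, p_all_in_1_to_20)

# Conditional step from the question: 1, 2, 3 already drawn, 46 balls left.
print(Fraction(7, 46), Fraction(10, 46))  # next ball in 4..10 vs. 11..20
```

Both specific tickets named in the question therefore have the same probability, 1/13,983,816, even though "all numbers at most 10" is a rarer event than "all numbers at most 20".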
2022-01-27 00:44:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7402762174606323, "perplexity": 181.36044568098816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00242.warc.gz"}
http://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-10th-edition/chapter-1-trigonometric-functions-section-1-1-angles-1-1-exercises-page-9/108
## Trigonometry (10th Edition)

$89^{\circ}$ lies in quadrant I.

1. Angles between $0^{\circ}$ and $90^{\circ}$ lie in the first quadrant, and $0^{\circ} \lt 89^{\circ} \lt 90^{\circ}$.
2. Coterminal angles are always in the same quadrant as the original angle, because they differ from it by full revolutions ($360^{\circ}$, i.e. $2\pi$ radians): $89^{\circ} + 360^{\circ} = 449^{\circ}$ and $89^{\circ} - 360^{\circ} = -271^{\circ}$.
2017-08-17 04:15:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9130864143371582, "perplexity": 6498.922695066063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102891.30/warc/CC-MAIN-20170817032523-20170817052523-00135.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/cc4/chapter/3/lesson/3.2.3/problem/3-102
### Problem 3-102

3-102. Calculate each of the following products by drawing and labeling an area model or by using the Distributive Property.

a. $-4y(5x + 8y)$
• Review the Math Notes box in this lesson.
$-20xy - 32y^2$

b. $9x(-4 + 10y)$
• See part (a).

c. $(x^2 - 2)(x^2 + 3x + 5)$
• Use an area model or the Distributive Property. Review the Math Notes box in this lesson. The first term in the binomial is $x^2$; it should be multiplied (distributed) into the trinomial. A worked expansion is sketched below.
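For illustration only (this worked expansion is an addition, not part of the original hint), distributing each term of the binomial across the trinomial in part (c) gives:

$$(x^2 - 2)(x^2 + 3x + 5) = x^2(x^2 + 3x + 5) - 2(x^2 + 3x + 5)$$
$$= x^4 + 3x^3 + 5x^2 - 2x^2 - 6x - 10 = x^4 + 3x^3 + 3x^2 - 6x - 10.$$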
2019-10-17 15:51:04
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5131596922874451, "perplexity": 3368.026045653485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00307.warc.gz"}
https://avatest.org/2022/09/19/algebraic-number-theory%E4%BB%A3%E8%80%83/
Posted on Categories: Algebraic Number Theory, Math Assignment Help

# Algebraic Number Theory (MATH661): Integral Domains

## Integral Domains

A nonzero element $a$ of a ring $A$ (always commutative) is called a zero divisor if $ab=0$ for some nonzero $b$ in $A$. In the ring $\mathbb{Z}/6\mathbb{Z}$, the elements $2, 3$, and $4$ are the only zero divisors. A field has no zero divisors. A ring without zero divisors is called an integral domain, or simply a domain. We have already discussed many integral domains which are not fields, e.g. $\mathbb{Z}, \mathbb{Z}[i], \mathbb{Z}[\omega]$ and $\mathbb{Z}[\sqrt{d}]$ for $d \neq 0$ a square-free integer, all of which are relevant to our subject. An element $u$ in $A$ is a unit if $uv=1$ for some $v$ in $A$. For example, the only units in the ring $\mathbb{Z}$ are $\pm 1$.

Definition 2.7. A domain $A$ is a Euclidean domain if there is a map which assigns to each nonzero element $\alpha$ of $A$ a non-negative integer $d(\alpha)$ such that for all nonzero $\alpha, \beta$ in $A$,

i) $d(\alpha) \leq d(\alpha \beta)$, and

ii) $A$ has elements $q$ (the quotient) and $\gamma$ (the remainder) such that $\alpha=q \beta+\gamma$ and either $\gamma=0$ or $d(\gamma)<d(\beta)$.

By the Euclidean algorithm, both $\mathbb{Z}$ and the ring $k[x]$ of polynomials over a field $k$ are Euclidean domains. For $\mathbb{Z}$, $d(\alpha)=|\alpha|$, and for $k[x]$, $d(f(x))=\operatorname{deg} f(x)$.

## Factoring Rational Primes in $\mathbb{Z}[i]$

Let $A$ be the ring $\mathbb{Z}[i]$ of Gaussian integers and let $p=2,3,5,\ldots$ be a rational prime. This $p$ may or may not be a prime element of $A$. To find out exactly when it is, recall the famous theorem of Fermat on the sum of two squares, which was proved by Euler (cf. [8, p. 48]).

Theorem $2.14$ (Fermat). An odd prime $p$ in $\mathbb{Z}$ is a sum of two squares $\left(p=a^2+b^2\right)$ if and only if $p=4k+1$ for some $k$ in $\mathbb{N}$.

The norm of any divisor of $\alpha=a+ib$ must be a divisor of $N(\alpha)=a^2+b^2$, and for $\alpha=\beta \gamma$ with $\beta$, $\gamma$ both non-units, $1<N(\beta)<N(\alpha)$ (only the units have norm 1). Therefore, if $a^2+b^2$ is a prime, then $\alpha$ has to be a prime in $\mathbb{Z}[i]$. We have thus proved the following fact:

Theorem 2.15. A prime $p$ is a sum of two squares, $p=a^2+b^2$, $\Leftrightarrow$ $p$ is a product $(a+ib)(a-ib)$ of two primes $a \pm ib$ in $\mathbb{Z}[i]$.

For $p=2$, its two prime factors $1+i$ and $1-i$ in $\mathbb{Z}[i]$ are associates: $1+i=i(1-i)$. Therefore,
$$2=i(1-i)^2 .$$
We say that 2 ramifies in $\mathbb{Z}[i]$. By Fermat's theorem (Theorems 2.14 and 2.15), $p \equiv 1 \pmod 4 \Leftrightarrow p$ is a product
$$p=\pi_1 \pi_2$$
of two primes $\pi_1, \pi_2$ in $\mathbb{Z}[i]$. Moreover, $\pi_1$ and $\pi_2$ are complex conjugates of each other, and hence they are distinct. This discussion can be wrapped up as follows. Observe that $\{1, i\}$ is a $\mathbb{Z}$-basis of $\mathbb{Z}[i]$, and so is its conjugate $\{1,-i\}$. These two bases make up a $2 \times 2$ matrix
$$A=\left(\begin{array}{cc} 1 & i \\ 1 & -i \end{array}\right)$$
with $\operatorname{det}(A)=-2i$, so that $(\operatorname{det} A)^2=-4$, the discriminant of $\mathbb{Q}(i)$.
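As a concrete illustration (added here; these small examples are standard and not part of the excerpt above), the three behaviours of rational primes in $\mathbb{Z}[i]$ can be seen directly:

$$5=(2+i)(2-i), \qquad 13=(3+2i)(3-2i) \qquad (\text{primes } \equiv 1 \pmod 4 \text{ split}),$$
$$2=i(1-i)^2 \ (\text{ramified}), \qquad 3,\ 7,\ 11 \ (\equiv 3 \pmod 4) \text{ remain prime (inert) in } \mathbb{Z}[i].$$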
2023-03-26 06:25:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648428559303284, "perplexity": 282.4371275904675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00611.warc.gz"}
http://eprints.iisc.ernet.in/16783/
# Ammonium lithocholate nanotubes: stability and copper metallization

Terech, Pierre and Neralagatta, M. and Sangeetha, NM and Bhat, Shreedhar and Allegraud, Buhler and Buhler, Eric (2006) Ammonium lithocholate nanotubes: stability and copper metallization. In: Soft Matter, 2 (6), pp. 517-522.

Official URL: http://www.rsc.org/ej/SM/2006/b604590a.pdf

## Abstract

Ammonium lithocholate nanotubes $(NH_{4}LC)$ have been prepared in alkaline ammonia solutions and exhibited remarkably monodisperse cross-sectional dimensions (external diameter = 52 nm), as shown by cryo-transmission electron microscopy measurements. A classical electroless metallic replication method was used with a single poly(ethylene-imine) (PEI) layer coating the negatively charged $NH_{4}LC$ nanotubes. Short copper rods (external diameter ${\sim}$ 80 nm) that corresponded to the original organic templates were observed by scanning electron microscopy. The results obtained in acidic conditions are analyzed in terms of the lifetime of the self-assembled structures and the formation of bundles of tubes. Dynamic light scattering measurements and optical observations show that the system, in the presence of controlled amounts of hydrochloric acid, is stable enough to allow metallic replication in acidic conditions. An average apparent diffusion coefficient of the organic $NH_{4}LC$ assemblies, $D \sim 9.8 \times 10^{5}\ \mathrm{nm^{2}\,s^{-1}}$, is extracted in homogeneous suspensions where bundles have been dispersed by the acid additions.

Item Type: Journal Article
Copyright of this article belongs to the Royal Society of Chemistry.
Keywords: Materials Science, Multidisciplinary; Integral-Equations; Nanowires; Tubules; Route
Division of Chemical Sciences > Organic Chemistry
03 Dec 2008 10:00
19 Sep 2010 04:53
http://eprints.iisc.ernet.in/id/eprint/16783
2016-06-29 18:09:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43058615922927856, "perplexity": 9037.499291255912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00004-ip-10-164-35-72.ec2.internal.warc.gz"}
http://physics.stackexchange.com/tags/cp-violation/hot
# Tag Info 13 Dear Chad, you misinterpret the statement that "the known sources of CP-violation are not enough to explain the matter-antimatter asymmetry in the Universe." You seem to think that the statement means that the known CP-violating parameter (namely the CP-violating phase in the CKM matrix) and the processes based on it are qualitatively insufficient to ... 9 If we write the field strength in terms of "electric" and "magnetic" fields $\vec{E}$ and $\vec{B}$, the relevant expression can be written as $$\text{Tr}F_{\mu\nu}\tilde{F}^{\mu\nu}=4\,\text{Tr}\vec{E}\cdot\vec{B}.$$. Under parity transformations, $\vec{E}\rightarrow-\vec{E}$ and $\vec{B}\rightarrow\vec{B}$, while under charge conjugation, ... 8 The text by Lumo may have been a bit confusing but it's the other way around: the possibility to redefine the phases of the vectors leads to a reduction of independent angles and phases in the CKM matrix, but there's still one complex phase that can't be rotated away. Imagine that you change the phases of the kets $u,c,t;d,s,b$ by six multiplicative ... 7 Presumably you are asking about the communication ambiguity in physics: can we unambiguously specify what we mean by "a right handed coordinate system" to a correspondent far away without a pre-arrnage communications channel (i.e. using SETI)? For a long time the answer seemed to be "no", but the discovery of parity violation in 1957 changed the answer to ... 7 The usual action for Yang-Mills theory is, using differential forms $$S = \int \operatorname{tr} (F \wedge \star F)$$ where $\star$ is the Hodge dual. Now note that the integral of a differential form is always defined with respect to an orientation, and the Hodge dual is also defined with respect to an orientation. Parity is reversing the orientation, which ... 7 Good question! Regarding (2) baryon number is certainly violated at Planckian energies. If you can make a black hole, you can eat up baryons. Luboš Motl's argument that you linked to is correct in this regard. Whether you can make a believable scenario of quantum gravity driven baryogenesis at the Planck time is up in the air as far as I know. It's the old ... 6 Surely someone has mulled over why the universe might exhibit such a non-intuitive and thus interesting asymmetry? As always, when it comes to valid and important physical theories, the reason why the Universe has non-intuitive features is simply that the intuition is wrong. Arguments based on wrong intuition are irrational and unscientific. Rationally ... 5 The spin-statistics thing isn't a problem, it is a theorem (a demonstrably valid proposition), and it shouldn't be addressed, it should be understood and celebrated. The Higgs field gives us interactions between chiral fermions and the Higgs, $yh\cdot \chi_\alpha\eta^\alpha$ which produces mass terms $m \chi_\alpha\eta^\alpha$ if the Higgs field has a ... 5 Good question! There actually isn't a term for this that I know of. The most common use of such a term would be to classify a particle, for example "the 'polarity' of the electron is matter-polarity," but in that case most physicists would just say "the electron is a matter particle." There is a mathematical operator called the charge conjugation operator, ... 5 1) Does antimatter-matter symmetry exist? Yes there is a CP violation and the whole Nobel prize thing. On the other hand there is CPT symmetry which is very protected. So call it what you want. As for the popsci articles... I would express my thoughts, but this is a family site. 
2)Does CP violation explain matter-antimatter imbalance? It's certainly ... 4 A clear recent review of flavor physics, including CP violation, is in the TASI lectures by Gedalia & Perez. The parametrization-independent measure of how much CP violation is present in the Standard Model is called the "Jarlskog invariant"; it's explained in those lectures, but might be a useful keyword if you're searching for other resources. If you ... 4 Remember that the theta term appears in an exponential $e^{i\theta n}$ inside the path integral. If $\theta n$ shifts by $2\pi N$, for any integer $N$, the exponential is unchanged, and all path integrals have the same value. The integral $n = \frac{1}{32\pi^2} \int F \wedge F$ is not arbitrary either. It's a topological invariant, and it's normalized so ... 3 The sentence in Peskin's and Schroeder's book that "the weak interactions preserve CP and T" is a bit misleading but there is a sense in which it is right. Experimentally, CP and T is known to be violated and CPT is always a symmetry. Theoretically, CPT is always a symmetry, too – it's proven by the CPT theorem. The CPT transformation is effectively a ... 3 We have a pretty good idea of the thermal history of the universe. Combining this with the Sakharov criteria for baryogenesis allows one to calculate the necessary CP violation in terms of the strength of the bayon-number violating interaction and how far out of equilibrium the universe was. Taking a purely SM approach and having only sphalerons as ... 3 I haven't read that book, but I did read Feynman's discussion of (sounds like) exactly the same thing. Easy: Tell the aliens how to build a telescope, then describe the configuration of some galaxies near them. OK OK, but suppose we rule that out: We can't see any objects in common. Easy: Send them circularly-polarized radio waves (thanks @Anonymous Coward). ... 3 To do this, the man needs to build a particle accelerator and measure Kaon decays, or some other process involving higher quark flavors. Everything else is CP invariant, so he wouldn't know for sure. 3 Cecilia Jarlskog proposed this invariant already in 1973 and it was mentioned in the original Kobayashi-Maskawa paper. For three families, it's easy to see why it is nonzero iff the unitary matrix in $U(3)$ can't be brought to the real, orthogonal i.e. $O(3)$ form. It's because after the 5 phase redefinitions of the up-type-quark and down-type-quark ... 3 surely someone has mulled over why the universe might exhibit such a non-intuitive and thus interesting asymmetry? Oh yes, definitely. I have for one (though I haven't made a significant contribution to the question)! :) There are a number of "left-right symmetric" models out there which usually involve a group like $SU(2)_L \times SU(2)_R$ where the ... 3 No, it's not true. Suppose I'm floating in outer space (presumably in a space suit or something else to keep me alive). I'm still me, and I still know that, for example, my left hand is the one on the left, and my right hand is the one I can write with. Even on Earth, we don't need environmental clues to distinguish left from right; it's more a matter of ... 3 As dmckee wrote, the term "symmetry" has a fully uniform meaning. It is not used ambiguously in any way and for the same reason, it is not overused. Symmetries are really important in physics and that's why they're used so often. (We also use "symmetries" with various well-defined adjectives such as "global", "local/gauge", "approximate", "broken", ... 
3 The question the OP is proposing is linked to the question of the mass formulas. Here, what really matters is whether the mass of the u quark is indeed very near zero and whether one has some compelling theoretical reason to believe this. The strong CP problem could not be of much help here, as pointed out in Dine's review. The reason is quite simple: If one should ...

3 I'd like to point out that there is a small probability that the assumption on which the question is based: "As I hope is obvious to everyone reading this, the universe contains more matter than antimatter," may not be true, depending on the result of the Aegis experiment at CERN. That's because, as Professor Orzel stated in his answer to this ...

2 In reply to the second parenthetical question, I wrote that matter created from energy in particle physics experiments is "generally" in the form of particle-antiparticle pairs. This is too restrictive. Quantum numbers have to be conserved, and they are conserved in pair production, but there can also be associated production of mesons etc.: For example ...

2 The time-reversal operator is anti-unitary, meaning, basically, that for any c-number $a$: $$T\,a\,T^{-1} = a^*$$ Now, if you have a T-invariant Lagrangian term ${\cal L}_{term}$: $$T{\cal L}_{term}T^{-1}={\cal L}_{term}$$ Then if you multiply it by $a$: $$Ta{\cal L}_{term}T^{-1}=TaT^{-1}T{\cal L}_{term}T^{-1}=a^*{\cal L}_{term}$$ So you need $a$ to be real if you ...

2 CPT is a general theorem of quantum field theories: Specifically, the CPT theorem states that any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian must have CPT symmetry. Questioning CPT invariance is questioning the foundations of modern physics theory, which is probably the reason you cannot find anything on this. The label ...

2 No. Conservation of energy is generated by the continuous time-translation symmetry $t \rightarrow t + \epsilon$. This is a different symmetry from the discrete time-reversal symmetry $t \rightarrow -t$. Violating the latter symmetry does not mean that you violate the former.

2 Noether's theorem does not apply to discrete symmetries like C, P, and T. Only continuous symmetries generate local conservation laws. For discrete symmetries you get multiplicative rather than additive conservation laws, so they are somewhat less useful. Also note that T is an anti-unitary transformation, so it is a little more subtle than the others. On the ...

2 In the Standard Model, the lepton sector does not have CP-violating couplings (at tree level). The quark sector, however, has CP-violating couplings (through the CKM matrix). The PMNS matrix (describing neutrino mixing) may have a complex phase (implying CP violation). Whether it has a nonzero phase or not remains to be tested experimentally. This is ...

1 CP violation in the Standard Model is due to the complex phase of the CKM matrix in the quark sector. You can look at a parametrization of the CKM matrix, such as the Wolfenstein parametrization, and see that there is only one phase in the CKM matrix; the insight of Kobayashi and Maskawa was that you need three generations of quarks to have CP violation. Now you can have a similar ...
2014-09-02 10:01:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8292742967605591, "perplexity": 413.6168435320129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921872.11/warc/CC-MAIN-20140909030759-00008-ip-10-180-136-8.ec2.internal.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=MR2769881
MathSciNet bibliographic data: MR2769881 15A18 (15A23)

Liu, Yonghui; Tian, Yongge. Max-min problems on the ranks and inertias of the matrix expressions $A-BXC\pm(BXC)^{\ast}$ with applications. J. Optim. Theory Appl. 148 (2011), no. 3, 593–622.
2016-06-25 14:18:38
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8719938397407532, "perplexity": 9331.795771930454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00027-ip-10-164-35-72.ec2.internal.warc.gz"}
https://astronomy.stackexchange.com/questions/34653/how-are-small-objects-automatically-identified-and-their-locations-determined-in
# How are small objects automatically identified and their locations determined in digital images?

I have a large quantity of old digital images of low energy electron diffraction (LEED) patterns that I must process to identify and locate small, sometimes elongated, Gaussian-like blobs a few pixels in width within a noisy and artifact-riddled background. I'm currently reading up on Laplacian of Gaussian blob detection 1, 2, 3 and have started to implement the "roll-your-own" script in the first link; Python packages like OpenCV and scikit-image have standard routines for this as well, and I believe both offer Laplacian of Gaussian blob detection, but I like to implement it myself first to better understand what's going on.

## Why are you asking this here in Astronomy SE?

Good question! Because this looks, at least superficially, a lot like what astronomers need to do when searching for objects in deep-space imaging applications, and there may be some standard implementations in AstroPy or even lecture notes for "Digital Imaging in Astronomy 101" courses. In parallel with my brute-force efforts, how might I complement this work with existing astronomical imaging techniques to compare notes? It would be great if it turned out that some existing script or package searching for distant elliptical galaxies or weak gravitational lensing could be applied directly to these kinds of images!

Example image (most of the data has smaller pixels / higher pixel density, but I need to process these low-pixel-count images as well):

The kinds of "spots" I'm looking for:

First try with a Python implementation of Laplacian of Gaussian blob detection; I haven't yet looked at generalizing to non-circular shapes, just ran the script in the link with a few small modifications.

• If the artifacts are the same in all the images, and the spots you are trying to find are not, you could start by stacking all the images and taking the median at each pixel. Then you subtract this from each image in order to remove some of the structures you don't want – usernumber Jan 9 at 8:45
• The software that's most used for detecting things in astronomical images is SExtractor or the Python library SEP which implements the same core algorithms. – astrosnapper Jan 10 at 19:02
• do you know where the blobs are expected from the symmetry? in any case, I would expect SExtractor to do a good job on your image – student Sep 20 at 11:04
• The symmetry statement was separate: if you know where to look for the blobs, it becomes very straightforward to mask and fit each expected position with $N$ parametric models. But yes, SExtractor should be able to identify these easily; it is also very good at deblending close-together blobs. – student Sep 20 at 11:13
• If you don't use it already, take a look at conda/anaconda, most scientific software is available from there nowadays, with all dependencies etc. managed. Case in point: conda install -c conda-forge astromatic-source-extractor – student Sep 20 at 13:57
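As a hedged illustration of the two routes mentioned in the question and comments (scikit-image's Laplacian of Gaussian detector, and SEP, the Python library implementing SExtractor's core algorithms), here is a minimal added sketch; the file name and the threshold values are placeholders rather than values from the actual data:

```python
import numpy as np
import sep                                 # Source Extractor as a Python library
from skimage import io
from skimage.feature import blob_log       # Laplacian of Gaussian detector

# Placeholder file name; any 2-D grayscale image will do.
image = io.imread("leed_pattern.png", as_gray=True).astype(float)

# LoG blob detection: returns rows of (y, x, sigma); blob radius ~ sigma * sqrt(2).
blobs = blob_log(image, min_sigma=1, max_sigma=6, num_sigma=12, threshold=0.05)
print(blobs[:5])

# SEP route: estimate a spatially varying background, then extract sources,
# which also gives deblending of nearby blobs for free.
data = np.ascontiguousarray(image, dtype=np.float64)
bkg = sep.Background(data)
sources = sep.extract(data - bkg.back(), thresh=3.0, err=bkg.globalrms)
print(len(sources), sources["x"][:5], sources["y"][:5])
```

The LoG route is closer to the "roll-your-own" script in the question, while the SEP route follows the comments recommending SExtractor; both operate on a plain 2-D array, so the same images can be fed to either for comparison.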
2020-10-31 19:22:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2739233076572418, "perplexity": 1069.9762267392714}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922411.94/warc/CC-MAIN-20201031181658-20201031211658-00647.warc.gz"}
https://www.reddit.com/r/math/comments/c76bu/ask_mathit_is_1_infinitely_larger_than_0/
## Ask Mathit: Is 1 infinitely larger than 0?

Posted 8 years ago · Archived · 0 points

I am trying to convince my girlfriend that 1 is infinitely larger than 0; she thinks otherwise. Help me prove her wrong.

My argument: 1^infinity is always greater than 0^infinity.

Her argument: 0 is an arbitrary number. There is a set distance between 0 and 1.

9 comments

level 1 · [deleted] · 18 points · 8 years ago
The problem is that you two differ in the meaning of "larger". She's talking about addition; you're talking about multiplication. You're both right, from your own points of view ;) Edit: Actually, she's more correct than you, since you're saying she's wrong and she's saying she's right.

level 2 · 3 points · 8 years ago
Edit: Actually, she's more correct than you because you're probably going to want to get laid in the near future.

level 1 · 15 points · 8 years ago
Neither of you has defined your terms. Hence, you both lose.

level 1 · 6 points · 8 years ago
The "1^infinity is always greater than 0^infinity" argument is BS. lim(x->infinity) 0.5^x is 0. So 0.5 would not be infinitely larger than 0 while 1 is? And would that also mean that 2 is infinitely larger than 1? Now, lim(x->0+) 1/x is infinite, and 1 is larger than 0 (by that argument, any number > 0 is infinitely larger than 0). And lim(x->0+) -1/x is negatively infinite, and -1 is smaller than 0 (by that argument, any number < 0 is infinitely smaller than 0). Which means that 1 is infinitely larger than 0, and -1 is infinitely smaller than 0, but what is 1 to -1 then? (You can say that 2 is twice as large as 1, but it only makes sense to talk that way if all the numbers you have are > 0.)

level 1 · 6 points · 8 years ago
Define "larger" as the ratio of the two numbers. For example, 10/2 = 5, so 10 is 5 times "larger" than 2. When the same logic is applied to 1 and 0, namely 1/0, there is a discontinuity. However, in math 1/0 is often regarded as a complex infinity. Therefore, using the previously defined term "larger", 1 is indeed infinitely larger than 0.

level 1 · [deleted] · 3 points · 8 years ago
You are aware that infinity is not a number (in the context you are using it) but instead shorthand for a limit. You could argue that there are infinitely many points between 0 and 1, but in the usual metric placed on the real numbers there is a finite distance between 0 and 1. So basically the question is ill posed, but you are still wrong.

level 1 · 3 points · 8 years ago
She's right. You're wrong. Therefore, I'm going to help her prove you wrong. 1 is exactly 1 larger than 0, since 0 + 1 = 1. 1^infinity depends on what you mean by infinity, but it doesn't matter exactly how you define it to prove her right. Let's go with the most common notion of infinity and assume you meant infinity as in the number of integers (denoted aleph_0). In this case: 1^aleph_0 = 1 and 0^aleph_0 = 0 (I believe). Thus, 1^infinity is still 1 bigger than 0^infinity. Now, let's see how you should have argued this. A better statement for you to make is that "1 is infinitely many times greater than 0," which is more defensible. Her: 1 is 1 bigger than 0. You: 1 is twice as big as 0.5. 1 is ten times as big as 0.1. 1 is 100 times as big as 0.01. As x gets closer and closer to 0, 1 is (1/x) times larger than x.
Since this ratio goes to infinity in the limit as x goes to 0, 1 is infinitely many times greater than 0.

level 1 · Algebraic Geometry · 2 points · 8 years ago
I know from SAT questions that when a guy and a girl have a debate, the girl is always right. Also, you need to define a norm before you talk about "larger"; you two are comparing the numbers in different ways, as others said.

level 1 · 1 point · 8 years ago
What intrigues me about this question is the way you're approaching the problem. The question you're attacking and the solution you are putting forth are inconsistent with one another. Your statement says that 1 is larger than 0; it doesn't say how much larger it is than 0. A formal proof of that would use mathematical induction, stating that 1^n is always larger than 0^n and then proving that it is true for 1^(n+1) and 0^(n+1). However, proving how many times one number is greater than another is a question about the numerical distance between 1 and 0, which is not what your solution addresses. Informally speaking, there is one sense in which 1 is infinitely greater than 0: there are infinitely many numbers between 0 and 1. For instance, on a number line you have 0.1 between 0 and 1, then 0.01 between 0 and 0.1, and then you can always select another number in between those two, and so on. There are essentially infinitely many numbers between any two constants. It is enough to just say that 1 is bigger than 0. Hope this helps, although I do not think you should make a big deal of it and fight with your girlfriend over a little problem.
2018-08-18 14:56:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5670666098594666, "perplexity": 2619.9940979254443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213689.68/warc/CC-MAIN-20180818134554-20180818154554-00372.warc.gz"}
http://zirz.sejfyr.de/unity-buoyancy.html
# Unity Buoyancy

"Tax Elasticity and Buoyancy in Nepal: A Revisit" (Neelam Timsina). Tax elasticity and buoyancy estimates are dynamic tools for measuring tax performance, and the tax buoyancy ratio is an important indicator of tax revenue performance. The study revisits earlier work on tax elasticity and buoyancy in Nepal in the context of the structural changes that have taken place in the tax system in recent years. Elasticity, in general, is a measure of one variable's sensitivity to a change in another.

On the Unity side, an easy-to-use system for simulating the buoyancy of ships on Triton's simulated water was the most-requested feature, and the latest version of Triton Oceans for Unity Pro / Windows is now available. A GameObject is Unity's term for any individual object; it can be active or inactive, and every entry in the Hierarchy pane is a GameObject. A Prefab is Unity's term for a prefabricated GameObject, stored in a *.prefab file. In a simple floating setup, buoy points apply an upward force to their parent rigidbody based on their buoyancy and on how much of each buoy is underwater. If the post-processing stack is not being picked up, go to the scripting define symbols in the Player settings and make sure they contain either UNITY_POST_PROCESSING_STACK_V1 or UNITY_POST_PROCESSING_STACK_V2, depending on the version of the post-processing stack you have imported.

On the physics side, pressure in a column of fluid increases with depth as a result of the weight of the overlying fluid. Ice is less dense than liquid water, which is why ice cubes float in a glass. Thermal buoyancy always acts vertically. Another way to look at the buoyancy of an object is as an interaction of two forces: gravity pulling the object down, and the upward pressure of the surrounding fluid. If the Richardson number is of order unity, the flow is likely to be buoyancy-driven: the energy of the flow derives from the potential energy originally stored in the system. The relative magnitude of the buoyant and viscous forces is measured by the Grashof number, and when the buoyancy-to-inertia ratio approaches or exceeds unity you should expect strong buoyancy contributions to the flow.
If the elasticity coefficient is less than unity, this indicates lagged revenue growth compared to GDP growth; the same reading applies to buoyancy during the post-tax-reform period. We also observed an improvement in both buoyancy and elasticity over the reform period (1985-2007): pre-reform buoyancy and elasticity coefficients were generally less than unity but became greater than one after the reform. (Buoyancy also has a fluid-dynamics sense, as in studies of buoyancy effects on the integral lengthscales and mean velocity profile in atmospheric surface-layer flows.)

Several Unity packages deal with buoyancy. Unity-WaterBuoyancy (dbrizov/Unity-WaterBuoyancy on GitHub) lets you implement 3D water physics in a Unity project: place the WaterPro_DayTime or WaterPro_NightTime prefab in the scene, and objects that should float are then given the package's floating component. A Japanese write-up on BuoyancyEffector2D covers buoyancy and flow, with the parameters Collider Mask, Surface Level, Density, Linear Drag, Angular Drag, Flow Angle, Flow Magnitude and Flow Variation; Density is "the density of the Buoyancy Effector 2D fluid". A new 2D physics system arrived with rigid bodies and a buoyancy effector among the lengthy list of capabilities. The GO Ocean Toolkit is a code package that brings realistic infinite ocean rendering to a Unity3D project, djkoloski/unity-buoyancy offers a cheap, physically-accurate buoyancy approximation for meshes, and the Buoyancy Toolkit (v2.0) brings realistic buoyancy simulation to a Unity3D project. Suimono provides realistic water rendering, the AQUAS Water Set (publisher: Dogmatic) is on the Unity Asset Store, and free trials and demos of Triton for Unity are available, with purchase online. One developer is playing with a water shader built from Gerstner waves. From the forums: "Aside from the usual rigid body dynamics and ragdoll, which both SDKs support, I'd really like some sort of built-in buoyancy, support for volumes of fluid"; and "So far I've tried PhysX and TrueAxis. I'm very impressed with TrueAxis, but PhysX seems more oriented towards hardware physics cards."

Physics and definitions: buoyancy is the tendency of a body to float or to rise when submerged in a fluid, and it is closely connected to gravitational force. The strength of the buoyant force on an object in water depends on the volume of the object that is submerged. Taking the density of water as unity, the upward (buoyancy) force is just 8 g. Density can be measured for any substance, and the result varies with temperature, pressure, buoyancy, purity and packaging, among other factors. Undissolved particles simply move around a pipe, letting it sink or float. The Boussinesq approximation is inaccurate when the nondimensionalised density difference $\Delta\rho/\rho$ is of order unity, and one study shows that air flow within urban areas can be considered purely buoyancy-driven when the B parameter is larger than about 50.
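The "approaches or exceeds unity" criterion quoted above is normally phrased with standard dimensionless groups; the following is a sketch from general convection theory rather than a formula recovered from this page.

% Grashof number: buoyancy forces relative to viscous forces.
\[
  \mathrm{Gr} = \frac{g \, \beta \, \Delta T \, L^{3}}{\nu^{2}}
\]
% Mixed-convection (Richardson-type) parameter: buoyancy relative to inertia.
% Values of order unity or larger indicate strong buoyancy contributions to the flow.
\[
  \mathrm{Ri} \sim \frac{\mathrm{Gr}}{\mathrm{Re}^{2}} = \frac{g \, \beta \, \Delta T \, L}{U^{2}}
\]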
We mention again that the magnitude of the parameter N indicates the relative strengths of the two buoyancy forces, while its algebraic sign gives their relative direction. Comparison with full three-dimensional boundary-integral calculations for deformable drops without van der Waals attraction shows that, when the drop-to-medium viscosity ratio is of order unity, the asymptotic approach is valid over a wide range of small and moderately small capillary numbers. Both transpiration-induced buoyancy and diffusional transport play a decisive role in heat transfer when the wall-to-stream temperature ratio $T_w/T_\infty$ is only moderately different from unity. Heat transfer can occur by three main methods: conduction, convection and radiation. One forum reply recommends the book "Buoyancy-Induced Flows" by Gebhart, Mahajan, Jaluria and a fourth author (Samakia?).

Classroom-style statements: most students will predict that a rock sinks, but a pumice stone demonstrates how density affects buoyancy. Buoyancy is the upward force we need from the water to stay afloat, and it is measured against weight; our bodies are mostly water, so a person's density is fairly close to that of water. One demo uses simple cubes that can be dragged and dropped into a tank and behave according to the laws of buoyancy. A commenter also notes that swamp water is a mixture rather than a solution, so even if its specific gravity is above unity, that higher SG will not contribute to the buoyancy of a pipe. For a tight balloon, the amount (mass) of air inside stays the same: $m_a = \mathrm{const}$.

Tax side: the regression coefficient of the interaction variable, i.e. the differential tax buoyancy, is significantly negative, showing that tax buoyancy is less than unity during the post-tax-reform period; the elasticity of total tax revenue with respect to both total GDP and the non-agricultural GDP base is less than unity. A buoyancy ratio greater than unity over the long term supports the sustainability of fiscal policy.

Unity side: Triton is a full-featured water engine for Unity, including buoyancy, impacts, ship wakes, rotor wash, volumetric decal textures and reflections. There are tutorials on creating floating physics objects on water within Unity3D and on using Unity's 2D physics to build a pinball game. LIQUID PHYSICS is a simulation asset that uses a 2D physics engine with particles to model water, merging the particles into a water surface with a shader; it can produce flow effects similar to Where's My Water and performs well enough for mobile. Boat Control and Buoyancy Toolkit Pro makes a large boat turn realistically and float on simple water, objects cause ripples when moving through the water, and one asset promises a realistic buoyancy setup in seconds: drag and drop the script and you have a floating entity. A proposed "buoyancy machine" (not built as of May 1998, and expensive for its estimated power output) would use the buoyant force to drag air upward in a wheel and impart rotational force to the axle.

Physics of floating: buoyancy is caused by the differences in pressure acting on opposite sides of an object immersed in a static fluid. When a ship floats in still water, the pressure of the water on the hull below the waterline pushes upward, creating a buoyant force; this is partly why a large, heavy object like a ship can float. Buoyancy is the upward force an object feels from the water; compared with the object's weight, it determines whether the object floats, sinks or remains neutrally buoyant, and it reflects the difference between the object's density and that of the fluid it displaces. The other force in the pair is the upward pressure of the fluid on the object, and all we need to do is calculate the magnitude and point of application of the buoyant force. However, this law, also known as Archimedes' principle (AP), does not yield the force observed when the body is in contact with the container walls; for a block immersed in a liquid and resting on the bottom, a downward force that increases with depth is observed.
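A minimal numerical sketch of the free-floating case of Archimedes' principle described above (no contact with container walls); the densities and volume are made-up example values, not taken from this page.

# Archimedes' principle: the buoyant force equals the weight of the displaced fluid.
G = 9.81  # gravitational acceleration, m/s^2

def buoyant_force(fluid_density, displaced_volume):
    """Upward force in newtons on a body displacing `displaced_volume` m^3 of fluid."""
    return fluid_density * G * displaced_volume

def net_vertical_force(body_density, body_volume, fluid_density):
    """Positive result means the fully submerged body tends to rise; negative means it sinks."""
    weight = body_density * G * body_volume                 # downward
    buoyancy = buoyant_force(fluid_density, body_volume)    # upward, fully submerged
    return buoyancy - weight

# Example: ice (~917 kg/m^3) in fresh water (~1000 kg/m^3) gives a positive result, so it floats.
print(net_vertical_force(917.0, 0.001, 1000.0))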
The density of water is its mass per unit volume, which depends on the temperature of the water.

Tax side: buoyancy is conventionally defined as the ratio of the percentage change in tax collection to the percentage change in GDP, where the latter is a proxy for the percentage change in personal pre-tax income. One downloadable paper gives empirical content to the differential coefficient of tax (revenue) buoyancy during the post-tax-reform period in India by fitting a double-log regression model with an interaction variable to stationary time-series data, based on Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests. He also estimated the buoyancy; if the elasticity coefficient exceeds unity, revenue growth exceeds GDP growth. The study of tax elasticity and buoyancy is also useful for revenue forecasting.

Physics side: a newton is the unit of force required to accelerate a mass of 1 kg by 1 m per second per second. In pure natural convection, the strength of the buoyancy-induced flow is measured by the Rayleigh number. If the Richardson number is much greater than unity, buoyancy is dominant, in the sense that there is insufficient kinetic energy to homogenize the fluids. The turbulent Prandtl and Schmidt numbers are approximately unity. In porous-medium convection one uses the velocity scale $U_c = g K \beta \Delta T / \nu$, with N the buoyancy ratio parameter. Thomas showed that backing occurs if the ratio of the buoyancy head to the velocity head is greater than unity. The buoyancy force per unit volume, proportional to g times the density difference, is resisted by viscous drag within the fluid. Air can be regarded as a liquid of very low density in which "lighter" objects, like balloons, can float.

Unity side: there is a voxel-based buoyancy approach for Unity, and forum threads mention a couple of boat buoyancy scripts, at least one of them free. One water asset comes with mobile-ready refractive (glass) shaders, water textures, an example of buoyancy physics, and water-collision effects with sounds, and there is a Simple Buoyancy Physics demo for the Unity Web Player. A Chinese-language tutorial shows how to make a simple shatter effect in Unity, splitting an object into smaller fragments when it is hit or destroyed instead of simply deleting it. As an approximation, you could make a "buoy" component that behaves similarly to Unity's colliders: if each buoy is a sphere, it isn't hard to calculate the volume of the submerged slice to find how much of it is underwater.
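The "spherical buoy" approximation mentioned just above has a closed form: the submerged part of a sphere is a spherical cap. Below is a sketch under that assumption; the constants and function names are mine, not from any particular Unity asset.

import math

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def submerged_cap_volume(radius, depth_below_surface):
    """Volume of the spherical cap below the water line.

    `depth_below_surface` is the distance from the water surface down to the
    lowest point of the sphere; it is clamped to [0, 2 * radius].
    """
    h = max(0.0, min(2.0 * radius, depth_below_surface))
    return math.pi * h * h * (3.0 * radius - h) / 3.0

def buoy_force(radius, center_y, water_level=0.0):
    """Upward force on one buoy sphere whose centre sits at height `center_y`."""
    depth = water_level - (center_y - radius)   # how far the sphere dips below the surface
    displaced = submerged_cap_volume(radius, depth)
    return RHO_WATER * G * displaced

# A buoy of radius 0.5 m with its centre exactly at the surface is half submerged.
print(buoy_force(0.5, 0.0))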
Suimono 2.1 brings advanced ocean and interactive water effects to the Unity engine; its water surface is offset according to a custom wave function that can be used to control large-scale waves. One French review calls such a system ideal for games set on the water and quite impressive in the realism it brings. Dynamic Water Physics is a floating-object simulator that uses a self-generated simplified mesh to create realistic behaviour, with documentation provided in the Assets folder. From Part 7 of one tutorial series, the simulation code already has density, which is used to compute fluid buoyancy. Japanese tips posts cover an editor extension that keeps the Game view Scale from resetting to 1 after compilation or when entering play mode, and the Unity3D-ExecutionOrderAttribute package for controlling script execution order.

Ship and lab notes: in "Ship Stability for Masters and Mates", the lever of zero is taken at the keel, so the final answer is relative to that point. Please use MKS units throughout the lab; the key concepts are buoyancy, specific gravity and density. The comparison of the slug and the pound makes it clear why the size of the pound is more practical for commerce, and 1 N is equal to 100,000 dynes. The ratio Keff/k is greater than unity because of fluid motion driven by buoyancy forces. If the absorption fraction is set to unity, all incident shortwave radiation is absorbed at the surface and there is no penetrative heating of the ocean; otherwise only a fraction of the shortwave flux is absorbed at the surface. In weighing, the air-buoyancy correction depends only on the difference between the volume of the body being weighed and the volume of the weights. Buoyancy is more important in deep canyons. As for free energy: if you can figure out how to get free energy out of buoyancy, then you should also be able to get it out of two pails of water connected by a rope thrown over a pulley.

Tax side: during the pre-reform era tax buoyancy was just above unity, while in the post-reform era it is less than unity, showing that gross tax is relatively inelastic.
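The buoyancy and elasticity figures quoted throughout this page come from double-log regressions of revenue on GDP. Here is a toy sketch of that estimate; the revenue and GDP series are invented illustration data, not Nepali or Indian statistics.

import numpy as np

# Hypothetical annual series covering the same years.
gdp = np.array([100.0, 108.0, 117.0, 125.0, 136.0, 148.0])
tax = np.array([10.0, 10.9, 12.0, 12.8, 14.1, 15.5])

# Tax buoyancy is the slope b of ln(tax) = a + b * ln(GDP);
# b > 1 means revenue grows faster than GDP, b < 1 means it lags.
b, a = np.polyfit(np.log(gdp), np.log(tax), 1)
print(f"estimated buoyancy: {b:.2f}")

# A cruder year-on-year version of the same idea.
yearly = (np.diff(tax) / tax[:-1]) / (np.diff(gdp) / gdp[:-1])
print("year-on-year buoyancy:", np.round(yearly, 2))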
https://emresahin.net/
# TIL June 11 Creating AWS S3 buckets from command line If you have necessary AWS credentials export AWS_ACCESS_KEY_ID="XXXX" export AWS_SECRET_ACCESS_KEY="YYYY" you can install aws cli with pip3 install --user aws and create an S3 bucket from command line without opening a web browser. aws s3api create-bucket --acl public-read --bucket my-unique-bucket-name --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1 which will create a bucket readable by public in http://my-unique-bucket-name.s3.amazonaws.com. Then you can copy your files with aws s3 cp my-file. # Translating Ottoman Turkish Spelling to Latin Alphabet using Surface Forms dervaze is a project I have started back in my Ph.D. work in 2015 to translate Ottoman Turkish to modern Turkish spelling and providing an OCR/ICR/handwriting recognition engine for Ottoman language. The reason I had to stop was the lack of data, since without some considerable amount of data, statistical methods for both Natural Language Processing and Computer Vision fails. Producing and maintaining data seemed a much more important burden than having technical solutions, so I mostly gave up the idea that a working solution is obtainable with the classical OCR techniques. # telegram-send There is a little Python command line program called telegram-send to send messages to you telegram account. First you need to register a new bot from BotFather and get a key. Then you pip3 install --user telegram-send and prepare a config file in ~/.config/telegram-send.conf [telegram] token = <TOKEN_YOU_GET_FROM_BOT_FATHER> chat_id = <CHAT OR USER ID> You need to start a conversation with the bot and learn your user id (that’s identical with the chat id you start with the bot. # aerc and goneovim aerc I started to use aerc as a command line email client. At first I used its archived Github repository but the software was buggy. Then the real repository gave me the fastest processing IMAP client I’ve ever had. Asynchronous operations make the workflow very smooth and you don’t wait the server for each deleted/archived mail. goneovim I also began using a neovim GUI called goneovim. Formerly I was using Neovim-GTK for this but somehow (either from Fira Code’s ligatures or some kind of incompatibility) the visuals were ugly and there was some lag. # TIL May 1 Nota seems a nice command line calculator. It converts what you type into ASCII art formulas. In[1]: 10 + 10 Out[1]: 20.0 _____ In[2]: ╲╱ 100 Out[2]: 10.0 ┌ ┐ In[3]: Max │ 10 , 1 , 21 , -3 │ └ ┘ Out[3]: 21.0 In[4]: ⟨Emre's Number⟩ ≡ 79 Out[4]: 79.0 _______________ In[5]: ╲╱ Emre's Number Out[5]: 8.888194417315589 2 In[6]: Emre's Number Out[6]: 6241.0 Emre's Number In[7]: Emre's Number Out[7]: 8. When I try to use sed for find edit in multiple files, always I remember that perl -pe is better suited for this task. Today this happened again. I tried to find and replace lines starting with # Bla bla with title: Bla bla and it was easier to use perl -pe 's|^#+ (.*)|title: $1|g than identifying what kind of regular expressions does sed use. For Hugo front matter at the beginning of files, it’s possible to determine type but not possible to set the section. # TIL April 28 In yesterday’s post, I’ve presented a Python script to convert Pelican preamble files to YAML for Hugo. For some UTF-8 files, these is a BOM marker at the beginning of the file. The script (as a true quick and dirty solution) doesn’t check the presence of such marker and it cannot detect the Title element if it exists. 
I added an fm = fm.strip('\ufeff') line to clear BOM marker from a line if it exists. # TIL April 27 This blog has now moved to Amazon Amplify. It’s connected to a Bitbucket git repository and AWS pulls it at the moment it’s pushed. I was polling the repository manually in a VPS but this is much quicker. Setting your domain name for Amplify requires (a) to write a CNAME record to prove ownership. Then (b) you modify ALIAS and CNAME records of @ and www records to a cloudfront URL given to you and automatically your site becomes https. # TIL April 26 Hugo has a Casper theme but not listed in the official themes directory. Hosting static websites on AWS takes 5 minutes of configuration. For some of my books, I think I can use some ornamental public domain images. This guy talks about a third way to stop the pandemic: Testing everyone. The one that is most proven and ready to scale is based on a technology called LSPR. # Anonymous functions in dart Sometimes we need anonymous functions to use for once. Dart allows two similar syntax for writing these. First one is when there is a single expression to write. (a, b) => a + b The other is when you need to write multiple statements in an anonymous functions. (a, b) { return a + b; } # Should we expect a software crisis? I read a blog post titled The Quiet Crisis unfolding in Software Development that mainly says, current software building practices lead to accumulate technical debt and legacy software becomes unmanageable in time. It warns about highly skilled developers “These kinds of high performers are actually low performers when when TCO is factored in. Unless you’re a startup where time to market is the highest priority, keep these kinds of developers under close scrutiny with extensive design and code reviews. # Zenity When you need a simple dialog to get input from the user or just some piece of information in a GUI dialog, zenity helps. It allows scripts to receive user input by dialog. zenity --info --text="Merge complete. Updated 3 of 10 files." # Python Data Science Handbook The book has the chapters on iPython, Numpy, Pandas, Matplotlib and Machine Learning. It looks, it doesn’t delve into technical/theoretical aspects but focuses on Python libraries regarding data science. Github page for the book. # SSH keys for Multiple Accounts in Github I have multiple Github accounts and some of these are collaborator in others. I don’t like to write passwords every time I push, so I set up SSH keys for my accounts. But Github (understandably) doesn’t accept a key in more than one account. (Otherwise how can it know?) But there is a way to use .ssh/config file to use different keys for different target urls. # Personal account, - the default config Host github. # Fixing Pip Timeout Problems For a large package like tensorflow I was experiencing the following error in pip pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. v = self._sslobj.read(len, buffer) socket.timeout: The read operation timed out I noticed the pip has a timeout parameter that can be set: pip --default-timeout=1000 install package-name # How to convert Numpy image to QImage? When writing the GUI code in Qt for a deep learning system, a general problem is to convert an image (read from disk or camera using OpenCV) in the form of a Numpy array, to a QImage to be shown in a form or widget. 
There are basically two problems: a Numpy array's data type usually has more than 8 bits, and OpenCV reads the image in BGR format rather than the more common RGB. # Query Logging in Databases when using Parameters We don't construct database queries with string formatting, due to security problems. SQL injection attacks stem from a lack of escaping and from building queries out of given strings. Instead we use parameter passing to the database engine, e.g. SELECT * FROM people WHERE name = ?, and send the query and the parameters /separately/ to the database. All databases support this kind of query. In Sqlite 3 under Python, we use # Coursera Deep Learning Specialization Notes These do not contain answers to quizzes or assignments, per the Honor Code. If you are looking for those, look elsewhere. Binary Classification Given a picture, classify it as cat or non-cat. The result is $\hat{y} = P(y=1 \mid x)$. In other words, given $x$, we calculate the probability that this data represents a cat. Feature Vector from Image We convert a picture, e.g. a (64, 64, 3) image, into a (64 * 64 * 3, 1) feature vector. # Numpy ValueError while using dlib's face detector For two days I was trying to find a bug in my code, because an assertion that uses numpy.max was giving an error like ValueError: zero-size array to reduction operation maximum which has no identity which didn't seem reasonable. I'm building a face recognizer with dlib's frontal face detector, and today I noticed that some of the results return negative coordinates in face detection. This means the detected face is partial, although it's a bit of a stretch to use negative coordinates for this. # Static Variables in Python I'm using this so much in different projects that I would like to keep it here. Python doesn't have C-style static variables natively. (Although it supports class variables, which can be used for a similar purpose in OOP.) However, as functions are also objects in Python, it's possible to embed variables inside the function. An elegant solution at SO creates a decorator for static variables.

def static_vars(**kwargs):
    def decorate(func):
        for k in kwargs:
            setattr(func, k, kwargs[k])
        return func
    return decorate

@static_vars(counter=0)
def foo():
    foo.

# Development Journal, June 9 I began implementing the Ottoman translator using Finite State Transducers via OpenFST. Instead of using ad hoc algorithms to translate Ottoman and Turkish into each other, I'll be creating FSTs. In the past I have used FOMA and TRmorph as a building block and basis for Ottoman conversion. However, I saw that writing something on top of a morphological analyzer to convert Ottoman to Turkish requires almost another morphological analyzer. (This is also true for Turkish-to-Ottoman conversion, because the spelling rules of Ottoman require another layer of FSTs.) # Adding version information to executables in CMake projects In programming, versioning your code files is of immense importance. Most of the files need to be updated, renamed and merged constantly. You also need backups, as one learns by losing work to various computer problems. Another problem we face is establishing a connection between an executable file or library and its code. We normally don't add executable files to version control, as they are produced from code files.
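Returning to the numpy-to-QImage post above, here is a minimal sketch of the two conversions it mentions (BGR to RGB, and handing Qt an 8-bit contiguous buffer with the right stride). It assumes PyQt5 and OpenCV are installed; the function name and structure are my own illustration, not code from the original post.

import cv2
import numpy as np
from PyQt5.QtGui import QImage

def ndarray_to_qimage(frame_bgr: np.ndarray) -> QImage:
    """Convert an OpenCV BGR uint8 image to a QImage usable in a widget."""
    # OpenCV stores channels as BGR; Qt's Format_RGB888 expects RGB.
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # QImage needs 8-bit, C-contiguous memory and the row stride in bytes.
    rgb = np.ascontiguousarray(rgb, dtype=np.uint8)
    height, width, channels = rgb.shape
    bytes_per_line = channels * width
    # .copy() detaches the QImage from the lifetime of the numpy buffer.
    return QImage(rgb.data, width, height, bytes_per_line,
                  QImage.Format_RGB888).copy()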
# When y and p commands in IdeaVim don't work I began using the IdeaVim plugin for Android Studio some time ago. It's nice, but as a Vim newbie I wasn't aware that Vim doesn't use the system (Windows, macOS, XWindow, etc.) clipboard for copy/paste by default. So when you use the y command in IdeaVim, what you yank can't be pasted into other applications. The solution is easy when you spot it: your ~/.ideavimrc file needs the following line set clipboard+=unnamed # The Sorry State of NDK testing in Android I'm writing a C library to use in Android, iOS and Python applications. Although the C library has its own unit tests, I wanted to write a few more to ensure that data transfer between the C and Android layers is correct. In Android, one needs to put unit test files in the app/src/test/*module-name* directory. I spent a few hours yesterday writing tests that check that the conversion between visenc and unicode is correct in Android. # Regular and Recurring Tasks in todo.txt I'm using the todo.txt format to keep some of my daily tasks. It's a plain text format, and both iOS and Android have apps for it, like SimpleTasks. Emacs and Vim have support for the format too. Actually you don't need a special editor for it; the format is so simple that even Notepad may be enough. I have a shell script to add daily recurring tasks, like Drink Water or Pray Maghreb, to this file. # Progress on Ottoman Translation - 2018, Week 6 Some of the following posts will be like a TODO list for the coming months: what am I planning with dervaze and its mobile versions? As I have become mostly a solo developer, I'll share my experience with the problem here to shed light for those interested. The technology for Ottoman OCR was mostly ready when my interruptions regarding family life began. I'll need to check what is available, but a more pressing problem for me is the speed of translation. # A Restart It's been a while, a few years, since I've updated this site. I had some of my technical writing elsewhere, but I've decided that I can restart updating here as well. I've moved the site to Pelican and moved older writings here. Much of the content is outside my current interests, but I'm keeping it for no concrete reason. I'll try to update the site daily with my adventures in software development and technology. # CV as of February 2019 Personal Info Name Ibadullah Emre Sahin Work Address: Teknokrat Yazilim A.S. Yahsibey Mh. Yahsibey Ck. No: 8 Bursa Turkey Birth 15 July 1979, Ankara Turkey Citizenship Turkish Gender Male Contact [email protected] +90 532 261 8985 Work Experience Various small programming projects during high school (1995-1998). Project Manager, DevOps and Software Developer at YD Yazilim, Ankara, Turkey. Freelance Developer during university years and after (2000-2015) # File selection operators in zsh zsh has somewhat more advanced operators for selecting files than bash. Thanks to them, it is possible to reach all the files in a directory at once and operate on them. Here are a few short examples, all run from the current directory: ls * (all files), ls **, ls */**(.mw-2) (plain files modified within the last two weeks), ls */**(.Lm+100) (plain files larger than 100 MB), ls */**(.R) (world-readable plain files), ls /etc/**(.W) (world-writable plain files under /etc), ls etc/**(.Wmw-1) (world-writable plain files modified within the last week). Besides selection options like these, zsh also makes it easy to split a file name into parts. For example, to take the part of the file name before the extension, we write *(:r). # Visual Transliteration for Ottoman There are already various transliteration systems for representing Arabic-based scripts in the Roman alphabet. However, all of them aim to represent phonemes in the transliteration, without paying attention to different visual elements.
When we are manually transcribing these texts, the method is fine. However when we try to represent visual elements in scanned handwritten documents, we faced some problems regarding these transliteration systems. Since conventional systems aim to represent phonemes, a correct reading is necessary and this requires expertise in the represented language. # A Fast Local Descriptor for Dense Matching Authors: Engin Tola, Vincent Lepetit, Pascal Fua Keywords: Stereo image descriptor circle quantization formalization binary mask Depth estimation Q1: How depth estimation is related with object recognition? Objects are located in a 3D environment and in order to recognize them correctly, we need to be able to recreate their layout in a scene. With such an aid, we cn successfully determine the object boundaries. Q2: What does the descriptor contain? # A Need for Yet Another Transliteration Alphabet for Ottoman The Ottoman Text Archival Project has its own reversible transcription system. However, for word labels, this is an overkill and too much work for experts. I'm looking for one-to-one mapping between different visual elements of a word and its representation in UTF-8. The labels should be simple to remember, but variable enough to represent visual variations of words. I'm thinking to create letter+digit codes. Letter part will reflect the most similar sound, the digit will reflect the visual variation. # A Regular Conversion Algorithm Between Turkish and Ottoman Modern Turkish spells all Turkish/Arabic/Farsi rooted words according to their pronunciation. When it comes to convert from a system to another, this creates a problem that might be solved with the aid of regular expressions. For example, in Ottoman a word is spelled as mnwr, as letters corresponding to letters in Arabic, but in Turkish, the spelling reflects the pronunciation as münevver. Since 1-1 mapping is not possible between these two writing systems, a set of possible Ottoman spellings must be produced with a regular expression. # Backup Script for Recent Files I decided to write a script to backup only the recent files. There are solutions based on unison that work periodically for all files, but as I'm changing projects, I need to configure new backups for these projects as well. However this is cumbersome and error prone, it's easy to forget to add new artifacts to backup scripts and lose them in a state of emergency. Therefore I decided that a small bash script that works with rsync and find works better. # cat-for-writers I write everyday. Everyday I write. I have a quota of words to fill and after each sleeping session, I sit in front of my keyboard and begin pressing words. I was using Emacs for this. Emacs, the One True Editor. Yet it has one flaw that makes me divert from this writing routine. It has too many features and when I see a block and some idea that I'm not big enough to put into words, I begin to play with it. # Converting Latin based Turkish spelling to Ottoman I'm working on a system to search Ottoman document collections. In order to query a large collection in Ottoman, the user needs to write the query in Ottoman, which uses Arabic based alphabet with completely different set of spelling rules. This limits the usability, since most of the users will not be familiar with spelling. Experts do, but we can't assume experts will be able to use it. There are various methods of transcribing Ottoman to modern Turkish. 
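As a toy illustration of the "set of possible spellings" idea in the regular-conversion post above: the candidate table below is a deliberately tiny, made-up fragment, and real Ottoman orthography needs a much larger, context-sensitive mapping (for example the FST approach mentioned elsewhere in these posts), so treat this as a sketch, not the author's algorithm.

import re
from itertools import product

# Hypothetical, incomplete candidate table: each modern Turkish letter maps to
# the romanized Ottoman letters it might be written with; short vowels often drop.
CANDIDATES = {
    "m": ["m"], "n": ["n"], "v": ["w", "v"], "r": ["r"],
    "ü": ["", "w"], "e": ["", "a"],  # vowels: omitted, or written with a mater lectionis
}

def collapse_gemination(skeleton):
    """Doubled consonants are written once (gemination is marked separately, if at all)."""
    return re.sub(r"(.)\1+", r"\1", skeleton)

def possible_skeletons(word):
    """Yield candidate Ottoman consonant skeletons for a modern Turkish spelling."""
    options = [CANDIDATES.get(ch, [ch]) for ch in word]
    for combo in product(*options):
        yield collapse_gemination("".join(combo))

# "münevver" should, among other candidates, yield the skeleton "mnwr".
print(sorted(set(possible_skeletons("münevver"))))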
# Copying and pasting with XWindow clipboard from tmux tmux does not natively support XWindow's clipboard. With two lines in .tmux.conf you can configure two keys to send and retrieve clipboard content. Traditionally applications use PRIMARY selection which uses the mouse selection for copy and pastes with the third button. However this becomes less and less common, so I'll configure the CLIPBOARD selection most newer browsers, applications and Emacs use. Add following lines to .tmux.conf: # move x clipboard into tmux paste buffer bind < run "xsel -ob | tmux load-buffer - ; tmux paste-buffer " # move tmux copy buffer into x clipboard bind > run "( tmux show-buffer | xsel -bi ) && tmux display-message \"ok! # Dervaze: A Transliteration System for Ottoman /Dervaze/ (meaning "the portal") is a set of tools that aim to transliterate historical Ottoman documents to Modern Turkish. These will be hosted in http://dervaze.com in the near future. In this document, I describe the transliteration system. The system is organized as a pipeline in which the tools at a stage produce the input of the next stage. Input to the system is a set of historical document images and the output is either a search result or a textual representation of these documents. # ggplot2 Elegant Graphics for Data Analysis The important parts of the book are grammar of ggplot qplot for easy plotting geoms linear models in plots qplot =qplot()= is designed after plot() The three most important parameters to qplot are x, y and data. If data is specified, it's used as a namespace for variables =qplot(carat, price, data = diamonds)= =qplot(carat, x * y * z, data = diamonds)= =color= is another argument that can be specified for differentiating. # Midori My browser of choice was Google Chrome, but latest versions became resource hogs and I was feeling this in my older machines. I decided to take a look at alternative browsers and settled on Midori. I turned off JavaScript (best JS is dead JS), turned on ad blocking and keyboard shortcut customization (Ctrl-F to Ctrl-S as in Emacs). It's loading noticably faster and I can't guess the number of tabs open in my browser while using other applications. # mu4e I used mutt for years. I like it. Its customizability and macros make me feel at home and I was able to automate most of my tasks with it. I began to use mutt after gnus on Emacs. The reason I left gnus behind is that it was incompatible with offlineimap and slow for IMAP use. I see no point installing a local IMAP server when the tool must work with Maildirs does not work. # My Emacs Packages I'm using Emacs for about 7 or 8 years now, maybe a bit less than that, maybe more. I tried to quit several times for other editors, different workflows and everytime I returned with more enthusiasm. It's hard to tell for those who use their editors with mouse clicks on pretty icons but once you catch this virus called doing everything from the keyboard, it becomes attached to your digital (from digitus, finger) psyche that is impossible to leave behind. # nginx and php-fpm notes These are a few points that I put as a reminder to myself. If you host multiple sites, only one of them (default) should have listen 80 directive. The rest are defined by server_name directives. Debian's default configuration file comes with Unix socket definitions for php-fpm. Nginx needs to connect via TCP port, it should be changed to port directives. # Notes on Computer Vision A Modern Approach 2E A: What do you want from me? 
What should I know to consider myself an expert in CV? A: How is an object separated from its background? An object is separated from its background in an image by an occluding contour. A: What would you want from Chapter 1? Chapter 1 is about cameras and their parameters. I don't want to learn much about these at the moment. A: What would you want from Chapter 2? # Paper Review: A practical approximation algorithm for LMS line estimator Authors: David M. Mount, Nathan S. Netanyahu, Kathleen Romanik, Ruth Silverman, Angela Y. Wu Keywords: LMS estimator O(n log n) bracelet slab random approximation quantiles Q1: What is LMS? Given a set of points p0, ..., pn, LMS finds a line q0, q1 that minimizes the median of the squared distances of p0, ..., pn. This is in contrast with summing up all the squared distances and minimizing them, as in OLS (Ordinary Least Squares). # Paper Review: Computerized Paleography: Tools for Historical Manuscripts Authors: Lior Wolf, Liza Potikha, Nachum Dershowitz, Roni Shweka, Yaacov Choueka Keywords: handwritten paleography fragments SIFT sparse coding dictionaries Q1: What is the ultimate goal of the authors? The two main goals are providing tools to bring together the fragments of the same page (from the Cairo Genizah) and trying to classify handwriting and dates. Q2: How is SIFT used? SIFT is used at (all?) points of a letter to generate descriptors. # Paper Review: FREAK: Fast Retina Keypoint URL: http://www.ivpe.com/papers/freak.pdf Authors: Alexandre Alahi, Raphael Ortiz, Pierre Vandergheynst Keywords: Keypoint Binary descriptor Retina Sampling Saccadic Coarse-to-fine Orientation Q1: What is the formula for the retina pattern? The one difference from BRISK is that the pattern has overlapping circles. In BRISK they were tangential. Redundancy increases recognition. The circles are log-polar. In this case, it's similar to Shape Context descriptors, but we don't divide into regions; we create increasingly larger circles on polar lines. # Paper Review: Handwritten character recognition using elastic matching based on a class-dependent deformation model Authors: Seiichi Uchida and Hiroaki Sakoe Keywords: Elastic Image Matching Eigen-deformations Tangent distance PCA Q1: How are classes defined? Are they defined per letter? The classes are defined per letter. Q2: How does a class define a deformation? Q3: What is the deformation model? Q4: How are the experimental results? Q5: Is it possible to apply this to Ottoman? # Paper Review: High Performance Layout Analysis for Arabic and Urdu Authors: Syed Saqib Bukhari, Faisal Shafait and Thomas M. Breuel Keywords: ridge printed text non-text segmentation gaussian-filter bank reading order Q1: How is line skew determined? There is a θ parameter in the Gaussian kernel which is used to produce ridges. This may be used in detecting the skew, but since it's constant for an entire page, a varying line skew will probably decrease its performance. Q2: How are non-text portions detected? # Paper Review: HMM-Based Alignment of Inaccurate Transcriptions for Historical Documents Authors: Andreas Fischer, Emanuel Indermühle, Volkmar Frinken and Horst Bunke Keywords: error tolerant DTW HMM inaccurate transcriptions Parzival DoG string alignment keyword spotting Viterbi Q1: What's the measure for success of alignment? The measure for success is (words − deletions − insertions − substitutions) / words. It gives the accuracy of alignment. Q2: What are the features used in keyword spotting? Q3: How is the Viterbi algorithm employed?
Q4: What does the first pass receive and produce? # Paper Review: Polygonal Approximation of Digital Curves to Preserve Original Shapes Authors: Daeho Lee, Seung Gwan Lee Keywords: dominant points consecutive vectors toothbrush shape distance metric smallest perpendicular distance Q1: How is the usual calculation of distance done? Minor DPs are deleted in the approximation. A minor DP is a DP where the perpendicular distance between the point and the straight line is minimum. (The original note has a small sketch of two points a, a defining a line and a point b off it.) Here b is deleted when its distance to the line a-a is minimum. # Paper Review: Shape Classification Using Zernike Moments A: What is a moment? A moment is defined as $m_{p,q} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} x^p y^q f(x, y)\,dx\,dy$. In other words, it's the summation of the figure w.r.t. the function f over both axes. A: What are Zernike moments? Zernike moments are complex polynomial functions that we use to sum the elements of a shape. They were first introduced in the 1930s. The higher the order, the more complex the shape appears. # Paper Review: Text Line Segmentation of Historical Documents: A Survey Authors: Laurence Likforman-Sulem, Abderrezak Sahour, Bruno Taconet URL: http://arxiv.org/pdf/0704.1267.pdf Keywords: page segmentation overlapping components image quality document complexity preprocessing projection based smearing based grouping based hough transform based repulsive attractive stochastic touching components Q1: What are the most usable techniques for Ottoman divans? Likforman-Sulem and Faure's technique, which uses Gestalt criteria to associate text elements, might be of use. Feldbach and Tennies' work, which was tried on Church Registers, may also be helpful. # Paper Review: Three Things Everyone Should Know to Improve Object Retrieval Authors: Relja Arandjelovic, Andrew Zisserman Keywords: large scale image datasets rootSIFT image augmentation query expansion paris buildings Q1: What's RootSIFT and how does it improve over L2? /RootSIFT/ is a modified SIFT descriptor whose elements are the square roots of the L1-normalized SIFT descriptor's elements. Comparing RootSIFT descriptors with the Euclidean distance (L2) is equivalent to using the Hellinger kernel to compare SIFT descriptors: $d_E(\sqrt{x}, \sqrt{y})^2 = 2 - 2H(x, y)$. # Patch Histogram Feature This post will introduce a new feature for binary blobs like connected components in a text. The feature is called the patch histogram and it's the histogram of 3x3 patches of black and white pixels. We collect all 3x3 patches and count their frequency. A 3x3 patch of a binary image has 2^9 = 512 different combinations. For each of these combinations, we assign a number. I wrote the implementation in Python and here is a lookup table that converts all possible 3x3 patches to their ids. (A sketch of how such ids can be computed appears after these excerpts.) # Probabilistic Graphical Models Course Notes Preliminaries Distributions Video Suppose A has 2, B has 2 and C has 3 possible values. Their Joint Probability Distribution will contain 2x2x3=12 values. We can condition the values by setting a variable to a certain value. We can also marginalize the values down to a certain variable and check the distribution of this single variable. Factors file:~/bighome/Watch/1 - 4 - Factors (0640).mp4 A factor $\phi$ is a function that takes values for A, B and C and returns a real value. # R Notes These are the notes I took from here and there, including the Coursera Data Analysis course and R's online help, with help.start. Basics =R= objects have attributes which can be observed with the attributes() function. =<-= is the assignment operator. =:= is used to create integer sequences.
1:4 = 1 2 3 4. The =c= (concatenate) function can be used to create vectors from different kinds of objects: c(TRUE, FALSE) creates a logical vector, c(1+3i, 4+8i, 3-5i) creates a complex vector. # Randomness Course Notes Definitions of Randomness Kolmogorov Complexity of a sequence = the shortest algorithm that produces it. Martin-Löf: A sequence is random if it passes all statistical tests. It cannot be produced by a program shorter than itself. The digits of $\pi$ are not random in this sense. Not just "difficult to compute"; there is no consistent way to define the shortest algorithm. It's impossible to find a way to ensure that a sequence is random. # Recurrent Neural Networks These notes are gathered from various places. When I can, I give credits and links, but even if I don't, they are certainly not original ideas. Sequence Learning in RNNs An example of a sequence is a set of words in English. Sequence learning and transforming allows computers to translate this sequence to another language. Or, if no target exists, RNNs predict the next element in a sequence. The prediction blurs the line between supervised and unsupervised learning. # Shell (Bash and Zsh) Notes Don't use ~ in scripts, use $HOME I used ~ several times in scripts and it may or may not work. Use $HOME to refer to the home dir; it always works. Multithreaded programming requires a paradigm shift when it comes to return values of functions. C++11 provides std::async (http://en.cppreference.com/w/cpp/thread/async) to run functions asynchronously, but this is not available in older versions. My current project on word spotting on historical documents is fairly complete in functionality, but I decided that searching word images on page images concurrently is necessary for a speed-up. I'm already using Boost for much of the functionality, and instead of creating a dependency on the not-yet-mature C++11 support in various compilers, I decided to use =boost::thread=s.
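Returning to the Patch Histogram Feature note above: the original post's lookup table is not reproduced in this excerpt, but the id assignment it describes can be sketched by reading each 3x3 binary patch as a 9-bit number. The snippet below is only an illustration of that idea, not the original implementation.

```python
import numpy as np

def patch_id(patch):
    """Read a 3x3 binary patch (0/1 values) as a 9-bit integer in 0..511."""
    bits = np.asarray(patch, dtype=int).flatten()
    return int(sum(int(b) << i for i, b in enumerate(bits)))

def patch_histogram(image):
    """Histogram over the 512 possible 3x3 patches of a 0/1 image."""
    image = np.asarray(image, dtype=int)
    hist = np.zeros(512, dtype=int)
    rows, cols = image.shape
    for y in range(rows - 2):
        for x in range(cols - 2):
            hist[patch_id(image[y:y + 3, x:x + 3])] += 1
    return hist

# toy example: a 4x4 image with a 2x2 foreground square in the middle
blob = np.zeros((4, 4), dtype=int)
blob[1:3, 1:3] = 1
print(patch_histogram(blob).nonzero()[0])  # ids of the patches that actually occur
```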
2020-10-21 05:05:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2787894606590271, "perplexity": 4317.919398527087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107875980.5/warc/CC-MAIN-20201021035155-20201021065155-00188.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-2-section-2-5-an-introduction-to-problem-solving-exercise-set-page-167/55
## Introductory Algebra for College Students (7th Edition) Published by Pearson # Chapter 2 - Section 2.5 - An Introduction to Problem Solving - Exercise Set - Page 167: 55 #### Answer This statement is false. The true statement should read: $$x - 10 = 160$$ #### Work Step by Step This statement is false because we want to subtract $10$ from Bill's weight, $x$, not Bill's weight, $x$, from $10$. To make the statement true, we need to subtract $10$ pounds from Bill's weight. The new equation should read: $$x - 10 = 160$$
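For completeness (this step is not part of the original answer), solving the corrected equation gives Bill's weight: $$x - 10 = 160 \quad\Rightarrow\quad x = 160 + 10 = 170,$$ so Bill weighs $170$ pounds.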
2019-10-17 00:19:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4073431193828583, "perplexity": 1255.0052083139653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00122.warc.gz"}
https://www.tutorke.com/lesson/3293-below-is-a-sketch-of-a-graph-showing-the-change-in-viscosity-ease-of-flow-with-temperature-when-so.aspx
Below is a sketch of a graph showing the change in viscosity (ease of flow) with temperature when solid sulphur is heated. Describe what happens to the sulphur molecules when sulphur is heated from 150 °C to about 200 °C.
2022-01-17 07:29:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19556109607219696, "perplexity": 2972.0752752000462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300343.4/warc/CC-MAIN-20220117061125-20220117091125-00313.warc.gz"}
https://crypto.stackexchange.com/questions/26514/aes-and-homomorphic-encryption/26520
# AES and Homomorphic Encryption Is it possible to do the following? Input would be to generate a new AES key, encrypt the private data with that key, encrypt the AES key with the FHE key, and send the FHE-encrypted AES key along with the AES encrypted data to the compute node. It is possible to do, but depending on the performed operation, it may not be useful at all, so you have to choose the operation carefully. The ultimate goal of FHE is to perform generic computations over encrypted data. That is, you have a generic function $f$ and you want to be able to compute it using encrypted inputs, so that: $$f(\mathsf{Enc}(x), \mathsf{Enc}(y)) = \mathsf{Enc}(f(x,y))$$ If you encrypt the AES key, and perform generic computations on the encrypted key and some other encrypted input, then, thanks to the homomorphism of the FHE scheme, you will obtain after decryption the computation over the key and the other input to the function. Therefore, depending on the computation (i.e., the function $f$), this could produce a different AES key, and the AES encrypted data cannot be retrieved with the resulting key. However, that does not mean that what you suggest is not useful at all. A possible scenario is that the operation you perform over the encrypted data does not modify the original value, for example, when FHE is used for implementing a proxy re-encryption functionality. In this case, the original message would be the AES key encrypted under some public key $pk_1$, and the homomorphic function is actually decrypting the AES key under $pk_1$ and encrypting it again under $pk_2$. This way, when someone uses $sk_2$ to decrypt the result, he will obtain the original AES key. Suppose that you have your AES key, $K$, encrypted under $pk_1$, that is, $\mathsf{Enc}_{pk_1}(K)$. You also have a FHE scheme so that $f(\mathsf{Enc}_{pk_2}(x), \mathsf{Enc}_{pk_2}(y)) = \mathsf{Enc_{pk_2}}(f(x,y))$, for some other public key $pk_2$. Now simply set $x = sk_1$, and $y = \mathsf{Enc}_{pk_1}(K)$, so: $$f(\mathsf{Enc}_{pk_2}(sk_1), \mathsf{Enc}_{pk_2}(\mathsf{Enc}_{pk_1}(K))) = \mathsf{Enc_{pk_2}}(f(sk_1,\mathsf{Enc}_{pk_1}(K)))$$ As you can see, if $f$ is designed to decrypt ciphertexts using the corresponding secret key, that is, $f(sk_1,\mathsf{Enc}_{pk_1}(K)) = K$, you are implementing a proxy re-encryption scheme that transforms ciphertexts from one public key to another, without altering the original message: $$f(\mathsf{Enc}_{pk_2}(sk_1), \mathsf{Enc}_{pk_2}(\mathsf{Enc}_{pk_1}(K))) = \mathsf{Enc_{pk_2}}(K)$$ There are other types of use cases; see for example the ones in mikeazo's answer. • Proxy re-encryption is just a specific use-case. You could just use trans-encryption to drastically reduce the transmission size, and then do your computation on the data as usual. – Dillinur Jun 25 '15 at 13:36 • @Dillinur I merely put it as an example. I will edit the answer for clarifying that. – cygnusv Jun 25 '15 at 13:38 Yes, Where I have seen this idea primarily mentioned is to minimize the number of homomorphic operations that the client has to do. 1. Encrypt the AES key with FHE. 2. Encrypt the inputs with AES. 3. Send encrypted inputs and encrypted key to cloud. 4. Cloud uses encrypted AES key and AES encrypted inputs and runs the AES decryption circuit homomorphically. The outputs are FHE encrypted inputs. 5. Run computations on the FHE encrypted inputs. 6. Return FHE encrypted result to client. Without doing this, the process would look like this: 1. Encrypt the inputs with FHE. 2. 
Send encrypted inputs to cloud. 3. Cloud runs computations on the FHE encrypted inputs. 4. Return FHE encrypted result to client. This second option has far fewer steps, but since the FHE encrypt operation is typically very expensive (and produces large ciphertexts), the client has more work to do, and there is more data to transfer to the cloud. By encrypting the data with AES (a relatively cheap operation) and the key with FHE, the client is able to push off expensive computations to the cloud. This, along with the much smaller ciphertext sizes of AES compared to all existing FHE ciphers, are the major benefits of the approach you outline. You just have to make sure you encrypt the inputs with AES in such a way that when the AES circuit is executed homomorphically, the outputs are useful for computation. • You're kind of missing the point, after the decryption, you get the exact same inputs as you'd have received without using trans-encryption, so no "trick" is needed for it to be useful. The main advantage (and it's a tremendous one) is that you avoid the extremely severe data size inflation for the upload (your symmetric ciphertext having the same size as the cleartext). Please also note that 'AES' should be understand as 'a generic symmetric cipher', you wouldn't use AES with homomorphic encryption. – Dillinur Jun 25 '15 at 13:26 • @Dillinur I don't think I am missing the point at all. Your comment relates to one minor sentence at the end of my answer. The rest of my answer is exactly what you say. Also, I disagree that with your assertion that "no trick is needed". If I take a long string of integers separated by commas and encrypt it directly with AES-CBC it is going to be very hard for the cloud to do very useful operations with that. – mikeazo Jun 25 '15 at 13:30 • Regarding the useful operations, it will be as hard as if you just sent the same string with plain homomorphic encryption. You're only trading upload bandwidth with decryption overhead, beside that there is no impact on the algorithms you're willing to use on the data. – Dillinur Jun 25 '15 at 13:35 • @Dillinur would it? You would have to then split the homomorphically encrypted string on the commas, convert the ASCII representations of integers into usable integers. I'm not convinced that all that is as easy as you are implying. Remember, equality tests can't be done without interaction. – mikeazo Jun 25 '15 at 13:41 • If you send the same data, using both of your methods, you'll get the exact same problem. Sending the same string with just homomorphic encryption does not alleviate the problem you're stating in any way. Properly formating your inputs would solve this problem in both cases. – Dillinur Jun 25 '15 at 13:44
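As a rough illustration of the client/cloud split described in the answers above, the sketch below only shows who computes what. Every function name in it (aes_encrypt, fhe_encrypt, fhe_eval_aes_decrypt, fhe_eval) is a hypothetical placeholder rather than any real library's API, and the bodies are deliberately left as stubs.

```python
# Hypothetical primitives: placeholder names and empty stubs, not a real FHE/AES API.
def aes_encrypt(key, data): ...            # cheap symmetric encryption, small ciphertexts
def fhe_encrypt(pk, value): ...            # expensive FHE encryption, large ciphertexts
def fhe_eval_aes_decrypt(enc_key, ct): ... # run the AES decryption circuit homomorphically
def fhe_eval(f, *cts): ...                 # evaluate f over FHE ciphertexts

def client_upload(fhe_pk, aes_key, inputs):
    """Client: one expensive FHE encryption (the key) plus cheap AES encryptions."""
    return fhe_encrypt(fhe_pk, aes_key), [aes_encrypt(aes_key, x) for x in inputs]

def cloud_compute(f, enc_aes_key, aes_ciphertexts):
    """Cloud: trans-encrypt the AES ciphertexts into FHE ciphertexts, then compute f."""
    fhe_inputs = [fhe_eval_aes_decrypt(enc_aes_key, c) for c in aes_ciphertexts]
    return fhe_eval(f, *fhe_inputs)  # result stays encrypted; the client decrypts it
```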
2020-10-31 11:04:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.26791566610336304, "perplexity": 970.4682223751611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107917390.91/warc/CC-MAIN-20201031092246-20201031122246-00486.warc.gz"}
https://brilliant.org/problems/recurring-perfect-squares/
# Recurring perfect squares How many integers in the infinite sequence $11, \, 111, \, 1111, \, \ldots, \, \underbrace{11111\ldots1}_{n \text{ number of 1's}} \, , \ldots$ are perfect squares?
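The page states only the problem; no solution is added here. As a quick numerical sanity check (not a proof), one can test the first few repunits directly:

```python
from math import isqrt

# test which repunits 11, 111, 1111, ... with up to 20 digits are perfect squares
for n in range(2, 21):
    repunit = (10**n - 1) // 9   # the integer written with n ones
    root = isqrt(repunit)
    if root * root == repunit:
        print(f"{repunit} is a perfect square")
print("checked repunits with 2 to 20 digits")
```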
2017-12-12 06:45:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7873956561088562, "perplexity": 4124.863472901698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00786.warc.gz"}
http://tex.stackexchange.com/questions/85838/part-of-the-file-name-of-my-image-is-printed-in-the-document-twice
# Part of the file name of my image is printed in the document, twice [duplicate] Possible Duplicate: How to include graphics with spaces in their path? When I add a piece of figure-insertion code to the document, I'm faced with a problem: a number of irrelevant words appear. Code: \begin{figure} \centering \includegraphics[width=0.9\textwidth]{C:/Thesis/Latex/thesis_1(1)/Pictures/study area.jpg} \rule{35em}{0.3pt} \caption{The Grand St. Bernard wireless sensor network deployment (a) the coordinates of nodes according to the Swiss coordinate system (b) the distribution of the nodes in the study site \citep{r33}} \label{fig:study area} \end{figure} Problem: - ## marked as duplicate by egreg, Kurt, Martin Schröder, Werner, Andrew Swann Dec 6 '12 at 19:47 it looks as if this has something to do with the space in the file name (but others know more about this). what i really wanted to do is suggest that you put a "slash space" (\ ) after "St." so that the space there isn't so large. – barbara beeton Dec 6 '12 at 18:42 rename your file so that it does not include white spaces, e.g. study_area.jpg – Jörg Dec 6 '12 at 18:43
2016-02-12 20:30:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5905059576034546, "perplexity": 1638.3664813501816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165302.57/warc/CC-MAIN-20160205193925-00333-ip-10-236-182-209.ec2.internal.warc.gz"}
http://nbloomf.blog/posts/ml/Indices.html
# Sizes and Indices Posted on 2017-10-12 by nbloomf This post is part of a series of notes on machine learning. This post is literate Haskell; you can load the source into GHCi and play along. First some boilerplate. {-# LANGUAGE LambdaCase #-} module Indices where import Data.List import Test.QuickCheck import Test.QuickCheck.Test import System.Exit This post is just some preliminary ideas about tensors - nothing learning-specific yet. Fundamentally, supervised learning models are (nonlinear) functions involving vector spaces over $$\mathbb{R}$$. A lot of literature refers to the elements of these spaces as “tensors”, because, well, that’s what they are. But I think this word “tensor” can be unhelpful in this context for several reasons. For one thing, the correct answer to the question “what is a tensor?” quickly veers into multilinear functions and massive quotient spaces and universal properties and omg I just wanted to write a program to tell the difference between cats and dogs. So I’ll just say that a tensor “is” just a multidimensional array, in the sense that a linear transformation “is” a matrix, since for the most part we really want to think of tensors as data, and don’t care so much about the more abstract bits. With that said, what exactly is a multidimensional array? One definition is that it’s an element of a set like $\mathbb{R}^{k_1 \times k_2 \times \cdots \times k_t}$ where each $$k_t$$ is a natural number. And this is totally appropriate. But I am going to do something a little different. I’m not really comfortable defining things in terms of ellipses; that “dot dot dot” hides enough details to make rigorous calculations awkward to my taste. And since from a machine learning perspective we don’t really need the full power of tensor algebra, I suspect we can afford to take a different tack. Instead, let’s think for a minute about what we want out of a multidimensional array. An array is a really simple kind of data structure, consisting of entries that are accessed using their index or position in the array. That word dimension is a funny thing – what does it really mean here? In the strictest sense, it measures the number of “coordinates” needed to specify an entry in the array. So, for example, an array in $$\mathbb{R}^{2 \times 3 \times 4}$$ has “dimension” 3, since each entry has an address along 3 different “axes”. But then $$\mathbb{R}^{5 \times 6 \times 7}$$ also has dimension 3 in this sense. (In tensor language we’d call this the rank rather than dimension.) The reason why we attach numbers to things is typically to quantify how alike they are. So: how are arrays in $$\mathbb{R}^{2 \times 3 \times 4}$$ and $$\mathbb{R}^{5 \times 6 \times 7}$$ alike? Are they alike enough to warrant using a hefty word like dimension to express their similarity, especially when there’s a more relevant notion of vector space dimension floating around? I don’t think so. In this post I’ll define a couple of algebras in an attempt to nail down a useful notion of dimension, as well as shape, size, and index for multidimensional arrays. For now, when I say algebra I mean universal algebra; that is, a set with some functions on it that satisfy 0 or more universally quantified axioms. Let’s think again about that vector dimension $$2 \times 3 \times 4$$. This is a funny way to write a dimension. 
Yes, we can think of a natural number as the set of numbers less than itself, and that $$\times$$ like the cartesian product of sets, and then $$\mathbb{R}^{2 \times 3 \times 4}$$ can be thought of as a literal set of functions from the set $$2 \times 3 \times 4$$ to $$\mathbb{R}$$, as the notation suggests. But that $$2 \times 3 \times 4$$ makes it look like we want to express some kind of arithmetic that remembers where it came from, in the sense that $$2 \times 3$$ and $$3 \times 2$$ are different. Doing this with actual sets is a little awkward, though, so lets make an algebra instead. We denote by $$\mathbb{S}$$ the free algebra over $$\mathbb{N}$$ with two function symbols of arity 2, denoted $$\oplus$$ and $$\otimes$$. Elements of $$\mathbb{S}$$ are called sizes, and we’ll sometimes refer to $$\mathbb{S}$$ as the algebra of sizes. For example, $$2 \oplus 4$$ and $$5 \otimes (3 \oplus 6)$$ are elements of $$\mathbb{S}$$. Eventually we’ll use, for instance, the element $$2 \otimes 3$$ to describe the “size” of a $$2 \times 3$$ matrix. The size algebra has no axioms, so for example $$a \otimes (b \otimes c)$$ and $$(a \otimes b) \otimes c$$ are not equal. And just ignore the $$\oplus$$ for now. :) So the elements of $$\mathbb{S}$$ look like unevaluated arithmetic expressions with plus and times. By the way, one benefit of using free algebras is we can implement them in Haskell with algebraic data types. data Size = Size Integer | Size :+ Size | Size :* Size deriving Eq instance Show Size where show = let p x = if ' ' elem x then "(" ++ x ++ ")" else x in \case Size k -> show k a :+ b -> concat [p $show a, " + ", p$ show b] a :* b -> concat [p $show a, " x ", p$ show b] -- so we can define them with numeric literals instance Num Size where fromInteger k = if k >= 0 then Size k else error "sizes cannot be negative." (+) = (:+) (*) = (:*) abs = error "Size Num instance: abs makes no sense." signum = error "Size Num instance: signum makes no sense." negate = error "Size Num instance: negate makes no sense." If you’re following along with GHCi, try defining some Sizes. (The Num instance is just there to make the notation less awkward.) $> 4 :: Size$> 2*3 :: Size $> 2+(3*4) :: Size Another nice thing about free algebras is that we get universal mappings for free! For example: We denote by $$\mathbb{H}$$ the free algebra over $$\ast = \{\ast\}$$ with two function symbols of arity 2, denoted $$\oplus$$ and $$\otimes$$. Elements of $$\mathbb{H}$$ are called shapes, and we’ll sometimes refer to $$\mathbb{H}$$ as the algebra of shapes. Define $$h : \mathbb{N} \rightarrow \ast$$ by $$h(k) = \ast$$, and let $$H : \mathbb{S} \rightarrow \mathbb{H}$$ be the map induced by $$h$$. If $$s \in \mathbb{S}$$, we say $$H(s)$$ is the shape of $$s$$. Note that $$(\mathbb{N},+,\times)$$ is an algebra with two function symbols of arity 2. Let $$D : \mathbb{S} \rightarrow \mathbb{N}$$ be the map induced by the identity function on $$\mathbb{N}$$. If $$s \in \mathbb{S}$$, we say $$D(s)$$ is the dimension of $$s$$. Again, we can implement these in code in the usual way. 
data Shape = HAtom | HPlus Shape Shape | HTimes Shape Shape deriving Eq instance Show Shape where show = \case HAtom -> "*" HPlus a b -> concat ["(", show a, " + ", show b, ")"] HTimes a b -> concat ["(", show a, " x ", show b, ")"] shapeOf :: Size -> Shape shapeOf = \case Size _ -> HAtom a :+ b -> HPlus (shapeOf a) (shapeOf b) a :* b -> HTimes (shapeOf a) (shapeOf b) dimOf :: Size -> Integer dimOf = \case Size k -> k a :+ b -> (dimOf a) + (dimOf b) a :* b -> (dimOf a) * (dimOf b) Eventually, $$s \in \mathbb{S}$$ will represent the "size" of a tensor and $$D(s) \in \mathbb{N}$$ will be the vector space dimension of the space it comes from. ## Indices This is well and good; we have a type, Size, that will eventually represent the size of a multidimensional array, and we can extract the "shape" and "dimension" of a size. But we also need a reasonable understanding of how to refer to the entries of an array of a given size. However we define indices, which indices make sense for a given size will depend on the structure of the size. For instance, a natural number size $$k$$ might be indexed by $$k$$ contiguous natural numbers, starting from 0 or 1 or whatever. A product-shaped size like $$a \otimes b$$ might be indexed by a pair $$(u,v)$$, where $$u$$ is an index of $$a$$ and $$v$$ an index of $$b$$. The sum size is a little stranger: to index $$a \oplus b$$, we need an index for either $$a$$ or $$b$$, and some way to distinguish which is which. Putting this together, we will define an algebra of indices like so. We denote by $$\mathbb{I}$$ the free algebra over $$\mathbb{N}$$ with two function symbols of arity 1 and one of arity 2, denoted $$\mathsf{L}$$, $$\mathsf{R}$$, and $$\&$$. Elements of $$\mathbb{I}$$ are called indices, and we'll sometimes refer to $$\mathbb{I}$$ as the algebra of indices. Again, since $$\mathbb{I}$$ is a free algebra we can represent it as an algebraic type. data Index = Index Integer | L Index | R Index | Index :& Index deriving Eq instance Show Index where show = \case Index k -> show k L a -> "L(" ++ show a ++ ")" R b -> "R(" ++ show b ++ ")" a :& b -> concat ["(", show a, ",", show b, ")"] instance Num Index where fromInteger = Index (+) = error "Index Num instance: (+) does not make sense" (*) = error "Index Num instance: (*) does not make sense" negate = error "Index Num instance: negate does not make sense" abs = error "Index Num instance: abs does not make sense" signum = error "Index Num instance: signum does not make sense" Now given an index and a size, it may or may not make sense to talk about an entry at the index in a structure of the given size – like asking for the item at index 10 in an array of length 5. To capture this, we define a compatibility relation to detect when an index can be used on a given size. isIndexOf :: Index -> Size -> Bool (Index t) `isIndexOf` (Size k) = 0 <= t && t < k (L u) `isIndexOf` (a :+ _) = isIndexOf u a (R v) `isIndexOf` (_ :+ b) = isIndexOf v b (u :& v) `isIndexOf` (a :* b) = (u `isIndexOf` a) && (v `isIndexOf` b) _ `isIndexOf` _ = False From now on, if $$s$$ is a size, I'll also use $$s$$ to denote the set of indices compatible with $$s$$. So for example, if $$s = 5$$, we might say something like $\sum_{i \in s} f(i)$ without ambiguity. We'd like to be able to construct $$s$$ as a list; this is what indicesOf does. I'm going to play a little fast and loose with the proof because laziness.
indicesOf :: Size -> [Index] indicesOf = \case Size k -> map Index [0..(k-1)] a :+ b -> map L (indicesOf a) ++ map R (indicesOf b) a :* b -> [ u :& v | v <- indicesOf b, u <- indicesOf a ] The number of different indices for a given size should be equal to the size's dimension. This suggests a simple test: the length of the index list is the dimension, and all entries of the index list are distinct. _test_index_count :: Test (Size -> Bool) _test_index_count = testName "dimOf s == length $ indicesOf s" $ \s -> (dimOf s) == (fromIntegral $ length $ indicesOf s) _test_indices_distinct :: Test (Size -> Bool) _test_indices_distinct = testName "indicesOf s all distinct" $ \s -> (indicesOf s) == (nub $ indicesOf s) In later posts, $$s \in \mathbb{S}$$ will represent the size (and shape) of the elements in a vector space consisting of tensors, which itself has vector space dimension $$D(s)$$. But it will sometimes be convenient to think of these tensors canonically as $$D(s)$$-dimensional vectors. To do this, we'll set up a bijection between the indices of a given size $$s$$ and the natural numbers less than $$D(s)$$. I'll call the function from indices to numbers "flatten", since it turns a complicated thing into a one-dimensional thing, and call the inverse "buildup". flatten :: Size -> Index -> Integer flatten (Size k) (Index t) = if 0 <= t && t < k then t else error "index out of bounds" flatten (a :+ _) (L u) = flatten a u flatten (a :+ b) (R v) = (dimOf a) + (flatten b v) flatten (a :* b) (u :& v) = (flatten a u) + (flatten b v)*(dimOf a) buildup :: Size -> Integer -> Index buildup (Size k) t = if 0 <= t && t < k then Index t else error "integer index out of bounds" buildup (a :+ b) t = if t < dimOf a then L $ buildup a t else R $ buildup b (t - dimOf a) buildup (a :* b) t = (buildup a (t `rem` (dimOf a))) :& (buildup b (t `quot` (dimOf a))) Now flatten and buildup should be inverses of each other, which we can test. _test_flatten_buildup :: Test (Size -> Bool) _test_flatten_buildup = testName "flatten s . buildup s == id" $ \s -> let ks = [0..((dimOf s) - 1)] in ks == map (flatten s . buildup s) ks _test_buildup_flatten :: Test (Size -> Bool) _test_buildup_flatten = testName "buildup s . flatten s == id" $ \s -> let ks = indicesOf s in ks == map (buildup s . flatten s) ks To wrap up, in this post we defined two algebraic types, Size and Index, to represent the sizes and indices of multidimensional arrays, and two functions, flatten and buildup, that canonically map the indices of a given size to a 0-indexed list of natural numbers. In the next post, we'll use Size and Index to define and manipulate multidimensional arrays. ## Tests Math-heavy code is well suited to automated tests, so we'll write some along the way using the QuickCheck library. First off, we won't be needing the full complexity of QuickCheck, so here are some helper functions to make the tests a little simpler to write.
type Test prop = (String, prop) testName :: String -> prop -> Test prop testName name prop = (name, prop) runTest, chattyTest, skipTest :: Testable prop => Args -> Test prop -> IO () runTest args (name, prop) = do putStrLn ("\x1b[1;34m" ++ name ++ "\x1b[0;39;49m") result <- quickCheckWithResult args prop if isSuccess result then return () else (putStrLn (show result)) >> exitFailure chattyTest args (name, prop) = do putStrLn ("\x1b[1;35m" ++ name ++ "\x1b[0;39;49m") result <- verboseCheckWithResult args prop if isSuccess result then return () else (putStrLn (show result)) -- when testing tests skipTest _ (name, _) = putStrLn ("\x1b[1;33mskipped: " ++ name ++ "\x1b[0;39;49m") testLabel :: String -> IO () testLabel msg = putStrLn ("\n\x1b[1;32m" ++ msg ++ "\x1b[0;39;49m") class TypeName t where typeName :: t -> String instance TypeName Int where typeName _ = "Int" instance TypeName Integer where typeName _ = "Integer" instance TypeName Double where typeName _ = "Double" pairOf :: (Monad m) => m a -> m b -> m (a,b) pairOf ma mb = do x <- ma y <- mb return (x,y) forAll2 :: (Show a, Show b, Testable prop) => Gen a -> Gen b -> (a -> b -> prop) -> Property forAll2 ga gb f = forAll genPair (uncurry f) where genPair = do x <- ga y <- gb return (x,y) forAll3 :: (Show a, Show b, Show c, Testable prop) => Gen a -> Gen b -> Gen c -> (a -> b -> c -> prop) -> Property forAll3 ga gb gc f = forAll genTriple g where genTriple = do x <- ga y <- gb z <- gc return (x,y,z) g (x,y,z) = f x y z To write QuickCheck tests for a given type it needs to be an instance of Arbitrary, which provides two basic functions: arbitrary, which generates a “random” element of the type, and shrink, which takes an element and makes it “smaller” in some way. Defining these functions for a given type may be ugly, but only has to be done once. instance Arbitrary Size where arbitrary = sized arbSize shrink = \case Size k -> if k <= 0 then [] else [Size (k-1)] u :+ v -> [u, v] u :* v -> [u, v] arbSize :: Int -> Gen Size arbSize 0 = do k <- elements [0,1,1,2,2,2,2] return (Size k) arbSize n = do switch <- arbitrary :: Gen Int m <- choose (1,n) case switch mod 5 of 0 -> do u <- arbSize$ n-1 v <- arbSize $n-1 return (u :* v) 1 -> do u <- arbSize$ n-1 v <- arbSize \$ n-1 return (u :+ v) _ -> do k <- elements [0,1,2] return (Size k) Now we can wrap up our tests in a little suite, _test_index. The arguments for this function are (1) the number of test cases to generate and (2) how big they should be. -- run all tests for Size and Index _test_index :: Int -> Int -> IO () _test_index num size = do testLabel "Size and Index" let args = stdArgs { maxSuccess = num , maxSize = size } runTest args _test_index_count runTest args _test_indices_distinct runTest args _test_flatten_buildup runTest args _test_buildup_flatten main_index :: IO () main_index = _test_index 200 20
2022-08-14 15:03:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8828733563423157, "perplexity": 1345.1924320581356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00455.warc.gz"}
https://ssc.wisc.edu/sscc/pubs/dwr/reshape-tidy.html
19 Reshaping Reshaping involves changing how data is organized by moving information between rows and columns. The tidyr package has functions for reshaping data, so load that in addition to dplyr for data manipulation functions, as well as a fresh airquality dataset. library(tidyr) library(dplyr) air <- airquality 19.1 Data “Shapes” Dataframes can be organized in different ways for different purposes. Dataframes often come in less-than-ideal formats, especially when you are using secondary data. It is important to know how to rearrange the data to prepare it for tables or for plotting with ggplot2. Data comes in two primary shapes: wide and long. 19.1.1 Wide Data Data is wide when a row has more than one observation, and the units of observation (e.g., individuals, countries, households) are on one row each. You might run into this format if you work with survey or assessment data, or if you have ever downloaded data from Qualtrics. In the example below, each row corresponds to a single person, and each column is a different observation for that person. ID Q1 Q2 Q3 Q4 001 5 3 2 4 002 3 2 5 4 003 4 3 1 1 19.1.2 Long Data Data is long when a row has only one observation, but the units of observation are repeated down a column. Longitudinal data is often in the long format. You might have a column where ID numbers are repeated, a column marking when each data point was observed, and another column with observed values. ID Month Income 001 1 16000 001 2 18000 001 3 18000 002 1 43000 002 2 40000 19.1.3 The Shape of airquality Is the air dataframe in wide or long format? head(air) Ozone Solar.R Wind Temp Month Day 1 41 190 7.4 67 5 1 2 36 118 8.0 72 5 2 3 12 149 12.6 74 5 3 4 18 313 11.5 62 5 4 5 NA NA 14.3 56 5 5 6 28 NA 14.9 66 5 6 Our answer to that question depends on what variables we are interested in and how we conceive of our data. Does the air dataframe contain multiple observations (Ozone, Solar.R, Wind, Temp) of interest per row, and do we conceive of Day as the unit of observation? If so, this is a wide dataframe since we have multiple observations per row (Day). Or, are we more interested in just one of the variables (such as Temp), and do we think of Month as the unit of observation? If so, then air is in long format since Month is repeated down its column across observations. 19.2 Making Long Data Wide If our data is long, we can reshape (or “pivot”) it into a wide format with the aptly named pivot_wider() function. First, select the units of observation and the column where the observed values lie, and pass these to pivot_wider(). air_wide <- air %>% select(Temp, Month, Day) %>% pivot_wider(names_from = Day, values_from = Temp) Take a moment to open air and air_wide in the Viewer to see what just happened. The resulting dataframe, air_wide, is not ideal. We specified that the column names should come from our Day column. Day was numeric, so now we have column names that are numbers. Numbers are “non-syntactic” object names, so we have to set them off with backticks (). In this case, it is especially confusing because the column named 1 is the 2nd column. air_wide$1 [1] 67 78 84 81 91 air_wide[, 2] # A tibble: 5 x 1 1 <int> 1 67 2 78 3 84 4 81 5 91 To fix this, we can prefix the resulting column names with the word “Day” with the names_prefix argument. air_wide <- air %>% select(Temp, Month, Day) %>% pivot_wider(names_from = Day, values_from = Temp, names_prefix = "Day") Now the column names are a bit easier to handle and call. Day 1 is now named Day1, day 2 is Day2, and so on. 
19.2.1 New Missing Data air contains months of different lengths: June and September have 30 days each. In the un-modified air dataframe, there are no rows for June 31 or September 31. Only the months with 31 days (May, July, August) have rows corresponding to day 31. air %>% filter(Day %in% 31) Ozone Solar.R Wind Temp Month Day 1 37 279 7.4 76 5 31 2 59 254 9.2 81 7 31 3 85 188 6.3 94 8 31 The Temp column of air does not have any missing data. air %>% select(Temp) %>% is.na() %>% sum() [1] 0 However, after making the dataframe wide, Day31 was filled in with NA for June and September. air_wide %>% select(Month, Day31) # A tibble: 5 x 2 Month Day31 <int> <int> 1 5 76 2 6 NA 3 7 81 4 8 94 5 9 NA Recall that a dataframe is a series of same-length vectors. Even though day 31 only had three observed values, the length of the Day31 column had to be five to match the lengths of the other columns. NA values were supplied to fill in the gaps. The same thing would happen if data were also collected for the first ten days of October. Days 11-31 would be filled in with NA when converting to wide format. 19.3 Making Wide Data Long Alternatively, if a dataframe is in the wide format, it can be converted into the long format with the function pivot_longer(). Supply the cols argument with a vector (c()) or range (with :) of columns containing observations. Quotes are not needed. (The cols argument can also take other selection functions, such as starts_with() or where(). See the chapter on Subsetting for more.) air_long <- air %>% pivot_longer(cols = Ozone:Temp, names_to = "Variable", values_to = "Value") The resulting Variable column contains the names of the different measurement variables, and Value contains the observed values. In this format, it is not appropriate to calculate mean(air_long$Value) because this column contains values for different variables. Instead, subset by the Variable column to compute summary statistics. air_long %>% filter(Variable %in% "Temp") %>% summarize(TempAvg = mean(Value)) # A tibble: 1 x 1 TempAvg <dbl> 1 77.9 Or, group by Month and Variable to quickly calculate mean values by month for each variable, remembering to set na.rm to TRUE because of missing data. air_long %>% group_by(Month, Variable) %>% summarize(Avg = mean(Value, na.rm = T)) summarise() has grouped output by 'Month'. You can override using the .groups argument. # A tibble: 20 x 3 # Groups: Month [5] Month Variable Avg <int> <chr> <dbl> 1 5 Ozone 23.6 2 5 Solar.R 181. 3 5 Temp 65.5 4 5 Wind 11.6 5 6 Ozone 29.4 6 6 Solar.R 190. 7 6 Temp 79.1 8 6 Wind 10.3 9 7 Ozone 59.1 10 7 Solar.R 216. 11 7 Temp 83.9 12 7 Wind 8.94 13 8 Ozone 60.0 14 8 Solar.R 172. 15 8 Temp 84.0 16 8 Wind 8.79 17 9 Ozone 31.4 18 9 Solar.R 167. 19 9 Temp 76.9 20 9 Wind 10.2 When pivoting the data to the wide format above, NA was supplied to make the dataframe rectangular. In contrast, when converting to long format, rows with no observations are preserved by default. To change this, set the argument values_drop_na to TRUE. (Its default is FALSE, as can be seen in help(pivot_longer).) nrow(air_long) [1] 612 air_long %>% filter(is.na(Value)) # A tibble: 44 x 4 Month Day Variable Value <int> <int> <chr> <dbl> 1 5 5 Ozone NA 2 5 5 Solar.R NA 3 5 6 Solar.R NA 4 5 10 Ozone NA 5 5 11 Solar.R NA 6 5 25 Ozone NA 7 5 26 Ozone NA 8 5 27 Ozone NA 9 5 27 Solar.R NA 10 6 1 Ozone NA # ... 
with 34 more rows air_long <- air %>% pivot_longer(cols = Ozone:Temp, names_to = "Variable", values_to = "Value", values_drop_na = TRUE) nrow(air_long) [1] 568 air_long %>% filter(is.na(Value)) # A tibble: 0 x 4 # ... with 4 variables: Month <int>, Day <int>, Variable <chr>, Value <dbl> 19.4 Reshaping Exercises 1. Reshape the WorldPhones dataset into a long format. Be sure to name the new columns appropriately. 2. Reshape ChickWeight into a wide format with columns created from Time. 19.5 Data Wrangling Exercises Putting everything together now, 1. Reshape us_rent_income (from the tidyr package) so that it has one line per state, and two new columns named estimate_income and estimate_rent that contain values from estimate. 2. Merge this with state.x77, and keep all rows. Then, drop rows where any values are missing. You can do this in one or two steps. 3. Add a column containing state.division. 4. Add a column with the proportion of income spent on rent (rent / income). 5. Drop rows where Area is not greater than ten times Frost. 6. Replace all spaces in all column names with dashes (e.g., HS Grad to HS-Grad`). 7. Without removing any rows, add a column with the population-weighted mean rent by geographic division. • Which division has the highest mean rent? 1. Save the resulting dataframe as a CSV file, a tab-delimited text file, and an .RData file.
2021-05-12 12:21:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21221372485160828, "perplexity": 4729.9386596091645}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989693.19/warc/CC-MAIN-20210512100748-20210512130748-00236.warc.gz"}
http://www.cfd-online.com/W/index.php?title=Two_equation_turbulence_models&diff=5559&oldid=5549
Two equation turbulence models (Difference between revisions) Two-equation models, like $k-\epsilon$ models and $k-\omega$ models, are among the most commonly used turbulence models today.
2016-06-29 05:57:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6973096132278442, "perplexity": 3199.114997051783}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00093-ip-10-164-35-72.ec2.internal.warc.gz"}
https://practicepaper.in/gate-me/mohrs-circle
# Mohr’s Circle Question 1 The stress state at a point in a material under plane stress condition is equi-biaxial tension with a magnitude of 10 MPa. If one unit on the $\sigma -\tau$ plane is 1 MPa, the Mohr's circle representation of the state-of-stress is given by A a circle with a radius equal to principal stress and its center at the origin of the $\sigma -\tau$ plane B a point on the $\sigma$ axis at a distance of 10 units from the origin C a circle with a radius of 10 units on the $\sigma -\tau$ plane D a point on the $\tau$ axis at a distance of 10 units from the origin GATE ME 2020 SET-1   Strength of Materials Question 1 Explanation: The given state of stress is represented by a point on $\sigma -\tau$ graph which is located on $\sigma$-axis at a distance of 10 units from origin. Question 2 The state of stress at a point in a component isrepresented by a Mohr's circle of radius 100MPa centered at 200 MPa on the normal stress axis. On a plane passing through the same point, the normal stress is 260 MPa. The magnitude of the shear stress on the same plane at the same point is ______ MPa. A 48 B 63 C 96 D 80 GATE ME 2019 SET-2   Strength of Materials Question 2 Explanation: In triangle CEF $\begin{array}{l} \mathrm{CF}^{2}=\mathrm{CE}^{2}+\mathrm{EF}^{2} \\ 100^{2}=60^{2}=\mathrm{EF}^{2}\\ \mathrm{EF}^{2}=100^{2}-60^{2}=6400 \\ \mathrm{EF}=80 \mathrm{MPa} \end{array}$ $\mathrm{EF} \rightarrow$Represents shear stress at the same point $=\mathrm{EF}=\tau=80 \mathrm{MPa}$ Question 3 The state of stress at a point, for a body in plane stress, is shown in the figure below. If the minimum principal stress is 10 kPa, then the normal stress $\sigma_{y}$ (in kPa) is A 9.45 B 18.88 C 37.78 D 75.5 GATE ME 2018 SET-1   Strength of Materials Question 3 Explanation: \begin{aligned} \sigma_{x} &=100 \mathrm{kPa}, \tau_{x y}=50 \mathrm{kPa} \\ \text { Minimum principal stress } &=\frac{\sigma_{x}+\sigma_{y}}{2}-\sqrt{\left(\frac{\sigma_{x}-\sigma_{y}}{2}\right)^{2}+\tau_{x y}^{2}} \\ 10 &=\frac{100+\sigma_{y}}{2}-\sqrt{\left(\frac{100-\sigma_{y}}{2}\right)^{2}+50^{2}} \\ \therefore \quad \sqrt{\left(50-\frac{\sigma_{y}}{2}\right)^{2}+50^{2}} &=50+\frac{\sigma_{y}}{2}-10=40+\frac{\sigma_{y}}{2} \end{aligned} By squaring $2500+\frac{\sigma_{y}^{2}}{4}-50 \sigma_{y}+2500=1600+\frac{\sigma_{y}^{2}}{4}+40 \sigma_{y}$ \begin{aligned} \therefore \quad 90 \sigma_{y} &=3400 \\ \sigma_{y} &=37.78 \mathrm{MPa} \end{aligned} Question 4 If $\sigma _{1}$ and $\sigma _{3}$ are the algebraically largest and smallest principal stresses respectively, the value of the maximum shear stress is A $\frac{\sigma _{1} + \sigma _{3}}{2}$ B $\frac{\sigma _{1} - \sigma _{3}}{2}$ C $\sqrt{\frac{\sigma _{1} + \sigma _{3}}{2}}$ D $\sqrt{\frac{\sigma _{1} - \sigma _{3}}{2}}$ GATE ME 2018 SET-1   Strength of Materials Question 4 Explanation: Maximum shear stress $=\frac{\sigma_{1}-\sigma_{3}}{2}$ Question 5 The state of stress at a point is $\sigma _{x}=\sigma _{y}=\sigma _{z}=t_{xz}=t_{zx}=t_{yz}=t_{zy}=0$ and $t_{xy}=t_{yx}=50MPa$ . The maximum normal stress (in MPa) at that point is_____. A 49 B 50 C 55 D 60 GATE ME 2017 SET-2   Strength of Materials Question 5 Explanation: Given state of stress condition indicates pure shear state of stress. For pure shear state of stress, Max. tensile stress = Max. comp. stress = Max. Shear stress $=\tau_{X Y}=50 \mathrm{MPa}$ Hence, Max. 
normal stress $=50 \mathrm{MPa}$ Question 6 In a plane stress condition, the components of stress at a point are $\sigma_{x}=20 MPa$ ,$\sigma_{y}=80 MPa$ and $\tau _{xy}=40 MPa$ . The maximum shear stress (in MPa) at the point is A 20 B 25 C 50 D 100 GATE ME 2015 SET-2   Strength of Materials Question 6 Explanation: $\begin{array}{c} \sigma_{1,2}=\frac{1}{2}\left[\left(\sigma_{x}+\sigma_{y}\right) \pm \sqrt{\left(\sigma_{x}-\sigma_{y}\right)^{2}+4 \tau_{x y}^{2}}\right] \\ =\frac{1}{2}[100 \pm \sqrt{(60)^{2}+4 \times 40^{2}}] \\ \sigma_{1}=100 \\ \sigma_{2}=0 \\ \tau_{\max }=\sigma_{1} / 2=50 \mathrm{MPa} \end{array}$ Question 7 The state of stress at a point under plane stress condition is $\sigma _{xx}$ = 40MPa, $\sigma _{yy}$ = 100MPa and $\tau _{xy}$ = 40MPa. The radius of the Mohr's circle representing the given state of stress in MPa is A 40 B 50 C 60 D 100 GATE ME 2012   Strength of Materials Question 7 Explanation: Mohr's circle $R=\sqrt{(40)^{2}+(30)^{2}}=50 \mathrm{MPa}$ Question 8 A two dimensional fluid element rotates like a rigid body. At a point within the element, the pressure is 1 unit. Radius of the Mohr's circle, characterizing the state of stress at the point, is A 0.5 unit B 0 unit C 1 unit D 2 unit GATE ME 2008   Strength of Materials Question 8 Explanation: since the fluid element will be subjected to hydrostatic loading therefore Mohr circle will reduce into a point on $\sigma\text{-axis}$. $\therefore$Radius of mohr circle =0 unit Question 9 The Mohr's circle of plane stress for a point in a body is shown. The design is to be done on the basis of the maximum shear stress theory for yielding. Then, yielding will just begin if the designer chooses a ductile material whose yield strength is A 45 Mpa B 50 Mpa C 90 Mpa D 100 Mpa GATE ME 2005   Strength of Materials Question 9 Explanation: As per maximum shear stress theory, $\left(\tau_{\max }\right)_{\text {absolute }} \leq\left(\frac{S_{y t}}{2}\right)_{\pi}$ and when $\sigma_{1}$ and $\sigma_{2}$ are like in nature \begin{aligned} \sigma_{1} & \leq S_{y t} \\ S_{y t} &=100 \mathrm{MPa} \end{aligned} Question 10 The figure shows the state of stress at a certain point in a stressed body. The magnitudes of normal stresses in the x and y direction are 100 MPa and 20 MPA respectively. The radius of Mohr's stress circle representing this state of stress is A 120 B 80 C 60 D 40 GATE ME 2004   Strength of Materials Question 10 Explanation: \begin{aligned} \text { Radius } &=\sqrt{\left(\frac{\sigma_{x}-\sigma_{y}}{2}\right)^{2}+\tau_{x y}} \\ \text{Given,} \quad \sigma_{x} &=100 \mathrm{MPa} \\ \sigma_{y}&=-20 \mathrm{MPa} \\ \tau_{x y} &=0 \\ \therefore \text { Radius } &=\sqrt{\left(\frac{100+20}{2}\right)^{2}+0} \\ &=60 \mathrm{MPa} \end{aligned} There are 10 questions to complete. ### 7 thoughts on “Mohr’s Circle” 1. Question 6 and 10, questions are wrong, in 6 th question Txy=40, not 20 And in 10 , data incomplete • Thank you for your suggestions. We have updated the correction suggested by You. 2. Question 12, Explanation given is right but Answer given is wrong. Answer would be option B (175 MPa, 175 MPa) …However Answer Provided is Opion D (0,0) which is wrong. • Thank You DURGA SINGH, We have updated the answer.
2021-12-01 13:21:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 40, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9838183522224426, "perplexity": 1542.7054079197012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.0/warc/CC-MAIN-20211201113241-20211201143241-00328.warc.gz"}
https://en.wikipedia.org/wiki/Basel_problem
# Basel problem The Basel problem is a problem in mathematical analysis with relevance to number theory, first posed by Pietro Mengoli in 1644 and solved by Leonhard Euler in 1734[1] and read on 5 December 1735 in The Saint Petersburg Academy of Sciences (Russian: Петербургская Академия наук).[2] Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up years later by Bernhard Riemann in his seminal 1859 paper On the Number of Primes Less Than a Given Magnitude, in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem. The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series: ${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\cdots +{\frac {1}{n^{2}}}+\cdots }$ The sum of the series is approximately equal to 1.644934 . The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be π2/6 and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct, and it was not until 1741 that he was able to produce a truly rigorous proof. ## Euler's approach Euler's original derivation of the value π2/6 essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series. Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community. 
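As a quick illustration of the numerical check mentioned above, the partial sums can be compared directly against π²/6. This short Python sketch is my own addition, not part of the article:

```python
import math

target = math.pi ** 2 / 6
for n in (10, 100, 10_000, 1_000_000):
    partial = sum(1.0 / k ** 2 for k in range(1, n + 1))
    print(f"n = {n:>9}: partial sum = {partial:.9f}, pi^2/6 - sum = {target - partial:.2e}")

# The error behaves like 1/n, so convergence is slow -- which is why Euler needed
# series-acceleration tricks to recognise the value pi^2/6 from hand computation.
```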
To follow Euler's argument, recall the Taylor series expansion of the sine function

${\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots }$

Dividing through by x, we have

${\displaystyle {\frac {\sin x}{x}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+\cdots }$

Using the Weierstrass factorization theorem, it can also be shown that the left-hand side is the product of linear factors given by its roots, just as for finite polynomials (which Euler assumed, but which is not true in general):

{\displaystyle {\begin{aligned}{\frac {\sin x}{x}}&=\left(1-{\frac {x}{\pi }}\right)\left(1+{\frac {x}{\pi }}\right)\left(1-{\frac {x}{2\pi }}\right)\left(1+{\frac {x}{2\pi }}\right)\left(1-{\frac {x}{3\pi }}\right)\left(1+{\frac {x}{3\pi }}\right)\cdots \\&=\left(1-{\frac {x^{2}}{\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{4\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{9\pi ^{2}}}\right)\cdots \end{aligned}}}

If we formally multiply out this product and collect all the $x^{2}$ terms (we are allowed to do so because of Newton's identities), we see that the $x^{2}$ coefficient of $\sin x/x$ is

${\displaystyle -\left({\frac {1}{\pi ^{2}}}+{\frac {1}{4\pi ^{2}}}+{\frac {1}{9\pi ^{2}}}+\cdots \right)=-{\frac {1}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.}$

But from the original infinite series expansion of $\sin x/x$, the coefficient of $x^{2}$ is $-1/3! = -1/6$. These two coefficients must be equal; thus,

${\displaystyle -{\frac {1}{6}}=-{\frac {1}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.}$

Multiplying both sides of this equation by $-\pi ^{2}$ gives the sum of the reciprocals of the positive square integers:

${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}.}$

## The Riemann zeta function

The Riemann zeta function ζ(s) is one of the most important functions in mathematics because of its relationship to the distribution of the prime numbers. The function is defined for any complex number s with real part greater than 1 by the following formula:

${\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}.}$

Taking s = 2, we see that ζ(2) is equal to the sum of the reciprocals of the squares of the positive integers:

${\displaystyle \zeta (2)=\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+{\frac {1}{4^{2}}}+\cdots ={\frac {\pi ^{2}}{6}}\approx 1.644934.}$

Convergence can be proven by the integral test, or via the following inequality:

{\displaystyle {\begin{aligned}\sum _{n=1}^{N}{\frac {1}{n^{2}}}&<1+\sum _{n=2}^{N}{\frac {1}{n(n-1)}}\\&=1+\sum _{n=2}^{N}\left({\frac {1}{n-1}}-{\frac {1}{n}}\right)\\&=1+1-{\frac {1}{N}}\;{\stackrel {N\to \infty }{\longrightarrow }}\;2.\end{aligned}}}

This gives the upper bound 2, and because the infinite sum has no negative terms, it must converge to a value between 0 and 2. It can be shown that ζ(s) has a simple expression in terms of the Bernoulli numbers whenever s is a positive even integer.
With s = 2n:[3] ${\displaystyle \zeta (2n)={\frac {(2\pi )^{2n}(-1)^{n+1}B_{2n}}{2\cdot (2n)!}}.}$ ## A rigorous proof using Fourier series Use Parseval's identity (applied to the function f(x) = x) to obtain ${\displaystyle \sum _{n=-\infty }^{\infty }|a_{n}|^{2}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }x^{2}\,dx,}$ where {\displaystyle {\begin{aligned}a_{n}&={\frac {1}{2\pi }}\int _{-\pi }^{\pi }xe^{-inx}\,dx\\&={\frac {n\pi \cos(n\pi )-\sin(n\pi )}{\pi n^{2}}}i\\&={\frac {\cos(n\pi )}{n}}i-{\frac {\sin(n\pi )}{\pi n^{2}}}i\\&={\frac {(-1)^{n}}{n}}i\end{aligned}}} for n ≠ 0, and a0 = 0. Thus, ${\displaystyle |a_{n}|^{2}={\begin{cases}{\dfrac {1}{n^{2}}},&{\text{for }}n\neq 0,\\0,&{\text{for }}n=0,\end{cases}}}$ and ${\displaystyle \sum _{n=-\infty }^{\infty }|a_{n}|^{2}=2\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }x^{2}\,dx.}$ Therefore, ${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{4\pi }}\int _{-\pi }^{\pi }x^{2}\,dx={\frac {\pi ^{2}}{6}}}$ as required. ## A rigorous elementary proof This is by far the most elementary well-known proof; while most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (although a single limit is taken at the end). For a proof using the residue theorem, see the linked article. ### History of this proof The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of Akiva and Isaak Yaglom "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal Eureka, attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s". ### The proof The inequality ${\displaystyle {\tfrac {1}{2}}r^{2}\tan \theta >{\tfrac {1}{2}}r^{2}\theta >{\tfrac {1}{2}}r^{2}\sin \theta }$ is shown. Taking reciprocals and squaring gives ${\displaystyle \cot ^{2}\theta <{\tfrac {1}{\theta ^{2}}}<\csc ^{2}\theta }$. The main idea behind the proof is to bound the partial (finite) sums ${\displaystyle \sum _{k=1}^{m}{\frac {1}{k^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}}$ between two expressions, each of which will tend to π2/6 as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities. Let x be a real number with 0 < x < π/2, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have {\displaystyle {\begin{aligned}{\frac {\cos(nx)+i\sin(nx)}{(\sin x)^{n}}}&={\frac {(\cos x+i\sin x)^{n}}{(\sin x)^{n}}}\\&=\left({\frac {\cos x+i\sin x}{\sin x}}\right)^{n}\\&=(\cot x+i)^{n}.\end{aligned}}} From the binomial theorem, we have {\displaystyle {\begin{aligned}(\cot x+i)^{n}=&{n \choose 0}\cot ^{n}x+{n \choose 1}(\cot ^{n-1}x)i+\cdots +{n \choose {n-1}}(\cot x)i^{n-1}+{n \choose n}i^{n}\\[6pt]=&{\Bigg (}{n \choose 0}\cot ^{n}x-{n \choose 2}\cot ^{n-2}x\pm \cdots {\Bigg )}\;+\;i{\Bigg (}{n \choose 1}\cot ^{n-1}x-{n \choose 3}\cot ^{n-3}x\pm \cdots {\Bigg )}.\end{aligned}}} (Here cotn x is shorthand for (cot x)n, and similarly for other trigonometric functions.) 
Combining the two equations and equating imaginary parts gives the identity

${\displaystyle {\frac {\sin(nx)}{(\sin x)^{n}}}={\Bigg (}{n \choose 1}\cot ^{n-1}x-{n \choose 3}\cot ^{n-3}x\pm \cdots {\Bigg )}.}$

We take this identity, fix a positive integer m, set $n = 2m+1$, and consider $x_{r}={\tfrac {r\pi }{2m+1}}$ for r = 1, 2, …, m. Then $nx_{r}$ is a multiple of π and therefore $\sin(nx_{r})=0$. So,

${\displaystyle 0={{2m+1} \choose 1}\cot ^{2m}x_{r}-{{2m+1} \choose 3}\cot ^{2m-2}x_{r}\pm \cdots +(-1)^{m}{{2m+1} \choose {2m+1}}}$

for every r = 1, 2, …, m. The values $x_{1},x_{2},\dots ,x_{m}$ are distinct numbers in the interval $0<x_{r}<\pi /2$. Since the function $\cot ^{2}x$ is one-to-one on this interval, the numbers $t_{r}=\cot ^{2}x_{r}$ are distinct for r = 1, 2, …, m. By the above equation, these m numbers are the roots of the mth degree polynomial

${\displaystyle p(t)={{2m+1} \choose 1}t^{m}-{{2m+1} \choose 3}t^{m-1}\pm \cdots +(-1)^{m}{{2m+1} \choose {2m+1}}.}$

By Vieta's formulas we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that

${\displaystyle \cot ^{2}x_{1}+\cot ^{2}x_{2}+\cdots +\cot ^{2}x_{m}={\frac {\binom {2m+1}{3}}{\binom {2m+1}{1}}}={\frac {2m(2m-1)}{6}}.}$

Substituting the identity $\csc ^{2}x=\cot ^{2}x+1$, we have

${\displaystyle \csc ^{2}x_{1}+\csc ^{2}x_{2}+\cdots +\csc ^{2}x_{m}={\frac {2m(2m-1)}{6}}+m={\frac {2m(2m+2)}{6}}.}$

Now consider the inequality $\cot ^{2}x<{\tfrac {1}{x^{2}}}<\csc ^{2}x$ (illustrated geometrically above). If we add up all these inequalities for each of the numbers $x_{r}={\tfrac {r\pi }{2m+1}}$, and if we use the two identities above, we get

${\displaystyle {\frac {2m(2m-1)}{6}}<\left({\frac {2m+1}{\pi }}\right)^{2}+\left({\frac {2m+1}{2\pi }}\right)^{2}+\cdots +\left({\frac {2m+1}{m\pi }}\right)^{2}<{\frac {2m(2m+2)}{6}}.}$

Multiplying through by $\left({\tfrac {\pi }{2m+1}}\right)^{2}$, this becomes

${\displaystyle {\frac {\pi ^{2}}{6}}\left({\frac {2m}{2m+1}}\right)\left({\frac {2m-1}{2m+1}}\right)<{\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}<{\frac {\pi ^{2}}{6}}\left({\frac {2m}{2m+1}}\right)\left({\frac {2m+2}{2m+1}}\right).}$

As m approaches infinity, the left and right hand expressions each approach $\pi ^{2}/6$, so by the squeeze theorem,

${\displaystyle \zeta (2)=\sum _{k=1}^{\infty }{\frac {1}{k^{2}}}=\lim _{m\to \infty }\left({\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}\right)={\frac {\pi ^{2}}{6}}}$

and this completes the proof.

## Notes

1. ^ Ayoub, Raymond (1974). "Euler and the zeta function". Amer. Math. Monthly. 81: 1067–86. doi:10.2307/2319041.
2. ^ E41 – De summis serierum reciprocarum
3. ^ Arakawa, Tsuneo; Ibukiyama, Tomoyoshi; Kaneko, Masanobu (2014). Bernoulli Numbers and Zeta Functions. Springer. p. 61.
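As a footnote to the elementary proof above: the finite bounds used in the squeeze argument are easy to verify numerically for any fixed m. The following Python sketch is my own check, not part of the article, and the helper name is arbitrary:

```python
import math

def squeeze_bounds(m):
    """Lower bound, partial sum up to m, and upper bound from the elementary proof."""
    lower = (math.pi ** 2 / 6) * (2 * m / (2 * m + 1)) * ((2 * m - 1) / (2 * m + 1))
    upper = (math.pi ** 2 / 6) * (2 * m / (2 * m + 1)) * ((2 * m + 2) / (2 * m + 1))
    partial = sum(1.0 / k ** 2 for k in range(1, m + 1))
    return lower, partial, upper

lo, s, hi = squeeze_bounds(1000)
print(lo < s < hi, lo, s, hi)   # True, and all three values are close to pi^2/6 ~= 1.644934
```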
2017-01-21 04:25:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 29, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.943938136100769, "perplexity": 1155.9150267885009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00413-ip-10-171-10-70.ec2.internal.warc.gz"}
http://sbsh.die-kunstproduzenten.de/power-series-method-pdf.html
# Power Series Method Pdf Representation of Functions as Power Series. Shortcut tricks on number series are one of the most important topics in exams. 1 Introduction to Power Series As noted a few times, not all differential equations have exact solutions. The Cisco Meraki Z-Series teleworker gateway is an enterprise class firewall, VPN gateway and router. A series solution converges on at least some interval jx x 0j< R, where R is the distance from x 0 to the closest singular point. Arithmetic and Geometric Series Definitions: First term: a 1 Nth term: a n Number of terms in the series: n Sum of the first n terms: S n Difference between successive terms: d Common ratio: q Sum to infinity: S Arithmetic Series Formulas: a a n dn = + −1 (1) 1 1 2 i i i a a a − + + = 1 2 n n a a S n + = ⋅ 2 11 ( ) n 2. This method will be used to (1) monitor secular trends in overweight prevalence; (2) describe the prevalence of obesity; and (3) examine the relationship between overweight and obesity and other examination measures, including blood pressure, glucose intolerance, and a battery of indicators for cardiovascular disease. One of such features is the ability to use Javascript in PDF documents. Digital Talks 2019 Transforming Industry Together Digital transformation is a journey of continuous improvement. 1 in every major reliability category by ITIC*, IBM Power Systems deliver reliable on-premises infrastructure 24/7. The convergence analysis of the proposed scheme is also discussed. 2 to determine the first few terms of the. You were also shown how to integrate the equation to get the solution y = Aeαx, (2. to be used as fuel for the power plants, The construction can be performed by different methods. We'll look at this one in a moment. Method Optimal Group Size Instructor Role(s) Method (How) Interactivity Level Devices Meta-Tags Other Matrix of Instructional Strategies, 14Dec11 (LAS) Page 1 Instructional Strategies and Methods for Delivering Instruction Choosing the appropriate learner-centered instructional strategy enables effective achievement of educational goals. The source current can now be determined using Ohm’s law, and we can proceed back through the network as shown in Fig (d). Thematic analysis is a method that is often used to analyse data in primary qualitative research. 2 MULTIPLE COMPARISONS IN A SIMPLE EXPERIMENT ON MORPHINE TOLERANCE 12. The unit starts by developing and extending learners’ understanding of fundamental electrical and electronic principles through analysis of simple direct current (DC) circuits. Shifting the first power series gives us,. First assume that the matrix A has a dominant eigenvalue with correspond-ing dominant eigenvectors. The Cisco Meraki Z-Series teleworker gateway is an enterprise class firewall, VPN gateway and router. In a power system there are always small load changes, switching actions, and other transients occurring so that in a strict mathematical sense most of the variables are varying with the time. Substituting. Download PDF Show page numbers For the social scientist, archival research can be defined as the locating, evaluating, and systematic interpretation and analysis of sources found in archives. Optimization Problems77 15. Solution of dierential equations by the power series method 2. Except upon the express prior permission in writing, from the authors, no part of this work may be reproduced, transcribed, stored electronically, or transmitted in any form by any method. 
Buy Methods of Theoretical Physics, Part I (International Series in Pure and Applied Physics) on Amazon. Furthermore similar methods can be easily obtained for most formal calculations with power series. Power series method 1 2. (You should. Power Series Solutions and Frobenius Method September 16, 2017 ME 501A Seminar in Engineering Analysis Page 2 7 Review Last Week V • Convert matrix equation for s into matrix equation for y using y = Xs. ENVIRONMENTAL PROTECTION AGENCY 2014 LOCAL GOVERNMENT CLIMATE AND ENERGY STRATEGY SERIES Combined Heat. 01SC Single Variable Calculus Fall 2010 For information about citing these materials or our Terms of Use, visit: http://ocw. 4 for each series. In general, such a solution assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients. The aim is to introduce and review the basic notation, terminology, conventions, and elementary facts. 3 that a power function has the form y = axb. The Newton Method, properly used, usually homes in on a root with devastating e ciency. So did Mengoli and Leibniz. Cheat sheets for the Ashtanga yoga series (PDF) The perfect cheat sheet to place next to your yoga mat: Asana sequences in a small and practical format for downloading and printing. 2 Series SolutionsNear an Ordinary Point I 320 7. Home Contents Index Top of page Hosted by ziaspace. Lee and Stephen D. After all, lim z→∞ zn exp(z) − Xn p=0 zp n! → ∞. Sample Questions with Answers The curriculum changes over the years, so the following old sample quizzes and exams may differ in content and sequence. can be calculated for the motor from the locked rotor test data. The current various researches have used the method of forecasting with time series data such as the electric power consumption. Then y(z) can be written as y(z) = X1 n=0 anz n: (7) Such a power series converges for. The calculator will find the Taylor (or power) series expansion of the given function around the given point, with steps shown. Use a known series to find a power series in x that has the given function as its sum: (a) /Courses Fall 2008/Math 262/Exam Stuff/M262PowerSeriesPracSoln. Figure 13-10 shows several examples of continuous waveforms that repeat themselves from negative to positive infinity. CONTENTS 12. Taylor and Maclaurin (Power) Series Calculator. Once we nd (5), we next check the convergence of the series. Power Series Solutions to the Bessel Equation Note:The ratio test shows that the power series formula converges for all x 2R. 3 Series SolutionsNear an Ordinary Point II 335 7. To find power used by R 2 and R 3, using values from previous calculations: Given: Solution: Now that you have solved for the unknown quantities in this circuit, you can apply what you have learned to any series, parallel, or combination circuit. , daily exchange rate, a share price, etc. Based on previous values, time series can be used to forecast trends in economics, weather, and capacity planning, to name a few. 3 Linear Ordinary Differential Equations with Nonconstant Coefficients A181 A. The Equation for the Quantum Harmonic Oscillator is a second order differential equation that can be solved using a power series. 3af Simultaneous SSIDs Up to 16 (14 if background scanning enabled) EAP Type(s) EAP-TLS, EAP-TTLS/MSCHAPv2, EAPv0/EAP-MSCHAPv2, PEAPv1/EAP-GTC, EAP-SIM, EAP-AKA, EAP-FAST User/Device Authentication WPA™ and WPA2™ with 802. Power Integrations, Inc. 
6 Taylor Series You can see that we can make Taylor Polynomial of as high a degree as we'd like. This may add considerable effort to the solution and if the power series solution can be identified as an elementary function, it's generally easier to just solve the homogeneous equation and use either the method of undetermined coefficients or the method of variation of parameters. They are ubiquitous is science and engineering as well as economics, social science, biology, business, health care, etc. We begin with the general power series solution method. A power series is like a polynomial of in nite degree. FREE with a 30 day free trial. Daileda Frobenius' Method. They used two different univariate modeling methods namely, ARIMA and AR(1) with a highpass filter. Series compensation is the method of improving the system voltage by connecting a capacitor in series with the transmission line. Published by MacKichan Software, Inc. 3 Convergence of the Power Method If A is an diagonalizable matrix with a dominant eigenvalue, then there exists a nonzero vector such that the sequence of vectors given by. the other hand, when the power series is convergent for all x, we say its radius of convergence is infinity, that is ρ= ∞. Time series provide the opportunity to forecast future values. The quantum mechanical hypervirial theorems (HVT) method used for a class of central potentials in obtaining approximate analytic expressions in the form of truncated power series expansions for energy eigenvalues, the expectation values for the. Good luck! You may be interested to read the Introduction to Calculus , which has a brief history of calculus. The amount of power does not change (except for a small loss. An eigenvalue problem solved by the power series method 5 6 48 89 Stand out from the crowd Designed for graduates with less than one year of full-time postgraduate work. 2 NUMERICAL METHODS FOR DIFFERENTIAL EQUATIONS Introduction Differential equations can describe nearly all systems undergoing change. Optional arguments used by specific methods. Brown Duke University Physics Department Durham, NC 27708-0305 [email protected] 4A GUIDE TO LOW RESISTANCE TESTING A GUIDE TO LOW RESISTANCE TESTING 5 are exposed to acid vapors, causing further degradation. density func. power transfer between the reader and the tag and the communication distance are maximized. CONTENTS 12. Solving the Legendre Equation with Frobenius Method method (all the way through and not randomly deciding to take s=0 since that's just a regular power series. We begin with the general power series solution method. Consequently, Fuchs's result does not even guarantee the existence of power series solutions to Bessel's equation. POWER SERIES 97 4. NO Does lim n→∞ sn = s s finite? YES. 1 TIME SERIES PATTERNS Horizontal Pattern Trend Pattern Seasonal Pattern Trend and Seasonal Pattern Cyclical Pattern Using Excel's Chart Tools to Construct a Time Series Plot Selecting a Forecasting Method 15. On the whole, the new methods that have been developed consist of enhance-ments to these basic methods, sometimes major, in the form of preconditioners, or other variations. ) Power and inverse power methods February 15, 2011 1 / 17. It involves gen- erating all permutations of design congurations, screening them through a mechanical feasibility check, and evaluating and determining the optimal parameters for the mechanically feasible ones. We plan to cover most of the textbook. 2 The Power Series Method. 
37 A The parallel resistors must be reduced to a single series value before being added to the series resistor. This Series includes publications related to testing and assessment of chemicals; some of them support the development of OECD Test Guidelines (e. IndiaBIX provides you lots of fully solved Logical Reasoning (Number Series) questions and answers with Explanation. Morgan Street (m/c 249), Chicago, IL 60607-7045, USA fnbliss2,[email protected] Added Apr 17, 2012 by Poodiack in Mathematics. Exercises78 Chapter 6. This Technical Memorandum provides a quick reference for some of the more common approaches used in dynamics analysis. inverse power series using only one page of computations with approximately J(n + l)2 numbers. Students should also be prepared to review their calculus, especially if they have been away from calculus for a while. A SHORT(ER) PROOF OF THE DIVERGENCE OF THE HARMONIC SERIES LEO GOLDMAKHER It is a classical fact that the harmonic series 1+ 1 2 + 1 3 + 1 4 + diverges. Derive a Fourier series for a periodic function f(x) with a period (0, 2L). Best Practices for Mixed Methods Research in the Health Sciences • Embedding data. It gives solutions in the form of power series. Introduction A power series (centered at 0) is a series of the form ∑∞ n=0 anx n = a 0 +a1x+a2x 2. Because rcan be fractional or a negative number, (19) is in general not a power series. Approximation Methods 3. MIT OpenCourseWare http://ocw. The basic idea is to obtain a power series expansion for a function whose roots are multiples of the perfect squares 1, 4, 9, etc. to be used as fuel for the power plants, The construction can be performed by different methods. Thematic analysis is a method that is often used to analyse data in primary qualitative research. Competitive exams are all about time. Once we nd (5), we next check the convergence of the series. Time series in which treatment administered by. DC generators are classified based on their method of excitation. 5 = 15 ohms divided by 2 = 7. It is often difficult to operate with power series. Home Contents Index Top of page Hosted by ziaspace. Chapter 7 Series Solutionsof Linear Second Order Equations 7. Noll and therefore only its application to parallel circuits will be discussed here. So on this basis there are two types of DC generators:-1. Saab and colleagues [9] studied the forecasting method for monthly electric energy consumption in Lebanon. Convexity, Concavity and the Second Derivative74 12. For Power factor improvement purpose, Static capacitors are connected in parallel with those devices which work on low power factor. It is often useful to designate the infinite possibilities by what is called the Taylor Series. 9/12 Functional Data Having observations that are time series can be thought of as having a “function” as an observation. " This becomes clearer in the expanded […]. Meanwhile for Incremental Conductance method [6], the slope of the PV’s power against voltage (P-V) curve is used to track the MPP. 5 TUKEY’S TEST. In Two wattmeter method the current coils of the wattmeter are connected with any two lines, say R and Y and the potential coil of each wattmeter is joined on the. 3 Linear Ordinary Differential Equations with Nonconstant Coefficients A181 A. Figure 3 Relationships Between Power, Current, Power Factor and Motor Load Example: Input Power. 
DATA SHEET | FortiGate® 100E Series wwwfortinetcom Copyriht 01 Fortinet Inc All rihts reserved Fortinet® FortiGate® FortiCare® and FortiGuard® and certain other mars are reistered trademars of Fortinet Inc and other Fortinet names herein may also be reistered and/or common law. A Simple Circuit for Measuring Complex Impedance 4 Parallel Impedance The methods used in this article determine the series resistance and reactance. Use a known series to find a power series in x that has the given function as its sum: (a) /Courses Fall 2008/Math 262/Exam Stuff/M262PowerSeriesPracSoln. There are also similarities amongst some of the methods. It is true that economic series tend to move together but in order to obtain a linear combination of the series, that is. We know that most of the industries and power system loads are inductive that take lagging current which decrease the system power factor (See Disadvantages of Low Power factor). One might say that the field has evolved eve n more from gaining maturity than from the few important developments which took place. Process leaks could result in death or serious injury. Note - Taking Methods Outline Method Organizational technique which allows you to show main points, sub-points and details. In mathematics, the power series method is used to seek a power series solution to certain differential equations. 7 The Method of Frobenius III 379. A Frobenius series (generalized Laurent series) of the form can be used to solve the differential equation. An important application of power series in the field of engineering is spectrum analysis. will study the theory, methods of solution and applications of ordinary dif-ferential equations. SAMPLE SIZES FOR SELF-CONTROLLED CASE SERIES STUDIES 1 1. The target function to determine the TCSCs installing place is reducing power system lines overload during fault contingency. Multiplication of Power Series. More specifically, the objectives of the handbook are to: (a) Provide, in one publication, basic concepts and methodologically sound procedures for designing samples for, in particular, national‑level household surveys, emphasizing applied aspects of household sample design; (b) Serve as a practical guide for survey practitioners in designing and. 3 pJ for potentiation and 20 pJ for depression cycles of. 4A GUIDE TO LOW RESISTANCE TESTING A GUIDE TO LOW RESISTANCE TESTING 5 are exposed to acid vapors, causing further degradation. If an input is given then it can easily show the result for the given number. Mathematical Methods for Introductory Physics by Robert G. Given real (or complex!) numbers aand r, X1 n=0 arn= (a 1 r if jr <1 divergent otherwise The mnemonic for the sum of a geometric series is that it's \the rst term divided by one minus the common ratio. If G(x,y) can be factored to give G(x,y) = M(x)N(y),then the equation is called separable. the other hand, when the power series is convergent for all x, we say its radius of convergence is infinity, that is ρ= ∞. Power Series - Working with power series. 1 General Review A general review of the applicability of series compensation shows that it serves to increase power transfer under steady state and transient conditions, as well as. This is the PDF file of text No. The time domain signal used in the Fourier series is periodic and continuous. Anthony Metivier has taught as a professor, is the creator of the acclaimed Magnetic Memory Method and the author behind a dozen bestselling books on the topic of memory and language learning. 
If you don't recall how to do this take a quick look at the first review section where we did several of these types of problems. 7 TAYLOR AND LAURENT SERIES 3 7. Featuring new hit original series The Rook, Sweetbitter, Power, The Spanish Princess, Vida, Outlander, Wrong Man, American Gods, Now Apocalypse as well as Warriors of Liberty City, America to Me, Ash vs Evil Dead, Black Sails, Survivor's Remorse, The. This experiment should show you the difference. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function. 1 Introduction to Power Series As noted a few times, not all differential equations have exact solutions. The power applied to each speaker is calculated: Pa=Po x(Zt/Zn) Pa = 200 x (2/4) Pa = 100 watts Each 4 ohm speaker would receive 100 watts. In following section, 2. If you don't recall how to do this take a quick look at the first review section where we did several of these types of problems. Di⁄erentiating Power Series Theorem. 14: Power in AC Circuits 14: Power in AC Circuits •Average Power •Cosine Wave RMS •Power Factor + •Complex Power •Power in R, L, C •Tellegen's Theorem •Power Factor Correction •Ideal Transformer •Transformer Applications •Summary E1. The “out-of-the-box” version of the Texas Method simply isn’t ideal. Here follows a collection of examples of how one can solve linear differential equations with polynomial coefficients by the method of power series. Math 306 - Power Series Methods Final Review Key (1) True of False? (a) If the series X1 n=k an converges, then the sequence an converges to 0: TRUE (b) If an converges to 0 then X1 n=k an converges:. The last few months (May to July) have been very busy for the IEEE Power Electronics Society (PELS). Given that y(x) satis es y00+ y0+ x2y = 0 y(0) = 1 y0(0) = 2. Power Series Lecture Notes A power series is a polynomial with infinitely many terms. 01SC Single Variable Calculus Fall 2010 For information about citing these materials or our Terms of Use, visit: http://ocw. Some administrations have already adopted or are developing methods or procedures for measuring either the radiated emissions or the conducted emissions from power line telecommunication systems, or both. POWER SYSTEM is predominantly in steady state operation or in a state that could with sufficient accuracy be regarded as steady state. Lets take a look at the motor stator that utilizes this power source. The three commonly employed current sensing methods for switch mode power supplies are: are using a sense resistor, using the MOSFET R DS(ON) and using the DC resistance (DCR) of the inductor. 1 Introduction to Power Series As noted a few times, not all differential equations have exact solutions. Normally, one supposes that statistically significant peaks at the same frequency have been shown in two time series and that we wish to see. Besides being compact, this method has the advantage of being systematic. short-circuit currents with a reasonable degree of accuracy at various points for either 3Ø or 1Ø electrical distribution systems. PDF has set of features for creating documents that could change their contents in response to reader actions. The unit starts by developing and extending learners’ understanding of fundamental electrical and electronic principles through analysis of simple direct current (DC) circuits. 5 Repeat steps 5. Power consumption per CMOS neuron block was found to be 3 nw in the 65 nm process technology, while the energy consumption per cycle was 0. 
daily temperature in NY, SF,. This way we don't have to pay special attention to the initial indices in the power series. Attempt at classification: • Local methods: the use of p-adic fields, in an elementary way (congruences modulo powers of p), or less elementary (Strassmann's or Weierstrass's theorem, p-adic power series,Herbrand's and Skolem's method). In The Hand Of The Goddess Song Of The Lioness Series Book 2. CHANGE AGENTS – “Twelve Methods Used By Change Agents to Change the Church”. Recall from Chapter 8 that a power series represents a function f on an interval of convergence, and that you can successively differentiate the power series to obtain a series for and so on. So on this basis there are two types of DC generators:-1. CHAPTER 83 POWER SERIES METHODS OF SOLVING ORDINARY DIFFERENTIAL EQUATIONS. López-Sandoval*a, A. preventing and mitigating EMP threats and the impacts on the power grid. Here is an example: 0 B œ "  B  B  B  âa b # $. We can safely write sums over all integers k, and then simply remember that for a power series, ak = 0 for all k < 0. The aim is to introduce and review the basic notation, terminology, conventions, and elementary facts. Here is an example: 0 B œ " B B B âa b #$ Like a polynomial, a power series is a function of B. One might say that the field has evolved eve n more from gaining maturity than from the few important developments which took place. An algorithm for the machine calculation of complex Fourier series. [email protected] x $k[+|s3 ª9)7 |hbed)qvb ¼ se¿j qk9)> ¼0¾ e jk Ëcd03 @[email protected];fAround the world, liberalization and privatization in the electricity industry have lead to increased competition among utilities. It involves gen- erating all permutations of design congurations, screening them through a mechanical feasibility check, and evaluating and determining the optimal parameters for the mechanically feasible ones. As you read this pamphlet, may God increase your passion to study the Word of God for yourself and then to pass on what you learn to others. We know that most of the industries and power system loads are inductive that take lagging current which decrease the system power factor (See Disadvantages of Low Power factor). NO Does lim n→∞ sn = s s finite? YES. If x = x 0 is an ordinary point of the DE (1) then we can always nd two linearly independent power series solutions centered at x 0: y = P1 n=0 c n(x x 0)n. Labels need not be unique but must be a hashable type. Series DC Motor Components of a series motor include the armature, labeled A1 and A2, and the field, S1 and S2. Then for values of x very close to the origin, we can approximate a(x) ≃ aand b(x) ≃ bby the leading terms of their Taylor series about. It can be interpreted as an infinite polynomial. Thus, the interval of convergence is (0,2]. power series methods of summability Enes Yavuz1 and Özer Talo2 Abstract: We prove a Korovkin type approximation theorem via power series methods of summability for con-tinuous 2π-periodic functions of two variables and verify the convergence of approximating double sequences of positive linear operators by using modulus of continuity. Then choose an initial approximation of one of the dominant eigenvectors of A. The appearance and equivalent circuit for the PLY10 series hybrid choke coil are shown above. Included are discussions of using the Ratio Test to determine if a power series will converge, adding/subtracting power series, differentiating power series and index shifts for power series. 
Example The function y(x) = 1 1 − x is defined for x ∈ R −{1}. Daileda Frobenius' Method. Review of Concepts and Methods A167 A. Given real (or complex!) numbers aand r, X1 n=0 arn= (a 1 r if jr <1 divergent otherwise The mnemonic for the sum of a geometric series is that it's \the rst term divided by one minus the common ratio. 2 MULTIPLE COMPARISONS IN A SIMPLE EXPERIMENT ON MORPHINE TOLERANCE 12. The Cisco Meraki Z-Series teleworker gateway is an enterprise class firewall, VPN gateway and router. One of such features is the ability to use Javascript in PDF documents. The basic idea is to obtain a power series expansion for a function whose roots are multiples of the perfect squares 1, 4, 9, etc. You can get immediate free access to these example files by subscribing to the Power Spreadsheets Newsletter. We also assume that a 0 6= 0. , monthly data for unemployment, hospital admissions, etc. Lee Department of Mathematics Oregon State University January 2006. A well-known method of forecasting wind is the simplistic persistence method. Natural gas knowledge series : Laying Natural Gas Pipeline. infinite radius of convergence, so do both series above. The methods presented will largely follow the methods developed by Bar-. The Maclaurin series is a template that allows you to express many other functions as power series. The fastest growing community of electrical engineers with 300+ new members every day seeking technical articles, advanced education, tools, and peer-to-peer discussions. This paper focuses on the operation of. Now, break up the first term into two so we can multiply the coefficient into the series and multiply the coefficients of the second and third series in as well. Threading Basics Fundamental Manufacturing Processes Video Series Study Guide - 2 - In manufacturing, external threads are produced in several ways. Chapter 7 Series Solutionsof Linear Second Order Equations 7. Chasnov The Hong Kong University of Science and Technology. This method of rating batteries is also called the 20-hour discharge rating. Like a polynomial, a power series is a function of B. We will only need to shift the second series down by two to get all the exponents the same in all the series. If you are using an adjustable power supply, turn the control all the way down in the counterclockwise direction, plug it in and turn it on. Mega Feature: Layne Norton Training Series + Full Power/Hypertrophy Routine Layne Norton is a Pro Natural Bodybuilder with the IFPA and NGA. Computing Fourier Series and Power Spectrum with MATLAB By Brian D. 1) converges at any other point x , x 0, we say that (7. If is too large, thenB B the series will diverge:. STARZ official website containing schedules, original content, movie information, On Demand, STARZ Play and extras, online video and more. power is simply the vector sum or geometrical sum of reactive and active power (Fig. Different approaches are needed for different power series. HEAD OFFICE: TOKYO BUILDING, 2-7-3, MARUNOUCHI, CHIYODA-KU, TOKYO 100-8310, JAPAN NAGOYA WORKS: 1-14, YADA-MINAMI 5, HIGASHI-KU, NAGOYA, JAPAN. This is a convergent power series, but the same power series does not define an asymptotic series for exp(z). This method can assume unlimited primary short-circuit current (infinite bus) or it can be used with limited primary available current. to put into appropriate form. Exponents81 2. Learn essential HPLC maintenance tips Ensure trouble-free operation of your lab Select the ultimate PerfectFit HPLC supplies. 
The initial data imply that a0 D1 and a1 D0ify Da0 Ca1x Ca2x2 C. Separately excited DC generator. As we have seen, we can use these Taylor series approximations to estimate the mean and variance estimators. POWER at circuit breaker or fuse and test that power is off before wiring! Infrared Ceiling Mounted Occupancy Sensor Cat. Exercises76 14. This requires the coefficients on each power of x to equal zero. 248 CHAPTER 7. Read More. For example,B 0 ! œ " ! ! ! â œ "a b. Leavitt Power series in the past played a minor role in the numerical solutions of ordi-nary and partial differential equations. 6 Taylor Series You can see that we can make Taylor Polynomial of as high a degree as we'd like. Self-excited DC generator. Time Series Analysis and Forecasting CONTENTS STATISTICS IN PRACTICE: NEVADA OCCUPATIONAL HEALTH CLINIC 15. Exercises78 Chapter 6. The disadvantage, however, to series-parallel circuits is twofold: The individual series must be balanced within 10 percent (three caps cannot be included in one series and seven caps in another), and the blasting. In Figure 2, we saw the cross section of a 3 phase, 2 pole motor. Except upon the express prior permission in writing, from the authors, no part of this work may be reproduced, transcribed, stored electronically, or transmitted in any form by any method. Definition 1. Example 3: Find a power series solution in x for the IVP. 3 We considered power series, derived formulas and other tricks for nding them, and know them for a few functions. Experiment 4 ~ Resistors in Series & Parallel Objective: In this experiment you will set up three circuits: one with resistors in series, one with resistors in parallel, and one with some of each. These change agents are making for difficult times in many places for the body of Christ. For example, the bi- as-breaking method [5] is a form of post-stratification, which is a type of stratification, and the propensity score is derived from stratification. time series and time-trend regression is appropriate for trend stationary I(0) time series. 1 Time series data A time series is a set of statistics, usually collected at regular intervals. Time series in which treatment administered by. Handle the transmitter carefully. Language Teaching Methods Teacher's Handbook for the Video Series by Diane Larsen-Freeman Office of English Language Programs Materials Branch United States Department of State. 6 The Method of Frobenius II 365 7. The series converges, but the exact value of the sum proves hard to find. Featuring new hit original series The Rook, Sweetbitter, Power, The Spanish Princess, Vida, Outlander, Wrong Man, American Gods, Now Apocalypse as well as Warriors of Liberty City, America to Me, Ash vs Evil Dead, Black Sails, Survivor's Remorse, The. Although many forecasting methods were developed, none can be generalized for all demand patterns. - Power loss since power dissipation P=I2× R. So on this basis there are two types of DC generators:-1. The disk of convergence of the derivative or integral series is the same as that of the original series. Di⁄erentiating Power Series Theorem. The high current. All students, freshers can download Logical Reasoning Number Series quiz questions with answers as PDF files and eBooks. 
Euler's formula states that for any real number x: e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm , i is the imaginary unit , and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. Buy Methods of Theoretical Physics, Part I (International Series in Pure and Applied Physics) on Amazon. The conversion method shown in figure 1 is the method that will be applied to the cost effective power supply. Competitive exams are all about time. Then we choose an initial approximation of one of the dominant eigenvectorsof A. It is important to note that asymptotic series are distinct from convergent series: a convergent series need not be asymptotic. 9 Representation of Functions by Power Series 671 Operations with Power Series The versatility of geometric power series will be shown later in this section, following a discussion of power series operations. More generally, a series of the form is called a power series in (x-a) or a power series at a. The geometric series is a simplified form of a larger set of series called the power series. Each model is designed to securely extend the power of Meraki cloud managed networking to employees, IT staff, and executives working from home. short-circuit currents with a reasonable degree of accuracy at various points for either 3Ø or 1Ø electrical distribution systems. ) Below the 50% load point, due to reactive magnetizing current requirements, power factor degrades and the amperage curve becomes increasingly non-linear. Given a set, the objects that form it are called its elements. Binomial expansion, power series, limits, approximations, Fourier series Notice: this material must not be used as a substitute for attending the lectures. At the endpoint x= 0, the power series becomes the harmonic series 1+ 1 2 + 1 3 + 1 4 +, which diverges. Advanced Calculus Third Edition Robert Wrede, Ph. In this chapter the conventional methods employed for reactive power compensation, their relative merits and demerits, desirable features of an advanced compensator in a distribution system are highlighted. INTRODUCTION Throughout this paper X∞ n=0 a n is a series of real or complex numbers and {s. Based on previous values, time series can be used to forecast trends in economics, weather, and capacity planning, to name a few. Natural gas knowledge series : Laying Natural Gas Pipeline. In general, whenever you want to know lim n→∞ f(n) you should first attempt to compute lim x→∞ f(x), since if the latter exists it is also equal to the first limit. 1 Review of Power Series 307 7. A power series is like a polynomial of in nite degree. These are the books for those you who looking for to read the In The Hand Of The Goddess Song Of The Lioness Series Book 2, try to read or download Pdf/ePub books and some of authors may have disable the live reading. Text: Matrix and Power Series Methods, 5th Edition, John W. 1 shows the connection diagram for the two-wattmeter method of measuring three- phase power. A TAUBERIAN THEOREM FOR DISCRETE POWER SERIES METHODS Bruce Watson RECEIVED: ABSTRACT: A tauberian theorem from summability by a discrete power series method to ordinary convergence is proved. These change agents are making for difficult times in many places for the body of Christ. 
[email protected] x$ k[+|s3 ª9)7 |hbed)qvb ¼ se¿j qk9)> ¼0¾ e jk Ëcd03 @[email protected];fAround the world, liberalization and privatization in the electricity industry have lead to increased competition among utilities. Sample Quizzes with Answers Search by content rather than week number. The basic idea is to obtain a power series expansion for a function whose roots are multiples of the perfect squares 1, 4, 9, etc. Included are discussions of using the Ratio Test to determine if a power series will converge, adding/subtracting power series, differentiating power series and index shifts for power series. Manipulating Power Series Our technique for solving di⁄erential equations by power series will essentially be to substitute a generic power series expression y(x) = X1 n=0 a n (x x o) n into a di⁄erential equations and then use the consequences of this substitution to determine the coe¢ cients a n. Notice that the single phase curve, unlike its three phase cousin, consists of only one wave form. A Simple Circuit for Measuring Complex Impedance 4 Parallel Impedance The methods used in this article determine the series resistance and reactance. The Cisco Meraki Z-Series teleworker gateway is an enterprise class firewall, VPN gateway and router. 5 The Power Series Method, Part II A191 A. Once the series solution is obtained, it should be substituted into the di erential equation to con rm that it really is a solution. Chapter 1 Linear Algebra 1. If x = x 0 is an ordinary point of the DE (1) then we can always nd two linearly independent power series solutions centered at x 0: y = P1 n=0 c n(x x 0)n. Power Series Power series are one of the most useful type of series in analysis. We will only need to shift the second series down by two to get all the exponents the same in all the series.
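The excerpts above repeatedly name the power series method for ODEs without a complete worked example, so here is a minimal sketch of the technique on the simplest possible equation, y' = y with y(0) = 1. This is my own illustration and is not drawn from any of the quoted texts:

```python
import math

def power_series_solution(x, terms=20):
    """Solve y' = y, y(0) = 1 by the power series method: substituting
    y = sum a_n x^n gives the recurrence a_{n+1} = a_n / (n + 1)."""
    a = 1.0            # a_0 fixed by the initial condition y(0) = 1
    total = 0.0
    for n in range(terms):
        total += a * x ** n
        a = a / (n + 1)  # recurrence obtained by matching coefficients of x^n
    return total

# The truncated series should agree with the exact solution e^x.
for x in (0.5, 1.0, 2.0):
    print(x, power_series_solution(x), math.exp(x))
```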
2019-11-18 04:19:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6165821552276611, "perplexity": 1266.6372260608728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669431.13/warc/CC-MAIN-20191118030116-20191118054116-00046.warc.gz"}
https://odin.cse.buffalo.edu/rants/2012-08-25-The_Viewlet_Transform_Part_5_Hypergraph_Partitioning.html
The Viewlet Transform (Part 5: Hypergraph Partitioning) I've been talking for several weeks now about tools and techniques related to AGCA and the viewlet transform.  Most recently, I've been talking about optimization techniques for AGCA, but I'm going to take a quick detour this week and provide a quick overview of another technique: Hypergraph Partitioning.  In general, this technique is most suited for optimizing the materialization process, but there are applications to the optimization of aggregate computations as well. The Query Hypergraph Before I get into the technique though, we need to discuss an alternate representation of AGCA expressions (one that's actually used pretty frequently in query optimization): the query hypergraph (basically a graph where an edge can connect any number of nodes.  This kind of hypergraph can be created for any product of terms (in the trivial case, we have a product of just one term).  Each node in the hypergraph is a variable/column of the query (both output and input variables are treated identically for this purpose).  Each hyperedge corresponds to one term in the product, and each edge connects all variables that appear in the term corresponding to the edge (regardless of whether they appear as inputs or outputs). Hypergraph Partitioning Remember that the product operator corresponds to the natural join (and that comparisons are implemented as relations).  As a consequence, any disconnected components in the graph effectively correspond to cross products (a natural join with no shared columns).  For example, consider the following trivial example. R(A) * S(B) R(A) is a hyperedge touching only A.  S(B) is a hyperedge touching only B.  Thus A and B are separate disconnected components.  Note, by the way, that there are no comparisons between A and B in this query.  This product is a pure cartesian cross-product.  The following query would not be: R(A) * S(B) * {A < B} In this query, the term { A < B } connects both A and B. Now, if we have disconnected components, it typically pays to materialize them separately.  For example, going with R and S above, we could materialize them as M( R(A) * S(B) ) But now we have to store |R| * |S| entries (where |R| is the number of tuples in R).  Worse, if we need to update the materialized view, it will cost us |S| after an update to R, and |R| after an update to S.  On the other hand, we could materialize as M(R(A)) * M(S(B)) Now we only store |R| + |S| tuples (between the two materialized views), and updating either can be done in constant time.  Better still, we lose nothing with this representation.  It costs us O(|R|*|S|) to iterate over every element of either materialization of the expression. You might say that this is a crazy corner case -- people almost never compute cross products.  That's usually true, but in DBToaster, this situation crops up quite frequently.  For example, consider the three way join query: R(A) * S(A,B) * T(B) The (optimized) delta of this query with respect to +S(dA, dB) is R(dA) * T(dB) Because each delta essentially removes a hyperedge in the query hypergraph, partitioned components are created extremely frequently. Partitioning and Trigger Parameters There's also one more situation where this is beneficial.  Consider the following query. 
R(A) * S(A) * T(A) And its delta with respect to the insertion +S(dA) R(dA) * T(dA) Even though dA is touched by both R and T, we lose nothing if we materialize them separately (as before, evaluation is O(1) either way), and materializing them separately results in more efficient maintenance.  In this case, dA is a trigger parameter -- one of the variables drawn from the relation being modified.  These trigger parameter variables can be excluded from the query hyper graph. Applications to Query Optimization In general, when computing aggregates, hypergraph partitioning can be used to select a more efficient computation order.  Each materialized component gets scanned independently, and the resulting aggregate can be computed. And that's about it for now.  Next week, we return to AGCA optimization with a discussion of the interplay between equality and lifts, and how to optimize expressions of this form.
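To make the partitioning step concrete, here is a small Python sketch that builds the query hypergraph over non-trigger variables and groups terms by connected component using a union-find. The term/variable encoding is my own shorthand for the AGCA examples in this post, not DBToaster's actual representation:

```python
def partition(terms, trigger_vars=()):
    """terms: list of (name, variables). Returns groups of terms whose
    hyperedges share at least one non-trigger variable."""
    trigger_vars = set(trigger_vars)
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        parent[find(a)] = find(b)

    for name, variables in terms:
        key = ('term', name)
        parent.setdefault(key, key)
        for v in set(variables) - trigger_vars:
            union(key, ('var', v))

    groups = {}
    for name, _ in terms:
        groups.setdefault(find(('term', name)), []).append(name)
    return list(groups.values())

# Delta of R(A)*S(A,B)*T(B) w.r.t. +S(dA,dB): once the trigger parameters dA, dB
# are excluded, R(dA) and T(dB) land in separate components -> materialize separately.
print(partition([('R', ['dA']), ('T', ['dB'])], trigger_vars=['dA', 'dB']))
# Compare: the original query forms a single connected component.
print(partition([('R', ['A']), ('S', ['A', 'B']), ('T', ['B'])]))
```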
2019-07-20 16:40:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6387360095977783, "perplexity": 1964.5613635182258}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526536.46/warc/CC-MAIN-20190720153215-20190720175215-00513.warc.gz"}
https://www.illn.in/sr/opencv-learning-4.html
strgeon OpenCV Learning 4 30 2018/06 OpenCV Learning - Study Notes 4

# Finding lines with cvHoughLines2()

## Function reference

CvSeq* cvHoughLines2(CvArr* image, void* line_storage, int method, double rho, double theta, int threshold, double param1=0, double param2=0);

image - the input 8-bit, single-channel (binary) image; with the probabilistic method the image is modified by the function.
line_storage - storage for the detected lines: either a memory storage or a pre-allocated matrix.
method - the Hough transform variant, one of the following:
• CV_HOUGH_STANDARD - the classical or standard Hough transform. Every line is represented by two floating-point numbers (ρ, θ), where ρ is the distance between the line and the origin (0,0) and θ is the angle between the line and the x-axis. The matrix type must therefore be CV_32FC2.
• CV_HOUGH_PROBABILISTIC - the probabilistic Hough transform (more efficient if the image contains a few long linear segments). It returns line segments rather than whole lines. Each segment is represented by its start and end points, so the matrix (or created sequence) type is CV_32SC4.
• CV_HOUGH_MULTI_SCALE - a multi-scale variant of the classical Hough transform. Lines are encoded the same way as with CV_HOUGH_STANDARD.
rho - distance resolution of the accumulator, in pixels.
theta - angle resolution of the accumulator, in radians.
threshold - accumulator threshold; only lines gathering more than this number of votes are returned.
param1 - first method-dependent parameter (for the probabilistic transform it is the minimum line length).
param2 - second method-dependent parameter (for the probabilistic transform it is the maximum gap between collinear segments that are still merged into one line).

Usage example from 4检测直线.cpp (the line-detection demo):
lines = cvHoughLines2( dst, storage, CV_HOUGH_PROBABILISTIC, 1, CV_PI/180, 50, 50, 10 );

# The Hough circle transform: cvHoughCircles

CvSeq* cvHoughCircles( CvArr* image, void* circle_storage, int method, double dp, double min_dist, double param1=100, double param2=100, int min_radius=0, int max_radius=0 );

image - the input 8-bit, single-channel grayscale image.
circle_storage - storage for the detected circles: either a memory storage or a pre-allocated matrix.
method - the Hough transform variant; currently only CV_HOUGH_GRADIENT is supported, which is basically 21HT, described in [Yuen03].
dp - resolution of the accumulator used to detect the centers of the circles. For example, if it is 1 the accumulator has the same resolution as the input image; if it is 2 the accumulator has half the width and height, and so on.
min_dist - minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighboring circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1 - the first method-specific parameter. For CV_HOUGH_GRADIENT it is the higher of the two thresholds passed to the Canny edge detector (the lower one is twice smaller).
param2 - the second method-specific parameter. For CV_HOUGH_GRADIENT it is the accumulator threshold at the center-detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to larger accumulator values are returned first.
min_radius - minimal radius of the circles to search for.
max_radius - maximal radius of the circles to search for. By default the maximal radius is set to max(image_width, image_height).
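For comparison, roughly the same calls with the modern OpenCV Python bindings look as follows. This is a hedged sketch rather than a drop-in replacement for the legacy C API above: the input file name is a placeholder, and the parameter values simply mirror the probabilistic-Hough example (ρ = 1 px, θ = π/180, threshold = 50, minLineLength = 50, maxLineGap = 10).

```python
import cv2
import numpy as np

img = cv2.imread("lines.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
edges = cv2.Canny(img, 50, 200)

# Probabilistic Hough transform: returns segments as (x1, y1, x2, y2)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=50, maxLineGap=10)

# Gradient-based Hough circle transform: param1 is the upper Canny threshold,
# param2 the accumulator threshold for circle centers
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=100, minRadius=0, maxRadius=0)

print(None if segments is None else len(segments),
      None if circles is None else circles.shape)
```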
2018-09-18 14:00:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6597277522087097, "perplexity": 10853.774226699665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155413.17/warc/CC-MAIN-20180918130631-20180918150631-00188.warc.gz"}
https://chemistry.stackexchange.com/questions/14239/why-is-it-written-as-joule-and-j/14307
# Why is it written as “joule” and “J”? [closed]

Why is the unit joule written as "joule" and "J"? I mean, what does "J" mean and what does "joule" mean?

## closed as off-topic by Jan, ringo, Todd Minehardt, getafix, Klaus-Dieter Warzecha Nov 6 '16 at 22:54

• This question does not appear to be about chemistry within the scope defined in the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
• I'm voting to close this question as off-topic because this is easily answerable by just about any research. – Jan Nov 6 '16 at 20:36

In SI, every base unit and many derived units have an official name and an official symbol. For example "meter"-"m", "second"-"s" or "joule"-"J". There are grammatical rules for how to use them, but more or less they are interchangeable. If you wonder about the capitalization: in English, the unit names are common nouns and not capitalized ("joule") even if they are derived from a person's name (Joule) or the symbol is a capital letter ("J").

The joule, symbol "$\mathrm{J}$", is a derived unit of energy, work, or amount of heat in the SI system. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre (1 newton metre or $\mathrm{N\cdot m}$), or in passing an electric current of one ampere through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule (1818–1889). In terms firstly of base SI units and then in terms of other SI units: $$\mathrm{J} = \mathrm{\frac{kg \cdot m^2}{s^2}} = \mathrm{N \cdot m} = \mathrm{Pa \cdot m^3} = \mathrm{W \cdot s} = \mathrm{C \cdot V}$$ where $\mathrm{kg}$ is the kilogram, $\mathrm{m}$ is the metre, $\mathrm{s}$ is the second, $\mathrm{N}$ is the newton, $\mathrm{Pa}$ is the pascal, $\mathrm{W}$ is the watt, $\mathrm{C}$ is the coulomb, and $\mathrm{V}$ is the volt. One joule can also be defined as:

• The work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one "coulomb volt" ($\mathrm{C\cdot V}$). This relationship can be used to define the volt.
• The work required to produce one watt of power for one second, or one "watt second" ($\mathrm{W\cdot s}$) (compare kilowatt hour). This relationship can be used to define the watt.

Source: Wikipedia.
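As an aside, the chain of equalities above can be checked mechanically with a units library. A small sketch, assuming the third-party pint package (not mentioned in the answer) is installed:

```python
import pint

ureg = pint.UnitRegistry()

one_joule = 1 * ureg.joule
# 1 J in SI base units: kg * m**2 / s**2 (the exact printed form depends on the pint version)
print(one_joule.to_base_units())

# The equivalent products named above: N*m, Pa*m**3, W*s and C*V all reduce to 1 joule.
for quantity in (1 * ureg.newton * ureg.meter,
                 1 * ureg.pascal * ureg.meter**3,
                 1 * ureg.watt * ureg.second,
                 1 * ureg.coulomb * ureg.volt):
    print(quantity.to("joule"))
```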
2019-07-18 08:50:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7498447299003601, "perplexity": 1011.551530006293}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525587.2/warc/CC-MAIN-20190718083839-20190718105839-00382.warc.gz"}
https://cs.stackexchange.com/questions/75686/matching-points-between-2-polygons/75694
# Matching points between 2 polygons Given 2 polygons in a plane: $A : ( (xa_1,ya_1), (xa_2,ya_2), ... (xa_n,ya_n) )$ $B : ( (xb_1,yb_1), (xb_2,yb_2), ... (xb_m,yb_m) )$ Is there a polynomial algorithm to compute a matching $M$ between the points in A and B, such that: 1. If $(xa_i,ya_i)$ is matched to $(xb_p,yb_p)$ and $(xa_k,ya_k)$ is matched to $(xb_r,yb_r)$, then for $i<j<k$ and $p<q<r$, $(xa_j,ya_j)$ is matched to $(xb_q,yb_q)$. 2. For $M:\{(i_1,j_1),(i_2,j_2)...\}$ and $|M|=min(n,m)$, $\Sigma_{k=1}^{|M|} distance((xa_{i_k},ya_{i_k}),(xb_{j_k},yb_{j_k}))$ is minimized. • You define two polygons, but match polylines. It seems to me that your polylines should be allowed to go around polygons, like in cyclical order. Can you please clarify that? – HEKTO May 20 '17 at 16:11 • Yes, i want to describe it like that but can not find the exact wording. But either one is okay for my application actually. – axeven May 21 '17 at 19:23
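One standard polynomial-time approach, assuming the open-polyline reading raised in the comments (no wrap-around) and Euclidean distances, is an alignment-style dynamic program over the two vertex sequences; condition 1 is exactly the monotonicity that such alignments preserve. A sketch (the function name and the O(nm) formulation are mine, not from the question):

```python
import math

def monotone_matching_cost(A, B):
    """Minimum total distance of an order-preserving matching that uses every
    vertex of the shorter sequence exactly once (open polylines, no wrap-around).
    A and B are lists of (x, y) pairs; runs in O(len(A) * len(B)) time."""
    if len(A) > len(B):
        A, B = B, A                      # ensure len(A) <= len(B)
    n, m = len(A), len(B)
    INF = math.inf
    # dp[i][j] = best cost of matching A[:i] into B[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        dp[0][j] = 0.0
    for i in range(1, n + 1):
        for j in range(i, m + 1):        # need at least i points of B
            d = math.dist(A[i - 1], B[j - 1])
            dp[i][j] = min(dp[i][j - 1],             # leave B[j-1] unmatched
                           dp[i - 1][j - 1] + d)     # match A[i-1] with B[j-1]
    return dp[n][m]
```

For genuinely closed polygons one would additionally minimise over the cyclic rotations (and, if orientation is free, the reversal) of one vertex list, which keeps the whole procedure polynomial.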
2020-09-27 17:27:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6989898085594177, "perplexity": 339.60504304810144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400283990.75/warc/CC-MAIN-20200927152349-20200927182349-00369.warc.gz"}
https://howlingpixel.com/i-en/Hans_Geiger
Hans Geiger Johannes Wilhelm "Hans" Geiger (/ˈɡaɪɡər/; German: [ˈɡaɪɡɐ]; 30 September 1882 – 24 September 1945) was a German physicist. He is best known as the co-inventor of the detector component of the Geiger counter and for the Geiger–Marsden experiment which discovered the atomic nucleus. He was the brother of meteorologist and climatologist Rudolf Geiger. Hans Geiger Hans Wilhelm Geiger (1928) Born30 September 1882 Died24 September 1945 (aged 62) NationalityGerman Known forGeiger counter Geiger–Marsden experiment Geiger–Müller tube Geiger–Nuttall law Atomic nucleus AwardsHughes Medal (1929) Duddell Medal and Prize (1937) Scientific career FieldsPhysics and sciences InstitutionsUniversity of Erlangen University of Manchester InfluencesErnest Rutherford John Mitchell Nuttall Biography Geiger was born at Neustadt an der Haardt, Germany. He was one of five children born to the Indologist Wilhelm Ludwig Geiger, who was a professor at the University of Erlangen. In 1902, Geiger started studying physics and mathematics at the University of Erlangen and was awarded a doctorate in 1906.[1] His thesis was on electrical discharges through gases.[2] He received a fellowship to the University of Manchester and worked as an assistant to Arthur Schuster. In 1907, after Schuster's retirement, Geiger began to work with his successor, Ernest Rutherford, and in 1908, along with Ernest Marsden, conducted the famous Geiger–Marsden experiment (also known as the "gold foil experiment"). This process allowed them to count alpha particles and led to Rutherford's winning the 1908 Nobel Prize in Chemistry.[3][4][5][6] In 1911 Geiger and John Mitchell Nuttall discovered the Geiger–Nuttall law (or rule) and performed experiments that led to Rutherford's atomic model.[7] In 1912, Geiger was named head radiation research at the German National Institute of Science and Technology in Berlin. There he worked with Walter Bothe (winner of the 1954 Nobel Prize in Physics) and James Chadwick (winner of the 1935 Nobel Prize in Physics).[8] Work was interrupted when Geiger served in the German military during World War I as an artillery officer from 1914 to 1918. In 1924, Geiger used his device to confirm the Compton effect which helped earn Arthur Compton the 1927 Nobel Prize in Physics.[9] In 1925, he began a teaching position at the University of Kiel where, in 1928 Geiger and his student Walther Müller created an improved version of the Geiger tube, the Geiger–Müller tube. This new device not only detected alpha particles, but beta and gamma particles as well, and is the basis for the Geiger counter.[10][11] In 1929 Geiger was named professor of physics and director of research at the University of Tübingen where he made his first observations of a cosmic ray shower. In 1936 he took a position with the Technische Universität Berlin (Technical University of Berlin) where he continued to research cosmic rays, nuclear fission, and artificial radiation until his death in 1945.[12] Beginning in 1939, after the discovery of atomic fission, Geiger was a member of the Uranium Club, the German investigation of nuclear weapons during World War II. 
The group splintered in 1942 after it was incorrectly determined that nuclear weapons would not play a major role in ending the war.[13] Although Geiger signed a petition against the Nazi government's interference with universities, he provided no support to colleague Hans Bethe (winner of the 1967 Nobel Prize in Physics) when he was fired for being Jewish.[14][15] Geiger endured the investiture of Berlin and subsequent Russian occupation (April/May 1945). Two months later he moved to Potsdam, dying there two months after the first nuclear bomb exploded over Japan. References 1. ^ Krebs, AT (July 1956). "Hans Geiger: Fiftieth Anniversary of the Publication of His Doctoral Thesis, 23 July 1906". Science. 124 (3213): 166. Bibcode:1956Sci...124..166K. doi:10.1126/science.124.3213.166. PMID 17843412. 2. ^ Shampo, M. A.; Kyle, R. A.; Steensma, D. P. (2011). "Hans Geiger—German Physicist and the Geiger Counter". Mayo Clinic Proceedings. 86 (12): e54. doi:10.4065/mcp.2011.0638. PMC 3228631. PMID 22196280. 3. ^ Rutherford E.; Geiger H. (1908). "An electrical method of counting the number of α particles from radioactive substances". Proceedings of the Royal Society of London, Series A. 81 (546): 141–161. Bibcode:1908RSPSA..81..141R. doi:10.1098/rspa.1908.0065. ISSN 1364-5021. 4. ^ Geiger H. (1913). "Über eine einfache Methode zur Zählung von α- und β-Strahlen (On a simple method for counting α- and β-rays)". Verhandlungen der Deutschen Physikalischen Gesellschaft. 15: 534–539. 5. ^ Campbell John (1999). Rutherford Scientist Supreme, AAS Publications. 6. ^ Shampo, M. A.; Kyle, R. A.; Steensma, D. P. (2011). "Hans Geiger—German Physicist and the Geiger Counter". Mayo Clinic Proceedings. 86 (12): e54. doi:10.4065/mcp.2011.0638. PMC 3228631. PMID 22196280. 7. ^ H. Geiger and J.M. Nuttall (1911) "The ranges of the α particles from various radioactive substances and a relation between range and period of transformation," Philosophical Magazine, series 6, vol. 22, no. 130, pages 613-621. See also: H. Geiger and J.M. Nuttall (1912) "The ranges of α particles from uranium," Philosophical Magazine, series 6, vol. 23, no. 135, pages 439-445. 8. ^ 9. ^ Shampo, M. A.; Kyle, R. A.; Steensma, D. P. (2011). "Hans Geiger—German Physicist and the Geiger Counter". Mayo Clinic Proceedings. 86 (12): e54. doi:10.4065/mcp.2011.0638. PMC 3228631. PMID 22196280. 10. ^ Geiger; Müller W. (1928). "Elektronenzählrohr zur Messung schwächster Aktivitäten (Electron counting tube for the measurement of the weakest radioactivities)". Die Naturwissenschaften (The Sciences). 16 (31): 617–618. Bibcode:1928NW.....16..617G. doi:10.1007/BF01494093. ISSN 0028-1042.
2019-05-25 01:51:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5553473830223083, "perplexity": 6596.937740580357}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257845.26/warc/CC-MAIN-20190525004721-20190525030721-00036.warc.gz"}
http://www.encyclopediaofmath.org/index.php/Knot_and_link_groups
##### Actions A class of groups isomorphic to the fundamental groups (cf. Fundamental group) of the complementary spaces of links (cf. Link) of codimension two in the sphere . For the cases the groups of smooth links of multiplicity are distinguished by the following properties [3]: 1) is generated as a normal subgroup by elements; 2) the -dimensional homology group of with integer coefficients and trivial action of on is ; and 3) the quotient group of by its commutator subgroup is a free Abelian group of rank . If is the group of the link , then 1) holds because becomes the trivial group after setting the meridian equal to 1 (see below), property 2) follows from Hopf's theorem, according to which is a quotient group of , equal to by Alexander duality; property 3) follows from the fact that and by Alexander duality. In the case or , necessary and sufficient conditions have not yet been found (1984). If , then does not split if and only if is aspherical, i.e. is an Eilenberg–MacLane space of type . A link splits if and only if the group has a presentation with deficiency larger than one [3]. The complement of a higher-dimensional link having more than one component is never aspherical, and the complement of a higher-dimensional knot can be aspherical only under the condition [5]. Furthermore, for every -dimensional knot with aspherical complement is trivial. It is also known that for a link is trivial if and only if its group is free [3]. Suppose now that . To obtain a presentation of the group by a general rule (cf. Fundamental group) in one forms a two-dimensional complex containing the initial knot and such that . Then the -chains of give a system of generators for and going around the -chains in gives the relations. If one takes a cone over for , emanating from a point below the plane of projection, one obtains the upper Wirtinger presentation (cf. Knot and link diagrams). If for one takes the union of the black and white surfaces obtained from the diagram of (removing the exterior domain), one obtains the Dehn presentation. The specification of in the form of a closed braid (cf. Braid theory; Knot and link diagrams) leads to a presentation of in the form , where is a word over the alphabet , and in the free group . In addition, every presentation of this type is obtained from a closed braid. For other presentations see [1], [2], [4], [7], [8]. Comparison of the upper and lower Wirtinger presentations leads to a particular kind of duality in (cf. [7]). This may be formulated in terms of a Fox calculus: has two presentations and such that for a certain equivalence one has and , where the equations are taken modulo the kernel of the homomorphism of the group ring of the free group onto the group ring of . This duality implies the symmetry of the Alexander invariant (cf. Alexander invariants). The identity problem has been solved only for isolated classes of knots (e.g. torus and some pretzel-like knots, cf. [6], etc.). There is no algorithm (cf. [1]) for recognizing the groups of -dimensional knots from their presentation. Stronger invariants for are the group systems consisting of and systems of classes of conjugate subgroups. A subgroup in is called a peripheral subgroup of the component ; it is the image under the imbedding homomorphism of the fundamental group the boundary of which is a regular neighbourhood of the component . If is not the trivial knot, separated from the other components of the -sphere, then . 
The meridian and the parallel in generate in two elements which are also called the meridian and the parallel for in the group system. In the case the parallel is uniquely determined for the group itself in the subgroup , but the meridian is only determined up to a factor of the form . For as an invariant see Knot theory. The automorphism group of the group has been completely studied only for torus links, for Listing knots (cf. Listing knot) and, to a higher degree, for Neuwirth knots (cf. Neuwirth knot, [2]). The representation of in different groups, especially with regard to , is a powerful means of distinguishing knots. E.g., the representation in the group of motions of the Lobachevskii plane allows one to describe the non-invertible knots. Metacyclic representations have been studied systematically. If does not split, then for a subgroup of a space of type is used as a covering of which, like , has the homotopy type of a -dimensional complex. It follows that an Abelian subgroup of is isomorphic to or ; in particular, contains no non-trivial elements of finite order. For the peripheral subgroups are maximal in the set of Abelian subgroups. Only the group of a toroidal link can have a centre [10]. A fundamental role is played by the subgroup containing the elements of whose link coefficients with the union of the oriented components are . If , then is the commutator subgroup; generally . Therefore may be taken as group of a covering over with infinite cyclic group of covering transformations. If is a connected oriented surface in with boundary , then it is covered in by a countable system of surfaces , which decompose into a countable number of pieces (where ). Hence one obtains that is the limit of the diagram where all the , are induced inclusions. It turns out that either they are all isomorphisms or no two are epimorphisms [2]. If the genus of a connected is equal to the genus of its link (such a is called completely non-split), then all the , are monomorphisms and is either a free group of rank or is not finitely generated (and not free, if the reduced Alexander polynomial is not zero; this is so for knots, in particular). A completely non-split link with finitely generated is called a Neuwirth link. #### References [1] R.H. Crowell, R.H. Fox, "Introduction to knot theory" , Ginn (1963) [2] L.P. Neuwirth, "Knot groups" , Princeton Univ. Press (1965) [3] J.A. Hillman, "Alexander ideals of links" , Springer (1981) [4] C.McA. Gordon, "Some aspects of clasical knot theory" , Knot theory. Proc. Sem. Plans-sur-Bex, 1977 , Lect. notes in math. , 685 , Springer (1978) pp. 1–60 [5] B. Eckmann, "Aspherical manifolds and higher-dimensional knots" Comm. Math. Helv. , 51 (1976) pp. 93–98 [6] K. Reidemeister, "Ueber Knotengruppen" Abh. Math. Sem. Univ. Hamburg , 6 (1928) pp. 56–64 [7] G. Hotz, "Arkandenfadendarstellung von Knoten und eine neue Darstellung der Knotengruppe" Abh. Math. Sem. Univ. Hamburg , 24 (1960) pp. 132–148 [8] H.F. Trotter, "Homology of group systems with applications to knot theory" Ann. of Math. , 76 (1962) pp. 464–498 [9] H.F. Trotter, "Non-invertible knots exist" Topology , 2 (1964) pp. 275–280 [10] G. Burde, H. Zieschang, "Eine Kennzeichnung der Torusknotten" Math. Ann. , 167 (1966) pp. 169–176
2013-06-20 10:10:58
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8534227013587952, "perplexity": 479.07213866618696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711406217/warc/CC-MAIN-20130516133646-00091-ip-10-60-113-184.ec2.internal.warc.gz"}
https://conversioncalculator.org/square-feet-to-acres/
square feet to acres

Square Feet to Acres

Square Feet in an Acre
1 Acre = 4,840 square yards
1 Acre = 43,560 square feet
1 Acre = 4,047 square meters
1 Acre = 0.4047 hectares

How Many Square Feet in an Acre: FAQ related to sq ft to acres

How many square feet means 1 acre? What is the square footage of 1/2 acre? How many acres is 90169? How many acres is 40×60? How many acres is equal to 1 Bigha? How many acres is 1 mile by 1 mile? How many football fields is an acre? How many miles are in 1 square mile? Are acres bigger than Miles? What does 1 acre of land look like? What is the dimension of 1 acre? How big is 1000 square acres?

How many square feet are there in 1 acre? There are 43,560 square feet in 1 acre. To convert from acres to square feet, multiply your figure by 43,560.

How many acres are there in 1 square foot? There are 2.2956841138659E-5 acres in 1 square foot. To convert from square feet to acres, divide your figure by 43,560.

What lot size is 1/2 acre? An acre is 43,560 square feet, so half an acre is 43,560/2 = 21,780 square feet. If your 1/2-acre plot of land is a square with area 21,780 square feet, then each side is of length √21780 ≈ 147.6 feet.

How many acres is 200 feet by 200 feet? 200 feet x 200 feet = 40,000 square feet = 0.918 acres, or in other words approximately 92% of an acre.

What are the dimensions of an acre of land? 43,560 square feet. Because an acre is a measure of area, not length, it is defined in square feet. An acre can be of any shape - a rectangle, a triangle, a circle, or even a star - so long as its area is exactly 43,560 square feet. The most standard shape for an acre is one furlong by one chain, or 660 feet by 66 feet.

What is the square footage of 1/4 acre? There are 43,560 square feet in an acre, so one quarter of an acre is 10,890 square feet. If each side of the square is F feet long then the area is F^2 square feet. Thus F^2 = 10890 and F = \sqrt{10890} = 104.35 feet.

Acres to Square Feet Conversion Table
1 acre = 43,560 square feet
2 acres = 87,120 square feet
3 acres = 130,680 square feet
4 acres = 174,240 square feet
5 acres = 217,800 square feet
6 acres = 261,360 square feet
7 acres = 304,920 square feet
8 acres = 348,480 square feet
9 acres = 392,040 square feet
10 acres = 435,600 square feet
11 acres = 479,160 square feet
12 acres = 522,720 square feet
13 acres = 566,280 square feet
14 acres = 609,840 square feet
15 acres = 653,400 square feet
16 acres = 696,960 square feet
17 acres = 740,520 square feet
18 acres = 784,080 square feet
19 acres = 827,640 square feet
20 acres = 871,200 square feet
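All of the conversions above reduce to the single constant 43,560; a tiny helper sketch:

```python
SQFT_PER_ACRE = 43_560

def sqft_to_acres(sqft: float) -> float:
    """Convert square feet to acres (divide by 43,560)."""
    return sqft / SQFT_PER_ACRE

def acres_to_sqft(acres: float) -> float:
    """Convert acres to square feet (multiply by 43,560)."""
    return acres * SQFT_PER_ACRE

print(round(sqft_to_acres(200 * 200), 3))   # 0.918 -> a 200 ft x 200 ft lot
print(acres_to_sqft(0.25))                  # 10890.0 -> square feet in a quarter acre
```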
2022-05-25 00:57:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822466731071472, "perplexity": 8671.111495387237}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00067.warc.gz"}
https://www.flyingcoloursmaths.co.uk/2020/11/
# November, 2020 ## A random number puzzle A puzzle that crossed my path via @drmaciver: player 1 pushes the button twice, and multiplies the two outputs together to get a score (e.g. 0.45 x 0.9=0.4). then player 2 pushes the button once, and squares the result to get their score (e.g. 0.67 x 0.67 = 0.4489) the Dear Uncle Colin, I’m told there are two circles that touch the x-axis at the origin and are also tangent to the line $4x-3y+24=0$, but I can’t find their equations. Any ideas? - A Geometrically Nasty Example Seems Impossible Hi, AGNESI, and thanks for your message! I’m going to start ## A proof without words Via nRICH: A circle touches the lines OA extended, OB extended and AB where OA and OB are perpendicular. Show that the diameter of the circle is equal to the perimeter of the triangle. $\blacksquare$ ## Ask Uncle Colin: Angles and roots Dear Uncle Colin, In my non-calculator paper, I’m told $\cos(\theta) = \sqrt{\frac{1}{2}+ \frac{1}{2\sqrt{2}}}$ and that $\sin(\theta) = -\left(\sqrt{\frac{1}{2}-\frac{1}{2\sqrt{2}}}\right)$. Given that $0 \le \theta \lt 2\pi$, find $\theta$. I’ve no idea how to approach it! - Trigonometric Headaches Evaluating This Angle Hi, THETA, and thanks for your message! My third thought ## The Mathematical Ninja and the Cube Root of 81 “I would have to assume the teacher means $\sqrt[4]{81}$ instead.” “That’s as may be. But $4\ln(3)$ is 4.4 (less one part in 800). A third of that is $1.4\dot 6$, less one part in 800, call it 1.465.” “So you’d do $e$ to the power of that?” “Indeed! $\ln(4)$ is ## Ask Uncle Colin: A Calculator Error Dear Uncle Colin, I have to work out $\cot\left( \frac{3}{2}\pi \right)$. Wolfram Alpha says it’s 0, but when I work out $\frac{1}{\tan\left(\frac{3}{2}\pi\right)}$, my calculator shows an error. What’s going on? - Troublesome Angle, No? Hi, TAN, and thanks for your message! The cotangent function is slightly unusual in that it ## Continued fractions and the square root of 3. I’m a Big Fan of both @standupmaths and @sparksmaths, two mathematicians who fight the good fight. I was interested to see Ben tackling the square root of 3 using the ‘long division’ method. It’s a method I’ve tried hard to love. It’s a method I just can’t bring myself to ## Ask Uncle Colin: A Seemingly Undefined Integral Dear Uncle Colin, I need to evaluate $\int_0^{\piby2} \frac{1}{1+\sin(x)}\dx$ but I end up with $\infty - \infty$ and that’s no good! How should I be doing it? Big Integral, Not Exactly Trivial Hi, BINET, and thanks for your message! This is a fun problem! I can think of several possible ## Dictionary of Mathematical Eponymy: Wahba’s Problem While my thesis has the word ‘topology’ in its title, at heart I’m a vectors-in-3D person. Give me matrices, not manifolds! So today’s entry in the Dictionary of Mathematical Eponymy is one that brings me joy. What is Wahba’s Problem? The mathematical statement of Wahba’s Problem is as follows: Given
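As a small companion to the "Continued fractions and the square root of 3" post listed above, here is a sketch that computes the continued-fraction expansion of $\sqrt{3}$ and its convergents with the standard recurrence; nothing beyond the Python standard library is assumed.

```python
from fractions import Fraction
from math import isqrt

def sqrt_cf_terms(n, count):
    """First `count` continued-fraction terms of sqrt(n) for non-square n,
    using the classical (m, d, a) recurrence."""
    a0 = isqrt(n)
    terms = [a0]
    m, d = 0, 1
    for _ in range(count - 1):
        m = terms[-1] * d - m
        d = (n - m * m) // d
        terms.append((a0 + m) // d)
    return terms

def convergents(terms):
    """Yield the convergents of a continued fraction [a0; a1, a2, ...]."""
    h_prev, h = 1, terms[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for a in terms[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

terms = sqrt_cf_terms(3, 8)
print(terms)                        # [1, 1, 2, 1, 2, 1, 2, 1]
for c in convergents(terms):
    print(c, float(c))              # 1, 2, 5/3, 7/4, 19/11, 26/15, ... -> 1.732...
```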
2020-12-05 15:12:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.634204626083374, "perplexity": 1559.1680601733422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747887.95/warc/CC-MAIN-20201205135106-20201205165106-00407.warc.gz"}
https://search.r-project.org/CRAN/refmans/DeclareDesign/html/simulate_design.html
simulate_design {DeclareDesign} R Documentation

## Simulate a design

### Description

Runs many simulations of a design and returns a simulations data.frame.

### Usage

simulate_design(..., sims = 500)

simulate_designs(..., sims = 500)

### Arguments

... A design created using the + operator, or a set of designs. You can also provide a single list of designs, for example one created by expand_design.

sims The number of simulations, defaulting to 500. If sims is a vector of the form c(10, 1, 2, 1) then different steps of a design will be simulated different numbers of times.

### Details

Different steps of a design may each be simulated a different number of times, as specified by sims. In this case simulations are grouped into "fans". The nested structure of simulations is recorded in the dataset using a set of variables named "step_x_draw". For example, if sims = c(2, 1, 1, 3) is passed to simulate_design, then there will be two distinct draws of step 1, indicated in variable "step_1_draw" (with values 1 and 2), and there will be three draws for step 4 within each of the step 1 draws, recorded in "step_4_draw" (with values 1 to 6).

### Examples

my_model <- declare_model(
  N = 500,
  U = rnorm(N),
  Y_Z_0 = U,
  Y_Z_1 = U + rnorm(N, mean = 2, sd = 2)
)

my_assignment <- declare_assignment(Z = complete_ra(N))

my_inquiry <- declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0))

my_estimator <- declare_estimator(Y ~ Z, inquiry = my_inquiry)

my_reveal <- declare_measurement(Y = reveal_outcomes(Y ~ Z))

design <- my_model + my_inquiry + my_assignment + my_reveal + my_estimator

## Not run:
simulations <- simulate_design(design, sims = 2)
diagnosis <- diagnose_design(simulations_df = simulations)
## End(Not run)

## Not run:
# A fixed population with simulations over assignment only
head(simulate_design(design, sims = c(1, 1, 1, 100, 1)))
## End(Not run)

[Package DeclareDesign version 1.0.2 Index]
2023-03-26 12:18:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36510488390922546, "perplexity": 6379.560420396036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00536.warc.gz"}
https://aviation.stackexchange.com/questions/52549/why-does-the-diffuser-section-generate-thrust-in-a-jet-engine
# Why does the diffuser section generate thrust in a jet engine? I am studying the thrust distribution of jet engines. But I'm now a bit confused. In Rolls-Royce's "The Jet Engine" book, http://aeromodelbasic.blogspot.com/2012/05/thrust-distribution-distribution-of.html At the start of the cycle, air is induced into the engine and is compressed. The rearward accelera- tions through the compressor stages and the resultant pressure rise produces a large reactive force in a forward direction. On the next stage of its journey the air passes through the diffuser where it exerts a small reactive force, also in a forward direction I understand the first part of the paragraph that the compressor is providing forward thrust, as it is pushing (so compressing) air rearward. But why is the diffuser also providing forward thrust? And also why the nozzle is providing rearward thrust? The similar conculsion is also shown here: http://www.pulse-jets.com/phpbb3/viewtopic.php?t=2183, that the diffuser is providing positive thrust by calculating the pressure force. From my understanding of basic fluid mechanics, shouldn't a nozzle be providing forward thrust, like the sprinkler in the garden or a fire hose? And shouldn't a diffuser be providing rearward thrust, as the outlet speed is slower than the inlet speed, and so m dot X (v - u) is negative? What's wrong with my understanding? • Is the text you are quoting the text from RR's book, or text written by the person who posted the picture on the blog site you link to? Your understanding seems perfectly fine to me, I would like to verify the context of the diagram. – Penguin Jun 12 '18 at 10:52 • It's from RR, See P.218 Paragraph 3 in docs.google.com/file/d/0Bx0MqOfev7dnS01DMmtqSmhoVmc/edit – Jono Jun 12 '18 at 13:04 • because the inner surface of the nozzle faces front so the gas pressure pushes it back, very simple. – user3528438 Jun 12 '18 at 22:33 But why is the diffuser also providing forward thrust? The diffuser slows down the flow to ease fuel-air mixing and combustion a bit later. If you only focus on entry and exit speeds, there would be no thrust. However, if you look at the pressures on the diffuser walls, a different result emerges. Slower flow means higher static pressure, and the total pressure right at the compressor exit is already the highest within the whole engine. The pressure on those widening diffusor walls does indeed push the engine forward because of the forward slant of the pressure vector (which acts perpendicularly to the diffusor walls). Your linked pulse jet page explains this quite well. Of course, no thrust would result if the flow were not heated and thus accelerated further downstream. So the diffuser all by itself will not create thrust; this happens only when it is placed inside a working jet engine. And also why the nozzle is providing rearward thrust? This is not always the case, but here the nozzle has a converging shape which helps to accelerate the subsonic flow and converts the remaining pressure to speed. The walls now have a backward-facing slant, so the pressure vector on them will contribute a backward-facing component. In addition, the high flow speed along the large nozzle walls causes some friction, which needs to be considered, too. For a comparison, look at the cone behind the turbine wheels. Its thrust contribution only stems from the forward-facing pressure acting on it. 
• But can you explain why don't i need to consider the pressure difference when i am calculating the thrust of a fire hose or garden sprinkler, but i need to consider the pressure difference when i am calculating the thrust contribution of a nozzle or diffuser? What makes them so different? – Jono Jun 13 '18 at 11:18 • @Jono: What makes you think that you should not consider the pressures in garden hoses? What makes them different??? – Peter Kämpf Jun 13 '18 at 16:48 • It seems like the engine nozzle and garden hose nozzle have a lot of similarities to me. Garden hose: 1. High pressure created by water pump/tap at inlet; 2. Low pressure at outlet (atmospheric?) 3.Water accelerates from inlet to outlet.. and Engine: 1. High pressure created by compressor/ombustor/turbine at inlet, 2. Low pressure at outlet (atmospheric?) 3. Air accelerates from inlet to outlet... But why is the nozzle in engine creating thrust in the same direction as the fluid flow (to the right as the picture above), while the garden hose is creating thrust in the opposite direction? – Jono Jun 13 '18 at 21:52 And shouldn't a diffuser be providing rearward thrust, as the outlet speed is slower than the inlet speed, and so $\dot{m}\times(v - u)$ is negative? Conservation laws in physics are an excellent tool. They let you calculate a lot without looking at the minutiae details of the actual process. And this is great example: you can trivially calculate thrust of the whole engine from the change of momentum of the working fluid. But that won't tell you how the force is actually applied, only the sum of the forces on the entire engine. The thrust break-down is the minutiae details of the process. And at that level, the only way to create a force is by pressure of the fluid, and since pressure always acts perpendicular to the surface, only the aft-facing surfaces can have forward thrust act on them, while any forward-facing surfaces have negative thrust act on them. And it's no different in the sprinkler. The pressure inside is acting on all the walls, but there is a bit missing in the nozzle where the water flows out, so the force on the opposite wall prevails. • ,I also think that diffuser produce rearward thrust,because walls from diffuser are facing backwards and pressure allways act prependicular to the surface.. – Aeronautic Freek Jul 11 '20 at 15:16
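To make the pressure/momentum bookkeeping in the two answers above concrete, here is a small one-dimensional sketch of the usual per-component breakdown: the forward force the gas exerts on a component equals the stream thrust (p·A + ṁ·v, with gauge pressure) at its exit minus that at its inlet. The station values below are invented purely for illustration and are not taken from the Rolls-Royce book.

```python
def forward_force(mdot, p1_gauge, A1, v1, p2_gauge, A2, v2):
    """Net forward (thrust-direction) force of the gas on a component, from the
    momentum theorem applied between inlet (1) and exit (2) with gauge pressures:
    forward force = (p2*A2 + mdot*v2) - (p1*A1 + mdot*v1)."""
    return (p2_gauge * A2 + mdot * v2) - (p1_gauge * A1 + mdot * v1)

# Illustrative diffuser: pressure rises and area grows while the flow slows,
# so the p*A term wins and the contribution comes out forward (positive).
print(forward_force(mdot=100.0,
                    p1_gauge=500e3, A1=0.25, v1=150.0,
                    p2_gauge=550e3, A2=0.35, v2=90.0))      # about +6.2e4 N

# Illustrative converging propelling nozzle: p*A collapses faster than mdot*v
# grows, so the contribution comes out rearward (negative), as in the book's figure.
print(forward_force(mdot=100.0,
                    p1_gauge=60e3, A1=0.50, v1=300.0,
                    p2_gauge=0.0,  A2=0.20, v2=550.0))      # about -5e3 N
```

Summed over all components (intake included), these contributions recover the overall engine thrust; the momentum-change formula in the second answer is the same sum carried out over the whole engine in one step.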
2021-04-20 23:44:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44327306747436523, "perplexity": 1236.9635479172905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00465.warc.gz"}
https://murray.cds.caltech.edu/index.php?title=CDS_110b:_Robust_Stability&diff=5858&oldid=5823
# Difference between revisions of "CDS 110b: Robust Stability" WARNING: This page is for a previous year. See the current course homepage to find the most recent page available. This set of lectures describes how to model uncertainty in $$H_\infty$$ control and provides conditions for checking robust stability in this framework. ## Course Materials • AM06, Sections 9.5, 12.1 and 12.2
2021-12-09 14:07:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47405585646629333, "perplexity": 3722.195214072751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00105.warc.gz"}
https://www.learncram.com/maths/maths-formulas/
# Maths Formulas for Class 6 to Class 12 PDF | All Basic Maths Formulas Maths Formulas – Most of you might feel Maths as your biggest nightmare. But, it’s not and it can be quite interesting once you get to know the applications of it in real life. It’s all about connecting the dots and knowing which calculation to use. Maths Formulas are difficult to memorize and Learn Cram Experts have curated some of the List of Basic Mathematical Formulas that you may find useful in your way of preparation. Students of Class 6 to 12 can utilise the Maths Formulas PDF and cover the entire syllabus. Revise these formulae thoroughly and identify your strengths and weaknesses in the subject and its formulae. Resolve your doubts while solving the problems by making use of these General Maths Formulas for Classes 6, 7, 8, 9, 10, 11, 12. Looking for some smart ways to remember the Mathematical Formulas? You can make use of the handy learning aids and develop an in-depth knowledge on the subject. Check out the Class 6 to 12 Maths Formulas available Chapter Wise as per the Latest CBSE Syllabus and score more marks in the exam. These Maths Formulas act as a quick reference for Class 6 to Class 12 Students to solve problems easily. Students can get all basic mathematics formulas absolutely free from this page and can methodically revise and memorize them. Comprehensive list of Maths Formulas for Classes 12, 11, 10, 9 8, 7, 6 to solve problems efficiently. Download Mathematics Formula PDF to complete the syllabus and excel in your exams. ### FAQs on Maths Formulas 1. Where can I get all Mathematical Formulas? You can get all Mathematical Formulas arranged in an organised manner as per the Chapters for various classes from here. 2. What are the types of mathematical formulas? There are many types in maths as far as formulas are concerned. Have a glance at some of the types of Mathematical Formulas. • Linear equation • Cubic equation • First order Differential equations • Integral equations • Trigonometric equations, Matrix equations, 2nd order differentials, Fourier transforms, Laplace transforms, Hamiltonians and much more. 3. Where can I find Maths Formulas for Class 6 to Class 12 in PDF Format? You can find Maths Formulas for Classes 12, 11, 10, 9, 8, 7, 6 in PDF Format for various concepts in a structured way by referring to our page. Make the most out of these and score better grades in the exam.
2020-08-09 04:35:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8102295398712158, "perplexity": 950.7182227035447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738425.43/warc/CC-MAIN-20200809043422-20200809073422-00192.warc.gz"}
https://pennylane.ai/blog/2023/03/pennylane-v029-released/
# PennyLane v0.29 released We hope you’ve recovered from the excitement of QHack 2023, because here’s something more for you — the release of PennyLane 0.29! Check out all of the awesome new functionality below. ## Feel the pulse 🔊 You might be familiar with constructing quantum circuits using gates like single-qubit rotations and CNOTs as their building blocks. But there is another way to do this! Quantum hardware often implements the gates we are familiar with using a sequence of carefully calibrated laser pulses. This release of PennyLane allows you to control those pulses directly, unlocking a new toolset to construct, simulate, and differentiate pulse-based quantum circuits 🔊. ### Pulses and time-dependent Hamiltonians A pulse can be thought of as a time-dependent function that sets a coefficient in a Hamiltonian: >>> import jax >>> from jax import numpy as jnp >>> pulse = lambda p, t: p[0] * jnp.sin(p[1] * t + p[2]) >>> H = qml.PauliZ(0) + pulse * qml.PauliX(0) >>> p = jnp.array([[0.9, 1.1, 0.1]]) >>> t = 1.0 >>> H(p, t) (1*(PauliZ(wires=[0]))) + (0.8388351798057556*(PauliX(wires=[0]))) This time-dependent Hamiltonian, H, determines the evolution of a single-qubit system with time. ### Creating pulse-based circuits We can use H to create a pulse-based circuit using qml.evolve: dev = qml.device("default.qubit.jax", wires=1) @jax.jit @qml.qnode(dev, interface="jax") def circuit(p, t): qml.evolve(H)(p, t=t) return qml.expval(qml.PauliZ(0)) Pulse-based circuits can be executed using the jax interface: >>> t = jnp.array([0, 5]) >>> circuit(p, t) Array(0.86624, dtype=float32) Moreover, we can also differentiate the circuit with respect to its parameters, p: >>> jax.grad(circuit)(p, t) Array([[-0.7991514 , 1.7720929 , 0.06342924]], dtype=float32) Stay tuned! We’ll be releasing a demo in a few weeks’ time to show what you can do with pulse-level programming! ## Here comes the SU(N) 🌞 Tired of working out which gates to put where in your circuit? Would you like something a bit more flexible? With the new qml.SpecialUnitary gate — which realizes an SU(N) transformation — you can apply an arbitrary unitary to a collection of qubits. What’s more, you can optimize the parameters of qml.SpecialUnitary in a hardware-compatible way, allowing you to create expressive circuits without worrying about your choice of gates! ### Creating an $$n$$-qubit unitary We can generate $$n$$-qubit unitaries from the SU(N) group, where $$N=2^n$$. To do this, we need to choose a vector $$\vec{\theta}$$ of length $$d = 4^n - 1$$. This vector sets the angles corresponding to the $$n$$-qubit Pauli words: >>> qml.ops.qubit.special_unitary.pauli_basis_strings(1) # 4**1-1 = 3 Pauli words ['X', 'Y', 'Z'] >>> qml.ops.qubit.special_unitary.pauli_basis_strings(2) # 4**2-1 = 15 Pauli words ['IX', 'IY', 'IZ', 'XI', 'XX', 'XY', 'XZ', 'YI', 'YX', 'YY', 'YZ', 'ZI', 'ZX', 'ZY', 'ZZ'] For example, on a single qubit, we may define >>> from jax import numpy as jnp >>> import jax >>> theta = jnp.array([0.2, 0.1, -0.5]) >>> U = qml.SpecialUnitary(theta, 0) >>> U.matrix() Array([[ 0.8537127 -0.47537234j, 0.09507447+0.19014893j], [-0.09507447+0.19014895j, 0.8537127 +0.47537234j]], dtype=complex64) The unitary corresponding to $$\vec{\theta}$$ is given by $$U(\vec{\theta}) = \exp \left(\sum_{m=1}^{d} \theta_{m} P_{m} \right)$$, where $$P_{m}$$ are Pauli words. 
### Executing and differentiating SU(N) The qml.SpecialUnitary operation can be included inside a PennyLane circuit: dev = qml.device("default.qubit", wires=1) @qml.qnode(dev, interface="jax", diff_method="parameter-shift") def circuit(theta): qml.SpecialUnitary(theta, wires=0) return qml.expval(qml.PauliZ(0)) This circuit can be executed and differentiated: >>> circuit(theta) Array(0.9096085, dtype=float32) Array([-0.710832 , -0.355416 , -0.03075087], dtype=float32) Note that we are using the hardware-compatible "parameter-shift" method for gradient calculations! Check out the qml.SpecialUnitary documentation to understand how this is working! ## Always differentiable 📈 You might have guessed that, here at PennyLane, we love calculating derivatives! Well, this release is no exception; we’ve added lots of new tools to help make your autodiff life easier 😎. We’re particularly excited about the addition of two new gradient methods: the Hadamard test and SPSA. ### The Hadamard test The Hadamard test is a hardware-compatible method that allows you to calculate gradients with fewer circuit executions, at the cost of an additional auxiliary qubit. >>> with qml.tape.QuantumTape() as tape: ... qml.RX(0.1, wires=0) ... qml.RY(0.2, wires=0) ... qml.RX(0.3, wires=0) ... qml.expval(qml.PauliZ(0)) >>> print(tape.draw(decimals=2)) 0: ──RX(0.10)──RY(0.20)──RX(0.30)─┤ <Z> >>> qml.enable_return() 0: ──RX(0.10)─╭X──RY(0.20)──RX(0.30)─┤ ╭<Z@Y> 1: ──H────────╰●──H──────────────────┤ ╰<Z@Y> If you’re not familiar with our low-level qml.tape.QuantumTape circuit representation, don’t worry, the example above is just for illustration. If you want to use the Hadamard test method to calculate gradients, simply request hadamard when selecting the diff_method in your QNode: >>> import torch >>> qml.enable_return() >>> dev = qml.device("default.qubit", wires=2) >>> @qml.qnode(dev, interface="torch", diff_method="hadamard") >>> def circuit(params): ... qml.RX(params[0], wires=0) ... qml.RY(params[1], wires=0) ... qml.RX(params[2], wires=0) ... return qml.expval(qml.PauliZ(0)) >>> params = torch.tensor([0.1, 0.2, 0.3], requires_grad=True) >>> res = circuit(params) >>> res.backward() tensor([-0.3875, -0.1888, -0.3836]) ### SPSA This release also allows you to request the SPSA gradient method directly within a QNode. Here is an example using the torch interface: >>> import torch >>> qml.enable_return() >>> dev = qml.device("default.qubit", wires=2) >>> @qml.qnode(dev, interface="torch", diff_method="spsa", h=0.05, num_directions=20) >>> def circuit(params): ... qml.RX(params[0], wires=0) ... qml.RY(params[1], wires=0) ... qml.RX(params[2], wires=0) ... return qml.expval(qml.PauliZ(0)) >>> params = torch.tensor([0.1, 0.2, 0.3], requires_grad=True) >>> res = circuit(params) >>> res.backward() tensor([-0.1772, -0.1105, -0.2467]) The returned value is an estimator for the true gradient. Check out the documentation for more details! ## Smartly decompose Hamiltonian evolution 💯 For those of you who love to time-evolve Hamiltonians, things just got even better! Now you can break down that evolution into more elementary operations. 
If the time-evolved Hamiltonian is equivalent to another PennyLane operation, then that operation is returned as the decomposition: >>> exp_op = qml.evolve(qml.PauliX(0) @ qml.PauliX(1)) >>> exp_op.decomposition() [IsingXX((2+0j), wires=[0, 1])] If the Hamiltonian is a Pauli word, then the decomposition is provided as a qml.PauliRot operation: >>> qml.evolve(qml.PauliZ(0) @ qml.PauliX(1)).decomposition() [PauliRot((2+0j), ZX, wires=[0, 1])] Otherwise, the Hamiltonian is a linear combination of operators and the Suzuki–Trotter decomposition is used: >>> sum = qml.sum(qml.PauliX(0), qml.PauliY(0), qml.PauliZ(0)) >>> qml.evolve(sum, num_steps=2).decomposition() [RX((1+0j), wires=[0]), RY((1+0j), wires=[0]), RZ((1+0j), wires=[0]), RX((1+0j), wires=[0]), RY((1+0j), wires=[0]), RZ((1+0j), wires=[0])] This decomposition is an approximation. Increasing num_steps will result in a closer approximation to your target evolution, at a cost of increased circuit depth. ## Improvements 🛠 In addition to the new features listed above, the release contains a wide array of improvements and optimizations: • The default interface is now auto. There is no need to specify the interface anymore; it is automatically determined by checking your QNode parameters: import jax import jax.numpy as jnp qml.enable_return() a = jnp.array(0.1) b = jnp.array(0.2) dev = qml.device("default.qubit", wires=2) @qml.qnode(dev) def circuit(a, b): qml.RY(a, wires=0) qml.RX(b, wires=1) qml.CNOT(wires=[0, 1]) return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliY(1)) >>> circuit(a, b) (Array(0.9950042, dtype=float32), Array(-0.19767681, dtype=float32)) >>> jac = jax.jacobian(circuit)(a, b) >>> jac (Array(-0.09983341, dtype=float32, weak_type=True), Array(0.01983384, dtype=float32, weak_type=True)) • The function called qml.dot has been updated to compute the dot product between a vector and a list of operators: >>> coeffs = np.array([1.1, 2.2]) >>> ops = [qml.PauliX(0), qml.PauliY(0)] >>> qml.dot(coeffs, ops) (1.1*(PauliX(wires=[0]))) + (2.2*(PauliY(wires=[0]))) >>> qml.dot(coeffs, ops, pauli=True) 1.1 * X(0) + 2.2 * Y(0) • The default.mixed device has received a performance improvement for multi-qubit operations. This also allows you to apply channels that act on more than seven qubits, which was not possible before. • qml.draw and qml.draw_mpl have been updated to draw any quantum function, which allows for visualizing only part of a complete circuit/QNode. • The qml.math module now also contains a submodule for fast Fourier transforms, qml.math.fft. The submodule in particular provides differentiable versions of the following functions, available in all common interfaces for PennyLane: fft, ifft, fft2, and ifft2. Note that the output of the derivatives of these functions may differ when used with complex-valued inputs, due to different conventions on complex-valued derivatives. • Most quantum channels are now fully differentiable on all interfaces. • Writing Hamiltonians to a file using the qml.data module has been improved by employing a condensed writing format. ## Deprecations and breaking changes 💔 As new things are added, outdated features are removed. To keep track of things in the deprecation pipeline, check out the deprecations page. Here’s a summary of what has changed in this release: • When a QNode interface is not specified, it is determined during the QNode call instead of the initialization. This means that the gradient_fn and gradient_kwargs are only defined on the QNode at the beginning of the call. 
Furthermore, without specifying the interface it is not possible to guarantee that the device will not be changed during the call if you are using backprop (for example, the device may change from default.qubit to default.qubit.jax). If you would like to interact with the device after calling a QNode, you should specify the interface you want to use. • Operation.inv() and the Operation.inverse setter have been removed. Please use qml.adjoint or qml.pow instead. • op.simplify() for operators which are linear combinations of Pauli words will use a built-in Pauli representation to more efficiently compute the simplification of the operator. • The collections module has been deprecated. These highlights are just scratching the surface — check out the full release notes for more details. ## Contributors ✍️ As always, this release would not have been possible without the hard work of our development team and contributors: Gian-Luca Anselmetti, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Ikko Ashimine, Utkarsh Azad, Miriam Beddig, Cristian Boghiu, Thomas Bromley, Astral Cai, Isaac De Vlugt, Olivia Di Matteo, Amintor Dusko, Lillian M. A. Frederiksen, Soran Jahangiri, Korbinian Kottmann, Christina Lee, Vincent Michaud-Rioux, Albert Mitjans Coma, Romain Moyard, Lee J. O’Riordan, Mudit Pandey, Chae-Yeun Park, Borja Requena, Shuli Shu, Matthew Silverman, Jay Soni, Antal Száva, Frederik Wilde, David Wierichs, Moritz Willmann. # Author Biography #### Isaac De Vlugt Isaac De Vlugt is a quantum computing educator at Xanadu. His work involves creating accessible quantum computing content for the community, as well as spamming GIFs in our Slack channels. #### Thomas Bromley Thomas is a quantum scientist working at Xanadu. His work is focused on developing software to execute quantum algorithms on simulators and hardware.
2023-03-23 04:08:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3625986576080322, "perplexity": 7047.116074420247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00019.warc.gz"}
https://mathhelpboards.com/threads/tensor-products-dummit-and-foote-section-10-4-pages-359-362.9023/
# Tensor Products - Dummit and Foote - Section 10-4, pages 359 - 362

#### Peter

In Dummit and Foote, Section 10.4: Tensor Products of Modules, on pages 359 - 364 (see attachment) the authors deal with a process of 'extension of scalars' of a module, whereby we construct a left $$\displaystyle S$$-module $$\displaystyle S \otimes_R N$$ from an $$\displaystyle R$$-module $$\displaystyle N$$. In this construction the unital ring $$\displaystyle R$$ is a subring of the unital ring $$\displaystyle S$$. (For a detailed description of this construction see the attachment pages 359 - 361 or see D&F Section 10.4.)

To construct $$\displaystyle S \otimes_R N$$, we would like an abelian group together with a map from $$\displaystyle S \times N$$ into it, where the image of the pair (s,n) is denoted by sn. D&F then argue that it is "natural" (but why is it natural???) to consider the free $$\displaystyle \mathbb{Z}$$-module (the free abelian group) on the set $$\displaystyle S \times N$$ - that is, the collection of all finite sums of elements of the form $$\displaystyle (s_i, n_i)$$ where $$\displaystyle s_i \in S$$ and $$\displaystyle n_i \in N$$.

To satisfy the relations necessary to attain an S-module structure, D&F argue that we must take the quotient of this abelian group by the subgroup H generated by all elements of the form:

$$\displaystyle (s_1 + s_2, n) - (s_1, n) - (s_2, n)$$

$$\displaystyle (s, n_1 + n_2) - (s, n_1) - (s, n_2)$$

$$\displaystyle (sr,n) - (s, rn)$$

for $$\displaystyle s, s_1, s_2 \in S, n, n_1, n_2 \in N$$ and $$\displaystyle r \in R$$, where rn in the last element refers to the R-module structure already defined on N.

The resulting quotient group is denoted by $$\displaystyle S \otimes_R N$$ and is called the tensor product of S and N over R. If $$\displaystyle s \otimes n$$ denotes the coset containing (s,n), then by definition of the quotient we have forced the relations:

$$\displaystyle (s_1 + s_2) \otimes n = s_1 \otimes n + s_2 \otimes n$$

$$\displaystyle s \otimes (n_1 + n_2) = s \otimes n_1 + s \otimes n_2$$

$$\displaystyle sr \otimes n = s \otimes rn$$

The elements of $$\displaystyle S \otimes_R N$$ are called tensors and can be written (non-uniquely in general) as finite sums of "simple tensors" of the form $$\displaystyle s \otimes n$$.

----------------------------------------------------------------------------

Issues/Problems

Issue/Problem (1)

I am having real trouble understanding/visualizing the nature and character of the cosets of the quotient group defined above - I would really like to get a tangible and concrete view of the nature of the cosets. Can someone help in this matter, either by a general explanation and/or a concrete example?

(I can see in the case of a quotient group like $$\displaystyle \mathbb{Z}/5\mathbb{Z}$$ that the cosets are clearly $$\displaystyle 0 + 5 \mathbb{Z}, 1 + 5 \mathbb{Z}, 2 + 5 \mathbb{Z}, 3 + 5 \mathbb{Z}, 4 + 5 \mathbb{Z}$$, and that x and y are in the same coset if x - y is divisible by 5 - but I cannot get the same feeling for and understanding of the cosets of $$\displaystyle s \otimes n$$.)

I really hope someone can help make the nature of the cosets a little clearer. Certainly no texts or online notes attempt to make this clearer for the student/reader ... nor do they give helpful examples ...
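(A concrete example that may help make the collapsing of cosets visible: take $$\displaystyle R = \mathbb{Z}$$, $$\displaystyle S = \mathbb{Q}$$ and $$\displaystyle N = \mathbb{Z}/2\mathbb{Z}$$. For any simple tensor,

$$\displaystyle s \otimes n = \left( \tfrac{s}{2} \cdot 2 \right) \otimes n = \tfrac{s}{2} \otimes 2n = \tfrac{s}{2} \otimes 0 = 0,$$

using the third relation with $$\displaystyle r = 2 \in R$$ and then the second relation to see that $$\displaystyle \tfrac{s}{2} \otimes 0 = 0$$. So every simple tensor lies in the coset of 0, hence $$\displaystyle \mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} = 0$$. In general, two elements of the free abelian group lie in the same coset exactly when their difference can be written as a finite $$\displaystyle \mathbb{Z}$$-linear combination of the generators of H.)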
Issue/Problem 2

D&F state that: "by definition of the quotient we have forced the relations:

$$\displaystyle (s_1 + s_2) \otimes n = s_1 \otimes n + s_2 \otimes n$$

$$\displaystyle s \otimes (n_1 + n_2) = s \otimes n_1 + s \otimes n_2$$

$$\displaystyle sr \otimes n = s \otimes rn$$."

My question is: how, exactly, does taking the quotient of the free abelian group on $$\displaystyle S \times N$$ by the subgroup H generated by all elements of the form

$$\displaystyle (s_1 + s_2, n) - (s_1, n) - (s_2, n)$$

$$\displaystyle (s, n_1 + n_2) - (s, n_1) - (s, n_2)$$

$$\displaystyle (sr,n) - (s, rn)$$

guarantee or force the relations required?

I would be really grateful if someone can help. Again, as with Issue/Problem 1, no text or online notes have given a good explanation of this matter.

Peter
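A short way to see the mechanism asked about in Issue/Problem 2: write F for the free abelian group on $$\displaystyle S \times N$$ and $$\displaystyle \pi : F \to F/H$$ for the quotient map, so that $$\displaystyle s \otimes n = \pi\big((s,n)\big)$$. Two elements of F have the same image under $$\displaystyle \pi$$ precisely when their difference lies in H. Since the generator

$$\displaystyle (s_1 + s_2, n) - (s_1, n) - (s_2, n)$$

lies in H, it is sent to 0 by $$\displaystyle \pi$$, and because $$\displaystyle \pi$$ is a group homomorphism this reads

$$\displaystyle (s_1 + s_2) \otimes n - s_1 \otimes n - s_2 \otimes n = 0,$$

which is exactly the first forced relation. The other two relations follow in the same way from the remaining two families of generators of H.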
http://mathoverflow.net/questions/33236/a-necessary-and-sufficient-condition-for-a-curve-to-have-an-a-k-singularity/33262
## A necessary and sufficient condition for a curve to have an $A_k$ singularity. Hi Does any one know of a necessary and sufficient condition for a curve to have a singularity of type A_k. More precisely, a curve f=0 has a singularity of type A_k at a point, if there exist local coordinates (x,y), where the function can be written as f(x,y)=x^2+y^{k+1}=0. If you understand what I am talking about, you need not read the rest. But in case you don't follow the question, let me elaborate a bit. A necessary and sufficient condition for a curve to have an A_1 node is the following df:= (f_x, f_y) = 0 Hessian(f) = non degenerate. This is essentially the morse lemma. I know conditions for A_2, A_3, ...... until A_6 node. I was wondering if anyone knew something for a general k. - At least over $\mathbb{C}$, there is a simple answer. A plane curve $f(x,y)=0$ has a singularity of type $A_k$ in $o=(0,0)$ if and only if • $o$ is a $double$ $point$, that is all first partial derivatives of $f$ vanish in $o$ but there is at least one second partial derivative which is not zero; • the $Milnor$ $number$ $\mu(f, o):= \dim_{\mathbb{C}}\mathcal{O}_{o}/(f_x, f_y)$ is equal to $k$. Here $\mathcal{O}_{o}$ denotes the ring of convergent power series. This can be generalized in higher dimensions. In fact, one proves that a (germ of) complex hypersurface singularity $f(x_1, ...,x_n)=0$ is of type $A_k$ if and only if • the corank $\textrm{crk}(f):=n-\textrm{rank}(\textrm{Hessian}(f))(o)$ is $\leq 1$; • the Milnor number $\mu(f, o):= \dim_{\mathbb{C}}\mathcal{O}_{o}/(J_f)$ is equal to $k$. This follows from a sort of generalized Morse Lemma. See the book GREUEL - LOSSEN - SHUSTIN "Introduction to singularities and deformations" p. 150 for the proof. ADDED TO ANSWER THE COMMENT BELOW. I do not know any explicit expression for the Milnor number, I think that in general you cannot avoid to compute the $\mathbb{C}$-basis for the Milnor algebra. I agree that these computations are tedious by hand, however you can use a Computer Algebra software like SINGULAR (which is free) to do this quickly and easily. And yes, there are similar conditions for $D_k$ and $E_6$, $E_7$, $E_8$. Let me state the condition for $D_k$. Let $f \in \boldsymbol{m}^3 \subset \mathcal{O}_o$ and $k \geq 4$. Denote by $f^{(3)}$ the $3$-jet of $f$. Then the following are equivalent: • $f^{(3)}$ factors into at least two different factors and $\mu(f, o)=k$; • $f$ is of type $D_k$. Moreover, $f^{(3)}$ factors into three different factors if and only if $f$ is of type $D_4$. The conditions for $E_6$, $E_7$, $E_8$ are a bit more complicate and I will not state them here. You will find them in the book of GREUEL, LOSSEN and SHUSTIN, p. 154. - Hi Francesco Thank you for your reply. Just two further questions. 1) Is there an explicit expression for the Milnor number in terms of the partial derivatives of f? I am only talking about maps from C^2 to C. 2) Are there similar conditions for other singularities? Such as D_k singularity and E_k singularity? Again, only for maps from C^2 to C. – Ritwik Jul 26 2010 at 2:39 Hi, I edited the reply to answer your new questions. Best, f. – Francesco Polizzi Jul 26 2010 at 9:55 You can almost settle the issue by counting the number of blow-ups necessary to achieve an embedded resolution: a curve of type $A_{2k}$ or $A_{2k-1}$ requires exactly $k$ blow-ups. 
Then to distinguish between $2k$ and $2k-1$, look at the singularity that you have after $k-1$ blow-ups and decide whether it is $A_1$ or $A_2$, e.g., by following Francesco's suggestion. - Although the above answer involving the local algebra and the Milnor number is correct, it is often very hard to apply in real situations. Especially if you have a general function with arbitrary coefficients. You can perform a rather messy iterative process to check for an $A_k.$ In general the condition is far too ugly to want to, or be able to, write down. You have a curve in the plane given by $f(x,y) = 0.$ The Taylor series, with respect to $x$ and $y$ is what you're really interested in. Let's assume we are only interested in the origin. If the linear terms vanish then you know that you have a singular point (a critical point of $f$). In that case you consider the quadratic part. If the quadratic part is non-degenerate, i.e. not a perfect square, then you have Morse singularity. These are $\mathscr{A}$-equivalent to $x^2 \pm y^2,$ and give the so-called $A_1^{\pm}$-singularity types. (Notice that $\mathscr{A}$-equivalence has no relevance to the $A$ in $A_k.$ $\mathscr{A}$-equivalence is also called $\mathscr{RL}$-equivalence. You allow diffeomorphic changes of coordinate in the source and target (right and left sides of the commutativity diagram.) If $f$ has a zero linear part and a degenerate quadratic part, we complete the square on the quadratic part. Then take a change of coordinates that turns the quadratic part into $\tilde{x}^2$. The condition for exactly an $A_2$ is that $\tilde{x}$ does not divide the new, post-coordinate change, cubic term. If not then $f$ is $\mathscr{A}$-equivalent to $\tilde{x}^2 + \tilde{y}^3.$ (There is no $\pm$ because $(x,y) \mapsto (x,-y)$ changes the sign of the cubic term. If $\tilde{x}$ does divide the new cubic term, then you can complete the square on the three jet, i.e. on the quadratic and cubic terms as a whole. You take a change of coordinates so that this completed square become, say $X^2$. The condition for an $A_3^{\pm}$ is that $X$ does not divide the new, post-coordinate change, quadric terms. If not then $f$ is $\mathscr{A}$-equivalent to $X^2 \pm Y^4.$ In general you follow the same pattern. Complete the square, take a formal power series change of coordinates so that the perfect square becomes $x_{new}^2.$ Check if $x_{new}$ divides the next set of fixed order terms. If not then stop. If $x_{new}$ didn't divide the order $n$-terms then $f$ is $\mathscr{A}$-equivalent to $x_{new}^2 \pm y_{new}^n.$ You just repeat the pattern: Is the quadratic part degenerate? If so then change coordinates (by a formal power series of low, but sufficient order) so that the degenerate part becomes $x_{new}^2 + O(3).$ Check if $x_{new}$ divides the cubic terms. If not then you have $x^2 + y^3.$ If so then complete the square on the new 3-jet and change coordinates so that you have $x_{new}^2 + O(4)$. Does $x_{new}^2$ divide the quartic terms? If not then you have $x^2 \pm y^4.$ If so then complete the square on the new 4-jet and change coordinates. Just keep completing the square, checking divisibility, changing coordinates. The conditions on the coefficients soon spiral out of control. To check an $A_6$ you'll need a computer program. For a general polynomial it's impossible without a computer. (Except for very special cases!) I wrote a program in Maple to calculate the conditions up to $A_k$ once, but the output was so messy then I gave up. 
Having said that, for an explicit polynomial it's child's play.
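As a concrete check of the Milnor-number criterion in the first answer, it is a standard computation to verify it on the normal form itself: for $f(x,y) = x^2 + y^{k+1}$ we have $f_x = 2x$ and $f_y = (k+1)y^k$, so the Jacobian ideal is $(x, y^k)$ and

$$\mu(f,o) = \dim_{\mathbb{C}} \mathcal{O}_o/(x, y^k) = \dim_{\mathbb{C}} \langle 1, y, y^2, \dots, y^{k-1} \rangle = k,$$

while for $k \geq 2$ the Hessian at the origin is $\mathrm{diag}(2,0)$, so the corank is $1$; for $k = 1$ the Hessian is $\mathrm{diag}(2,2)$ and we recover the nondegenerate Morse case. This is consistent with the characterization of an $A_k$ point given above.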
https://chem.libretexts.org/Courses/Madera_Community_College/MacArthur_Chemistry_3A_v_1.2/10%3A_Aqueous_Solutions/10.02%3A_Measures_of_Concentration
# 10.2: Measures of Concentration • Anonymous • LibreTexts ##### Learning Objectives • Understand what is meant by the term solution concentration. To define a solution precisely, we need to state its concentration: how much solute is dissolved in a certain amount of solvent. Words such as dilute or concentrated are used to describe solutions that have a little or a lot of dissolved solute, respectively, but these are relative terms with meanings that depend on various factors. Concentration is the measure of how much of a given substance is mixed with another substance. Solutions are said to be either dilute or concentrated. When we say that vinegar is $$5\%$$ acetic acid in water, we are giving the concentration. If we said the mixture was $$10\%$$ acetic acid, this would be more concentrated than the vinegar solution. A concentrated solution is one in which there is a large amount of solute in a given amount of solvent. A dilute solution is one in which there is a small amount of solute in a given amount of solvent. A dilute solution is a concentrated solution that has been, in essence, watered down. Think of the frozen juice containers you buy in the grocery store. To make juice, you have to mix the frozen juice concentrate from inside these containers with three or four times the container size full of water. Therefore, you are diluting the concentrated juice. In terms of solute and solvent, the concentrated solution has a lot of solute versus the dilute solution that would have a smaller amount of solute. The terms "concentrated" and "dilute" provide qualitative methods of describing concentration. Although qualitative observations are necessary and have their place in every part of science, including chemistry, we have seen throughout our study of science that there is a definite need for quantitative measurements in science. This is particularly true in solution chemistry. In this section, we will explore some quantitative methods of expressing solution concentration. There have been many ways that people have measured concentrations. We will be looking at a few of them in this book in the following subsections.
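As a preview of those quantitative measures, here is a minimal worked example using mass percent, one common way to read a statement like "vinegar is $$5\%$$ acetic acid in water": a solution prepared from $$5 \: \text{g}$$ of acetic acid and $$95 \: \text{g}$$ of water has

$$\text{mass percent} = \frac{\text{mass of solute}}{\text{mass of solution}} \times 100\% = \frac{5 \: \text{g}}{5 \: \text{g} + 95 \: \text{g}} \times 100\% = 5\%$$

so, on a mass basis, it matches the description of vinegar given above.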
http://technotes.smoothwater.net/2010/11/file-associations-from-command-line-ie.html
## Monday, November 22, 2010

### File Associations from the command line (i.e., in a batch file)

In all of my years of administering workstations, I haven't had to use this until today. I was surprised to learn that it has been around about as long as I have been administering workstations.

Two commands, ASSOC and FTYPE, allow you to manage file associations from a command prompt (or in a batch file). Typing ASSOC, without parameters, displays the currently defined extensions. Type ASSOC .{ext} to display the .{ext} file association. Typing ASSOC .{ext}= will delete the .{ext} association. Typing FTYPE without options displays the file types that have defined open command strings. Typing FTYPE {AppName} will display the open command string for the file type {AppName}. Typing FTYPE {AppName}= will delete the open command string.

To define a new association for .mug files which you want to open with mspaint:

assoc .mug=MugShot
ftype MugShot=%Systemroot%\System32\mspaint.exe %1

For a complete explanation, type ftype /? or assoc /? at a command prompt. Here are a couple other references too:

http://ss64.com/nt/assoc.html
http://ss64.com/nt/ftype.html
http://peeterjoot.com/2021/01/
## Example of PL/I macro January 27, 2021 Mainframe No comments , , , Up until last week PL/I macros were a bit of a mystery.  Most of the ones that I’d seen in customer code were impressively inscrutable, and if I had to look at any of them, my reaction was to throw my hands in the air and plead with the compiler backend guys for help.  Implementing one such macro has been very helpful to understanding how these work. Here is a C program that roughly models some PL/I code of interest The documentation for the ‘foo’ function says of the final return code parameter that it is 12 bytes long, and that the ‘rcvalues.h’ header file has a set of RCNNN constants and a RCCHECK macro that can be used to test for any one of those constants.  A possible C implementation of that header might look something like: /* rcvalues.h */ #define RC000 0x0000000000000000LL #define RC001 0x0000000123456789LL /* ... */ #define RCCHECK( urc, crc ) ( memcmp( &(urc), &(crc), 8 ) == 0 ) PL/I APIs do not typically use modern constructs like typedefs.  The closest that I have seen is for an API header file (copybook in the mainframe lingo) is to declare a variable (which becomes a local variable with a specific name in the including module), which the programmer can refer to using the LIKE keyword, as in the following example: I believe there is also a DEFINE keyword available in newer PL/I compilers, which provides a typedef like mechanism, but most existing code probably doesn’t use such new-fangled nonsense, when cut and paste has far superior maintenance characteristics.  For that reason, the API would be unlikely to have a typedef equivalent for the return code structure.  Instead, the PL/I equivalent of the C code above, would probably look like: (i.e. the C code is really modeled on the PL/I code of this form, and if this was a C API, the API would have a struct declaration or a typedef for the return code structure) The RCNNN constants would actually be found as named variables (not immutable constants) in the copy book, perhaps declared something like: I struggled a bit to figure out what the PL/I equivalent of my C RCCHECK macro would be.  The following inner function correctly did the required type casting and comparisons: The implementation is very long, since the entire declaration of the input parameter type has to be duplicated. If I was to put this RCCHECK implementation above into my RCVALUES.inc header file, it would only work if all the customer declaration of their return code structure objects were field by field compatible.  What I really want is for my RCCHECK function to take the address of the parameter, and pass that instead of the underlying type.  That was not at all obvious to figure out how to do, but with some help, I was eventually able to construct a PL/I macro (with helper inner-function) of the following form: It’s clearly no longer a one liner.  Some notes on this PL/I macro: • The PL/I macro body looks like a regular PL/I function, but the begin-PROCEDURE and END statements start with % (% is not part of the PROC name.) • Macro parameters and return values are explicit strings, regardless of the types of the parameters that were actually passed. • In PL/I the || symbol is used for string concatenation, so this constructs output that inserts an ADDR() call around ARG1 token and then passes the ARG2 token as is. • I don’t know if there’s a way to implement this macro in a way that doesn’t require a helper function, and still have the output work in the context of an IF statement. 
• You have to explicitly enable the macro, using %ACTIVATE.  In my case, without %ACTIVATE, the RCCHECK symbol ends up as an undeclared external entry, and was no call to the @RCCHECK_HELPER function $${}^{[1]}$$. • Observe that the PL/I macro provides a mechanism to jam whatever you want into the code, as the compiler’s macro preprocessor replaces the macro call tokens with the string that you have provided, leaving that string for the final compiler pass to interpret instead. If I compile the code using this macro version of RCCHECK, the preprocessor output looks like: I’m still pretty horrified at some of the macros that I’ve seen in customer code — they almost seem like the source equivalent of self modifying code.  You can’t figure out what is going on without also looking at all the output of the precompiler passes.  This is especially evil, since you can write PL/I preprocessor macros that generate preprocessor macros and require multiple preprocessor passes to produce the final desired output! ### Footnotes [1] note that @ is a valid PL/I character to use in a symbol name, as is # and $— so if you want your functions to look like swear words, this is a language where that is possible. Something like the following is probably valid PL/I : V = #@$1A#@(1); For added entertainment, your file names (i.e. PDS member names) can also be like ‘#@$1A#@’. Storing files with names like that on a Unix filesystem results in hours of fun, as you are then left with the task of figuring out how to properly quote file names with embedded$’s and #’s in scripts and makefiles. ## Notes. Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.] ## Guts. Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration \label{eqn:unpackingFundamentalTheorem:20} \int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S}, and \label{eqn:unpackingFundamentalTheorem:40} \int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G. The first case is trivial. Given a parameterizated curve $$x = x(u)$$, it just states \label{eqn:unpackingFundamentalTheorem:60} \int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) – F(u(0))G(u(0)), for all multivectors $$F, G$$, regardless of the signature of the underlying space. The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is \label{eqn:unpackingFundamentalTheorem:80} d^2 \Bx = d\Bx_u \wedge d \Bx_v. Geometrically, this has the area of the parallelogram spanned by $$d\Bx_u$$ and $$d\Bx_v$$, but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1. fig. 1. 2D vector space and area element. ## Problem: Expansion of 2D area bivector. Let $$\setlr{e_1, e_2}$$ be an orthonormal basis for a two dimensional space, with reciprocal frame $$\setlr{e^1, e^2}$$. Expand the area bivector $$d^2 \Bx$$ in coordinates relating the bivector to the Jacobian and the pseudoscalar. 
With parameterization $$x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha$$, we have \label{eqn:unpackingFundamentalTheorem:120} \Bx_u \wedge \Bx_v = \lr{ \PD{u}{x^\alpha} e_\alpha } \wedge \lr{ \PD{v}{x^\beta} e_\beta } = \PD{u}{x^\alpha} \PD{v}{x^\beta} e_\alpha e_\beta = \PD{(u,v)}{(x^1,x^2)} e_1 e_2, or \label{eqn:unpackingFundamentalTheorem:160} \Bx_u \wedge \Bx_v = \lr{ \PD{u}{x_\alpha} e^\alpha } \wedge \lr{ \PD{v}{x_\beta} e^\beta } = \PD{u}{x_\alpha} \PD{v}{x_\beta} e^\alpha e^\beta = \PD{(u,v)}{(x_1,x_2)} e^1 e^2. The upper and lower index pseudoscalars are related by \label{eqn:unpackingFundamentalTheorem:180} e^1 e^2 e_1 e_2 = -e^1 e^2 e_2 e_1 = -1, so with $$I = e_1 e_2$$, \label{eqn:unpackingFundamentalTheorem:200} e^1 e^2 = -I^{-1}, leaving us with \label{eqn:unpackingFundamentalTheorem:140} d^2 \Bx = \PD{(u,v)}{(x^1,x^2)} du dv\, I = -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}. We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space. We may write the fundamental theorem for a 2D space as \label{eqn:unpackingFundamentalTheorem:680} \int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G, where we have dispensed with the vector derivative and use the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using $$x^1, x^2$$ as our parameterization, we still want the curvilinear representation of the gradient $$\grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{}$$. ## Problem: Standard basis expansion of fundamental surface relation. For a parameterization $$x = x^1 e_1 + x^2 e_2$$, where $$\setlr{ e_1, e_2 }$$ is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided $$F = 1$$ case. Consider functions $$G$$ of each grade (scalar, vector, bivector.) From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form \label{eqn:unpackingFundamentalTheorem:220} \int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G. In a Euclidean space, the operator $$I \lrgrad$$, is a $$\pi/2$$ rotation of the gradient, but has a rotated like structure in all metrics: \label{eqn:unpackingFundamentalTheorem:240} = e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 } = -e_2 \partial_1 + e_1 \partial_2. • $$F = 1$$ and $$G \in \bigwedge^0$$ or $$G \in \bigwedge^2$$. For $$F = 1$$ and scalar or bivector $$G$$ we have \label{eqn:unpackingFundamentalTheorem:260} \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G, where, for $$x^1 \in [x^1(0),x^1(1)]$$ and $$x^2 \in [x^2(0),x^2(1)]$$, the RHS written explicitly is \label{eqn:unpackingFundamentalTheorem:280} \oint_{\partial S} d\Bx G = \int dx^1 e_1 \lr{ G(x^1, x^2(1)) – G(x^1, x^2(0)) } – dx^2 e_2 \lr{ G(x^1(1),x^2) – G(x^1(0), x^2) }. This is sketched in fig. 2. Since a 2D bivector $$G$$ can be written as $$G = I g$$, where $$g$$ is a scalar, we may write the pseudoscalar case as \label{eqn:unpackingFundamentalTheorem:300} \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g, after right multiplying both sides with $$I^{-1}$$. Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships. • $$F = 1, G \in \bigwedge^1$$. 
For $$F = 1$$ and vector $$G$$ the 2D fundamental theorem for surfaces can be split into scalar \label{eqn:unpackingFundamentalTheorem:320} \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G, and bivector relations \label{eqn:unpackingFundamentalTheorem:340} \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G. To expand \ref{eqn:unpackingFundamentalTheorem:320}, let \label{eqn:unpackingFundamentalTheorem:360} G = g_1 e^1 + g_2 e^2, for which \label{eqn:unpackingFundamentalTheorem:380} \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 } = \partial_2 g_1 – \partial_1 g_2, and \label{eqn:unpackingFundamentalTheorem:400} d\Bx \cdot G = \lr{ dx^1 e_1 – dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 } = dx^1 g_1 – dx^2 g_2, so \ref{eqn:unpackingFundamentalTheorem:320} expands to \label{eqn:unpackingFundamentalTheorem:500} \int_S dx^1 dx^2\, \lr{ \partial_2 g_1 – \partial_1 g_2 } = \int \evalbar{dx^1 g_1}{\Delta x^2} – \evalbar{ dx^2 g_2 }{\Delta x^1}. This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation, as we end up with a curl like operation on the LHS, despite the dot product nature of the decomposition that we used. That can also be seen directly for vector $$G$$, since \label{eqn:unpackingFundamentalTheorem:560} = = dA I \lr{ \grad \wedge G }, since the scalar selection of $$I \lr{ \grad \cdot G }$$ is zero.In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector $$G$$, letting \label{eqn:unpackingFundamentalTheorem:440} G = g^1 e_1 + g^2 e_2, so \label{eqn:unpackingFundamentalTheorem:460} \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G \lr{ g^1 e_1 + g^2 e_2 } = e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 }, and \label{eqn:unpackingFundamentalTheorem:480} d\Bx \wedge G = \lr{ dx^1 e_1 – dx^2 e_2 } \wedge \lr{ g^1 e_1 + g^2 e_2 } = e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }. So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by $$I^{-1}$$, is \label{eqn:unpackingFundamentalTheorem:520} \int_S dx^1 dx^2\, \lr{ \partial_1 g^1 + \partial_2 g^2 } = \int \evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}. As before, we’ve implicitly performed a duality transformation, and end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector $$G$$, as follows \label{eqn:unpackingFundamentalTheorem:580} = = dA I \lr{ \grad \cdot G }, since $$I \lr{ \grad \wedge G }$$ has only a scalar component. fig. 2. Line integral around rectangular boundary. ## Theorem 1.1: Green’s theorem [1]. Let $$S$$ be a Jordan region with a piecewise-smooth boundary $$C$$. If $$P, Q$$ are continuously differentiable on an open set that contains $$S$$, then \begin{equation*} \int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} } = \oint P dx + Q dy. \end{equation*} ## Problem: Relationship to Green’s theorem. If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of $$P$$ and $$Q$$. 
I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region. ### Start proof: Let’s start with \ref{eqn:unpackingFundamentalTheorem:500}, with $$g_1 = P$$ and $$g_2 = Q$$, and $$x^1 = x, x^2 = y$$, the RHS is \label{eqn:unpackingFundamentalTheorem:600} \int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} }. On the RHS we have \label{eqn:unpackingFundamentalTheorem:620} \int \evalbar{dx P}{\Delta y} – \evalbar{ dy Q }{\Delta x} = \int dx \lr{ P(x, y_1) – P(x, y_0) } – \int dy \lr{ Q(x_1, y) – Q(x_0, y) }. This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with \label{eqn:unpackingFundamentalTheorem:640} \int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} } = \oint dx P + dy Q, which is Green’s theorem over the infinitesimal square integration region. For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let $$g^2 = P$$, and $$g^1 = -Q$$. Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to \label{eqn:unpackingFundamentalTheorem:660} \evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x} = dx \lr{ P(x,y_1) – P(x, y_0)} + dy \lr{ -Q(x_1, y) + Q(x_0, y)}, which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}. ### End proof. fig. 3. Path for Green’s theorem. We may also relate multivector gradient integrals in 2D to the normal integral around the boundary of the bounding curve. That relationship is as follows. ## Theorem 1.2: 2D gradient integrals. \begin{equation*} \begin{aligned} \int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\ \int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv }, \end{aligned} \end{equation*} where $$J = \partial(x^1, x^2)/\partial(u,v)$$ is the Jacobian of the parameterization $$x = x(u,v)$$. In terms of the coordinates $$x^1, x^2$$, this reduces to \begin{equation*} \begin{aligned} \int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\ \int dx^1 dx^2 F \lgrad &= \oint G I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }. \end{aligned} \end{equation*} The vector $$I^{-1} d\Bx$$ is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal. ### Start proof: Respectively setting $$F = 1$$, and $$G = 1$$ in \ref{eqn:unpackingFundamentalTheorem:680}, we have \label{eqn:unpackingFundamentalTheorem:940} \int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G, and \label{eqn:unpackingFundamentalTheorem:960} \int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}. Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find \label{eqn:unpackingFundamentalTheorem:700} \int I^{-1} J du dv I \rgrad G = \oint d\Bx G, to find $$\int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G$$, as desireed. In terms of a parameterization $$x = x(u,v)$$, the pseudoscalar for the space is \label{eqn:unpackingFundamentalTheorem:720} I = \frac{\Bx_u \wedge \Bx_v}{J}, so \label{eqn:unpackingFundamentalTheorem:740} I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}. 
Also note that $$\lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u$$, so \label{eqn:unpackingFundamentalTheorem:760} I^{-1} = J \lr{ \Bx^v \wedge \Bx^u }, and \label{eqn:unpackingFundamentalTheorem:780} I^{-1} d\Bx = I^{-1} \cdot d\Bx = J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du – \Bx_v dv } = J \lr{ \Bx^v du + \Bx^u dv }, so the right acting gradient integral is \label{eqn:unpackingFundamentalTheorem:800} \int J du dv \grad G = \int \evalbar{J \Bx^v G}{\Delta v} du + \evalbar{J \Bx^u G dv}{\Delta u}, which we write in abbreviated form as $$\int J \lr{ \Bx^v du + \Bx^u dv} G$$. For the $$G = 1$$ case, from \ref{eqn:unpackingFundamentalTheorem:960} we find \label{eqn:unpackingFundamentalTheorem:820} \int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}. However, in a 2D space, regardless of metric, we have $$I a = – a I$$ for any vector $$a$$ (i.e. $$\grad$$ or $$d\Bx$$), so we may commute the outer pseudoscalars in \label{eqn:unpackingFundamentalTheorem:840} \int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}, so \label{eqn:unpackingFundamentalTheorem:850} -\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx. After cancelling the negative sign on both sides, we have the claimed result. To see that $$I a$$, for any vector $$a$$ is normal to $$a$$, we can compute the dot product \label{eqn:unpackingFundamentalTheorem:860} \lr{ I a } \cdot a = = = 0, since the scalar selection of a bivector is zero. Since $$I^{-1} = \pm I$$, the same argument shows that $$I^{-1} d\Bx$$ must be orthogonal to $$d\Bx$$. ### End proof. Let’s look at the geometry of the normal $$I^{-1} \Bx$$ in a couple 2D vector spaces. We use an integration volume of a unit square to simplify the boundary term expressions. • Euclidean: With a parameterization $$x(u,v) = u\Be_1 + v \Be_2$$, and Euclidean basis vectors $$(\Be_1)^2 = (\Be_2)^2 = 1$$, the fundamental theorem integrated over the rectangle $$[x_0,x_1] \times [y_0,y_1]$$ is \label{eqn:unpackingFundamentalTheorem:880} \int dx dy \grad G = \int \Be_2 \lr{ G(x,y_1) – G(x,y_0) } dx + \Be_1 \lr{ G(x_1,y) – G(x_0,y) } dy, Each of the terms in the integrand above are illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal. fig. 4. Outwards oriented normal for Euclidean space. • Spacetime: Let $$x(u,v) = u \gamma_0 + v \gamma_1$$, where $$(\gamma_0)^2 = -(\gamma_1)^2 = 1$$. With $$u = t, v = x$$, the gradient integral over a $$[t_0,t_1] \times [x_0,x_1]$$ of spacetime is \label{eqn:unpackingFundamentalTheorem:900} \begin{aligned} &= \int \gamma^1 dt \lr{ G(t, x_1) – G(t, x_0) } + \gamma^0 dx \lr{ G(t_1, x) – G(t_1, x) } \\ &= \int \gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) } + \gamma_0 dx \lr{ G(t_1, x) – G(t_1, x) } . \end{aligned} With $$t$$ plotted along the horizontal axis, and $$x$$ along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal. fig. 5. Orientation of the boundary normal for a spacetime basis. • Spacelike: Let $$x(u,v) = u \gamma_1 + v \gamma_2$$, where $$(\gamma_1)^2 = (\gamma_2)^2 = -1$$. With $$u = x, v = y$$, the gradient integral over a $$[x_0,x_1] \times [y_0,y_1]$$ of this space is \label{eqn:unpackingFundamentalTheorem:920} \begin{aligned} &= \int \gamma^2 dx \lr{ G(x, y_1) – G(x, y_0) } + \gamma^1 dy \lr{ G(x_1, y) – G(x_1, y) } \\ &= \int \gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) } + \gamma_1 dy \lr{ -G(x_1, y) + G(x_1, y) } . \end{aligned} Referring to fig. 6. 
where the elements of the integrand are illustrated, we see that the normal $$I^{-1} d\Bx$$ for the boundary of this region can be characterized as inwards. fig. 6. Inwards oriented normal for a Dirac spacelike basis. # References [1] S.L. Salas and E. Hille. Calculus: one and several variables. Wiley New York, 1990. ## Switching from screen to tmux January 11, 2021 perl and general scripting hackery 3 comments , , RHEL8 (Redhat enterprise Linux 8) has dropped support for my old friend screen.  I had found a package somewhere that still worked for one new RHEL8 installation, but didn’t record where, and the version I installed on my most recently upgraded machine is crashing horribly. ### Screen Screen was originally recommended to me by Sam Bortman when I worked at IBM, and I am forever grateful to him, as it has been a godsend over the years.  The basic idea is that you have have a single terminal session that not only saves all state, but also allows you to have multiple terminal “tabs” all controlled by that single master session.  Since then, I no longer use nohup, and no longer try to run many background jobs anymore.  Both attempting to background or nohup a job can be problematic, as there are a suprising number of tools and scripts that expect an active terminal.  As well as the multiplexing functionality, running screen ensures that if you loose your network connection, or switch from wired to wireless and back, or go home or go to work, in all cases, you can resume your work where you left it. A typical session looks something like the following: i.e. plain old terminal, but three little “tabs” at the bottom, each representing a different shell on the same machine.  In this case, I have my ovpn client running in window 0, am in my Tests/scripts/ directory in window 1, and have ‘git log –graph –decorate’ running in window 2.  The second screenshot above shows the screen menu, listing all the different active windows. screen can do window splitting vertically and horizontally too, but I’ve never used that.  My needs are pretty simple: • multiple windows, each with a different shell, • an easy way to tab between the windows, • an easy way to start a new shell. I always found the screen key bindings to be somewhat cumbersome (example: control-A ” to start the window menu), and it didn’t take me long before I’d constructed a standard .screenrc for myself with a couple handy key bindings: • F5: -1th window • F6: previous window (after switching explicitly using key bindings or the menu) • F8: +1th window • F9: new window I’ve used those key bindings for so many years that I feel lost without them! With screen crashing on my constantly, my options were to find a stable package somewhere, build it myself (which I used to do all the time at IBM when I had to work on many Unix platforms simultaneously), or bite the bullet and see what it would take to switch to tmux. ### tmux attach I chose the latter, and with the help of some tutorials, it was pretty easy to make the switch to tmux.  
Startup is pretty easy: tmux and tmux at (at is short for attach, what to use instead of screen -dr) ### tmux bindings All my trusty key bindings were easy to reimplement, requiring the following in my .tmux.conf: bind-key -T root F4 list-windows bind-key -T root F5 select-window -p bind-key -T root F6 select-window -l bind-key -T root F8 select-window -n bind-key -T root F9 new-window ### tmux command line One of the nice things about tmux is that you don’t need a whole bunch of complex key bindings that are hard to remember, as you can do it all on the command line from within any tmux slave window. This means that you can also alias your tmux commands easily! Here are a couple examples: alias weekly='tmux new-window -c ~/weeklyreports/01 -n weekly -t 1' alias master='tmux new-window -n master -c ~/master' alias tests='tmux new-window -n tests -c ~/Tests' These new-window aliases change the name displayed in the bottom bar, and open a new terminal in a set of specific directories. The UI is pretty much identical, and a session might look something like: ### tmux prefix binding The only other customization that I made to tmux was to override the default key binding, as tmux uses control-b instead of screen’s control-a. control-b is much easier to type than control-a, but messes up paging in vim, so I’ve reset it to control-n using: unbind C-b set -g prefix ^N bind n send-prefix With this, the rename window command becomes ‘control-n ,’. I can’t think of anything that uses control-n, but if that choice ends up being intrusive, I’ll probably just unbind control-b and not bother with a prefix binding, since tmux has the full functioning command line options, and I can use easier to remember (or lookup) aliases. ### Incompatibilities. It looks like the bindings that I used above are valid with RHEL8 tmux-2.7, but not with RHEL7’s tmux-1.8.  That’s a bit of a pain, and means that I’ll have to 1. find alternate newer tmux packages for RHEL7, or 2. figure out how to do the same bindings with tmux-1.8 and have different dot files, or 3. keep on using screen until I’ve managed to upgrade all my machines to RHEL8. Nothing is ever easy;) ## Raccoons vs. Cake: “Oh, come on kids, …, it’s still good!” January 10, 2021 Incoherent ramblings No comments , Life comes in cycles, and here’s an old chapter replaying itself. When I was a teenager, we spent weekdays with mom, and weekends with dad. Both of them lived a subsistence existence, but with the rent expenses that mom also had, she really struggled to pay the bills at that stage of our lives. I don’t remember the occasion, but one hot summer day, she had saved enough to buy the eggs, flour and other ingredients that she needed to make us all a cake as a special treat. After the cake was cooked, she put it on the kitchen table to cool enough that she could ice it (she probably would have used her classic cream-cheese and sugar recipe.) That rental property did not have air conditioning, and the doors were always wide open in the summer. Imagine the smell of fresh baked cake pervading the air in the house, and then a blood curdling scream. It was the scream of a horrific physical injury, perhaps that of somebody with a foreign object embedding deep in the flesh of their leg. We all rushed down to find out what happened, and it turned out that the smell of the cake was not just inviting to us, but also to a family of raccoons. 
Mom walked into the kitchen to find a mother raccoon and her little kids all sitting politely at the table in a circle around the now cool cake, helping themselves to dainty little handfuls.  What sounded like the scream of mortal injury, was the scream of a struggling mom, who’s plan to spoil her kids was being eaten in front of her eyes. From the kitchen you could enter the back room, or the hallway to the front door, and from the front door you could enter the “piano room”, which also had a door to the back room and back to the kitchen.  The scene degenerates into chaos at this point, with mom and the rest of us chasing crying and squealing raccoons in circles all around the first floor of the house along that circular path, with cake crumbs flying in all directions.  I don’t know how many laps we and the raccoons made of the house before we managed to shoo them all out the front or back door, but eventually we were left with just the crumb trail and the remains of the cake. The icing on the cake was mom’s reaction though. She went over to the cake and cut all the raccoon handprints out of it. We didn’t want to eat it, and I still remember her pleading with us, “Oh come on kids, try it. It’s still good!” Poor mom.  She even took sample bites from the cake to demonstrate it was still edible, and convince us to partake in the treat that she’d worked so hard to make for us.  I don’t think that we ate her cake, despite her pleading. Thirty years later, it’s my turn. I spent an hour making chili today, and after dinner I put the left overs out on the back porch to cool in the slow cooker pot with the lid on. I’d planned to bag and freeze part of it, and put the rest in the fridge as leftovers for the week. It was cold enough out that I didn’t think that the raccoons would be out that early, but figured it would have been fair game had I left it out all night in the “outside fridge”. Well, those little buggers were a lot more industrious than I gave them credit, and by the time I’d come back from walking the dog, they’d helped themselves to a portion, lifting the lid of the slow cooker pot, and making a big mess of as much chili as they wanted.  They ate quite a lot, but perhaps it had more spice than they cared for, as they left quite a lot: Judging by the chili covered hand prints on the back deck I think they enjoyed themselves, despite the spices. When I went upstairs to let Sofia know what had happened, she immediately connected the dots to this cake story that I’ve told so many times, and said in response: “Oh come on kids, it’s still good!”, at which point we both started laughing. The total cost of the chili itself was probably only \$17, plus one hour of time.  However, I didn’t intend to try to talk anybody into eating the remains.  It is just not worth getting raccoon carried Giardia or some stomach bug.  I was sad to see my work wasted and the leftovers ruined.  I wish Mom was still with us, so that I could share this with her.  I can imagine her visiting on this very day, where I could have scooped everything off the top, and then offered her a spoonful, saying “Oh come on Mom, it’s still good!”  I think that she would have gotten a kick out of that, even if she was always embarrassed about this story and how poor we were at the time. ### Final thoughts. There were 4 cans of beans in that pot of chili.  I have to wonder if we are going to have a family of farting raccoons in the neighbourhood for a few days? 
## Some experiments in youtube mathematics videos A couple years ago I was curious how easy it would be to use a graphics tablet as a virtual chalkboard, and produced a handful of very rough YouTube videos to get a feel for the basics of streaming and video editing (much of which I’ve now forgotten how to do). These were the videos in chronological order: • Introduction to Geometric (Clifford) Algebra.Introduction to Geometric (Clifford) algebra. Interpretation of products of unit vectors, rules for reducing products of unit vectors, and the axioms that justify those rules. • Geometric Algebra: dot, wedge, cross and vector products.Geometric (Clifford) Algebra introduction, showing the relation between the vector product dot and wedge products, and the cross product. • Solution of two line intersection using geometric algebra. • Linear system solution using the wedge product.. This video provides a standalone introduction to the wedge product, the geometry of the wedge product and some properties, and linear system solution as a sample application. In this video the wedge product is introduced independently of any geometric (Clifford) algebra, as an antisymmetric and associative operator. You’ll see that we get Cramer’s rule for free from this solution technique. • Exponential form of vector products in geometric algebra.In this video, I discussed the exponential form of the product of two vectors. I showed an example of how two unit vectors, each rotations of zcap orthonormal $$\mathbb{R}^3$$ planes, produce a “complex” exponential in the plane that spans these two vectors. • Velocity and acceleration in cylindrical coordinates using geometric algebra.I derived the cylindrical coordinate representations of the velocity and acceleration vectors, showing the radial and azimuthal components of each vector. I also showed how these are related to the dot and wedge product with the radial unit vector. • Duality transformations in geometric algebra.Duality transformations (pseudoscalar multiplication) will be demonstrated in $$\mathbb{R}^2$$ and $$\mathbb{R}^3$$. A polar parameterized vector in $$\mathbb{R}^2$$, written in complex exponential form, is multiplied by a unit pseudoscalar for the x-y plane. We see that the result is a vector normal to that vector, with the direction of the normal dependent on the order of multiplication, and the orientation of the pseudoscalar used. In $$\mathbb{R}^3$$ we see that a vector multiplied by a pseudoscalar yields the bivector that represents the plane that is normal to that vector. The sign of that bivector (or its cyclic orientation) depends on the orientation of the pseudoscalar. The order of multiplication was not mentioned in this case since the $$\mathbb{R}^3$$ pseudoscalar commutes with any grade object (assumed, not proved). An example of a vector with two components in a plane, multiplied by a pseudoscalar was also given, which allowed for a visualization of the bivector that is normal to the original vector. • Math bait and switch: Fractional integer exponents.When I was a kid, my dad asked me to explain fractional exponents, and perhaps any non-positive integer exponents, to him. He objected to the idea of multiplying something by itself $$1/2$$ times. I failed to answer the question to his satisfaction. My own son is now reviewing the rules of exponentiation, and it occurred to me (30 years later) why my explanation to Dad failed. Essentially, there’s a small bait and switch required, and my dad didn’t fall for it. 
The meaning that my dad gave to exponentiation was that $$x^n$$ equals $$x$$ times itself $$n$$ times. Using this rule, it is easy to demonstrate that $$x^a x^b = x^{a + b}$$, and this can be used to justify expressions like $$x^{1/2}$$. However, doing this really means that we’ve switched the definition of exponential, defining an exponential as any number that satisfies the relationship: $$x^a x^b = x^{a+b}$$, where $$x^1 = x$$. This slight of hand is required to give meaning to $$x^{1/2}$$ or other exponentials where the exponential argument is any non-positive integer. Of these videos I just relistened to the wedge product episode, as I had a new lone comment on it, and I couldn’t even remember what I had said. It wasn’t completely horrible, despite the low tech. I was, however, very surprised how soft and gentle my voice was. When I am talking math in person, I get very animated, but attempting to manage the tech was distracting and all the excitement that I’d normally have was obliterated. I’d love to attempt a manim based presentation of some of this material, but suspect if I do something completely scripted like that, I may not be a very good narrator. ## New version of classical mechanics notes I’ve posted a new version of my classical mechanics notes compilation.  This version is not yet live on amazon, but you shouldn’t buy a copy of this “book” anyways, as it is horribly rough (if you want a copy, grab the free PDF instead.)  [I am going to buy a copy so that I can continue to edit a paper copy of it, but nobody else should.] This version includes additional background material on Space Time Algebra (STA), i.e. the geometric algebra name for the Dirac/Clifford-algebra in 3+1 dimensions.  In particular, I’ve added material on reciprocal frames, the gradient and vector derivatives, line and surface integrals and the fundamental theorem for both.  Some of the integration theory content might make sense to move to a different book, but I’ll keep it with the rest of these STA notes for now.
http://kitchingroup.cheme.cmu.edu/blog/category/recursive/
## Make a list of org-files in all the subdirectories of the current working directory | categories: | tags: | View Comments It would be helpful to get a listing of org-files in a directory tree in the form of clickable links. This would be useful, for example, to find all files associated with a project in a directory with a particular extension, or to do some action on all files that match a pattern. To do this, we will have to recursively walk through the directories and examine their contents. Let us examine some of the commands we will need to use. One command is to get the contents of a directory. We will explore the contents of a directory called literate in my computer. ;; list contents of the directory (let ((abspath nil) (match nil) (nosort t)) (directory-files "literate" abspath match nosort)) makefile-main Makefile main.o main.f90 main literate.org hello.f90 circle.o circle.mod circle.f90 circle-area.png archive a.out .. . Note the presence of . and ... Those stand for current directory and one directory up. We should remove those from the list. We can do that like this. ;; remove . and .. (let ((abspath nil) (match nil) (nosort t)) (remove "." (remove ".." (directory-files "literate" abspath match nosort)))) makefile-main Makefile main.o main.f90 main literate.org hello.f90 circle.o circle.mod circle.f90 circle-area.png archive a.out Next, we need to know if a given entry in the directory files is a file or a directory. Emacs-lisp has a few functions for that. We use absolute filenames here since the paths are relative to the "molecules" directory. Note we could use absolute paths in directory-files, but that makes it hard to remove "." and "..". ;; print types of files in the directory (let ((root "literate") (abspath nil) (match nil) (nosort t)) (mapcar (lambda (x) (cond ((file-directory-p (expand-file-name x root)) (print (format "%s is a directory" x))) ((file-regular-p (expand-file-name x root)) (print (format "%s is a regular file" x))))) (remove "." (remove ".." (directory-files root abspath match nosort))))) "makefile-main is a regular file" "Makefile is a regular file" "main.o is a regular file" "main.f90 is a regular file" "main is a regular file" "literate.org is a regular file" "hello.f90 is a regular file" "circle.o is a regular file" "circle.mod is a regular file" "circle.f90 is a regular file" "circle-area.png is a regular file" "archive is a directory" "a.out is a regular file" Now, we are at the crux of this problem. We can differentiate between files and directories. For each directory in this directory, we need to recurse into it, and list the contents. There is some code at http://turingmachine.org/bl/2013-05-29-recursively-listing-directories-in-elisp.html which does this, but I found that I had to modify the code to not list directories, and here I want to show a simpler recursive code. (defun os-walk (root) "recursively walks through directories getting list of absolute paths of files" (let ((files '()) ; empty list to store results (current-list (directory-files root t))) ;;process current-list (while current-list (let ((fn (car current-list))) ; get next entry (cond ;; regular files ((file-regular-p fn) ;; directories ((and (file-directory-p fn) ;; ignore . and .. (not (string-equal ".." (substring fn -2))) (not (string-equal "." 
## Lather, rinse and repeat

Recursive functions are functions that call themselves repeatedly until some exit condition is met. Today we look at a classic example of a recursive function for computing a factorial. The factorial of a non-negative integer n is denoted n!, and is defined as the product of all positive integers less than or equal to n.

The key ideas in defining a recursive function are that there needs to be some logic to identify when to terminate the function, and then logic that calls the function again, but with a smaller part of the problem. Here we recursively call the function with n-1 until it gets called with n=0. 0! is defined to be 1.

def recursive_factorial(n):
    '''compute the factorial recursively. Note if you put a negative number
    in, this function will never end. We also do not check if n is an integer.'''
    if n == 0:
        return 1
    else:
        return n * recursive_factorial(n - 1)

print recursive_factorial(5)

120

from scipy.misc import factorial
print factorial(5)

120.0
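The caveat in the docstring (a negative or non-integer input never terminates) can be handled with a small guard. This variant is only an illustrative addition, not part of the original post:

def checked_factorial(n):
    "Factorial with the input checks that the docstring above warns about."
    if not isinstance(n, int) or n < 0:
        raise ValueError("factorial is only defined for non-negative integers")
    return 1 if n == 0 else n * checked_factorial(n - 1)

print(checked_factorial(5))  # -> 120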
### 0.1 Compare to a loop solution

This example can also be solved by a loop. This loop is easier to read and understand than the recursive function. Note the recursive nature of defining the variable as itself times a number.

n = 5
factorial_loop = 1
for i in range(1, n + 1):
    factorial_loop *= i

print factorial_loop

120

There are some significant differences in this example compared to Matlab:

1. the syntax of the for loop is quite different with the use of the in operator.
2. python has the nice *= operator to replace a = a * i
3. We have to loop from 1 to n + 1 because the last number in the range is not returned.

## 1 Conclusions

Recursive functions have a special niche in mathematical programming. There is often another way to accomplish the same goal. That is not always true though, and in a future post we will examine cases where recursion is the only way to solve a problem.

## Some of this, sum of that

Python provides a sum function to compute the sum of a list. However, the sum function does not work on every arrangement of numbers, and it certainly does not work on nested lists. We will solve this problem with recursion. Here is a simple example.

v = [1, 2, 3, 4, 5, 6, 7, 8, 9]  # a list
print sum(v)

v = (1, 2, 3, 4, 5, 6, 7, 8, 9)  # a tuple
print sum(v)

45
45

If you have data in a dictionary, sum works by default on the keys. You can give the sum function the values like this.

v = {'a': 1, 'b': 3, 'c': 4}
print sum(v.values())

8

## 1 Nested lists

Suppose now we have nested lists. This kind of structured data might come up if you had grouped several things together. For example, suppose we have 5 departments, with 1, 5, 15, 7 and 17 people in them, and in each department they are divided into groups:

Department 1: 1 person
Department 2: a group of 2 and a group of 3
Department 3: a group of 4 and a group of 11, with subgroups of 5 and 6 making up the group of 11
Department 4: 7 people
Department 5: one group of 8 and one group of 9

We might represent the data like the nested list below. Now, if we want to compute the total number of people, we need to add up each group. We cannot simply sum the list, because some elements are single numbers, and others are lists, or lists of lists. We need to recurse through each entry until we get down to a number, which we can add to the running sum.

v = [1, [2, 3], [4, [5, 6]], 7, [8, 9]]

def recursive_sum(X):
    'compute sum of arbitrarily nested lists'
    s = 0  # initial value of the sum
    for i in range(len(X)):
        import types  # we use this to test if we got a number
        if isinstance(X[i], (types.IntType,
                             types.LongType,
                             types.FloatType,
                             types.ComplexType)):
            # this is the terminal step
            s += X[i]
        else:
            # we did not get a number, so we recurse
            s += recursive_sum(X[i])
    return s

print recursive_sum(v)
print recursive_sum([1, 2, 3, 4, 5, 6, 7, 8, 9])  # test on non-nested list

45
45

In Post 1970 we examined recursive functions that could be replaced by loops. Here we examine a function that can only work with recursion because the nature of the nested data structure is arbitrary. There are arbitrary branches and depth in the data structure. Recursion is nice because you do not have to define that structure in advance.
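As an aside (not from the original post), the type test above is Python 2 specific; a possible Python 3 version of the same recursion uses the numbers module instead:

from numbers import Number

def recursive_sum3(x):
    'sum an arbitrarily nested list of numbers (Python 3 sketch)'
    total = 0
    for item in x:
        if isinstance(item, Number):
            # terminal step: a plain number
            total += item
        else:
            # a nested list, so we recurse
            total += recursive_sum3(item)
    return total

print(recursive_sum3([1, [2, 3], [4, [5, 6]], 7, [8, 9]]))  # -> 45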
https://dsp.stackexchange.com/questions/56885/remove-constant-noise-from-goertzel-output
# Remove constant noise from Goertzel output

I'm trying to analyze a signal recorded from an analogue phone line and do DTMF recognition using a Goertzel algorithm. However, due to a fault in the hardware, there is a constant 50 Hz power-line noise with a lot of harmonics included in the recorded signal. In the short term there is no possibility to change the hardware, so I was wondering if it is possible to subtract a known noise level (from a periodic noise) from the Goertzel results.

Until now, I'm doing the Goertzel calculations like this, with samplesCount = 320 and scalingFactor = samplesCount/2:

Magnitude:
rawRelMagnitudeSquared = (q1*q1 + q2*q2 - q1*q2*frequency.coeff);

Decibel level:
db = 10.0f * log10f(rawRelMagnitudeSquared / 3225);

Power ratio:
scaledRelMagnitudeSquared = 2*rawRelMagnitudeSquared / scalingFactor;
signalAbsPower = sum of each sample*sample
powerRatio = scaledRelMagnitudeSquared / signalAbsPower;

I was running this for a recording that only contains noise and measuring the average rawRelMagnitudeSquared and signalAbsPower for each DTMF frequency. I then subtracted this from the normal results I get from other signals that have the same noise. As the formula above for powerRatio gives values > 1, I assume my approach is faulty. Can you help me out to understand what's wrong, or whether this is possible at all?
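For reference, here is a rough, self-contained restatement of the quantities described in the question (the variable names and the final ratio follow the question's description; this is an illustrative sketch, not a canonical implementation):

import math

def goertzel_mag_squared(samples, coeff):
    # Goertzel recurrence; coeff = 2*cos(2*pi*k/N) for DFT bin k
    q1 = q2 = 0.0
    for x in samples:
        q0 = coeff * q1 - q2 + x
        q2, q1 = q1, q0
    return q1 * q1 + q2 * q2 - coeff * q1 * q2      # rawRelMagnitudeSquared

def dtmf_power_ratio(samples, target_freq, sample_rate):
    n = len(samples)                                # samplesCount, e.g. 320
    scaling_factor = n / 2.0
    k = round(target_freq * n / float(sample_rate)) # nearest DFT bin to the DTMF tone
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    raw = goertzel_mag_squared(samples, coeff)
    scaled = 2.0 * raw / scaling_factor             # scaledRelMagnitudeSquared
    signal_abs_power = sum(x * x for x in samples)  # sum of sample*sample
    return scaled / signal_abs_power                # powerRatio as defined in the question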
http://crypto.stackexchange.com/questions/786/how-can-i-use-weierstrass-curve-operations-with-a-3-for-implementing-operations/791
# How can I use Weierstrass curve operations with a=-3 for implementing operations for a=0?

I am working with golang's elliptic library. It implements functions on Weierstrass elliptic curves with $a=-3$. I need to make my own library that allows me to handle curves with $a=0$. I understand there are specific equations to use to optimally perform operations on different elliptic curves, which can be found in Short Weierstrass curves. I would like to know which functions from the library I should change, and which stay the same (as I'm not an expert in cryptography). Here is the full list: IsOnCurve, affineFromJacobian, Add, addJacobian, Double, doubleJacobian, ScalarMult, ScalarBaseMult, GenerateKey, Marshal, Unmarshal, and some initialization functions.

As I understand it, I need to change the following:

• IsOnCurve (removing the -3x)
• doubleJacobian (different equation)
• and the init functions

Those should stay the same:

• addJacobian (same optimal equation)
• Double
• ScalarBaseMult
• Marshal
• Unmarshal

And I'm not sure about these:

• affineFromJacobian
• ScalarMult
• GenerateKey

Can anyone tell me if my list is correct, and if not, give me some feedback on how I should change the given items?

-

I have not thoroughly investigated golang's elliptic library (or Go at all), but I have implemented elliptic curves (with Jacobian coordinates) and I would say that your guess is correct. The "$a$" parameter is not used in the addition of two distinct points, but it appears in the formulas for doubling a point.

With Jacobian coordinates, a normal implementation will include core functions such as addJacobian() and doubleJacobian(), which are used only through wrappers which filter out the special cases (if both operands to an "add" are identical, then it must be a "double"; if one operand is the point-at-infinity, then the result is the other operand). The $a$ curve parameter is supposedly used when doubling a point, and when deciding if a point is on the curve or not (by applying the curve equation). It would also be used in point (de)compression, a kind of marshalling which saves a bit of space; but point compression does not appear to be implemented in the code you link to (it is optional and rumoured to be patented, which is why it is often avoided).

I am not entirely sure that the code is correct, though. In ScalarMult(), the multiplier is not reduced modulo the curve order, so an oversized value k could imply calling addJacobian() on a point and itself, something which does not appear to be handled in the code (this should be checked: if $n$ is the curve order, try $k = n+2$; this should yield $2G$ if the code is correct).
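To make the two places where $a$ shows up concrete, here is an illustrative sketch (in Python rather than Go, and not the golang elliptic API; the names are made up) of the curve-equation check and of affine doubling; the tangent slope $3x^2 + a$ is the only spot where the parameter enters:

def is_on_curve(x, y, a, b, p):
    # short Weierstrass equation: y^2 = x^3 + a*x + b (mod p)
    # golang's crypto/elliptic hard-codes a = -3; for the asker's curves a = 0
    return (y * y - (x * x * x + a * x + b)) % p == 0

def affine_double(x, y, a, p):
    # slope of the tangent: (3*x^2 + a) / (2*y) mod p
    lam = (3 * x * x + a) * pow(2 * y, p - 2, p) % p  # modular inverse via Fermat's little theorem (p prime)
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return x3, y3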
https://fr.wikisource.org/wiki/Mercuriales_de_la_Haute-Loire,_d%E2%80%99avril_%C3%A0_d%C3%A9cembre_1878
# Mercuriales de la Haute-Loire, d’avril à décembre 1878 Aller à la navigation Aller à la recherche ## MERCURIALES DE LA HAUTE-LOIRE D’APRÈS LES RENSEIGNEMENTS FOURNIS PAR LA PRÉFECTURE ### AVRIL À DÉCEMBRE 1878 PRODUITS. MARCHÉS ${\displaystyle \overbrace {\quad \quad \quad \quad \quad } }$ duPuy. deCraponne duMonastier. deSaugues. dePradelles. deBrioude. D’Yssin-geaux. AVRIL Froment (l’hect.) 23f 32c »f »c »f »c »f »c »f »c 23f 68c 22f 37c Méteil 18 63 » » » » » » » » » » » » Seigle 14 81 15 25 15 » 13 75 14 25 15 06 15 56 Orge 16 32 15 25 16 » 13 25 15 50 14 62 15 16 Avoine 9 32 11 50 8 75 8 75 8 50 9 25 9 77 Pois 20 31 » » » » » » 17 50 » » » » Lentilles 39 50 » » » » » » » » » » » » Haricots 30 » » » » » 30 » » » 27 » » » Pommes de terre 4 75 5 » 6 » 4 50 » » 4 93 1 03 Boeuf (le kil.) 1 80 » » 1 70 » » » » 1 60 1 58 Vache 1 60 1 25 1 56 1 40 » » 1 50 1 58 Veau 1 77 1 50 1 70 1 30 1 47 1 67 1 58 Mouton 1 75 1 80 1 60 1 55 1 77 1 82 1 58 Porc 1 77 1 60 » » 1 50 1 60 1 70 1 50 Foin (le quint. mét.) 6 » » » 4 » 6 » 5 25 7 » 5 75 Paille 2 » » » 2 50 » » 2 25 4 » 2 35 PRODUITS. MARCHÉS ${\displaystyle \overbrace {\quad \quad \quad \quad \quad } }$ duPuy. deCraponne duMonastier. deSaugues. dePradelles. deBrioude. D’Yssin-geaux. MAI Froment (l’hect.) 23f 17c »f »c »f »c »f »c »f »c 23f 62c 22f 25c Méteil 17 62 » » » » » » » » » » » » Seigle 14 48 15 50 15 » 13 75 14 50 14 75 15 37 Orge 16 24 15 50 15 50 13 25 15 50 14 75 15 37 Avoine 9 30 13 » 8 75 9 37 8 50 9 37 10 06 Pois 21 25 » » » » » » 17 50 » » » » Lentilles 38 » » » » » » » » » » » » » Haricots 30 » » » » » » » » » 27 » » » Pommes de terre 4 50 5 » 5 » 4 37 5 80 3 87 3 09 Boeuf (le kil.) 1 90 » » » » » » » » » » 1 62 Vache 1 67 1 40 » » 1 40 » » 1 50 1 62 Veau 1 82 1 65 1 75 1 40 1 50 1 60 1 62 Mouton 1 90 1 80 1 70 1 67 1 80 1 75 1 37 Porc 1 82 1 60 1 60 1 50 1 60 1 60 1 25 Foin (le quint. mét.) 6 » » » 4 » » » 6 » 7 » 4 87 Paille 2 » » » 2 75 » » 2 40 4 » 1 77 JUIN Froment (l’hect.) 23 22 » » » » » » » » 23 75 21 87 Méteil 17 37 » » » » » » » » » » » » Seigle 14 04 15 50 15 » 13 87 14 » 14 83 15 56 Orge 15 93 19 50 15 » 14 12 15 » 15 » 15 81 Avoine 9 31 9 62 8 75 10 » 8 81 9 12 10 31 Pois 21 25 » » » » » » 17 50 » » » » Lentilles 38 » » » » » » » » » » » » » Haricots 30 » » » » » » » » » 26 75 » » Pommes de terre 5 10 5 » 5 » 4 12 5 39 4 » 3 84 Boeuf (le kil.) 1 90 » » » » » » » » » » 1 60 Vache 1 65 1 40 » » 1 40 » » 1 55 1 60 Veau 1 90 1 70 1 70 1 50 1 50 1 62 1 60 Mouton 2 » 1 80 1 80 1 80 1 80 1 80 1 60 Porc 1 90 1 65 1 60 1 50 1 60 1 65 1 60 Foin (le quint. mét.) 6 » » » 4 50 5 50 4 87 7 » 4 50 Paille 2 » » » 3 » 2 » 2 50 4 » 2 05 PRODUITS. MARCHÉS ${\displaystyle \overbrace {\quad \quad \quad \quad \quad } }$ duPuy. deCraponne duMonastier. deSaugues. dePradelles. deBrioude. D’Yssin-geaux. JUILLET Froment (l’hect.) 23f 64c »f »c »f »c »f »c »f »c 23f 75c 21f 56c Méteil 17 54 » » » » » » » » » » » » Seigle 14 11 15 » 15 50 14 » 13 87 14 50 15 50 Orge 16 12 15 50 16 » 15 » 14 62 15 » 15 43 Avoine 9 50 11 40 9 50 10 » 8 87 9 » 10 12 Pois 21 25 » » » » » » » » » » » » Lentilles 38 » » » » » » » » » » » » » Haricots 30 » » » » » » » » » » » » » Pommes de terre 6 12 5 » 5 » 5 » 5 45 3 75 5 28 Boeuf (le kil.) 1 88 » » » » » » » » » » 1 50 Vache 1 60 1 40 » » 1 40 » » 1 60 1 55 Veau 1 90 1 70 1 80 1 50 1 47 1 65 1 55 Mouton 1 90 1 80 1 80 1 80 1 80 1 85 1 55 Porc 1 90 1 60 1 60 1 50 1 60 1 70 1 55 Foin (le quint. mét.) 6 » » » 4 » 4 50 5 » 7 » 4 45 Paille 2 » » » » » » » 2 50 4 » 2 » AOÛT Froment (l’hect.) 
23 27 » » » » » » » » 23 75 21 37 Méteil 16 93 » » » » » » » » » » » » Seigle 14 10 15 50 15 » 13 75 13 62 14 50 14 60 Orge 14 70 15 50 14 50 14 50 13 50 12 50 12 49 Avoine 9 39 13 » 9 50 9 75 8 50 8 87 9 87 Pois 21 04 » » » » » » » » » » » » Lentilles 29 83 » » » » » » » » » » » » Haricots 31 04 » » » » » » » » » » » » Pommes de terre 5 77 5 » 5 » 4 37 » » 5 62 3 27 Boeuf (le kil.) 1 80 » » » » » » » » » » 1 50 Vache 1 60 1 40 » » 1 40 » » 1 55 1 55 Veau 1 88 1 70 1 65 1 50 1 45 1 60 1 55 Mouton 1 88 1 80 1 70 1 70 1 80 1 90 1 55 Porc 1 90 1 70 1 70 1 80 1 70 1 82 1 55 Foin (le quint. mét.) 6 » » » 4 » 6 20 » » 7 50 5 » Paille 2 » » » » » » » 2 40 4 » 2 33 PRODUITS. MARCHÉS ${\displaystyle \overbrace {\quad \quad \quad \quad \quad } }$ duPuy. deCraponne duMonastier. deSaugues. dePradelles. deBrioude. D’Yssin-geaux. SEPTEMBRE Froment (l’hect.) 23f 43c »f »c »f »c »f »c »f »c 23f 75c 22f 12c Méteil 17 68 » » » » » » » » » » » » Seigle 14 87 15 50 14 » 13 50 13 25 15 62 14 06 Orge 14 43 15 50 13 50 14 » 12 42 13 » 11 62 Avoine 8 75 13 » 9 25 9 50 8 25 8 43 7 87 Pois 20 62 » » » » » » » » » » » » Lentilles 29 50 » » » » » » » » » » » » Haricots 33 12 » » » » » » » » » » » » Pommes de terre 4 52 5 » 4 50 3 75 » » 3 93 2 49 Boeuf (le kil.) 1 80 » » » » » » » » » » 1 50 Vache 1 60 1 40 » » 1 40 1 55 1 55 1 50 Veau 1 80 1 65 1 55 1 50 1 60 1 60 1 50 Mouton 1 85 1 80 1 65 1 80 1 80 1 90 1 50 Porc 1 90 1 65 1 60 1 60 1 60 1 75 1 50 Foin (le quint. mét.) 6 » » » 4 » 5 » 5 » 7 50 5 » Paille 2 » » » » » 2 20 2 20 4 » 1 80 OCTOBRE Froment (l’hect.) 23 31 » » 23 » » » » » 22 52 20 72 Méteil 17 56 » » » » » » » » » » » » Seigle 14 62 15 50 14 » 12 50 12 25 14 62 14 50 Orge 14 31 15 50 14 » 12 » 12 » 12 93 12 45 Avoine 8 75 13 » 8 75 8 70 8 12 8 » 8 04 Pois 19 37 » » » » » » » » » » » » Lentilles 29 » » » » » » » » » » » » » Haricots 30 62 » » » » » » » » 27 50 » » Pommes de terre 4 18 5 » 4 » 2 25 3 75 4 18 2 59 Boeuf (le kil.) 1 80 » » » » » » » » 1 60 1 55 Vache 1 60 1 40 » » 1 40 » » 1 55 1 50 Veau 1 80 1 70 1 60 1 50 1 42 1 60 1 50 Mouton 1 80 1 80 1 70 1 80 1 80 1 90 1 50 Porc 1 90 1 60 1 60 1 50 1 50 1 75 1 50 Foin (le quint. mét.) 6 » » » 4 » » » 5 » 7 50 5 » Paille 2 » » » » » » » 2 30 4 » 2 01 PRODUITS. MARCHÉS ${\displaystyle \overbrace {\quad \quad \quad \quad \quad } }$ duPuy. deCraponne duMonastier. deSaugues. dePradelles. deBrioude. D’Yssin-geaux. NOVEMBRE Froment (l’hect.) 21f 04c »f »c »f »c »f »c »f »c 21f 06c 20f »c Méteil 17 45 » » » » » » » » » » » » Seigle 14 08 15 50 15 » 15 » 12 25 14 » 14 37 Orge 13 19 15 50 14 » 12 » 12 » 13 14 12 62 Avoine 8 20 13 » 8 75 8 70 7 87 8 » 8 19 Pois 18 75 » » » » » » » » » » » » Lentilles 29 » » » » » » » » » » » » » Haricots 30 » » » » » » » » » 23 75 » » Pommes de terre 3 87 5 » 4 » 2 25 3 75 3 87 2 99 Boeuf (le kil.) 1 80 » » » » » » » » 1 60 1 50 Vache 1 60 1 40 » » 1 30 » » 1 55 1 50 Veau 1 75 1 70 1 55 1 37 1 42 1 60 1 50 Mouton 1 81 1 80 1 65 1 35 1 77 1 90 1 50 Porc 1 82 1 60 1 55 1 80 1 50 1 75 1 50 Foin (le quint. mét.) 6 37 » » 4 » » » 5 12 7 50 5 52 Paille 2 25 » » 2 50 » » 2 25 4 25 2 40 DÉCEMBRE Froment (l’hect.) 20 62 » » » » » » » » 21 25 19 » Méteil 18 07 » » » » » » » » » » » » Seigle 13 87 15 50 15 » 13 25 12 » 14 87 14 62 Orge 13 04 15 50 13 » 13 50 12 25 12 62 12 87 Avoine 8 06 13 » 7 50 9 » 7 50 8 » 8 12 Pois 18 75 » » » » » » » » » » » » Lentilles 29 » » » » » » » » » » » » » Haricots 30 » » » » » » » » » 22 50 » » Pommes de terre 3 43 5 » 4 » 3 50 » » 4 31 3 53 Boeuf (le kil.) 
1 80 » » » » » » » » 1 60 1 50 Vache 1 60 1 40 » » 1 30 » » 1 52 1 50 Veau 1 70 1 70 1 60 1 45 1 45 1 60 1 50 Mouton 1 80 1 80 1 60 1 40 1 77 1 90 1 50 Porc 1 77 1 60 1 50 1 80 1 50 1 68 1 50 Foin (le quint. mét.) 6 25 » » 6 » 7 50 5 37 7 50 6 07 Paille 2 25 » » 2 50 » » 2 30 4 87 2 55
https://dispatchesfromturtleisland.blogspot.com/2018/11/
## Thursday, November 29, 2018

### 14,000 Year Old Fishing Village Unearthed In British Columbia

The New Archaeological Site In British Columbia

While it has received prominent mention in recent years, it is still possible to gain valuable insights into human prehistory by means other than genetics. Sometimes old school archaeological digs and carbon dating can still be a source of important discoveries. An archaeological site in British Columbia sheds light on the lives of members of this Founding Population at a time close to their primary expansion out of Beringia. Among other things, it corroborated the hypothesis that these people had relatively long term settlements in some places, and relied on a mix of fishing and terrestrial hunting and gathering for subsistence.

CTV reports that a team of students from the University of Victoria's archeology department have uncovered the oldest settlement in North America. This ancient village was discovered when researchers were searching Triquet Island, an island located about 300 miles north of Victoria, British Columbia. The team found ancient fish hooks and spears, as well as tools for making fires. However, they really hit the jackpot when they found an ancient cooking hearth, from which they were able to obtain flakes of charcoal burnt by prehistoric Canadians. Using carbon dating on the charcoal flakes, the researchers were able to determine that the settlement dates back 14,000 years ago[.] . . . Alisha Gauvreau, a Ph.D student who helped discover this site. . . and her team began investigating the area for ancient settlements after hearing the oral history of the indigenous Heiltsuk people, which told of a sliver of land that never froze during the last ice age. William Housty, a member of the Heiltsuk First Nation, said, "To think about how these stories survived only to be supported by this archeological evidence is just amazing. This find is very important because it reaffirms a lot of the history that our people have been talking about for thousands of years."

But, one quote from the PhD student in the story is mostly wrong: "What this is doing, is changing our idea of the way in which North America was first peopled," said Gauvreau.

In fact, while this find is important, it is important mostly because it confirms and corroborates the existing paradigm regarding the peopling of the Americas, not because it "is changing our idea" of how this happened. It is notable not because it changes our ideas about the peopling of the Americas, but because it is some of the most clear and concrete evidence to date confirming the existing paradigm. But, it is understandable and forgivable that an investigator selling a story about her discovery to the press stretched the truth a little on this score. Paradigm changing discoveries are hot news. And, while this particular paradigm affirming find actually is important, paradigm affirming results are rarely news (imagine how dull the nightly news would be if it ran a big news story every time that the Large Hadron Collider had a result consistent with the Standard Model of Particle Physics).

Background: Why Does The "Founding Population" Of The Americas Matter?

When it comes to the prehistory of the Americas, one of the central questions is to understand how people arrived in the Americas, and one of the central players in the answer is the "Founding Population" of the Americas.

The Founding Population was a group of people with a quite small effective population size (a few hundred at most) who rapidly expanded from Beringia into essentially all of the "virgin territory" of North America and South America over a period of a couple of thousand years or so as the last great ice age (which peaked at the Last Glacial Maximum about 20,000 years ago) retreated, starting more than a thousand years before the Younger Dryas climate event (ca. 12,900 to 11,700 years ago, which was a return to glacial conditions that temporarily reversed the gradual climatic warming after the Last Glacial Maximum started receding).

There is a growing community of investigators and observers of the prehistory of the Americas who give credence to the scattered bits of evidence for one or more older hominin populations in the Americas (either modern human or archaic hominid) who migrated into the Americas from Beringia before or during the Last Glacial Maximum, rather than only starting when the vast North American glacier started to melt and recede. But, we know that any earlier hominins in the Americas (modern human or otherwise) never thrived and were either almost entirely wiped out by the later waves of modern human migration, or were so genetically similar to the founding population of the Americas that they are indistinguishable from them genetically. Because there is no distinguishable trace of them in any modern or ancient DNA samples from the Americas, with the possible exception of some minor "paleo-Asian" ancestry in a few tribes in the Amazon, whose origins are a mystery.

But, even if you find that evidence to be credible, there is overwhelming modern and ancient genetic evidence that 99.99% or more of the ancestry of the pre-Columbian residents of the Americas is derived from a single "Founding Population" which started to expand in earnest not many centuries earlier than 14,000 years ago, subject to two exceptions: (1) Inuits in the Arctic and sub-Arctic, and (2) some select tribes in Alaska, the Pacific Northwest and the American Southwest with Na-Dene ancestry.

The Inuits derive from a migration wave from Northeast Asia within the last two thousand years and replaced earlier "paleo-Eskimo" populations in Northern Canada. The Na-Dene derive from a migration wave from Northeast Asia around the time of the European Bronze Age and then admixed with descendants of the Founding Population who were already present in North America. But, apart from a small component of some cryptic "paleo-Asian" ancestry in a handful of hunter-gatherer tribes in jungles in the Amazon River basin near the northeastern foothills of the Andes Mountains, all other pre-Columbian genetic ancestry in the Americas derives from the Founding Population. Founding Population ancestry was the predominant source of ancestry in almost every non-Inuit indigenous person in North America and South America in 1492, and was the only source of ancestry in the lion's share of those millions of people.

So, given their central role as the primary ancestors of all of the indigenous people of the Americas, except the Inuits, knowing more about this quite small community of people from around 14,000 years ago is obviously a matter of great importance.

## Wednesday, November 28, 2018

### The Latest On Top Quark Mass And Properties

ATLAS and CMS have also come out with a combined review of top quark property measurements at the LHC. Like the Higgs boson review released yesterday, the data aren't particularly new.
In principle, the top quark is fully described in the Standard Model when you know its mass and the relevant components of the CKM matrix (which is itself a function of four parameters). All other top quark properties are predicted by the Standard Model (sometimes with the assistance of other Standard Model parameters like the strong force coupling constant), so the experimental results can be compared to Standard Model predictions to constrain extensions of the Standard Model involving "new physics" and to calibrate numerical and analytical approximation methods for ascertaining Standard Model predictions.

The top quark mass, mtop, is a key parameter in the SM and is the major contributor to the Higgs boson mass (mH) through radiative corrections. Therefore, the accuracy on both mtop and mH measurements is quite crucial for the consistency tests of the SM. Starting from the Tevatron experiments, the top quark mass has so far been measured with increasing precision using multiple final states, as well as with different analysis techniques. Two of the most recent mtop measurements from the CMS and ATLAS experiments are presented here.

ATLAS has recently performed a top quark mass measurement [21] in lepton+jets final states using a 20.2 fb−1 dataset at √s = 8 TeV. The full event reconstruction is performed using a likelihood based kinematic fitter, KLFITTER [22]. The ttbar → lepton+jets event selection is further optimized through the usage of a boosted decision tree [23]. The top quark mass (mtop) together with the jet energy scale factor (JSF) and b-jet energy scale factor (bJSF) is then simultaneously extracted using the template fit technique. The template fit results in terms of mtop and mW are shown in Fig. 7. The measurement yields a top quark mass of 172.8 ± 0.39 (stat) ± 0.82 (syst) GeV, where the dominant uncertainties are driven by theoretical modeling and systematics.

The latest mtop measurement [25] from CMS is based on a 35.9 fb−1 dataset at √s = 13 TeV. The full ttbar → lepton+jets event reconstruction is performed using a kinematic fit of the decay products. A 2-D ideogram fitting technique [24] is then applied to the data to measure the top quark mass simultaneously with an overall jet energy scale factor (JSF), constrained by mW (through W → qq' decays); the fit results in terms of mtop and mW are shown in Fig. 8. The ideogram method measures an mtop value of 172.25 ± 0.08 (stat) ± 0.62 (syst) GeV, in consistency with the Run 1 CMS measurements at √s = 7 and 8 TeV. The measurement results in a precision of ∆mtop/mtop ≈ 0.36%, where the leading uncertainties originate from MC modeling, color reconnection, parton showering, JES, etc.

The most recent individual mtop measurements from the LHC experiments, along with the world average value for mtop, are summarized in Fig. 9.

The Particle Data Group reports that the global average value for the top quark mass (including measurements from the Tevatron as well as the LHC and also the one CMS Run 2 result) is 173.0 ± 0.4 GeV.

An analysis of speculative theoretical predictions of what the top quark mass should be to fit various assumptions can be found in a March 20, 2014 post at this blog. Some highlights:

An extended Koide's rule estimate of the top quark mass using only the electron and muon masses as inputs predicted a top quark mass of 173,263.947 ± 0.006 MeV. . . .

The dominance of any imprecision in the top quark mass to overall model fits is further amplified in cases where the quantities compared are the square of the masses rather than the masses themselves (e.g. comparing the sum of squares of the Standard Model particle masses to the almost precisely identical square of the vacuum expectation value of the Higgs field). . . . About 72% of this imprecision is due to the top quark mass and about 99.15% of the imprecision is due to the top quark mass and Higgs boson masses combined. . . .

What is the best fit value for the top quark mass? Answer: 173,112.5 ± 2.5 MeV . . .

The value of the top quark mass necessary to make the sum of the squares of the fermion masses equal to the sum of the squares of the boson masses would be about 174,974 MeV under the same set of assumptions[.]

That analysis assumed a 125,955.8 MeV mass for the Higgs boson, which is high (the current best estimate is 125.18 ± 0.16 GeV), so the top quark mass estimates in both cases should be higher than estimated given those assumptions. As previously noted in a December 16, 2016 blog post at this blog:

If the sum of the squares of the boson masses equals the sum of the squares of the fermion masses equals one half of the Higgs vacuum expectation value, the implied top quark mass is 174.03 GeV if pole masses of the quarks are used, and 174.05 GeV if MS masses at typical scales are used. . . . The expected value of the top mass from the formula that the sum of the squares of each of the fundamental particle masses equals the square of the Higgs vacuum expectation value (a less stringent condition because the fermion and boson masses don't have to balance), given the global average Higgs boson mass measurement (and using a global fit value of 80.376 GeV for the W boson rather than the PDG value), is 173.73 GeV.

The top quark mass can be a little lighter in this scenario because the global average measured value of the Higgs boson mass is a bit heavier than under the more stringent condition.
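As a quick illustration of the arithmetic behind the quoted 174.03 GeV figure, here is a back-of-the-envelope sketch (my own addition, not from either linked post) using rough PDG central values for the lighter fermion masses and v ≈ 246.22 GeV:

v = 246.22                                   # Higgs vacuum expectation value in GeV (assumed)
light_fermions = [4.18, 1.27, 0.096,         # b, c, s (rough central values)
                  0.0047, 0.0022,            # d, u
                  1.777, 0.1057, 0.000511]   # tau, mu, e
target = v ** 2 / 2.0                        # conjectured sum of squared fermion masses
implied_mtop = (target - sum(m * m for m in light_fermions)) ** 0.5
print(round(implied_mtop, 2))                # ~174.0 GeV, close to the quoted 174.03 GeV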
One property of the top quark predicted by the Standard Model is its "decay width" (which has a one to one correspondence with its half-life). A particle's half-life is inversely proportional to its decay width, so a particle with a very large decay width, like the top quark, has a very short half-life. The quantity αs referred to in the text is the strong force coupling constant strength at the Z boson mass.

Top Decay Width

Being quite heavy, the top quark has a large decay width (Γt). Within the SM, the next-to-next-to-leading-order (NNLO) calculations predict a Γt of 1.322 GeV for a top quark mass (mtop) of 172.5 GeV and αs = 0.1189 [15]. CMS has recently utilized the ttbar → dilepton events from 12.9 fb−1 of the Run 2 dataset (at √s = 13 TeV) to constrain the total decay width of the top quark through direct measurement. . . . the likelihood fit provides an observed (expected) bound of 0.6 < Γt < 2.5 (0.6 < Γt < 2.4) GeV at 95% confidence level [16].

[Ed. although expressed differently, this is roughly equivalent to a value of 1.5 ± 0.45 GeV, which is actually a smaller MOE and a mean value closer to the predicted value than the ATLAS measurement. The actual fit to the prediction is a little better since the error margins are lopsided.]

ATLAS performed a more refined measurement of the top quark decay width using the ttbar → lepton+jets events from 20.2 fb−1 of the Run 1 dataset at √s = 8 TeV. . . . the measurement yields a value of Γt = 1.76 ± 0.33 (stat) +0.79 −0.68 (syst) GeV (for mtop = 172.5 GeV) [17], in good agreement with the SM predicted value. However, the measurement is limited by the systematic uncertainties from jet energy scale/resolution and signal modeling.

## Tuesday, November 27, 2018

### The Latest On The Higgs Boson Mass

One of the parameters of the Standard Model which I watch very closely, because it has only been measured at all for a few years and because it is relevant for many purposes, is the Higgs boson mass. Indeed, it is the only experimentally measured parameter involving the Higgs boson in the Standard Model. If you know its mass, in the Standard Model, the particle is fully described.

An end of year paper speaking officially for both the ATLAS and CMS experiments at the Large Hadron Collider provides this summary of Higgs boson mass measurements at the LHC.

5.4 Higgs boson mass measurement

The Higgs boson mass can be measured using the high resolution ZZ∗ and γγ final states. Combining the measurements in these two channels from 2015-2016 data and from Run 1, the ATLAS collaboration reports a value of the Higgs boson mass of 124.97 ± 0.24 GeV [32] (with ±0.19 GeV of statistical uncertainty and ±0.13 GeV of systematic uncertainty, mainly from uncertainties in the photon energy scale). With the ZZ∗ channel from 2015-2016 data, the CMS collaboration reports a mass value of 125.26 ± 0.21 GeV [33]. At the same time, a direct upper limit on the decay width is set at 95% confidence level at 1.1 GeV. This is still far above the predicted width in the SM, which is about 4 MeV. A more model dependent constraint on the Higgs boson width can be derived comparing the rate of gg → H(∗) → ZZ(∗) events in the on-shell and off-shell Higgs mass regions. The ATLAS analysis with 2015-2016 data sets a model-dependent limit at 14.4 MeV on the decay width, at 95% confidence level [34].

All other properties of the Higgs boson measured to date are consistent with the Standard Model predictions for it, within the limits of experimental measurement uncertainty.

The most recent current combined LHC mass measurement of the Higgs boson I have seen in most sources is 125.09 ± 0.24 GeV, which is based upon all measurements in all channels at ATLAS and CMS combined, in Run 1. But, the Particle Data Group reports a more precise figure of 125.18 ± 0.16 GeV, which includes one Run 2 measurement in one channel from CMS.

A Higgs boson mass of 124.65 GeV is not yet ruled out by the data and would be interesting because that mass is one for which the sum of Yukawas for all of the fundamental bosons in the Standard Model is exactly 0.5. But, the weighted global average of the Higgs boson mass is about 125.09 GeV with a MOE of 0.24 GeV, which is 0.44 GeV higher than the 124.65 GeV value that is so notable, which is a little under two sigma. So, the lower value isn't excluded experimentally, but it isn't favored either.

The gap between the ATLAS measurement and this theoretical value is 0.32 GeV, which with a MOE of 0.24 GeV is just over 1.3 sigma from the expected value. But, the gap between the CMS measurement and the theoretical value is 0.61 GeV, which with a MOE of 0.21 GeV is almost three sigma.
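The sigma figures above are simple pulls against the 124.65 GeV reference value; a tiny sketch of that arithmetic (values copied from the text, purely illustrative):

measurements = {"ATLAS": (124.97, 0.24), "CMS": (125.26, 0.21)}
reference = 124.65   # the numerically interesting mass discussed above
for name, (mass, moe) in sorted(measurements.items()):
    print("%s: %.2f sigma" % (name, (mass - reference) / moe))
# ATLAS: 1.33 sigma, CMS: 2.90 sigma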
On the other hand, the fact that two experiments using the same equipment are 0.36 GeV apart, and that the underlying measurements that went into each experiment's average value are even further apart, makes me think that the systemic and/or theoretical error is underestimated. The ATLAS and CMS individual experiments going into the global average have a swing on the order of 1 GeV plus. Statistical error is pretty hard to get wrong (except for considering the effect of look elsewhere effects, which aren't very important when there are only four measurements or so at issue), but systemic and theoretical error is inherently hard to estimate.

### Quantum Gravity v. General Relativity

A quantum gravity theory based upon a massless spin-2 graviton should, in the classical limit, reproduce general relativity (GR) (I have yet to see any really rigorous proof of this piece of folk wisdom). But, such a theory isn't, and can't be, completely identical to GR, although devising an experimental test of whether it is one or the other is a question that has stumped physicists so far. There are some pretty generic qualitative differences between classical GR and any theory of gravity based upon graviton exchange. Here are fifteen of them. In a quantum gravity theory:

1. Gravitational energy is localized (this is not true in GR).

2. Gravitational energy is perfectly conserved (this is not true in most interpretations of GR).

3. Graviton self-interactions and graviton interactions with other particles would look the same mathematically, while in GR gravitational field self-interactions do have an impact on space-time curvature, but while all other kinds of mass-energy inputs make their way into Einstein's equations via the stress-energy tensor, gravitational field self-interactions are treated differently mathematically.

4. Gravitons deliver gravity in tiny lumps, while space-time curvature does so continuously; i.e. sometimes gravitons should act like particles instead of waves, while GR has only wave-like gravitational behavior.

5. Gravitons ought to be able to exhibit tunneling behavior that doesn't exist in classical GR.

6. A graviton based theory is stochastic; GR is deterministic.

7. It is much less "natural" to include the cosmological constant in a graviton theory than in GR, where it is an integration constant. In a quantum gravity theory there is a tendency to decouple dark energy from other gravitational phenomena.

8. In a quantum gravity theory, gravitons couple to everything, so a creation operator from a pair of high energy gravitons could give rise to almost anything (in contrast, photoproduction can give rise only to pairs of charged particles that couple to photons); likewise any two particles with opposite quantum numbers could annihilate into gravitons instead of, for example, photons. Neither creation nor annihilation operations exist in GR in quite the same way, although seemingly massive systems can be converted into high energy gravitational waves.

9. In some graviton based theories, properties of a graviton must be renormalized with energy scale like all of the SM physical constants; in others there is a cancellation or symmetry of some kind (probably a unique one) that prevents this from happening. One or the other possibility is true, but we don't know which one. GR doesn't renormalize.

10. In graviton based theories lots of practical calculations require approximating infinite series that we don't know how to manage mathematically; in GR, in contrast, infinite series expressions are very uncommon and the calculations are merely wickedly difficult rather than basically impossible.

11. In GR singularities like black holes can be absolute; in a quantum gravity theory they can be only nearly "perfect" but will always leak a little, because they are discontinuous and stochastic.

12. In quantum gravity it ought to be possible to have gravitons that are entangled with each other, while in GR this doesn't happen.

13. In quantum gravity with gravitons, the paradigmatic approach is to look at the propagators of point particles; GR is conventionally formulated in a hydrodynamic form that encompasses a vast number of individual particles (although it is possible to formulate GR differently while retaining its classical character).

14. In quantum gravity, calculations for almost every other interaction of every kind need to be tweaked by considering graviton loops; in GR the gravitational sector and the fundamental particles of the Standard Model operate in separate domains. For example, even if Newton's constant does not run with energy scale due to some symmetry in a quantum gravity theory, the running of the strong force coupling constant with energy scale would be slightly different than in the SM without gravitons.

15. Adding a graviton to the mix of particles in a TOE qualitatively changes what groups can include all fundamental particles that exist and none that do not; while in GR, where gravity is not fundamental particle based, it does not.

## Monday, November 26, 2018

### More Structure Not Predicted By The Standard Model Of Cosmology

The Standard Model of Cosmology, a.k.a. the LambdaCDM model, a.k.a. the Concordance Model of Cosmology, doesn't predict the tight relationship between the distribution of stars in galaxies and the location of dark matter inferred from the dynamics and lensing in the vicinity of those stars. Another thing which is observed, but not predicted by the Concordance Model, is the fairly strong correlation between a galaxy's bulge size and its number of satellite galaxies. But, that structure is also present in the data. A new paper confirms that there is a correlation and that the Concordance Model doesn't predict its existence.

There is a correlation between bulge mass of the three main galaxies of the Local Group (LG), i.e. M31, Milky Way (MW), and M33, and the number of their dwarf spheroidal galaxies. A similar correlation has also been reported for spiral galaxies with comparable luminosities outside the LG. These correlations do not appear to be expected in standard hierarchical galaxy formation. In this contribution, and for the first time, we present a quantitative investigation of the expectations of the standard model of cosmology for this possible relation using a galaxy catalogue based on the Millennium-II simulation. Our main sample consists of disk galaxies at the centers of halos with a range of virial masses similar to M33, MW, and M31. For this sample, we find an average trend (though with very large scatter) similar to the one observed in the LG; disk galaxies in heavier halos on average host heavier bulges and larger number of satellites. In addition, we study sub-samples of disk galaxies with very similar stellar or halo masses (but spanning a range of 2-3 orders of magnitude in bulge mass) and find no obvious trend in the number of satellites vs. bulge mass. We conclude that while for a wide galaxy mass range a relation arises (which seems to be a manifestation of the satellite number - halo mass correlation), for a narrow one there is no relation between number of satellites and bulge mass in the standard model.
Further studies are needed to better understand the expectations of the standard model for this possible relation.

B. Javanmardi, M. Raouf, H. G. Khosroshahi, S. Tavasoli, O. Müller, A. Molaeinezhad, "The number of dwarf satellites of disk galaxies versus their bulge mass in the standard model of cosmology" (November 21, 2018) (accepted in The Astrophysical Journal).

This is quite powerful, despite a fairly thin data set to establish the correlation that exists in the real world, because it is a problem with lambdaCDM that is independent of its inaccurate expectations about where dark matter is located.

A new paper continuing this line of research is the following one:

Low mass galaxies are expected to be dark matter dominated even within their centrals. Recently two observations reported two dwarf galaxies in group environment with very little dark matter in their centrals. We explore the population and origins of dark matter deficit galaxies (DMDGs) in two state-of-the-art hydrodynamical simulations, the EAGLE and Illustris projects. For all satellite galaxies with M > 10^9 M_⊙ in groups with M_200 > 10^13 M_⊙, we find that about 5.0% of them in the EAGLE, and 3.2% in the Illustris, are DMDGs with dark matter fractions below 50% inside two times half-stellar-mass radii. We demonstrate that DMDGs are highly tidally disrupted galaxies; and because dark matter has higher binding energy than stars, mass loss of the dark matter is much more rapid than stars in DMDGs during tidal interactions. If DMDGs were confirmed in observations, they are expected in current galaxy formation models.

Yingjie Jing, et al., "The dark matter deficit galaxies in hydrodynamical simulations" (November 22, 2018).

Another problem, somewhat related to the unexpected structure in inferred dark matter distributions, is that a very large swath of the parameter space of particles that interact with Standard Model matter non-gravitationally has been excluded experimentally, but the tight alignment of stars and inferred dark matter distributions implies that if dark matter is real then it has to have non-trivial, non-gravitational interactions with stars and other ordinary matter. Truly "sterile" dark matter which doesn't interact with anything non-gravitationally, which would be "collisionless", as lambdaCDM assumes that dark matter comes close to, has basically been ruled out experimentally.

In addition to these two relatively independent problems, lambdaCDM also has a problem with its chronology of the moderately early universe. This gives rise to the "Impossible Early Galaxies" problem, and to 21cm radiation wavelength lines that fail to show the behavior expected in a world with dark matter at roughly the end of the "radiation era".

While correlation is not causation, most strong correlations in nature have a cause of some kind. Figuring out which set is the cause and which is the effect can be difficult, or can even be a category error. But, there is almost always some reason for the relationship. Because the Concordance Model fails to explain multiple independent phenomena that show correlations, it is probably wrong. Not wildly totally wrong, because it does get lots of things that we can confirm with astronomy at very large scales right. But, significantly, deeply flawed.

It only took the one flaw to convince me that something was amiss with the Concordance Model. But, lots of people who are less skeptical of lambdaCDM than I am are going to start looking for alternatives as multiple, significant, seemingly independent breaks between the Concordance Model and observed reality emerge.

I, of course, think (although I can't personally rigorously prove it) that pretty much all of the flaws of lambdaCDM exist because we have misunderstood some important second and third order quantum gravitational effects that matter in very weak gravitational fields in very high mass systems. My very strong intuition is that, in reality, there is both no dark matter and no dark energy, apart from fields of Standard Model fundamental particles and gravitons. But, I don't expect that paradigm shift to spread all that quickly, unless a rising star popularizes a solution of that kind on a mass scale within the physics and physics journalism communities.

Meanwhile, another promising modified gravity theory has emerged.

We have recently shown that the baryonic Tully-Fisher (BTF) and Faber-Jackson (BFJ) relations imply that the gravitational "constant" G in the force law varies with acceleration a as 1/a. Here we derive the converse from first principles. First we obtain the gravitational potential for all accelerations and we formulate the Lagrangian for the central-force problem. Then action minimization implies the BTF/BFJ relations in the deep MOND limit as well as weak-field Weyl gravity in the Newtonian limit. The results show how we can properly formulate a nonrelativistic conformal theory of modified dynamics that reduces to MOND in its low-acceleration limit and to Weyl gravity in the opposite limit. An unavoidable conclusion is that a_0, the transitional acceleration in modified dynamics, does not have a cosmological origin and it may not even be constant among galaxies and galaxy clusters.

Dimitris M. Christodoulou, Demosthenes Kazanas, "Gravitational Potential and Nonrelativistic Lagrangian in Modified Gravity with Varying G" (November 21, 2018).

Further afield and mostly unrelated is the possibility that lots of the filamentary large scale structure of the universe could be driven by magnetism, which is usually assumed to be negligible and not influential in interstellar space. But, maybe not:

Evidence repeatedly suggests that cosmological sheets, filaments and voids may be substantially magnetised today. The origin of magnetic fields in the intergalactic medium is however currently uncertain. We discuss a magnetogenesis mechanism based on the exchange of momentum between hard photons and electrons in an inhomogeneous intergalactic medium. Operating near ionising sources during the epoch of reionisation, it is capable of generating magnetic seeds of relevant strengths over scales comparable to the distance between ionising sources. Furthermore, when the contributions of all ionising sources and the distribution of gas inhomogeneities are taken into account, it leads, by the end of reionisation, to a level of magnetisation that may account for the current magnetic field strengths in the cosmic web.

Mathieu Langer, Jean-Baptiste Durrive, "Magnetising the Cosmic Web during Reionisation" (November 22, 2018).

MORE INTERESTING PAPERS (NO TIME TO FORMAT THEM, A RICH LOAD OF PAPERS TODAY FOR SOME REASON, PERHAPS A PRE-THANKSGIVING RUSH TO WRAP STUFF UP):

arXiv:1811.09197 [pdf, other]
Large-scale redshift space distortions in modified gravity theories
César Hernández-Aguayo, Jiamin Hou, Baojiu Li, Carlton M. Baugh, Ariel G. Sánchez
Sánchez Comments: 18 pages, 11 figures, submitted to MNRAS Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) Measurements of redshift space distortions (RSD) provide a means to test models of gravity on large-scales. We use mock galaxy catalogues constructed from large N-body simulations of standard and modified gravity models to measure galaxy clustering in redshift space. We focus our attention on two of the most representative and popular families of modified gravity models: the Hu \& Sawicki f(R) gravity and the normal branch of the DGP model. The galaxy catalogues are built using a halo occupation distribution (HOD) prescription with the HOD parameters in the modified gravity models tuned to match with the number density and the real-space clustering of {\sc boss-cmass} galaxies. We employ two approaches to model RSD: the first is based on linear perturbation theory and the second models non-linear effects on small-scales by assuming standard gravity and including biasing and RSD effects. We measure the monopole to real-space correlation function ratio, the quadrupole to monopole ratio, clustering wedges and multipoles of the correlation function and use these statistics to find the constraints on the distortion parameter, β. We find that the linear model fails to reproduce the N-body simulation results and the true value of β on scales $s < 40\Mpch$, while the non-linear modelling of RSD recovers the value of β on the scales of interest for all models. RSD on large scales (s≳20-$40\Mpch$) have been found to show significant deviations from the prediction of standard gravity in the DGP models. However, the potential to use RSD to constrain f(R) models is less promising, due to the different screening mechanism in this model, arXiv:1811.09222 [pdf, ps, other] Beyond the Standard models of particle physics and cosmology Maxim Yu. Khlopov Comments: Prepared for Proceedings of XXI Bled Workshop "What comes beyond the Standard models?" Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Phenomenology (hep-ph) The modern Standard cosmological model of inflationary Universe and baryosynthesis deeply involves particle theory beyond the Standard model (BSM). Inevitably, models of BSM physics lead to cosmological scenarios beyond the Standard cosmological paradigm. Scenarios of dark atom cosmology in the context of puzzles of direct and indirect dark matter searches, of clusters of massive primordial black holes as the source of gravitational wave signals and of antimatter globular cluster as the source of cosmic antihelium are discussed. arXiv:1811.09578 (cross-list from physics.gen-ph) [pdfpsother] Emergent photons and gravitons Comments: to appear in Proceedings of the 21st Bled Workshop "What Comes Beyond Standard Models" Subjects: General Physics (physics.gen-ph); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th) Now, it is already not a big surprise that due to the spontaneous Lorentz invariance violation (SLIV) there may emerge massless vector and tensor Goldstone modes identified particularly with photon and graviton. Point is, however, that this mechanism is usually considered separately for photon and graviton, though in reality they appear in fact together. In this connection, we recently develop the common emergent electrogravity model which would like to present here. 
This model incorporates the ordinary QED and tensor field gravity mimicking linearized general relativity. The SLIV is induced by length-fixing constraints put on the vector and tensor fields, $A_\mu^2 = \pm M_A^2$ and $H_{\mu\nu}^2 = \pm M_H^2$ (M_A and M_H are the proposed symmetry breaking scales), which possess a much higher symmetry than the model Lagrangian itself. As a result, twelve Goldstone modes are produced in total and they are collected into the vector and tensor field multiplets. While the photon is always a true vector Goldstone boson, the graviton contains pseudo-Goldstone modes as well. In terms of the appearing zero modes, the theory becomes essentially nonlinear and contains many Lorentz and CPT violating interactions. However, as argued, they do not contribute to processes which might lead to physical Lorentz violation. Nonetheless, how the emergent electrogravity theory could be observationally distinguished from the conventional QED and GR theories is also briefly discussed.

The electron self-energy in QED at two loops revisited
Subjects: High Energy Physics - Phenomenology (hep-ph)
We reconsider the two-loop electron self-energy in quantum electrodynamics. We present a modern calculation, where all relevant two-loop integrals are expressed in terms of iterated integrals of modular forms. As boundary points of the iterated integrals we consider the four cases p^2 = 0, p^2 = m^2, p^2 = 9m^2 and p^2 = ∞. The iterated integrals have q-expansions, which can be used for the numerical evaluation. We show that a truncation of the q-series to order O(q^30) gives numerically for the finite part of the self-energy a relative precision better than 10^(-20) for all real values of p^2/m^2.

Properties of the decay H → γγ using the approximate α_s^4 corrections and the principle of maximum conformality
Subjects: High Energy Physics - Phenomenology (hep-ph)
The Higgs boson decay channel, H → γγ, is one of the most important channels for probing the properties of the Higgs boson. In the paper, we reanalyze its decay width by using the QCD corrections up to α_s^4-order level. The principle of maximum conformality has been adopted to achieve a precise pQCD prediction without conventional renormalization scheme-and-scale ambiguities. By taking the Higgs mass as the one given by the ATLAS and CMS collaborations, i.e. M_H = 125.09 ± 0.21 ± 0.11 GeV, we obtain Γ(H → γγ)|_LHC = 9.364 (+0.076 / -0.075) keV.

Lepton and Quark Masses and Mixing in a SUSY Model with Delta(384) and CP
Comments: 1+41 pages, 1 figure, 5 tables
Subjects: High Energy Physics - Phenomenology (hep-ph)
We construct a supersymmetric model for leptons and quarks with the flavor symmetry Delta(384) and CP. The peculiar features of lepton and quark mixing are accomplished by the stepwise breaking of the flavor and CP symmetry. The correct description of lepton mixing angles requires two steps of symmetry breaking, where tri-bimaximal mixing arises after the first step. In the quark sector the Cabibbo angle theta_C equals sin(pi/16) = 0.195 after the first step of symmetry breaking and it is brought into full agreement with experimental data after the second step. The two remaining quark mixing angles are generated after the third step of symmetry breaking. All three leptonic CP phases are predicted, sin delta^l = -0.936, |sin alpha| = |sin beta| = 1/sqrt(2). The amount of CP violation in the quark sector turns out to be maximal at the lowest order and is correctly accounted for when higher order effects are included.
Charged fermion masses are reproduced with the help of operators with different numbers of flavor (and CP) symmetry breaking fields. Light neutrino masses, arising from the type-I seesaw mechanism, can accommodate both mass orderings, normal and inverted. The vacuum alignment of the flavor (and CP) symmetry breaking fields is discussed at leading and at higher order.

arXiv:1811.09378 [pdf, other]
Bound on the graviton mass from Chandra X-ray cluster sample
Sajal Gupta, Shantanu Desai
https://www.gamedev.net/forums/topic/301274-planetmoon-rotation-in-opengl-solar-systems/
# OpenGL Planet/moon rotation in OpenGL solar systems

## Recommended Posts

Hi everyone. =) I am trying to create a solar system in OpenGL, and have some trouble with rotation. I have managed to make planets rotate around the sun by using glRotatef(v, -0.1, 1, 0.2); right before I create the sun and all the planets, and placing the sun with its center at the point 0,0,0. However, what I also want to do is to make it possible for moons to rotate around the rotating planets, and to make the planets rotate around their own axis (like they do in reality when switching between day and night - one side isn't always facing the sun). This is the code I used to create the solar system:

//creates a planet, and binds the texture if tex isn't higher than the texture index.
void createPlanet(GLfloat k, int x, int y, int tex) {
    if (tex < MAX_NO_TEXTURES) { glBindTexture(GL_TEXTURE_2D, texture_id[tex]); };
    glutSolidSphere(k, x, y);
}

//Draws and rotates the solar system.
void createSolarsystem() {
    glRotatef(v, -0.1, 1, 0.2);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
    createPlanet(2.5, 12, 12, 0);   //creates the sun (a bit bigger than the planets, texture 0)
    glTranslatef(0.0, 0.0, 6.0);
    createPlanet(1.0, 10, 10, 1);   //creates the earth (texture 1, 40% of the sun's size)
    glTranslatef(0.0, 0.0, -11.0);
    createPlanet(1.0, 12, 12, 2);   //creates mars (texture 2, the same size as the earth)
}

//display function
void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glRotatef(angleX, 1.0, 0.0, 0.0);
    glRotatef(angleY, 0.0, 1.0, 0.0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
    createSolarsystem();   //creates the solar system
    glPopMatrix();
    glutSwapBuffers();
}

If any of you have any ideas about how I could rotate the planets' surfaces and the moons, that would be great. Thanks. =)

##### Share on other sites

You have your sun at the center of the solar system and you gave the earth a matrix that makes it rotate with time. You also have a similar matrix for the moon, but it's relative to itself (which you don't want), so to do it relatively you just multiply the matrices together: (earth*moon) = moonabsolutematrix; this is the matrix that you draw the moon with. Actually it's not a straight matrix multiplication, because the orientation of the earth does not affect the orientation of the moon, so in this planet example you want:

vector3 moonDistance(100.0f, 0.0f, 0.0f); // However far away the moon is from the earth
vector3 absoluteMoonPosition = earthmatrix.transform(moonDistance); // Transform the moon position into world coordinates
vector3 moonRotation = what you want (constant speed rotation)
combine the position and rotation here into a single matrix for the moon

Edit: I hope you can understand this, maybe my description isn't as clear as it should be. Basically you need to multiply vectors by matrices. I'm not very experienced with opengl so I can't post code.

##### Share on other sites

Thanks. =) I'm not quite sure that I understand this.. I've failed the course in Linear Algebra both this autumn and the year before that, so that might be why.
Is there some way to automatically generate the matrix with the earth coordinates, so that I can use it in the formulas you described above, to create the moon matrix? Because all I did was to create the earth, then rotated it around the sun with glRotate - I never changed its position coordinates manually. Do you know if it is possible to do it by having a rotation around a planet inside the rotation around the sun instead of using matrix multiplication?

##### Share on other sites

I had a quick look at some opengl tutorials. The easiest way is to do:

glPushMatrix()
// draw Earth here
glPushMatrix()
glTranslatef(10.0f, 0.0f, 0.0f); // The distance from earth to moon
// Draw moon here
glPopMatrix();
glPopMatrix();

This method is fine if your planets are totally spherical/symmetrical. But if not you will see how the rotation of the moon changes with the earth's. I have no idea how to set the rotation not to be multiplied with the previous matrix in OpenGL yet :/

Wait...I think I've got it:

glPushMatrix();
glTranslate(100.0f, 0.0f, 0.0f); // The distance from sun to earth
glPushMatrix();
glTranslate(10.0f, 0.0f, 0.0f); // The distance from earth to moon
glRotatef(.....) // Your rotation for the moon (increment it each frame)
// Draw the moon here
glPopMatrix();
glRotate(....); // The rotation for the earth (increment it each frame)
// Draw the earth here
glPopMatrix();

##### Share on other sites

They are spheres... so that won't be any problem. :) However, I tried to do like you said:

glPushMatrix();
glTranslatef(0.0, 0.0, 6.0); //earth is 6 away from the sun
glPushMatrix();
glTranslatef(0.0, 0.0, -1.5); //moon is -1.5 away from the center of the earth
glRotatef((1.5 * v), -0.1, 0.0, 1); //moon rotation
createPlanet(0.3, 16, 16, 2); //creates the moon
glPopMatrix();
glRotatef((2 * v), -0.1, 0.1, 1); //earth rotation
createPlanet(1.0, 16, 16, 1); //create earth
glPopMatrix();

but it seems like there is still something wrong. The moon is rotating around the sun, just like the earth is doing, and the earth is not rotating either.. so I guess that way didn't work after all. :i

##### Share on other sites

I managed to make it work now. =) It wasn't the code above which was the big problem, but that I had used both glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP); and glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP); earlier in the code - if I only use glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP), the textures rotate with the planets (although they look a bit worse, but you can barely see that because of the shadows).
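For reference, here is a minimal consolidated sketch of the push/rotate/translate/pop hierarchy the thread converges on, written against the same legacy fixed-function OpenGL API used above. The sun-to-earth and earth-to-moon distances and the createPlanet() helper are taken from the posts; the function name drawEarthAndMoon and the three angle parameters are my own illustrative additions and are assumed to be updated elsewhere once per frame, so treat this as a sketch rather than a drop-in fix.

```cpp
#include <GL/glut.h>

// From the original post; assumed to be defined elsewhere in the project.
void createPlanet(GLfloat k, int x, int y, int tex);

// Sketch of the orbit + spin hierarchy. Call it with the sun already drawn at
// the origin; earthOrbitAngle, earthSpinAngle and moonOrbitAngle are assumed
// to be incremented elsewhere each frame.
void drawEarthAndMoon(float earthOrbitAngle, float earthSpinAngle, float moonOrbitAngle)
{
    glPushMatrix();
    glRotatef(earthOrbitAngle, 0.0f, 1.0f, 0.0f);   // earth's orbit around the sun
    glTranslatef(0.0f, 0.0f, 6.0f);                 // sun -> earth distance (from the post)

    glPushMatrix();                                 // moon branch inherits the earth's position
    glRotatef(moonOrbitAngle, 0.0f, 1.0f, 0.0f);    // moon's orbit around the earth
    glTranslatef(0.0f, 0.0f, -1.5f);                // earth -> moon distance (from the post)
    createPlanet(0.3, 16, 16, 2);                   // draw the moon
    glPopMatrix();

    glRotatef(earthSpinAngle, 0.0f, 1.0f, 0.0f);    // earth's axial spin, applied after the moon
    createPlanet(1.0, 16, 16, 1);                   // branch so it does not drag the moon along
    glPopMatrix();
}
```

The key point is the ordering: the spin rotation comes after the moon's push/pop block, so it only affects the earth's own geometry, while both bodies still share the orbit transform around the sun.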
https://webmasters.stackexchange.com/questions/114666/rules-for-registering-a-country-code-domain-id
# Rules for registering a country code domain (.id) I want to register a country code .id (Indonesia) domain for a personal website. I do not live in Indonesia, and do not intend to do business in Indonesia. Is this acceptable; does anyone have any experience of country level domain registration for personal purposes or any recommended registrars? ## migrated from webapps.stackexchange.comApr 23 '18 at 22:35 This question came from our site for power users of web applications. • I suggest you edit your question title to be specific for Indonesia, because that's what you're asking for. The rules defer for each TLD. – arieljannai Apr 23 '18 at 16:49 ## TL;DR: It seems to be allowed. Note: The rules differ between countries. So it answer your question regards the Indonesian TLD, but not in general. Although in the Wikipedia article of .id TLD it says that, I think it's outdated. Registration restrictions Indonesian presence required; various restrictions specific to different subdomains From the Wikipedia article List of Internet top-level domains, about the .id TLD: Restricted to Indonesian companies (co.id), organisations (or.id), academic (ac.id & sch.id) and citizens (biz.id, my.id & web.id). Second-level domains are becoming available now and opened to general registration on 17 August 2014. And quoting from this source, from 2013: On Indonesia’s independence day next year, August 17, ‘.id’ domains will be available to the general public, first come first served. The new domain ending will cost IDR 500,000 (\$41) per year. ## Where it can be bought? From a quick look, it seems that those websites sell them: Like @arieljannai says the answer depends on the TLD, so here I propose another way to look at it, that you could apply to other cases. So same "TL;DR" but different explanations. If you look at the IANA database of TLDs, which is the only authoritative one, you come there for .ID: https://www.iana.org/domains/root/db/id.html You will then find who is the registry for this TLD. Only the registry of the TLD is authoritative on rules governing everything concerning this TLD. In that case it is: https://pandi.id Now, depending on the case, you may find registry websites more or less organized, up to date, with relevant content and hopefully with English translation. So in this case, it does not seem to bad and if you browse it a little you should arrive at: That should be the registry policy but at least in English that is not very useful as a page. Maybe a translation missing? But in fact almost all pages in English are empty of content. Not a good sign... If I go to the Indonesian version of "Registration Requirements" and do some online translation I get this as constraints for .ID (note that there are other subdomains of .ID available with probably other rules and more constraints): • It is intended for individuals, individuals, citizens of Indonesia, foreign citizens, or legal entities. • If the Applicant is a State Implementing Agency, then the registration of the Domain Name shall follow the Minister's Decree on Communication and Informatics. So from this quick look I would say it can be reserved by anyone. But the devils may be in the details. I advise to go to some trusted registrar and see if they have information on that TLD to make sure you can buy it. I can not and will not give you specific names of registrar as that would be obviously personal biaised opinion. You should start with the companies you already use if you already buy domain names. 
Now if you decided to buy such a domain name, you need to find a registrar. Again the registry website should probably have a page on that, here it happens to be: https://pandi.id/en/registrar/list-of-registrars/ If you find other companies selling it, they are probably resellers of some of these registrars. It is not bad per se, and can make even more sense in some cases, depending on your constraints, but it also means that there is one more provider in the path between you and your domain. All registrars seem Indonesian ones (or the registry web page is awfully out of date for both languages), the registry may put some constraints on that. This seems indeed to be what point 4 of the FAQ says, again translated: PANDI has selected Indonesian companies interested in performing registrar functions. There are twelve companies that qualify to be PANDI registrars. You can register a domain name on the twelve companies. Sidenote: note all TLDs today work in the same registry/registrar split as known in gTLD. In some TLDs, you may not have registrars at all, and in others you may have registrars but the registry may also sell domain names directly to end clients. Also remember one specific things that is often forgotten, as obvious as it may be: when you register a ccTLD you are automatically being put under this country rules, for example on things like DNS censorship, sensitive words on so on. You are bound by those rules that can change and can impact you. It happened in the past for example in .LY And when you are buying a gTLD you are probably at least in part bound by US laws for multiple reasons. This is especially important if you decide to use some specific TLD to do some "domain games" by doing some "clever" naming. One another point: when you will get problems or questions on your domain name, you will go through your registrar, but sometimes you may deal with the registry directly, especially for disputes. You have to realize then, again as obvious as it may be, that you may be required to speak the country language to get yourself understood by the registry and understand them, and you will of course be also bound by their hours of operation (like for urgent matters they may be closed on their timezone when you need them).
https://www.techwhiff.com/learn/what-does-is-mean-that-emissions-trading-has/2753
# What does it mean that "emissions trading has propertitized pollution"?

###### Question: What does it mean that "emissions trading has propertitized pollution"?
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=8&journalID=18123&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
for Journals by Title or ISSN for Articles by Keywords help Subjects -> MATHEMATICS (Total: 879 journals)     - APPLIED MATHEMATICS (71 journals)    - GEOMETRY AND TOPOLOGY (19 journals)    - MATHEMATICS (651 journals)    - MATHEMATICS (GENERAL) (42 journals)    - NUMERICAL ANALYSIS (19 journals)    - PROBABILITIES AND MATH STATISTICS (77 journals) MATHEMATICS (651 journals)                  1 2 3 4 | Last 1 2 3 4 | Last Acta Applicandae Mathematicae   [SJR: 0.624]   [H-I: 34]   [1 followers]  Follow         Hybrid journal (It can contain Open Access articles)    ISSN (Print) 0167-8019 - ISSN (Online) 1572-9036    Published by Springer-Verlag  [2353 journals] • A Global Existence Result for the Anisotropic Rotating Magnetohydrodynamical Systems • Authors: Van-Sang Ngo Pages: 1 - 42 Abstract: Abstract In this article, we study an anisotropic rotating system arising in magnetohydrodynamics (MHD) in the whole space $$\mathbb{R}^{3}$$ , in the case where there are no diffusivity in the vertical direction and a vanishing diffusivity in the horizontal direction (when the rotation goes to infinity). We first prove the local existence and uniqueness of a strong solution and then, using Strichartz-type estimates, we prove that this solution exists globally in time for large initial data, when the rotation is fast enough. PubDate: 2017-08-01 DOI: 10.1007/s10440-016-0092-z Issue No: Vol. 150, No. 1 (2017) • Global Existence of a Weak Solution for a Model in Radiation Magnetohydrodynamics • Authors: Bernard Ducomet; Marek Kobera; Šárka Nečasová Pages: 43 - 65 Abstract: Abstract We consider a simplified model based on the Navier-Stokes-Fourier system coupled to a transport equation and the Maxwell system, proposed to describe radiative flows in stars. We establish global-in-time existence for the associated initial-boundary value problem in the framework of weak solutions. PubDate: 2017-08-01 DOI: 10.1007/s10440-016-0093-y Issue No: Vol. 150, No. 1 (2017) • Global Existence and the Optimal Decay Rates for the Three Dimensional Compressible Nematic Liquid Crystal Flow • Authors: Fuyi Xu; Xinguang Zhang; Yonghong Wu; Lishan Liu Pages: 67 - 80 Abstract: Abstract The present paper is dedicated to the study of the Cauchy problems for the three-dimensional compressible nematic liquid crystal flow. We obtain the global existence and the optimal decay rates of smooth solutions to the system under the condition that the initial data in lower regular spaces are close to the constant equilibrium state. Our main method is based on the spectral analysis and the smooth effect of dissipative operator. PubDate: 2017-08-01 DOI: 10.1007/s10440-017-0094-5 Issue No: Vol. 150, No. 1 (2017) • Reproducing Pairs of Measurable Functions • Authors: J.-P. Antoine; M. Speckbacher; C. Trapani Pages: 81 - 101 Abstract: Abstract We analyze the notion of reproducing pair of weakly measurable functions, which generalizes that of continuous frame. We show, in particular, that each reproducing pair generates two Hilbert spaces, conjugate dual to each other. Several examples, both discrete and continuous, are presented. PubDate: 2017-08-01 DOI: 10.1007/s10440-017-0095-4 Issue No: Vol. 150, No. 1 (2017) • A Regularity Condition of 3d Axisymmetric Navier-Stokes Equations • Authors: Xinghong Pan Pages: 103 - 109 Abstract: Abstract In this paper, we study the regularity of 3d axisymmetric Navier-Stokes equations under a prior point assumption on $$v^{r}$$ or $$v^{z}$$ . 
That is, the weak solution of the 3d axisymmetric Navier-Stokes equations $$v$$ is smooth if $$rv^{r}\geq-1; \quad\mbox{or}\quad r\bigl v^{r}(t,x)\bigr \leq Cr^{\alpha}, \ \alpha\in(0,1];\quad\mbox{or} \quad r\bigl v^{z}(t,x)\bigr \leq Cr^{ \beta},\ \beta\in[0,1];$$ where $$r$$ is the distance from the point $$x$$ to the symmetric axis. PubDate: 2017-08-01 DOI: 10.1007/s10440-017-0096-3 Issue No: Vol. 150, No. 1 (2017) • On Stability of Solutions to Equations Describing Incompressible Heat-Conducting Motions Under Navier’s Boundary Conditions • Authors: Ewa Zadrzyńska; Wojciech M. Zaja̧czkowski Abstract: Abstract In this paper we prove existence of global strong-weak two-dimensional solutions to the Navier-Stokes and heat equations coupled by the external force dependent on temperature and the heat dissipation, respectively. The existence is proved in a bounded domain with the Navier boundary conditions for velocity and the Dirichlet boundary condition for temperature. Next, we prove existence of 3d global strong solutions via stability. PubDate: 2017-08-03 DOI: 10.1007/s10440-017-0116-3 • The Inhomogeneous Fermi-Pasta-Ulam Chain, a Case Study of the 1 : 2 : 3 $1:2:3$ Resonance • Authors: Roelof Bruggeman; Ferdinand Verhulst Abstract: Abstract A 4-particles chain with different masses represents a natural generalization of the classical Fermi-Pasta-Ulam chain. It is studied by identifying the mass ratios that produce prominent resonances. This is a technically complicated problem as we have to solve an inverse problem for the spectrum of the corresponding linearized equations of motion. In the case of such an inhomogeneous periodic chain with four particles each mass ratio determines a frequency ratio for the quadratic part of the Hamiltonian. Most prominent frequency ratios occur but not all. In general we find a one-dimensional variety of mass ratios for a given frequency ratio. A detailed study is presented of the resonance $$1:2:3$$ . A small cubic term added to the Hamiltonian leads to a dynamical behaviour that shows a difference between the case that two opposite masses are equal and a striking difference with the classical case of four equal masses. For two equal masses and two different ones the normalized system is integrable and chaotic behaviour is small-scale. In the transition to four different masses we find a Hamiltonian-Hopf bifurcation of one of the normal modes leading to complex instability and Shilnikov-Devaney bifurcation. The other families of short-periodic solutions can be localized from the normal forms together with their stability characteristics. For illustration we use action simplices and examples of behaviour with time. PubDate: 2017-08-02 DOI: 10.1007/s10440-017-0115-4 • A Variational Inequality Theory with Applications to P $P$ -Laplacian Elliptic Inequalities • Authors: Yi-rong Jiang; Nan-jing Huang; Donal O’Regan Abstract: Abstract The main purpose of this paper is to establish variational inequality theory in connection with demicontinuous $$\psi_{p}$$ -dissipative maps in reflexive smooth Banach spaces by considering the convergence of approximants. As an application of this variational inequality theory, existence, uniqueness and convergence of approximants of positive weak solution for $$p$$ -Laplacian elliptic inequalities are obtained under suitable conditions. 
PubDate: 2017-08-01 DOI: 10.1007/s10440-017-0118-1 • Busemann Functions and Barrier Functions • Authors: Xiaojun Cui; Jian Cheng Abstract: Abstract On a smooth, non-compact, complete, boundaryless, connected Riemannian manifold there are two kinds of functions: Busemann functions with respect to rays and barrier functions with respect to lines (if there exists at least one). In this paper we collect some known properties on Busemann functions and introduce some new fundamental properties on barrier functions. Based on these properties of barrier functions, we could define some relations on the set of lines and thus classify them. With the equivalence relation we introduced, we present a generalization of a rigidity conjecture. PubDate: 2017-07-31 DOI: 10.1007/s10440-017-0114-5 • A Heroin Epidemic Model: Very General Non Linear Incidence, Treat-Age, and Global Stability • Authors: Salih Djilali; Tarik Mohammed Touaoula; Sofiane El-Hadi Miri Abstract: Abstract We consider an age structured heroin epidemic model, in a population divided into three sub-populations: $$S$$ the susceptible individuals, $$U_{1}$$ the drug users and $$U_{2}$$ the drug users under treatment, interacting as follows: $$\left \{ \textstyle\begin{array}{l} S'=A-\mu S-F ( S,U_{1} ) , \\ U_{1}'=F ( S,U_{1} ) - ( \mu +\delta_{1}+p ) U_{1}+\int_{0}^{\infty }k ( a ) U_{2} ( t,a ) da, \\ \frac{\partial U_{2}}{\partial t}+\frac{\partial U_{2}}{\partial a}=- ( \mu +\delta_{2}+k ( a ) ) U_{2}. \end{array}\displaystyle \right .$$ Our main contribution consists in considering a nonlinear incidence function $$F(S,U_{1})$$ in its very general form. Global dynamics of the obtained problem is analyzed. PubDate: 2017-07-31 DOI: 10.1007/s10440-017-0117-2 • On Weighted Average Interpolation with Cardinal Splines • Authors: J. López-Salazar; G. Pérez-Villalón Abstract: Abstract Given a sequence of data $$\{ y_{n} \} _{n \in \mathbb{Z}}$$ with polynomial growth and an odd number $$d$$ , Schoenberg proved that there exists a unique cardinal spline $$f$$ of degree $$d$$ with polynomial growth such that $$f ( n ) =y_{n}$$ for all $$n\in \mathbb{Z}$$ . In this work, we show that this result also holds if we consider weighted average data $$f\ast h ( n ) =y_{n}$$ , whenever the average function $$h$$ satisfies some light conditions. In particular, the interpolation result is valid if we consider cell-average data $$\int_{n-a}^{n+a}f ( x ) dx=y_{n}$$ with $$0< a\leq 1/2$$ . The case of even degree $$d$$ is also studied. PubDate: 2017-07-28 DOI: 10.1007/s10440-017-0112-7 • Existence and Asymptotic Behavior of Solutions for a Predator-Prey System with a Nonlinear Growth Rate • Authors: Wenbin Yang Abstract: Abstract The paper is concerned with a predator-prey diffusive system subject to homogeneous Neumann boundary conditions, where the growth rate $$(\frac{\alpha}{1+\beta v})$$ of the predator population is nonlinear. We study the existence of equilibrium solutions and the long-term behavior of the solutions. The main tools used here include the super-sub solution method, the bifurcation theory and linearization method. PubDate: 2017-07-27 DOI: 10.1007/s10440-017-0111-8 • A New Construction of Boundary Interpolating Wavelets for Fourth Order Problems • Authors: Silvia Bertoluzza; Valérie Perrier Abstract: Abstract In this article we introduce a new mixed Lagrange–Hermite interpolating wavelet family on the interval, to deal with two types (Dirichlet and Neumann) of boundary conditions. 
As this construction is a slight modification of the interpolating wavelets on the interval of Donoho, it leads to fast decomposition, error estimates and norm equivalences. This new basis is then used in adaptive wavelet collocation schemes for the solution of one dimensional fourth order problems. Numerical tests conducted on the 1D Euler–Bernoulli beam problem, show the efficiency of the method. PubDate: 2017-07-13 DOI: 10.1007/s10440-017-0110-9 • Exponential Stability and Periodic Solutions of Impulsive Neural Network Models with Piecewise Constant Argument • Authors: Kuo-Shou Chiu Abstract: Abstract In this paper we introduce an impulsive cellular neural network models with piecewise alternately advanced and retarded argument. The model with the advanced argument is system with strong anticipation. Some sufficient conditions are established for the existence and global exponential stability of a unique periodic solution. The approaches are based on employing Banach’s fixed point theorem and a new integral inequality of Gronwall type with impulses and deviating arguments. The criteria given are easily verifiable, possess many adjustable parameters, and depend on impulses and piecewise constant argument deviations, which provides flexibility for the design and analysis of cellular neural network models. Several numerical examples and simulations are also given to show the feasibility and effectiveness of our results. PubDate: 2017-06-16 DOI: 10.1007/s10440-017-0108-3 • Towards a Comprehensive Stability Theory for Feynman’s Operational Calculus: The Time-Dependent Setting • Authors: Lance Nielsen Abstract: We establish a comprehensive stability theory for Feynman’s operational calculus (informally, the forming of functions of several noncommuting operators) in the time-dependent setting. Indeed, the main theorem, Theorem 2, contains many of the current stability theorems for the operational calculus and allows the stability theory to be significantly extended. The assumptions needed for the main theorem, Theorem 2, are rather mild and fit in nicely with the current abstract theory of the operational calculus in the time-dependent setting. Moreover, Theorem 2 allows the use of arbitrary time-ordering measures, as long as the discrete parts of these measures are finitely supported. PubDate: 2017-06-16 DOI: 10.1007/s10440-017-0109-2 • Existence of Multi-peak Solutions for a Class of Quasilinear Problems in Orlicz-Sobolev Spaces • Authors: Claudianor O. Alves; Ailton R. da Silva Abstract: Abstract The aim of this work is to establish the existence of multi-peak solutions for the following class of quasilinear problems $$- \mbox{div} \bigl(\epsilon^{2}\phi\bigl(\epsilon \nabla u \bigr)\nabla u \bigr) + V(x)\phi\bigl(\vert u\vert\bigr)u = f(u)\quad\mbox{in } \mathbb{R}^{N},$$ where $$\epsilon$$ is a positive parameter, $$N\geq2$$ , $$V$$ , $$f$$ are continuous functions satisfying some technical conditions and $$\phi$$ is a $$C^{1}$$ -function. PubDate: 2017-06-14 DOI: 10.1007/s10440-017-0107-4 • On the Euler-Korteweg System with Free Boundary Condition • Authors: Tong Tang; Hongjun Gao Abstract: Abstract In this paper, we study the compressible Euler-Korteweg equations with free boundary condition in vacuum. Under physically assumptions of positive density and pressure, we introduce some physically quantities to show that the spreading diameter of regions grows linearly in time. 
This is an interesting result as one would expect that the capillary forces would prevent the boundary from spreading. Moreover, we construct a spherically symmetric global solution to support our theorem, followed by Sideris (J. Differ. Equ. 257:1–14, 2014). PubDate: 2017-06-05 DOI: 10.1007/s10440-017-0097-2 • Global Existence and Finite Time Blow-up for a Reaction-Diffusion System with Three Components • Authors: Huiling Li; Yang Zhang Abstract: Abstract This paper concerns global existence and finite time blow-up behavior of positive solutions for a nonlinear reaction-diffusion system with different diffusion coefficients. By use of algebraic matrix theory and modern analytical theory, we extend results of Wang (Z. Angew. Math. Phys. 51:160–167, 2000) to a more general system. Furthermore, we give a complete answer to the open problem which was brought forward in Wang (Z. Angew. Math. Phys. 51:160–167, 2000). PubDate: 2017-06-05 DOI: 10.1007/s10440-017-0105-6 • Error Bounds for the Large-Argument Asymptotic Expansions of the Hankel and Bessel Functions • Authors: Gergő Nemes Abstract: Abstract In this paper, we reconsider the large-argument asymptotic expansions of the Hankel, Bessel and modified Bessel functions and their derivatives. New integral representations for the remainder terms of these asymptotic expansions are found and used to obtain sharp and realistic error bounds. We also give re-expansions for these remainder terms and provide their error estimates. A detailed discussion on the sharpness of our error bounds and their relation to other results in the literature is given. The techniques used in this paper should also generalize to asymptotic expansions which arise from an application of the method of steepest descents. PubDate: 2017-05-17 DOI: 10.1007/s10440-017-0099-0 • Positive Solution of a Nonlinear Parabolic System Arising in Grain Drying • Authors: A. Ambrazevičius; V. Skakauskas Abstract: Abstract Coupled system of nonlinear parabolic equations for grain drying is proposed and the existence and uniqueness theorem of classical solutions is proved by using the upper and lower solutions technique. The long-time behaviour of the solution is also investigated. PubDate: 2017-05-15 DOI: 10.1007/s10440-017-0098-1 JournalTOCs School of Mathematical and Computer Sciences Heriot-Watt University Edinburgh, EH14 4AS, UK Email: [email protected] Tel: +00 44 (0)131 4513762 Fax: +00 44 (0)131 4513327 Home (Search) Subjects A-Z Publishers A-Z Customise APIs
http://ecommons.library.cornell.edu/handle/1813/6464
Please use this identifier to cite or link to this item: http://hdl.handle.net/1813/6464

Title: A Triangular Processor Array for Computing the Singular Value Decomposition
Authors: Luk, Franklin T.
Keywords: computer science; technical report
Issue Date: Jul-1984
Publisher: Cornell University
Citation: http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR84-625
Abstract: A triangular processor array for computing a singular value decomposition (SVD) of an $m \times n$ ($m \geq n$) matrix is proposed. A Jacobi-type algorithm is used to first triangularize the given matrix and then diagonalize the resultant triangular form. The requirements are $O(m)$ time and $\frac{1}{4} n^{2} + O(n)$ processors.
URI: http://hdl.handle.net/1813/6464
Appears in Collections: Computer Science Technical Reports
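As a reminder of what "Jacobi-type" means here, the generic 2x2 building block of such algorithms (shown below for a symmetric block; this is an illustration of the general technique, not Luk's specific triangular-array scheduling) is a plane rotation chosen to zero one off-diagonal pair:

```latex
% Generic 2x2 Jacobi step: pick the rotation angle that annihilates the
% off-diagonal entry. Illustration only, not the paper's systolic scheme.
J(\theta)^{T}
\begin{pmatrix} a & b \\ b & c \end{pmatrix}
J(\theta)
=
\begin{pmatrix} d_{1} & 0 \\ 0 & d_{2} \end{pmatrix},
\qquad
J(\theta)=\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
\qquad
\tan 2\theta = \frac{2b}{a-c}.
```

For an SVD one applies analogous but generally different rotations from the left and the right to each 2x2 block, sweeping over all index pairs until the off-diagonal part is negligible.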
https://stats.stackexchange.com/questions/448064/erroneous-expression-for-metropolis-hastings-acceptance-ratio-in-a-paper
# Erroneous expression for Metropolis-Hastings acceptance ratio in a paper Let • $$(E,\mathcal E)$$ be a measure space; • $$\rho:E\to[0,\infty)$$ be $$\mathcal E$$-measurable, $$p:E^2\to[0,\infty)$$ be $$\mathcal E^{\otimes2}$$-measurable, $$r(x,y):=\left.\begin{cases}\displaystyle\frac{\rho(y)p(y,x)}{\rho(x)p(x,y)}&\text{, if }\rho(x)p(x,y)>0\\1&\text{, otherwise}\end{cases}\right\}\;\;\;\text{for }x,y\in E$$ and $$\overline\rho(x,y):=\left.\begin{cases}\displaystyle\frac{\rho(y)}{p(x,y)}&\text{, if }\rho(x)p(x,y)>0\\0&\text{, otherwise}\end{cases}\right\}\;\;\;\text{for }x,y\in E.$$ Assuming $$\forall y\in E:\left(p(y)>0\Rightarrow\forall x\in G:q(x,y)>0\right),\tag1$$ are we able to show that $$\tilde r(x,y):=\left.\begin{cases}\displaystyle\frac{\overline\rho(x,y)}{\overline\rho(y,x)}&\text{, if }\overline\rho(y,x)>0\\1&\text{, otherwise}\end{cases}\right\}=r(x,y)\tag2$$ for all $$x,y\in E$$? This claim is made in this paper on page 8.$$^1$$ However, it should hold $$\tilde r(x,y)=\left.\begin{cases}\displaystyle\frac{\rho(y)p(y,x)}{\rho(x)p(x,y)}&\text{, if }\rho(x)\rho(y)>0\\1&\text{, otherwise}\end{cases}\right\}\;\;\;\text{for all }x,y\in E\tag3$$ and hence, for example, if $$x,y\in E$$ with $$\rho(x)p(x,y)>0$$ and $$\rho(y)=0$$, then $$r(x,y)=0$$, but $$\tilde r(x,y)=1$$. Am I missing something? If not, can we fix this? $$^1$$ They actually claim that $$\left.\begin{cases}\displaystyle\frac{\overline\rho(y,x)}{\overline\rho(x,y)}&\text{, if }\overline\rho(x,y)>0\\1&\text{, otherwise}\end{cases}\right\}=r(x,y)\;\;\;\text{for all }x,y\in E,\tag4$$ but since this is obviously wrong, I suspected that they mean $$\tilde r$$ instead. • I do not think this is of importance: while the Markov chain remains in the exterior of the support of $\rho$ it is free to do whatever it wants. The sooner it leaves this transient region the better. Jan 25 at 7:01 There is an error in the paper, indeed. I think you state that the paper is wrong with the following "claim": There's really no proof or derivation of the expression. All they did was to plug the definition of $$\bar \rho$$ on the same page into the definition of $$r(x,y)$$ on p.5. Unfortunately, while doing so they messed up. Here's why. Both definitions are in your question, first two equations. I can re-write $$r(x,y)$$ as follows: $$\frac{\rho(y)}{p(x,y)}\frac 1 {\left(\frac{\rho(x)}{p(y,x)}\right)}=\bar \rho(x,y)\frac{1}{\bar \rho(y,x)}$$ I don't think this error impacts the rest of the paper though, because it's not used anywhere further in the text explicitly. • Thank you for your answer. Your last displayed equation is my equation $(2)$. My problem is that I don't understand why this equation holds, since we not have the equivalence $\overline\rho(y,x)>0\Leftrightarrow\rho(x)p(x,y)>0$. Feb 11 '20 at 7:41 • @0xbadf00d, $\rho(x)>0\implies p(y,x)>0\implies\bar r(y,x)>0$, see the statement on Assumption 1 on p.8 of the paper Feb 12 '20 at 15:09 • @Aksakal Yes, but this yields only one implication. What about the other direction? Feb 13 '20 at 5:06 • @0xbadf00d, why do you need it in other direction? as I wrote, this result is not important for the paper anyways Feb 13 '20 at 15:32 • @Aksakal We need the other direction since otherwise the claimed equality only holds on a subset. (I know that this result is not important for the paper, but it's important in my application.) Feb 13 '20 at 17:04
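To see the convention concretely in code, here is a small sketch of the acceptance-ratio computation using exactly the definition of r(x, y) given at the top of the question (ratio of ρ(y)p(y, x) to ρ(x)p(x, y), with the value 1 when the denominator vanishes). The type and function names are placeholders of my own, not taken from the paper or any particular library.

```cpp
#include <functional>

// Illustration only: the acceptance ratio r(x, y) with the exact convention
// from the question (r = 1 whenever rho(x) * p(x, y) = 0). "State" and the
// std::function parameters are placeholders, not tied to the paper or a library.
template <typename State>
double mh_ratio(const std::function<double(const State&)>& rho,              // target density
                const std::function<double(const State&, const State&)>& p,  // proposal density p(from, to)
                const State& x, const State& y)
{
    const double denom = rho(x) * p(x, y);
    if (!(denom > 0.0)) return 1.0;       // the "otherwise" branch of the definition of r
    return rho(y) * p(y, x) / denom;      // equals 0 when rho(y) = 0, which is exactly the
}                                         // case where the question says r and r-tilde differ
// Acceptance probability for the move x -> y: min(1, mh_ratio(rho, p, x, y)).
```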
https://en.wikipedia.org/wiki/Standard_Model_(mathematical_formulation)
# Standard Model (mathematical formulation) For a less mathematical description, see Standard Model. Standard Model of Particle Physics. The diagram shows the elementary particles of the Standard Model (the Higgs boson, the three generations of quarks and leptons, and the gauge bosons), including their names, masses, spins, charges, chiralities, and interactions with the strong, weak and electromagnetic forces. It also depicts the crucial role of the Higgs boson in electroweak symmetry breaking, and shows how the properties of the various particles differ in the (high-energy) symmetric phase (top) and the (low-energy) broken-symmetry phase (bottom). The Standard Model of Particle Physics: More Schematic Depiction This article describes the mathematics of the Standard Model of particle physics, a gauge quantum field theory containing the internal symmetries of the unitary product group SU(3) × SU(2) × U(1). The theory is commonly viewed as containing the fundamental set of particles – the leptons, quarks, gauge bosons and the Higgs particle. The Standard Model is renormalizable and mathematically self-consistent,[1] however despite having huge and continued successes in providing experimental predictions it does leave some unexplained phenomena. In particular, although the physics of special relativity is incorporated, general relativity is not, and the Standard Model will fail at energies or distances where the graviton is expected to emerge. Therefore, in a modern field theory context, it is seen as an effective field theory. This article requires some background in physics and mathematics, but is designed as both an introduction and a reference. ## Quantum field theory The pattern of weak isospin T3, weak hypercharge YW, and color charge of all known elementary particles, rotated by the weak mixing angle to show electric charge Q, roughly along the vertical. The neutral Higgs field (gray square) breaks the electroweak symmetry and interacts with other particles to give them mass. The standard model is a quantum field theory, meaning its fundamental objects are quantum fields which are defined at all points in spacetime. These fields are • the fermion fields, ψ, which account for "matter particles"; • the electroweak boson fields ${\displaystyle W_{1},W_{2},W_{3}}$, and B; • the gluon field, Ga; and • the Higgs field, φ. That these are quantum rather than classical fields has the mathematical consequence that they are operator-valued. In particular, values of the fields generally do not commute. As operators, they act upon the quantum state (ket vector). The dynamics of the quantum state and the fundamental fields are determined by the Lagrangian density ${\displaystyle {\mathcal {L}}}$ (usually for short just called the Lagrangian). This plays a role similar to that of the Schrödinger equation in non-relativistic quantum mechanics, but a Lagrangian is not an equation of motion – rather, it is a polynomial function of the fields and their derivatives, and used with the principle of least action. While it would be possible to derive a system of differential equations governing the fields from the Langrangian, it is more common to use other techniques to compute with quantum field theories. The standard model is furthermore a gauge theory, which means there are degrees of freedom in the mathematical formalism which do not correspond to changes in the physical state. 
The gauge group of the standard model is SU(3) × SU(2) × U(1), where U(1) acts on B and φ, SU(2) acts on W and φ, and SU(3) acts on G. The fermion field ψ also transforms under these symmetries, although all of them leave some parts of it unchanged. ### The role of the quantum fields In classical mechanics, the state of a system can usually be captured by a small set of variables, and the dynamics of the system is thus determined by the time evolution of these variables. In classical field theory, the field is part of the state of the system, so in order to describe it completely one effectively introduces separate variables for every point in spacetime (even though there are many restrictions on how the values of the field "variables" may vary from point to point, for example in the form of field equations involving partial derivatives of the fields). In quantum mechanics, the classical variables are turned into operators, but these do not capture the state of the system, which is instead encoded into a wavefunction ψ or more abstract ket vector. If ψ is an eigenstate with respect to an operator P, then = λψ for the corresponding eigenvalue λ, and hence letting an operator P act on ψ is analogous to multiplying ψ by the value of the classical variable to which P corresponds. By extension, a classical formula where all variables have been replaced by the corresponding operators will behave like an operator which, when it acts upon the state of the system, multiplies it by the analogue of the quantity that the classical formula would compute. The formula as such does however not contain any information about the state of the system; it would evaluate to the same operator regardless of what state the system is in. Quantum fields relate to quantum mechanics as classical fields do to classical mechanics, i.e., there is a separate operator for every point in spacetime, and these operators do not carry any information about the state of the system; they are merely used to exhibit some aspect of the state, at the point to which they belong. In particular, the quantum fields are not wavefunctions, even though the equations which govern their time evolution may be deceptively similar to those of the corresponding wavefunction in a semiclassical formulation. There is no variation in strength of the fields between different points in spacetime; the variation that happens is rather one of phase factors. ### Vectors, scalars, and spinors Mathematically it may look as though all of the fields are vector-valued (in addition to being operator-valued), since they all have several components, can be multiplied by matrices, etc., but physicists assign a more specific physical meaning to the word: a vector is something which transforms like a four-vector under Lorentz transformations, and a scalar is something which is invariant under Lorentz transformations. The B, Wj, and Ga fields are all vectors in this sense, so the corresponding particles are said to be vector bosons. The Higgs field φ is a scalar. The fermion field ψ does transform under Lorentz transformations, but not like a vector should; rotations will only turn it by half the angle a proper vector should. Therefore, these constitute a third kind of quantity, which is known as a spinor. It is common to make use of abstract index notation for the vector fields, in which case the vector fields all come with a Lorentzian index μ, like so: ${\displaystyle B^{\mu },W_{j}^{\mu }}$, and ${\displaystyle G_{a}^{\mu }}$. 
If abstract index notation is used also for spinors then these will carry a spinorial index and the Dirac gamma will carry one Lorentzian and two spinorian indices, but it is more common to regard spinors as column matrices and the Dirac gamma γμ as a matrix which additionally carries a Lorentzian index. The Feynman slash notation can be used to turn a vector field into a linear operator on spinors, like so: ${\displaystyle {\not }B=\gamma ^{\mu }B_{\mu }}$; this may involve raising and lowering indices. ## Alternative presentations of the fields Connections denoting which particles interact with each other. As is common in quantum theory, there is more than one way to look at things. At first the basic fields given above may not seem to correspond well with the "fundamental particles" in the chart above, but there are several alternative presentations which, in particular contexts, may be more appropriate than those that are given above. ### Fermions Rather than having one fermion field ψ, it can be split up into separate components for each type of particle. This mirrors the historical evolution of quantum field theory, since the electron component ψe (describing the electron and its antiparticle the positron) is then the original ψ field of quantum electrodynamics, which was later accompanied by ψμ and ψτ fields for the muon and tauon respectively (and their antiparticles). Electroweak theory added ${\displaystyle \psi _{\nu _{\mathrm {e} }},\psi _{\nu _{\mu }}}$, and ${\displaystyle \psi _{\nu _{\tau }}}$ for the corresponding neutrinos, and the quarks add still further components. In order to be four-spinors like the electron and other lepton components, there must be one quark component for every combination of flavour and colour, bringing the total to 24 (3 for charged leptons, 3 for neutrinos, and 2·3·3 = 18 for quarks). Each of these is a four component bispinor, for a total of 96 complex-valued components for the fermion field. An important definition is the barred fermion field ${\displaystyle {\bar {\psi }}}$ is defined to be ${\displaystyle \psi ^{\dagger }\gamma ^{0}}$, where ${\displaystyle \dagger }$ denotes the Hermitian adjoint and γ0 is the zeroth gamma matrix. If ψ is thought of as an n × 1 matrix then ${\displaystyle {\bar {\psi }}}$ should be thought of as a 1 × n matrix. #### A chiral theory An independent decomposition of ψ is that into chirality components: "Left" chirality:  ${\displaystyle \psi ^{L}={\frac {1}{2}}(1-\gamma _{5})\psi }$ "Right" chirality:  ${\displaystyle \psi ^{R}={\frac {1}{2}}(1+\gamma _{5})\psi }$ where ${\displaystyle \gamma _{5}}$ is the fifth gamma matrix. This is very important in the Standard Model because left and right chirality components are treated differently by the gauge interactions. In particular, under weak isospin SU(2) transformations the left-handed particles are weak-isospin doublets, whereas the right-handed are singlets – i.e. the weak isospin of ψR is zero. Put more simply, the weak interaction could rotate e.g. a left-handed electron into a left-handed neutrino (with emission of a W), but could not do so with the same right-handed particles. As an aside, the right-handed neutrino originally did not exist in the standard model – but the discovery of neutrino oscillation implies that neutrinos must have mass, and since chirality can change during the propagation of a massive particle, right-handed neutrinos must exist in reality. 
This does not, however, change the (experimentally-proven) chiral nature of the weak interaction. Furthermore, U(1) acts differently on ${\displaystyle \psi _{\mathrm {e} }^{L}}$ than on ${\displaystyle \psi _{\mathrm {e} }^{R}}$ (because they have different weak hypercharges).

#### Mass and interaction eigenstates

A distinction can thus be made between, for example, the mass and interaction eigenstates of the neutrino. The former is the state which propagates in free space, whereas the latter is the different state that participates in interactions. Which is the "fundamental" particle? For the neutrino, it is conventional to define the "flavour" (νe, νμ, or ντ) by the interaction eigenstate, whereas for the quarks we define the flavour (up, down, etc.) by the mass state. We can switch between these states using the CKM matrix for the quarks, or the PMNS matrix for the neutrinos (the charged leptons on the other hand are eigenstates of both mass and flavour).

As an aside, if a complex phase term exists within either of these matrices, it will give rise to direct CP violation, which could explain the dominance of matter over antimatter in our current universe. This has been proven for the CKM matrix, and is expected for the PMNS matrix.

#### Positive and negative energies

Finally, the quantum fields are sometimes decomposed into "positive" and "negative" energy parts: ψ = ψ+ + ψ-. This is not so common when a quantum field theory has been set up, but often features prominently in the process of quantizing a field theory.

### Bosons

Due to the Higgs mechanism, the electroweak boson fields ${\displaystyle W_{1},W_{2},W_{3}}$, and ${\displaystyle B}$ "mix" to create the states which are physically observable. To retain gauge invariance, the underlying fields must be massless, but the observable states can gain masses in the process. These states are:

The massive neutral (Z) boson: ${\displaystyle Z=\cos \theta _{W}W_{3}-\sin \theta _{W}B}$

The massless neutral boson: ${\displaystyle A=\sin \theta _{W}W_{3}+\cos \theta _{W}B}$

The massive charged W bosons: ${\displaystyle W^{\pm }={\frac {1}{\sqrt {2}}}\left(W_{1}\mp iW_{2}\right)}$

where θW is the Weinberg angle. The A field is the photon, which corresponds classically to the well-known electromagnetic four-potential – i.e. the electric and magnetic fields. The Z field actually contributes in every process the photon does, but due to its large mass, the contribution is usually negligible.

## Perturbative QFT and the interaction picture

Much of the qualitative description of the standard model in terms of "particles" and "forces" comes from the perturbative quantum field theory view of the model. In this view, the Lagrangian is decomposed as ${\displaystyle {\mathcal {L}}={\mathcal {L}}_{0}+{\mathcal {L}}_{\mathrm {I} }}$ into separate free field and interaction Lagrangians. The free fields describe particles in isolation, whereas processes involving several particles arise through interactions. The idea is that the state vector should only change when particles interact, meaning a free particle is one whose quantum state is constant. This corresponds to the interaction picture in quantum mechanics.

In the more common Schrödinger picture, even the states of free particles change over time: typically the phase changes at a rate which depends on their energy. In the alternative Heisenberg picture, state vectors are kept constant, at the price of having the operators (in particular the observables) be time-dependent.
The interaction picture constitutes an intermediate between the two, where some time dependence is placed in the operators (the quantum fields) and some in the state vector. In QFT, the former is called the free field part of the model, and the latter is called the interaction part. The free field model can be solved exactly, and then the solutions to the full model can be expressed as perturbations of the free field solutions, for example using the Dyson series. It should be observed that the decomposition into free fields and interactions is in principle arbitrary. For example, renormalization in QED modifies the mass of the free field electron to match that of a physical electron (with an electromagnetic field), and will in doing so add a term to the free field Lagrangian which must be cancelled by a counterterm in the interaction Lagrangian, that then shows up as a two-line vertex in the Feynman diagrams. This is also how the Higgs field is thought to give particles mass: the part of the interaction term which corresponds to the (nonzero) vacuum expectation value of the Higgs field is moved from the interaction to the free field Lagrangian, where it looks just like a mass term having nothing to do with Higgs. ### Free fields Under the usual free/interaction decomposition, which is suitable for low energies, the free fields obey the following equations: • The fermion field ψ satisfies the Dirac equation; ${\displaystyle (i\hbar {\not }\partial -m_{f}c)\psi _{f}=0}$ for each type ${\displaystyle f}$ of fermion. • The photon field A satisfies the wave equation ${\displaystyle \partial _{\mu }\partial ^{\mu }A^{\nu }=0}$. • The Higgs field φ satisfies the Klein–Gordon equation. • The weak interaction fields Z, W± also satisfy the Proca equation. These equations can be solved exactly. One usually does so by considering first solutions that are periodic with some period L along each spatial axis; later taking the limit: L → ∞ will lift this periodicity restriction. In the periodic case, the solution for a field F (any of the above) can be expressed as a Fourier series of the form ${\displaystyle F(x)=\beta \sum _{\mathbf {p} }\sum _{r}E_{\mathbf {p} }^{-{\frac {1}{2}}}\left(a_{r}(\mathbf {p} )u_{r}(\mathbf {p} )e^{-{\frac {ipx}{\hbar }}}+b_{r}^{\dagger }(\mathbf {p} )v_{r}(\mathbf {p} )e^{\frac {ipx}{\hbar }}\right)}$ where: • β is a normalization factor; for the fermion field ${\displaystyle \psi _{f}}$ it is ${\displaystyle {\sqrt {m_{f}c^{2}/V}}}$, where ${\displaystyle V=L^{3}}$ is the volume of the fundamental cell considered; for the photon field Aμ it is ${\displaystyle \hbar c/{\sqrt {2V}}}$. • The sum over p is over all momenta consistent with the period L, i.e., over all vectors ${\displaystyle {\frac {2\pi \hbar }{L}}(n_{1},n_{2},n_{3})}$ where ${\displaystyle n_{1},n_{2},n_{3}}$ are integers. • The sum over r covers other degrees of freedom specific for the field, such as polarization or spin; it usually comes out as a sum from 1 to 2 or from 1 to 3. • Ep is the relativistic energy for a momentum p quantum of the field, ${\displaystyle ={\sqrt {m^{2}c^{4}+c^{2}\mathbf {p} ^{2}}}}$ when the rest mass is m. • ar(p) and ${\displaystyle b_{r}^{\dagger }(\mathbf {p} )}$ are annihilation and creation respectively operators for "a-particles" and "b-particles" respectively of momentum p; "b-particles" are the antiparticles of "a-particles". Different fields have different "a-" and "b-particles". For some fields, a and b are the same. 
• ur(p) and vr(p) are non-operators which carry the vector or spinor aspects of the field (where relevant). • ${\displaystyle p=(E_{\mathbf {p} }/c,\mathbf {p} )}$ is the four-momentum for a quantum with momentum p. ${\displaystyle px=p_{\mu }x^{\mu }}$ denotes an inner product of four-vectors. In the limit L → ∞, the sum would turn into an integral with help from the V hidden inside β. The numeric value of β also depends on the normalization chosen for ${\displaystyle u_{r}(\mathbf {p} )}$ and ${\displaystyle v_{r}(\mathbf {p} )}$. Technically, ${\displaystyle a_{r}^{\dagger }(\mathbf {p} )}$ is the Hermitian adjoint of the operator ar(p) in the inner product space of ket vectors. The identification of ${\displaystyle a_{r}^{\dagger }(\mathbf {p} )}$ and ar(p) as creation and annihilation operators comes from comparing conserved quantities for a state before and after one of these have acted upon it. ${\displaystyle a_{r}^{\dagger }(\mathbf {p} )}$ can for example be seen to add one particle, because it will add 1 to the eigenvalue of the a-particle number operator, and the momentum of that particle ought to be p since the eigenvalue of the vector-valued momentum operator increases by that much. For these derivations, one starts out with expressions for the operators in terms of the quantum fields. That the operators with ${\displaystyle \dagger }$ are creation operators and the one without annihilation operators is a convention, imposed by the sign of the commutation relations postulated for them. An important step in preparation for calculating in perturbative quantum field theory is to separate the "operator" factors a and b above from their corresponding vector or spinor factors u and v. The vertices of Feynman graphs come from the way that u and v from different factors in the interaction Lagrangian fit together, whereas the edges come from the way that the as and bs must be moved around in order to put terms in the Dyson series on normal form. ### Interaction terms and the path integral approach The Lagrangian can also be derived without using creation and annihilation operators (the "canonical" formalism), by using a "path integral" approach, pioneered by Feynman building on the earlier work of Dirac. See e.g. Path integral formulation on Wikipedia or A. Zee's QFT in a nutshell. This is one possible way that the Feynman diagrams, which are pictorial representations of interaction terms, can be derived relatively easily. A quick derivation is indeed presented at the article on Feynman diagrams. ## Lagrangian formalism The above interactions show some basic interaction vertices – Feynman diagrams in the standard model are built from these vertices. Higgs boson interactions are however not shown, and neutrino oscillations are commonly added. The charge of the W bosons are dictated by the fermions they interact with. We can now give some more detail about the aforementioned free and interaction terms appearing in the Standard Model Lagrangian density. Any such term must be both gauge and reference-frame invariant, otherwise the laws of physics would depend on an arbitrary choice or the frame of an observer. Therefore, the global Poincaré symmetry, consisting of translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity must apply. The local SU(3) × SU(2) × U(1) gauge symmetry is the internal symmetry. 
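As a small numerical aside on the free-field mode expansion described above (a stand-alone sketch; the particle mass and the box size are arbitrary illustrative choices, not values from this article), the momenta allowed by the periodic box and their relativistic energies can be enumerated directly:

import itertools
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m = 9.1093837015e-31     # kg, electron mass (illustrative choice)
L = 1e-9                 # m, edge length of the periodic box (illustrative)

# Allowed momenta in a periodic box: p = (2*pi*hbar/L) * (n1, n2, n3), n_i integers.
unit = 2.0 * math.pi * hbar / L

modes = []
for n1, n2, n3 in itertools.product(range(-2, 3), repeat=3):
    p = (unit * n1, unit * n2, unit * n3)
    p2 = sum(comp ** 2 for comp in p)
    # Relativistic energy of a quantum with momentum p: E_p = sqrt(m^2 c^4 + c^2 p^2)
    E = math.sqrt((m * c ** 2) ** 2 + c ** 2 * p2)
    modes.append(((n1, n2, n3), E))

# The mode sum in the field expansion runs over exactly these (n1, n2, n3) triples;
# letting L go to infinity shrinks the grid spacing 2*pi*hbar/L to zero and turns
# the sum into an integral over momentum space.
for n, E in sorted(modes, key=lambda t: t[1])[:5]:
    print(n, f"E_p = {E / 1.602176634e-19:.1f} eV")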
The three factors of the gauge symmetry together give rise to the three fundamental interactions, after some appropriate relations have been defined, as we shall see. A complete formulation of the Standard Model Lagrangian with all the terms written together can be found e.g. here. ### Kinetic terms A free particle can be represented by a mass term, and a kinetic term which relates to the "motion" of the fields. #### Fermion fields The kinetic term for a Dirac fermion is ${\displaystyle i{\bar {\psi }}\gamma ^{\mu }\partial _{\mu }\psi }$ where the notations are carried from earlier in the article. ψ can represent any, or all, Dirac fermions in the standard model. Generally, as below, this term is included within the couplings (creating an overall "dynamical" term). #### Gauge fields For the spin-1 fields, first define the field strength tensor ${\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c}}$ for a given gauge field (here we use A), with gauge coupling constant g. The quantity  abc is the structure constant of the particular gauge group, defined by the commutator ${\displaystyle [t_{a},t_{b}]=if^{abc}t_{c},}$ where ti are the generators of the group. In an Abelian (commutative) group (such as the U(1) we use here), since the generators ta all commute with each other, the structure constants vanish. Of course, this is not the case in general – the standard model includes the non-Abelian SU(2) and SU(3) groups (such groups lead to what is called a Yang–Mills gauge theory). We need to introduce three gauge fields corresponding to each of the subgroups SU(3) × SU(2) × U(1). • The gluon field tensor will be denoted by ${\displaystyle G_{\mu \nu }^{a}}$, where the index a labels elements of the 8 representation of colour SU(3). The strong coupling constant is conventionally labelled gs (or simply g where there is no ambiguity). The observations leading to the discovery of this part of the Standard Model are discussed in the article in quantum chromodynamics. • The notation ${\displaystyle W_{\mu \nu }^{a}}$ will be used for the gauge field tensor of SU(2) where a runs over the 3 generators of this group. The coupling can be denoted gw or again simply g. The gauge field will be denoted by ${\displaystyle W_{\mu }^{a}}$. • The gauge field tensor for the U(1) of weak hypercharge will be denoted by Bμν, the coupling by g′, and the gauge field by Bμ. The kinetic term can now be written simply as ${\displaystyle {\mathcal {L}}_{\rm {kin}}=-{1 \over 4}B_{\mu \nu }B^{\mu \nu }-{1 \over 2}\mathrm {tr} W_{\mu \nu }W^{\mu \nu }-{1 \over 2}\mathrm {tr} G_{\mu \nu }G^{\mu \nu }}$ where the traces are over the SU(2) and SU(3) indices hidden in W and G respectively. The two-index objects are the field strengths derived from W and G the vector fields. There are also two extra hidden parameters: the theta angles for SU(2) and SU(3). ### Coupling terms The next step is to "couple" the gauge fields to the fermions, allowing for interactions. #### Electroweak sector The electroweak sector interacts with the symmetry group U(1) × SU(2)L, where the subscript L indicates coupling only to left-handed fermions. 
${\displaystyle {\mathcal {L}}_{\mathrm {EW} }=\sum _{\psi }{\bar {\psi }}\gamma ^{\mu }\left(i\partial _{\mu }-g^{\prime }{1 \over 2}Y_{\mathrm {W} }B_{\mu }-g{1 \over 2}{\boldsymbol {\tau }}\mathbf {W} _{\mu }\right)\psi }$ Where Bμ is the U(1) gauge field; YW is the weak hypercharge (the generator of the U(1) group); Wμ is the three-component SU(2) gauge field; and the components of τ are the Pauli matrices (infinitesimal generators of the SU(2) group) whose eigenvalues give the weak isospin. Note that we have to redefine a new U(1) symmetry of weak hypercharge, different from QED, in order to achieve the unification with the weak force. The electric charge Q, third component of weak isospin T3 (also called Tz, I3 or Iz) and weak hypercharge YW are related by ${\displaystyle Q=T_{3}+{\tfrac {1}{2}}Y_{W},}$ or by the alternate convention Q = T3 + YW. The first convention (used in this article) is equivalent to the earlier Gell-Mann–Nishijima formula. We can then define the conserved current for weak isospin as ${\displaystyle \mathbf {j} _{\mu }={1 \over 2}{\bar {\psi }}_{L}\gamma _{\mu }{\boldsymbol {\tau }}\psi _{L}}$ and for weak hypercharge as ${\displaystyle j_{\mu }^{Y}=2(j_{\mu }^{em}-j_{\mu }^{3})}$ where ${\displaystyle j_{\mu }^{em}}$ is the electric current and ${\displaystyle j_{\mu }^{3}}$ the third weak isospin current. As explained above, these currents mix to create the physically observed bosons, which also leads to testable relations between the coupling constants. To explain in a simpler way, we can see the effect of the electroweak interaction by picking out terms from the Lagrangian. We see that the SU(2) symmetry acts on each (left-handed) fermion doublet contained in ψ, for example ${\displaystyle -{g \over 2}({\bar {\nu }}_{e}\;{\bar {e}})\tau ^{+}\gamma _{\mu }(W^{-})^{\mu }{\begin{pmatrix}{\nu _{e}}\\e\end{pmatrix}}=-{g \over 2}{\bar {\nu }}_{e}\gamma _{\mu }(W^{-})^{\mu }e}$ where the particles are understood to be left-handed, and where ${\displaystyle \tau ^{+}\equiv {1 \over 2}(\tau ^{1}{+}i\tau ^{2})={\begin{pmatrix}0&1\\0&0\end{pmatrix}}}$ This is an interaction corresponding to a "rotation in weak isospin space" or in other words, a transformation between eL and νeL via emission of a W boson. The U(1) symmetry, on the other hand, is similar to electromagnetism, but acts on all "weak hypercharged" fermions (both left and right handed) via the neutral Z0, as well as the charged fermions via the photon. #### Quantum chromodynamics sector The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, with SU(3) symmetry, generated by Ta. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by ${\displaystyle {\mathcal {L}}_{\mathrm {QCD} }=i{\overline {U}}\left(\partial _{\mu }-ig_{s}G_{\mu }^{a}T^{a}\right)\gamma ^{\mu }U+i{\overline {D}}\left(\partial _{\mu }-ig_{s}G_{\mu }^{a}T^{a}\right)\gamma ^{\mu }D.}$ where D and U are the Dirac spinors associated with up- and down-type quarks, and other notations are continued from the previous section. ### Mass terms and the Higgs mechanism #### Mass terms The mass term arising from the Dirac Lagrangian (for any fermion ψ) is ${\displaystyle -m{\bar {\psi }}\psi }$ which is not invariant under the electroweak symmetry. 
This can be seen by writing ψ in terms of left and right handed components (skipping the actual calculation): ${\displaystyle -m{\bar {\psi }}\psi =-m({\bar {\psi }}_{L}\psi _{R}+{\bar {\psi }}_{R}\psi _{L})}$ i.e. contribution from ${\displaystyle {\bar {\psi }}_{L}\psi _{L}}$ and ${\displaystyle {\bar {\psi }}_{R}\psi _{R}}$ terms do not appear. We see that the mass-generating interaction is achieved by constant flipping of particle chirality. The spin-half particles have no right/left chirality pair with the same SU(2) representations and equal and opposite weak hypercharges, so assuming these gauge charges are conserved in the vacuum, none of the spin-half particles could ever swap chirality, and must remain massless. Additionally, we know experimentally that the W and Z bosons are massive, but a boson mass term contains the combination e.g. AμAμ, which clearly depends on the choice of gauge. Therefore, none of the standard model fermions or bosons can "begin" with mass, but must acquire it by some other mechanism. #### The Higgs mechanism Main article: Higgs mechanism The solution to both these problems comes from the Higgs mechanism, which involves scalar fields (the number of which depend on the exact form of Higgs mechanism) which (to give the briefest possible description) are "absorbed" by the massive bosons as degrees of freedom, and which couple to the fermions via Yukawa coupling to create what looks like mass terms. In the Standard Model, the Higgs field is a complex scalar of the group : ${\displaystyle \phi ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}\phi ^{+}\\\phi ^{0}\end{pmatrix}},}$ where the superscripts + and 0 indicate the electric charge (Q) of the components. The weak hypercharge (YW) of both components is 1. The Higgs part of the Lagrangian is ${\displaystyle {\mathcal {L}}_{H}=\left[\left(\partial _{\mu }-igW_{\mu }^{a}t^{a}-ig'Y_{\phi }B_{\mu }\right)\phi \right]^{2}+\mu ^{2}\phi ^{\dagger }\phi -\lambda (\phi ^{\dagger }\phi )^{2},}$ where λ > 0 and μ2 > 0, so that the mechanism of spontaneous symmetry breaking can be used. There is a parameter here, at first hidden within the shape of the potential, that is very important. In a unitarity gauge one can set φ+ = 0 and make φ0 real. Then ${\displaystyle \langle \phi ^{0}\rangle =v}$ is the non-vanishing vacuum expectation value of the Higgs field. v has units of mass, and it is the only parameter in the Standard Model which is not dimensionless. It is also much smaller than the Planck scale; it is approximately equal to the Higgs mass, and sets the scale for the mass of everything else. This is the only real fine-tuning to a small nonzero value in the Standard Model, and it is called the Hierarchy problem. Quadratic terms in Wμ and Bμ arise, which give masses to the W and Z bosons: {\displaystyle {\begin{aligned}M_{W}&={\tfrac {1}{2}}v|g|\\M_{Z}&={\tfrac {1}{2}}v{\sqrt {g^{2}+{g'}^{2}}}\end{aligned}}} The Yukawa interaction terms are ${\displaystyle {\mathcal {L}}_{YU}={\overline {U}}_{L}G_{u}U_{R}\phi ^{0}-{\overline {D}}_{L}G_{u}U_{R}\phi ^{-}+{\overline {U}}_{L}G_{d}D_{R}\phi ^{+}+{\overline {D}}_{L}G_{d}D_{R}\phi ^{0}+hc}$ where Gu,d are 3 × 3 matrices of Yukawa couplings, with the ij term giving the coupling of the generations i and j. #### Neutrino masses As previously mentioned, evidence shows neutrinos must have mass. But within the standard model, the right-handed neutrino does not exist, so even with a Yukawa coupling neutrinos remain massless. 
An obvious solution[2] is to simply add a right-handed neutrino νR resulting in a Dirac mass term as usual. This field however must be a sterile neutrino, since being right-handed it experimentally belongs to an isospin singlet (T3 = 0) and also has charge Q = 0, implying YW = 0 (see above) i.e. it does not even participate in the weak interaction. Current experimental status is that evidence for observation of sterile neutrinos is not convincing.[3] Another possibility to consider is that the neutrino satisfies the Majorana equation, which at first seems possible due to its zero electric charge. In this case the mass term is ${\displaystyle -{m \over 2}\left({\overline {\nu }}^{C}\nu +{\overline {\nu }}\nu ^{C}\right)}$ where C denotes a charge conjugated (i.e. anti-) particle, and the terms are consistently all left (or all right) chirality (note that a left-chirality projection of an antiparticle is a right-handed field; care must be taken here due to different notations sometimes used). Here we are essentially flipping between LH neutrinos and RH anti-neutrinos (it is furthermore possible but not necessary that neutrinos are their own antiparticle, so these particles are the same). However, for the left-chirality neutrinos, this term changes weak hypercharge by 2 units - not possible with the standard Higgs interaction, requiring the Higgs field to be extended to include an extra triplet with weak hypercharge 2[2] - whereas for right-chirality neutrinos, no Higgs extensions are necessary. For both left and right chirality cases, Majorana terms violate lepton number, but possibly at a level beyond the current sensitivity of experiments to detect such violations. It is possible to include both Dirac and Majorana mass terms in the same theory, which (in contrast to the Dirac-mass-only approach) can provide a "natural" explanation for the smallness of the observed neutrino masses, by linking the RH neutrinos to yet-unknown physics around the GUT scale[4] (see seesaw mechanism). Since in any case new fields must be postulated to explain the experimental results, neutrinos are an obvious gateway to searching physics beyond the Standard Model. ## Detailed Information This section provides more detail on some aspects, and some reference material. ### Field content in detail The Standard Model has the following fields. These describe one generation of leptons and quarks, and there are three generations, so there are three copies of each field. By CPT symmetry, there is a set of right-handed fermions with the opposite quantum numbers. The column "representation" indicates under which representations of the gauge groups that each field transforms, in the order (SU(3), SU(2), U(1)). Symbols used are common but not universal; superscript C denotes an antiparticle; and for the U(1) group, the value of the weak hypercharge is listed. Note that there are twice as many left-handed lepton field components as left-handed antilepton field components in each generation, but an equal number of left-handed quark and antiquark fields. ### Fermion content This table is based in part on data gathered by the Particle Data Group.[5] 1. ^ a b c These are not ordinary abelian charges, which can be added together, but are labels of group representations of Lie groups. 2. ^ a b c Mass is really a coupling between a left-handed fermion and a right-handed fermion. 
For example, the mass of an electron is really a coupling between a left-handed electron and a right-handed electron, which is the antiparticle of a left-handed positron. Also neutrinos show large mixings in their mass coupling, so it's not accurate to talk about neutrino masses in the flavor basis or to suggest a left-handed electron antineutrino. 3. The Standard Model assumes that neutrinos are massless. However, several contemporary experiments prove that neutrinos oscillate between their flavour states, which could not happen if all were massless. It is straightforward to extend the model to fit these data but there are many possibilities, so the mass eigenstates are still open. See neutrino mass. 4. W.-M. Yao et al. (Particle Data Group) (2006). "Review of Particle Physics: Neutrino mass, mixing, and flavor change" (PDF). Journal of Physics G. 33: 1. arXiv:astro-ph/0601168. Bibcode:2006JPhG...33....1Y. doi:10.1088/0954-3899/33/1/001. 5. ^ a b c d The masses of baryons and hadrons and various cross-sections are the experimentally measured quantities. Since quarks can't be isolated because of QCD confinement, the quantity here is supposed to be the mass of the quark at the renormalization scale of the QCD scale. ### Free parameters Upon writing the most general Lagrangian without neutrinos, one finds that the dynamics depend on 19 parameters, whose numerical values are established by experiment. With neutrinos 7 more parameters are needed, 3 masses and 4 PMNS matrix parameters, for a total of 26 parameters.[6] The neutrino parameter values are still uncertain. The 19 certain parameters are summarized here. The choice of free parameters is somewhat arbitrary. In the table above, gauge couplings are listed as free parameters, therefore with this choice Weinberg angle is not a free parameter - it is defined as ${\displaystyle \tan \theta _{W}={\frac {g_{1}}{g_{2}}}}$. Likewise, fine structure constant of QED is ${\displaystyle \alpha ={\frac {1}{4\pi }}{\frac {(g_{1}g_{2})^{2}}{g_{1}^{2}+g_{2}^{2}}}}$. Instead of fermion masses, dimensionless Yukawa couplings can be chosen as free parameters. For example, electron mass depends on the Yukawa coupling of electron to Higgs field, and its value is ${\displaystyle m_{e}={\frac {y_{e}v}{\sqrt {2}}}}$. Instead of the Higgs mass, the Higgs self-coupling strength λ ~ ⅛ can be chosen as a free parameter. ### Additional symmetries of the Standard Model From the theoretical point of view, the Standard Model exhibits four additional global symmetries, not postulated at the outset of its construction, collectively denoted accidental symmetries, which are continuous U(1) global symmetries. The transformations leaving the Lagrangian invariant are: ${\displaystyle \psi _{\text{q}}(x)\to e^{i\alpha /3}\psi _{\text{q}}}$ ${\displaystyle E_{L}\to e^{i\beta }E_{L}{\text{ and }}(e_{R})^{c}\to e^{i\beta }(e_{R})^{c}}$ ${\displaystyle M_{L}\to e^{i\beta }M_{L}{\text{ and }}(\mu _{R})^{c}\to e^{i\beta }(\mu _{R})^{c}}$ ${\displaystyle T_{L}\to e^{i\beta }T_{L}{\text{ and }}(\tau _{R})^{c}\to e^{i\beta }(\tau _{R})^{c}}$ The first transformation rule is shorthand meaning that all quark fields for all generations must be rotated by an identical phase simultaneously. The fields ML, TL and ${\displaystyle (\mu _{R})^{c},(\tau _{R})^{c}}$ are the 2nd (muon) and 3rd (tau) generation analogs of EL and ${\displaystyle (e_{R})^{c}}$ fields. 
By Noether's theorem, each symmetry above has an associated conservation law: the conservation of baryon number, electron number, muon number, and tau number. Each quark is assigned a baryon number of ${\displaystyle {}_{\frac {1}{3}}}$, while each antiquark is assigned a baryon number of ${\displaystyle {}_{-{\frac {1}{3}}}}$. Conservation of baryon number implies that the number of quarks minus the number of antiquarks is a constant. Within experimental limits, no violation of this conservation law has been found. Similarly, each electron and its associated neutrino is assigned an electron number of +1, while the anti-electron and the associated anti-neutrino carry a −1 electron number. Similarly, the muons and their neutrinos are assigned a muon number of +1 and the tau leptons are assigned a tau lepton number of +1. The Standard Model predicts that each of these three numbers should be conserved separately in a manner similar to the way baryon number is conserved. These numbers are collectively known as lepton family numbers (LF). In addition to the accidental (but exact) symmetries described above, the Standard Model exhibits several approximate symmetries. These are the "SU(2) custodial symmetry" and the "SU(2) or SU(3) quark flavor symmetry." ### The U(1) symmetry For the leptons, the gauge group can be written SU(2)l × U(1)L × U(1)R. The two U(1) factors can be combined into U(1)Y × U(1)l where l is the lepton number. Gauging of the lepton number is ruled out by experiment, leaving only the possible gauge group SU(2)L × U(1)Y. A similar argument in the quark sector also gives the same result for the electroweak theory. ### The charged and neutral current couplings and Fermi theory The charged currents ${\displaystyle j^{\pm }=j^{1}\pm ij^{2}}$ are ${\displaystyle j_{\mu }^{+}={\overline {U}}_{iL}\gamma _{\mu }D_{iL}+{\overline {\nu }}_{iL}\gamma _{\mu }l_{iL}.}$ These charged currents are precisely those that entered the Fermi theory of beta decay. The action contains the charge current piece ${\displaystyle {\mathcal {L}}_{CC}={\frac {g}{\sqrt {2}}}(j_{\mu }^{+}W^{-\mu }+j_{\mu }^{-}W^{+\mu }).}$ For energy much less than the mass of the W-boson, the effective theory becomes the current–current interaction of the Fermi theory. However, gauge invariance now requires that the component ${\displaystyle W^{3}}$ of the gauge field also be coupled to a current that lies in the triplet of SU(2). However, this mixes with the U(1), and another current in that sector is needed. These currents must be uncharged in order to conserve charge. So we require the neutral currents ${\displaystyle j_{\mu }^{3}={\frac {1}{2}}({\overline {U}}_{iL}\gamma _{\mu }U_{iL}-{\overline {D}}_{iL}\gamma _{\mu }D_{iL}+{\overline {\nu }}_{iL}\gamma _{\mu }\nu _{iL}-{\overline {l}}_{iL}\gamma _{\mu }l_{iL})}$ ${\displaystyle j_{\mu }^{em}={\frac {2}{3}}{\overline {U}}_{i}\gamma _{\mu }U_{i}-{\frac {1}{3}}{\overline {D}}_{i}\gamma _{\mu }D_{i}-{\overline {l}}_{i}\gamma _{\mu }l_{i}.}$ The neutral current piece in the Lagrangian is then ${\displaystyle {\mathcal {L}}_{NC}=ej_{\mu }^{em}A^{\mu }+{\frac {g}{\cos \theta _{W}}}(J_{\mu }^{3}-\sin ^{2}\theta _{W}J_{\mu }^{em})Z^{\mu }.}$
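As a worked illustration of the remark above about Fermi theory, matching the charged-current interaction onto the current–current form at energies far below the W mass gives the standard relation (the numerical inputs are commonly quoted values, not taken from this article):

${\displaystyle {\frac {G_{F}}{\sqrt {2}}}={\frac {g^{2}}{8M_{W}^{2}}},\qquad G_{F}={\sqrt {2}}\,{\frac {(0.65)^{2}}{8\,(80.4\ \mathrm {GeV} )^{2}}}\approx 1.2\times 10^{-5}\ \mathrm {GeV} ^{-2},}$

which is close to the measured Fermi constant of beta decay.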
https://math.stackexchange.com/questions/635926/finding-a-closed-form-for-a-sum-involving-floor-function-sum-limits-k-1nk-l
# Finding a closed form for a sum involving floor function $\sum\limits_{k=1}^nk\lfloor km/n\rfloor$ Given integers $m,n$, is there a known closed form for the sum $\sum\limits_{k=1}^nk\lfloor km/n\rfloor$? and if not, is it possible to show there is no closed form for this sum? I've attempted to derive a closed form using the techniques in this question: Formula and proof for the sum of floor and ceiling numbers, with no luck. Any help will be greatly appreciated. • This could help: If $m$ and $n$ are positive and coprime, then$$\sum_{k=1}^{n-1} \left\lfloor \frac{km}{n} \right\rfloor = \frac{1}{2}(m - 1)(n - 1)$$ Source – rhaldryn Feb 2 '14 at 12:13 This is only a partial answer. There are three identifiable cases here: $m=n$, $m>n$, and $0<m<n$. Case 1: $\textbf{m=n:}$ This is the simpler case: $$S=\sum_{k=1}^n k\lfloor\frac{km}{n}\rfloor = \sum_{k=1}^n k\lfloor k\rfloor=\sum_{k=1}^n k^2=\frac{1}{6}n(n+1)(2n+1),$$ by way of Bernoulli's sum of powers formula. The other two cases are more difficult. Case 2: $\textbf{m>n:}$ To simplify matters suppose $m=bn$ for some integer $b>1$. Then $$S=\sum_{k=1}^n k\lfloor k\frac{bn}{n}\rfloor = \sum_{k=1}^n k\lfloor k b\rfloor = b\sum_{k=1}^n k^2 = \frac{b}{6}n(n+1)(2n+1).$$ For the more general case, suppose $m=bn+a$ with $1<a<n$ and $b\geq 1$. Then we have $$S=\sum_{k=1}^n k\lfloor \frac{k(bn+a)}{n}\rfloor = \sum_{k=1}^n k\lfloor kb+\frac{ka}{n}\rfloor = b\sum_{k=1}^n k^2+\sum_{k=1}^n k\lfloor\frac{ka}{n}\rfloor,$$ so $$S= \frac{b}{6}n(n+1)(2n+1)+\sum_{k=1}^n k\lfloor\frac{ka}{n}\rfloor,$$ where we see that $0<a/n<1$. Thus the entire problem has been reduced to the third case only, i.e. for when $0<m<n$ (see top of answer) Case 3: $\textbf{0<m<n:}$ It may be the case that there is a closed form when $\text{gcd}(m,n)=1$. • Why does $m>n \rightarrow n\mid m$? – rhaldryn Feb 2 '14 at 12:22 • It doesn't, but for the special case $m=an$ we have $n\mid an$. – Pixel Feb 2 '14 at 12:25 Maybe the following helps: For given $n\in{\mathbb N}_{\geq1}$ put $\omega:=e^{2\pi i/n}$. Then for any integer $j$ one has $$\left\lfloor{j\over n}\right\rfloor={j\over n}-{n-1\over 2n}+{1\over n}\sum_{\ell=1}^{n-1}{\omega^{j\ell}\over 1-\omega^{-\ell}}\ .$$ • That's an interesting Identity, but I can't see how this helps. – SomeStrangeUser Feb 2 '14 at 16:44 • Could you please provide a source/proof for this identity? – Qiang Li Dec 4 '16 at 20:44 • @SomeStrangeUser: I'm interested in a closed formula for the sum $\sum\limits_{k=1}^{n-1}k\lfloor \frac{km}{n}\rfloor$ too. Have you found any additional reference for it? Would you have any combinatorial interpretation for it? – MathChat Jan 6 '17 at 1:22
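A quick numerical check of the identity quoted in the first comment and of the Case 2 reduction derived in the answer (a stand-alone script, not part of the original thread):

from math import gcd

def S(m, n):
    """Direct evaluation of sum_{k=1}^{n} k*floor(k*m/n)."""
    return sum(k * ((k * m) // n) for k in range(1, n + 1))

# Identity from the comments: for coprime m, n,
#   sum_{k=1}^{n-1} floor(k*m/n) = (m-1)*(n-1)/2   (no factor k in this one).
for m in range(1, 12):
    for n in range(2, 12):
        if gcd(m, n) == 1:
            lhs = sum((k * m) // n for k in range(1, n))
            assert 2 * lhs == (m - 1) * (n - 1)

# Case 2 reduction: writing m = b*n + a with 0 <= a < n,
#   S(m, n) = b*n*(n+1)*(2n+1)/6 + sum_{k=1}^{n} k*floor(k*a/n).
for n in range(1, 15):
    for m in range(1, 40):
        b, a = divmod(m, n)
        reduced = b * n * (n + 1) * (2 * n + 1) // 6 + sum(k * ((k * a) // n) for k in range(1, n + 1))
        assert S(m, n) == reduced

print("identities verified for the tested ranges")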
https://physics.stackexchange.com/questions/267058/thermal-expansion-of-both-liquid-and-glass-tube
# Thermal expansion of both liquid and glass tube I'm a bit confused about thermal expansion in the case in which both a liquid and the container do expand. I will describe an example situation to expose the problem. Consider a cylindrical glass tube (linear thermal expansion coefficient $\alpha$) that contains liquid (volume thermal expansion coefficient $\beta$). The height of the tube is $h_{t,0}$ and the height of the liquid inside of it is $h_{l,0}$. If the temperature changes of an amount $\Delta T$ what is the new height of the liquid? If the cylindrical tube is provided of a measuring scale, what is the new height of liquid measured from the scale? The relation I would use is $$\frac{\Delta V}{V_0}\approx\frac{\Delta h}{h_0} +\frac{\Delta A}{A_0}$$ Which comes from $$(V_0+\Delta V)=(h_0+\Delta h) \cdot(A_0+\Delta A)$$ Neglecting higher order terms. To find the new "absolute" height of the liquid I would simply consider the change in volume $\Delta V_{l}=V_{l,0} \beta \Delta T$, and then the change in the area of the cylinder $\Delta A_{t}=A_{t,0} 2 \alpha \Delta T$. Then I would write $$\frac{\Delta h_{l}}{h_{l,0}} =\frac{\Delta V_{l}}{V_{l,0}}- \frac{\Delta A_{t}}{A_{t,0}}=(\beta-2\alpha) \Delta T$$ So actually in this case I would not consider the change in height of the tube, since I'm looking for the absolute change in height of the liquid. To get the new height of liquid "relative to the tube" I would consider the "relative change in volume" $$\Delta V_{l,relative}=\Delta V_{l}-\Delta V_{t}=(V_{l,0} \beta- V_{t,0} 3\alpha)\Delta T$$ Here is my main doubt: does this "relative" change already takes into account the fact that both the area and the height of the tube change? If so, considering this "relative change" I can write $$\frac{\Delta h_{l,relative}}{h_{l,0}}= \frac{\Delta V_{l,relative}}{V_{l,0}}$$ Because "relative to the tube" the only thing that can change is the height of the liquid and the base area is "constant" (infact the change in area of the liquid is the same of the one of the tube). Are these two processes correct or are there any mistakes (conceptual or of other kind) ? Any suggestion is highly appreciated • I believe you wrote some additions instead of multiplications, for instance, the second equation is inconsistent in the units – Wolphram jonny Jul 9 '16 at 0:16 You already have the answer when you write $$\frac{\Delta h}{h} = (\beta -2\alpha)\Delta T$$ What you do after that is unnecessary and does not make sense. You have already said that the height of the tube is irrelevant, so the height of the liquid "relative to the tube" is meaningless. If initially the liquid fills the tube completely and you want to know how much liquid spills out, use $$\frac{\Delta V}{V} = (\beta - 3\alpha)\Delta T$$ I think what you are trying to do is calculate the new volume reading of the liquid on the scale on the tube. For this you should use the same formula (for volume), which is marked in units of $cc$ or $cm^3$. So if the reading on the scale was initially $V_0$ cc then after expansion of the liquid and the glass tube the reading will be $V_1$ cc where $$V_1 - V_0 = V_0 (\beta - 3\alpha)\Delta T.$$ • Thanks for the reply! Actually I wanted to neglet the change in height of the tube just in the first point. In the second point I do not neglet the expansion of the tube (in particular is change in height). 
The formula for second point in my question does not differ a lot from your second formula $$\Delta h_{l,rel}=( \beta \cdot h_{l,0}- 3 \alpha \cdot h_{t,0}) \Delta T$$ The difference is that there are the two (possibly different) heights of tube and liquid. Can this formula be correct if I do not neglet the change in height of the tube? – Sørën Jul 11 '16 at 7:50 • Thanks for the add to your answer, thats what I'm trying to do! I understood the formula but my doubt was in considering the case where $V_{0,l}$ (initial volume of liquid) is not the same as $V_{0,t}$ (initial volume of tube) (which also means that the initial heights are different, or, in other words, the liquid does not fill the tube completely initially). In this case I don't think that is correct to evaluate the (relative) change in volume of the tube as $V_{0,l} 3\alpha \Delta T$, since $V_{0,l}\neq V_{0,t}$ but I would say $V_{0,t} 3 \alpha \Delta T$. Would that make sense? – Sørën Jul 11 '16 at 10:50 • In my answer $V_0$ is the volume of the liquid as indicated by the scale on the tube, so by definition $V_{0,l}=V_{0,t}$. $V_1$ is the new volume reading opposite the liquid level after expansion of the liquid and the tube. This assumes (of course) that the liquid does not spill out of the tube. $V_0$ and $V_1$ are not the volumes of the tube, which could be much bigger - how much bigger is irrelevant. – sammy gerbil Jul 11 '16 at 11:14
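A small numerical cross-check of the two linearised results discussed in this thread, Δh/h ≈ (β − 2α)ΔT for the height of the liquid and the volume-reading change V₁ − V₀ ≈ V₀(β − 3α)ΔT, using illustrative material constants roughly of the order of mercury in glass (not values from the question):

# Compare exact expansion of liquid in a glass tube with the linearised formulas.
alpha = 8e-6      # 1/K, linear expansion coefficient of the glass (illustrative)
beta = 1.8e-4     # 1/K, volume expansion coefficient of the liquid (illustrative)
dT = 50.0         # K
h0, A0 = 0.10, 1e-5          # m, m^2: initial liquid height and tube cross-section
V0 = h0 * A0

# Exact new liquid volume, tube cross-section, and liquid height:
V_liquid = V0 * (1 + beta * dT)
A1 = A0 * (1 + alpha * dT) ** 2
h1 = V_liquid / A1
print("dh/h  exact :", (h1 - h0) / h0)
print("dh/h  linear:", (beta - 2 * alpha) * dT)

# Volume reading on the scale: a graduation labelled V encloses an actual volume
# V*(1 + alpha*dT)^3 after heating, so the reading V1 satisfies
#   V1 * (1 + alpha*dT)^3 = V0 * (1 + beta*dT).
V1 = V0 * (1 + beta * dT) / (1 + alpha * dT) ** 3
print("dV/V  exact :", (V1 - V0) / V0)
print("dV/V  linear:", (beta - 3 * alpha) * dT)
# Both pairs agree to first order in alpha*dT and beta*dT.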
https://ssagesproject.github.io/docs/Forward-Flux.html
# Forward-Flux¶ Forward Flux Sampling (FFS) is an advanced sampling method to simulate “rare events” in non-equilibrium and equilibrium systems. Several review articles in the literature present a comprehensive perspective on the basics, applications, implementations, and recent advances of FFS. Here, we provide a brief general introduction to FFS, and describe the Rosenbluth-like variant of forward flux method. We also explain various options and variables to setup and run an efficient FFS simulation using SSAGES. ## Introduction¶ Rare events are ubiquitous in nature. Important examples include crystal nucleation, earthquake formation, slow chemical reactions, protein conformational changes, switching in biochemical networks, and translocation through pores. The activated/rare process from a stable/metastable region A to a stable/metastable region B is characterized by a long waiting time between events, which is several orders of magnitude longer than the transition process itself. This long waiting time typically arises due to the presence of a large free energy barrier that the system has to overcome to make the transition from one region to another. The outcomes of rare events are generally substantial and thereby it is essential to obtain a molecular-level understanding of the mechanisms and kinetics of these events. “Thermal fluctuations” commonly drive the systems from an initial state to a final state over an energy barrier $$\Delta E$$. The transition frequency from state A to state B is proportional to $$e^{\frac{-\Delta E}{k_{B}T}}$$, where $$k_{B}T$$ is the thermal energy of the system. Accordingly, the time required for an equilibrated system in state A to reach state B grows exponentially (at a constant temperature) as the energy barrier $$\Delta E$$ become larger. Eventually, none or only a few transitions may occur within the typical timescale of molecular simulations. In FFS method, several intermediate states or so-called interfaces ($$\lambda_{i}$$) are placed along a “reaction coordinate” or an “order parameter” between the initial state A and the final state B (Figure 1). These intermediate states are chosen such that the energy barrier between adjacent interfaces are readily surmountable using typical simulations. Using the stored configurations at an interface, several attempts are made to arrive at the next interface in the forward direction (the order parameter must change monotonically when going from A to B). This incremental progress makes it more probable to observe a full transition path from state A to state B. FFS uses positive flux expression to calculate rate constant. The system dynamics are integrated forward in time and therefore detailed balance is not required. In the Forward Flux Sampling method, several intermediate states are placed along the order parameter to link the initial state A and the final state B. Incremental progress of the system is recorded and analyzed to obtain relevant kinetic and thermodynamic properties. Several protocols of Forward Flux Sampling have been adopted in the literature to 1. generate the intermediate configurations, 2. calculate the conditional probability of reaching state B starting from state A, $$P(\lambda_{B} = \lambda_{n} | \lambda_{A} = \lambda_{0})$$, 3. compute various thermodynamic properties, and 4. optimize overall efficiency of the method [1]. 
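As a rough back-of-the-envelope illustration of why the intermediate interfaces help (the numbers here are purely illustrative and not taken from this page): a direct transition over a barrier of 20 $$k_{B}T$$ is suppressed by

$e^{-\Delta E / k_{B}T} = e^{-20} \approx 2\times10^{-9},$

whereas splitting the same reaction coordinate into five stages of roughly 4 $$k_{B}T$$ each leaves every individual stage with a success probability of order $$e^{-4} \approx 0.02$$, which ordinary trial runs can sample; the product of the stage probabilities, $$(e^{-4})^{5} = e^{-20}$$, still reproduces the overall suppression.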
The following are the widely-used variants of Forward Flux Sampling method: • Direct FFS (DFFS) (currently implemented in SSAGES) • Branched Growth FFS (BGFFS) • Rosenbluth-like FFS (RBFFS) • Restricted Branched Growth FFS (RBGFFS) • FFS Least-Squares Estimation (FFS-LSE) • FF Umbrella Sampling (FF-US) ## Rate Constant and Initial Flux¶ The overall rate constant or the frequency of going from state A to state B is computed using the following equation: $k_{AB} = \Phi_{A,0} \cdot P\left(\lambda_{N} \vert \lambda_{0}\right)$ here, $$\Phi_{A,0}$$ is the initial forward flux or the flux at the initial interface, and $$P\left(\lambda_{N} \vert \lambda_{0}\right)$$ is the conditional probability of the trajectories that initiated from A and reached B before returning to A. In practice, $$\Phi_{A,0}$$ can be obtained by simulating a single trajectory in State A for a certain amount of time $$t_{A}$$, and counting the number of crossings of the initial interface $$\lambda_{0}$$. Alternatively, a simulation may be carried out around state A for a period of time until $$N_{0}$$ number of accumulated configurations is stored: $\Phi_{A,0} = \frac{N_{0}}{t_{A}}$ here, $$N_{0}$$ is the number of instances in which $$\lambda_{0}$$ is crossed in forward direction, and $$t_{A}$$ is the simulation time that the system was run around state A. Note that 1. $$\lambda_{0}$$ can be crossed in either forward ($$\lambda_{t} < \lambda_{0}$$) or backward ($$\lambda_{t} > \lambda_{0}$$) directions, but only “forward crossing” marks a checkpoint (see Figure 2) and 2. $$t_{A}$$ should only include the simulation time around state A and thereby the portion of time spent around state B must be excluded, if any. In general, the conditional probability is computed using the following expression: $P\left(\lambda_{n} \vert \lambda_{0}\right) = \prod\limits_{i=0}^{n-1} P\left(\lambda_{i+1} \vert \lambda_{i}\right) = P\left(\lambda_{1}\vert\lambda_{0}\right) \cdot P\left(\lambda_{2}\vert\lambda_{1}\right) \dots P\left(\lambda_{n}\vert\lambda_{n-1}\right)$ $$P\left(\lambda_{i+1}\vert\lambda_{i}\right)$$ is computed by initiating a large number of trials from the current interface and recording the number of successful trials that reaches the next interface. The successful trials in which the system reaches the next interface are stored and used as checkpoints in the next interface. The failed trajectories that go all the way back to state A are terminated. Different flavors of forward flux method use their unique protocol to select checkpoints to initiate trials at a given interface, compute final probabilities, create transitions paths, and analyze additional statistics. A schematic representation of computation of initial flux using a single trajectory initiated in state A. The simulation runs for a certain period of time $$t_{A}$$ and number of forward crossing is recorded. Alternatively, we can specify the number of necessary checkpoints $$N_{0}$$ and run a simulation until desired number of checkpoints are collected. In this figure, green circles show the checkpoints that can be used to generate transition paths. ## Rosenbluth-like Forward Flux Sampling (RBFFS)¶ Rosenbluth-like Forward Flux Sampling (RBFFS) method is an adaptation of Rosenbluth method in polymer sampling to simulate rare events [19]. 
The RBFFS is comparable to Branched Growth Forward Flux (BGFFS) [2][7] but, in contrast to BGFFS, a single checkpoint is randomly selected at a non-initial interface instead of initiating trials from all checkpoints at a given interface (Figure 3). In RBFFS, first a checkpoint at $$\lambda_{0}$$ is selected and $$k_{0}$$ trials are initiated. The successful runs that reach $$\lambda_{1}$$ are stored. Next, one of the checkpoints at $$\lambda_{1}$$ is randomly chosen (in contrast to Branched Growth, where all checkpoints are involved), and $$k_{1}$$ trials are initiated to $$\lambda_{2}$$. Finally, this procedure is continued for the following interfaces until state B is reached or all trials fail. This algorithm is then repeated for the remaining checkpoints at $$\lambda_{0}$$ to generate multiple "transition paths".

Rosenbluth-like Forward Flux Sampling (RBFFS) involves sequential generation of unbranched transition paths from all available checkpoints at the first interface $$\lambda_{0}$$. A single checkpoint at the interface $$\lambda_{i > 0}$$ is randomly marked and $$k_{i}$$ trials are initiated from that checkpoint, which may reach the next interface $$\lambda_{i+1}$$ (successful trials) or may return to state A (failed trials).

In Rosenbluth-like forward flux sampling, we choose one checkpoint from each interface independently of the number of successes. The number of available checkpoints at an interface is not necessarily identical for different transition paths $$p$$. This implies that more successful transition paths are artificially more depleted than less successful paths. Therefore, we need to enhance those extra-depleted paths by reweighting them during post-processing. The weight of path $$p$$ at the interface $$\lambda_{i}$$ is given by:

$w_{i,p} = \prod\limits_{j=0}^{i-1} \frac{S_{j,p}}{k_{j}}$

where $$S_{j,p}$$ is the number of successes at the interface $$j$$ for path $$p$$. The conditional probability is then computed using the following expression:

$P\left(\lambda_{n}\vert\lambda_{0}\right) = \prod\limits_{i=0}^{n-1} P\left(\lambda_{i+1} \vert \lambda_{i}\right) = \prod\limits_{i=0}^{n-1} \frac{\sum_{p} w_{i,p}\, S_{i,p} / k_{i}}{\sum_{p} w_{i,p}}$

Here, the summation runs over all transition paths in the simulation.

## Options & Parameters¶

The notation used in the SSAGES implementation of FFS is mainly drawn from Ref. [2]. We recommend referring to this review article if the user is unfamiliar with the terminology. To run a DFFS simulation using SSAGES, an input file in JSON format is required along with a general input file designed for your choice of molecular dynamics engine (MD engine). For your convenience, two files Template_Input.json and FF_Input_Generator.py are provided to assist you in generating the JSON file. Here we describe the parameters and the options that should be set in the Template_Input.json file in order to successfully generate an input file and run a DFFS simulation.

Warning The current implementation of FFS only accepts one CV.

The following parameters need to be set under "method" in the JSON input file:

"type": "ForwardFlux"

The following options are available for Forward Flux Sampling:

flavor (required)
Specifies the flavor of the FFS method that SSAGES should run. Available options: "DirectForwardFlux"

Note Currently, only DFFS has been implemented in SSAGES. RBFFS and BGFFS will be available in future releases.

trials (required)
Array of number of trials to be spawned from each interface.
The length of this array should match the length of the array of interfaces, or can be left blank ([]) if defined in FF_Input_Generator.py. interfaces (required) Array of intermediate interfaces linking the initial state A to the final state B. This array can either be defined in Template_Input.json or FF_Input_Generator.py. In the latter case, the values of interfaces is left blank in the Template_Input.json file. nInterfaces (optional) Total number of interfaces connecting the initial state A to the final state B, inclusive. (Default: 5) Warning Minimum of two interfaces must be defined. N0Target (optional) Number of configurations to be generated (or provided by user) at the first interface. (Default: 100) computeInitialFlux (optional) Specifies whether a calculation of the initial flux should be performed. If this parameter is set to true, SSAGES would also generate the user-specified number of initial configurations (N0Target) at the first interface. To compute the initial flux, user must provide an initial configuration in state A, otherwise SSAGES would issue an error. If this parameter is set to false, the user must provide the necessary number of the initial configurations in separate files. The files name and the files content should follow a specific format. The format of the filenames should be l0-n<n>.dat where <n> is the configuration number (i.e. 1, 2, …, N0Target). The first line of the configuration files includes three numbers <l> <n> <a>, where <l> is the interface number (set to zero here), <n> is the configuration number, and <a> is the attempt number (set to zero here). The rest of the lines include the atoms IDs and their corresponding values of positions and velocities, in the format <atom ID> <x> <y> <z> <vx> <vy> <vz> where <atom ID> is the ID of an atoms, <x>, <y>, <z> are the coordinates of that atom, and <vx>, <vy>, and <vz> are the components of the velocity in the x, y, and z directions. Please note that the stored configurations at other interfaces follow a similar format. (Default: true) saveTrajectories (optional) This flag determines if the FFS trajectories should be saved. (Default: true) Warning Saving trajectories of thousands of atoms may require large amount of storage space. currentInterface (optional) Specifies the interface from which the calculations should start (or continue). This parameter is helpful in restarting a FFS calculation from interfaces other than the initial state A. (Default: 0) outputDirectoryName (optional) Specifies the directory name that contains the output of the FFS calculations including the initial flux, the successful and failed configurations, commitor probabilities, and the trajectories. The output data related to the computation of the initial flux is stored in the file initial_flux_value.dat, and the data related to transition probabilities is stored in the file commitor_probabilities.dat. (Default: “FFSoutput”) ## Tutorial¶ This tutorial will walk you step-by-step through the user example provided with the SSAGES source code that runs the forward flux method on a Langevin particle in a two-dimensional potential energy surface using LAMMPS. This example shows how to prepare a multi-walker simulation (here we use 2 walkers). First, be sure you have compiled SSAGES with LAMMPS. Then, navigate to the Examples/User/ForwardFlux/LAMMPS/Langevin subdirectory. Now, take a moment to observe the in.LAMMPS_FF_Test_1d file to familiarize yourself with the system being simulated. 
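For orientation, the block that goes under "method" can be assembled from the options described in the previous section roughly as follows (the values are illustrative only, not the settings used in this example; the CV definition and the exact nesting inside the full input file are not reproduced here and are easiest to check against the Template_Input.json discussed next):

{
    "type" : "ForwardFlux",
    "flavor" : "DirectForwardFlux",
    "interfaces" : [-1.0, -0.5, 0.0, 0.5, 1.0],
    "nInterfaces" : 5,
    "trials" : [50, 50, 50, 50, 50],
    "N0Target" : 100,
    "computeInitialFlux" : true,
    "saveTrajectories" : false,
    "currentInterface" : 0,
    "outputDirectoryName" : "FFSoutput"
}

All keys shown above are the ones documented in this section; note that the trials array has the same length as the interfaces array, as required.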
The next two files of interest are the Template_Input.json input file and the FF_Input_Generator.py script. These files are provided to help setup sophisticated simulations. Both of these files can be modified in your text editor of choice to customize your input files, but for this tutorial, simply observe them and leave them be. FF_Template.json contains all information necessary to fully specify a walker; FF_Input_Generator.py uses the information in this file and generates a new JSON along with necessary LAMMPS input files. Issue the following command to generate the files: python FF_Input_Generator.py You will produce a file called Input-2walkers.json along with in.LAMMPS_FF_Test_1d-0 and in.LAMMPS_FF_Test_1d-1. You can also open these files to verify for yourself that the script did what it was supposed to do. Now, with your JSON input and your SSAGES binary, you have everything you need to perform a simulation. Simply run: mpiexec -np 2 ./ssages Input-2walkers.json This should run a quick FFS calculation and generate the necessary output. ## Developers¶ • Joshua Lequieu
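As a closing aside, the rate expressions given earlier on this page can be post-processed with a few lines of Python once the per-interface counts are known. The script below is a stand-alone sketch with made-up numbers; it is not part of SSAGES and does not parse the package's output files:

# Post-processing sketch for a direct FFS run (illustrative numbers only).
# Initial flux: Phi_A0 = N0 / t_A, where N0 forward crossings of lambda_0
# were collected during a time t_A spent around state A.
N0 = 100
t_A = 50.0          # ns of simulation time around basin A (made-up value)
phi_A0 = N0 / t_A   # crossings per ns

# Per-interface statistics: k_i trials fired from lambda_i, S_i of them
# reached lambda_{i+1}.  P(lambda_B | lambda_A) is the product of S_i / k_i.
trials = [200, 200, 200, 200]
successes = [60, 35, 22, 90]

P_B_given_A = 1.0
for k_i, S_i in zip(trials, successes):
    P_B_given_A *= S_i / k_i

# Overall rate constant: k_AB = Phi_A0 * P(lambda_B | lambda_A)
k_AB = phi_A0 * P_B_given_A
print(f"Phi_A,0 = {phi_A0:.3f} / ns")
print(f"P(B|A)  = {P_B_given_A:.3e}")
print(f"k_AB    = {k_AB:.3e} / ns")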
http://aimsciences.org/article/doi/10.3934/dcdss.2019102?viewType=html
# American Institute of Mathematical Sciences

## An efficient RFID anonymous batch authentication protocol based on group signature

1 School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, Shaanxi, China
2 The State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an 710071, China
* Corresponding author: Lanjun Dang

Received June 2017; Revised November 2017; Published November 2018

In order to address the anonymous batch authentication problem of a legal reader to many tags in an RFID (Radio Frequency Identification) system, an efficient RFID anonymous batch authentication protocol was proposed based on group signature. The anonymous batch authentications of the reader to many tags are achieved by using a one-time group signature based on a Hash function; the authentication of the tag to the reader is realized by employing a MAC (Message Authentication Code). The tag's anonymity is achieved via the dynamic TID (Temporary Identity) instead of the tag's identity. The proposed protocol can resist replay attacks by using random numbers. Theoretical analyses show that the proposed protocol reaches the expected security goals. Compared with the protocol proposed by Liu, the proposed protocol reduces the computation and storage of the server and tag while improving the security.

Citation: Jie Xu, Lanjun Dang. An efficient RFID anonymous batch authentication protocol based on group signature. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2019102

##### References:

[1] M. Akram and M. Sarwar, Novel applications of m-polar fuzzy hypergraphs, Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology, 32 (2017), 2747-2762.
[2] W.-S. Bae, Formal verification of an RFID authentication protocol based on Hash function and secret code, Wireless Personal Communications, 79 (2014), 2595-2609.
[3] A. Basar and M. Y. Abbasi, On ordered bi-ideals in ordered-semigroups, Journal of Discrete Mathematical Sciences and Cryptography, 20 (2017), 645-652. doi: 10.1080/09720529.2015.1130474.
[4] L. Batina, Y. K. Lee and S. Seys, et al., Extending ECC-based RFID authentication protocols to privacy-preserving multi-party grouping proofs, Personal and Ubiquitous Computing, 16 (2012), 323-335.
[5] X. Cao, W. Kou and H. Li, Secure mobile IP registration scheme with AAA from pairings to reduce registration delay, CIS 2006, New York: IEEE Press, 2006, 1037-1042.
[6] W. Gao and W. F. Wang, A tight neighborhood union condition on fractional (g, f, n', m)-critical deleted graphs, Colloquium Mathematicum, 149 (2017), 291-298. doi: 10.4064/cm6959-8-2016.
[7] J. B. Gurubani, H. Thakkar and D. R. Patel, Improvements over extended LMAP+: RFID authentication protocol, Proceedings of 6th International Conference on Trust Management IFIPTM, Surat: Springer Boston, 2012, 225-231.
[8] D. He, N. Kumar and N. Chilamkurti, et al., Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol, Journal of Medical Systems, 38 (2014), 116.
[9] A. Juels, Strengthening EPC Tag against Cloning, Proceedings of ACM Workshop on Wireless Security, Cologne, 2005, 67-76.
[10] M. Kianersi, M. Gardeshi and M. Arjmand, SULMA: A secure ultra light-weight mutual authentication protocol for low-cost RFID tags, International Journal of UbiComp (IJU), 2 (2011), 17-24.
[11] S. Li, Handwritten character recognition technology combined with artificial intelligence, Journal of Discrete Mathematical Sciences and Cryptography, 20 (2017), 167-178.
[12] H. Liu, X. Li and J. Bai, A new one-time group signature based on Hash function, Journal of Beijing Electronic Science and Technology Institute, 21 (2013), 25-29.
[13] J. Liu, R.-J. Chen and D.-S. Yan, et al., Efficient identity-based ring signature for RFID authentication scheme, Proceedings of the IEEE International Conference on RFID-Technology and Applications, Guangzhou: IEEE, 2010, 7-10.
[14] Y. L. Liu, X. L. Qin and B. H. Li, et al., A Forward-Secure Grouping-proof protocol for Multiple RFID tags, International Journal of Computational Intelligence Systems, 5 (2012), 824-833.
[15] M. Ohkubo, K. Suzuki and S. Kinoshita, Hash-chain based forward secure privacy protection scheme for low-cost RFID, Proceedings of the 2004 Symposium on Cryptography and Information Security (SCIS 2004), Sendai, 2004, 719-724.
[16] S. E. Sarma, S. A. Weis and D. W. Engels, RFID systems and security and privacy implications, Proceedings of the 4th International Workshop on Cryptographic Hardware and Embedded Systems (CHES 2002), LNCS, 2523, Berlin: Springer-Verlag, 2003, 454-469.
[17] Y. Tian, G. L. Chen and J. Li, A New Ultralightweight RFID Authentication Protocol with Permutation, IEEE Communications Letters, 16 (2012), 702-705.
[18] S. A. Weis, S. E. Sarma, R. L. Rivest and D. W. Engels, Security and privacy aspects of low-cost radio frequency identification systems, Proceedings of the 1st International Conference on Security in Pervasive Computing, LNCS, 2802, Berlin: Springer-Verlag, 2004, 719-724.
[19] J. P. de Wet and S. A. van Aardt, Traceability of locally Hamiltonian and locally traceable graphs, Discrete Mathematics and Theoretical Computer Science, 17 (2016), 245-262.
Figures: A typical RFID system; The proposed RFID batch authentication protocol based on group signature; The comparison of the calculation time of the server in the two protocols; The comparison of the storage amount of the tag in the two protocols; The comparison of the storage amount of the server in the two protocols.

Notations

| Symbol | Meaning |
| --- | --- |
| $K_{ID_i}$ | authentication key of each tag, used to authenticate a reader |
| $K_{i}$ | private key of each tag in the group signature scheme |
| $X_{i}$ | exclusive-OR of the Hash values of the $n$ strings in one tag's private key |
| $Y$ | group public key |
| $C_{i}$ | exclusive-OR of the other $m-1$ tags' $X$ values, excluding the tag that generated the group signature $\sigma$ |
| $\sigma =(\sigma_{1}, \sigma_{2}, \ldots, \sigma_{n}, C_{i})$ | the group signature generated by one tag |
| ID$_{i}$ | one tag's identity information |
| MAC$_K(M)$ | MAC value of message $M$ under key $K$ |
| $\vert\vert$ | concatenation of two data items |

The security comparisons of the two protocols

| | Mutual authentication | Tag anonymity | Message confidentiality | Message integrity | Message freshness |
| --- | --- | --- | --- | --- | --- |
| The protocol [13] | $\backslash$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ |
| Our protocol | $\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ |

The performance comparisons of the two protocols

| | Tag's calculation | Server's calculation | Tag's storage | Server's storage |
| --- | --- | --- | --- | --- |
| The protocol [13] | 0 | $m$SM+2$P$ | 20$k(m+2)$ bytes | 20$(k+m)$ bytes |
| Our protocol | 82$h$ | $(m+81)h$ | 3260 bytes | $(42m+20)$ bytes |

The cryptography operation times of the server (ms)

| Pairing | Scalar multiplication | Hash operation |
| --- | --- | --- |
| 3.16 | 0.79 | 0.0002 |
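The notation table pins down the building blocks of the scheme even though the full protocol appears only in the paper. As a purely illustrative sketch (the hash choice, key sizes, and group size below are made-up parameters, not the paper's), the XOR-of-hashes quantities $X_i$ and $C_i$ can be computed as follows:

```python
import hashlib
from functools import reduce
from secrets import token_bytes

H = lambda s: hashlib.sha256(s).digest()           # stand-in hash function
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

n, m = 8, 4                                         # strings per key, tags per group (made up)
private_keys = [[token_bytes(16) for _ in range(n)] for _ in range(m)]

# X_i: exclusive-OR of the hash values of the n strings in tag i's private key
X = [reduce(xor, (H(s) for s in key)) for key in private_keys]

# C_i: exclusive-OR of the other m-1 tags' X values (here for tag i = 0)
i = 0
C_i = reduce(xor, (X[j] for j in range(m) if j != i))
```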
2018-12-11 07:42:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32400253415107727, "perplexity": 6845.770626849137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823588.0/warc/CC-MAIN-20181211061718-20181211083218-00533.warc.gz"}
https://www.entrance360.com/engineering/question-a-bullet-is-fired-from-a-gun-the-force-is-given-by-1470/
# A bullet is fired from a gun. The force is given by...

A bullet is fired from a gun. The force is given by $F=600-2\times10^{5}t$. The force on the bullet becomes zero as soon as it leaves the barrel. What is the average impulse imparted to the bullet?

Answer (safeer):

We are given $F=600-2\times10^{5}t$ (force in newtons, $t$ in seconds). As the bullet leaves the barrel, the force on it becomes zero, so

$F=600-2\times10^{5}t=0 \Rightarrow t=\frac{600}{2\times10^{5}}=3\times10^{-3}\ \mathrm{s}$

Then the impulse imparted to the bullet is

$I=\int_{0}^{t}F\,dt=\int_{0}^{3\times10^{-3}}(600-2\times10^{5}t)\,dt=\left[600t-10^{5}t^{2}\right]_{0}^{3\times10^{-3}}=600\times3\times10^{-3}-10^{5}\times(3\times10^{-3})^{2}=1.8-0.9=0.9\ \mathrm{N\,s}$
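As a quick numerical sanity check on the arithmetic above (an illustrative snippet, not part of the original answer):

```python
# Numerically verify the impulse I = ∫ F dt with F(t) = 600 - 2e5 * t,
# integrated from t = 0 until the force vanishes at t = 3e-3 s.
F = lambda t: 600 - 2e5 * t
t_end = 600 / 2e5                      # 3e-3 s, where F(t) = 0

N = 100_000
dt = t_end / N
impulse = sum(F((k + 0.5) * dt) for k in range(N)) * dt   # midpoint rule
print(round(impulse, 6))               # ≈ 0.9 N·s
```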
2019-03-20 03:21:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458637833595276, "perplexity": 3699.8299417077765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202199.51/warc/CC-MAIN-20190320024206-20190320050206-00104.warc.gz"}
http://mathoverflow.net/feeds/question/47908
When is the group of homeomorphisms of a compact space locally compact? - MathOverflow
http://mathoverflow.net/questions/47908/when-is-the-group-of-homeomorphisms-of-a-compact-space-locally-compact

Question (Spencer, 2010-12-01):

When is the group of homeomorphisms of a compact space locally compact?

I am interested in finding out when the group of homeomorphisms of a compact topological space $X$ (with an appropriate topology, e.g. 'weak' or compact-open) is a locally compact space. What extra conditions might we be able to put on $X$ to ensure that it is so?... What if $X$ is, say, a metric space and we ask when the isometry group is locally compact?

Answer by Theo Buehler:

I do not know what you mean by automorphism group; I guess you mean homeomorphisms. In that case the answer is no. For instance, the homeomorphisms of the circle are in one-to-one correspondence with continuous strictly monotone functions $[0,1] \to \mathbb{R}$ such that $f(0) \in [0,1)$ and $f(1) = f(0)\pm 1$. The compact-open topology just means uniform convergence, and this obviously is not a locally compact space. As for local compactness of the isometry group, it follows from the Arzelà-Ascoli theorem that the isometry group of a proper metric space (i.e., one in which closed balls are compact) is locally compact.

Answer by Keivan Karai:

For a (connected) smooth Riemannian manifold $M$, it has been shown by Myers and Steenrod that the group of isometries is a Lie group, hence is locally compact. On the other hand, the group of homeomorphisms of a smooth manifold $M$ is never locally compact. When the dimension is at least $2$, this group acts $k$-transitively for any $k$ on $M$, and from here I think it should be easy to show that the group is not locally compact.
2013-05-25 08:27:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839036822319031, "perplexity": 268.69393524260994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705790741/warc/CC-MAIN-20130516120310-00032-ip-10-60-113-184.ec2.internal.warc.gz"}
https://atomica.tools/docs/master/general/junctions/Junctions.html
# Junction overview¶

Junctions offer a way to re-parametrize transitions when the relative proportions of outflows are constrained. As discussed in other pages, they are special compartments that are emptied entirely at the end of each timestep. As a result, the total number of people leaving the compartment is not determined by any parameter values, because it directly corresponds to the number of people in the compartment. Instead, the parameter values are all proportions that govern the relative outflows. The general layout of a junction is shown below:

In this example, any parameters supplying values for transitions A, B, and C all need to be in 'proportion' units. During simulation, the sum of outflows A, B, and C would equal 100 people.

Junctions come in two varieties:

1. A junction where all of the outflows have been explicitly specified. In this case, outflows are proportionately rescaled across all transitions. This is implemented in the Junction class.
2. A junction where all outflows except one have been specified. In this case, any residual outflow is flushed to a single compartment. This is implemented in the ResidualJunction class.

The type of junction depends on the 'Transitions' sheet in the Framework file. For a regular junction, simply enter the transitions the same as for any compartment. The schematic above could be implemented as

The role of a residual junction is to make it easier to implement junctions where one of the outflows balances all of the others. For example, if the schematic at the top of the page satisfied $$C=1-A-B$$. In that case, rather than defining a parameter for C, the junction could instead be written as

This syntax means that the flow from the junction to compartment_3 would equal max(0,1-A-B). That is, if $$A+B<1$$ then the residual will be assigned to $$C$$. Consider the following examples for the compartments shown in the schematic above (a sketch of the corresponding outflow calculation follows the examples):

Example 1 - scale up, normal junction

| Transition | Parameter value | Outflow |
| --- | --- | --- |
| A | 0.1 | 20 people |
| B | 0.3 | 60 people |
| C | 0.1 | 20 people |

In this example, all outflows are specified and they sum to a value of 0.5. As a result, they are all rescaled so that the total outflows equal 1.

Example 2 - scale up, residual junction

| Transition | Parameter value | Outflow |
| --- | --- | --- |
| A | 0.1 | 10 people |
| B | 0.3 | 30 people |
| C | > | 60 people |

In this example, the outflow C is specified as a residual, and parameter values provided only for A and B. The sum of the provided outflows is 0.4. However, instead of being rescaled to 1 like in the previous example, the remaining 0.6 is assigned to C. Therefore, 60 people move via C. The total outflow is still 100 people, as required to empty the junction.

Example 3 - scale down, normal junction

| Transition | Parameter value | Outflow |
| --- | --- | --- |
| A | 0.6 | 33.3 people |
| B | 0.6 | 33.3 people |
| C | 0.6 | 33.3 people |

In this example, the outflows all sum to a value greater than 1. All of them get rescaled proportionately, so the outflow is the same for all three transitions, and the total outflow is 100.

Example 4 - scale down, residual junction

| Transition | Parameter value | Outflow |
| --- | --- | --- |
| A | 0.6 | 50 people |
| B | 0.6 | 50 people |
| C | > | 0 people |

In this example, the provided outflows sum to a value greater than one. Therefore, the provided outflows are rescaled to 1, and no flow is assigned to the residual transition.
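Here is the outflow calculation sketched in Python. This is an illustrative re-implementation of the rules described above, not Atomica's actual code; the function name and structure are made up. It reproduces the four examples for a junction containing 100 people:

```python
def junction_outflows(people, params, residual=False):
    """Split `people` across outflows given proportion parameters.

    params   -- dict of transition name -> proportion parameter value
    residual -- if True, a final residual transition 'C' receives
                max(0, 1 - sum(params)); otherwise all outflows are
                rescaled so that they sum to 1.
    """
    total = sum(params.values())
    if residual:
        scale = 1.0 if total <= 1 else 1.0 / total   # rescale down if needed
        flows = {k: people * v * scale for k, v in params.items()}
        flows["C"] = people * max(0.0, 1.0 - total * scale)
    else:
        flows = {k: people * v / total for k, v in params.items()}
    return flows

print(junction_outflows(100, {"A": 0.1, "B": 0.3, "C": 0.1}))          # Example 1
print(junction_outflows(100, {"A": 0.1, "B": 0.3}, residual=True))     # Example 2
print(junction_outflows(100, {"A": 0.6, "B": 0.6, "C": 0.6}))          # Example 3
print(junction_outflows(100, {"A": 0.6, "B": 0.6}, residual=True))     # Example 4
```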
2023-03-23 14:56:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7007651329040527, "perplexity": 992.0056830220674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945168.36/warc/CC-MAIN-20230323132026-20230323162026-00038.warc.gz"}
http://mathonline.wikidot.com/reflexive-normed-linear-spaces
# Reflexive Normed Linear Spaces

Definition: Let $X$ be a normed linear space. Then $X$ is said to be Reflexive if $J(X) = X^{**}$.

Observe that a normed linear space $X$ being reflexive is equivalent to saying that the canonical embedding $J : X \to X^{**}$, defined for all $x \in X$ by $J(x) = J_x$, is surjective. The following theorem tells us exactly when a normed linear space $X$ is reflexive.

Theorem 1: Let $X$ be a normed linear space. Then $X$ is reflexive if and only if the weak* topology on $X^*$ is the weak topology on $X^*$.

• Proof: $\Rightarrow$ Suppose that $X$ is reflexive. Then:

(1)
\begin{align} \quad J(X) = X^{**} \quad (\dagger) \end{align}

• Now by definition, the weak* topology on $X^*$ is the $J(X)$-weak topology on $X^*$.
• Also by definition, the weak topology on $X^*$ is the $(X^*)^* = X^{**}$-weak topology on $X^*$.
• By $(\dagger)$ these two topologies on $X^*$ are the same.
• $\Leftarrow$ Let $\Omega \in X^{**}$. Then $\Omega : X^* \to \mathbb{C}$ is continuous with respect to the weak topology on $X^*$. But then $\Omega$ is continuous with respect to the weak* topology on $X^*$, i.e., $\Omega$ is continuous with respect to the $J(X)$-weak topology on $X^*$. By the theorem on The W-Weak Topology on a Normed Linear Space page, we must have that:

(2)
\begin{align} \quad \Omega \in J(X) \end{align}

• Therefore:

(3)
\begin{align} \quad X^{**} \subseteq J(X) \end{align}

• And since $J : X \to X^{**}$, we also have that $X^{**} \supseteq J(X)$. Therefore:

(4)
\begin{align} \quad X^{**} = J(X) \end{align}

• So $X$ is reflexive. $\blacksquare$
2017-08-21 12:05:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971263408660889, "perplexity": 121.2375882346354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108268.39/warc/CC-MAIN-20170821114342-20170821134342-00713.warc.gz"}
http://www.ncatlab.org/nlab/show/generalized+the
# Contents

## Idea

In natural language, the definite article ('the' in English) is generally used only for nouns which are uniquely characterized by context. Hence we have "the United States of America" and "the book I was just reading," but only "a car" or "a wild late-night party." (Sometimes "the book I was just reading" is abbreviated to "the book", but it should be clear from context that only one book could be meant.)

In mathematics, and especially in category theory, homotopy theory and higher category theory, it is common to use "the" more generally for something which is characterized uniquely up to unique coherent isomorphism (that is, a unique isomorphism appropriate given the context). Thus, for instance, we speak (assuming that any exists) of "the" terminal object of a category, "the" product of two objects, "the" left adjoint of a functor, and so on. Outside of pure category theory we have examples such as "the" Dedekind-complete ordered field (the field of real numbers).

In higher category theory, we extend this usage to objects that are characterized uniquely up to unique coherent equivalence. Of course, by "unique equivalence" we mean "unique up to 2-equivalence," and so on. A more homotopy-theoretic way to say this is that the space ($\infty$-groupoid) of all such objects is contractible.

## Formalization

The notion of a "generalized the" can be formalized and treated uniformly in homotopy type theory. Here one can define an introduction rule for the as follows:

$(A:Type),(t:IsContr(A)) \vdash (the(A,t):A).$

Here the term $t$ is one witness for the contractibility of the type $A$. Since $IsContr(A)$ is itself contractible, we could say that $t$ is the witness for the contractibility of the type $A$, which may explain why we do not generally mention it.
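To make the introduction rule concrete, here is a minimal Lean 4 sketch; the definitions of IsContr, the, and unitContr below are ad hoc names invented for this illustration (not taken from the nLab page or from any particular library), but they mirror the rule above:

```lean
-- A self-contained notion of contractibility: a chosen center together
-- with a proof that every inhabitant equals that center.
structure IsContr (A : Type) where
  center : A
  contr  : ∀ a : A, a = center

-- The "generalized the": from a contractibility witness, extract the
-- (essentially unique) inhabitant of A.
def the (A : Type) (t : IsContr A) : A :=
  t.center

-- Example: the unit type is contractible, so `the` picks out ().
def unitContr : IsContr Unit :=
  { center := (), contr := fun _ => rfl }

#eval the Unit unitContr   -- ()
```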
2014-09-16 17:29:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636869192123413, "perplexity": 621.2722039650623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657118950.27/warc/CC-MAIN-20140914011158-00164-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://tex.stackexchange.com/questions/573025/tier-in-forest-tree-how-to-align-properly
# Tier in Forest tree - how to align properly

I've created this tree in LaTeX using the package forest. However, I cannot understand why the command tier doesn't work properly. I should have F, P, D and R aligned on the same line and M, S and L on a different line, just a bit above the other one. The position of m and n is fine. Could you help me? Thanks

\documentclass[border=0.5cm]{standalone}
\usepackage{forest}
\forestset{
  nice empty nodes/.style={
    for tree={calign=fixed edge angles},
    delay={where content={}{shape=coordinate, for current and siblings={anchor=north}}{}}
  }
}
\begin{document}
\begin{forest}
[$\Omega$, nice empty nodes [G] [[[F, tier=word][[P, tier=word][[D, tier=word][[R, tier=word][M, tier=word1[m][n]]]]]][[S, tier=word1][L, tier=word1]]]]]]
\end{forest}
\end{document}

Answer:

If I understood you correctly, then you would like to achieve the following result:

It is obtained by changing the declaration of the node anchors:

\documentclass[border=0.5cm]{standalone}
\usepackage{forest}
\begin{document}
\begin{forest}
  for tree={calign=fixed edge angles, anchor=north},
  delay={where content={}{shape=coordinate}{}},
  %
  [$\Omega$
    [G]
    [
      [
        [F, tier=word]
        [
          [P, tier=word]
          [
            [D, tier=word]
            [
              [R, tier=word]
              [M, tier=word
                [m]
                [n]
              ]
            ]
          ]
        ]
      ]
      [
        [S, tier=word]
        [L, tier=word]
      ]
    ]
  ]
\end{forest}
\end{document}

• That's perfect!! Thank you very much, you've helped me a lot! Nov 30 '20 at 19:39
• @AndreaC, you are welcome! Nov 30 '20 at 19:41
2021-10-24 08:31:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8383981585502625, "perplexity": 5328.804726935711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00523.warc.gz"}
http://bkms.kms.or.kr/journal/view.html?doi=10.4134/BKMS.b180249
On strongly Gorenstein hereditary rings

Bull. Korean Math. Soc. 2019 Vol. 56, No. 2, 373-382
https://doi.org/10.4134/BKMS.b180249
Published online March 1, 2019

Kui Hu, Hwankoo Kim, Fanggui Wang, Longyu Xu, Dechuan Zhou
Southwest University of Science and Technology; Hoseo University; Sichuan Normal University; Southwest University of Science and Technology; Southwest University of Science and Technology

Abstract: In this note, we mainly discuss strongly Gorenstein hereditary rings. We prove that for any ring, the class of $SG$-projective modules and the class of $G$-projective modules coincide if and only if the class of $SG$-projective modules is closed under extension. From this we get that a ring is an $SG$-hereditary ring if and only if every ideal is $G$-projective and the class of $SG$-projective modules is closed under extension. We also give some examples of domains whose ideals are $SG$-projective.

Keywords: strongly Gorenstein projective module, strongly Gorenstein hereditary ring, strongly Gorenstein Dedekind domain
MSC numbers: 13G05, 13D03
2020-01-28 00:32:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.270487517118454, "perplexity": 2355.963457798734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251737572.61/warc/CC-MAIN-20200127235617-20200128025617-00061.warc.gz"}
https://aptitude.gateoverflow.in/6164/cat-2019-set-2-question-88
Let $A$ be a real number. Then the roots of the equation $x^{2}-4x-\log _{2}A=0$ are real and distinct if and only if

1. $A> \frac{1}{16}$
2. $A> \frac{1}{8}$
3. $A< \frac{1}{16}$
4. $A< \frac{1}{8}$

The answer should be option (A).

For a quadratic equation to have real and distinct roots, its discriminant must be strictly greater than zero.

$\therefore$ $b^{2}-4ac\gt0$ $\Rightarrow$ $16-4(-\log_{2}A)\gt0$ $\Rightarrow$ $\log_{2}A\gt-4$ $\Rightarrow$ $A\gt \frac{1}{16}$
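A quick check of the boundary case (added here for illustration; it is not part of the original answer) shows why the inequality must be strict:

$A=\tfrac{1}{16}\Rightarrow \log_{2}A=-4 \Rightarrow x^{2}-4x+4=(x-2)^{2}=0,$

so at $A=\tfrac{1}{16}$ the equation has the repeated root $x=2$ rather than two distinct real roots, while any $A>\tfrac{1}{16}$ gives a strictly positive discriminant.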
2022-12-06 08:21:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7967793941497803, "perplexity": 1416.601519890572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00630.warc.gz"}
https://simons.berkeley.edu/talks/omri-weinstein-2015-04-20
Talks Spring 2015

# Welfare Maximization with Limited Interaction: Information and Communication in Economics

Monday, Apr. 20, 2015, 3:30 pm to 4:00 pm
Location: Calvin Lab Auditorium

We continue the study of welfare maximization in unit-demand (matching) markets, in a distributed information model where agents' valuations are unknown to the central planner, and therefore communication is required to determine an efficient allocation. Dobzinski, Nisan and Oren (STOC'14) showed that if the market size is n, then r rounds of interaction (with logarithmic bandwidth) suffice to obtain an n^{1/(r+1)}-approximation to the optimal social welfare. In particular, this implies that such markets converge to a stable state (constant approximation) in time logarithmic in the market size. We obtain the first multi-round lower bound for this setup. We show that even if the allowable per-round bandwidth of each agent is n?(r), the approximation ratio of any r-round (randomized) protocol is no better than Ω(n^{1/5^{r+1}}), implying an Ω(log log n) lower bound on the rate of convergence of the market to equilibrium. Our construction and techniques may be of interest to round-communication tradeoffs in the more general setting of combinatorial auctions, for which the only known lower bound is for simultaneous (r = 1) protocols [DNO14].
2022-01-16 20:11:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160111308097839, "perplexity": 1803.635061765016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00324.warc.gz"}
https://www.physicsforums.com/threads/derivative-of-complex-variables.542190/
# Derivative of Complex Variables

1. Oct 19, 2011 ### S_David

Hi,

What is the following derivative: $$\frac{\partial}{\partial x}|b-ax|^2$$?

Now I know that $$|b-ax|^2=(b-ax)(b^*-a^*x^*)$$, so how to do the differentiation with respect to $$x^*$$?

PS.: All variables and constants are complex.

2. Oct 20, 2011 ### Staff: Mentor

Is this a homework question?

3. Oct 20, 2011 ### lurflurf

It is not clear what that derivative means. A guess would be that it is a Wirtinger derivative, in which case we have $$\frac{\partial x^*}{\partial x}=0$$

4. Oct 20, 2011 ### S_David

No it is not. So lurflurf, are you saying that the derivative will be: $$-a(b-ax)^*$$ I thought it would be like: $$-(b-ax)a^*$$ but I couldn't prove it.

5. Oct 20, 2011 ### lurflurf

Again it is not clear what that derivative means, but a good guess would be -a(b-ax)*. Then we have d|b-ax|^2 = -a(b-ax)* dx - a*(b-ax) dx*. Notice that |b-ax|^2 is definitely not complex differentiable, as it depends upon x and x* rather than upon x alone.

6. Oct 20, 2011 ### S_David

No, I just need the derivative $$\frac{\partial}{\partial x}$$, where the derivative is partial with respect to x. I see some books writing x as a+jb, and then computing the derivatives with respect to a and b. I was just wondering if there is another way to do this.

Thanks

7. Oct 20, 2011 ### lurflurf

There are different ways because a complex variable can change in more ways than a real variable. We can work with different coordinates or none. So a function of a complex variable can be described in different ways with two variables: |x| and arg(x); Re(x) and Im(x); x and x*; and so on. Since you are interested in the x partial, x and x* are natural, but you could use any set and the chain rule to find the x partial.

$$\frac{\partial}{\partial x}=\frac{\partial u}{\partial x}\frac{\partial}{\partial u}+\frac{\partial v}{\partial x}\frac{\partial}{\partial v}$$

However you want to choose u and v
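To spell out the Wirtinger-style computation behind lurflurf's guess (treating $x$ and $x^*$ as independent variables, as suggested in post 3):

$$\frac{\partial}{\partial x}|b-ax|^2=\frac{\partial}{\partial x}\left[(b-ax)(b^*-a^*x^*)\right]=-a(b^*-a^*x^*)=-a(b-ax)^*$$

and similarly $\frac{\partial}{\partial x^*}|b-ax|^2=-a^*(b-ax)$, which matches the two terms of the differential written in post 5.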
2018-02-18 05:55:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6471289396286011, "perplexity": 972.5102248435917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811655.65/warc/CC-MAIN-20180218042652-20180218062652-00294.warc.gz"}
https://www.enotes.com/homework-help/topic/science
• Science Telophase is a stage of mitosis. Mitosis is one type of cellular division, and its goal is to create an identical daughter cell to the original cell. Most of the cellular division that happens in...
• Science The phospholipid bilayer that this question is asking about is the cell membrane. The cell membrane is made of mostly a double layer of phospholipids, cholesterol, and fatty acids. The layer has a...
• Science Let's start by giving Jules Verne a bunch of credit for writing From the Earth to the Moon, because his fictional story was scientifically accurate in quite a few ways. Keep in mind that he wrote...
• Science Egg and Sperm Cells In plants and animals, a whole organism is sexually reproduced from a zygote, a product of fusion between egg and sperm. Zygotes are...
• Science Although dividing numbers in scientific notation may be intimidating, a simple process can be followed to make the division more simple. The reason this process works is because when you are dealing with...
• Science Saturn is a unique planet in our solar system, though not the only kind of its type. Saturn has a stunning array of icy rings, which were formed through various cataclysmic events, typically the...
• Science Stereochemistry is a field of chemistry that studies the spatial arrangement of atoms and the three-dimensional structure of molecules. As the name suggests, stereochemistry primarily focuses on...
• Science The cell cycle is the entire "life" of a cell. It includes a cell's periods of growth, development, and division. The cell cycle is divided into three main parts, and those parts are interphase,...
• Science This is a fairly straightforward question. Knowing that the rate of a reaction will double for every 10 degrees Celsius the temperature rises, you can easily figure out how much the rate will...
• Science To multiply numbers when they are in scientific notation, you need to approach the coefficients and their exponents in two separate steps. For example, let's say we have the expression (1.8 x...
• Science They spent all their time watching the heavens. Sure, they didn't have telescopes and such, but the sorts of observations they were making didn't need those. All they needed was a lot of...
• Science Nucleic acids, which store and express genetic information and are essential in protein synthesis, are made of nucleotides. Both DNA and RNA are constructed of long, connected chains of...
• Science Nuclear fission is the process of bombarding high-density, high-molecular-weight atoms with neutrons in order to cause them to split, thereby releasing two lower-molecular-weight atoms and a...
• Science The results of the study presented in the article found that the Distress Thermometer (DT) was successful in screening for patients suffering from both psychological and physical problems as a...
• Science Orrorin tugenensis is an early hominin species that lived approximately 6 million years ago. The first fossils of this species were discovered on October 25, 2000. Due to the year of the discovery,...
• Science That depends on the specific infection. For instance, Sickle Cell Anemia distorts the normal circular shape of the red blood cells. In general, however, every infection causes an increase in...
• Science Charles Darwin is famous for his theory of natural selection. In his 1859 book entitled On the Origin of Species Darwin theorized that organisms change over time through a process called evolution....
• Science G1 refers to a period in the eukaryotic cell cycle at the beginning of interphase. In general, cells in interphase are growing and synthesizing mRNA and proteins to prepare for mitosis, which is...
• Science Whether or not a virus is alive depends on your definition of life. The simplest definition is that living things can replicate themselves and respond to their environment. According to this...
• Science Carbon makes up all life on Earth. The carbon cycle describes the movement of carbon between different areas. Diffusion is the process through which carbon moves between the surface of the ocean...
• Science Animal cells and plant cells are similar in several ways. For example, both are composed of a nucleus and various organelles. Although there are similarities between these two types of cells, they...
• Science As a writing prompt, this prompt is very subjective. Just about anything you cite as a positive is likely to be argued against by someone else's subjective opinion. The other difficult part of the...
• Science While the coronavirus has been an upsetting event that has shut down economies around the world, raised unemployment, and cost hundreds of thousands of lives, aspects of the disease that should...
• Science Arguments could be made on both sides as to whether or not the Nexus-6 replicants in Blade Runner were human. It largely depends on how one defines a human being. In fact, the entire thematic...
• Science The source that I am linking below, an article from The Conversation, fails to demonstrate quality research for several reasons. It does take a clear position in favor of genetic modifications...
• Science This appears to be a completely open-ended question that is open to individual responses. There isn't a definitively correct answer, so feel free to explore your thoughts about contacting an...
• Science Buffers are important biologically because they keep the pH from deviating rapidly out of the range needed for homeostasis. For example, blood has a pH of 7.4. If it increased to around 7.7, the change...
• Science Stoichiometry is commonly used in chemistry to determine the quantitative data related to the amounts of reactants used and products generated in a given chemical reaction. The key requirement for...
• Science Scientific notation is a way of expressing very large or very small numbers. The form of writing a number in scientific notation is as follows: a x 10^n where a is a number between 1 and...
• Science Marsupials reproduce and give birth to partially developed offspring, whereas placental mammals give birth to fully developed offspring.
• Science Gold has a density of 19.3 g/cc. The density of a substance is defined as the mass of a unit volume of the same. If a substance has a density of X g/cc, 1 cc of the substance has a mass of X g....
• Science The most common calculation for optimal heart rate during exercise is as follows: Subtract your age in years from the number 220. So if you are 20, this number is 200. This gives you your maximum...
• Science The reaction for HNO3 and KOH is given by, HNO3 + KOH -------> KNO3 + H2O Therefore 1 mol of HNO3 will neutralize 1 mol of KOH. The amount of KOH in 39 mL of 2.0 M KOH is,...
• Science The Earth is farther from the sun than Mercury. Therefore, the length of an Earth year is longer than the length of a Mercury year. One Earth year is 365.26 Earth days. One Mercury year is 87.96...
• Science Ferric ion or Iron (III) can be written as the Fe^(3+) ion. Ferrous ion or Iron (II) can be written as the Fe^(2+) ion. Their main differences are the oxidation states. When an elemental iron loses 2...
• Science Pure ammonia is a gas. Household ammonia that you buy in a bottle in a store is a solution of NH3 in water. So ammonia is definitely not a salt. Ammonia can technically be either an acid or a base....
• Science Assuming the slab is glass with mu_r = 1.5, the relative electric permittivity is epsilon_r = 4.7. The speed of electromagnetic waves in a medium (including the speed of light) is v...
• Science The formula relating the current I that flows through a conductor of resistance r if a voltage V is applied across it is V = I*r. In the question, the electric water heater draws 10 A of current from a 240-V...
• Science This question has been answered at: https://www.enotes.com/science/q-and-a/an-object-placed-20cm-infront-converging-lens-332066
• Science Multicellular organisms are made of more than one cell ("multi" = many). Unicellular organisms are made of only one cell ("uni" = one). There are six kingdoms within the classification of...
• Science Tropic hormones are hormones that are secreted by the anterior (front) of the pituitary gland. Thyroid stimulating hormone (TSH) is one example of a tropic hormone. The release of this hormone, and...
• Science I think the answer to your question is that self-reproducing organisms usually grow exponentially, whereas sexually reproducing organisms will usually reproduce arithmetically. There are always...
• Science Frequency is simply cycles per second, or Hertz (Hz). Since you dip your finger in the water twice per second, the frequency is 2 Hz. The velocity of a wave (v) is equal to the length of the wave...
• Science We can solve this problem using the ideal gas equation. It is expressed as: PV = nRT. First we have to correct the pressure that is exerted by the gas since we are collecting it in water. P_(t o...
• Science MnO_4^(-) → Mn^(2+), Fe^(2+) → Fe^(3+); MnO_4^(-) + 8H^+ → Mn^(2+) + 4H_2O, Fe^(2+) → Fe^(3+); MnO_4^(-) + 8H^+ + 5e → Mn^(2+) + 4H_2O, Fe^(2+) → Fe^(3+) + e; MnO_4^(-) : Fe^(2+) = ...
• Science The correct answer to the question here is D. The archegonium produces eggs and the sporangium produces spores. The archegonium is part of the gametophyte phase and it usually has a long neck for...
• Science Hello! There is at least one metal with this property, plutonium (Pu). It is a radioactive element used in nuclear weapons and as an energy source in peace applications. As a substance, without...
• Science In your chemical equation, Magnesium is added to a Copper(II) Sulfate solution. A displacement reaction occurs. The Copper becomes a solid precipitate and Magnesium replaces it in the solution....
2020-07-09 22:25:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5505041480064392, "perplexity": 1934.112451619165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655901509.58/warc/CC-MAIN-20200709193741-20200709223741-00434.warc.gz"}
https://socratic.org/questions/how-do-you-factor-3z-2-3z-2-27z
# How do you factor 3z^2-3z^2-27z? Jun 19, 2018 $- 27 z$ #### Explanation: I'm going to do this problem assuming you meant to have two second-degree terms: The first two terms cancel, which leaves us with $- 27 z$ This expression, since it is a monomial, cannot be factored. Hope this helps!
2019-12-15 23:26:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182940125465393, "perplexity": 1353.336247648988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541310970.85/warc/CC-MAIN-20191215225643-20191216013643-00279.warc.gz"}
https://math.stackexchange.com/questions/1175500/differentiablility-of-fx-xm-sin1-xn
# Differentiability of f(x) = $x^m \sin(1/x^n)$

My attempt: I have to choose one from option A and option D. Option B can be eliminated by taking m=1, n=2. Option C can also be eliminated by taking m=4, n=3. Please help me choose between A and D. Thanks.

$f'(x)=mx^{m-1}\sin(1/x^n)-nx^{m-n-1}\cos(1/x^n)$ (for $x \neq 0$). The sine and cosine functions are continuous; therefore differentiability arises if there are factors $x^a$ with $a \geq 0$. Otherwise the derivative of $f(x)$ would have a jump at $x=0$.

• which option seems correct then? – ketan Mar 4 '15 at 19:00
• A and D, because (A): $x^{m-n-1}=x^q$ with $q>0$ and for $m=1$ it must hold (strict inequality!) $n=0$, the 2nd term vanishes. – kryomaxim Mar 4 '15 at 19:05
• but only one of them is correct. – ketan Mar 4 '15 at 19:06
• (D) is not correct; Ex.: m=0, n=1: the first term vanishes and the second has the factor $x^0$. – kryomaxim Mar 4 '15 at 19:09

I'll assume $n\ge1$, otherwise the problem is trivial. Use the definition of derivative: the derivative exists if and only if $$f'(0)=\lim_{x\to 0}x^{m-1}\sin\frac{1}{x^n}$$ exists and is finite. If $m>1$, this limit exists and is zero, because, for $x\ne0$, $$-|x^{m-1}|\le x^{m-1}\sin\frac{1}{x^n}\le |x^{m-1}|$$ and the squeeze theorem applies, so $f'(0)=0$. If $m\le1$ the limit does not exist. No hypothesis whatsoever is needed on $n$. So, with this interpretation, A and D are true.

The case would be much different if the question is "does the function have a continuous derivative at $0$?" But "differentiable at $0$" usually means "the function has a derivative at $0$". If continuous differentiability is required, then we know that $m>1$ and $f'(0)=0$, in this case. Moreover, for $x\ne0$, $$f'(x)=mx^{m-1}\sin\frac{1}{x^n}-nx^{m-n-1}\cos\frac{1}{x^n}$$ and this is continuous everywhere, except possibly at $0$. The limit of this function at $0$ should be $0$. Since $m>1$, we already know that $$\lim_{x\to0}x^{m-1}\sin\frac{1}{x^n}=0$$ so we also need that $$\lim_{x\to0}x^{m-n-1}\cos\frac{1}{x^n}=0$$ Note that if $m-n-1\le0$, the limit doesn't exist. Instead, if $m-n-1>0$, the limit is $0$ with the same argument as before. So the condition for continuous differentiability at $0$ is $m>n+1$. For example, if $m=2$ and $n=1$, the derivative is $$f'(x)=\begin{cases} 2x\sin\dfrac{1}{x}-\cos\dfrac{1}{x} & \text{if } x\ne0\\ 0 & \text{if } x=0 \end{cases}$$ and this function is not continuous at $0$. So, if we consider this as the question, we have to choose non-differentiability, so either C or D.

• is A correct or D? – ketan Mar 8 '15 at 4:38
• @ketan If $m>1$ then the function is differentiable at $0$. Suppose $m>n>0$: then $m>1$ and the function is differentiable at $0$; on the other hand, if $n=0$ the function is differentiable. Hence A is correct. But also D is correct, because for $m=1$ and $n>1$ the function is not differentiable at $0$. – egreg Mar 8 '15 at 9:39
2021-01-20 11:09:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.983858585357666, "perplexity": 199.99419312604476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519984.9/warc/CC-MAIN-20210120085204-20210120115204-00627.warc.gz"}
https://academic.oup.com/ndt/pages/Information_To_Authors
# INSTRUCTIONS TO AUTHORS

Contents: AIMS AND SCOPE · AUTHORS: ROLES AND RESPONSIBILITIES · TABLES · FIGURE PREPARATION · ABBREVIATIONS · REFERENCES · SUPPLEMENTARY MATERIAL · COLOUR ILLUSTRATIONS · TRANSPARENCY DECLARATION · CROSSCHECK · PREPARATION OF MANUSCRIPTS TO BE PUBLISHED IN NDT · OPEN ACCESS OPTION FOR AUTHORS · PAGE CHARGES · OFFPRINTS · AUTHOR SELF-ARCHIVING/PUBLIC ACCESS POLICY · CROSSREF FUNDING DATA REGISTRY · DISCLAIMERS · EDITORIAL ENQUIRIES · PRODUCTION ENQUIRIES

Note to authors: ALL ARTICLES MUST BE SUBMITTED ONLINE. Once you have prepared your manuscript according to the instructions below, please pay particular attention to the sections on Conflict of Interest Declaration and Figure Preparation. Please visit http://mc.manuscriptcentral.com/ndt to submit to NDT. Instructions on submitting your manuscript online can be viewed here.

## 1. AIMS AND SCOPE

NDT – Basic and Clinical Science is an official publication of the European Renal Association-European Dialysis and Transplant Association. NDT publishes Editorials, Reviews and original research. Rapid communications, exceptional cases and online-only E-letters to the Editor commenting on papers previously published in the journal are also considered. The journal covers the whole territory of nephrology research, including experimental work in animal models and molecular biology studies. In the Clinical Science section we consider clinical trials (RCTs have priority in our journal), observational studies at large and original work on health economics as applied to nephrology. We aim to cover the whole spectrum of kidney disease research, from clinical nephrology to haemodialysis and peritoneal dialysis as well as renal transplantation. Only single-patient and small case-series reports providing novel insights – ranging from the cellular or molecular level to the clinical level – or papers describing novel clinical observations will be accepted for publication in NDT. NDT may accept high-quality, peer-reviewed supplements. Please contact [email protected] in the first instance for further information. Abstracts from the annual ERA-EDTA congress are published as a supplement to NDT each year. NDT only accepts online submissions. Please visit http://mc.manuscriptcentral.com/ndt. You will also find more complete submission instructions at this site.

## 2. AUTHORS: ROLES AND RESPONSIBILITIES

The journal takes publication ethics very seriously. Authors should observe high standards with respect to publication ethics as set out by the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE). Falsification or fabrication of data, plagiarism, including duplicate publication of the author's own work without proper citation, and misappropriation of the work are all unacceptable practices. Any cases of ethical misconduct are treated very seriously and will be dealt with in accordance with the COPE guidelines. If misconduct is found or suspected after the manuscript is published, the journal will investigate the matter and this may result in the article subsequently being retracted. Each author should have participated sufficiently in the work to take public responsibility for the content. This participation must include: 1. Conception or design, or analysis and interpretation of data, or both. 2. Drafting the article or revising it. 3.
Providing intellectual content of critical importance to the work described. 4. Final approval of the version to be published. (See Br Med J 1985; 291: 722-723.) Manuscripts should bear the full name and address, with telephone, fax, and email of the author to whom the proofs and correspondence should be sent (corresponding author). For all authors, first name and surname should be written in full. In a covering letter, the individual contribution of each co-author must be detailed. This letter must contain the statement: 'the results presented in this paper have not been published previously in whole or part, except in abstract form'. Should your manuscript be accepted for publication, you will be required to give signed consent for publication (see copyright section). On acceptance, the corresponding author will be advised of the approximate date of receipt of proofs. Proofs must be returned by the author within 48 hours of receipt. To accelerate publication, only one set of PDF proofs is sent to the corresponding author by email. This shows the layout of the paper as it will appear in the Journal. It is, therefore, essential that manuscripts are submitted in their final form, ready for the printer. Proof-reading must be limited to the correction of typographical errors. Any other changes involve time-consuming and expensive work and may not be permitted at this stage. If additions are necessary, these may be made at the end of the paper in a Note in Proof. Major changes may be subject to editorial approval. Authors are referred to the statement on uniform requirements for manuscripts submitted to biomedical journals prepared by an international committee of medical journal editors. (Br Med J 1982; 284: 1766-1770, Ann Intern Med 1982; 96: 766-771.) ### Protection of Human Subjects and Animals in Research When reporting experiments on animals, authors should indicate whether the institutional and national guide for the care and use of laboratory animals was followed. In particular, NDT recommends compliance with the DIRECTIVE 2010/63/EU of the European Parliament for authors submitting from the European area, and compliance with the Guide for the Care and Use of Laboratory Animals for non-European authors. When reporting experiments on human subjects, authors should indicate whether the procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000 (5). If doubt exists whether the research was conducted in accordance with the Helsinki Declaration, the authors must explain the rationale for their approach and demonstrate that the institutional review body explicitly approved the doubtful aspects of the study. Patient consent Authors should state in their paper that informed consent has been obtained from the subjects (or their guardians) as specified in the ICMJE Reccomendations ## 3. TABLES All tables must be numbered consecutively and each must have a brief heading describing its contents. Any footnotes to tables should be indicated by superscript characters. Tables must be referred to in the main text in running order. All tables must be simple and not duplicate information given in the text. ## 4. 
FIGURE PREPARATION Please be aware that the requirements for online submission and for reproduction in the journal are different: (i) for online submission and peer review, please upload your figures either embedded in the word processing file or separately as low-resolution images (.jpg, .tif, .gif or. eps); (ii) for reproduction in the journal, you will be required after acceptance to supply high-resolution .tif files (1200 d.p.i. for line drawings and 300 d.p.i. for colour and half-tone artwork) or high-quality printouts on glossy paper. We advise that you create your high-resolution images first as these can be easily converted into low-resolution images for online submission. We would encourage authors to generate line figures in colour using the following colour palette: Blue (CMYK definition - 96/60/2/1 / RGB definition – 0/101/172) Orange (CMYK definition - 0/71/88/0 / RGB definition – 243/110/53) Pink (CMYK definition - 0/100/50/0 / RGB definition – 237/20/90) Yellow (CMYK definition - 1/29/94/0 / RGB definition – 249/185/40) Green (CMYK definition - 77/10/96/2 / RGB definition – 59/162/75) Magenta (CMYK definition - 65/98/28/25 / RGB definition – 97/33/94) In order to have consistency throughout the journal, the publishers reserve the right to re-draw figures, where necessary, with the appropriate colours from the palette. Authors will have an opportunity to correct inappropriate changes at the proof correction stage. For useful information on preparing your figures for publication, go to http://cpc.cadmus.com/da/index.jsp. Figures will not be relettered by the publisher. The journal reserves the right to reduce the size of illustrative material. Any photomicrographs, electron micrographs or radiographs must be of high quality. Wherever possible, photographs should fit within the print area of 169 x 235 mm (full page) or within the column width of 82 mm. Photomicrographs should provide details of staining technique and a scale bar. Patients shown in photographs should have their identity concealed or should have given their written consent to publication. Normally no more than six illustrations will be accepted for publication in the print issue without charge. Image acquisition and analysis If primary experimental data are presented in the form of a computer-generated image any editing must be described in detail. A linear (rather than sigmoidal) relationship between signal and image intensity is assumed. Unless stated otherwise, it will be assumed that all images are unedited. Inappropriate manipulation of images to highlight desired results is not allowed. Please adhere to the following guidelines to accurately present data: • No specific feature within an image may be enhanced, obscured, moved, removed, or introduced. • The grouping of images from different parts of the same gel, or from different gels, fields, or exposures (ie, the creation of a "composite image") must be made absolutely explicit by the arrangement of the figure (ie, using dividing lines) and explained in the figure legend. • Adjustments of brightness, contrast, or colour balance are acceptable if they are applied to the whole image and as long as they do not obscure, eliminate, or misrepresent any information present in the original, including the background. • Non-linear adjustments (eg, changes to gamma settings) must be disclosed in the figure legend. 
• Alteration of brightness or contrast that results in the disappearance of any features in a gel (either bands or cosmetic blemishes) or similar alterations in other experimental images is strictly forbidden. Authors should retain unprocessed images and metadata files, as the Journal may request them during manuscript evaluation, and/or after publication should there be a query relating to a specific figure. Files that have been adjusted in any way should be saved separately from the originals, in a non-compressed format. Compressed formats, such as JPG, should only be used for presentation of final figures, when requested, to keep file sizes small for electronic transmission. The Journal reserves the right to use image analysis software on any submitted image. Permissions If any tables, illustrations or photomicrographs have been published elsewhere, written consent for re-publication (in print and online) must be obtained by the author from the copyright holder and the author(s) of the original article, such permission being detailed in the cover letter. Third-Party Content in Open Access papers If you will be publishing your paper under an Open Access licence but it contains material for which you do not have Open Access re-use permissions, please state this clearly by supplying the following credit line alongside the material: Title of content Author, Original publication, year of original publication, by permission of [rights holder] This image/content is not covered by the terms of the Creative Commons licence of this publication. For permission to reuse, please contact the rights holder. ## 5. ABBREVIATIONS Authors should not use abbreviations in headings and figure legends should be comprehensive without extensive repetition of the Subjects and Methods section. Authors are advised to refrain from excessive use of uncommon abbreviations, particularly to describe groups of patients or experimental animals. Non-proprietary (generic) names of products should be used. If a brand name for a drug is used, the British or International non-proprietary (approved) name should be given. The source of any new or experimental preparation should also be given. ## 7. REFERENCES The references should be numbered in the order in which they appear in the text. References to published abstracts should be mentioned in the text but not in the reference list. At the end of the article the full list of references should give the name and initials of all authors unless there are more than six, when only the first three should be given followed by et al. The authors' names should be followed by the title of the article, the title of the Journal abbreviated according to the style of Index Medicus, the year of publication, the volume number and the first and last page numbers. References to books should give the title of the book, which should be followed by the place of publication, the publisher, the year and the relevant pages. EXAMPLES 1. Madaio MP. Renal biopsy. Kidney Int 1990; 38: 529-543 Books: 2. Roberts NK. The cardiac conducting system and the His bundle electrogram. Appleton-Century-Crofts, New York, NY: 1981; 49-56 Chapters: 3. Rycroft RJG, Calnan CD. Facial rashes among visual display unit (VDU) operators. In: Pearce BG, ed. Health hazards of VDUs. Wiley, London, UK: 1984; 13-15 Note: In the online version of NDT, there are automatic links from the reference section of each article to Medline. This is a useful feature for readers, but is only possible if the references are accurate. 
It is the responsibility of the author to ensure the accuracy of the references in the submitted article. Downloading references direct from Medline is highly recommended. ## 8. SUPPLEMENTARY MATERIAL Supporting material that is not essential for inclusion in the full text of the manuscript, but would nevertheless benefit the reader, can be made available by the publisher as online-only content, linked to the online manuscript. There is no charge for the publication of online-only supplementary data/tables/figures. Such material should not be essential to understanding the conclusions of the paper, but should contain data that is additional or complementary and directly relevant to the article content. Such information might include more detailed methods, extended data sets/data analysis, or additional figures (including colour). All text and figures must be provided in suitable electronic formats (instructions for the preparation of Supplementary material can be viewed here). All material to be considered as Supplementary material must be submitted at the same time as the main manuscript for peer review. It cannot be altered or replaced after the paper has been accepted for publication. Please indicate clearly the material intended as Supplementary material upon submission. Also ensure that the Supplementary material is referred to in the main manuscript where necessary. ## 9. COLOUR ILLUSTRATIONS Colour illustrations are accepted, but the authors will be required to contribute to the cost of the reproduction. Colour figures will incur a printing charge of £350/$600/€525 each (this does not apply to invited contributions). Orders from the UK will be subject to the current UK VAT charge. For orders from elsewhere in the EU you or your institution should account for VAT by way of a reverse charge. Please provide us with your or your institution’s VAT number. Illustrations for which colour is not essential can be reproduced as black and white images in the printed journal and, additionally, in colour as online Supplementary material. This option is not subject to colour charges. Authors should indicate clearly that they would like to take up this option in the covering letter and on the figures. The availability of additional colour images as Supplementary material should be mentioned where relevant in the main text of the manuscript. Instructions on how to submit colour figures as Supplementary material can be viewed online. ## 10. COPYRIGHT Please note that the journal now encourages authors to complete their copyright licence to publish form online Upon receipt of accepted manuscripts at Oxford Journals authors will be invited to complete an online copyright licence to publish form. Please note that by submitting an article for publication you confirm that you are the corresponding/submitting author and that Oxford University Press ("OUP") may retain your email address for the purpose of communicating with you about the article. You agree to notify OUP immediately if your details change. If your article is accepted for publication OUP will contact you using the email address you have used in the registration process. Please note that OUP does not retain copies of rejected articles. It is a condition of publication in the Journal that authors grant an exclusive licence to the Journal, published by Oxford University Press on behalf of the European Renal Association-European Dialysis and Transplant Association. 
This ensures that requests from third parties to reproduce articles are handled efficiently and consistently and will also allow the article to be as widely disseminated as possible. In assigning the licence, authors may use their own material in other publications provided that the Journal is acknowledged as the original place of publication and Oxford University Press is notified in writing and in advance.

## 11. TRANSPARENCY DECLARATION & ETHICS

All authors must make a formal declaration at the time of submission indicating any potential conflict of interest. This is a condition of publication and failure to do so will delay the review process. Such declarations might include, but are not limited to, shareholding in or receipt of a grant, travel award or consultancy fee from a company whose product features in the submitted manuscript or a company that manufactures a competing product. You will be required to provide this information during the online submission process. In addition, in the interests of openness, ALL papers submitted to NDT MUST include a 'Transparency declarations' section (which should appear at the end of the paper, before the 'References' section) within the article. We suggest authors concentrate on transparency declarations (i.e. conflicts of interest) of a financial nature, although relevant non-financial disclosures can also be made. Authors should either include appropriate declarations or state 'None to declare'. Importantly, the declarations should be kept as concise as possible, should avoid giving financial details (e.g. sums received, numbers of shares owned etc.), and should be restricted to declarations that are specific to the paper in question. Authors will of course need to consider whether or not the transparency declarations need to be amended when revisions are submitted. Please click here to consult the COPE guidelines on conflict of interest. The editors' declarations of interest statements can also be viewed online. This Journal takes publication ethics very seriously. If misconduct is found or suspected after the manuscript is published, the journal will investigate the matter and this may result in the article subsequently being retracted.

## 12. CROSSCHECK

The NDT editorial team reserves the right to use CrossCheck. CrossCheck is an initiative started by CrossRef to help its members actively engage in efforts to prevent scholarly and professional plagiarism. By submitting your manuscript to the journal it is understood that it is an original manuscript, is unpublished work and is not under consideration elsewhere. Plagiarism, including duplicate publication of the author's own work, in whole or in part without proper citation, is not tolerated by the journal.

## 13. PREPARATION OF MANUSCRIPTS TO BE PUBLISHED IN NDT

### Language editing

Particularly if English is not your first language, before submitting your manuscript you may wish to have it edited for language. This is not a mandatory step, but may help to ensure that the academic content of your paper is fully understood by journal editors and reviewers. Language editing does not guarantee that your manuscript will be accepted for publication. If you would like information about such services please click here. There are other specialist language editing companies that offer similar services and you can also use any of these. Authors are liable for all costs associated with such services.
## Fast Track - publication within 6 weeks Fast Track allows publishing high impact articles submitted to NDT to be prioritized for publication. Essential requirements for Fast Track are: - Paper is very likely to have a major impact on current knowledge in nephrology. - Readership may benefit from this publication as it may represent an important advancement in a particular study field. Authors who believe their manuscript complies with the above should address their request in their cover letter. A dedicated manuscript category for submission is available in the online system: https://mc.manuscriptcentral.com/ndt Please note that only original articles will be considered for Fast Track. After evaluation by the Editor-in-Chief, the manuscripts that have been selected will undergo fast peer-review and will be published within 6 weeks after submission. Word count: 3500 words including an abstract of 250 words but excluding references, tables and figures Keywords: maximum 5 References: maximum 50 ## Original Article Word count: maximum 3500 words including an abstract of 250 words but excluding references, tables and figures Keywords: maximum 5 References: maximum 50 The order of original articles should be as follows: 1. Title page including the title (please bear in mind that we prefer a title to be concise yet eye-catching) and details of all authors, including first or given name, and affiliation. 2. On a separate page an abstract of 250 words, which should consist of four paragraphs labelled Background', Methods', Results' and Conclusions'. They should briefly describe, respectively, the problems being addressed in this study, how the study was performed, the salient results and their originality and what the authors conclude from the results. 3. Keywords: no more than 5, characterizing the scope of the paper, the principal materials, and main subject of work. 4. On a new page: Introduction, Materials and Methods, Results, Discussion, Acknowledgements, Conflict of Interest Statement, Authors’ Contributions, Funding, References, Tables, Legends to figures and Figures. All pages should be numbered consecutively commencing with the title page. Headings (Introduction; Materials and Methods, etc.) should be placed on separate lines. Any statistical method must be detailed in the Materials and Methods section, and any not in common use should be described fully or supported by references. Please note that the Editor-in-Chief will select about one article per week suitable for hosting a short video (three minutes with ten slides) on our website. The Editorial Office will contact the corresponding author and explain the procedure. The video will be widely promoted and published together with the paper. ## Quiz Authors should present their exceptional clinical case, ask for a diagnosis, and then give the answer and discuss the case. Exceptional cases should provide unique insight into the pathophysiology of a disease or describe novel clinical observations. Descriptions of rare diseases will only be considered if they provide new information about the condition. All other case reports should be submitted to ckj (https://mc.manuscriptcentral.com/ckj). 
Word count: maximum 1000 words No Abstract 1-2 figures or tables Keywords: maximum 5 References: maximum 10 Acknowledgements, Conflict of Interest Statement, Authors’ contributions, Funding, References, Tables, Legends to figures and Figures ## E-letter to the Editor (comments) NDT no longer publishes ‘Letters to the Editor’ in an issue of the journal. However, correspondence relating to a published article can now be submitted electronically through our 'Add comment' facility. This can be accessed through the NDT website (https://academic.oup.com/ndt). Correspondents should access the relevant article on this site and use the ‘Add comment' button. When an e-letter is submitted online the author of the original article automatically receives notification that a comment has been submitted and is invited to respond promptly. Correspondents should register on the Oxford Academic Platform to be able to submit a comment. ## Editorial (on invitation only) Word count: maximum 2500 An abstract is not required for Editorials 2 figures or tables Keywords: maximum 5 References: maximum 30 ## Review (on invitation only) Word count: maximum 3500 Abstract of up to 250 words 4 figures or tables; please note that one figure will be processed by an art-designer Keywords: maximum 5 References: maximum 50 ## Pro/Con Debate (on invitation only) This section comprises two short invited Reviews on controversial issues written by two opponents, each defending their own point of view. Word count: 1500 words Abstract of up to 250 words 2 tables or figures Keywords: maximum 5 References: maximum 25 ## NDT Digest (on invitation only) This educational section will summarise an important renal topic in one educational page for renal fellows. Articles will be chosen, invited and edited in collaboration with the ERA-EDTA Young Nephrologists’ Platform (YNP). Word count: 1000 words 1 table or figure References: maximum 10 For more information about Figures, Tables, References, Authors’ contributions, Supplementary material, Conflict of interest, Copyright, please go to: Figures and colour illustrations: Tables: References: Authors’ contributions: Supplementary material: https://academic.oup.com/ndt/pages/Information_To_Authors#SUPPLEMENTARY MATERIAL Conflict of interest: Copyright: For editorial enquiries, please contact [email protected] For production enquiries, please contact [email protected] ## 14. OPEN ACCESS OPTION FOR AUTHORS NDT authors have the option to publish their paper under the Oxford Open initiative; whereby, for a charge, their paper will be made freely available online immediately upon publication. After your manuscript is accepted the corresponding author will be required to accept a mandatory licence to publish agreement. As part of the licensing process you will be asked to indicate whether or not you wish to pay for open access. If you do not select the open access option, your paper will be published with standard subscription-based access and you will not be charged. Oxford Open articles are published under Creative Commons licences. RCUK/Wellcome Trust funded authors publishing in NDT can use the Creative Commons Attribution licence (CC BY) for their articles. All other authors may use the Creative Commons Attribution Non-Commercial licence (CC BY-NC) Please click here for more information about the Creative Commons licences. You can pay Open Access charges using our Author Services site. This will enable you to pay online with a credit/debit card, or request an invoice by email or post. 
The open access charges are as follows: • Regular charge: £1850/$3000/€2450 • Reduced Rate Developing country charge*: £925/$1500/€1225 • Free Developing country charge*: £0/$0/€0. Please note that these charges are in addition to any colour charges that may apply. Orders from the UK will be subject to the current UK VAT charge. For orders from the rest of the European Union, OUP will assume that the service is provided for business purposes. Please provide a VAT number for yourself or your institution, and ensure you account for your own local VAT correctly.

## 15. PAGE CHARGES

Authors will be charged £70/$133/€105 for every excess page. Excess page charges will be charged for articles that exceed: 5 pages for an Original Article, 3 pages for a Quiz paper. It is the authors' responsibility to check their proof page extent and act accordingly if they do not wish to be charged for excess pages. Orders from the UK will be subject to the current UK VAT charge. For orders from elsewhere in the EU you or your institution should account for VAT by way of a reverse charge. Please provide us with your or your institution's VAT number.

## 16. OFFPRINTS

The authors will receive electronic access to their paper free of charge. Additional printed offprints may be obtained in multiples of 50. Rates are indicated on the Author Services site (the same site used to complete the licence to publish online).

## 18. CROSSREF FUNDING DATA REGISTRY

In order to meet your funding requirements, authors are required to name their funding sources, or state if there are none, during the submission process. For further information on this process or to find out more about the CHORUS initiative please click here.

## 19. DISCLAIMERS

### Drug Disclaimer

The mention of trade names, commercial products or organizations, and the inclusion of advertisements in NDT do not imply endorsement by the Society, the editors, the editorial board, Oxford University Press or the organization to which the authors are affiliated. The editors and publishers have taken all reasonable precautions to verify drug names and doses, the results of experimental work and clinical findings published in NDT. The ultimate responsibility for the use and dosage of drugs mentioned in NDT and in the interpretation of published material lies with the medical practitioner, and the editors and publishers cannot accept liability for damages arising from any errors or omissions in NDT. Please inform the editors of any errors.

### Material Disclaimer

The opinions expressed in NDT are those of the authors and contributors, and do not necessarily reflect those of the Society, the editors, the editorial board, Oxford University Press or the organization to which the authors are affiliated.

## 20. EDITORIAL ENQUIRIES

C Zoccali, c/o CNR, Azienda Ospedaliera "Bianchi-Melacrino-Morelli" di Reggio Calabria, Unità Operativa di Nefrologia, Dialisi e Trapianto di Rene, Via Vallone Petrara snc, 89124 Reggio Calabria, Italy. Fax: +39-0965-56005. Phone: +32 0472 95 09 85. Email: [email protected]

## 21. PRODUCTION ENQUIRIES

Production Editor, Nephrology Dialysis Transplantation, Journals Production, Oxford University Press, Great Clarendon Street, Oxford OX2 6DP, UK. Tel: +44 1865 354985.
2017-02-28 17:30:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2523556351661682, "perplexity": 3284.8544512782646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174167.41/warc/CC-MAIN-20170219104614-00020-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.scienceforums.net/topic/124051-what-are-numbers-between-0-and-1/#comment-1164757
# What are numbers between 0 and 1??

CuriosOne: What are numbers between 0 and 1??

koti: Proper numbers (or Proper Fractions). There is an infinite number of them.

joigus: They are portions of one. x-posted with Koti.

mathematic: Define numbers. As is, the question doesn't make sense.

CuriosOne:
> 6 minutes ago, koti said: Proper numbers (or Proper Fractions). There is an infinite number of them.

"""A number between 0 and 1""" I'm getting this right out of """text books"". This is why i dont like to Google information and may explain confusions.. Proper fraction: larger number on top, smaller number on bottom. Improper fraction is this thing in reverse. So then, a number between 0 and 1 must be "a base?"

> 5 minutes ago, mathematic said: Define numbers. As is, the question doesn't make sense.

I dont need to define anything, you either know or you dont... Do you know?? Yes or No??

> 12 minutes ago, joigus said: They are portions of one. x-posted with Koti.

Sounds like a product to me, not a number..

studiot:
> 15 minutes ago, CuriosOne said: """A number between 0 and 1""" I'm getting this right out of """text books"". This is why i dont like to Google information and may explain confusions.. Proper fraction: larger number on top, smaller number on bottom. Improper fraction is this thing in reverse. So then, a number between 0 and 1 must be "a base?"

In another thread you said $100s on books. I'm sorry to tell you that you wasted your money. Which book did you read that in?

$\frac{3}{4}$ is a proper fraction

$\frac{4}{3}$ is an improper fraction

CuriosOne:
> 2 hours ago, studiot said: In another thread you said $100s on books. I'm sorry to tell you that you wasted your money. Which book did you read that in? $\frac{3}{4}$ is a proper fraction; $\frac{4}{3}$ is an improper fraction.

That can be re-assembled using roots...1/2 3/12 = 0.25 is as easy as 4/16 = 0.25 3/4*1/3 = 0.25 Notice how 3/4 controls 1/3 = 0.333...------> infinity.. "Through base 10 "obviously." As 0.25*16 = 4*3 = 12+3 = 15 *(2x) = 30*(2x) = 60 There is that minute you spoke of...lol 60/ [10* (3/12)^1/2] = 12 12-3 = 3^2+1 = """BASE 10"""" So a number between 0 and 1 "Uses Base 10"" "from what I see." ------->>> Is there a better way???

Moderator Note: I guess you forgot you "knew" they were fractions
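Not part of the thread: a short Python illustration of the point made in the early replies (every proper fraction $p/q$ with $0 < p < q$ lies strictly between 0 and 1, and there are infinitely many of them) and of the decimal value 3/12 = 4/16 = 0.25 that comes up later in the discussion.

```python
# Small illustration (not from the original thread) of proper fractions
# between 0 and 1, using Python's exact rational arithmetic.
from fractions import Fraction

# A few members of the infinite family 1/n, all strictly between 0 and 1:
print(*[Fraction(1, n) for n in range(2, 8)])        # 1/2 1/3 1/4 1/5 1/6 1/7

# Reduced forms make the equalities quoted in the thread explicit:
print(Fraction(3, 12), Fraction(4, 16), float(Fraction(3, 12)))   # 1/4 1/4 0.25
```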
2022-12-05 10:45:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35370704531669617, "perplexity": 8394.968382769937}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00018.warc.gz"}
https://en.m.wikipedia.org/wiki/Gaussian_units
# Gaussian units

Gaussian units constitute a metric system of physical units. This system is the most common of the several electromagnetic unit systems based on cgs (centimetre–gram–second) units. It is also called the Gaussian unit system, Gaussian-cgs units, or often just cgs units.[1] The term "cgs units" is ambiguous and therefore to be avoided if possible: cgs contains within it several conflicting sets of electromagnetism units, not just Gaussian units, as described below. The most common alternative to Gaussian units is SI units. SI units are predominant in most fields, and continue to increase in popularity at the expense of Gaussian units.[2][3] (Other alternative unit systems also exist, as discussed below.) Conversions between Gaussian units and SI units are not as simple as normal unit conversions. For example, the formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. As another example, quantities that are dimensionless (loosely "unitless") in one system may have dimension in another.

## History

Gaussian units existed before the CGS system. The British Association report of 1873 that proposed the CGS contains Gaussian units derived from the foot–grain–second and metre–gram–second as well. There are also references to foot–pound–second Gaussian units.

## Alternative unit systems

The main alternative to the Gaussian unit system is SI units, historically also called the MKSA system of units for metre–kilogram–second–ampere.[2] The Gaussian unit system is just one of several electromagnetic unit systems within CGS. Others include "electrostatic units", "electromagnetic units", and Lorentz–Heaviside units. Some other unit systems are called "natural units", a category that includes atomic units, Planck units, and others. SI units are by far the most common today. In engineering and practical areas, SI is nearly universal and has been for decades.[2] In technical, scientific literature (such as theoretical physics and astronomy), Gaussian units were predominant until recent decades, but are now getting progressively less so.[2][3] Natural units are most common in more theoretical and abstract fields of physics, particularly particle physics and string theory.

## Major differences between Gaussian and SI units

### "Rationalized" unit systems

One difference between Gaussian and SI units is in the factors of 4π in various formulas. SI electromagnetic units are called "rationalized",[4][5] because Maxwell's equations have no explicit factors of 4π in the formulae. On the other hand, the inverse-square force laws – Coulomb's law and the Biot–Savart law – do have a factor of 4π attached to the r². In unrationalized Gaussian units (not Lorentz–Heaviside units) the situation is reversed: two of Maxwell's equations have factors of 4π in the formulas, while both of the inverse-square force laws, Coulomb's law and the Biot–Savart law, have no factor of 4π attached to r² in the denominator. (The quantity 4π appears because 4πr² is the surface area of the sphere of radius r. For details, see the articles Relation between Gauss's law and Coulomb's law and Inverse-square law.)

### Unit of charge

A major difference between Gaussian and SI units is in the definition of the unit of charge.
In SI, a separate base unit (the ampere) is associated with electromagnetic phenomena, with the consequence that something like electrical charge (1 coulomb = 1 ampere × 1 second) is a unique dimension of physical quantity and is not expressed purely in terms of the mechanical units (kilogram, metre, second). On the other hand, in Gaussian units, the unit of electrical charge (the statcoulomb, statC) can be written entirely as a dimensional combination of the mechanical units (gram, centimetre, second), as:

${\displaystyle 1\,{\text{statC}}=1\,{\text{g}}^{1/2}\,{\text{cm}}^{3/2}\,{\text{s}}^{-1}}$

For example, Coulomb's law in Gaussian units is simple:

${\displaystyle F={\frac {Q_{1}Q_{2}}{r^{2}}}}$

where F is the repulsive force between two electrical charges, Q₁ and Q₂ are the two charges in question, and r is the distance separating them. If Q₁ and Q₂ are expressed in statC and r in cm, then F will come out expressed in dyne. By contrast, the same law in SI units is:

${\displaystyle F={\frac {1}{4\pi \epsilon _{0}}}{\frac {Q_{1}Q_{2}}{r^{2}}}=k_{\text{e}}{\frac {Q_{1}Q_{2}}{r^{2}}}}$

where ε0 is the vacuum permittivity, a quantity with dimension, namely (charge)² (time)² (mass)⁻¹ (length)⁻³, and ke is Coulomb's constant. Without ε0, the two sides could not have consistent dimensions in SI, and in fact the quantity ε0 does not even exist in Gaussian units. This is an example of how some dimensional physical constants can be eliminated from the expressions of physical law simply by the judicious choice of units. In SI, 1/ε0 converts or scales flux density, D, to electric field, E (the latter has dimension of force per charge), while in rationalized Gaussian units, flux density is the very same as electric field in free space, not just a scaled copy. Since the unit of charge is built out of mechanical units (mass, length, time), the relation between mechanical units and electromagnetic phenomena is clearer in Gaussian units than in SI. In particular, in Gaussian units, the speed of light c shows up directly in electromagnetic formulas like Maxwell's equations (see below), whereas in SI it only shows up implicitly via the relation ${\displaystyle \mu _{0}\epsilon _{0}=1/c^{2}}$.

### Units for magnetism

In Gaussian units, unlike SI units, the electric field E and the magnetic field B have the same dimension. This amounts to a factor of c difference between how B is defined in the two unit systems, on top of the other differences.[4] (The same factor applies to other magnetic quantities such as H and M.) For example, in a planar light wave in vacuum, |E(r, t)| = |B(r, t)| in Gaussian units, while |E(r, t)| = c|B(r, t)| in SI units.

### Polarization, magnetization

There are further differences between Gaussian and SI units in how quantities related to polarization and magnetization are defined. For one thing, in Gaussian units, all of the following quantities have the same dimension: E, D, P, B, H, and M. Another important point is that the electric and magnetic susceptibility of a material is dimensionless in both Gaussian and SI units, but a given material will have a different numerical susceptibility in the two systems. (The equations are given below.)

## List of equations

This section has a list of the basic formulae of electromagnetism, given in both Gaussian and SI units. Most symbol names are not given; for complete explanations and definitions, please refer to the appropriate dedicated article for each equation.
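The unit-of-charge discussion above is easy to check numerically. The Python sketch below is not part of the article; it assumes the standard SI value of ε0, the conversion 1 C ≈ 2.998 × 10⁹ Fr quoted in Table 1 further down, and 1 dyn = 10⁻⁵ N, and verifies that the Gaussian and SI forms of Coulomb's law give the same force for the same physical situation.

```python
# Cross-check of Coulomb's law in SI vs Gaussian units (illustrative sketch,
# not part of the article).
import math

eps0 = 8.8541878128e-12      # vacuum permittivity, F/m (SI)
C_TO_FR = 2.99792458e9       # 1 coulomb expressed in franklins (statcoulombs)

q_SI, r_SI = 1.0, 1.0        # two 1 C charges, 1 m apart
F_SI = q_SI * q_SI / (4 * math.pi * eps0 * r_SI**2)    # newtons

q_G, r_G = q_SI * C_TO_FR, r_SI * 100.0                # franklins and cm
F_G = q_G * q_G / r_G**2                               # dynes

print(F_SI, F_G * 1e-5)      # both ~8.99e9 N
```

Both numbers come out as roughly 8.99 × 10⁹ N, which is just the statement that the SI factor 1/(4πε0) has been absorbed into the Gaussian definition of the unit of charge.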
A simple conversion scheme for use when tables are not available may be found in Ref.[6] All formulas except otherwise noted are from Ref.[4] ### Maxwell's equationsEdit Here are Maxwell's equations, both in macroscopic and microscopic forms. Only the "differential form" of the equations is given, not the "integral form"; to get the integral forms apply the divergence theorem or Kelvin–Stokes theorem. Name Gaussian units SI units Gauss's law (macroscopic) ${\displaystyle \nabla \cdot \mathbf {D} =4\pi \rho _{\text{f}}}$  ${\displaystyle \nabla \cdot \mathbf {D} =\rho _{\text{f}}}$ Gauss's law (microscopic) ${\displaystyle \nabla \cdot \mathbf {E} =4\pi \rho }$  ${\displaystyle \nabla \cdot \mathbf {E} =\rho /\epsilon _{0}}$ Gauss's law for magnetism: ${\displaystyle \nabla \cdot \mathbf {B} =0}$  ${\displaystyle \nabla \cdot \mathbf {B} =0}$ ${\displaystyle \nabla \times \mathbf {E} =-{\frac {1}{c}}{\frac {\partial \mathbf {B} }{\partial t}}}$  ${\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}}$ Ampère–Maxwell equation (macroscopic): ${\displaystyle \nabla \times \mathbf {H} ={\frac {4\pi }{c}}\mathbf {J} _{\text{f}}+{\frac {1}{c}}{\frac {\partial \mathbf {D} }{\partial t}}}$  ${\displaystyle \nabla \times \mathbf {H} =\mathbf {J} _{\text{f}}+{\frac {\partial \mathbf {D} }{\partial t}}}$ Ampère–Maxwell equation (microscopic): ${\displaystyle \nabla \times \mathbf {B} ={\frac {4\pi }{c}}\mathbf {J} +{\frac {1}{c}}{\frac {\partial \mathbf {E} }{\partial t}}}$  ${\displaystyle \nabla \times \mathbf {B} =\mu _{0}\mathbf {J} +{\frac {1}{c^{2}}}{\frac {\partial \mathbf {E} }{\partial t}}}$ ### Other basic lawsEdit Name Gaussian units SI units Lorentz force ${\displaystyle \mathbf {F} =q\,\left(\mathbf {E} +{\tfrac {1}{c}}\,\mathbf {v} \times \mathbf {B} \right)}$  ${\displaystyle \mathbf {F} =q\,\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}$ Coulomb's law ${\displaystyle \mathbf {F} ={\frac {q_{1}q_{2}}{r^{2}}}\,\mathbf {\hat {r}} }$  ${\displaystyle \mathbf {F} ={\frac {1}{4\pi \epsilon _{0}}}\,{\frac {q_{1}q_{2}}{r^{2}}}\,\mathbf {\hat {r}} }$ Electric field of stationary point charge ${\displaystyle \mathbf {E} ={\frac {q}{r^{2}}}\,\mathbf {\hat {r}} }$  ${\displaystyle \mathbf {E} ={\frac {1}{4\pi \epsilon _{0}}}\,{\frac {q}{r^{2}}}\,\mathbf {\hat {r}} }$ Biot–Savart law ${\displaystyle \mathbf {B} ={\frac {1}{c}}\!\oint {\frac {I\times \mathbf {\hat {r}} }{r^{2}}}\,\operatorname {d} \!\mathbf {\text{ℓ}} }$ [7] ${\displaystyle \mathbf {B} ={\frac {\mu _{0}}{4\pi }}\!\oint {\frac {I\times \mathbf {\hat {r}} }{r^{2}}}\,\operatorname {d} \!\mathbf {\text{ℓ}} }$ Poynting vector (microscopic) ${\displaystyle \mathbf {S} ={\frac {c}{4\pi }}\,\mathbf {E} \times \mathbf {B} }$  ${\displaystyle \mathbf {S} ={\frac {1}{\mu _{0}}}\,\mathbf {E} \times \mathbf {B} }$ ### Dielectric and magnetic materialsEdit Below are the expressions for the various fields in a dielectric medium. It is assumed here for simplicity that the medium is homogeneous, linear, isotropic, and nondispersive, so that the permittivity is a simple constant. 
Gaussian units SI units ${\displaystyle \mathbf {D} =\mathbf {E} +4\pi \mathbf {P} }$  ${\displaystyle \mathbf {D} =\epsilon _{0}\mathbf {E} +\mathbf {P} }$ ${\displaystyle \mathbf {P} =\chi _{\text{e}}\mathbf {E} }$  ${\displaystyle \mathbf {P} =\chi _{\text{e}}\epsilon _{0}\mathbf {E} }$ ${\displaystyle \mathbf {D} =\epsilon \mathbf {E} }$  ${\displaystyle \mathbf {D} =\epsilon \mathbf {E} }$ ${\displaystyle \epsilon =1+4\pi \chi _{\text{e}}}$  ${\displaystyle \epsilon /\epsilon _{0}=1+\chi _{\text{e}}}$ where The quantities ${\displaystyle \epsilon }$  in Gaussian units and ${\displaystyle \epsilon /\epsilon _{0}}$  in SI are both dimensionless, and they have the same numeric value. By contrast, the electric susceptibility ${\displaystyle \chi _{e}}$  is unitless in both systems, but has different numeric values in the two systems for the same material: ${\displaystyle \chi _{\text{e}}^{\text{SI}}=4\pi \chi _{\text{e}}^{\text{G}}}$ Next, here are the expressions for the various fields in a magnetic medium. Again, it is assumed that the medium is homogeneous, linear, isotropic, and nondispersive, so that the permeability is a simple constant. Gaussian units SI units ${\displaystyle \mathbf {B} =\mathbf {H} +4\pi \mathbf {M} }$  ${\displaystyle \mathbf {B} =\mu _{0}(\mathbf {H} +\mathbf {M} )}$ ${\displaystyle \mathbf {M} =\chi _{\text{m}}\mathbf {H} }$  ${\displaystyle \mathbf {M} =\chi _{\text{m}}\mathbf {H} }$ ${\displaystyle \mathbf {B} =\mu \mathbf {H} }$  ${\displaystyle \mathbf {B} =\mu \mathbf {H} }$ ${\displaystyle \mu =1+4\pi \chi _{\text{m}}}$  ${\displaystyle \mu /\mu _{0}=1+\chi _{\text{m}}}$ where The quantities ${\displaystyle \mu }$  in Gaussian units and ${\displaystyle \mu /\mu _{0}}$  in SI are both dimensionless, and they have the same numeric value. By contrast, the magnetic susceptibility ${\displaystyle \chi _{\text{m}}}$  is unitless in both systems, but has different numeric values in the two systems for the same material: ${\displaystyle \chi _{\text{m}}^{\text{SI}}=4\pi \chi _{\text{m}}^{\text{G}}}$ ### Vector and scalar potentialsEdit The electric and magnetic fields can be written in terms of a vector potential A and a scalar potential φ: Name Gaussian units SI units Electric field (static) ${\displaystyle \mathbf {E} =-\nabla \phi }$  ${\displaystyle \mathbf {E} =-\nabla \phi }$ Electric field (general) ${\displaystyle \mathbf {E} =-\nabla \phi -{\frac {1}{c}}{\frac {\partial \mathbf {A} }{\partial t}}}$  ${\displaystyle \mathbf {E} =-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}}$ Magnetic B field ${\displaystyle \mathbf {B} =\nabla \times \mathbf {A} }$  ${\displaystyle \mathbf {B} =\nabla \times \mathbf {A} }$ ## Electromagnetic unit namesEdit (For non-electromagnetic units, see main cgs article.) 
Table 1: Common electromagnetism units in SI vs Gaussian 2.998 is shorthand for exactly 2.99792458 (see speed of light)[8] Quantity Symbol SI unit Gaussian unit (in base units) Conversion factor electric charge q C Fr (cm3/2g1/2s−1) ${\displaystyle {\frac {q_{\text{G}}}{q_{\text{SI}}}}={\frac {1}{\sqrt {4\pi \epsilon _{0}^{\text{SI}}}}}={\frac {2.998\times 10^{9}\,{\text{Fr}}}{1\,{\text{C}}}}}$ electric current I A Fr/s (cm3/2g1/2s−2) ${\displaystyle {\frac {I_{\text{G}}}{I_{\text{SI}}}}={\frac {1}{\sqrt {4\pi \epsilon _{0}^{\text{SI}}}}}={\frac {2.998\times 10^{9}\,{\text{Fr/s}}}{1\,{\text{A}}}}}$ electric potential (voltage) φ V V statV (cm1/2g1/2s−1) ${\displaystyle {\frac {V_{\text{G}}}{V_{\text{SI}}}}={\sqrt {4\pi \epsilon _{0}^{\text{SI}}}}={\frac {1\,{\text{statV}}}{2.998\times 10^{2}\,{\text{V}}}}}$ electric field E V/m statV/cm (cm−1/2g1/2s−1) ${\displaystyle {\frac {\mathbf {E} _{\text{G}}}{\mathbf {E} _{\text{SI}}}}={\sqrt {4\pi \epsilon _{0}^{\text{SI}}}}={\frac {1\,{\text{statV/cm}}}{2.998\times 10^{4}\,{\text{V/m}}}}}$ electric displacement field D C/m2 Fr/cm2 (cm−1/2g1/2s−1) ${\displaystyle {\frac {\mathbf {D} _{\text{G}}}{\mathbf {D} _{\text{SI}}}}={\sqrt {\frac {4\pi }{\epsilon _{0}^{\text{SI}}}}}={\frac {4\pi \times 2.998\times 10^{5}\,{\text{Fr/cm}}^{2}}{1\,{\text{C/m}}^{2}}}}$ magnetic B field B T G (cm−1/2g1/2s−1) ${\displaystyle {\frac {\mathbf {B} _{\text{G}}}{\mathbf {B} _{\text{SI}}}}={\sqrt {\frac {4\pi }{\mu _{0}^{\text{SI}}}}}={\frac {10^{4}\,{\text{G}}}{1\,{\text{T}}}}}$ magnetic H field H A/m Oe (cm−1/2g1/2s−1) ${\displaystyle {\frac {\mathbf {H} _{\text{G}}}{\mathbf {H} _{\text{SI}}}}={\sqrt {4\pi \mu _{0}^{\text{SI}}}}={\frac {4\pi \times 10^{-3}\,{\text{Oe}}}{1\,{\text{A/m}}}}}$ magnetic dipole moment m Am2 erg/G (cm5/2g1/2s−1) ${\displaystyle {\frac {\mathbf {m} _{\text{G}}}{\mathbf {m} _{\text{SI}}}}={\sqrt {\frac {\mu _{0}^{\text{SI}}}{4\pi }}}={\frac {10^{3}\,{\text{erg/G}}}{1\,{\text{A}}\cdot {\text{m}}^{2}}}}$ magnetic flux Φm Wb Gcm2 (cm3/2g1/2s−1) ${\displaystyle {\frac {\Phi _{m,{\text{G}}}}{\Phi _{m,{\text{SI}}}}}={\sqrt {\frac {4\pi }{\mu _{0}^{\text{SI}}}}}={\frac {10^{8}\,{\text{G}}\cdot {\text{cm}}^{2}}{1\,{\text{Wb}}}}}$ resistance R Ω s/cm ${\displaystyle {\frac {R_{\text{G}}}{R_{\text{SI}}}}=4\pi \epsilon _{0}^{\text{SI}}={\frac {1\,{\text{s/cm}}}{2.998^{2}\times 10^{11}\,\Omega }}}$ resistivity ρ Ωm s ${\displaystyle {\frac {\rho _{\text{G}}}{\rho _{\text{SI}}}}=4\pi \epsilon _{0}^{\text{SI}}={\frac {1\,{\text{s}}}{2.998^{2}\times 10^{9}\,\Omega \cdot {\text{m}}}}}$ capacitance C F cm ${\displaystyle {\frac {C_{\text{G}}}{C_{\text{SI}}}}={\frac {1}{4\pi \epsilon _{0}^{\text{SI}}}}={\frac {2.998^{2}\times 10^{11}\,{\text{cm}}}{1\,{\text{F}}}}}$ inductance L H s2/cm ${\displaystyle {\frac {L_{\text{G}}}{L_{\text{SI}}}}=4\pi \epsilon _{0}^{\text{SI}}={\frac {1\,{\text{s}}^{2}/{\text{cm}}}{2.998^{2}\times 10^{11}\,{\text{H}}}}}$ Note: The SI quantities ${\displaystyle \epsilon _{0}^{\text{SI}}}$  and ${\displaystyle \mu _{0}^{\text{SI}}}$  satisfy ${\displaystyle \epsilon _{0}^{\text{SI}}\mu _{0}^{\text{SI}}=1/c^{2}}$ . The conversion factors are written both symbolically and numerically. The numerical conversion factors can be derived from the symbolic conversion factors by dimensional analysis. 
For example, the top row says ${\displaystyle {\frac {1}{\sqrt {4\pi \epsilon _{0}^{\text{SI}}}}}={\frac {2.998\times 10^{9}\,{\text{Fr}}}{1\,{\text{C}}}}}$ , a relation which can be verified with dimensional analysis, by expanding ${\displaystyle \epsilon _{0}^{\text{SI}}}$  and C in SI base units, and expanding Fr in Gaussian base units. It is surprising to think of measuring capacitance in centimetres. One useful example is that a centimetre of capacitance is the capacitance between a sphere of radius 1 cm in vacuum and infinity. Another surprising unit is measuring resistivity in units of seconds. A physical example is: Take a parallel-plate capacitor, which has a "leaky" dielectric with permittivity 1 but a finite resistivity. After charging it up, the capacitor will discharge itself over time, due to current leaking through the dielectric. If the resistivity of the dielectric is "X" seconds, the half-life of the discharge is ~0.05X seconds. This result is independent of the size, shape, and charge of the capacitor, and therefore this example illuminates the fundamental connection between resistivity and time units. ### Dimensionally equivalent unitsEdit A number of the units defined by the table have different names but are in fact dimensionally equivalent—i.e., they have the same expression in terms of the base units cm, g, s. (This is analogous to the distinction in SI between becquerel and Hz, or between newton metre and joule.) The different names help avoid ambiguities and misunderstandings as to what physical quantity is being measured. In particular, all of the following quantities are dimensionally equivalent in Gaussian units, but they are nevertheless given different unit names as follows:[9] Quantity In Gaussian base units Gaussian unit of measure E cm−1/2 g1/2 s−1 statV/cm D cm−1/2 g1/2 s−1 statC/cm2 P cm−1/2 g1/2 s−1 statC/cm2 B cm−1/2 g1/2 s−1 Gs H cm−1/2 g1/2 s−1 Oe M cm−1/2 g1/2 s−1 dyn/Mx ## General rules to translate a formulaEdit Any formula can be converted between Gaussian and SI units by using the symbolic conversion factors from Table 1 above. For example, the electric field of a stationary point charge has the SI formula ${\displaystyle \mathbf {E} _{\text{SI}}={\frac {q_{\text{SI}}}{4\pi \epsilon _{0}r^{2}}}{\hat {\mathbf {r} }}}$ where r is distance, and the "SI" subscripts indicate that the electric field and charge are defined using SI definitions. If we want the formula to instead use the Gaussian definitions of electric field and charge, we look up how these are related using Table 1, which says: ${\displaystyle {\frac {\mathbf {E} _{\text{G}}}{\mathbf {E} _{\text{SI}}}}={\sqrt {4\pi \epsilon _{0}^{\text{SI}}}}\quad ,\quad {\frac {q_{\text{G}}}{q_{\text{SI}}}}={\frac {1}{\sqrt {4\pi \epsilon _{0}^{\text{SI}}}}}}$ Therefore, after substituting and simplifying, we get the Gaussian-units formula: ${\displaystyle \mathbf {E} _{\text{G}}={\frac {q_{\text{G}}}{r^{2}}}{\hat {\mathbf {r} }}}$ which is the correct Gaussian-units formula, as mentioned in a previous section. For convenience, the table below has a compilation of the symbolic conversion factors from Table 1. To convert any formula from Gaussian units to SI units using this table, replace each symbol in the Gaussian column by the corresponding expression in the SI column (vice versa to convert the other way). 
This will reproduce any of the specific formulas given in the list above, such as Maxwell's equations, as well as any other formula not listed.[10] For some examples of how to use this table, see:[11]

Table 2A: Replacement rules for translating formulas from Gaussian to SI

| Name | Gaussian units | SI units |
|---|---|---|
| Speed of light | $c$ | $\frac{1}{\sqrt{\epsilon_0\mu_0}}$ |
| Electric field, Electric potential | $\left(\mathbf{E},\varphi\right)$ | $\sqrt{4\pi\epsilon_0}\left(\mathbf{E},\varphi\right)$ |
| Electric displacement field | $\mathbf{D}$ | $\sqrt{\frac{4\pi}{\epsilon_0}}\,\mathbf{D}$ |
| Charge, Charge density, Current, Current density, Polarization density, Electric dipole moment | $\left(q,\rho,I,\mathbf{J},\mathbf{P},\mathbf{p}\right)$ | $\frac{1}{\sqrt{4\pi\epsilon_0}}\left(q,\rho,I,\mathbf{J},\mathbf{P},\mathbf{p}\right)$ |
| Magnetic B field, Magnetic flux, Magnetic vector potential | $\left(\mathbf{B},\Phi_{\text{m}},\mathbf{A}\right)$ | $\sqrt{\frac{4\pi}{\mu_0}}\left(\mathbf{B},\Phi_{\text{m}},\mathbf{A}\right)$ |
| Magnetic H field | $\mathbf{H}$ | $\sqrt{4\pi\mu_0}\,\mathbf{H}$ |
| Magnetic moment, Magnetization | $\left(\mathbf{m},\mathbf{M}\right)$ | $\sqrt{\frac{\mu_0}{4\pi}}\left(\mathbf{m},\mathbf{M}\right)$ |
| Relative permittivity, Relative permeability | $\left(\epsilon,\mu\right)$ | $\left(\frac{\epsilon}{\epsilon_0},\frac{\mu}{\mu_0}\right)$ |
| Electric susceptibility, Magnetic susceptibility | $\left(\chi_{\text{e}},\chi_{\text{m}}\right)$ | $\frac{1}{4\pi}\left(\chi_{\text{e}},\chi_{\text{m}}\right)$ |
| Conductivity, Conductance, Capacitance | $\left(\sigma,S,C\right)$ | $\frac{1}{4\pi\epsilon_0}\left(\sigma,S,C\right)$ |
| Resistivity, Resistance, Inductance | $\left(\rho,R,L\right)$ | $4\pi\epsilon_0\left(\rho,R,L\right)$ |

Table 2B: Replacement rules for translating formulas from SI to Gaussian

| Name | SI units | Gaussian units |
|---|---|---|
| Final substitution A | $\epsilon_0$ | $\frac{1}{\mu_0 c^2}$ |
| Final substitution B | $\mu_0$ | $\frac{1}{\epsilon_0 c^2}$ |
| Speed of light | $c$ | $c$ |
| Electric field, Electric potential | $\left(\mathbf{E},\varphi\right)$ | $\frac{1}{\sqrt{4\pi\epsilon_0}}\left(\mathbf{E},\varphi\right)$ |
| Electric displacement field | $\mathbf{D}$ | $\sqrt{\frac{\epsilon_0}{4\pi}}\,\mathbf{D}$ |
| Charge, Charge density, Current, Current density, Polarization density, Electric dipole moment | $\left(q,\rho,I,\mathbf{J},\mathbf{P},\mathbf{p}\right)$ | $\sqrt{4\pi\epsilon_0}\left(q,\rho,I,\mathbf{J},\mathbf{P},\mathbf{p}\right)$ |
| Magnetic B field, Magnetic flux, Magnetic vector potential | $\left(\mathbf{B},\Phi_{\text{m}},\mathbf{A}\right)$ | $\sqrt{\frac{\mu_0}{4\pi}}\left(\mathbf{B},\Phi_{\text{m}},\mathbf{A}\right)$ |
| Magnetic H field | $\mathbf{H}$ | $\frac{1}{\sqrt{4\pi\mu_0}}\,\mathbf{H}$ |
| Magnetic moment, Magnetization | $\left(\mathbf{m},\mathbf{M}\right)$ | $\sqrt{\frac{4\pi}{\mu_0}}\left(\mathbf{m},\mathbf{M}\right)$ |
| Relative permittivity, Relative permeability | $\left(\epsilon_r,\mu_r\right)$ | $\left(\epsilon,\mu\right)$ |
| Vacuum permittivity, Vacuum permeability | $\left(\epsilon_0,\mu_0\right)$ | $\left(\epsilon_0,\mu_0\right)$ |
| Absolute permittivity, Absolute permeability | $\left(\epsilon,\mu\right)$ | $\left(\epsilon_0\epsilon,\mu_0\mu\right)$ |
| Electric susceptibility, Magnetic susceptibility | $\left(\chi_{\text{e}},\chi_{\text{m}}\right)$ | $4\pi\left(\chi_{\text{e}},\chi_{\text{m}}\right)$ |
| Conductivity, Conductance, Capacitance | $\left(\sigma,S,C\right)$ | $4\pi\epsilon_0\left(\sigma,S,C\right)$ |
| Resistivity, Resistance, Inductance | $\left(\rho,R,L\right)$ | $\frac{1}{4\pi\epsilon_0}\left(\rho,R,L\right)$ |

It may be necessary to apply either Final substitution A or Final substitution B (but not both) after all the other rules have been applied and the resulting formula has already been simplified as much as possible.

## Notes and references

1. One of many examples of using the term "cgs units" to refer to Gaussian units is: Lecture notes from Stanford University
2. "CGS", in How Many? A Dictionary of Units of Measurement, by Russ Rowlett and the University of North Carolina at Chapel Hill
3. For example, one widely used graduate electromagnetism textbook is Classical Electrodynamics by J. D. Jackson. The second edition, published in 1975, used Gaussian units exclusively, but the third edition, published in 1998, uses mostly SI units. Similarly, Electricity and Magnetism by Edward Purcell is a popular undergraduate textbook. The second edition, published in 1984, used Gaussian units, while the third edition, published in 2013, switched to SI units.
4. Littlejohn, Robert (Fall 2011). "Gaussian, SI and Other Systems of Units in Electromagnetic Theory" (PDF). Physics 221A, University of California, Berkeley lecture notes. Retrieved 2008-05-06.
5. Kowalski, Ludwik, 1986, "A Short History of the SI Units in Electricity", Archived 2009-04-29 at the Wayback Machine, The Physics Teacher 24(2): 97–99. Alternate web link (subscription required)
6. A. Garg, Classical Electrodynamics in a Nutshell (Princeton University Press, 2012).
7. Introduction to Electrodynamics by Capri and Panat, p. 180
8. Cardarelli, F. (2004). Encyclopaedia of Scientific Units, Weights and Measures: Their SI Equivalences and Origins (2nd ed.). Springer. pp. 20–25. ISBN 1-85233-682-X.
9. Demystifying Electromagnetic Equations. Books.google.com. p. 155. Retrieved 2012-12-25.
10. Бредов М. М., Румянцев В. В., Топтыгин И. Н. (1985). "Appendix 5: Units transform (p. 385)". Классическая электродинамика. Nauka.
11. Units in Electricity and Magnetism. See the section "Conversion of Gaussian formulae into SI" and the subsequent text.
http://openstudy.com/updates/51cb843ae4b011c79f62e194
## ulises1995 (one year ago)

Which is the period of oscillation of a pendulum with L = 0.5625 m?

1. Fifciol: For small angles the period does not depend on the amplitude: $T=2\pi \sqrt{\frac{ L }{g }}=2\pi\sqrt{\frac{0.5625}{9.81}}\approx 1.50\ \text{s}$ (taking g = 9.81 m s−2).
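Since the numerical step is easy to get wrong, here is a minimal Python check of the small-angle formula; the value of g is an assumption (standard gravity, 9.81 m s−2).

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-angle period of a simple pendulum, T = 2*pi*sqrt(L/g)."""
    return 2.0 * math.pi * math.sqrt(length_m / g)

print(f"T = {pendulum_period(0.5625):.3f} s")  # -> T = 1.505 s
```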
https://physics.stackexchange.com/questions/476748/why-are-there-lots-of-definitions-for-strain
# Why are there lots of definitions for strain? Why do we need Green or Almansi strains and what is True strain? I'm so confused about terminology. ## 1 Answer A physical structure doesn't care what stress and strain measures you use to model it. It just does what it does. However to make a useful mathematical model, the model has to be simple enough so you can actually work with it. That results in different stress and strain measures for different situations. The thing that needs to stay simple is actually the stress-strain relationship $$\sigma_{ij} = C_{ijkl}\epsilon_{kl}$$ where $$C$$ is a fourth-order tensor, with 21 independent components for a general material, and in general all those 21 components can be nonlinear functions of stress, strain, temperature, time, etc, etc ... Life gets much simpler if you can make the approximation that $$C$$ is constant, and one way to do that is to get creative about how to define $$\epsilon$$ and $$\sigma$$. The simplest situation is where the deformations can be assumed to be infinitesimally small. In that case, the only significant terms are the first derivatives of the deformation (i.e. the nine partial derivatives $$\partial u_i/\partial x_j$$), and products of two derivatives are negligible and can be ignored. Those assumptions give "engineering strain," and assuming $$C$$ is constant then gives "engineering stress." Another situation is where the translational deformations can be large, but there is no significant rigid body rotation of the structure. In that case, the nine partial derivatives can be large (e.g. strains of order 1 or higher) but the absence of rotations means that products of the derivatives can still be ignored. If you want to combine large strain increments, things work out better if you take the logarithm of the derivatives (for example if you stretch something by 50% of its original length and then stretch it by 50% of its new length, its final length is 2.25 times the original length, not 2.0 times). Those assumptions lead to "logarithmic strain" or "true strain" and "true stress". A third combination is the "opposite" of the above: the strains are small, but there may be arbitrarily large rigid body rotations. Those assumptions lead to Cauchy-Green strains, and similar strain measures. Of course the final situation is where everything is large - and in that case, it's sometimes not very clear whether an engineering model is "really" solid mechanics, or fluid dynamics of a non-Newtonian fluid! The basic difference between Green strain and Almansi strain is that Green strain is based on the initial configuration of the material, and Almansi strain on its final configuration. To be honest I've never seen any use of Almansi strain at all, but no doubt there is some special application where it is the "best" strain measure to use.
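To make the distinction between these measures concrete, here is a small illustrative Python sketch. The uniaxial-stretch formulas used below are the standard textbook ones, not something stated in the answer above: for a stretch ratio λ = l/l₀, engineering strain is λ − 1, logarithmic (true) strain is ln λ, Green strain is (λ² − 1)/2 and Almansi strain is (1 − λ⁻²)/2. For small stretches they all nearly coincide, and true strain is the one that adds up across successive stretches.

```python
import math

def strain_measures(stretch: float) -> dict:
    """Uniaxial strain measures for a stretch ratio lam = l / l0."""
    lam = stretch
    return {
        "engineering": lam - 1.0,             # (l - l0) / l0
        "true (logarithmic)": math.log(lam),  # ln(l / l0)
        "Green": 0.5 * (lam**2 - 1.0),        # referred to the initial configuration
        "Almansi": 0.5 * (1.0 - lam**-2),     # referred to the final configuration
    }

# Small stretch: all four measures are nearly equal.
print(strain_measures(1.001))

# Two successive 50 % stretches (1.5 * 1.5 = 2.25, the example from the answer):
# true strain is additive, engineering strain is not.
print(math.log(1.5) + math.log(1.5), math.log(2.25))  # equal
print(0.5 + 0.5, 2.25 - 1.0)                          # 1.0 vs 1.25
```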
https://www.nat-hazards-earth-syst-sci.net/20/859/2020/
Nat. Hazards Earth Syst. Sci., 20, 859–875, 2020
https://doi.org/10.5194/nhess-20-859-2020

Research article | 27 Mar 2020

# Estimation of evapotranspiration by the Food and Agricultural Organization of the United Nations (FAO) Penman–Monteith temperature (PMT) and Hargreaves–Samani (HS) models under temporal and spatial criteria – a case study in Duero basin (Spain)

Rubén Moratiel1,2, Raquel Bravo3, Antonio Saa1,2, Ana M. Tarquis2, and Javier Almorox1

• 2 CEIGRAM (Centro de Estudios e Investigación para la Gestión de Riesgos Agrarios y Medioambientales), C/Senda del Rey 13, Madrid, 28040, Spain
• 3 Ministerio de Agricultura y Pesca, Alimentación y Medio Ambiente, Paseo de la Infanta Isabel 1, Madrid, 28071, Spain

Correspondence: Rubén Moratiel ([email protected])

Abstract

The evapotranspiration-based scheduling method is the most common method for irrigation programming in agriculture. There is no doubt that the estimation of the reference evapotranspiration (ETo) is a key factor in irrigated agriculture. However, the high cost and maintenance of agrometeorological stations and the high number of sensors required to estimate it make it impractical, especially in rural areas. For this reason, the estimation of ETo using air temperature, in places where wind speed, solar radiation and air humidity data are not readily available, is particularly attractive. A daily data record of 49 stations distributed over the Duero basin (Spain), for the period 2000–2018, was used for estimation of ETo based on seven models against Penman–Monteith (PM) FAO 56 (FAO – Food and Agricultural Organization of the United Nations) from a temporal (annual or seasonal) and spatial perspective. Two Hargreaves–Samani (HS) models, with and without calibration, and five Penman–Monteith temperature (PMT) models were used in this study. The results show that the models' performance changes considerably, depending on whether the scale is annual or seasonal. The performance of the seven models was acceptable from an annual perspective (R2 > 0.91, NSE > 0.88, MAE < 0.52 and RMSE < 0.69 mm d−1; NSE – Nash–Sutcliffe model efficiency; MAE – mean absolute error; RMSE – root-mean-square error). For winter, no model showed good performance. In the rest of the seasons, the models with the best performance were the following three models: PMTCUH (Penman–Monteith temperature with calibration of the Hargreaves empirical coefficient – kRS, average monthly value of wind speed, and average monthly value of maximum and minimum relative humidity), HSC (Hargreaves–Samani with calibration of kRS) and PMTOUH (Penman–Monteith temperature without calibration of kRS, average monthly value of wind speed and average monthly value of maximum and minimum relative humidity). The HSC model presents a calibration of the Hargreaves empirical coefficient (kRS). In the PMTCUH model, kRS was calibrated and average monthly values were used for wind speed and maximum and minimum relative humidity. Finally, the PMTOUH model is like the PMTCUH model except that kRS was not calibrated.
These results are very useful for adopting appropriate measures for efficient water management, especially in the intensive agriculture of semi-arid zones, under the limitation of agrometeorological data.

1 Introduction

A growing population and its need for food increasingly demand natural resources such as water. This, linked with the uncertainty of climate change, makes water management a key consideration for future food security. The main challenge is to produce enough food for a growing population that is directly affected by the challenges created by the management of agricultural water, mainly by irrigation management (Pereira, 2017). Evapotranspiration (ET) is the water lost from the soil surface and surface leaves by evaporation and, by transpiration, from vegetation. ET is one of the major components of the hydrologic cycle and represents a loss of water from the drainage basin. ET information is key to understanding and managing water resource systems (Allen et al., 2011). ET is normally modeled using weather data and algorithms that describe aerodynamic characteristics of the vegetation and surface energy.

In agriculture, irrigation water is usually applied based on the water balance method in the soil water balance equation, which allows the calculation of the decrease in soil water content as the difference between outputs and inputs of water to the field. In arid areas where rainfall is negligible during the irrigation season, an average irrigation calendar may be defined a priori using mean ET values (Villalobos et al., 2016). The Food and Agricultural Organization of the United Nations (FAO) improved and upgraded the methodologies for reference evapotranspiration (ETo) estimation by introducing the reference crop (grass) concept, described by the FAO Penman–Monteith (PM-ETo) equation (Allen et al., 1998). This approach was tested well under different climates and time step calculations and is currently adopted worldwide (Allen et al., 1998; Todorovic et al., 2013; Almorox et al., 2015). Estimated crop evapotranspiration (ETc) is obtained as a function of two factors ($\mathrm{ET}_{\mathrm{c}}=K_{\mathrm{c}}\cdot \mathrm{ET}_{\mathrm{o}}$): reference crop evapotranspiration (ETo) and the crop coefficient (Kc; Allen et al., 1998). ETo was introduced to study the evaporative demand of the atmosphere independently of crop type, crop stage development and management practices. ETo is only affected by climatic parameters and is computed from weather data. Crop influences are accounted for by using a specific crop coefficient (Kc). However, Kc varies predominantly with the specific crop characteristics and only to a limited extent with climate (Allen et al., 1998).

ET is very variable locally and temporally because of climate differences. Because the ET component is relatively large in water hydrology balances, any small error in its estimate or measurement represents large volumes of water (Allen et al., 2011). Small deviations in ETo estimations affect irrigation and water management in rural areas in which crop extension is significant. For example, in 2017 there was a water shortage at the beginning of the cultivation period (March) in the Duero basin (Spain), and the classical irrigated crops, i.e., corn, were replaced by others with lower water needs, such as sunflower. Wind speed (u), solar radiation (Rs), relative humidity (RH) and temperature (T) of the air are required to estimate ETo.
Additionally, the vapor pressure deficit (VPD), soil heat flux (G) and net radiation (Rn) measurements or estimates are necessary. The PM-ETo methodology presents the disadvantage that the required climate or weather data are normally unavailable or of low quality in rural areas (Martinez and Thepadia, 2010). In this case, where data are missing, Allen et al. (1998) in the guidelines for PM-ETo recommend two approaches: (a) using the equation of Hargreaves–Samani (Hargreaves and Samani, 1985) and (b) using the Penman–Monteith temperature (PMT) method, which requires only temperature data to estimate Rn (net radiation) and VPD for obtaining ETo. In these situations, temperature-based evapotranspiration (TET) methods are very useful (Mendicino and Senatore, 2013). Air temperature is the most readily available meteorological variable, measured at most climatic weather stations. Therefore, TET methods and temperature databases are a solid base for ET estimation all over the world, including areas with limited data resources (Droogers and Allen, 2002).

The first reference to the use of PMT for limited meteorological data was Allen (1995); subsequently, studies like those of Annandale et al. (2002) were carried out, showing similar behavior to the Hargreaves–Samani (HS) method and FAO PM, although with the disadvantage of requiring greater preparation and computation of the data than the HS method. Regarding this point, it should be noted that researchers often do not favor the PMT formulation and adopt the HS equation, which is simpler and easier to use (Paredes et al., 2018). Authors like Pandey et al. (2014) performed calibrations based on solar radiation coefficients in Hargreaves–Samani equations. Today, the PMT calculation process is easily implemented with modern computers (Pandey and Pandey, 2016; Quej et al., 2019). Todorovic et al. (2013) reported that, in Mediterranean hyper-arid and arid climates, PMT and HS show similar behavior and performance, while for moist sub-humid areas, the best performance was obtained with the PMT method. This behavior was reported for moist sub-humid areas in Serbia (Trajkovic, 2005). Several studies confirm this performance in a range of climates (Martinez and Thepadia, 2010; Raziei and Pereira, 2013; Almorox et al., 2015; Ren et al., 2016). Both models (HS and PMT) improved when local calibrations were performed (Gavilán et al., 2006; Paredes et al., 2018). These reduce the problem when wind speed and solar radiation are the major driving variables. Studies in Spain comparing HS and PMT methodologies were carried out in moist sub-humid climate zones (northern Spain), showing a better fit of PMT than HS (López-Moreno et al., 2009). Tomas-Burguera et al. (2017) reported, for the Iberian Peninsula, a better adjustment of PMT than HS, provided that the missing values were filled by interpolation and not by estimation within the PMT model.

Normally the calibration of models for ETo estimation is done from a spatial approach, calibrating models at the locations studied. Very few studies have been carried out to test models from the seasonal point of view, with annual calibration being the most studied. While spatial and annual approaches are of great interest in climatology and meteorology, for agriculture, seasonal or even monthly calibrations are relevant to crops (Nouri and Homaee, 2018). To improve the accuracy of ETo estimations, Paredes et al. (2018) used calibration constant values in the models that were derived for the October–March and April–September periods.

The aim of this study was to evaluate the performance of temperature models for the estimation of reference evapotranspiration against the FAO 56 Penman–Monteith model, with a temporal (annual or seasonal) and spatial perspective in the Duero basin (Spain). The models evaluated were two HS models, with calibration and without calibration, and five PMT models, analyzing the contribution of wind speed, humidity and solar radiation in a situation of limited agrometeorological data.

Figure 1. Location of the study area. The numbered points indicate the locations of the agrometeorological stations according to Table 1.

2 Materials and method

## 2.1 Description of the study area

The study focuses on the Spanish part of the Duero hydrographic basin. The international hydrographic Duero basin is the most extensive of the Iberian Peninsula; with an area of 98 073 km2, it includes the territory of the Duero River basin as well as the transitional waters of the Porto estuary and the associated Atlantic coastal waters (CHD, 2019). It is a territory shared between Portugal, with 19 214 km2 (19.6 % of the total area), and Spain, with 78 859 km2 (80.4 % of the total area). The Duero River basin is located in Spain between 43°5′ and 40°10′ N and 7°4′ and 1°50′ W (Fig. 1). This basin aligns almost exactly with the so-called Submeseta Norte, an area with an average altitude of 700 m, delimited by mountain ranges, with a much drier central zone that contains large aquifers and is the most important area of agricultural production; 98.4 % of the Duero basin belongs to the autonomous community of Castilla y León, and 70 % of the average annual precipitation is used directly by the vegetation or evaporated from the surface; this represents 35 000 hm3. The remaining 30 % is the total natural runoff. The Mediterranean climate is the predominant climate; 90 % of the surface is affected by summer drought conditions. The average annual values are a temperature of 12 °C and precipitation of 612 mm. However, precipitation ranges from minimum values of 400 mm (south–central area of the basin) to a maximum of 1800 mm in the northeast of the basin (CHD, 2019). According to Lautensach (1967), 30 mm is the threshold definition of a dry month. Therefore, between two and five dry periods can be found in the basin (Ceballos et al., 2004). Moreover, the climate variability, especially in precipitation, exhibited in the last decade has decreased the water availability for irrigation in this basin (Segovia-Cardozo et al., 2019).

The Duero basin has 4×10^6 ha of rainfed crops and some 500 000 ha of irrigated crops that consume 75 % of the basin's water resources. Barley (Hordeum vulgare L.) is the most important rainfed crop in the basin, occupying 36 % of the national crop surface, followed by wheat (Triticum aestivum L.), with 30 % (MAPAMA, 2019). Sunflower (Helianthus annuus L.) represents 30 % of the national crop surface. This crop is mainly unirrigated (90 %). Maize (Zea mays L.), alfalfa (Medicago sativa L.) and sugar beet (Beta vulgaris L. var. saccharifera) are the main irrigated crops. These crops represent 29 %, 30 % and 68 % of the national crop area, respectively. Finally, vines (Vitis vinifera L.) fill 72 000 ha, being less than 10 % irrigated.
For the irrigated crops of the basin there are water allocations that fluctuate depending on the availability of water during the agricultural year and the type of crop. These values fluctuate from 1200–1400 m3 ha−1 for vines to 6400–7000 m3 ha−1 for maize and alfalfa. The use rates of the irrigation systems in the basin are as follows: 25 %, 68 % and 7 % for surface, sprinkler and drip irrigation, respectively (Plan Hidrológico, 2019).

## 2.2 Meteorological data

The daily climate data were downloaded from 49 stations (Fig. 1b) of the agrometeorological network SIAR (Agroclimatic Information System for Irrigation; SIAR in Spanish), which is managed by the Spanish Ministry of Agriculture, Fisheries and Food (SIAR, 2018), providing the basic meteorological data from weather stations distributed throughout the Duero basin (Table 1). Each station incorporates measurements of air temperature (T) and relative humidity (RH; Vaisala HMP155), precipitation (ARG100 rain gauge), global solar radiation (pyranometer Skye SP1110), and wind direction and wind speed (u; wind vane and R.M. Young 05103 anemometer). Sensors were periodically maintained and calibrated, and all data were recorded and averaged hourly on a data logger (Campbell CR10X and CR1000). Characteristics of the agrometeorological stations were described by Moratiel et al. (2011, 2013a). For quality control, all parameters were checked. The database calibration and maintenance are carried out by the Ministry of Agriculture. Transfer of data from stations to the main center is accomplished by modems; the main center incorporates a server which sequentially connects to each station to download the information collected during the day. Once the data from the stations are downloaded, they are processed and transferred to a database. The main center is responsible for quality control procedures that comprise the routine maintenance program of the network, including sensor calibration, checking for valid values and data validation. Moreover, the database was analyzed to find incorrect or missing values. To ensure that high-quality data were used, we used quality control procedures to identify erroneous and suspect data. The quality control procedures applied are the range and limit test, step test, and internal consistency test (Estevez et al., 2016). The period studied was from 2000 to 2018, although the start date may fluctuate depending on the availability of data. Table 1 shows the coordinates of the agrometeorological stations used in the Duero basin and the aridity index based on UNEP (1997). Table 1 shows the predominance of the semi-arid climate zone, with 42 of the 49 stations being semi-arid, 2 arid, 4 dry sub-humid and 1 moist sub-humid.

## 2.3 Estimates of reference evapotranspiration

### 2.3.1 FAO Penman–Monteith (FAO PM)

The FAO recommends the PM method for computing ETo and for evaluating other ETo models, like the Penman–Monteith model using only temperature data (PMT) and other temperature-based models (Allen et al., 1998).
The method estimates the potential evapotranspiration from a hypothetical crop with an assumed height of 0.12 m, an aerodynamic resistance (ra) of 208/u2 (u2 is the mean daily wind speed measured at a 2 m height over the grass), a surface resistance (rs) of 70 s m−1 and an albedo of 0.23, closely resembling the evaporation of an extensive surface of green grass with a uniform height that is actively growing and adequately watered. The ETo (mm d−1) was estimated following FAO 56 (Allen et al., 1998):

$$\mathrm{ET}_{\mathrm{o}}=\frac{0.408\,\Delta\,(R_{\mathrm{n}}-G)+\gamma\,\dfrac{900}{T+273}\,u_{2}\,(e_{\mathrm{s}}-e_{\mathrm{a}})}{\Delta+\gamma\,(1+0.34\,u_{2})}\qquad(1)$$

In Eq. (1), Rn is net radiation at the surface (MJ m−2 d−1), G is ground heat flux density (MJ m−2 d−1), γ is the psychrometric constant (kPa °C−1), T is mean daily air temperature at 2 m height (°C), u2 is wind speed at 2 m height (m s−1), es is the saturation vapor pressure (kPa), ea is the actual vapor pressure (kPa) and Δ is the slope of the saturation vapor pressure curve (kPa °C−1). According to Allen et al. (1998), in Eq. (1) G can be considered to be zero.

Table 1. Agrometeorological stations used in the study. Coordinates and aridity index are shown.

### 2.3.2 Hargreaves–Samani (HS)

The scarcity of available agrometeorological data (mainly global solar radiation, air humidity and wind speed) limits the use of the FAO PM method in many locations. Allen et al. (1998) recommended applying the Hargreaves–Samani expression in situations where only the air temperature is available. The HS formulation is an empirical method that requires empirical coefficients for calibration (Hargreaves and Samani, 1982, 1985). The Hargreaves–Samani (Hargreaves and Samani, 1982, 1985) method is given by the following equation:

$$\mathrm{ET}_{\mathrm{o}}=0.0135\cdot k_{\mathrm{RS}}\cdot 0.408\cdot H_{\mathrm{o}}\cdot (T_{m}+17.8)\cdot (T_{x}-T_{n})^{0.5}\qquad(2)$$

where ETo is the reference evapotranspiration (mm d−1); Ho is extraterrestrial radiation (MJ m−2 d−1); kRS is the Hargreaves empirical coefficient; and Tm, Tx and Tn are the daily mean, maximum and minimum air temperature (°C), respectively. The value of kRS was initially set to 0.17 for arid and semi-arid regions (Hargreaves and Samani, 1985). Hargreaves (1994) later recommended using the value of 0.16 for interior regions and 0.19 for coastal regions. Daily temperature variations can occur due to other factors such as topography, vegetation, humidity, etc.; thus using a fixed coefficient may lead to errors. In this study, we use 0.17 as the original coefficient (HSO) and a calibrated coefficient kRS (HSC). Calibrating kRS reduces the inaccuracy, thus improving the estimation of ETo; this calibration was done for each station.

### 2.3.3 Penman–Monteith temperature (PMT)

The FAO PM, when applied using only measured temperature data, is referred to as PMT and retains many of the dynamics of the full-data FAO PM (Pereira et al., 2015; Hargreaves and Allen, 2003). Humidity and solar radiation are estimated in the PMT model using only air temperature as input for the calculation of ETo. Wind speed in the PMT model is set to the constant value of 2 m s−1 (Allen et al., 1998).
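Before turning to the radiation terms of the PMT model below, here is a minimal Python sketch of the Hargreaves–Samani estimate of Eq. (2). It is illustrative only (not the authors' code); the input values in the example are hypothetical, and Ho would normally be computed from site latitude and day of year as in Allen et al. (1998).

```python
import math

def hargreaves_samani_eto(h_o: float, t_mean: float, t_max: float, t_min: float,
                          k_rs: float = 0.17) -> float:
    """ETo (mm d-1) from Eq. (2): 0.0135 * kRS * 0.408 * Ho * (Tm + 17.8) * (Tx - Tn)^0.5.

    h_o is extraterrestrial radiation in MJ m-2 d-1; temperatures are in deg C.
    k_rs = 0.17 corresponds to the uncalibrated HSO model.
    """
    return 0.0135 * k_rs * 0.408 * h_o * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Hypothetical mid-summer day in the basin (values chosen only for illustration):
print(round(hargreaves_samani_eto(h_o=41.0, t_mean=22.0, t_max=31.0, t_min=13.0), 2))
# -> about 6.5 mm d-1
```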
In this model, where global solar radiation (or sunshine data) is lacking, the difference between the maximum and minimum temperature can be used, as an indicator of cloudiness and atmospheric transmittance, for the estimation of solar radiation (Eq. 3; Hargreaves and Samani, 1982). Net solar shortwave and longwave radiation estimates are obtained as indicated by Allen et al. (1998), in Eqs. (4) and (5), respectively. The expression of PMT is obtained as indicated in the following:

$$R_{\mathrm{s}}=H_{\mathrm{o}}\cdot k_{\mathrm{RS}}\cdot (T_{x}-T_{n})^{0.5}\qquad(3)$$

$$R_{\mathrm{ns}}=0.77\cdot H_{\mathrm{o}}\cdot k_{\mathrm{RS}}\cdot (T_{x}-T_{n})^{0.5}\qquad(4)$$

where Rs is solar radiation (MJ m−2 d−1); Rns is net solar shortwave radiation (MJ m−2 d−1); and Ho is extraterrestrial radiation (MJ m−2 d−1), computed as a function of site latitude, solar angle and the day of the year (Allen et al., 1998). Tx is the daily maximum air temperature (°C), and Tn is the daily minimum air temperature (°C). For kRS, Hargreaves (1994) recommended using kRS = 0.16 for interior regions and kRS = 0.19 for coastal regions. For better accuracy the coefficient kRS can be adjusted locally (Hargreaves and Allen, 2003). In this study two assumptions of kRS were made: one where a value of 0.17 was fixed and another where it was calibrated for each station. Net longwave radiation is estimated as

$$R_{\mathrm{nl}}=\left(1.35\cdot\frac{k_{\mathrm{RS}}\cdot (T_{x}-T_{n})^{0.5}}{0.75+2\,z\cdot 10^{-5}}-0.35\right)\cdot\left(0.34-0.14\left(0.6108\cdot \exp\!\left(\frac{17.27\,T_{\mathrm{d}}}{T_{\mathrm{d}}+237.3}\right)\right)^{0.5}\right)\cdot\sigma\cdot\frac{(T_{x}+273.15)^{4}+(T_{n}+273.15)^{4}}{2}\qquad(5)$$

where Rnl is net longwave radiation (MJ m−2 d−1), Tx is the daily maximum air temperature (°C), Tn is the daily minimum air temperature (°C), Td is the dew point temperature (°C) calculated from Tn according to Todorovic et al. (2013), σ is the Stefan–Boltzmann constant for a day (4.903 × 10−9 MJ K−4 m−2 d−1) and z is the altitude (m):

$$\mathrm{PMT}_{\mathrm{rad}}=\frac{0.408\,\Delta}{\Delta+\gamma\,(1+0.34\,u_{2})}\cdot (R_{\mathrm{ns}}-R_{\mathrm{nl}}-G)\qquad(6)$$

$$\mathrm{PMT}_{\mathrm{aero}}=\frac{\gamma\cdot\dfrac{900\,u_{2}}{T_{\mathrm{m}}+273}\cdot\left(\dfrac{e_{\mathrm{s}}(T_{x})+e_{\mathrm{s}}(T_{n})}{2}-e_{\mathrm{s}}(T_{\mathrm{d}})\right)}{\Delta+\gamma\,(1+0.34\,u_{2})}\qquad(7)$$

$$\mathrm{PMT}=\mathrm{PMT}_{\mathrm{rad}}+\mathrm{PMT}_{\mathrm{aero}}\qquad(8)$$

where PMT is the reference evapotranspiration estimated by the Penman–Monteith temperature method (mm d−1), PMTrad is the radiative component of PMT (mm d−1), PMTaero is the aerodynamic component of PMT (mm d−1), Δ is the slope of the saturation vapor pressure curve (kPa °C−1), γ is the psychrometric constant (kPa °C−1), Rns is net solar shortwave radiation (MJ m−2 d−1), Rnl is net longwave radiation (MJ m−2 d−1), G is ground heat flux density (MJ m−2 d−1), considered to be zero according to Allen et al. (1998), Tm is the mean daily air temperature (°C), Tx is the maximum daily air temperature (°C), Tn is the minimum daily air temperature (°C), Td is the dew point temperature (°C) calculated from Tn according to Todorovic et al. (2013), u2 is wind speed at 2 m height (m s−1) and es is the saturation vapor pressure (kPa). In this model two assumptions of kRS were made: one where a value of 0.17 was fixed and another where it was calibrated for each station.

### 2.3.4 Calibration and models

We studied two methods to estimate ETo: the HS method and the reference evapotranspiration estimated by PMT. Within these methods, different adjustments are proposed based on the adjustment coefficients of the methods and the missing data. The parametric calibration for the 49 stations was applied in this study. In order to decrease the errors of the evapotranspiration estimates, local calibration was used. The seven methods used, with the calibrated coefficient (kRS) and their characteristics at the different locations studied, are shown in Table 2. The calibration of the model coefficients was achieved by the nonlinear least-squares fitting technique. The analysis was made on a yearly and seasonal basis. The seasons were the following: (1) winter (December, January and February, or DJF), (2) spring (March, April and May, or MAM), (3) summer (June, July and August, or JJA) and (4) autumn (September, October and November, or SON).

Table 2. Characteristics of the models used in this study. (1) Dew point temperature obtained according to Todorovic et al. (2013). (2) Average monthly value of wind speed. (3) Average monthly value of maximum and minimum relative humidity.

## 2.4 Performance assessment

The model's suitability, accuracy and performance were evaluated using the coefficient of determination (R2; Eq. 9) of the n pairs of observed (Oi) and predicted (Pi) values. Also, the mean absolute error (MAE; mm d−1; Eq. 10), root-mean-square error (RMSE; Eq. 11) and the Nash–Sutcliffe model efficiency (NSE; Eq. 12; Nash and Sutcliffe, 1970) coefficient were used.
The coefficient of the regression line (b), forced through the origin, is obtained as predicted values divided by observed values (ETmodel/ETFAO56). The results were represented in a map applying the Kriging method with the Surfer® 8 program:

$$R^{2}=\left\{\frac{\sum_{i=1}^{n}\left(O_{i}-\overline{O}\right)\left(P_{i}-\overline{P}\right)}{\left[\sum_{i=1}^{n}\left(O_{i}-\overline{O}\right)^{2}\right]^{0.5}\left[\sum_{i=1}^{n}\left(P_{i}-\overline{P}\right)^{2}\right]^{0.5}}\right\}^{2}\qquad(9)$$

$$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|O_{i}-P_{i}\right|\ \left(\mathrm{mm\ d^{-1}}\right)\qquad(10)$$

$$\mathrm{RMSE}=\left[\frac{\sum_{i=1}^{n}\left(O_{i}-P_{i}\right)^{2}}{n}\right]^{0.5}\ \left(\mathrm{mm\ d^{-1}}\right)\qquad(11)$$

$$\mathrm{NSE}=1-\frac{\sum_{i=1}^{n}\left(O_{i}-P_{i}\right)^{2}}{\sum_{i=1}^{n}\left(O_{i}-\overline{O}\right)^{2}}\qquad(12)$$

3 Results and discussion

In the study period the data indicated that the Duero basin is characterized as a semi-arid climate zone (94 % of the stations), where the P/ETo ratio is between 0.2 and 0.5 (Todorovic et al., 2013). The mean annual rainfall is 428 mm, while the average annual ETo for the Duero basin is 1079 mm, reaching the maximum values in the center–south zone, with values that slightly surpass 1200 mm (Fig. 2). Great temporal heterogeneity is observed in the Duero basin, with 7 % of the ETo occurring during the winter months (DJF), while the summer months (JJA) represent 47 % of the annual ETo. In addition, the months from May to September represent 68 % of the annual ETo, with similar values to those reported by Moratiel et al. (2011).

Figure 2. Mean seasonal values of ETo (mm) during the study period 2000–2018. (a) Annual, (b) winter (December, January and February, or DJF), (c) spring (March, April and May, or MAM), (d) summer (June, July and August, or JJA) and (e) autumn (September, October and November, or SON).

Table 3 shows the different statistics analyzed in the seven models studied as a function of the season of the year and annually. From an annual point of view all models show R2 values higher than 0.91, an NSE higher than 0.88, a MAE less than 0.52 mm d−1, a RMSE lower than 0.69 mm d−1, and underestimates or overestimates of the models by ±4 %. The best behavior is shown by the PMTCUH model, with a MAE and RMSE of 0.39 and 0.52 mm d−1, respectively. PMTCUH shows no tendency to overestimate or underestimate the values, with a coefficient of regression b of 1.0. This model shows values of the NSE and R2 of 0.93. The models HSC and PMTOUH have similar behavior, with the same MAE (0.41 mm d−1), NSE (0.92) and R2 (0.91). The RMSE is 0.55 mm d−1 for the PMTOUH model and 0.54 mm d−1 for the HSC model. The models PMTOUT and HSO showed slightly better performance than PMTO2T and PMTC2T, given that the last two models showed the worst behavior (Fig. 3). The performance of the models (PMTO2T, PMTOUT and PMTOUH) improves as the averages of wind speed (u) and dew temperature (Td) values are incorporated.
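As an aside to the goodness-of-fit statistics of Eqs. (9)–(12) above, the following is a minimal sketch of how these indicators could be computed for a pair of observed and predicted ETo series. The helper function and the example values are hypothetical and illustrative only, not the authors' processing code.

```python
import numpy as np

def fit_statistics(obs: np.ndarray, pred: np.ndarray) -> dict:
    """R2, MAE, RMSE and NSE as defined in Eqs. (9)-(12)."""
    o_mean, p_mean = obs.mean(), pred.mean()
    r2 = (np.sum((obs - o_mean) * (pred - p_mean))
          / (np.sqrt(np.sum((obs - o_mean) ** 2))
             * np.sqrt(np.sum((pred - p_mean) ** 2)))) ** 2
    mae = np.mean(np.abs(obs - pred))
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    nse = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - o_mean) ** 2)
    return {"R2": r2, "MAE": mae, "RMSE": rmse, "NSE": nse}

# Made-up daily ETo values (mm d-1), purely for illustration:
obs = np.array([1.2, 2.5, 4.1, 5.8, 6.3, 3.9])
pred = np.array([1.0, 2.8, 4.4, 5.5, 6.9, 3.6])
print(fit_statistics(obs, pred))
```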
The same pattern is shown between the PMTCUH model, where the mean u values and Td are incorporated, and PMTC2T, where u is 2 m s−1 and dew temperature is calculated with the approximation of Todorovic et al. (2013). These adjustments are supported because the adiabatic component of evapotranspiration in the PMT equation is very influential in the Mediterranean climate, especially wind speed (Moratiel et al., 2010).

Figure 3. Performance of the models with an annual focus. (a) Average annual values of ETo (mm d−1). Mean values of MAE (mm d−1): (b) PMTO2T model, (c) HO model, (d) HC model, (e) PMTC2T model, (f) PMTOUT model, (g) PMTOUH model and (h) PMTCUH model.

Figure 4. Performance of the models with a winter focus (December, January and February). (a) Average values of ETo (mm d−1) in winter. Mean values of MAE (mm d−1): (b) PMTO2T model, (c) HO model, (d) HC model, (e) PMTC2T model, (f) PMTOUT model, (g) PMTOUH model and (h) PMTCUH model.

Table 3. Statistical indicators for ETo estimation in the seven models studied for different seasons. Average data for the 49 stations studied.

From a spatial perspective, it is observed in Fig. 3 that the areas where the values of the MAE are higher are to the east and southwest of the basin. This is due to the fact that the average wind speed in the eastern zone is higher than 2.5 m s−1; for example, the Hinojosa del Campo station shows average annual values of 3.5 m s−1. The southwestern area shows wind speeds below 1.5 m s−1, such as at the Ciudad Rodrigo station, with annual average values of 1.19 m s−1. These MAE differences are more pronounced in the models in which the average wind speed is not taken, such as the PMTC2T and PMTO2T models. Most of the basin has wind speeds between 1.5 and 2.5 m s−1. The lower MAE values in the northern zone of the basin are due to lower average values of the VPD than in the central area, with values of 0.7 kPa in the northern zone and 0.95 kPa in the central zone. The same trends in the effect of wind on the ETo estimates were detected by Nouri and Homaee (2018), who indicated that values outside the range of 1.5–2.5 m s−1 in models where the default u was set at 2 m s−1 increased the error of the ETo. Even for models such as HS, where the influence of wind speed is not directly indicated, performance outside of the ranges previously mentioned is not good, and some authors have proposed HS calibrations based on wind speeds in Spanish basins such as the Ebro Basin (Martínez-Cob and Tejero-Juste, 2004). In our study, the HSC model showed good performance, with MAE values similar to PMTCUH and PMTOUH (Fig. 3).

The performance of the models by season of the year changes considerably, obtaining lower adjustments, with values of R2 = 0.53 for winter (DJF) in the models HSO and HSC and for summer (JJA) in the models PMTO2T and PMTC2T. All models during spring and autumn show R2 values above 0.8. The NSE for models HSO, PMTC2T, PMTO2T and PMTOUT in summer and winter is at unsatisfactory values below 0.5 (Moriasi et al., 2007). The mean values (49 stations) of the MAE (Fig. 4) and RMSE for the models in the winter were 0.24–0.30 and 0.30–0.37 mm d−1, respectively. For spring, the ranges were between 0.42 and 0.52 mm d−1 for the MAE (Fig. 5) and 0.55 and 0.65 mm d−1 for the RMSE. In summer, the MAE (Fig. 6) fluctuated between 0.53 and 0.72 mm d−1, and the RMSE fluctuated between 0.68 and 0.91 mm d−1. Finally, in autumn, the values of the MAE (Fig. 7) and RMSE were 0.38–0.58 and 0.49–0.70 mm d−1, respectively (Table 3).

Figure 5. Performance of the models with a spring focus (March, April and May). (a) Average annual values of ETo (mm d−1) in spring. Mean values of MAE (mm d−1): (b) PMTO2T model, (c) HO model, (d) HC model, (e) PMTC2T model, (f) PMTOUT model, (g) PMTOUH model and (h) PMTCUH model.

Figure 6. Performance of the models with a summer focus (June, July and August). (a) Average values of ETo (mm d−1) in summer. Mean values of MAE (mm d−1): (b) PMTO2T model, (c) HO model, (d) HC model, (e) PMTC2T model, (f) PMTOUT model, (g) PMTOUH model and (h) PMTCUH model.

Figure 7. Performance of the models with an autumn focus (September, October and November). (a) Average values of ETo (mm d−1) in autumn. Mean values of MAE (mm d−1): (b) PMTO2T model, (c) HO model, (d) HC model, (e) PMTC2T model, (f) PMTOUT model, (g) PMTOUH model and (h) PMTCUH model.

The model that shows the best performance independently of the season is PMTCUH. The models that can be considered at a second level are HSC and PMTOUH. During the months of more solar radiation (summer and spring) the performance of the HSC model is slightly better than that of the PMTOUH model. The HSO, PMTO2T, PMTC2T and PMTOUT models have a much poorer performance than the previous models (PMTOUH and HSC). The model with the worst performance is PMTO2T. The northern area of the basin shows the lowest MAE in most models and for all seasons. This is due in part to the fact that the lower values of ETo (mm d−1) are located in the northern zone. On the other hand, the eastern zone of the basin shows the highest values of the MAE due to the strong winds in that area. During the winter the seven models tested show no great differences between them, although PMTCUH is the model with the best performance. It is important to indicate that during this season the RMSE (%) is above 30 % in all the models, so they can be considered very weak models. According to Jamieson et al. (1991) and Bannayan and Hoogenboom (2009), a model is considered excellent with a normalized RMSE (%) less than 10 %, good if the normalized RMSE (%) is greater than 10 % and less than 20 %, fair if the normalized RMSE (%) is greater than 20 % and less than 30 %, and poor if the normalized RMSE (%) is greater than 30 %. All models during the spring season (MAM) can be considered good or fair, since their RMSE (%) fluctuates between 17 % and 20 %. The seven models during the summer season (JJA) can be considered good, since their RMSE varies from 12 % to 16 %. Finally, the models during autumn (SON) are considered fair or poor, fluctuating between 22 % and 32 %. The models that reached values greater than 30 % during autumn were PMTC2T (31 %) and PMTO2T (32 %), which also had a clear tendency to overestimate (Table 3). In the use of temperature models for estimating ETo, it is necessary to know the objective that is set. For the management of irrigation in crops, it is better to test the models in the period in which the species require the contribution of additional water. In many cases, applying models that perform well from an annual perspective can lead to more accentuated errors in the period of greater water needs.
Studies of the temperature models for ETo estimation at different temporal and spatial scales can give valuable information that allows for managing water planning in zones where economic development does not allow the implementation of agrometeorological stations due to their high cost.

4 Discussion

On an annual basis, our RMSE values fluctuate from 0.69 mm d−1 (PMTO2T) to 0.52 mm d−1 (PMTCUH). These data are in accordance with the values cited by other authors in the same climatic zone. Jabloun and Sahli (2008) cited a RMSE of 0.41–0.80 mm d−1 for Tunisia. The authors showed the PMT model performance to be better than that of the non-calibrated Hargreaves model. Raziei and Pereira (2013) reported RMSE data for a semi-arid zone in Iran between 0.27 and 0.81 mm d−1 for the HSC model and 0.30 and 0.79 mm d−1 for PMTC2T, although these authors use monthly averages in their models. Ren et al. (2016) reported values of RMSE in the range of 0.51 to 0.90 mm d−1 for PMTC2T and in the range of 0.81 to 0.94 mm d−1 for HSC in semi-arid locations in Inner Mongolia (China). Todorovic et al. (2013) found the PMTO2T method to have better performance than the uncalibrated HS method (HSO), with an average RMSE of 0.47 mm d−1 for PMTO2T and 0.52 mm d−1 for HSO. At this point, we should highlight that in our study daily-value data were used. The original Hargreaves equation was developed by regressing cool-season grass ET in Davis, California; the kRS coefficient is a calibration coefficient. The aridity index for Davis is semi-arid (P/ET = 0.33; Hargreaves and Allen, 2003; Moratiel et al., 2013b), like 94 % of the stations studied, which explains why the behavior of the HSO model is often very similar to HSC. Even so, the calibration coefficient needs to be adjusted for other climates. Numerous studies in the literature have demonstrated the relevance of kRS calibration for estimating FAO 56 ETo (Todorovic et al., 2013; Raziei and Pereira, 2013; Paredes et al., 2018).

PMT models improved when the average wind speed was considered. In addition, trends and fluctuations of u have been reported as the factor that most influences ETo trends (Nouri et al., 2017; McVicar et al., 2012; Moratiel et al., 2011). Numerous authors have recommended including, as much as possible, average data of local wind speeds for the improvement of the models, like Nouri and Homaee (2018) and Raziei and Pereira (2013) in Iran, Paredes et al. (2018) in the Azores (Portugal), Djaman et al. (2017) in Uganda, Rojas and Sheffield (2013) in Louisiana (USA), Jabloun and Sahli (2008) in Tunisia, and Martínez-Cob and Tejero-Juste (2004) in Spain, among others. In addition, even ETo prediction models based on PMT base their behavior on the wind speed variable (Yang et al., 2019). It is important to note that PMTOUT generally has better performance than PMTC2T except in spring. The difference between both models is that in PMTC2T, kRS is calibrated with wind speed set to 2 m s−1, and in PMTOUT, kRS is not calibrated and the average wind speed is used. In this case the wind speed variable has less of an effect than the calibration of kRS, since the average wind speed during spring (2.3 m s−1) is very close to 2 m s−1 and there is no great variation between both settings. In this way, kRS calibration shows a greater contribution than the average of the wind speed to improving the model (Fig. 5e and f).
In addition, although u is not directly considered in HS, this model is more robust in regions with wind speed averages around 2 m s−1 (Allen et al., 1998; Nouri and Homaee, 2018). On the other hand, errors in the estimation of relative humidity cause substantial changes in the estimation of ETo, as reported by Nouri and Homaee (2018) and Landeras et al. (2008). The RMSE values (%) of the different models change considerably by season; values are between 16.6 % and 12.3 % for summer and between 41.2 % and 33.5 % for winter. Similar results were obtained in Iran by Nouri and Homaee (2018), where in the months of December, January and February the performance of the PMT and HS models tested had RMSE (%) values above 30 %. Very few studies, as far as we know, have been carried out with adjustments of evapotranspiration models from a temporal point of view, and generally the models are calibrated and adjusted from an annual point of view. Some authors, such as Aguilar and Polo (2011), differentiate seasons between wet and dry, and others, such as Paredes et al. (2018), divide them into summer and winter; Vangelis et al. (2013) take two periods into account, and Nouri and Homaee (2018) do it from a monthly point of view. In most cases, the results obtained in these studies are not comparable with those presented in this study, since the timescales are different. However, it can be noted that the results of the models at the seasonal timescale differ greatly from those at the annual scale.

5 Conclusions

The performance of seven temperature-based models (PMT and HS) was evaluated in the Duero basin (Spain) for a total of 49 agrometeorological stations. Our studies revealed that the models tested on an annual or seasonal basis provide different performance. The values of R2 are higher when the models are evaluated annually, with values between 0.91 and 0.93 for the seven models, but when evaluated from a seasonal perspective the values fluctuate between 0.5 and 0.6 for summer or winter and between 0.81 and 0.86 for spring and autumn. The NSE values are high for models tested from an annual perspective, but for the winter and summer seasons they are below 0.5 for the models HSO, PMTO2T, PMTC2T and PMTOUT. The fluctuations in RMSE and MAE between models were greater from an annual perspective than from a seasonal one. During the winter none of the models showed good performance, with values of R2 < 0.59, NSE < 0.58 and RMSE (%) > 30 %. From a practical point of view, in the management of irrigated crops, winter is a season where crop water needs are minimal, with daily average values of ETo around 1 mm due to low temperatures, radiation and VPD. The model that showed the best performance was PMTCUH, followed by PMTOUH and HSC for annual and seasonal criteria. PMTOUH is slightly less robust than PMTCUH during the maximum radiation periods of spring and summer, since PMTCUH performs the kRS calibration. The performance of the HSC model is better in the spring period, similar to PMTCUH. The spatial distribution of MAEs in the basin shows that it is highly dependent on wind speeds, with greater errors in areas with winds greater than 2.8 m s−1 (east of the basin) and lower than 1.3 m s−1 (south–southwest of the basin).
This information on the tested models at different temporal and spatial scales can be very useful for adopting appropriate measures for efficient water management under the limitation of agrometeorological data and under the recent increase in dry periods in this basin. It is necessary to consider that these studies are carried out at a local scale, and in many cases the extrapolation of the results to a global scale is complicated. Future studies should be carried out along these lines from a monthly point of view, since there may be high variability within the seasons.

Data availability

Evapotranspiration and agrometeorological data are from the Agroclimatic Information System for Irrigation (SIAR), belonging to the Ministry of Agriculture, Fisheries and Food. These data are available at http://eportal.mapa.gob.es/websiar/Inicio.aspx (last access: 2 June 2018). The processing workflow for these data can be seen in Sect. 2 of this paper.

Author contributions

RM and JA developed the idea for the research and methodology and prepared the draft of the paper. RB and AS obtained and processed the raw data. AMT and RM prepared the maps and analyzed the statistical variables obtained. RM, JA, AS and AMT reviewed and edited the paper and contributed to the final paper.

Competing interests

The authors declare that they have no conflict of interest.

Special issue statement

Acknowledgements

Special thanks are due to the Centro de Estudios e Investigación para la Gestión de Riesgos Agrarios y Medioambientales (CEIGRAM). Also, we would like to acknowledge the referees and especially the editor for their valuable comments and efforts in reviewing and handling our paper.

Financial support

This research has been supported by MINECO (Ministerio de Economía y Competitividad) through the projects PRECISOST (AGL2016-77282-C3-2-R) and AGRISOST-CM (S2018/BAA-4330).

Review statement

This paper was edited by Jonathan Rizzi and reviewed by Victor Quej, Pankaj Pandey, and two anonymous referees.

References

Aguilar, C. and Polo, M. J.: Generating reference evapotranspiration surfaces from the Hargreaves equation at watershed scale, Hydrol. Earth Syst. Sci., 15, 2495–2508, https://doi.org/10.5194/hess-15-2495-2011, 2011. Allen, R. G.: Evaluation of procedures for estimating grass reference evapotranspiration using air temperature data only, Report submitted to Water Resources Development and Management Service, Land and Water Development Division, United Nations Food and Agriculture Service, Rome, Italy, 1995. Allen, R. G., Pereira, L. S., Raes, D., and Smith, M.: Crops evapotranspiration, Guidelines for computing crop requirements, Irrigations and Drainage Paper 56, FAO, Rome, 300 pp., 1998. Allen, R. G., Pereira, L. S., Howell, T. A., and Jensen, E.: Evapotranspiration information reporting: I. Factors governing measurement accuracy, Agr. Water Manage., 98, 899–920, https://doi.org/10.1016/j.agwat.2010.12.015, 2011. Almorox, J., Quej, V. H., and Martí, P.: Global performance ranking of temperature-based approaches for evapotranspiration estimation considering Köppen climate classes, J. Hydrol., 528, 514–522, https://doi.org/10.1016/j.jhydrol.2015.06.057, 2015. Annandale, J., Jovanovic, N., Benade, N., and Allen, R. G.: Software for missing data error analysis of Penman-Monteith reference evapotranspiration, Irrig. Sci., 21, 57–67, https://doi.org/10.1007/s002710100047, 2002.