url: stringlengths 16–775
text: stringlengths 100–1.02M
date: timestamp[s]
metadata: stringlengths 1.07k–1.1k
https://tjyj.stats.gov.cn/CN/Y2013/V30/I9/29
• Article • ### Changes in the Pattern of Carbon Emissions Embodied in Global Trade and Their Determinants • Publication date: 2013-09-15 Release date: 2013-09-04 ### Research on the Pattern Change of Carbon Emission Embodied in International Trade and Its Determinants Jiang Xuemei & Liu Yifang • Online: 2013-09-15 Published: 2013-09-04 Abstract: Based on a set of World Input-Output Tables (WIOT) for 1995 and 2008, this paper empirically measures the carbon emissions embodied in a unit of value added of exports for the main developed and developing countries. Using a structural decomposition technique, it also explores the underlying reasons for both the changes over time and the differences across countries. It is found that developing countries experienced a faster decrease in carbon embodied per unit of value added of exports than developed countries did during 1995-2008. However, as of 2008, developing countries were still responsible for much higher carbon emissions than developed countries when receiving the same value added from exports. The structural decomposition shows that differences in carbon reduction technology (indicated by the contribution of carbon intensity) are among the most important reasons. From the perspective of global carbon reduction, encouraging technology transfer from developed to developing countries appears very important.
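For readers unfamiliar with the method, the embodied-emission intensity behind such measurements is conventionally obtained by multiplying direct sectoral emission intensities by the Leontief inverse; a minimal two-sector sketch with made-up coefficients (not the paper's data or exact specification):

```python
# Embodied emission intensity via the Leontief inverse: m = f (I - A)^(-1).
# Two-sector toy economy; all numbers are illustrative, not from the paper.

def leontief_intensity(A, f):
    """Total (direct + indirect) emissions per unit of final demand,
    for a 2x2 technical-coefficient matrix A and direct intensities f."""
    # Entries of I - A for the 2x2 case
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    # Inverse of I - A
    inv = [[d / det, -b / det], [-c / det, a / det]]
    # Row vector f times (I - A)^(-1)
    return [f[0] * inv[0][0] + f[1] * inv[1][0],
            f[0] * inv[0][1] + f[1] * inv[1][1]]

A = [[0.1, 0.2], [0.3, 0.1]]   # intermediate input coefficients
f = [2.0, 1.0]                  # direct emissions per unit of output

m = leontief_intensity(A, f)    # total embodied intensity per sector
```

Dividing such an intensity by the value-added share of exports gives the "carbon per unit of value added of exports" quantity the abstract compares across countries.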
2022-07-02T06:25:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2288520485162735, "perplexity": 2630.74426285243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00558.warc.gz"}
https://www.futurelearn.com/info/courses/python-in-hpc/0/steps/65149
# Communicators In this article we discuss MPI communicators in more detail and show how to create user-defined communicators. © CC-BY-NC-SA 4.0 by CSC - IT Center for Science Ltd. In the MPI context, a communicator is a special object representing a group of processes that participate in communication. When an MPI routine is called, the communication involves some or all of the processes in a communicator. In C and Fortran, all MPI routines expect a communicator as one of their arguments. In Python, most MPI routines are implemented as methods of a communicator object. A single process can belong to multiple communicators and has a unique ID (rank) in each of them. ## User-defined communicators All processes start in a global communicator called MPI_COMM_WORLD (or MPI.COMM_WORLD in mpi4py), but the user can also define custom communicators as needed. • By default a single, universal communicator exists to which all processes belong (MPI.COMM_WORLD) A new communicator is created as a collective operation of an existing communicator. For example, to split the processes in a communicator into smaller sub-groups, one could do the following:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
color = rank % 4
local_comm = comm.Split(color)
local_rank = local_comm.Get_rank()
print("Global rank: %d Local rank: %d" % (rank, local_rank))
```

A distinct label (called a color, which here is simply an integer between 0 and 3) is assigned to each process based on its rank in the original communicator. New communicators are then created based on this value, so that all processes with the same "color" end up in the same communicator. In effect, this splits the processes in the original communicator into 4 sub-groups that each share a new communicator within the sub-group. Can you think of cases where multiple communicators can be useful?
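The rank-to-color bookkeeping that Split performs can be traced without launching MPI at all; the following pure-Python sketch (hypothetical, no mpi4py needed) reproduces the color and local-rank assignment for 8 processes:

```python
# Simulate the bookkeeping of comm.Split(color) for 8 global ranks with
# color = rank % 4: processes with equal color form a new communicator,
# and local ranks are assigned in global-rank order within each color group.

def split_ranks(n_procs, n_colors):
    """Return {global_rank: (color, local_rank)} for color = rank % n_colors."""
    counters = {}   # next local rank to hand out, per color
    result = {}
    for rank in range(n_procs):          # iterate in global-rank order
        color = rank % n_colors
        local_rank = counters.get(color, 0)
        counters[color] = local_rank + 1
        result[rank] = (color, local_rank)
    return result

mapping = split_ranks(8, 4)
# Global ranks 0 and 4 share color 0, receiving local ranks 0 and 1.
```

This makes it easy to see why, for example, global ranks 3 and 7 end up as local ranks 0 and 1 of the color-3 sub-communicator.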
2023-01-27T05:27:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2059304118156433, "perplexity": 3785.175908507906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00031.warc.gz"}
https://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/cme.htm
Name: CME Type: Analysis Command Purpose: Estimate the parameters of a generalized Pareto distribution using the conditional mean exceedance (CME) method. Description: The generalized Pareto distribution (GPD) is an asymptotic distribution developed by using the fact that exceedances of a sufficiently high threshold are rare events to which the Poisson distribution applies. The cumulative distribution function of the generalized Pareto distribution is $$G(y) = 1 - {[1 + (cy/a)]^{-1/c}} \hspace{0.5 in} a > 0, (1 + (cy/a)) > 0$$ Here, c is the shape parameter and a is the scale parameter. This equation can be used to represent the conditional cumulative distribution of the excess Y = X - u of the variate X over the threshold u, given X > u for u sufficiently large. The cases c > 0, c = 0, and c < 0 correspond respectively to the extreme value type II (Frechet), extreme value type I (Gumbel), and reverse Weibull domains of attraction. Given the mean E(Y) and standard deviation $$s_Y$$ of the variate Y, then $$a = 0.5 E(Y) \{1 + [E(Y)/s_Y]^2\} \hspace{0.5 in} c = 0.5 \{1 - [E(Y)/s_Y]^2\}$$ The CME, or mean residual life (MRL), is the expectation of the amount by which a value exceeds a threshold u, conditional on that threshold being attained. If the exceedance data are fitted by the GPD model and c < 1, u > 0, and (a + u*c) > 0, then a plot of CME versus u should follow a line with intercept a/(1-c) and slope c/(1-c). The linearity of the CME plot can thus be used as an indicator of the appropriateness of the GPD model, and both c and a can be estimated. Note that for the case where c < 0, $$\gamma$$ = -1/c is the estimate of the shape parameter for the reverse Weibull (SET MINMAX 2 case in Dataplot) distribution. The CME command performs a least squares fit of the CME versus u data points. It does this as follows: 1. All points above the user-specified threshold are saved and sorted. 2. Loop through the sorted points from smallest to largest. 3. 
For a given point in the loop, set the threshold u equal to that point. Then compute the CME. The CME is simply the sum of the points minus the threshold for those points greater than the threshold divided by the number of points greater than the threshold. The NISTIR 5531 document (see the References section below) gives the formula for the standard deviation of c. Syntax: CME MLE <y> <SUBSET/EXCEPT/FOR qualification> where <y> is the response variable; and where the <SUBSET/EXCEPT/FOR qualification> is optional. Examples: CME MLE Y CME MLE Y SUBSET TAG > 0 Note: The user specified threshold is determined by entering the following command before the CME command: LET THRESHOL = <value> If no threshold is specified, then the minimum data value is used as the threshold. Note: The following internal parameters will be saved. GAMMA = shape parameter for generalized Pareto distribution A = scale parameter for generalized Pareto distribution SDGAMMA = standard deviation of GAMMA If the absolute value of GAMMA is within a user-specified tolerance of zero, then the following are also saved. LOC = location parameter for Gumbel distribution. SCALE = scale parameter for Gumbel distribution. To specify this tolerance, enter the command SET PEAKS OVER THRESHOLD TOLERANCE <value> The default tolerance is 0.05. If GAMMA is less than zero with an absolute value greater than the above tolerance, then the following are also saved. GAMMA2 = shape parameter for reverse Weibull distribution. LOC = location parameter for reverse Weibull distribution. SCALE = scale parameter for reverse Weibull distribution. These estimates for the reverse Weibull and Gumbel distributions are based on moment estimators. The formulas are given on page 3 of NIST Building Science Series 174 (see the Reference section below). Currently, no estimates for the Frechet case (GAMMA > 0) are saved. Note: The May, 2005 version added support for generating the output in Latex or HTML. 
Enter HELP CAPTURE HTML HELP CAPTURE LATEX for details. The ASCII output was also modified somewhat. This was a cosmetic change to make the output clearer. Note: The PEAKS OVER THRESHOLD PLOT was added in the 5/2005 version. This plot shows how the estimate of the shape parameter changes as the threshold changes. Default: None. Synonyms: None Related Commands: DEHAAN = Compute the Dehaan estimates for the generalized Pareto distribution. CME PLOT = Compute a CME plot. GEPPDF = Compute the probability density function for the generalized Pareto distribution. PEAKS OVER THRESHOLD PLOT = Generate a peaks over threshold plot. Reference: Johnson, Kotz, and Balakrishnan (1994), "Continuous Univariate Distributions: Volume I," 2nd ed., John Wiley and Sons. Heckert, Simiu, and Whalen (1998), "Estimates of Hurricane Wind Speeds by the 'Peaks Over Threshold' Approach," Journal of Structural Engineering. Simiu and Heckert (1996), "Extreme Wind Distribution Tails: A 'Peaks Over Threshold' Approach," Journal of Structural Engineering. Lechner, Simiu, and Heckert (1993), "Assessment of 'Peaks Over Threshold' Methods for Estimating Extreme Value Distribution Tails," Structural Safety. Simiu, Heckert, and Whalen (1996), "Estimates of Hurricane Wind Speeds by the 'Peaks Over Threshold' Method," NIST Technical Note 1416. Gross, Simiu, Heckert, and Lechner (1995), "Extreme Wind Estimates by the Conditional Mean Exceedance Procedure," NISTIR 5531. Simiu and Heckert (1995), "Extreme Wind Distribution Tails: A 'Peaks Over Threshold' Approach," NIST Building Science Series 174. Applications: Extreme Value Analysis Implementation Date: 1998/5 2005/5: Updated the output. 2005/5: Added support for HTML and LaTeX output. 2005/5: Added support for the standard deviation of c. Program: SKIP 25 SET WRITE DECIMALS 5 LET Y2 = SORT Y LET THRESHOL = Y2(900) CME MLE Y The following output is generated. 
Generalized Pareto Parameter Estimation (CME) (Maximum Case) Summary Statistics (Full Data Set): Number of Observations: 977 Sample Mean: 7.81898 Sample Standard Deviation: 17.76409 Sample Minimum: 0.00000 Sample Maximum: 90.04000 Summary Statistics for Observations Above Threshold: Threshold: 43.36000 Number of Observations Above Threshold: 77 Sample Mean: 56.76623 Sample Standard Deviation: 10.39647 CME Parameter Estimates: Location Parameter: 43.36000 Scale Parameter: 16.24223 Shape Parameter (Gamma): -0.20209 Standard Deviation of Gamma: 0.05361 Log-likelihood: -0.8068094E+02 AIC: 0.1673619E+03 AICc: 0.1676907E+03 BIC: 0.1743933E+03 For negative Gamma, the generalized Pareto is equivalent to a reverse Weibull (SET MINMAX MAX) with: Shape Parameter (Gamma): 4.94840 Location Parameter: 101.72727 Scale Parameter: 48.99755 NIST is an agency of the U.S. Commerce Department. Date created: 06/05/2001 Last updated: 10/13/2015
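The CME statistic and the moment formulas quoted in the Description are straightforward to compute; a pure-Python sketch (an illustration of the formulas, not Dataplot's implementation):

```python
# Conditional mean exceedance and GPD moment estimates, following the formulas
# quoted above: a = 0.5*E(Y)*(1 + (E(Y)/s_Y)^2) and c = 0.5*(1 - (E(Y)/s_Y)^2),
# where Y = X - u are the excesses over the threshold u.
from statistics import mean, stdev

def cme(data, u):
    """Mean excess over threshold u: sum of excesses / count of exceedances."""
    excess = [x - u for x in data if x > u]
    return sum(excess) / len(excess)

def gpd_moment_estimates(data, u):
    """Moment estimates (a, c) for the GPD fit to the excesses over u."""
    y = [x - u for x in data if x > u]
    m, s = mean(y), stdev(y)       # sample mean and standard deviation of Y
    r = (m / s) ** 2
    a = 0.5 * m * (1 + r)          # scale parameter
    c = 0.5 * (1 - r)              # shape parameter
    return a, c

a_hat, c_hat = gpd_moment_estimates([2.0, 3.0, 4.0], 1.0)
```

The CME command's least squares step then fits a line to (u, CME(u)) pairs and recovers a and c from the intercept a/(1-c) and slope c/(1-c).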
2019-11-20T21:49:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7432017922401428, "perplexity": 4929.763902986917}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670635.48/warc/CC-MAIN-20191120213017-20191121001017-00352.warc.gz"}
https://mooseframework.inl.gov/newsletter/2018_06.html
## DistributedGeneratedMesh Since the beginning, MOOSE has had the GeneratedMesh for generating Cartesian meshes in 1D, 2D and 3D. However, the way this has worked in parallel for distributed-mesh is suboptimal. In parallel, the complete GeneratedMesh is actually built on every processor. If you're using parallel_type = distributed the entire mesh will still be built on every processor, partitioned, and then each processor will delete the portion not assigned to it. We have added a new capability called DistributedGeneratedMesh. This new capability allows each processor to uniquely create only the portion of the mesh that belongs on that processor. In this way, much larger generated meshes can be created in parallel much more quickly and efficiently. ## VectorPostprocessor Parallel Consistency Just as with Postprocessors, the outputs of VectorPostprocessors (VPPs) can be coupled back into other MOOSE objects. However, there is one issue: the vectors a VPP produces are only required to be complete on "processor 0" (the root MPI process). So, if you need the value of a VPP in a Kernel (which will most likely be running on all of the MPI processes), something needs to be done to make the complete vectors available on all of the MPI processes. A new capability was added to MOOSE this month that automatically handles the parallelism of VPP output. To tell MOOSE that you need a complete copy of the vector on every processor you will now add an extra argument to getVectorPostprocessorValue() like so: getVectorPostprocessorValue('the_vpp_parameter_name', 'the_vector_name', true) The true is the new part. By passing this, MOOSE will completely handle the parallelism for you and you are free to use the vector values on all processors. In addition, a new function called getScatterVectorPostprocessorValue() was added to help in the case that a VPP produces vectors that are num_procs in length and your object only needs to access its entry in that vector. 
For more information on all of this, check out the bottom of the VectorPostprocessors System page. 2x2 GridPartitioner Example ## GridPartitioner The Partitioner System is responsible for assigning portions of the domain to each MPI process when running in parallel. Most of you never interact with this and are using the default LibmeshPartitioner, which defaults to using the METIS package for partitioning. However, sometimes you want more control over the partitioning. This month we added the GridPartitioner. The GridPartitioner allows you to use a perfect grid (similar to a GeneratedMesh) to do the element assignment. To the right is an example of using the GridPartitioner on a GeneratedMesh with a 2x2 grid for use on 4 processors. ## MOOSE Tagging System Previously, a global Jacobian matrix and a global residual vector were filled with contributions from all kernels when a sweep was done through all elements. Sometimes, for example for eigenvalue solvers or explicit time steppers, we need a more flexible way to associate kernels with vectors and matrices in an arbitrary way. A kernel can contribute to one or more vectors and matrices, and similarly a matrix or a vector can receive contributions from one or more kernels. This capability is currently used to build explicit sweep and eigenvalue solvers. More details on the MOOSE tagging system are at Tagging System. ## MooseMesh::clone() The MooseMesh::clone() interface returns a reference to the object it allocates, which makes it easy to forget to explicitly delete the object when you are done with it. The new MooseMesh::safeClone() method, which returns a std::unique_ptr<MooseMesh>, should be used instead. MooseMesh::clone() has been reimplemented as a mooseError() at the base class level, and should no longer be called or overridden by user code. It will eventually be removed altogether. 
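The element-to-processor assignment a grid partitioner performs can be sketched in a few lines; a hypothetical Python illustration (MOOSE's GridPartitioner is implemented in C++ and may differ in detail):

```python
# Assign each element of an nx-by-ny Cartesian mesh to a processor in a
# px-by-py processor grid: the basic idea behind grid-based partitioning.
# Illustrative sketch only, not MOOSE's actual implementation.

def grid_partition(nx, ny, px, py):
    """Return {(i, j): proc_id} mapping element indices to processor ids."""
    assignment = {}
    for j in range(ny):
        for i in range(nx):
            pi = i * px // nx          # processor column for this element
            pj = j * py // ny          # processor row for this element
            assignment[(i, j)] = pj * px + pi
    return assignment

parts = grid_partition(4, 4, 2, 2)     # 4x4 elements on a 2x2 processor grid
```

Each processor receives a contiguous 2x2 block of elements, mirroring the 2x2 example pictured in the newsletter.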
## PorousFlowJoiner materials The Porous Flow module now automatically adds all of the PorousFlowJoiner materials that are required, so there is no need to include these pesky objects yourself! While your current input files will still continue to run unchanged, you may want to delete these objects, and forget about them forevermore. ## Strain Periodicity A new global strain calculation approach has been implemented in the TensorMechanics module that relaxes the stresses along the periodic directions and allows the corresponding deformation. It generates an auxiliary displacement field which, combined with the periodic displacements, enforces the strain periodicity. This approach enables capturing volume change, shear deformation, etc., while still maintaining periodic BCs on the displacements. ## TensorMechanics Documentation The Tensor Mechanics module has a more complete set of documentation for those classes used in engineering-scale thermo-mechanical simulations. We've also improved the documentation for the actions by linking among the actions and the created classes, including the recommended TensorMechanics MasterAction that ensures consistency among the strain and stress divergence formulations. ## Mesh Exploder The visualization tool (Chigger) that is the basis of the Peacock GUI now has the ability to visualize a partitioned mesh by "exploding" the elements on each processor. The test script below creates the image shown in Figure 1.

```python
import vtk
import chigger

# Create a common camera
camera = vtk.vtkCamera()
camera.SetViewUp(0.1889, 0.9412, -0.2800)
camera.SetPosition(3.4055, -0.8236, -1.9897)
camera.SetFocalPoint(0.5000, 0.5000, 0.5000)

result = chigger.exodus.ExodusResult(reader, variable='u', explode=0.3, camera=camera)
window = chigger.RenderWindow(result, size=[300, 300], test=True)
window.write('explode.png')
window.start()
```

(python/chigger/tests/nemesis/explode.py) Figure 1: Visualization of parallel partition using the Chigger script.
2019-02-19T21:15:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35919189453125, "perplexity": 1805.4456389174927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247492825.22/warc/CC-MAIN-20190219203410-20190219225410-00360.warc.gz"}
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DM/DMGetAdjacency.html
petsc-master 2019-07-21 DMGetAdjacency Returns the flags for determining variable influence Synopsis #include "petscdm.h" #include "petscdmlabel.h" #include "petscds.h" PetscErrorCode DMGetAdjacency(DM dm, PetscInt f, PetscBool *useCone, PetscBool *useClosure) Not collective Input Parameters dm - The DM object f - The field number, or PETSC_DEFAULT for the default adjacency Output Parameters useCone - Flag for variable influence starting with the cone operation useClosure - Flag for variable influence using transitive closure Notes FEM: Two points p and q are adjacent if q \in closure(star(p)), useCone = PETSC_FALSE, useClosure = PETSC_TRUE FVM: Two points p and q are adjacent if q \in support(p+cone(p)), useCone = PETSC_TRUE, useClosure = PETSC_FALSE FVM++: Two points p and q are adjacent if q \in star(closure(p)), useCone = PETSC_TRUE, useClosure = PETSC_TRUE Further explanation can be found in the User's Manual Section on the Influence of Variables on One Another.
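The three adjacency definitions in the Notes can be made concrete on a toy mesh; a small Python sketch (hypothetical point names, not the DMPlex API) for a 1D mesh of two segments c0 = (v0, v1) and c1 = (v1, v2):

```python
# Toy DMPlex-style point graph: cone(p) lists the points on p's boundary,
# support(q) the points having q in their cone. closure = transitive cone,
# star = transitive support. Illustrative only, not PETSc code.
cone = {"c0": ["v0", "v1"], "c1": ["v1", "v2"], "v0": [], "v1": [], "v2": []}
support = {p: [] for p in cone}
for p, boundary in cone.items():
    for q in boundary:
        support[q].append(p)

def transitive(rel, p):
    """Point p plus everything reachable by repeatedly following rel."""
    out, stack = set(), [p]
    while stack:
        q = stack.pop()
        if q not in out:
            out.add(q)
            stack.extend(rel[q])
    return out

def closure(p):
    return transitive(cone, p)

def star(p):
    return transitive(support, p)

def adjacency(p, use_cone, use_closure):
    """Adjacent points of p under the three flag combinations above."""
    if not use_cone and use_closure:                     # FEM: closure(star(p))
        return set().union(*(closure(q) for q in star(p)))
    if use_cone and not use_closure:                     # FVM: support(p + cone(p))
        pts = {p} | set(cone[p])
        return set().union(*({q} | set(support[q]) for q in pts))
    return set().union(*(star(q) for q in closure(p)))   # FVM++: star(closure(p))
```

For the shared vertex v1, the FEM rule reaches every point of both segments, while the FVM rule for c0 reaches only its boundary vertices and the neighboring cell c1.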
2019-07-23T03:58:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9794058799743652, "perplexity": 12857.821371530803}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528687.63/warc/CC-MAIN-20190723022935-20190723044935-00406.warc.gz"}
https://zbmath.org/authors/?q=ai%3Aford.kevin-b
## Ford, Kevin B. Author ID: ford.kevin-b Published as: Ford, Kevin; Ford, Kevin B.; Ford, K. External Links: MGP · Wikidata Documents Indexed: 92 Publications since 1993 Co-Authors: 54 Co-Authors with 60 Joint Publications 2,135 Co-Co-Authors ### Co-Authors 32 single-authored 20 Konyagin, Sergeĭ Vladimirovich 10 Luca, Florian 7 Zaharescu, Alexandru 6 Pomerance, Carl Bernard 6 Shparlinski, Igor E. 4 Green, Ben Joseph 4 Tao, Terence 3 Bourgain, Jean 3 Eberhard, Sean 3 Maynard, James 2 Alkan, Emre 2 Dilworth, Stephen J. 2 Filaseta, Michael A. 2 Hudson, Richard H. 2 Khan, Mizan R. 2 Kutzarova, Denka N. 2 Lamzouri, Youness 2 Pollack, Paul 2 Tenenbaum, Gérald 1 Addario-Berry, Louigi 1 Banks, William D. 1 Bays, Carter 1 Broughan, Kevin A. 1 Buttkewitz, Yvonne 1 Cobeli, Cristian 1 de la Bretèche, Régis 1 Dewitt, Jonathan 1 Diamond, Harold George 1 Elsholtz, Christian 1 Gabdullin, Mikhail R. 1 Garaev, Moubariz Z. 1 Goldstein, Eli 1 Halberstam, Heini 1 Harper, Adam J. 1 Heath-Brown, Roger 1 Hu, Yong 1 Koukoulopoulos, Dimitris 1 Meng, Xianchang 1 Miller, Steven J. 1 Moree, Pieter 1 Moreland, Gwyneth 1 Palsson, Eyvindur Ari 1 Pappalardi, Francesco 1 Qian, Guoyou 1 Rubinstein, Michael O. 1 Schlage-Puchta, Jan-Christoph 1 Senger, Steven 1 Shallit, Jeffrey O. 1 Sneed, Jason 1 Soundararajan, Kannan 1 Vandehey, Joseph 1 Wooley, Trevor D. 1 Yankov, Christian L. 1 Yu, Gang ### Serials 6 Acta Arithmetica 5 IMRN. International Mathematics Research Notices 4 Proceedings of the American Mathematical Society 3 Bulletin of the London Mathematical Society 3 Duke Mathematical Journal 3 Transactions of the American Mathematical Society 3 Journal of the American Mathematical Society 3 Annals of Mathematics. Second Series 2 Lithuanian Mathematical Journal 2 Mathematical Proceedings of the Cambridge Philosophical Society 2 Journal of the London Mathematical Society. 
Second Series 2 Journal of Number Theory 2 Mathematische Annalen 2 Monatshefte für Mathematik 2 The Ramanujan Journal 2 The Quarterly Journal of Mathematics 2 Algebra & Number Theory 1 Bulletin of the Australian Mathematical Society 1 Journal d’Analyse Mathématique 1 Periodica Mathematica Hungarica 1 Mathematics of Computation 1 Acta Mathematica 1 Canadian Mathematical Bulletin 1 Colloquium Mathematicum 1 Compositio Mathematica 1 Functiones et Approximatio. Commentarii Mathematici 1 Illinois Journal of Mathematics 1 Journal für die Reine und Angewandte Mathematik 1 Mathematika 1 Michigan Mathematical Journal 1 Proceedings of the London Mathematical Society. Third Series 1 Statistics & Probability Letters 1 Combinatorica 1 Acta Mathematica Hungarica 1 Probability Theory and Related Fields 1 Forum Mathematicum 1 The Annals of Applied Probability 1 Geometric and Functional Analysis. GAFA 1 Experimental Mathematics 1 Electronic Research Announcements of the American Mathematical Society 1 Smarandache Notions Journal 1 Journal of the European Mathematical Society (JEMS) 1 Integers 1 Journal of the Australian Mathematical Society 1 International Journal of Number Theory 1 Proceedings of the Steklov Institute of Mathematics 1 Rivista di Matematica della Università di Parma. New Series 1 Moscow Journal of Combinatorics and Number Theory 1 Discrete Analysis ### Fields 83 Number theory (11-XX) 8 Combinatorics (05-XX) 5 Group theory and generalizations (20-XX) 5 Probability theory and stochastic processes (60-XX) 4 Statistics (62-XX) 1 Algebraic geometry (14-XX) 1 Measure and integration (28-XX) 1 Special functions (33-XX) 1 Difference and functional equations (39-XX) 1 Approximations and expansions (41-XX) 1 Computer science (68-XX) 1 Information and communication theory, circuits (94-XX) ### Citations contained in zbMATH Open 71 Publications have been cited 537 times in 432 Documents The distribution of integers with a divisor in a given interval. 
Zbl 1181.11058 Ford, Kevin 2008 Explicit constructions of RIP matrices and related problems. Zbl 1236.94027 Bourgain, Jean; Dilworth, Stephen; Ford, Kevin; Konyagin, Sergei; Kutzarova, Denka 2011 Vinogradov’s integral and bounds for the Riemann zeta function. Zbl 1034.11044 Ford, Kevin 2002 On the divisibility of Fermat quotients. Zbl 1223.11116 Bourgain, Jean; Ford, Kevin; Konyagin, Sergei V.; Shparlinski, Igor E. 2010 The distribution of totients. Zbl 0914.11053 Ford, Kevin 1998 Long gaps between primes. Zbl 1392.11071 Ford, Kevin; Green, Ben; Konyagin, Sergei; Maynard, James; Tao, Terence 2018 Sums and products from a finite set of real numbers. Zbl 0908.11008 Ford, Kevin 1998 Large gaps between consecutive prime numbers. Zbl 1338.11083 Ford, Kevin; Green, Ben; Konyagin, Sergei; Tao, Terence 2016 The number of solutions of $$\varphi (x)=m$$. Zbl 0978.11053 Ford, Kevin 1999 Sieving by large integers and covering systems of congruences. Zbl 1210.11020 Filaseta, Michael; Ford, Kevin; Konyagin, Sergei; Pomerance, Carl; Yu, Gang 2007 New estimates for mean values of Weyl sums. Zbl 0821.11050 Ford, Kevin B. 1995 On an irreducibility theorem of A. Schinzel associated with coverings of the integers. Zbl 0966.11046 Filaseta, M.; Ford, K.; Konyagin, S. 2000 Values of the Euler $$\varphi$$-function not divisible by a given odd prime, and the distribution of Euler-Kronecker constants for cyclotomic fields. Zbl 1294.11164 Ford, Kevin; Luca, Florian; Moree, Pieter 2014 On the maximal difference between an element and its inverse in residue rings. Zbl 1131.11005 Ford, Kevin; Khan, Mizan R.; Shparlinski, Igor E.; Yankov, Christian L. 2005 On the distribution of imaginary parts of zeros of the Riemann zeta function. II. Zbl 1160.11042 Ford, Kevin; Soundararajan, Kannan; Zaharescu, Alexandru 2009 Zero-free regions for the Riemann zeta function. Zbl 1034.11045 Ford, Kevin 2002 Generalized Euler constants. 
Zbl 1143.11036 Diamond, Harold G.; Ford, Kevin 2008 On the distribution of imaginary parts of zeros of the Riemann zeta function. Zbl 1139.11036 Ford, Kevin; Zaharescu, Alexandru 2005 Common values of the arithmetic functions $$\varphi$$ and $$\sigma$$. Zbl 1205.11010 Ford, Kevin; Luca, Florian; Pomerance, Carl 2010 Invariable generation of the symmetric group. Zbl 1475.20098 Eberhard, Sean; Ford, Kevin; Green, Ben 2017 On Vinogradov’s mean value theorem: strongly diagonal behaviour via efficient congruencing. Zbl 1307.11102 Ford, Kevin; Wooley, Trevor D. 2014 Chebyshev’s bias for products of two primes. Zbl 1280.11056 Ford, Kevin; Sneed, Jason 2010 Chebyshev’s conjecture and the prime number race. Zbl 1214.11105 Ford, Kevin; Konyagin, Sergei 2002 Integers with a divisor in $$(y,2y]$$. Zbl 1175.11053 Ford, Kevin 2008 Values of the Euler function in various sequences. Zbl 1197.11125 Banks, William D.; Ford, Kevin; Luca, Florian; Pappalardi, Francesco; Shparlinski, Igor E. 2005 On the largest prime factor of the Mersenne numbers. Zbl 1178.11061 Ford, Kevin; Luca, Florian; Shparlinski, Igor E. 2009 The representation of numbers as sums of unlike powers. Zbl 0816.11049 Ford, Kevin B. 1995 The representation of numbers as sums of unlike powers. II. Zbl 0866.11054 Ford, Kevin B. 1996 Prime chains and Pratt trees. Zbl 1218.11085 Ford, Kevin; Konyagin, Sergei V.; Luca, Florian 2010 Poisson-Dirichlet branching random walks. Zbl 1278.60129 2013 Diophantine approximation with arithmetic functions. I. Zbl 1225.11098 Alkan, Emre; Ford, Kevin; Zaharescu, Alexandru 2009 The prime number race and zeros of Dirichlet $$L$$-functions off the critical line. III. Zbl 1284.11123 Ford, Kevin; Lamzouri, Youness; Konyagin, Sergei 2013 The prime number race and zeros of $$L$$-functions off the critical line. Zbl 1010.11051 Ford, Kevin; Konyagin, Sergei 2002 Geometric properties of points on modular hyperbolas. Zbl 1204.11004 Ford, Kevin; Khan, Mizan R.; Shparlinski, Igor E. 
2010 On common values of $$\varphi(n)$$ and $$\sigma(m)$$. II. Zbl 1279.11093 Ford, Kevin; Pollack, Paul 2012 Diophantine approximation with arithmetic functions. II. Zbl 1225.11099 Alkan, Emre; Ford, Kevin; Zaharescu, Alexandru 2009 Residue classes free of values of Euler’s function. Zbl 0931.11037 Ford, Kevin; Konyagin, Sergei; Pomerance, Carl 1999 Permutations fixing a $$k$$-set. Zbl 1404.05004 Eberhard, Sean; Ford, Kevin; Green, Ben 2016 Large gaps between consecutive prime numbers containing perfect powers. Zbl 1391.11125 Ford, Kevin; Heath-Brown, D. R.; Konyagin, Sergei 2015 Zeros of Dirichlet $$L$$-functions near the real axis and Chebyshev’s bias. Zbl 1009.11057 Bays, Carter; Ford, Kevin; Hudson, Richard H.; Rubinstein, Michael 2001 The distribution of totients. Zbl 0888.11003 Ford, Kevin 1998 The prime number race and zeros of $$L$$-functions off the critical line. II. Zbl 1056.11051 Ford, Kevin; Konyagin, Sergei 2003 The image of Carmichael’s $$\lambda$$-function. Zbl 1322.11104 Ford, Kevin; Luca, Florian; Pomerance, Carl 2014 Permutations contained in transitive subgroups. Zbl 1346.05005 Eberhard, Sean; Ford, Kevin; Koukoulopoulos, Dimitris 2016 Breaking the $$k^2$$ barrier for explicit RIP matrices. Zbl 1288.68062 Bourgain, Jean; Dilworth, Stephen J.; Ford, Kevin; Konyagin, Sergei V.; Kutzarova, Denka 2011 The Brun-Hooley sieve. Zbl 0978.11049 Ford, Kevin; Halberstam, H. 2000 Sign changes in $$\pi_{q,a}(x) - \pi_{q,b}(x)$$. Zbl 0986.11063 Ford, Kevin; Hudson, Richard H. 2001 Maximal collections of intersecting arithmetic progressions. Zbl 1046.05077 Ford, K. 2003 Waring’s problem with polynomial summands. Zbl 0964.11044 Ford, Kevin 2000 Sharp probability estimates for random walks with barriers. Zbl 1171.60007 Ford, Kevin 2009 The normal behavior of the Smarandache function. Zbl 1077.11505 Ford, Kevin 1999 From Kolmogorov’s theorem on empirical distribution to number theory. Zbl 1369.62103 Ford, Kevin 2007 Divisors of the Euler and Carmichael functions. 
Zbl 1237.11038 Ford, Kevin; Hu, Yong 2008 On groups with perfect order subsets. Zbl 1295.11100 Ford, Kevin; Konyagin, Sergei; Luca, Florian 2012 On common values of $$\varphi(n)$$ and $$\sigma(m)$$. I. Zbl 1265.11092 Ford, Kevin; Pollack, Paul 2011 Extremal properties of product sets. Zbl 1414.05288 Ford, Kevin 2018 Extreme biases in prime number races with many contestants. Zbl 1451.11102 Ford, Kevin; Harper, Adam J.; Lamzouri, Youness 2019 An explicit sieve bound and small values of $$\sigma(\varphi(m))$$. Zbl 0980.11004 Ford, Kevin 2001 A strong form of a problem of R. L. Graham. Zbl 1148.11052 Ford, Kevin 2004 On Bombieri’s asymptotic sieve. Zbl 1125.11053 Ford, Kevin 2005 The jumping champions of the Farey series. Zbl 1034.11013 Cobeli, Cristian; Ford, Kevin; Zaharescu, Alexandru 2003 A problem of Ramanujan, Erdős, and Kátai on the iterated divisor function. Zbl 1266.11100 Buttkewitz, Yvonne; Elsholtz, Christian; Ford, Kevin; Schlage-Puchta, Jan-Christoph 2012 On two conjectures of Sierpiński concerning the arithmetic functions $$\sigma$$ and $$\varphi$$. Zbl 0931.11032 Ford, Kevin; Konyagin, Sergei 1999 Unnormalized differences between zeros of $$L$$-functions. Zbl 1319.11057 Ford, Kevin; Zaharescu, Alexandru 2015 Localized large sums of random variables. Zbl 1133.60322 Ford, Kevin; Tenenbaum, Gérald 2008 Generalized Smirnov statistics and the distribution of prime factors. Zbl 1229.11122 Ford, Kevin 2007 Sharp probability estimates for generalized Smirnov statistics. Zbl 1229.62059 Ford, Kevin 2008 Simultaneous distribution of the fractional parts of Riemann zeta zeros. Zbl 1433.11104 Ford, Kevin; Meng, Xianchang; Zaharescu, Alexandru 2017 Integers divisible by a large shifted prime. Zbl 1428.11164 Ford, Kevin 2017 On the smallest simultaneous power nonresidue modulo a prime. Zbl 1422.11003 Ford, Kevin; Garaev, Moubariz Z.; Konyagin, Sergei V. 2017 Chains of large gaps between primes. 
Zbl 1456.11173 Ford, Kevin; Maynard, James; Tao, Terence 2018 Extreme biases in prime number races with many contestants. Zbl 1451.11102 Ford, Kevin; Harper, Adam J.; Lamzouri, Youness 2019 Long gaps between primes. Zbl 1392.11071 Ford, Kevin; Green, Ben; Konyagin, Sergei; Maynard, James; Tao, Terence 2018 Extremal properties of product sets. Zbl 1414.05288 Ford, Kevin 2018 Chains of large gaps between primes. Zbl 1456.11173 Ford, Kevin; Maynard, James; Tao, Terence 2018 Invariable generation of the symmetric group. Zbl 1475.20098 Eberhard, Sean; Ford, Kevin; Green, Ben 2017 Simultaneous distribution of the fractional parts of Riemann zeta zeros. Zbl 1433.11104 Ford, Kevin; Meng, Xianchang; Zaharescu, Alexandru 2017 Integers divisible by a large shifted prime. Zbl 1428.11164 Ford, Kevin 2017 On the smallest simultaneous power nonresidue modulo a prime. Zbl 1422.11003 Ford, Kevin; Garaev, Moubariz Z.; Konyagin, Sergei V. 2017 Large gaps between consecutive prime numbers. Zbl 1338.11083 Ford, Kevin; Green, Ben; Konyagin, Sergei; Tao, Terence 2016 Permutations fixing a $$k$$-set. Zbl 1404.05004 Eberhard, Sean; Ford, Kevin; Green, Ben 2016 Permutations contained in transitive subgroups. Zbl 1346.05005 Eberhard, Sean; Ford, Kevin; Koukoulopoulos, Dimitris 2016 Large gaps between consecutive prime numbers containing perfect powers. Zbl 1391.11125 Ford, Kevin; Heath-Brown, D. R.; Konyagin, Sergei 2015 Unnormalized differences between zeros of $$L$$-functions. Zbl 1319.11057 Ford, Kevin; Zaharescu, Alexandru 2015 Values of the Euler $$\varphi$$-function not divisible by a given odd prime, and the distribution of Euler-Kronecker constants for cyclotomic fields. Zbl 1294.11164 Ford, Kevin; Luca, Florian; Moree, Pieter 2014 On Vinogradov’s mean value theorem: strongly diagonal behaviour via efficient congruencing. Zbl 1307.11102 Ford, Kevin; Wooley, Trevor D. 2014 The image of Carmichael’s $$\lambda$$-function. 
Zbl 1322.11104 Ford, Kevin; Luca, Florian; Pomerance, Carl 2014 Poisson-Dirichlet branching random walks. Zbl 1278.60129 2013 The prime number race and zeros of Dirichlet $$L$$-functions off the critical line. III. Zbl 1284.11123 Ford, Kevin; Lamzouri, Youness; Konyagin, Sergei 2013 On common values of $$\varphi(n)$$ and $$\sigma(m)$$. II. Zbl 1279.11093 Ford, Kevin; Pollack, Paul 2012 On groups with perfect order subsets. Zbl 1295.11100 Ford, Kevin; Konyagin, Sergei; Luca, Florian 2012 A problem of Ramanujan, Erdős, and Kátai on the iterated divisor function. Zbl 1266.11100 Buttkewitz, Yvonne; Elsholtz, Christian; Ford, Kevin; Schlage-Puchta, Jan-Christoph 2012 Explicit constructions of RIP matrices and related problems. Zbl 1236.94027 Bourgain, Jean; Dilworth, Stephen; Ford, Kevin; Konyagin, Sergei; Kutzarova, Denka 2011 Breaking the $$k^2$$ barrier for explicit RIP matrices. Zbl 1288.68062 Bourgain, Jean; Dilworth, Stephen J.; Ford, Kevin; Konyagin, Sergei V.; Kutzarova, Denka 2011 On common values of $$\varphi(n)$$ and $$\sigma(m)$$. I. Zbl 1265.11092 Ford, Kevin; Pollack, Paul 2011 On the divisibility of Fermat quotients. Zbl 1223.11116 Bourgain, Jean; Ford, Kevin; Konyagin, Sergei V.; Shparlinski, Igor E. 2010 Common values of the arithmetic functions $$\varphi$$ and $$\sigma$$. Zbl 1205.11010 Ford, Kevin; Luca, Florian; Pomerance, Carl 2010 Chebyshev’s bias for products of two primes. Zbl 1280.11056 Ford, Kevin; Sneed, Jason 2010 Prime chains and Pratt trees. Zbl 1218.11085 Ford, Kevin; Konyagin, Sergei V.; Luca, Florian 2010 Geometric properties of points on modular hyperbolas. Zbl 1204.11004 Ford, Kevin; Khan, Mizan R.; Shparlinski, Igor E. 2010 On the distribution of imaginary parts of zeros of the Riemann zeta function. II. Zbl 1160.11042 Ford, Kevin; Soundararajan, Kannan; Zaharescu, Alexandru 2009 On the largest prime factor of the Mersenne numbers. Zbl 1178.11061 Ford, Kevin; Luca, Florian; Shparlinski, Igor E. 
2009 Diophantine approximation with arithmetic functions. I. Zbl 1225.11098 Alkan, Emre; Ford, Kevin; Zaharescu, Alexandru 2009 Diophantine approximation with arithmetic functions. II. Zbl 1225.11099 Alkan, Emre; Ford, Kevin; Zaharescu, Alexandru 2009 Sharp probability estimates for random walks with barriers. Zbl 1171.60007 Ford, Kevin 2009 The distribution of integers with a divisor in a given interval. Zbl 1181.11058 Ford, Kevin 2008 Generalized Euler constants. Zbl 1143.11036 Diamond, Harold G.; Ford, Kevin 2008 Integers with a divisor in $$(y,2y]$$. Zbl 1175.11053 Ford, Kevin 2008 Divisors of the Euler and Carmichael functions. Zbl 1237.11038 Ford, Kevin; Hu, Yong 2008 Localized large sums of random variables. Zbl 1133.60322 Ford, Kevin; Tenenbaum, Gérald 2008 Sharp probability estimates for generalized Smirnov statistics. Zbl 1229.62059 Ford, Kevin 2008 Sieving by large integers and covering systems of congruences. Zbl 1210.11020 Filaseta, Michael; Ford, Kevin; Konyagin, Sergei; Pomerance, Carl; Yu, Gang 2007 From Kolmogorov’s theorem on empirical distribution to number theory. Zbl 1369.62103 Ford, Kevin 2007 Generalized Smirnov statistics and the distribution of prime factors. Zbl 1229.11122 Ford, Kevin 2007 On the maximal difference between an element and its inverse in residue rings. Zbl 1131.11005 Ford, Kevin; Khan, Mizan R.; Shparlinski, Igor E.; Yankov, Christian L. 2005 On the distribution of imaginary parts of zeros of the Riemann zeta function. Zbl 1139.11036 Ford, Kevin; Zaharescu, Alexandru 2005 Values of the Euler function in various sequences. Zbl 1197.11125 Banks, William D.; Ford, Kevin; Luca, Florian; Pappalardi, Francesco; Shparlinski, Igor E. 2005 On Bombieri’s asymptotic sieve. Zbl 1125.11053 Ford, Kevin 2005 A strong form of a problem of R. L. Graham. Zbl 1148.11052 Ford, Kevin 2004 The prime number race and zeros of $$L$$-functions off the critical line. II. 
Zbl 1056.11051 Ford, Kevin; Konyagin, Sergei 2003 Maximal collections of intersecting arithmetic progressions. Zbl 1046.05077 Ford, K. 2003 The jumping champions of the Farey series. Zbl 1034.11013 Cobeli, Cristian; Ford, Kevin; Zaharescu, Alexandru 2003 Vinogradov’s integral and bounds for the Riemann zeta function. Zbl 1034.11044 Ford, Kevin 2002 Zero-free regions for the Riemann zeta function. Zbl 1034.11045 Ford, Kevin 2002 Chebyshev’s conjecture and the prime number race. Zbl 1214.11105 Ford, Kevin; Konyagin, Sergei 2002 The prime number race and zeros of $$L$$-functions off the critical line. Zbl 1010.11051 Ford, Kevin; Konyagin, Sergei 2002 Zeros of Dirichlet $$L$$-functions near the real axis and Chebyshev’s bias. Zbl 1009.11057 Bays, Carter; Ford, Kevin; Hudson, Richard H.; Rubinstein, Michael 2001 Sign changes in $$\pi_{q,a}(x) - \pi_{q,b}(x)$$. Zbl 0986.11063 Ford, Kevin; Hudson, Richard H. 2001 An explicit sieve bound and small values of $$\sigma(\varphi(m))$$. Zbl 0980.11004 Ford, Kevin 2001 On an irreducibility theorem of A. Schinzel associated with coverings of the integers. Zbl 0966.11046 Filaseta, M.; Ford, K.; Konyagin, S. 2000 The Brun-Hooley sieve. Zbl 0978.11049 Ford, Kevin; Halberstam, H. 2000 Waring’s problem with polynomial summands. Zbl 0964.11044 Ford, Kevin 2000 The number of solutions of $$\varphi (x)=m$$. Zbl 0978.11053 Ford, Kevin 1999 Residue classes free of values of Euler’s function. Zbl 0931.11037 Ford, Kevin; Konyagin, Sergei; Pomerance, Carl 1999 The normal behavior of the Smarandache function. Zbl 1077.11505 Ford, Kevin 1999 On two conjectures of Sierpiński concerning the arithmetic functions $$\sigma$$ and $$\varphi$$. Zbl 0931.11032 Ford, Kevin; Konyagin, Sergei 1999 The distribution of totients. Zbl 0914.11053 Ford, Kevin 1998 Sums and products from a finite set of real numbers. Zbl 0908.11008 Ford, Kevin 1998 The distribution of totients. Zbl 0888.11003 Ford, Kevin 1998 The representation of numbers as sums of unlike powers. 
II. Zbl 0866.11054 Ford, Kevin B. 1996 New estimates for mean values of Weyl sums. Zbl 0821.11050 Ford, Kevin B. 1995 The representation of numbers as sums of unlike powers. Zbl 0816.11049 Ford, Kevin B. 1995 all top 5 ### Cited by 522 Authors 26 Ford, Kevin B. 26 Shparlinski, Igor E. 17 Luca, Florian 15 Pomerance, Carl Bernard 13 Pollack, Paul 12 Wooley, Trevor D. 9 Mixon, Dustin G. 8 Alkan, Emre 8 Konyagin, Sergeĭ Vladimirovich 7 Banks, William D. 7 Maynard, James 7 Moree, Pieter 7 Tao, Terence 6 Fickus, Matthew C. 6 Roche-Newton, Oliver 6 Rudnev, Misha 6 Shkredov, Il’ya Dmitrievich 5 Bourgain, Jean 5 Chen, Zhixiong 5 Filaseta, Michael A. 5 Fu, Fangwei 5 Lamzouri, Youness 5 Niu, Minyao 5 Zaharescu, Alexandru 4 Baier, Stephan 4 Brüdern, Jörg 4 Garunkštis, Ramūnas 4 Jørgensen, Palle E. T. 4 Martin, Greg 4 Meng, Xianchang 4 Pintz, Janos 4 Sahasrabudhe, Julian 3 Balister, Paul N. 3 Bollobás, Béla 3 Chang, Mei-Chu 3 Cilleruelo, Javier 3 Croot, Ernie 3 Devin, Lucile 3 Elsholtz, Christian 3 Finch, Carrie E. 3 Garzoni, Daniele 3 Granville, Andrew James 3 Gun, Sanoli 3 Halupczok, Karin 3 Harrington, Joshua 3 Jasper, John 3 Kawada, Koichi 3 Languasco, Alessandro 3 Maier, Helmut 3 Morris, Robert D. 3 Murty, Maruti Ram 3 Radziwiłł, Maksym 3 Rassias, Michael Th. 3 Shteĭnikov, Yuriĭ Nikolaevich 3 Tiba, Marius 3 Trudgian, Tim 3 Trudgian, Timothy S. 3 Winterhof, Arne 2 Akbary, Amir 2 Albeverio, Sergio A. 2 Alterman, Sebastian Zuniga 2 Balog, Antal 2 Bandeira, Afonso S. 2 Bansal, Arpit 2 Bays, Carter 2 Bordignon, Matteo 2 Chatterjee, Tapas 2 Chen, Yonggao 2 Cho, Ilwoo 2 de la Bretèche, Régis 2 Dirksen, Sjoerd 2 Dona, Daniele 2 Dose, Titus 2 Dubickas, Artūras 2 Dusart, Pierre 2 Dutkay, Dorin Ervin 2 Eberhard, Sean 2 Gao, You 2 Goldston, Daniel Alan 2 Green, Ben Joseph 2 Harper, Adam J. 2 Hart, Derrick N. 2 Helfgott, Harald Andrés 2 Hudson, Richard H. 
2 Ivić, Aleksandar 2 Jankauskas, Jonas 2 Kerr, Bryce 2 Khurana, Suraj Singh 2 Kolpakova, Ol’ga Viktorovna 2 Kotnik, Tadej 2 Koukoulopoulos, Dimitris 2 Laporta, Maurizio B. S. 2 Lee, Ethan Simpson 2 Li, Junxian 2 Li, Liangpan 2 Michelen, Marcus 2 Miller, Steven J. 2 Munsch, Marc 2 Murphy, Brendan 2 Naidu, R. Ramu ...and 422 more Authors all top 5 ### Cited in 147 Serials 34 Journal of Number Theory 21 Mathematics of Computation 19 International Journal of Number Theory 15 Mathematika 12 Journal de Théorie des Nombres de Bordeaux 11 Proceedings of the American Mathematical Society 10 Acta Arithmetica 9 Proceedings of the Steklov Institute of Mathematics 8 Integers 7 Functiones et Approximatio. Commentarii Mathematici 7 Acta Mathematica Hungarica 7 The Ramanujan Journal 6 Bulletin of the Australian Mathematical Society 6 Israel Journal of Mathematics 6 Mathematical Notes 6 Advances in Mathematics 6 Duke Mathematical Journal 6 Monatshefte für Mathematik 6 Annals of Mathematics. Second Series 5 Journal of Mathematical Analysis and Applications 5 Journal of Algebra 5 Mathematische Annalen 5 Linear Algebra and its Applications 5 Experimental Mathematics 5 Research in Number Theory 4 Mathematical Proceedings of the Cambridge Philosophical Society 4 Mathematische Zeitschrift 4 Transactions of the American Mathematical Society 3 Constructive Approximation 3 Journal of the American Mathematical Society 3 SIAM Journal on Discrete Mathematics 3 Bulletin of the American Mathematical Society. New Series 3 Indagationes Mathematicae. 
New Series 3 Applied and Computational Harmonic Analysis 3 Finite Fields and their Applications 3 Journal of Integer Sequences 2 Discrete Mathematics 2 Lithuanian Mathematical Journal 2 Periodica Mathematica Hungarica 2 Rocky Mountain Journal of Mathematics 2 Ukrainian Mathematical Journal 2 Applied Mathematics and Computation 2 Bulletin of the London Mathematical Society 2 Canadian Journal of Mathematics 2 Compositio Mathematica 2 Illinois Journal of Mathematics 2 Inventiones Mathematicae 2 Journal of Applied Probability 2 Michigan Mathematical Journal 2 Proceedings of the London Mathematical Society. Third Series 2 Combinatorica 2 Graphs and Combinatorics 2 Probability Theory and Related Fields 2 Journal of Complexity 2 Discrete & Computational Geometry 2 Forum Mathematicum 2 Journal of the Ramanujan Mathematical Society 2 Designs, Codes and Cryptography 2 Geometric and Functional Analysis. GAFA 2 Advances in Computational Mathematics 2 The Journal of Fourier Analysis and Applications 2 Journal of Combinatorial Optimization 2 Journal of Group Theory 2 Journal of the Australian Mathematical Society 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Advances in Mathematics of Communications 2 Discrete Mathematics, Algorithms and Applications 2 Japanese Journal of Mathematics. 3rd Series 2 Cryptography and Communications 2 Mathematics 1 Communications in Algebra 1 Communications on Pure and Applied Mathematics 1 Discrete Applied Mathematics 1 Indian Journal of Pure & Applied Mathematics 1 Journal d’Analyse Mathématique 1 Journal of Statistical Physics 1 The Mathematical Intelligencer 1 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 1 Acta Mathematica 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Archiv der Mathematik 1 Canadian Mathematical Bulletin 1 Colloquium Mathematicum 1 Journal of Approximation Theory 1 Journal of Combinatorial Theory. Series A 1 Journal of Combinatorial Theory. 
Series B 1 Journal of Functional Analysis 1 Journal of the London Mathematical Society. Second Series 1 Journal of Pure and Applied Algebra 1 Journal für die Reine und Angewandte Mathematik 1 Nagoya Mathematical Journal 1 Pacific Journal of Mathematics 1 Proceedings of the Japan Academy. Series A 1 Quaestiones Mathematicae 1 Real Analysis Exchange 1 Rendiconti del Seminario Matematico della Università di Padova 1 Theoretical Computer Science 1 Statistics & Probability Letters 1 Circuits, Systems, and Signal Processing 1 Chinese Annals of Mathematics. Series B ...and 47 more Serials

### Cited in 31 Fields

348 Number theory (11-XX) 46 Information and communication theory, circuits (94-XX) 34 Combinatorics (05-XX) 24 Probability theory and stochastic processes (60-XX) 21 Group theory and generalizations (20-XX) 17 Harmonic analysis on Euclidean spaces (42-XX) 13 Numerical analysis (65-XX) 13 Computer science (68-XX) 12 Field theory and polynomials (12-XX) 11 Linear and multilinear algebra; matrix theory (15-XX) 9 Algebraic geometry (14-XX) 7 Convex and discrete geometry (52-XX) 7 Statistics (62-XX) 6 Operations research, mathematical programming (90-XX) 5 Real functions (26-XX) 5 Special functions (33-XX) 5 Functional analysis (46-XX) 4 Commutative algebra (13-XX) 3 History and biography (01-XX) 3 Measure and integration (28-XX) 3 Functions of a complex variable (30-XX) 2 General and overarching topics; collections (00-XX) 2 Geometry (51-XX) 2 Manifolds and cell complexes (57-XX) 2 Statistical mechanics, structure of matter (82-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Associative rings and algebras (16-XX) 1 Topological groups, Lie groups (22-XX) 1 Partial differential equations (35-XX) 1 Approximations and expansions (41-XX) 1 Operator theory (47-XX)
http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Academic_grading_in_Germany
# Academic grading in Germany

Germany uses a 5- or 6-point grading scale to evaluate academic performance for the youngest to the oldest students. Grades range from 1 (sehr gut, excellent) to 5 or 6 (ungenügend, insufficient). In the final classes of German Gymnasium schools, which prepare for university studies, a point system is used instead, with 15 points being the best grade and 0 points the worst. The percentage thresholds required for each grade can vary from teacher to teacher.

## Grades by education

### Primary and lower secondary education

In primary and lower secondary education (1st to 10th grade), German school children receive grades based on a 6-point grading scale ranging from 1 (sehr gut, excellent) to 6 (ungenügend, insufficient). Variations on the traditional six-grade system allow for awarding grades suffixed with "+" and "-". To calculate averages of suffixed grades, they are assigned fractional values: 1 is 1.0, 1- is 1.3, 2+ is 1.7, 2 is 2.0, 2- is 2.3, and so on. As schools are governed by the states, not by the federal government, there are slight differences. Often a more granular scale of "1-" (equal to 1.25), "1-2" (= 1.5), "2+" (= 1.75), etc. is used; sometimes even decimal grading (1.0, 1.1, 1.2 and so on) is applied. In end-of-year report cards, only unmodified integer grades may be used; in some regions they are written in text form. Many states currently also prescribe the use of behaviour-based notes (Kopfnoten), which grade things such as orderliness or general behaviour.

#### Pedagogic grading

Teachers who teach at a Grundschule (primary school) or Sonderschule (special education school) are allowed to use "pädagogische Noten" ("pedagogic grades"): if a student tries very hard but still does poorly compared to the rest of the class, the teacher may award a better grade to recognize the effort.
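The fractional values described above make averaging suffixed grades mechanical. A minimal sketch in Python (function names are illustrative; the mapping is the one stated in the text — "+" subtracts 0.3 from the whole grade, "-" adds 0.3):

```python
def grade_value(grade: str) -> float:
    """Numeric value of a suffixed German school grade, e.g. '2+' -> 1.7, '1-' -> 1.3."""
    base = float(grade[0])
    if grade.endswith("+"):
        return round(base - 0.3, 1)   # '+' improves the grade by 0.3
    if grade.endswith("-"):
        return round(base + 0.3, 1)   # '-' worsens the grade by 0.3
    return base

def average(grades: list[str]) -> float:
    """Average of a list of suffixed grades, rounded to two decimals."""
    return round(sum(grade_value(g) for g in grades) / len(grades), 2)
```

For example, `average(["1", "2", "2-"])` evaluates (1.0 + 2.0 + 2.3) / 3 and yields 1.77.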
### Upper secondary education

In the final classes of Gymnasium schools (11th to 12th/13th grade), the grades are converted to numbers ("points"), where "1+" equals 15 points and "6" equals 0 points. Since 1+ exists in this system, theoretically a final Abitur grade of less than 0.6 is possible; such grades are used in an informal setting, although officially any student with less than 1.0 will be awarded a 1.0 Abitur.[1] When the point system is used, a grade of 4 (5 points) is the lowest passing grade, and 4- (4 points) the highest failing grade.

15-point grading system in upper secondary education:[2]

| Grade  | 1+ | 1  | 1- | 2+ | 2  | 2- | 3+ | 3 | 3- | 4+ | 4 | 4- | 5+ | 5 | 5- | 6 |
|--------|----|----|----|----|----|----|----|---|----|----|---|----|----|---|----|---|
| Points | 15 | 14 | 13 | 12 | 11 | 10 | 9  | 8 | 7  | 6  | 5 | 4  | 3  | 2 | 1  | 0 |

### Tertiary education

German universities (except for law schools) grade with a scale of 1 to 5:

- 1.0–1.5 sehr gut (very good: an outstanding achievement)
- 1.6–2.5 gut (good: an achievement which lies substantially above average requirements)
- 2.6–3.5 befriedigend (satisfactory: an achievement which corresponds to average requirements)
- 3.6–4.0 ausreichend (sufficient: an achievement which barely meets the requirements)
- 5.0 nicht ausreichend / nicht bestanden (not sufficient / failed: an achievement which does not meet the requirements)

Most universities use "mit Auszeichnung bestanden" (passed with distinction/excellent) if the grade is a perfect score of 1.0.

#### Law schools

For law students at German universities, a similar system to the 1 to 5 scale is used, with one more grade, named vollbefriedigend, inserted between 2 (gut) and 3 (befriedigend). This is because the grades 2 (gut) and 1 (sehr gut) are extremely rare, so an additional grade was created below gut to increase differentiation. Every grade is converted into points similarly to the Gymnasium system described above, starting at 18 points (excellent) down to 0 points (poor). 4 points is the lowest pass grade.
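The Gymnasium grade-to-points mapping above is regular enough to compute rather than tabulate. A small sketch in Python (illustrative only; it covers the 15-point upper-secondary scale, not the 18-point law-school scale, and grade 6, which has no modifiers, is special-cased to 0):

```python
def to_points(grade: str) -> int:
    """Convert an upper-secondary grade like '1+', '3', or '4-' to points (15..0)."""
    if grade == "6":
        return 0                              # grade 6 always maps to 0 points
    base = int(grade[0])                      # whole grade, 1..5
    modifier = {"+": 1, "": 0, "-": -1}[grade[1:]]
    # Unmodified grade g maps to 3*(5-g)+2; '+' adds one point, '-' removes one.
    return 3 * (5 - base) + 2 + modifier
```

For example, `to_points("1+")` gives 15 and `to_points("4")` gives 5, the lowest passing score.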
### Overview

German grade system across education levels, with an approximate conversion to the US system (varies with school/subject):

| Percentage | Primary & lower secondary (1st-10th grade) | Upper secondary (Gymnasium, 11th-12/13th grade) | Tertiary (Fachhochschule & Universität) | Descriptor | US equivalent* |
|---|---|---|---|---|---|
| 91-100% | 1+ | 15 points | 1.0 | "sehr gut" (very good/excellent: an outstanding achievement) | 4.0 |
| | 1 | 14 points | 1.0 | | 4.0 |
| | 1- | 13 points | 1.3 | | 3.7 |
| 81-90% | 2+ | 12 points | 1.7 | "gut" (good: an achievement that exceeds the average requirements considerably) | 3.3 |
| | 2 | 11 points | 2.0 | | 3.0 |
| | 2- | 10 points | 2.3 | | 2.7 |
| 66-80% | 3+ | 9 points | 2.7 | "befriedigend" (satisfactory: an achievement that fulfills average requirements) | 2.3 |
| | 3 | 8 points | 3.0 | | 2.0 |
| | 3- | 7 points | 3.3 | | 1.7 |
| 50-65% | 4+ | 6 points | 3.7 | "ausreichend" (sufficient: an achievement that fulfills the requirements despite flaws) | 1.3 |
| | 4 | 5 points | 4.0 | | 1.0 |
| 0-49% | 4- | 4 points | 5.0 | "mangelhaft" / "ungenügend" / "nicht bestanden" (insufficient / failed: an achievement that does not fulfil requirements due to major flaws) | 0.0 |
| | 5+ | 3 points | | | |
| | 5 | 2 points | | | |
| | 5- | 1 point | | | |
| | 6 | 0 points | | | |

\* This conversion scheme is intended as a guideline, as exact conversions may differ.

## Conversion of grades

A matter of particular interest for those considering studying abroad, or enrolling full-time in a German university, is the conversion of grades. While the information below may prove useful, it is recommended to contact the university in question directly to inquire which method it uses to convert grades.

### Modified Bavarian formula

A number of systems exist for the conversion of grades from other countries into German grades.
One such system, used by most universities in North Rhine-Westphalia and Bavaria, is called the "Modified Bavarian Formula":[3]

$$x = \frac{N_{\max} - N_{d}}{N_{\max} - N_{\min}} \cdot 3 + 1$$

where $x$ is the German grade, $N_{\max}$ the best possible score in the foreign country's grading system, $N_{\min}$ the lowest passing score in the foreign grading system, and $N_{d}$ the obtained foreign grade (to be converted into a German grade). The resulting value is rounded to the nearest German grade (e.g. 1.6 is rounded to the German grade 1.7 and 2.4 is rounded to 2.3). For resulting values exactly between two German grades, the score is rounded to the better grade (e.g. 2.5 is rounded to the German grade 2.3 and 1.15 is rounded to 1.0).

### Latin grades

Doctorates in particular, e.g. Dr. phil. or Dr. rer. nat., are graded using the Latin terms; in this case the grade (Note/Zensur) is called a Prädikat. The following rough guide may be used to convert into standard German grades:

- summa cum laude (<1.0 = mit Auszeichnung, "with distinction")
- magna cum laude (1.0 = sehr gut, "very good")
- cum laude (2.0 = gut, "good")
- rite (3.0 = bestanden, "passed")

There is no fail grade; in that case the dissertation is formally rejected without a grade.
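The Modified Bavarian Formula and its rounding rule can be sketched directly. A minimal Python version (illustrative; the list of passing German grade steps 1.0–4.0 is assumed from the tables above, and differences are rounded to nine decimals to avoid floating-point ties being broken incorrectly):

```python
# Passing German grade steps, best to worst.
GERMAN_GRADES = [1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0]

def bavarian(n_max: float, n_min: float, n_d: float) -> float:
    """Raw Modified Bavarian Formula value: x = (Nmax - Nd) / (Nmax - Nmin) * 3 + 1."""
    return (n_max - n_d) / (n_max - n_min) * 3 + 1

def to_german_grade(x: float) -> float:
    """Round to the nearest German grade step; exact ties go to the better (lower) grade."""
    return min(GERMAN_GRADES, key=lambda g: (round(abs(x - g), 9), g))
```

This reproduces the worked examples in the text: `to_german_grade(1.6)` gives 1.7, `to_german_grade(2.4)` gives 2.3, and the midpoints 2.5 and 1.15 round to the better grades 2.3 and 1.0.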
## East Germany (1950s-1980s)

In former East Germany, a 5-point grading scale was used until July 1991:

| Grade | Text | Explanation |
|---|---|---|
| 1 | sehr gut (very good) | best possible grade |
| 2 | gut (good) | next-highest grade |
| 3 | befriedigend (satisfactory) | average performance |
| 4 | genügend (sufficient) | lowest passing grade |
| 5 | ungenügend (insufficient) | lowest possible grade and the only failing grade |

With the polytechnic reform of the school system initiated by the Act on Socialistic Development of the School System in the German Democratic Republic, the Ministry of People's Education sought a uniform grading scale for all institutions in its jurisdiction: general educational schools, vocational schools, and professional schools for the qualification of lower-class teachers, educators, and kindergartners. A reorganized grading scale was therefore enacted in the Directive on the introduction of a unified grading scale for secondary schools, extended secondary schools, special schools, vocational schools, institutes of vocational masters' education, institutes of vocational school teachers' education, institutes of vocational teachers' further education, institutes of teachers' education and pedagogic institutes. This directive remained in effect, unchanged, from September 1, 1960, to August 25, 1993. For each subject there were further recommendations with more specific descriptions elaborating on the general grading scale; these particular comments were meant to help teachers grade student achievement as objectively as possible. This scale is identical to the current Austrian grading scale.
## Criticism of German grading policies

### The case of Sabine Czerny

At public schools in Germany, teachers are supposed to evaluate students against fixed course-specific criteria, but often feel implicit pressure to grade students on a curve, where grades are awarded based on performance relative to other students rather than relative to the difficulty of a specific course. In the 2008 case of Sabine Czerny, a Bavarian primary school teacher, Czerny judged that 91% of her class would be able to make a successful transition into a Realschule or a Gymnasium (secondary schools for which normally only about 50% of Bavarian children qualify based on their educational achievements). While the parents liked this result, the educational authorities questioned Czerny's grading standards. Czerny claims that her students' results stood up in cross-classroom tests; nonetheless she was transferred to another school.[4][5] Czerny received much public sympathy and later went on to write a book about her experiences.

### Comparisons between Gymnasium and Gesamtschule (comprehensive school)

German Gymnasiums are schools which aim to prepare students for college education. These schools are selective, and tough grading is a traditional element of their teaching. This culture works against students of average academic ability who barely qualify for a Gymnasium place and may then find themselves at the bottom of their class; these same students would have achieved better grades for the same effort had they attended a non-selective comprehensive school (Gesamtschule).[6] A study revealed that a sample of Gymnasium high school seniors of average mathematical ability[7] who chose to attend advanced college-preparatory math classes at their school ("Leistungskurs") found themselves at the very bottom of their class and had an average grade of 5 (i.e. failed the class).
Comprehensive school students of equal mathematical ability found themselves in the upper half of the equivalent course at their school and obtained an average grade of 3+.[8] It was found that students who graduated from a Gesamtschule tend to do worse in college than their grades in high school classes would predict, and vice versa for Gymnasium students.

## Predictive ability

German grades are often treated like an interval scale to calculate means and deviations for comparisons. Although the grading system lacks any psychometric standardization, it is often compared to normally distributed norm-referenced assessments. Using an expected value of 3 and a standard deviation of 1, transformations into other statistical measures such as percentiles, T-scores, stanines, or (as in the PISA studies) an IQ scale are then possible. This transformation is problematic both for high school grades and for university grades. At high school level, schooling in most of Germany is selective, so, for instance, a Gymnasium student who is underperforming compared to his classmates is likely still close to or above average when compared to his entire age group. At university level, the distribution is highly non-normal and idiosyncratic to the subject: substantially more German students fail exams at university than in other countries (usually about 20-40%, often even more), and the grades awarded vary widely between fields of study and between universities. (In law degrees, for instance, only 10-15% of candidates get a grade better than "befriedigend".) This might be one reason for Germany's low university graduation rates in international comparisons, as well as for the small number of people who obtain an Abitur in the first place. However, several empirical psychological studies show that the grades awarded in Germany at school and university are highly reliable predictors for entry into higher education and research jobs.
Universities usually demand high grades in the diploma or master's thesis; thesis grades are by far the most critical factor when applying for a job or for further study, e.g. a PhD.[9] One study from 1995 found that school GPAs are a weak predictor of success in university, a slightly better predictor of success in vocational training, and that GPAs from school or university have nearly no predictive value for job performance.[10] Nevertheless, because psychometric testing (such as the Scholastic Aptitude Test (SAT) or the Medical College Admission Test (MCAT) in the US) is rare in Germany, the GPA is usually used as the most predictive criterion available within an application process. For job recruiting, school and university grades have a high impact on career opportunities, as independent, scientifically based recruitment and assessment is used by fewer than 8% of German employers (50-70% in other European countries).[11]

## References

1. Christ, Sebastian (2007-08-21). "Super-Abiturient Felix Geisler: König der Überflieger". Spiegel Online. Retrieved 2018-01-16.
2. "Die Oberstufe des Gymnasiums in Bayern" (PDF). Retrieved December 20, 2016.
3. "University of Stuttgart: Bavarian Formula" (PDF). Retrieved December 19, 2016.
4. Christian Bleher: "Wenn die Kids zu gut sind: Bitte nicht für Schüler engagieren". TAZ. July 30th 2008.
5. Christian Bleher: "Kritische bayerische Lehrkraft versetzt: Störerin des Schulfriedens". TAZ. August 4th 2008.
6. Manfred Tücke: "Psychologie für die Schule, Psychologie für die Schule: Eine themenzentrierte Einführung in die Psychologie für (zukünftige) Lehrer". 4. Auflage 2005. Münster: LIT Verlag; p. 127.
7. who scored 100 on a math test, provided by the scientists
8. Manfred Tücke: "Psychologie in der Schule, Psychologie für die Schule: Eine themenzentrierte Einführung in die Psychologie für (zukünftige) Lehrer". 4. Auflage 2005. Münster: LIT Verlag; p. 127; the study was done in Nordrhein-Westfalen, students were attending a Leistungskurs.
9. Ingenkamp, K. (1997). Handbuch der Pädagogischen Diagnostik. Weinheim: Beltz (Psychologie Verlags Union).
10. Hollmann, H.; Reitzig, G. (1995). Referenzen und Dokumentenanalyse. In W. Sarges (Hrsg.), Management-Diagnostik (2. Aufl.). Göttingen: Hogrefe.
11. Schuler, H. (2000). Personalauswahl im europäischen Vergleich. In E. Regnet & L. M. Hoffmann (Hrsg.), Personalmanagement in Europa. Göttingen: Hogrefe.

"German Grade Calculator". Onlinemacha.com.

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.
# NASA

- Formed: July 29, 1958
- Preceding agency: National Advisory Committee for Aeronautics (1915–1958)
- Type: Space agency; aeronautics research agency
- Jurisdiction: United States federal government
- Headquarters: Washington, D.C. (38°52′59″N 77°0′59″W)
- Motto: "Exploring the secrets of the universe for the benefit of all"
- Administrator: Bill Nelson; Deputy Administrator: Pamela Melroy
- Employees: 17,960 (2022)
- Annual budget: US$24.041 billion (2022)
- Website: NASA.gov

The National Aeronautics and Space Administration (NASA /ˈnæsə/) is an independent agency of the US federal government responsible for the civil space program, aeronautics research, and space research. NASA was established in 1958, succeeding the National Advisory Committee for Aeronautics (NACA), to give the U.S. space development effort a distinctly civilian orientation, emphasizing peaceful applications in space science. NASA has since led most American space exploration, including Project Mercury, Project Gemini, the 1968–1972 Apollo Moon landing missions, the Skylab space station, and the Space Shuttle. NASA supports the International Space Station and oversees the development of the Orion spacecraft, the Space Launch System, Commercial Crew vehicles, and the planned Lunar Gateway space station. The agency is also responsible for the Launch Services Program, which provides oversight of launch operations and countdown management for uncrewed NASA launches.
NASA's science is focused on better understanding Earth through the Earth Observing System; advancing heliophysics through the efforts of the Science Mission Directorate's Heliophysics Research Program; exploring bodies throughout the Solar System with advanced robotic spacecraft such as New Horizons and planetary rovers such as Perseverance; and researching astrophysics topics, such as the Big Bang, through the James Webb Space Telescope, the Great Observatories, and associated programs.

## Management

### Leadership

The agency's administration is located at NASA Headquarters in Washington, DC, and provides overall guidance and direction. Except under exceptional circumstances, NASA civil service employees are required to be US citizens. NASA's administrator is nominated by the President of the United States subject to the approval of the US Senate, and serves at the President's pleasure as a senior space science advisor. The current administrator is Bill Nelson, who was appointed by President Joe Biden and has served since May 3, 2021.

### Strategic plan

NASA operates with four FY2022 strategic goals:

- Expand human knowledge through new scientific discoveries
- Extend human presence to the Moon and on towards Mars for sustainable long-term exploration, development, and utilization
- Catalyze economic growth and drive innovation to address national challenges
- Enhance capabilities and operations to catalyze current and future mission success

### Budget

NASA budget requests are developed by NASA and approved by the administration prior to submission to the U.S. Congress. Authorized budgets are those that have been included in enacted appropriations bills that are approved by both houses of Congress and enacted into law by the U.S. president. NASA fiscal year budget requests and authorized budgets are provided below.

| Year | Budget Request (bil. US$) | Authorized Budget (bil. US$) | U.S. Government Employees |
|------|---------------------------|------------------------------|---------------------------|
| 2018 | $19.092 | $20.736 | 17,551 |
| 2019 | $19.892 | $21.500 | 17,551 |
| 2020 | $22.613 | $22.629 | 18,048 |
| 2021 | $25.246 | $23.271 | 18,339 |
| 2022 | $24.802 | $24.041 | 18,400 (est.) |

### Organization

NASA funding and priorities are developed through its six Mission Directorates.

| Mission Directorate | Associate Administrator | % of NASA Budget (FY22) |
|---------------------|-------------------------|-------------------------|
| Aeronautics Research (ARMD) | Robert A. Pearce | 4% |
| Exploration Systems Development (ESDMD) | James Free | 28% |
| Space Operations (SOMD) | Kathy Lueders | 17% |
| Science (SMD) | Thomas Zurbuchen | 32% |
| Space Technology (STMD) | James L. Reuter | 5% |
| Mission Support (MSD) | Robert Gibbs | 14% |

Center-wide activities such as the Chief Engineer and Safety and Mission Assurance organizations are aligned to the headquarters function. The MSD budget estimate includes funds for these HQ functions.

The administration operates 10 major field centers, several of which manage additional subordinate facilities across the country. Each is led by a Center Director (data below valid as of September 1, 2022).

| Field Center | Primary Location | Center Director |
|--------------|------------------|-----------------|
| Ames Research Center | Mountain View, California | Dr. Eugene L. Tu |
| Armstrong Flight Research Center | Palmdale, California | Brad Flick (acting) |
| Glenn Research Center | Cleveland, Ohio | Dr. James A. Kenyon (acting) |
| Goddard Space Flight Center | Greenbelt, Maryland | Dennis J. Andrucyk |
| Jet Propulsion Laboratory | La Cañada Flintridge, California | Laurie Leshin |
| Johnson Space Center | Houston, Texas | Vanessa E. Wyche |
| Kennedy Space Center | Merritt Island, Florida | Janet Petro |
| Langley Research Center | Hampton, Virginia | Clayton Turner |
| Marshall Space Flight Center | Huntsville, Alabama | Jody Singer |
| Stennis Space Center | Hancock County, Mississippi | Richard J. Gilbrech |

## History

### Establishment of NASA

Short 2018 documentary about NASA produced for its 60th anniversary

Beginning in 1946, the National Advisory Committee for Aeronautics (NACA) began experimenting with rocket planes such as the supersonic Bell X-1.
In the early 1950s, there was a challenge to launch an artificial satellite for the International Geophysical Year (1957–1958); the American effort was Project Vanguard. After the Soviet space program's launch of the world's first artificial satellite (Sputnik 1) on October 4, 1957, the attention of the United States turned toward its own fledgling space efforts. The US Congress, alarmed by the perceived threat to national security and technological leadership (known as the "Sputnik crisis"), urged immediate and swift action; President Dwight D. Eisenhower counseled more deliberate measures. The result was a consensus that the White House forged among key interest groups, including scientists committed to basic research; the Pentagon, which had to match the Soviet military achievement; corporate America looking for new business; and a strong new trend in public opinion looking toward space exploration.

On January 12, 1958, NACA organized a "Special Committee on Space Technology," headed by Guyford Stever. On January 14, 1958, NACA Director Hugh Dryden published "A National Research Program for Space Technology," stating:

It is of great urgency and importance to our country both from consideration of our prestige as a nation as well as military necessity that this challenge [Sputnik] be met by an energetic program of research and development for the conquest of space ... It is accordingly proposed that the scientific research be the responsibility of a national civilian agency ... NACA is capable, by rapid extension and expansion of its effort, of providing leadership in space technology.

While this new federal agency would conduct all non-military space activity, the Advanced Research Projects Agency (ARPA) was created in February 1958 to develop space technology for military application. On July 29, 1958, Eisenhower signed the National Aeronautics and Space Act, establishing NASA.
When it began operations on October 1, 1958, NASA absorbed the 43-year-old NACA intact: its 8,000 employees, an annual budget of US$100 million, three major research laboratories (Langley Aeronautical Laboratory, Ames Aeronautical Laboratory, and Lewis Flight Propulsion Laboratory), and two small test facilities. Elements of the Army Ballistic Missile Agency and the United States Naval Research Laboratory were incorporated into NASA. A significant contributor to NASA's entry into the Space Race with the Soviet Union was the technology from the German rocket program led by Wernher von Braun, who was now working for the Army Ballistic Missile Agency (ABMA), which in turn incorporated the technology of American scientist Robert Goddard's earlier works. Earlier research efforts within the US Air Force and many of ARPA's early space programs were also transferred to NASA. In December 1958, NASA gained control of the Jet Propulsion Laboratory, a contractor facility operated by the California Institute of Technology.

NASA's first administrator was Dr. T. Keith Glennan, appointed by President Dwight D. Eisenhower. During his term (1958–1961) he brought together the disparate projects in American space development research. James Webb led the agency during the development of the Apollo program in the 1960s. James C. Fletcher has held the position twice: first during the Nixon administration in the 1970s, and then at the request of Ronald Reagan following the Challenger disaster. Daniel Goldin held the post for nearly 10 years and is the longest-serving administrator to date. He is best known for pioneering the "faster, better, cheaper" approach to space programs. Bill Nelson is currently serving as the 14th administrator of NASA.

### Insignia

The NASA seal was approved by Eisenhower in 1959, and slightly modified by President John F. Kennedy in 1961.
NASA's first logo was designed by the head of Lewis' Research Reports Division, James Modarelli, as a simplification of the 1959 seal. In 1975, the original logo was first dubbed "the meatball" to distinguish it from the newly designed "worm" logo which replaced it. The "meatball" returned to official use in 1992. The "worm" was brought out of retirement by administrator Jim Bridenstine in 2020.

### Facilities

NASA Headquarters in Washington, DC provides overall guidance and political leadership to the agency's ten field centers, through which all other facilities are administered.

Aerial views of the NASA Ames (left) and NASA Armstrong (right) centers

Ames Research Center (ARC) at Moffett Field is located in the Silicon Valley of central California and delivers wind-tunnel research on the aerodynamics of propeller-driven aircraft along with research and technology in aeronautics, spaceflight, and information technology. It provides leadership in astrobiology, small satellites, robotic lunar exploration, intelligent/adaptive systems, and thermal protection.

Armstrong Flight Research Center (AFRC) is located inside Edwards Air Force Base and is the home of the Shuttle Carrier Aircraft (SCA), a modified Boeing 747 designed to carry a Space Shuttle orbiter back to Kennedy Space Center after a landing at Edwards AFB. The center focuses on flight testing of advanced aerospace systems.

Glenn Research Center is based in Cleveland, Ohio, and focuses on air-breathing and in-space propulsion and cryogenics, communications, power, energy storage and conversion, microgravity sciences, and advanced materials.

View of GSFC campus (left) and Kraft Mission Control Center at JSC (right)

Goddard Space Flight Center (GSFC), located in Greenbelt, Maryland, develops and operates uncrewed scientific spacecraft.
GSFC also operates two spaceflight tracking and data acquisition networks (the Space Network and the Near Earth Network), develops and maintains advanced space and Earth science data information systems, and develops satellite systems for the National Oceanic and Atmospheric Administration (NOAA).

Johnson Space Center (JSC) is the NASA center for human spaceflight training, research, and flight control. It is home to the United States Astronaut Corps and is responsible for training astronauts from the US and its international partners, and includes the Christopher C. Kraft Jr. Mission Control Center. JSC also operates the White Sands Test Facility in Las Cruces, New Mexico, to support rocket testing.

View of JPL (left) and the Langley Research Center (right)

Jet Propulsion Laboratory (JPL), located in the San Gabriel Valley area of Los Angeles County, California, builds and operates robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA's Deep Space Network (DSN).

Langley Research Center (LaRC), located in Hampton, Virginia, devotes two-thirds of its programs to aeronautics, and the rest to space. LaRC researchers use more than 40 wind tunnels to study improved aircraft and spacecraft safety, performance, and efficiency. The center was also home to early human spaceflight efforts, including the team chronicled in the Hidden Figures story.

View of the SLS exiting the VAB at KSC (left) and of the MSFC test stands (right)

Kennedy Space Center (KSC), located west of Cape Canaveral Space Force Station in Florida, has been the launch site for every United States human space flight since 1968. KSC also manages and operates uncrewed rocket launch facilities for America's civil space program from three pads at Cape Canaveral.
Marshall Space Flight Center (MSFC), located on the Redstone Arsenal near Huntsville, Alabama, is one of NASA's largest centers and is leading the development of the Space Launch System in support of the Artemis program. Marshall is NASA's lead center for International Space Station (ISS) design and assembly; payloads and related crew training; and was the lead for Space Shuttle propulsion and its external tank.

Stennis Space Center, originally the "Mississippi Test Facility", is located in Hancock County, Mississippi, on the banks of the Pearl River at the Mississippi–Louisiana border. Commissioned in October 1961, it is currently used for rocket testing by over 30 local, state, national, international, private, and public companies and agencies. It also contains the NASA Shared Services Center.

### Past human spaceflight programs

#### X-15 (1954–1968)

X-15 in powered flight

NASA inherited NACA's X-15 experimental rocket-powered hypersonic research aircraft, developed in conjunction with the US Air Force and Navy. Three planes were built starting in 1955. The X-15 was drop-launched from the wing of one of two NASA Boeing B-52 Stratofortresses: NB52A, tail number 52-003, and NB52B, tail number 52-008 (known as the Balls 8). Release took place at an altitude of about 45,000 feet (14 km) and a speed of about 500 miles per hour (805 km/h). Twelve pilots were selected for the program from the Air Force, Navy, and NACA. A total of 199 flights were made between June 1959 and December 1968, resulting in the official world record for the highest speed ever reached by a crewed powered aircraft (current as of 2014), and a maximum speed of Mach 6.72, 4,519 miles per hour (7,273 km/h). The altitude record for the X-15 was 354,200 feet (107.96 km). Eight of the pilots were awarded Air Force astronaut wings for flying above 260,000 feet (80 km), and two flights by Joseph A.
Walker exceeded 100 kilometers (330,000 ft), qualifying as spaceflight according to the International Aeronautical Federation. The X-15 program employed mechanical techniques used in the later crewed spaceflight programs, including reaction control system jets for controlling the orientation of a spacecraft, space suits, and horizon definition for navigation. The reentry and landing data collected were valuable to NASA for designing the Space Shuttle.

#### Mercury (1958–1963)

L. Gordon Cooper, photographed by a slow-scan television camera aboard Faith 7 (May 16, 1963)

In 1958, NASA formed an engineering group, the Space Task Group, to manage their human spaceflight programs under the direction of Robert Gilruth. Their earliest programs were conducted under the pressure of the Cold War competition between the US and the Soviet Union. NASA inherited the US Air Force's Man in Space Soonest program, which considered many crewed spacecraft designs ranging from rocket planes like the X-15 to small ballistic space capsules. By 1958, the space plane concepts were eliminated in favor of the ballistic capsule, and NASA renamed it Project Mercury. The first seven astronauts were selected from among candidates in the Navy, Air Force, and Marine test pilot programs. On May 5, 1961, astronaut Alan Shepard became the first American in space aboard a capsule he named Freedom 7, launched on a Redstone booster on a 15-minute ballistic (suborbital) flight. John Glenn became the first American to be launched into orbit, on an Atlas launch vehicle on February 20, 1962, aboard Friendship 7. Glenn completed three orbits, after which three more orbital flights were made, culminating in L. Gordon Cooper's 22-orbit flight Faith 7, May 15–16, 1963. Katherine Johnson, Mary Jackson, and Dorothy Vaughan were three of the human computers doing calculations on trajectories during the Space Race.
Johnson was well known for doing trajectory calculations for John Glenn's mission in 1962, where she ran by hand the same equations that were being run on the computer.

Mercury's competition from the Soviet Union (USSR) was the single-pilot Vostok spacecraft, which carried the first man in space, cosmonaut Yuri Gagarin, on a single Earth orbit aboard Vostok 1 in April 1961, one month before Shepard's flight. In August 1962, the USSR achieved an almost four-day record flight with Andriyan Nikolayev aboard Vostok 3, and also conducted a concurrent Vostok 4 mission carrying Pavel Popovich.

#### Gemini (1961–1966)

Richard Gordon performs a spacewalk to attach a tether to the Agena Target Vehicle on Gemini 11, 1966

Based on studies to extend the Mercury spacecraft's capabilities to long-duration flights, develop space rendezvous techniques, and achieve precision Earth landing, Project Gemini was started as a two-man program in 1961 to overcome the Soviets' lead and to support the planned Apollo crewed lunar landing program, adding extravehicular activity (EVA) and rendezvous and docking to its objectives. The first crewed Gemini flight, Gemini 3, was flown by Gus Grissom and John Young on March 23, 1965. Nine missions followed in 1965 and 1966, demonstrating an endurance mission of nearly fourteen days, rendezvous, docking, and practical EVA, and gathering medical data on the effects of weightlessness on humans.

Under the direction of Soviet Premier Nikita Khrushchev, the USSR competed with Gemini by converting their Vostok spacecraft into a two- or three-man Voskhod. They succeeded in launching two crewed flights before Gemini's first flight, achieving a three-cosmonaut flight in 1964 and the first EVA in 1965. After this, the program was canceled, and Gemini caught up while spacecraft designer Sergei Korolev developed the Soyuz spacecraft, their answer to Apollo.

#### Apollo (1960–1972)

Buzz Aldrin on the Moon, 1969 (photograph by Neil Armstrong)

The U.S.
public's perception of the Soviet lead in the Space Race (by putting the first man into space) motivated President John F. Kennedy to ask the Congress on May 25, 1961, to commit the federal government to a program to land a man on the Moon by the end of the 1960s, which effectively launched the Apollo program.

##### Discovery program

NASA Administrator Bill Nelson announced on June 2, 2021, that the DAVINCI+ and VERITAS missions were selected to launch to Venus in the late 2020s, having beaten out competing proposals for missions to Jupiter's volcanic moon Io and Neptune's large moon Triton that had also been selected as Discovery program finalists in early 2020. Each mission has an estimated cost of $500 million, with launches expected between 2028 and 2030. Launch contracts will be awarded later in each mission's development.

##### New Frontiers program

The New Frontiers program focuses on specific Solar System exploration goals identified as top priorities by the planetary science community. Primary objectives include Solar System exploration employing medium-class spacecraft missions to conduct high-science-return investigations. New Frontiers builds on the development approach employed by the Discovery program but provides for higher cost caps and longer schedule durations than are available with Discovery. Cost caps vary by opportunity; recent missions have been awarded based on a defined cap of $1 billion. The higher cost cap and projected longer mission durations result in a lower frequency of new opportunities for the program - typically one every several years. OSIRIS-REx and New Horizons are examples of New Frontiers missions. NASA has determined that the next opportunity to propose for the fifth round of New Frontiers missions will occur no later than the fall of 2024.
Exploring the Solar System with medium-class spacecraft missions that conduct high-science-return investigations is NASA's strategy to further understand the Solar System.

##### Large strategic missions

Large strategic missions (formerly called Flagship missions) are typically developed and managed by large teams that may span several NASA centers. The individual missions become the program, as opposed to being part of a larger effort (see Discovery, New Frontiers, etc.). The James Webb Space Telescope is a strategic mission that was developed over a period of more than 20 years. Strategic missions are developed on an ad-hoc basis as program objectives and priorities are established. Missions like Voyager, had they been developed today, would have been strategic missions. Three of the Great Observatories were strategic missions (the Chandra X-ray Observatory, Compton, and the Hubble Space Telescope). Europa Clipper is the next large strategic mission in development by NASA.

#### Planetary science missions

NASA continues to play a material role in the exploration of the Solar System, as it has for decades. Ongoing missions have current science objectives with respect to more than five extraterrestrial bodies within the Solar System: the Moon (Lunar Reconnaissance Orbiter), Mars (Perseverance rover), Jupiter (Juno), asteroid Bennu (OSIRIS-REx), and Kuiper Belt objects (New Horizons). The Juno extended mission will make multiple flybys of the Jovian moon Io in 2023 and 2024, after flybys of Ganymede in 2021 and Europa in 2022. Voyager 1 and Voyager 2 continue to provide science data back to Earth while continuing on their outward journeys into interstellar space.

On November 26, 2011, NASA's Mars Science Laboratory mission was successfully launched for Mars. The Curiosity rover successfully landed on Mars on August 6, 2012, and subsequently began its search for evidence of past or present life on Mars.
In September 2014, NASA's MAVEN spacecraft, which is part of the Mars Scout Program, successfully entered Mars orbit and, as of October 2022, continues its study of the atmosphere of Mars. NASA's ongoing Mars investigations include in-depth surveys of Mars by the Perseverance rover and the InSight lander. NASA's Europa Clipper, planned for launch in October 2024, will study the Galilean moon Europa through a series of flybys while in orbit around Jupiter. Dragonfly will send a mobile robotic rotorcraft to Saturn's biggest moon, Titan. As of May 2021, Dragonfly is scheduled for launch in June 2027.

#### Astrophysics missions

NASA astrophysics spacecraft fleet, credit NASA GSFC, 2022

The NASA Science Mission Directorate Astrophysics division manages the agency's astrophysics science portfolio. NASA has invested significant resources in the development, delivery, and operations of various forms of space telescopes, which have provided the means to study the cosmos over a large range of the electromagnetic spectrum. The Great Observatories launched in the 1980s and 1990s have provided a wealth of observations for study by physicists across the planet. The first of them, the Hubble Space Telescope, was delivered to orbit in 1990 and continues to function, in part due to prior servicing missions performed by the Space Shuttle. The other remaining active Great Observatory is the Chandra X-ray Observatory (CXO), launched by STS-93 in July 1999 and now in a 64-hour elliptical orbit, studying X-ray sources that are not readily viewable from terrestrial observatories.

Chandra X-ray Observatory (rendering), 2015

The Imaging X-ray Polarimetry Explorer (IXPE) is a space observatory designed to improve the understanding of X-ray production in objects such as neutron stars and pulsar wind nebulae, as well as stellar and supermassive black holes. IXPE launched in December 2021 and is an international collaboration between NASA and the Italian Space Agency (ASI).
It is part of the NASA Small Explorers program (SMEX), which designs low-cost spacecraft to study heliophysics and astrophysics.

The Neil Gehrels Swift Observatory was launched in November 2004 and is a gamma-ray burst observatory that also monitors the afterglow of a burst in X-ray and UV/visible light at the burst's location. The mission was developed in a joint partnership between Goddard Space Flight Center (GSFC) and an international consortium from the United States, United Kingdom, and Italy. Pennsylvania State University operates the mission as part of NASA's Medium Explorer program (MIDEX).

The Fermi Gamma-ray Space Telescope (FGST) is another gamma-ray-focused space observatory that was launched to low Earth orbit in June 2008 and is being used to perform gamma-ray astronomy observations. In addition to NASA, the mission involves the United States Department of Energy and government agencies in France, Germany, Italy, Japan, and Sweden.

The James Webb Space Telescope (JWST), launched in December 2021 on an Ariane 5 rocket, operates in a halo orbit circling the Sun–Earth L2 point. JWST's high sensitivity in the infrared spectrum and its imaging resolution will allow it to view more distant, faint, or older objects than its predecessors, including Hubble.

#### Earth Sciences Program missions (1965–present)

Schematic of NASA Earth Science Division operating satellite missions as of February 2015

NASA Earth Science is a large, umbrella program comprising a range of terrestrial and space-based collection systems designed to better understand the Earth system and its response to natural and human-caused changes. Numerous systems have been developed and fielded over several decades to provide improved prediction for weather, climate, and other changes in the natural environment.
Several of the currently operating spacecraft programs include Aqua, Aura, Orbiting Carbon Observatory 2 (OCO-2), Gravity Recovery and Climate Experiment Follow-on (GRACE-FO), and Ice, Cloud, and land Elevation Satellite 2 (ICESat-2). In addition to systems already in orbit, NASA is designing a new set of Earth observing systems to study, assess, and generate responses for climate change, natural hazards, forest fires, and real-time agricultural processes. The GOES-T satellite (designated GOES-18 after launch) joined the fleet of U.S. geostationary weather monitoring satellites in March 2022.

NASA also maintains the Earth Science Data Systems (ESDS) program to oversee the life cycle of NASA's Earth science data, from acquisition through processing and distribution. The primary goal of ESDS is to maximize the scientific return from NASA's missions and experiments for research and applied scientists, decision makers, and society at large. The Earth Science program is managed by the Earth Science Division of the NASA Science Mission Directorate.

### Space operations architecture

NASA invests in various ground- and space-based infrastructures to support its science and exploration mandate. The agency maintains access to suborbital and orbital space launch capabilities and sustains ground station solutions to support its evolving fleet of spacecraft and remote systems.

#### Deep Space Network (1963–present)

The NASA Deep Space Network (DSN) serves as the primary ground station solution for NASA's interplanetary spacecraft and select Earth-orbiting missions. The system employs ground station complexes near Barstow, California in the United States, near Madrid in Spain, and near Canberra in Australia. The placement of these ground stations approximately 120 degrees apart around the planet allows communication with spacecraft throughout the Solar System even as the Earth rotates about its axis each day.
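The 120-degree spacing argument above can be sketched numerically. The snippet below is a simplified illustration, not DSN scheduling code: the complex longitudes are rough assumptions, and latitude, antenna elevation masks, and spacecraft geometry are all ignored.

```python
# Approximate longitudes (degrees east) of the three DSN complexes.
# These values are illustrative assumptions for this sketch.
DSN_COMPLEXES = {"Goldstone": -116.9, "Madrid": -4.2, "Canberra": 149.0}

def angular_separation(lon_a: float, lon_b: float) -> float:
    """Smallest absolute difference between two longitudes, in degrees."""
    d = abs(lon_a - lon_b) % 360
    return min(d, 360 - d)

def complexes_in_view(subpoint_lon: float, horizon: float = 90.0) -> list:
    """Complexes whose longitude lies within `horizon` degrees of the
    spacecraft's sub-Earth longitude (a crude visibility test for a
    distant spacecraft)."""
    return [name for name, lon in DSN_COMPLEXES.items()
            if angular_separation(lon, subpoint_lon) <= horizon]

# With roughly 120-degree spacing, every longitude is within 90 degrees
# of at least one complex, so a deep-space target is never out of view.
coverage_ok = all(complexes_in_view(lon) for lon in range(0, 360, 5))
```

Under these assumptions `coverage_ok` is true, which is the geometric point the text makes: as the Earth turns, responsibility for a deep-space link simply hands over from one complex to the next.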
The system is controlled from a 24/7 operations center at JPL in Pasadena, California, which manages recurring communications linkages with up to 40 spacecraft. The system is managed by the Jet Propulsion Laboratory (JPL).

#### Near Space Network (1983–present)

Near Earth Network Ground Stations, 2021

The Near Space Network (NSN) provides telemetry, commanding, ground-based tracking, data, and communications services to a wide range of customers with satellites in low Earth orbit (LEO), geosynchronous orbit (GEO), highly elliptical orbits (HEO), and lunar orbits. The NSN combines ground station and antenna assets from the Near Earth Network with the Tracking and Data Relay Satellite System (TDRS), which operates in geosynchronous orbit providing continuous real-time coverage for launch vehicles and low Earth orbit NASA missions. The NSN consists of 19 ground stations worldwide operated by the US Government and by contractors including Kongsberg Satellite Services (KSAT), Swedish Space Corporation (SSC), and the South African National Space Agency (SANSA). The ground network averages between 120 and 150 spacecraft contacts a day, with TDRS engaging with systems on a near-continuous basis as needed; the system is managed and operated by the Goddard Space Flight Center.

#### Sounding Rocket Program (1959–present)

The NASA Sounding Rocket Program (NSRP) is located at the Wallops Flight Facility and provides launch capability, payload development and integration, and field operations support to execute suborbital missions. The program has been in operation since 1959 and is managed by the Goddard Space Flight Center using a combined US Government and contractor team. The NSRP team conducts approximately 20 missions per year from both Wallops and other launch locations worldwide to allow scientists to collect data "where it occurs".
The program supports the strategic vision of the Science Mission Directorate, collecting important scientific data for Earth science, heliophysics, and astrophysics programs. In June 2022, NASA conducted its first rocket launch from a commercial spaceport outside the US, launching a Black Brant IX from the Arnhem Space Centre in Australia.

#### Launch Services Program (1990–present)

The NASA Launch Services Program (LSP) is responsible for procurement of launch services for NASA uncrewed missions and oversight of launch integration and launch preparation activity, providing added quality and mission assurance to meet program objectives. Since 1990, NASA has purchased expendable launch vehicle launch services directly from commercial providers, whenever possible, for its scientific and applications missions. Expendable launch vehicles can accommodate all types of orbit inclinations and altitudes and are ideal vehicles for launching Earth-orbit and interplanetary missions. LSP operates from Kennedy Space Center and falls under the NASA Space Operations Mission Directorate (SOMD).

### Aeronautics Research

The Aeronautics Research Mission Directorate (ARMD) is one of five mission directorates within NASA, the other four being the Exploration Systems Development Mission Directorate, the Space Operations Mission Directorate, the Science Mission Directorate, and the Space Technology Mission Directorate. The ARMD is responsible for NASA's aeronautical research, which benefits the commercial, military, and general aviation sectors. ARMD performs its aeronautics research at four NASA facilities: Ames Research Center and Armstrong Flight Research Center in California, Glenn Research Center in Ohio, and Langley Research Center in Virginia.

#### NASA X-57 Maxwell aircraft (2016–present)

The NASA X-57 Maxwell is an experimental aircraft being developed by NASA to demonstrate the technologies required to deliver a highly efficient all-electric aircraft.
The primary goal of the program is to develop and deliver all-electric technology solutions that can also achieve airworthiness certification with regulators. The program involves development of the system in several phases, or modifications, to incrementally grow the capability and operability of the system. The initial configuration of the aircraft has completed ground testing as it approaches its first flights. In mid-2022, the X-57 was scheduled to fly before the end of the year. The development team includes staff from the NASA Armstrong, Glenn, and Langley centers along with a number of industry partners from the United States and Italy.

#### Next Generation Air Transportation System (2007–present)

NASA is collaborating with the Federal Aviation Administration and industry stakeholders to modernize the United States National Airspace System (NAS). Efforts began in 2007 with a goal to deliver major modernization components by 2025. The modernization effort intends to increase the safety, efficiency, capacity, access, flexibility, predictability, and resilience of the NAS while reducing the environmental impact of aviation. The Aviation Systems Division of NASA Ames operates the joint NASA/FAA North Texas Research Station. The station supports all phases of NextGen research, from concept development to prototype system field evaluation, and has already transitioned advanced NextGen concepts and technologies to use through technology transfers to the FAA. NASA contributions also include development of advanced automation concepts and tools that provide air traffic controllers, pilots, and other airspace users with more accurate real-time information about the nation's traffic flow, weather, and routing. Ames' advanced airspace modeling and simulation tools have been used extensively to model the flow of air traffic across the U.S. and to evaluate new concepts in airspace design, traffic flow management, and optimization.
### Technology research

#### Nuclear in-space power and propulsion (ongoing)

NASA has made use of technologies such as the multi-mission radioisotope thermoelectric generator (MMRTG), a type of radioisotope thermoelectric generator used to power spacecraft. Shortages of the required plutonium-238 have curtailed deep space missions since the turn of the millennium; New Horizons 2, for example, was not developed because of a shortage of this material. In July 2021, NASA announced contract awards for development of nuclear thermal propulsion reactors. Three contractors will develop individual designs over 12 months for later evaluation by NASA and the U.S. Department of Energy. NASA's space nuclear technologies portfolio is led and funded by its Space Technology Mission Directorate.

#### Other initiatives

Free Space Optics. NASA contracted a third party to study the probability of using Free Space Optics (FSO) to communicate with Optical (laser) Stations on the Ground (OGS), called laser-com RF networks, for satellite communications.

Water Extraction from Lunar Soil. On July 29, 2020, NASA asked American universities to propose new technologies for extracting water from the lunar soil and developing power systems. The idea will help the space agency conduct sustainable exploration of the Moon.

### Human Spaceflight Research (2005–present)

SpaceX Crew-4 astronaut Samantha Cristoforetti operating the rHEALTH ONE on the ISS to address key health risks for space travel.

NASA's Human Research Program (HRP) is designed to study the effects of space on human health and to provide countermeasures and technologies for human space exploration. The medical effects of space exploration are reasonably limited in low Earth orbit or in travel to the Moon. Travel to Mars, however, is significantly longer and deeper into space, and significant medical issues can result.
This includes bone loss, radiation exposure, vision changes, circadian rhythm disturbances, heart remodeling, and immune alterations. In order to study and diagnose these ill effects, HRP has been tasked with identifying or developing small portable instrumentation with low mass, volume, and power to monitor the health of astronauts. To achieve this aim, on May 13, 2022, NASA and SpaceX Crew-4 astronauts successfully tested the rHEALTH ONE universal biomedical analyzer for its ability to identify and analyze biomarkers, cells, microorganisms, and proteins in a spaceflight environment.

### Planetary Defense (2016–present)

NASA established the Planetary Defense Coordination Office (PDCO) in 2016 to catalog and track potentially hazardous near-Earth objects (NEO), such as asteroids and comets, and to develop potential responses and defenses against these threats. The PDCO is chartered to provide timely and accurate information to the government and the public on close approaches by potentially hazardous objects (PHOs) and any potential for impact. The office functions within the Science Mission Directorate Planetary Science division. The PDCO augmented prior cooperative actions between the United States, the European Union, and other nations, which had been scanning the sky for NEOs since 1998 in an effort called Spaceguard.

#### Near Earth object detection (1998–present)

Since the 1990s NASA has run many NEO detection programs from Earth-based observatories, greatly increasing the number of objects that have been detected. However, many asteroids are very dark, and the ones that are near the Sun are much harder to detect from Earth-based telescopes, which observe at night and thus face away from the Sun. In addition, NEOs inside Earth's orbit reflect only part of their illuminated surface toward Earth, rather than appearing fully lit (like a "full Moon") as they can when beyond Earth's orbit and opposite the Sun.
In 1998, the United States Congress gave NASA a mandate to detect 90% of near-Earth asteroids over 1 km (0.62 mi) in diameter (those that threaten global devastation) by 2008. This initial mandate was met by 2011. In 2005, the original USA Spaceguard mandate was extended by the George E. Brown, Jr. Near-Earth Object Survey Act, which calls for NASA to detect 90% of NEOs with diameters of 140 m (460 ft) or greater by 2020 (compare to the 20-meter Chelyabinsk meteor that hit Russia in 2013). As of January 2020, it was estimated that less than half of these had been found, though objects of this size hit the Earth only about once in 2,000 years. In January 2020, NASA officials estimated it would take 30 years to find all objects meeting the 140 m (460 ft) size criterion, more than twice the timeframe built into the 2005 mandate. In June 2021, NASA authorized the development of the NEO Surveyor spacecraft to reduce the projected time to achieve the mandate down to 10 years.

#### Involvement in current robotic missions

NASA has incorporated planetary defense objectives into several ongoing missions. In 1999, NASA visited 433 Eros with the NEAR Shoemaker spacecraft, which entered its orbit in 2000, closely imaging the asteroid with various instruments. NEAR Shoemaker became the first spacecraft to successfully orbit and land on an asteroid, improving our understanding of these bodies and demonstrating our capacity to study them in greater detail. OSIRIS-REx used its suite of instruments to transmit radio tracking signals and capture optical images of Bennu during its study of the asteroid, which will help NASA scientists determine its precise position in the solar system and its exact orbital path.
As Bennu has the potential for recurring approaches to the Earth-Moon system in the next 100–200 years, the precision gained from OSIRIS-REx will enable scientists to better predict the future gravitational interactions between Bennu and our planet and the resultant changes in Bennu's onward flight path. The WISE/NEOWISE mission was launched by NASA JPL in 2009 as an infrared-wavelength astronomical space telescope. In 2013, NASA repurposed it as the NEOWISE mission to find potentially hazardous near-Earth asteroids and comets; its mission has been extended into 2023. NASA and the Johns Hopkins Applied Physics Laboratory (JHAPL) jointly developed the first planetary defense purpose-built satellite, the Double Asteroid Redirection Test (DART), to test possible planetary defense concepts. DART was launched in November 2021 by a SpaceX Falcon 9 from California on a trajectory designed to impact the Dimorphos asteroid. Scientists sought to determine whether an impact could alter the subsequent path of the asteroid, a concept that could be applied to future planetary defense. On September 26, 2022, DART hit its target. Studies in the weeks following impact will determine the extent to which the impact changed the trajectory of the NEO. NEO Surveyor, formerly called the Near-Earth Object Camera (NEOCam) mission, is a space-based infrared telescope under development to survey the Solar System for potentially hazardous asteroids. The spacecraft is scheduled to launch in 2026.

### Study of Unidentified Aerial Phenomena (2022–present)

In June 2022, the head of the NASA Science Mission Directorate, Thomas Zurbuchen, confirmed that NASA would join the hunt for Unidentified Flying Objects (UFOs)/Unidentified Aerial Phenomena (UAPs).
At a speech before the National Academies of Sciences, Engineering, and Medicine, Zurbuchen said the space agency would bring a scientific perspective to efforts already underway by the Pentagon and intelligence agencies to make sense of dozens of such sightings. He said it was "high-risk, high-impact" research that the space agency should not shy away from, even if it is a controversial field of study.

## Collaboration

### NASA Advisory Council

In response to the Apollo 1 accident, which killed three astronauts in 1967, Congress directed NASA to form an Aerospace Safety Advisory Panel (ASAP) to advise the NASA Administrator on safety issues and hazards in NASA's air and space programs. In the aftermath of the Shuttle Columbia disaster, Congress required that the ASAP submit an annual report to the NASA Administrator and to Congress. By 1971, NASA had also established the Space Program Advisory Council and the Research and Technology Advisory Council to provide the administrator with advisory committee support. In 1977, the latter two were combined to form the NASA Advisory Council (NAC). The NASA Authorization Act of 2014 reaffirmed the importance of ASAP.

### National Oceanic and Atmospheric Administration (NOAA)

NASA and NOAA have cooperated for decades on the development, delivery, and operation of polar and geosynchronous weather satellites. The relationship typically involves NASA developing the space systems, launch solutions, and ground control technology for the satellites, and NOAA operating the systems and delivering weather forecasting products to users. Multiple generations of NOAA polar orbiting platforms have operated to provide detailed imaging of weather from low altitude. Geostationary Operational Environmental Satellites (GOES) provide near-real-time coverage of the western hemisphere to ensure accurate and timely understanding of developing weather phenomena.
### United States Space Force

The United States Space Force (USSF) is the space service branch of the United States Armed Forces, while the National Aeronautics and Space Administration (NASA) is an independent agency of the United States government responsible for civil spaceflight. NASA and the Space Force's predecessors in the Air Force have a long-standing cooperative relationship, with the Space Force supporting NASA launches out of Kennedy Space Center, Cape Canaveral Space Force Station, and Vandenberg Space Force Base, including range support and rescue operations from Task Force 45. NASA and the Space Force also partner on matters such as defending Earth from asteroids. Space Force members can be NASA astronauts, with Colonel Michael S. Hopkins, the commander of SpaceX Crew-1, commissioned into the Space Force from the International Space Station on December 18, 2020. In September 2020, the Space Force and NASA signed a memorandum of understanding formally acknowledging the joint role of both agencies. This new memorandum replaced a similar document signed in 2006 between NASA and Air Force Space Command.

### U.S. Geological Survey

The Landsat program is the longest-running enterprise for acquisition of satellite imagery of Earth. It is a joint NASA/USGS program. On July 23, 1972, the Earth Resources Technology Satellite was launched; it was eventually renamed Landsat 1 in 1975. The most recent satellite in the series, Landsat 9, was launched on September 27, 2021. The instruments on the Landsat satellites have acquired millions of images. The images, archived in the United States and at Landsat receiving stations around the world, are a unique resource for global change research and applications in agriculture, cartography, geology, forestry, regional planning, surveillance, and education, and can be viewed through the U.S. Geological Survey (USGS) "EarthExplorer" website.
The collaboration between NASA and USGS involves NASA designing and delivering the space system (satellite) solution and launching the satellite into orbit, with the USGS operating the system once in orbit. As of October 2022, nine satellites have been built, with eight of them successfully operating in orbit.

### European Space Agency (ESA)

NASA collaborates with the European Space Agency on a wide range of scientific and exploration requirements. From participation with the Space Shuttle (the Spacelab missions) to major roles on the Artemis program (the Orion Service Module), ESA and NASA have supported the science and exploration missions of each agency. There are NASA payloads on ESA spacecraft and ESA payloads on NASA spacecraft. The agencies have developed joint missions in areas including heliophysics (e.g. Solar Orbiter) and astronomy (Hubble Space Telescope, James Webb Space Telescope). Under the Artemis Gateway partnership, ESA will contribute habitation and refueling modules, along with enhanced lunar communications, to the Gateway. NASA and ESA continue to advance cooperation in Earth science, including climate change, with agreements to cooperate on various missions including the Sentinel-6 series of spacecraft.

### Japan Aerospace Exploration Agency (JAXA)

NASA and the Japan Aerospace Exploration Agency (JAXA) cooperate on a range of space projects. JAXA is a direct participant in the Artemis program, including the Lunar Gateway effort. JAXA's planned contributions to Gateway include I-Hab's environmental control and life support system, batteries, thermal control, and imagery components, which will be integrated into the module by the European Space Agency (ESA) prior to launch. These capabilities are critical for sustained Gateway operations during crewed and uncrewed time periods. JAXA and NASA have collaborated on numerous satellite programs, especially in areas of Earth science. NASA has contributed to JAXA satellites and vice versa.
Japanese instruments are flying on NASA's Terra and Aqua satellites, and NASA sensors have flown on previous Japanese Earth-observation missions. The NASA-JAXA Global Precipitation Measurement mission was launched in 2014 and includes both NASA- and JAXA-supplied sensors on a NASA satellite launched on a JAXA rocket. The mission provides frequent, accurate measurements of rainfall over the entire globe for use by scientists and weather forecasters.

### Roscosmos

NASA and Roscosmos have cooperated on the development and operation of the International Space Station since September 1993. The agencies have used launch systems from both countries to deliver station elements to orbit. Astronauts and cosmonauts jointly maintain various elements of the station. Both countries provide access to the station via their launch systems, with Russia serving as the sole provider of crew and cargo delivery between the retirement of the Space Shuttle in 2011 and the commencement of NASA COTS and commercial crew flights. In July 2022, NASA and Roscosmos signed a deal to share space station flights, enabling crew from each country to ride on the systems provided by the other. Current geopolitical conditions in late 2022 make it unlikely that cooperation will be extended to other programs such as Artemis or lunar exploration.

### Indian Space Research Organisation

In September 2014, NASA and the Indian Space Research Organisation (ISRO) signed a partnership to collaborate on and launch a joint radar mission, the NASA-ISRO Synthetic Aperture Radar (NISAR) mission. The mission is targeted to launch in 2024. NASA will provide the mission's L-band synthetic aperture radar, a high-rate communication subsystem for science data, GPS receivers, a solid-state recorder, and a payload data subsystem. ISRO will provide the spacecraft bus, the S-band radar, the launch vehicle, and associated launch services.
### Artemis Accords

The Artemis Accords have been established to define a framework for cooperating in the peaceful exploration and exploitation of the Moon, Mars, asteroids, and comets. The Accords were drafted by NASA and the U.S. State Department and are executed as a series of bilateral agreements between the United States and the participating countries. As of September 2022, 21 countries have signed the accords: Australia, Bahrain, Brazil, Canada, Colombia, France, Israel, Italy, Japan, the Republic of Korea, Luxembourg, Mexico, New Zealand, Poland, Romania, the Kingdom of Saudi Arabia, Singapore, Ukraine, the United Arab Emirates, the United Kingdom, and the United States.

### China National Space Administration

The Wolf Amendment was passed into law by the U.S. Congress in 2011 and prevents NASA from engaging in direct, bilateral cooperation with the Chinese government and China-affiliated organizations such as the China National Space Administration without explicit authorization from Congress and the Federal Bureau of Investigation. The law has since been renewed annually by inclusion in annual appropriations bills.

## Sustainability

### Environmental impact

The exhaust gases produced by rocket propulsion systems, both in Earth's atmosphere and in space, can adversely affect the Earth's environment. Some hypergolic rocket propellants, such as hydrazine, are highly toxic prior to combustion, but decompose into less toxic compounds after burning. Rockets using hydrocarbon fuels, such as kerosene, release carbon dioxide and soot in their exhaust. However, carbon dioxide emissions are insignificant compared to those from other sources; on average, the United States consumed 803 million US gal (3.0 million m3) of liquid fuels per day in 2014, while a single Falcon 9 rocket first stage burns around 25,000 US gallons (95 m3) of kerosene fuel per launch.
Even if a Falcon 9 were launched every single day, it would represent only 0.006% of liquid fuel consumption (and carbon dioxide emissions) for that day. Additionally, the exhaust from LOX- and LH2-fueled engines, like the SSME, is almost entirely water vapor. NASA addressed environmental concerns with its canceled Constellation program in accordance with the National Environmental Policy Act in 2011. In contrast, ion engines use harmless noble gases like xenon for propulsion. An example of NASA's environmental efforts is the NASA Sustainability Base. Additionally, the Exploration Sciences Building was awarded the LEED Gold rating in 2010. On May 8, 2003, the Environmental Protection Agency recognized NASA as the first federal agency to directly use landfill gas to produce energy at one of its facilities: the Goddard Space Flight Center in Greenbelt, Maryland. In 2018, NASA, along with other organizations including Sensor Coating Systems, Pratt & Whitney, Monitor Coating, and UTRC, launched the project CAUTION (CoAtings for Ultra High Temperature detectION). This project aims to enhance the temperature range of the Thermal History Coating up to 1,500 °C (2,730 °F) and beyond, with the final goal of improving the safety of jet engines as well as increasing efficiency and reducing CO2 emissions.

### Climate change

NASA also researches and publishes on climate change. Its statements concur with the global scientific consensus that the global climate is warming. Bob Walker, who has advised US President Donald Trump on space issues, has advocated that NASA should focus on space exploration and that its climate study operations should be transferred to other agencies such as NOAA. Former NASA atmospheric scientist J. Marshall Shepherd countered that Earth science study was built into NASA's mission at its creation in the 1958 National Aeronautics and Space Act. NASA won the 2020 Webby People's Voice Award for Green in the category Web.
### STEM Initiatives

Educational Launch of Nanosatellites (ELaNa). Since 2011, the ELaNa program has provided opportunities for NASA to work with university teams to test emerging technologies and commercial off-the-shelf solutions by providing launch opportunities for developed CubeSats using NASA-procured launch opportunities. For example, two NASA-sponsored CubeSats launched in June 2022 on a Virgin Orbit LauncherOne vehicle as the ELaNa 39 mission.

Cubes in Space. NASA started an annual competition in 2014 named "Cubes in Space". It is jointly organized by NASA and the global education company I Doodle Learning, with the objective of teaching school students aged 11–18 to design and build scientific experiments to be launched into space on a NASA rocket or balloon. On June 21, 2017, the world's smallest satellite, KalamSAT, was launched.

### Use of the metric system

US law requires the International System of Units to be used in all US Government programs, "except where impractical". In 1969, Apollo 11 landed on the Moon using a mix of United States customary units and metric units. In the 1980s, NASA started the transition towards the metric system, but was still using both systems in the 1990s. On September 23, 1999, a mix-up between NASA's use of SI units and Lockheed Martin Space's use of US units resulted in the loss of the Mars Climate Orbiter. In August 2007, NASA stated that all future missions and explorations of the Moon would be done entirely using the SI system, to improve cooperation with space agencies of other countries that already use the metric system. As of 2007, NASA is predominantly working with SI units, but some projects still use US units, and some, including the International Space Station, use a mix of both.
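The Mars Climate Orbiter failure illustrates how a bare number in the wrong unit system propagates silently. The sketch below is illustrative only: the impulse value is hypothetical, and only the pound-force-second to newton-second conversion factor is a real constant.

```python
# Illustrative sketch of a customary/SI unit mix-up of the kind that doomed
# the Mars Climate Orbiter: one component reports thruster impulse in
# pound-force seconds, while the consumer interprets the bare number as
# newton-seconds. The impulse value (10.0) is hypothetical.

LBF_S_TO_N_S = 4.4482216152605  # 1 pound-force second expressed in newton-seconds

def report_impulse_lbf_s(impulse_lbf_s):
    """Impulse computed in lbf*s but handed off as a unitless number."""
    return impulse_lbf_s

reported = report_impulse_lbf_s(10.0)   # really 10 lbf*s
actual_n_s = 10.0 * LBF_S_TO_N_S        # the true impulse in SI units

# A trajectory model expecting N*s underestimates the impulse ~4.45x.
error_factor = actual_n_s / reported
print(f"model underestimates each thruster firing by {error_factor:.2f}x")
```

Because the discrepancy compounds over many small trajectory-correction firings, the accumulated navigation error can grow large enough to miss an orbital-insertion target, which is why carrying explicit units (or asserting them at interfaces) matters.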
## Media presence

### NASA TV

Approaching 40 years of service, the NASA TV channel airs content ranging from live coverage of crewed missions to video coverage of significant milestones for operating robotic spacecraft (e.g., rover landings on Mars) and domestic and international launches. The channel is delivered by NASA and is broadcast by satellite and over the internet. The system initially started to capture archival footage of important space events for NASA managers and engineers and expanded as public interest grew. The Apollo 8 Christmas Eve broadcast while in orbit around the Moon was received by more than a billion people. NASA's video transmission of the Apollo 11 Moon landing was awarded a Primetime Emmy in commemoration of the 40th anniversary of the landing. The channel is a product of the U.S. Government and is widely available across many television and internet platforms.

### NASAcast

NASAcast is the official audio and video podcast of the NASA website. Created in late 2005, the podcast service contains the latest audio and video features from the NASA web site, including NASA TV's This Week at NASA and educational materials produced by NASA. Additional NASA podcasts, such as [email protected], are also featured and give subscribers an in-depth look at content by subject matter.

### NASA EDGE

NASA EDGE broadcasting live from White Sands Missile Range in 2010

NASA EDGE is a video podcast which explores different missions, technologies, and projects developed by NASA. The program was released by NASA on March 18, 2007, and, as of August 2020, there have been 200 vodcasts produced. It is a public outreach vodcast sponsored by NASA's Exploration Systems Mission Directorate and based out of the Exploration and Space Operations Directorate at Langley Research Center in Hampton, Virginia.
The NASA EDGE team takes an insider's look at current projects and technologies from NASA facilities around the United States, depicted through on-scene broadcasts, computer animations, and personal interviews with top scientists and engineers at NASA. The show explores the contributions NASA has made to society as well as the progress of current projects in materials and space exploration. NASA EDGE vodcasts can be downloaded from the NASA website and from iTunes. In its first year of production, the show was downloaded over 450,000 times. As of February 2010, the average download rate was more than 420,000 per month, with over one million downloads in December 2009 and January 2010. NASA and the NASA EDGE team have also developed interactive programs designed to complement the vodcast. The Lunar Electric Rover App allows users to drive a simulated Lunar Electric Rover between objectives, and it provides information about and images of the vehicle. The NASA EDGE Widget provides a graphical user interface for accessing NASA EDGE vodcasts, image galleries, and the program's Twitter feed, as well as a live NASA news feed.

### Astronomy Picture of the Day

Astronomy Picture of the Day (APOD) is a website provided by NASA and Michigan Technological University (MTU). According to the website, "Each day a different image or photograph of our universe is featured, along with a brief explanation written by a professional astronomer." The photograph does not necessarily correspond to a celestial event on the exact day that it is displayed, and images are sometimes repeated. However, the pictures and descriptions often relate to current events in astronomy and space exploration. The text has several hyperlinks to more pictures and websites for more information.
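APOD metadata is also exposed programmatically through NASA's public api.nasa.gov service. As a hedged sketch (the `planetary/apod` endpoint and its `api_key`/`date` parameters reflect the commonly documented public API, not anything stated in this article, so verify against api.nasa.gov before relying on them), a request URL for a given date can be built without any network access:

```python
# Hedged sketch: build a query URL for NASA's public APOD web service.
# Endpoint path and parameter names are assumptions based on the commonly
# documented api.nasa.gov API; no network request is made here.
from urllib.parse import urlencode

APOD_ENDPOINT = "https://api.nasa.gov/planetary/apod"

def apod_request_url(date, api_key="DEMO_KEY"):
    """Build the metadata query URL for a YYYY-MM-DD date string."""
    return f"{APOD_ENDPOINT}?{urlencode({'api_key': api_key, 'date': date})}"

# The first APOD image appeared on June 16, 1995.
print(apod_request_url("1995-06-16"))
```

Fetching that URL (with a registered key in place of `DEMO_KEY`) would return JSON describing the day's image, which is how third-party apps typically embed APOD content.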
The images are either visible spectrum photographs, images taken at non-visible wavelengths and displayed in false color, video footage, animations, artist's conceptions, or micrographs that relate to space or cosmology. Past images are stored in the APOD Archive, with the first image appearing on June 16, 1995. This initiative has received support from NASA, the National Science Foundation, and MTU. The images are sometimes authored by people or organizations outside NASA, and therefore APOD images are often copyrighted, unlike many other NASA image galleries. When the APOD website was created, it received a total of 14 page views on its first day. As of 2012, the APOD website has received over a billion image views throughout its lifetime. APOD is also translated into 21 languages daily.

## Explanatory notes

1. ^ The descent stage of the LM stayed on the Moon after landing, while the ascent stage brought the two astronauts back to the CSM and then was discarded in lunar orbit.
2. ^ a b c Orbital Sciences was awarded a CRS contract in 2008. In 2015, Orbital Sciences became Orbital ATK through a business merger. Orbital ATK was awarded a CRS-2 contract in 2016. In 2018, Orbital ATK was acquired by Northrop Grumman.
3. ^ NASA EDGE Cast and Crew: Chris Giersch (Host); Blair Allen (Co-host and senior producer); Franklin Fitzgerald (News anchor and "everyman"); Jaqueline Mirielle Cortez (Special co-host); Ron Beard (Director and "set therapist"); and Don Morrison (Audio/video engineer)
4. ^ From left to right: Launch vehicle of Apollo (Saturn 5), Gemini (Titan 2) and Mercury (Atlas). Left, top-down: Spacecraft of Apollo, Gemini and Mercury. The Saturn IB and Mercury-Redstone launch vehicles are left out.

This page was last updated at 2022-11-21 07:45 UTC. All content comes from Wikipedia and is available under the Creative Commons Attribution-ShareAlike License.
Atomistic simulation study on the crack growth stability of graphene under uniaxial tension and indentation S Lee and NM Pugno and S Ryu, MECCANICA, 54, 1915-1926 (2019). DOI: 10.1007/s11012-019-01027-x Combining a series of atomistic simulations with fracture mechanics theory, we systematically investigate the crack growth stability of graphene under tension and indentation, with a pre-existing crack made by two methods: atom removal and (artificial) bonding removal. In the tension, the monotonically increasing energy release rate umentclass12ptminimal \usepackageamsmath \usepackagewasysym \usepackageamsfonts \usepackageamssymb \usepackageamsbsy \usepackagemathrsfs \usepackageupgreek \setlength\oddsidemargin-69pt \begindocument$$G$$\enddocument is consistent with the unstable crack growth. In contrast, the non- monotonic G\documentclass12ptminimal \usepackageamsmath \usepackagewasysym \usepackageamsfonts \usepackageamssymb \usepackageamsbsy \usepackagemathrsfs \usepackageupgreek \setlength\oddsidemargin-69pt \begindocument$$G$$\enddocument with a maximum for indentation explains the transition from unstable to stable crack growth when the crack length is comparable to the diameter of the contact zone. We also find that the crack growth stability within a stable crack growth regime can be significantly affected by the crack tip sharpness even down to a single atom scale. A crack made by atom removal starts to grow at a higher indentation force than the ultimately sharp crack made by bonding removal, which leads to a large force drop at the onset of the crack growth that can cause unstable crack growth under indentation with force control. In addition, we investigate the effect of the offset distance between the indenter and the crack to the indentation fracture force and find that the graphene with a smaller initial crack is more sensitive. 
The findings reported in this study can be applied to other related 2D materials because crack growth stability is determined primarily by the geometrical factors of the mechanical loading.
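The stability argument in the abstract follows the standard Griffith-type reasoning: at the critical point, crack advance is unstable when the energy release rate grows with crack length (dG/da > 0) and stable when it decreases. A minimal numerical sketch of that classification (the G(a) profiles below are hypothetical illustrations, not data from the paper):

```python
import numpy as np

def stability_regimes(a, G):
    """Classify each point of an energy-release-rate curve G(a):
    dG/da > 0 -> unstable crack growth, dG/da <= 0 -> stable."""
    dGda = np.gradient(G, a)          # numerical derivative on the grid a
    return np.where(dGda > 0, "unstable", "stable")

# Hypothetical profiles (illustrative only):
a = np.linspace(0.1, 10.0, 100)       # crack length
G_tension = a.copy()                  # tension-like: G increases with a
G_indent = a * np.exp(-a / 3.0)       # indentation-like: G peaks, then decays

# Tension: dG/da > 0 everywhere, so growth is unstable throughout.
# Indentation: unstable for short cracks, stable past the maximum of G.
regimes_tension = stability_regimes(a, G_tension)
regimes_indent = stability_regimes(a, G_indent)
```

This reproduces the qualitative picture described above: a monotonically increasing G(a) gives unstable growth everywhere, while a G(a) with an interior maximum switches from unstable to stable once the crack grows past the peak.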
2020-02-23T20:57:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5916818380355835, "perplexity": 1945.0500485778784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145839.51/warc/CC-MAIN-20200223185153-20200223215153-00247.warc.gz"}
https://zbmath.org/authors/?q=ai%3Apersson.lars-erik
# zbMATH — the first resource for mathematics

Documents Indexed: 295 Publications since 1972, including 21 Books
Biographic References: 4 Publications

#### Co-Authors

16 single-authored; 28 Maligranda, Lech; 27 Pečarić, Josip; 22 Wall, Peter; 21 Kufner, Alois; 18 Barza, Sorina; 18 Samko, Natasha; 16 Oguntuase, James Adedayo; 16 Wedestig, Anna; 15 Lukkassen, Dag; 15 Stepanov, Vladimir Dmitrievich; 12 Abramovich, Shoshana; 11 Tephnadze, George; 10 Gogatishvili, Amiran; 9 Jain, Pankaj; 8 Oĭnarov, Ryskul Oĭnarovich; 8 Okpoti, Christopher Adjei; 7 Čižmešija, Aleksandra; 7 Nikolova, Ludmila; 7 Peetre, Jaak; 6 Johansson, Maria; 6 Kruglyak, Natan Ya.; 6 Marcoci, Anca-Nicoleta; 6 Marcoci, Liviu-Gabriel; 6 Perić, Ivan; 6 Samko, Stefan Grigorievich; 5 Kopezhanova, Aĭgerim Nurzhanovna; 5 Meidell, Annette; 5 Nikolova, Lyudmila I.; 5 Popa, Nicolae; 4 Carro, María Jesús; 4 Chechkin, Gregory A.; 4 Kamińska, Anna; 4 Koroleva, Yu. O.; 4 Kuliev, Komil D.; 4 Larsson, Leo; 4 Niculescu, Constantin P.; 4 Nikolova, L. Y.; 4 Nursultanov, Erlan D.; 4 Soria, Javier; 4 Temirkhanova, Ainur Maralkyzy; 4 Ushakova, Elena Pavlovna; 4 Varošanec, Sanja; 3 Abylaeva, Akbota Muhamediyarovna; 3 Blahota, István; 3 Burenkov, Viktor Ivanovich; 3 Cwikel, Michael; 3 Dechevsky, Lubomir T.; 3 Essel, Emmanuel Kwame; 3 Fabelurin, Olanrewaju Olabiyi; 3 Heinig, Hans P.; 3 Kalybay, Aigerim Aisultankyzy; 3 Koski, Timo J. T.; 3 Krulić Himmelreich, Kristina; 3 Popova, Olga V.; 3 Shambilova, Guldarya Ermakovna; 3 Silvestrov, Sergei D.; 3 Zachariades, Theodossios; 2 Allotey, Francis Kofi Ampenyin; 2 Arendarenko, L. S.; 2 Asekritova, Irina U.; 2 Baĭarystanov, A. O.; 2 Bergh, Jöran; 2 Burtseva, Evgeniya; 2 Cobos, Fernando; 2 Čuljak, Vera; 2 Dragomir, Sever Silvestru; 2 Engliš, Miroslav; 2 Engström, Jonas; 2 Ericsson, Stefan; 2 Høibakk, Ralph; 2 Isac, George; 2 Jain, Pawan Kumar; 2 Kaijser, Sten; 2 Lindblom, Ove; 2 Lions, Jacques-Louis; 2 Mustafayev, Rza Ch.; 2 Páles, Zsolt; 2 Rafeiro, Humberto; 2 Shaimardan, Serikbol; 2 Sinnamon, Gord; 2 Sparr, Gunnar; 2 Svanstedt, Nils E. M.; 2 Tleukhanova, Nazerke Tulekovna; 2 Upreti, Priti; 2 Wyller, John A.; 1 Abdikalikova, Zamira; 1 Adeagbo-Sheikh, Abdulaziz Gbadebo; 1 Adeleke, Emmanuel Oyeyemi; 1 Agarwal, Ravi P.; 1 Aglić Aljinović, Andrea; 1 Akhmetkaliyeva, Raya Duisenbekovna; 1 Åström, Kalle; 1 Baramidze, Lasha; 1 Dasht, Johan; 1 Edmunds, David Eric; 1 Euler, Marianna; 1 Euler, Norbert; 1 Fällström, Karl-Evert; 1 Fiedler, Miroslav; 1 Finol, Carlos E. ...and 51 more Co-Authors

#### Fields

173 Real functions (26-XX); 94 Functional analysis (46-XX); 50 Operator theory (47-XX); 43 Harmonic analysis on Euclidean spaces (42-XX); 20 Partial differential equations (35-XX); 14 Difference and functional equations (39-XX); 10 Approximations and expansions (41-XX); 8 General and overarching topics; collections (00-XX); 8 History and biography (01-XX); 7 Integral equations (45-XX); 7 Mechanics of deformable solids (74-XX); 5 Calculus of variations and optimal control; optimization (49-XX); 5 Numerical analysis (65-XX); 4 Special functions (33-XX); 4 Information and communication theory, circuits (94-XX); 3 Linear and multilinear algebra; matrix theory (15-XX); 3 Probability theory and stochastic processes (60-XX); 2 Functions of a complex variable (30-XX); 2 Potential theory (31-XX); 2 Ordinary differential equations (34-XX); 2 Integral transforms, operational calculus (44-XX); 2 Quantum theory (81-XX); 2 Operations research, mathematical programming (90-XX); 1 Mathematical logic and foundations (03-XX); 1 Nonassociative rings and algebras (17-XX); 1 Measure and integration (28-XX); 1 Dynamical systems and ergodic theory (37-XX); 1 Sequences, series, summability (40-XX); 1 Abstract harmonic analysis (43-XX); 1 Geometry (51-XX); 1 Convex and discrete geometry (52-XX); 1 General topology (54-XX); 1 Global analysis, analysis on manifolds (58-XX); 1 Fluid mechanics (76-XX); 1 Biology and other natural sciences (92-XX); 1 Mathematics education (97-XX)

#### Citations contained in zbMATH

206 Publications have been cited 1,851 times in 1,177 Documents. Cited by Year.

Weighted inequalities of Hardy type. Zbl 1065.26018 2003
The Hardy inequality. About its history and some related results. Zbl 1213.42001 Kufner, Alois; Maligranda, Lech; Persson, Lars-Erik 2007
Convex functions and their applications. A contemporary approach. Zbl 1100.26002 2006
Some inequalities of Hadamard type. Zbl 0834.26009 Dragomir, Sever Silvestru; Pečarić, Josip E.; Persson, L. E. 1995
The prehistory of the Hardy inequality. Zbl 1153.01015 Kufner, Alois; Maligranda, Lech; Persson, Lars-Erik 2006
Generalized duality of some Banach function spaces. Zbl 0704.46018 1989
Interpolation with a parameter function. Zbl 0619.46064 1986
Reiterated homogenization of nonlinear monotone operators. Zbl 0979.35047 Lions, J. L.; Lukkassen, D.; Persson, L. E.; Wall, P. 2001
On Carleman and Knopp’s inequalities. Zbl 1049.26014 Kaijser, Sten; Persson, Lars-Erik; Öberg, Anders 2002
Properties of some functionals related to Jensen’s inequality. Zbl 0847.26013 Dragomir, Sever Silvestru; Pečarić, Josip E.; Persson, L. E. 1996
Weighted Hardy and potential operators in the generalized Morrey spaces. Zbl 1211.42018 2011
Weighted integral inequalities with the geometric mean operator. Zbl 1024.26008 2002
Convex functions and their applications. A contemporary approach. 2nd edition. Zbl 1404.26003 2018
Weighted inequalities of Hardy type. 2nd updated edition. Zbl 1380.26001 Kufner, Alois; Persson, Lars-Erik; Samko, Natasha 2017
Hardy-type inequalities via convexity. Zbl 1083.26013 Kaijser, Sten; Nikolova, Ludmila; Persson, Lars-Erik; Wedestig, Anna 2005
The homogenization method. An introduction. Zbl 0847.73003 1993
Weighted norm inequalities for integral transforms with product kernels. Zbl 1257.44002 Kokilashvili, Vakhtang; Meskhi, Alexander; Persson, Lars-Erik 2009
An equivalence theorem for integral conditions related to Hardy’s inequality. Zbl 1070.26015 Gogatishvili, Amiram; Kufner, Alois; Persson, Lars-Erik; Wedestig, Anna 2004
On strengthened Hardy and Pólya-Knopp’s inequalities. Zbl 1034.26008 Čižmešija, Aleksandra; Pečarić, Josip; Persson, Lars-Erik 2003
Some new Hardy type inequalities with general kernels. Zbl 1177.26038 Krulić, Kristina; Pečarić, Josip; Persson, Lars-Erik 2009
Reiterated homogenization of monotone operators. Zbl 0953.35041 Lions, Jacques-Louis; Lukkassen, Dag; Persson, Lars-Erik; Wall, Peter 2000
On the connection between real and complex interpolation of quasi-Banach spaces. Zbl 0953.46037 Cobos, Fernando; Peetre, Jaak; Persson, Lars Erik 1998
Hardy and singular operators in weighted generalized Morrey spaces with applications to singular integral equations. Zbl 1252.42026 Lukkassen, D.; Meidell, A.; Persson, L.-E.; Samko, N. 2012
Old and new on the Hermite-Hadamard inequality. Zbl 1073.26015 2004
Time scales Hardy-type inequalities via superquadracity. Zbl 1368.26026 2014
Some new iterated Hardy-type inequalities. Zbl 1260.26023 Gogatishvili, A.; Mustafayev, R. Ch.; Persson, L.-E. 2012
Hardy-type inequalities on the weighted cones of quasi-concave functions. Zbl 1312.26046 Persson, L.-E.; Shambilova, G. E.; Stepanov, V. D. 2015
2008
Mixed norm and multidimensional Lorentz spaces. Zbl 1110.46018 Barza, Sorina; Kamińska, Anna; Persson, Lars-Erik; Soria, Javier 2006
Quasi-monotone weight functions and their characteristics and applications. Zbl 1254.26016 Persson, Lars-Erik; Samko, Natasha; Wall, Peter 2012
Indices, convexity and concavity of Calderón-Lozanovskii spaces. Zbl 1026.46020 Maligranda, L.; Kamińska, A.; Persson, L. E. 2003
Convexity, concavity, type and cotype of Lorentz spaces. Zbl 0937.46027 Kamińska, A.; Maligranda, L.; Persson, L. E. 1998
Maximal operators of Vilenkin-Nörlund means. Zbl 1311.42071 2015
Characterisation of embeddings in Lorentz spaces. Zbl 1128.26012 Gogatishvili, A.; Johansson, M.; Okpoti, C. A.; Persson, L.-E. 2007
Equivalence of Hardy-type inequalities with general measures on the cones of non-negative respective non-increasing functions. Zbl 1093.26023 Persson, L.-E.; Stepanov, V. D.; Ushakova, E. P. 2006
On an elementary approach to the fractional Hardy inequality. Zbl 0935.26008 Krugljak, Natan; Maligranda, Lech; Persson, Lars Erik 2000
Hardy type operators in local vanishing Morrey spaces on fractal sets. Zbl 1345.46023 Lukkassen, Dag; Persson, Lars-Erik; Samko, Natasha 2015
Some new refined Hardy type inequalities with general kernels and measures. Zbl 1214.26012 Abramovich, Shoshana; Krulić, Kristina; Pečarić, Josip; Persson, Lars-Erik 2010
Reverse Cauchy-Schwarz inequalities for positive $$C^{*}$$-valued sesquilinear forms. Zbl 1188.46037 2009
Some new scales of weight characterizations of the class $$B_p$$. Zbl 1199.26057 Gogatishvili, A.; Kufner, A.; Persson, L.-E. 2009
Some scales of equivalent weight characterizations of Hardy’s inequality: the case $$q < p$$. Zbl 1122.26016 2007
An equivalence theorem for some integral conditions with general measures related to Hardy’s inequality. Zbl 1113.26024 Okpoti, Christopher A.; Persson, Lars-Erik; Sinnamon, Gord 2007
Multiplicative inequalities of Carlson type and interpolation. Zbl 1102.41001 2006
Carleman’s inequality – history, proofs and some new generalizations. Zbl 1064.26020 Johansson, Maria; Persson, Lars-Erik; Wedestig, Anna 2003
Sharp weighted multidimensional integral inequalities for monotone functions. Zbl 0944.26025 Barza, Sorina; Persson, Lars-Erik; Soria, Javier 2000
Distribution and rearrangement estimates of the maximal function and interpolation. Zbl 0888.42011 Asekritova, Irina U.; Krugljak, Natan Ya.; Maligranda, Lech; Persson, Lars-Erik 1997
On some fractional order Hardy inequalities. Zbl 0880.26021 Heinig, Hans P.; Kufner, Alois; Persson, Lars-Erik 1997
Best constants in reversed Hardy’s inequalities for quasimonotone functions. Zbl 0805.26008 Bergh, Jöran; Burenkov, Victor; Persson, Lars Erik 1994
Some new iterated Hardy-type inequalities: the case $${\theta=1}$$. Zbl 1297.26035 Gogatishvili, Amiran; Mustafayev, Rza; Persson, Lars-Erik 2013
An extension of Rothe’s method to non-cylindrical domains. Zbl 1164.65463 2007
A matriceal analogue of Fejér’s theory. Zbl 1043.15020 Barza, Sorina; Persson, Lars-Erik; Popa, Nicolae 2003
Lions–Peetre reiteration formulas for triples and their applications. Zbl 0987.46024 Asekritova, Irina; Krugljak, Natan; Maligranda, Lech; Nikolova, Lyudmila; Persson, Lars-Erik 2001
Indices and regularizations of measurable functions. Zbl 0965.46019 Kamińska, A.; Maligranda, L.; Persson, L. E. 2000
The failure of the Hardy inequality and interpolation of intersections. Zbl 1021.46024 Krugljak, Natan; Maligranda, Lech; Persson, Lars-Erik 1999
Stolarsky’s inequality with general weights. Zbl 0824.26012 Maligranda, Lech; Pečarić, Josip E.; Persson, Lars Erik 1995
What should have happened if Hardy had discovered this? Zbl 1282.26038 2012
On $$n$$-th James and Khintchine constants of Banach spaces. Zbl 1147.46015 2008
An equivalence theorem for some integral conditions with general measures related to Hardy’s inequality. II. Zbl 1128.26016 Okpoti, Christopher A.; Persson, Lars-Erik; Sinnamon, Gord 2008
On the precise asymptotics of the constant in Friedrich’s inequality for functions vanishing on the part of the boundary with microinhomogeneous structure. Zbl 1144.35357 Chechkin, G. A.; Koroleva, Yu. O.; Persson, L.-E. 2007
Multidimensional rearrangement and Lorentz spaces. Zbl 1074.46019 Barza, S.; Persson, L.-E.; Soria, J. 2004
From Hardy to Carleman and general mean-type inequalities. Zbl 1005.26018 Jain, Pankaj; Persson, Lars-Erik; Wedestig, Anna 2000
On the best constants in certain integral inequalities for monotone functions. Zbl 0814.26011 Myasnikov, E. A.; Persson, L. E.; Stepanov, V. D. 1994
On Clarkson’s inequalities and interpolation. Zbl 0777.46041 1992
Some new estimates of the ‘Jensen gap’. Zbl 1336.26019 2016
On Hardy $$q$$-inequalities. Zbl 1349.26037 Maligranda, Lech; Oinarov, Ryskul; Persson, Lars-Erik 2014
Multidimensional Hardy-type inequalities with general kernels. Zbl 1152.26020 Oguntuase, James A.; Persson, Lars-Erik; Essel, Emmanuel K. 2008
Carleman-Knopp type inequalities via Hardy inequalities. Zbl 0989.26015 Jain, Pankaj; Persson, Lars-Erik; Wedestig, Anna 2001
On some sharp bounds for the homogenized $$p$$-Poisson equation. Zbl 0832.35009 Lukkassen, Dag; Persson, Lars Erik; Wall, Peter 1995
Weighted Favard and Berwald inequalities. Zbl 0834.26012 Maligranda, L.; Pečarić, J. E.; Persson, L. E. 1995
Some properties of generalized exponential entropies with applications to data compression. Zbl 0746.94006 1992
General Beckenbach’s inequality with applications. Zbl 0686.26006 1989
Calderón-Zygmund type singular operators in weighted generalized Morrey spaces. Zbl 1344.42013 Persson, Lars-Erik; Samko, Natasha; Wall, Peter 2016
A note on the best constants in some Hardy inequalities. Zbl 1314.26021 2015
Weighted Hardy-type inequalities on the cone of quasi-concave functions. Zbl 1291.39049 Persson, L.-E.; Popova, O. V.; Stepanov, V. D. 2014
The Beckenbach-Dresher inequality in the $$\Psi$$-direct sums of spaces and related results. Zbl 1275.26042 Nikolova, Ludmila; Persson, Lars-Erik; Varošanec, Sanja 2012
On the Friedrichs inequality in a domain perforated aperiodically along the boundary. Homogenization procedure. Asymptotics for parabolic problems. Zbl 1180.35072 Chechkin, G. A.; Koroleva, Yu. O.; Meidell, A.; Persson, L.-E. 2009
Multidimensional Hardy-type inequalities via convexity. Zbl 1148.26028 Oguntuase, James A.; Persson, Lars-Erik; Čižmešija, Aleksandra 2008
Homogenization of random degenerated nonlinear monotone operators. Zbl 1118.35003 Engström, J.; Persson, L.-E.; Piatnitski, A.; Wall, P. 2006
Some difference inequalities with weights and interpolation. Zbl 0976.26013 1998
Inequalities related to isotonicity of projection and antiprojection operators. Zbl 0904.46010 1998
A Carlson type inequality with blocks and interpolation. Zbl 0824.46088 Kruglyak, Natan Ya.; Maligranda, Lech; Persson, Lars Erik 1993
Real interpolation between weighted $$L^ p$$ and Lorentz spaces. Zbl 0674.46047 1987
Generalized noncommutative Hardy and Hardy-Hilbert type inequalities. Zbl 1204.26026 Hansen, Frank; Krulić, Kristina; Pečarić, Josip; Persson, Lars-Erik 2010
Two-sided Hardy-type inequalities for monotone functions. Zbl 1217.26034 2010
Refined multidimensional Hardy-type inequalities via superquadracity. Zbl 1165.26337 Oguntuase, J. A.; Persson, L.-E.; Essel, E. K.; Popoola, B. A. 2008
Multidimensional Hardy type inequalities for $$p<0$$ and $$0<p<1$$. Zbl 1138.26016 Oguntuase, James A.; Okpoti, Christopher A.; Persson, Lars-Erik; Allotey, Francis K. A. 2007
On the formula of Jacques-Louis Lions for reproducing kernels of harmonic and other functions. Zbl 1053.46016 Engliš, Miroslav; Lukkassen, Dag; Peetre, Jaak; Persson, Lars-Eric 2004
On strengthened weighted Carleman’s inequality. Zbl 1049.26008 Čižmešija, Aleksandra; Pečarić, Josip; Persson, Lars-Erik 2003
Real interpolation for divisible cones. Zbl 0932.46016 Carro, María J.; Ericsson, Stefan; Persson, Lars-Erik 1999
Some real interpolation methods for families of Banach spaces: A comparison. Zbl 0892.46015 Carro, María J.; Nikolova, Ljudmila I.; Peetre, Jaak; Persson, Lars-Erik 1997
Interpolation and partial differential equations. Zbl 0820.41004 Maligranda, Lech; Persson, Lars Erik; Wyller, John 1994
On some sharp reversed Hölder and Hardy type inequalities. Zbl 0829.26008 Bergh, Jöran; Burenkov, Victor; Persson, Lars Erik 1994
Some properties of $$X^ p$$-spaces. Zbl 0769.46021 1991
Extensions and refinements of Fejer and Hermite-Hadamard type inequalities. Zbl 1400.26039 2018
Weighted Hardy type inequalities for supremum operators on the cones of monotone functions. Zbl 1346.26009 2016
Some scales of equivalent conditions to characterize the Stieltjes inequality: the case $$q<p$$. Zbl 1298.26052 2014
Matrix spaces and Schur multipliers. Matriceal harmonic analysis. Zbl 1303.47018 2014
Some new scales of refined Hardy type inequalities via functions related to superquadracity. Zbl 1280.26030 2013
The weighted Stieltjes inequality and applications. Zbl 1268.26026 Gogatishvili, Amiran; Kufner, Alois; Persson, Lars-Erik 2013
On a new class of Hardy-type inequalities. Zbl 1279.26033 Adeleke, E. O.; Čižmešija, A.; Oguntuase, J. A.; Persson, L.-E.; Pokaz, D. 2012
A sharp boundedness result for restricted maximal operators of Vilenkin-Fourier series on martingale Hardy spaces. Zbl 1440.42127 2019
Equivalent integral conditions related to bilinear Hardy-type inequalities. Zbl 1434.26037 Kanjilal, Saikat; Persson, Lars-Erik; Shambilova, Guldarya E. 2019
Multidimensional Hardy-type inequalities on time scales with variable exponents. Zbl 1425.26008 Fabelurin, O. O.; Oguntuase, J. A.; Persson, L.-E. 2019
Convex functions and their applications. A contemporary approach. 2nd edition. Zbl 1404.26003 2018
Extensions and refinements of Fejer and Hermite-Hadamard type inequalities. Zbl 1400.26039 2018
On the Nörlund logarithmic means with respect to Vilenkin system in the martingale Hardy space $$H_1$$. Zbl 1399.42082 2018
On an approximation of 2-dimensional Walsh-Fourier series in martingale Hardy spaces. Zbl 1382.42017 2018
Two-sided estimates of the Lebesgue constants with respect to Vilenkin systems and applications. Zbl 1379.42012 2018
Weighted inequalities of Hardy type. 2nd updated edition. Zbl 1380.26001 Kufner, Alois; Persson, Lars-Erik; Samko, Natasha 2017
Fejér and Hermite-Hadamard type inequalities for $$N$$-quasiconvex functions. Zbl 1382.26016 2017
Hardy type inequalities with kernels: the current status and some new results. Zbl 1357.26028 Kufner, Alois; Persson, Lars-Erik; Samko, Natasha 2017
Historical synopsis of the Taylor remainder. Zbl 1387.26001 Persson, Lars-Erik; Rafeiro, Humberto; Wall, Peter 2017
Some new estimates of the ‘Jensen gap’. Zbl 1336.26019 2016
Calderón-Zygmund type singular operators in weighted generalized Morrey spaces. Zbl 1344.42013 Persson, Lars-Erik; Samko, Natasha; Wall, Peter 2016
Weighted Hardy type inequalities for supremum operators on the cones of monotone functions. Zbl 1346.26009 2016
A note on the maximal operators of Vilenkin-Nörlund means with non-increasing coefficients. Zbl 1399.42079 2016
Continuous forms of classical inequalities. Zbl 1353.26017 Nikolova, Ludmila; Persson, Lars-Erik; Varošanec, Sanja 2016
Some new Hardy-type inequalities in $$q$$-analysis. Zbl 1348.26009 Baiarystanov, A. O.; Persson, L. E.; Shaimardan, S.; Temirkhanova, A. 2016
A sharp boundedness result concerning some maximal operators of Vilenkin-Fejér means. Zbl 1358.42023 2016
Some sharp inequalities for integral operators with homogeneous kernel. Zbl 1334.26037 Lukkassen, Dag; Persson, Lars-Erik; Samko, Stefan G. 2016
Hardy-type inequalities on the weighted cones of quasi-concave functions. Zbl 1312.26046 Persson, L.-E.; Shambilova, G. E.; Stepanov, V. D. 2015
Maximal operators of Vilenkin-Nörlund means. Zbl 1311.42071 2015
Hardy type operators in local vanishing Morrey spaces on fractal sets. Zbl 1345.46023 Lukkassen, Dag; Persson, Lars-Erik; Samko, Natasha 2015
A note on the best constants in some Hardy inequalities. Zbl 1314.26021 2015
On $$\gamma$$-quasiconvexity, superquadracity and two-sided reversed Jensen type inequalities. Zbl 1326.26036 Abramovich, S.; Persson, L.-E.; Samko, N. 2015
Some new results concerning a class of third-order differential equations. Zbl 1308.34075 Akhmetkaliyeva, R. D.; Persson, L.-E.; Ospanov, K. N.; Wall, P. 2015
A new discrete Hardy-type inequality with kernels and monotone functions. Zbl 1336.26036 Kalybay, Aigerim; Persson, Lars-Erik; Temirkhanova, Ainur 2015
Some new Hardy-type inequalities for Riemann-Liouville fractional $$q$$-integral operator. Zbl 1336.26042 2015
On the Nörlund means of Vilenkin-Fourier series. Zbl 1374.42054 2015
Some new ($$H_p,L_p$$) type inequalities of maximal operators of Vilenkin-Nörlund means with non-decreasing coefficients. Zbl 1329.42030 2015
Time scale Hardy-type inequalities with ‘broken’ exponent $$p$$. Zbl 1308.26035 2015
Time scales Hardy-type inequalities via superquadracity. Zbl 1368.26026 2014
On Hardy $$q$$-inequalities. Zbl 1349.26037 Maligranda, Lech; Oinarov, Ryskul; Persson, Lars-Erik 2014
Weighted Hardy-type inequalities on the cone of quasi-concave functions. Zbl 1291.39049 Persson, L.-E.; Popova, O. V.; Stepanov, V. D. 2014
Some scales of equivalent conditions to characterize the Stieltjes inequality: the case $$q<p$$. Zbl 1298.26052 2014
Matrix spaces and Schur multipliers. Matriceal harmonic analysis. Zbl 1303.47018 2014
Some new sharp limit Hardy-type inequalities via convexity. Zbl 1372.26014 Barza, Sorina; Persson, Lars-Erik; Samko, Natasha 2014
Some new scales of refined Jensen and Hardy type inequalities. Zbl 1297.26039 Abramovich, S.; Persson, L. E.; Samko, N. 2014
Inequalities and convexity. Zbl 1318.26039 2014
Some new refined Hardy type inequalities with breaking points $$p = 2$$ or $$p = 3$$. Zbl 1318.26043 2014
Some Hardy type inequalities with “broken” exponent $$p$$. Zbl 1308.26036 Oguntuase, James A.; Persson, Lars-Erik; Samko, Natasha 2014
On the equivalence between some multidimensional Hardy-type inequalities. Zbl 1280.26040 Oguntuase, J. A.; Persson, L.-E.; Samko, N.; Sonubi, A. 2014
Some new iterated Hardy-type inequalities: the case $${\theta=1}$$. Zbl 1297.26035 Gogatishvili, Amiran; Mustafayev, Rza; Persson, Lars-Erik 2013
Some new scales of refined Hardy type inequalities via functions related to superquadracity. Zbl 1280.26030 2013
The weighted Stieltjes inequality and applications. Zbl 1268.26026 Gogatishvili, Amiran; Kufner, Alois; Persson, Lars-Erik 2013
Weighted Hardy-type inequalities in variable exponent Morrey-type spaces. Zbl 1298.46029 Lukkassen, Dag; Persson, Lars-Erik; Samko, Stefan; Wall, Peter 2013
Some new scales of weight characterizations of Hardy-type inequalities. Zbl 1270.26020 Kufner, Alois; Persson, Lars-Erik; Samko, Natasha 2013
Some new Hardy-type integral inequalities on cones of monotone functions. Zbl 1277.47061 Arendarenko, L. S.; Oinarov, R.; Persson, L.-E. 2013
Hardy and singular operators in weighted generalized Morrey spaces with applications to singular integral equations. Zbl 1252.42026 Lukkassen, D.; Meidell, A.; Persson, L.-E.; Samko, N. 2012
Some new iterated Hardy-type inequalities. Zbl 1260.26023 Gogatishvili, A.; Mustafayev, R. Ch.; Persson, L.-E. 2012
Quasi-monotone weight functions and their characteristics and applications. Zbl 1254.26016 Persson, Lars-Erik; Samko, Natasha; Wall, Peter 2012
What should have happened if Hardy had discovered this? Zbl 1282.26038 2012
The Beckenbach-Dresher inequality in the $$\Psi$$-direct sums of spaces and related results. Zbl 1275.26042 Nikolova, Ludmila; Persson, Lars-Erik; Varošanec, Sanja 2012
On a new class of Hardy-type inequalities. Zbl 1279.26033 Adeleke, E. O.; Čižmešija, A.; Oguntuase, J. A.; Persson, L.-E.; Pokaz, D. 2012
On scales of equivalent conditions characterizing weighted Stieltjes inequality. Zbl 1261.42005 Gogatishvili, A.; Persson, L.-E.; Stepanov, V. D.; Wall, P. 2012
Weighted Hardy operators in complementary Morrey spaces. Zbl 1268.46023 Lukkassen, Dag; Persson, Lars-Erik; Samko, Stefan 2012
Best constants between equivalent norms in Lorentz sequence spaces. Zbl 1244.46007 Barza, S.; Marcoci, A. N.; Persson, L. E. 2012
Weighted Hardy and potential operators in the generalized Morrey spaces. Zbl 1211.42018 2011
On inequalities for the Fourier transform of functions from Lorentz spaces. Zbl 1284.42008 Kopezhanova, A. N.; Nursultanov, E. D.; Persson, L.-E. 2011
A new weighted Friedrichs-type inequality for a perforated domain with a sharp constant. Zbl 1234.35027 Chechkin, G. A.; Koroleva, Yu. O.; Persson, L.-E.; Wall, P. 2011
On Friedrichs-type inequalities in domains rarely perforated along the boundary. Zbl 1275.39013 Koroleva, Yulia; Persson, Lars-Erik; Wall, Peter 2011
Some new refined Hardy type inequalities with general kernels and measures. Zbl 1214.26012 Abramovich, Shoshana; Krulić, Kristina; Pečarić, Josip; Persson, Lars-Erik 2010
Generalized noncommutative Hardy and Hardy-Hilbert type inequalities. Zbl 1204.26026 Hansen, Frank; Krulić, Kristina; Pečarić, Josip; Persson, Lars-Erik 2010
Two-sided Hardy-type inequalities for monotone functions. Zbl 1217.26034 2010
Homogenization of quasilinear parabolic problems by the method of Rothe and two scale convergence. Zbl 1224.35188 Essel, Emmanuel Kwame; Kuliev, Komil; Kulieva, Gulchehra; Persson, Lars-Erik 2010
On summability of the Fourier coefficients in bounded orthonormal systems for functions from some Lorentz type spaces. Zbl 1226.46022 2010
Some new scales of characterization of Hardy’s inequality. Zbl 1193.26017 Gogatishvili, Amiran; Kufner, Alois; Persson, Lars-Erik 2010
Some new Stein and Hardy type inequalities. Zbl 1229.26033 2010
Schur multiplier characterization of a class of infinite matrices. Zbl 1224.15066 Marcoci, A.; Marcoci, L.; Persson, L. E.; Popa, N. 2010
Nonlinear variational methods for estimating effective properties of multiscale materials. Zbl 1185.49017 Lukkassen, Dag; Meidell, Annette; Persson, Lars-Erik 2010
Weighted norm inequalities for integral transforms with product kernels. Zbl 1257.44002 Kokilashvili, Vakhtang; Meskhi, Alexander; Persson, Lars-Erik 2009
Some new Hardy type inequalities with general kernels. Zbl 1177.26038 Krulić, Kristina; Pečarić, Josip; Persson, Lars-Erik 2009
Reverse Cauchy-Schwarz inequalities for positive $$C^{*}$$-valued sesquilinear forms. Zbl 1188.46037 2009
Some new scales of weight characterizations of the class $$B_p$$. Zbl 1199.26057 Gogatishvili, A.; Kufner, A.; Persson, L.-E. 2009
On the Friedrichs inequality in a domain perforated aperiodically along the boundary. Homogenization procedure. Asymptotics for parabolic problems. Zbl 1180.35072 Chechkin, G. A.; Koroleva, Yu. O.; Meidell, A.; Persson, L.-E. 2009
Weighted inequalities for a class of matrix operators: the case $$p\leqslant q$$. Zbl 1182.26036 Oinarov, Ryskul; Persson, Lars-Erik; Temirkhanova, Ainur 2009
A new characterization of Bergman-Schatten spaces and a duality result. Zbl 1192.46016 Marcoci, L. G.; Persson, L. E.; Popa, I.; Popa, N. 2009
Two-sided Hardy-type inequalities for monotone functions. Zbl 1196.26033 Stepanov, V. D.; Persson, L. E.; Popova, O. V. 2009
2008
On $$n$$-th James and Khintchine constants of Banach spaces. Zbl 1147.46015 2008
An equivalence theorem for some integral conditions with general measures related to Hardy’s inequality. II. Zbl 1128.26016 Okpoti, Christopher A.; Persson, Lars-Erik; Sinnamon, Gord 2008
Multidimensional Hardy-type inequalities with general kernels. Zbl 1152.26020 Oguntuase, James A.; Persson, Lars-Erik; Essel, Emmanuel K. 2008
Multidimensional Hardy-type inequalities via convexity. Zbl 1148.26028 Oguntuase, James A.; Persson, Lars-Erik; Čižmešija, Aleksandra 2008
Refined multidimensional Hardy-type inequalities via superquadracity. Zbl 1165.26337 Oguntuase, J. A.; Persson, L.-E.; Essel, E. K.; Popoola, B. A. 2008
Refinement of Hardy’s inequality for “all” $$p$$. Zbl 1160.26317 2008
A new approach to the Sawyer and Sinnamon characterizations of Hardy’s inequality. Zbl 1149.26030 Johansson, Maria; Persson, Lars-Erik; Wedestig, Anna 2008
The Hardy inequality. About its history and some related results. Zbl 1213.42001 Kufner, Alois; Maligranda, Lech; Persson, Lars-Erik 2007
Characterisation of embeddings in Lorentz spaces. Zbl 1128.26012 Gogatishvili, A.; Johansson, M.; Okpoti, C. A.; Persson, L.-E. 2007
Some scales of equivalent weight characterizations of Hardy’s inequality: the case $$q < p$$. Zbl 1122.26016 2007
An equivalence theorem for some integral conditions with general measures related to Hardy’s inequality. Zbl 1113.26024 Okpoti, Christopher A.; Persson, Lars-Erik; Sinnamon, Gord 2007
An extension of Rothe’s method to non-cylindrical domains. Zbl 1164.65463 2007
On the precise asymptotics of the constant in Friedrich’s inequality for functions vanishing on the part of the boundary with microinhomogeneous structure. Zbl 1144.35357 Chechkin, G. A.; Koroleva, Yu. O.; Persson, L.-E. 2007
Multidimensional Hardy type inequalities for $$p<0$$ and $$0<p<1$$. Zbl 1138.26016 Oguntuase, James A.; Okpoti, Christopher A.; Persson, Lars-Erik; Allotey, Francis K. A. 2007
Inequalites and properties of some generalized Orlicz classes and spaces. Zbl 1164.26016 Jain, P.; Persson, L. E.; Upreti, P. 2007
Weighted multidimensional Hardy type inequalities via Jensen’s inequality. Zbl 1152.26015 Oguntuase, J. A.; Okpoti, C. A.; Persson, L.-E.; Allotey, F. K. A. 2007
Some multi-dimensional Hardy type integral inequalities. Zbl 1160.26009 2007
General inequalities via isotonic subadditive functionals. Zbl 1120.26013 Abramovich, S.; Persson, L.-E.; Pečarić, J.; Varošanec, S. 2007
Weighted inequalities of Hardy type for matrix operators: the case $$q<p$$. Zbl 1129.26015 Oinarov, Ryskul; Okpoti, Christopher A.; Persson, Lars-Erik 2007
Convex functions and their applications. A contemporary approach. Zbl 1100.26002 2006
The prehistory of the Hardy inequality. Zbl 1153.01015 Kufner, Alois; Maligranda, Lech; Persson, Lars-Erik 2006
...and 106 more Documents

#### Cited in 300 Serials

89 Journal of Mathematical Analysis and Applications; 55 Journal of Inequalities and Applications; 28 Mathematical Inequalities & Applications; 26 Mathematische Nachrichten; 23 Mediterranean Journal of Mathematics; 20 Journal of Functional Analysis; 20 Proceedings of the American Mathematical Society; 20 Positivity; 19 Journal of Approximation Theory; 19 Aequationes Mathematicae; 17 Applications of Mathematics; 16 Eurasian Mathematical Journal; 16 Journal of Function Spaces; 15 Journal of Function Spaces and Applications; 15 Banach Journal of Mathematical Analysis; 13 Doklady Mathematics; 12 Bulletin of the Australian Mathematical Society; 12 Acta Mathematica Hungarica; 11 Journal of Differential Equations; 11 The Journal of Fourier Analysis and Applications; 11 Acta Mathematica Sinica. English Series; 11 Journal of Mathematical Inequalities; 10 Results in Mathematics; 10 Abstract and Applied Analysis; 10 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM; 9 Mathematical Notes; 9 Rocky Mountain Journal of Mathematics; 9 Applied Mathematics and Computation; 9 Integral Equations and Operator Theory; 9 Siberian Mathematical Journal; 9 Proceedings of the Steklov Institute of Mathematics; 8 Analysis Mathematica; 8 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods; 8 Journal of Mathematical Sciences (New York); 8 Revista Matemática Complutense; 8 Tbilisi Mathematical Journal; 7 Studia Mathematica; 7 Fractional Calculus & Applied Analysis; 7 Proyecciones; 7 Transactions of A. Razmadze Mathematical Institute; 6 Computers & Mathematics with Applications; 6 Linear and Multilinear Algebra; 6 Ukrainian Mathematical Journal; 6 Archiv der Mathematik; 6 Collectanea Mathematica; 6 The Journal of Geometric Analysis; 6 Indagationes Mathematicae. New Series; 6 Bulletin of the Malaysian Mathematical Sciences Society. Second Series; 6 Frontiers of Mathematics in China; 6 International Journal of Analysis and Applications; 5 Annali di Matematica Pura ed Applicata. Serie Quarta; 5 Fasciculi Mathematici; 5 Information Sciences; 5 Journal of Computational and Applied Mathematics; 5 Transactions of the American Mathematical Society; 5 Journal of Scientific Computing; 5 Georgian Mathematical Journal; 5 Complex Variables and Elliptic Equations; 5 Advances in Operator Theory; 4 Journal of Mathematical Physics; 4 Fuzzy Sets and Systems; 4 Proceedings of the Edinburgh Mathematical Society. Series II; 4 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire; 4 Applied Mathematics Letters; 4 International Journal of Mathematics; 4 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences; 4 Journal de Mathématiques Pures et Appliquées. Neuvième Série; 4 Linear Algebra and its Applications; 4 Calculus of Variations and Partial Differential Equations; 4 St. Petersburg Mathematical Journal; 4 Central European Journal of Mathematics; 4 Science China. Mathematics; 4 Journal of Mathematics; 4 International Journal of Analysis; 3 American Mathematical Monthly; 3 Applicable Analysis; 3 Archive for Rational Mechanics and Analysis; 3 Communications in Mathematical Physics; 3 Journal d’Analyse Mathématique; 3 Mathematical Methods in the Applied Sciences; 3 Periodica Mathematica Hungarica; 3 Arkiv för Matematik; 3 Advances in Mathematics; 3 Functional Analysis and its Applications; 3 Mathematische Zeitschrift; 3 Monatshefte für Mathematik; 3 Zeitschrift für Analysis und ihre Anwendungen; 3 Mathematical and Computer Modelling; 3 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics; 3 SIAM Journal on Optimization; 3 Potential Analysis; 3 NoDEA. Nonlinear Differential Equations and Applications; 3 Journal of Applied Analysis; 3 Soft Computing; 3 Lobachevskii Journal of Mathematics; 3 Annales Mathematicae Silesianae; 3 Journal of Nonlinear Mathematical Physics; 3 The Journal of Prime Research in Mathematics; 3 Communications in Mathematical Analysis; 3 Complex Analysis and Operator Theory ...and 200 more Serials

#### Cited in 48 Fields

567 Real functions (26-XX); 367 Functional analysis (46-XX); 186 Operator theory (47-XX); 160 Partial differential equations (35-XX); 153 Harmonic analysis on Euclidean spaces (42-XX); 51 Difference and functional equations (39-XX); 45 Numerical analysis (65-XX); 37 Approximations and expansions (41-XX); 36 Ordinary differential equations (34-XX); 36 Integral equations (45-XX); 31 Probability theory and stochastic processes (60-XX); 28 Special functions (33-XX); 27 Fluid mechanics (76-XX); 23 Mechanics of deformable solids (74-XX); 22 Measure and integration (28-XX); 20 Information and communication theory, circuits (94-XX); 18 Convex and discrete geometry (52-XX); 16 Linear and multilinear algebra; matrix theory (15-XX); 16 Calculus of variations and optimal control; optimization (49-XX); 14 Integral transforms, operational calculus (44-XX); 14 Statistics (62-XX); 12 Functions of a complex variable (30-XX); 12 Operations research, mathematical programming (90-XX); 10 Potential theory (31-XX); 9 Quantum theory (81-XX); 8 Combinatorics (05-XX); 8 Abstract harmonic analysis (43-XX); 6 Differential geometry (53-XX); 5 Geometry (51-XX); 5 General topology (54-XX); 5 Computer science (68-XX); 4 Sequences, series, summability (40-XX); 4 Global analysis, analysis on manifolds (58-XX); 4 Classical thermodynamics, heat transfer (80-XX); 4 Astronomy and astrophysics (85-XX); 4 Game theory, economics, finance, and other social and behavioral sciences (91-XX); 4 Biology and other natural sciences (92-XX); 4 Systems theory; control (93-XX); 3 Mathematical logic and foundations (03-XX); 3 Number theory (11-XX); 3 Statistical
mechanics, structure of matter (82-XX) 2 Nonassociative rings and algebras (17-XX) 2 Several complex variables and analytic spaces (32-XX) 2 Dynamical systems and ergodic theory (37-XX) 1 History and biography (01-XX) 1 Associative rings and algebras (16-XX) 1 Topological groups, Lie groups (22-XX) 1 Mathematics education (97-XX)
https://mooseframework.inl.gov/source/materials/ComputeRSphericalSmallStrain.html
# Compute R-Spherical Small Strain

Compute a small strain 1D spherical symmetry case.

## Description

The material ComputeRSphericalSmallStrain calculates the small total strain for 1D R-Spherical systems. The 1D RSpherical materials and kernels are designed to model sphere geometries with 1D models. Symmetry in the polar ($\theta$) and azimuthal ($\phi$) directions is assumed, and the model is considered to revolve in both of these directions. In the 1D R-Spherical code, the material properties, variables (e.g. temperature), and loading conditions are all assumed to be spherically symmetric: these attributes depend only on the radial position.

note: The COORD_TYPE in the Problem block of the input file must be set to RSPHERICAL.

As in the plane strain and axisymmetric cases, the stress and strain tensors are modified in the spherical problem; only the diagonal components are non-zero in this 1D problem:

$$\boldsymbol{\epsilon} = \begin{bmatrix} \epsilon_{rr} & 0 & 0 \\ 0 & \epsilon_{\theta\theta} & 0 \\ 0 & 0 & \epsilon_{\phi\phi} \end{bmatrix} \qquad (1)$$

where the value of the normal strain components in the polar and azimuthal directions, $\epsilon_{\theta\theta}$ and $\epsilon_{\phi\phi}$, depends on the displacement and position in the radial direction:

$$\epsilon_{\theta\theta} = \epsilon_{\phi\phi} = \frac{u_r}{r} \qquad (2)$$

Although R-Spherical problems solve for 3D stress and strain fields, the problem is mathematically 1D: in the spherical coordinate system, the values of stress and strain in the $\theta$ and $\phi$ directions do not depend on the $\theta$ or $\phi$ coordinates. The RSpherical-specific ComputeRSphericalSmallStrain class calculates the radial strain as is normally done for a small total strain material:

$$\epsilon_{rr} = \frac{\partial u_r}{\partial r} \qquad (3)$$

while the total strain components $\epsilon_{\theta\theta}$ and $\epsilon_{\phi\phi}$ are found with Eq. (2).

## Example Input File Syntax

The small R-spherical strain calculator can be activated in the input file through the use of the TensorMechanics Master Action, as shown below.
[Modules/TensorMechanics/Master]
  [./all]
    strain = SMALL
    save_in = residual_r
  [../]
[]

(modules/tensor_mechanics/test/tests/1D_spherical/smallStrain_1DSphere.i)

note: Use of the TensorMechanics Master Action is recommended. The TensorMechanics Master Action is designed to automatically determine and set the strain and stress divergence parameters correctly for the selected strain formulation. We recommend that users employ the TensorMechanics Master Action whenever possible to ensure consistency between the test function gradients and the strain formulation selected.

## Input Parameters

### Required Parameters

• displacements: The displacements appropriate for the simulation geometry and coordinate system. C++ Type: std::vector

### Optional Parameters

• global_strain: Optional material property holding a global strain tensor applied to the mesh as a whole. C++ Type: MaterialPropertyName

• compute (Default: True): When false, MOOSE will not call compute methods on this material. The user must call computeProperties() after retrieving the Material via MaterialPropertyInterface::getMaterial(). Non-computed Materials are not sorted for dependencies. C++ Type: bool

• base_name: Optional parameter that allows the user to define multiple mechanics material systems on the same block, i.e. for multiple phases. C++ Type: std::string

• eigenstrain_names: List of eigenstrains to be applied in this strain calculation. C++ Type: std::vector

• volumetric_locking_correction (Default: False): Flag to correct volumetric locking. C++ Type: bool

• boundary: The list of boundary IDs from the mesh where this boundary condition applies. C++ Type: std::vector

• block: The list of block ids (SubdomainID) that this object applies to. C++ Type: std::vector

### Outputs Parameters

• output_properties: List of material properties, from this material, to output (outputs must also be defined to an output type). C++ Type: std::vector

• outputs (Default: none): Vector of output names where you would like to restrict the output of variable(s) associated with this object. C++ Type: std::vector

### Advanced Parameters

• control_tags: Adds user-defined labels for accessing object parameters via control logic. C++ Type: std::vector

• enable (Default: True): Set the enabled status of the MooseObject. C++ Type: bool

• seed (Default: 0): The seed for the master random number generator. C++ Type: unsigned int

• implicit (Default: True): Determines whether this object is calculated using an implicit or explicit form. C++ Type: bool

• constant_on (Default: NONE; Options: NONE, ELEMENT, SUBDOMAIN): When ELEMENT, MOOSE will only call computeQpProperties() for the 0th quadrature point, and then copy that value to the other qps. When SUBDOMAIN, MOOSE will only call computeSubdomainProperties() for the 0th quadrature point, and then copy that value to the other qps. Evaluations on element qps will be skipped. C++ Type: MooseEnum
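As a quick sanity check of the strain relations in the Description section, the following NumPy sketch (not MOOSE code; the function name small_strain_rspherical is ours) evaluates the radial strain du_r/dr by finite differences and the polar/azimuthal strains u_r/r for a radial displacement field. For the uniform dilation u_r = a·r, all three strain components should equal a everywhere.

```python
import numpy as np

# Illustrative sketch of the 1D R-Spherical small-strain relations:
#   eps_rr          = d u_r / d r     (radial strain)
#   eps_tt = eps_pp = u_r / r         (polar and azimuthal strains)
def small_strain_rspherical(r, u):
    eps_rr = np.gradient(u, r)  # finite-difference approximation of du_r/dr
    eps_tt = u / r              # polar strain
    eps_pp = u / r              # azimuthal strain (equal by symmetry)
    return eps_rr, eps_tt, eps_pp

# Uniform dilation u_r = a*r: eps_rr = eps_tt = eps_pp = a at every point.
r = np.linspace(1.0, 2.0, 101)
a = 1e-3
eps_rr, eps_tt, eps_pp = small_strain_rspherical(r, a * r)
```

Since np.gradient is exact for a linear displacement field, the computed radial strain matches a to machine precision in this case; for a general u_r(r) it is only a discrete approximation of Eq. (3).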
https://arrow.tudublin.ie/scschmatart/316/
## Articles

#### Rights

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

1.1 MATHEMATICS

#### Abstract

A method is proposed with which the locations of the roots of the monic symbolic quintic polynomial $x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0$ can be determined using the roots of two resolvent quadratic polynomials: $q_1(x) = x^2 + a_4 x + a_3$ and $q_2(x) = a_2 x^2 + a_1 x + a_0$, whose coefficients are exactly those of the quintic polynomial. The different cases depend on the coefficients of $q_1(x)$ and $q_2(x)$ and on some specific relationships between them. The method is illustrated with the full analysis of one of the possible cases. Some of the roots of the symbolic quintic equation for this case have their isolation intervals determined and, as this cannot be done for all roots with the help of quadratic equations only, finite intervals containing 1 or 3 roots, or 0 or 2 roots, or, rarely, 0, 2, or 4 roots of the quintic are identified. Knowing the stationary points of the quintic polynomial lifts the latter indeterminacy and allows one to find the isolation interval of each of the roots of the quintic. Separately, using the complete root classification of the quintic, one can also lift this indeterminacy. The method also allows one to see how variation of the individual coefficients of the quintic affects its roots. No root-finding iterations or any numerical approximations are used and no equations of degree higher than 2 are solved.

#### DOI

https://doi.org/10.21427/vc0x-gn52
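The coefficient pairing at the heart of the abstract — building the two resolvent quadratics directly from the quintic's own coefficients — can be sketched in a few lines of Python. This is only an illustration of that pairing (the function name resolvent_roots is ours); the paper's case analysis, which turns these resolvent roots into isolation intervals for the quintic, is not reproduced here.

```python
import numpy as np

# For the monic quintic x^5 + a4 x^4 + a3 x^3 + a2 x^2 + a1 x + a0,
# form the two resolvent quadratics described in the abstract:
#   q1(x) = x^2 + a4 x + a3    (leading coefficients of the quintic)
#   q2(x) = a2 x^2 + a1 x + a0 (trailing coefficients of the quintic)
# and return their roots.
def resolvent_roots(a4, a3, a2, a1, a0):
    q1 = np.roots([1.0, a4, a3])   # roots of q1
    q2 = np.roots([a2, a1, a0])    # roots of q2 (assumes a2 != 0)
    return q1, q2

# Example quintic tail/head coefficients (arbitrary, for illustration):
# q1(x) = x^2 - 3x + 2 has roots {1, 2}; q2(x) = x^2 - x - 2 has roots {-1, 2}.
q1_roots, q2_roots = resolvent_roots(a4=-3.0, a3=2.0, a2=1.0, a1=-1.0, a0=-2.0)
```

Note that the method in the paper works symbolically and solves nothing of degree higher than 2; np.roots is used here only as a convenient stand-in for the quadratic formula.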
https://zbmath.org/authors/?q=rv%3A7926
## Bovier, Anton

Author ID: bovier.anton
Published as: Bovier, Anton; Bovier, A.
Homepage: https://wt.iam.uni-bonn.de/bovier/home
External Links: MGP · Wikidata · Google Scholar · dblp · GND · IdRef · theses.fr
Documents Indexed: 122 Publications since 1980, including 5 Books; 4 Contributions as Editor
Reviewing Activity: 68 Reviews
Co-Authors: 65 Co-Authors with 109 Joint Publications; 1,329 Co-Co-Authors

### Co-Authors

16 single-authored 25 Gayrard, Véronique 11 Kurkova, Irina A. 8 Picco, Pierre 7 Ghez, Jean-Michel 6 Klein, Markus 5 Arguin, Louis-Pierre 5 Černý, Jiří 5 den Hollander, Frank 5 Kistler, Nicola 5 Külske, Christof 4 Ben Arous, Gérard 4 Eckhoff, Michael 4 Hartung, Lisa Bärbel 4 Ioffe, Dmitry 4 Zahradník, Miloš 3 Bellissard, Jean V. 3 Lüling, M. 3 Neukirch, Rebecca 3 Wyler, Daniel 2 Baar, Martina 2 Bianchi, Alessandra 2 Coquille, Loren 2 Faggionato, Alessandra 2 Klein, Abel 2 Klimovsky, Anton 2 Marello, Saeda 2 Mayer, Hannah 2 Niederhauser, Beat M. 2 van Enter, Aernout C. D. 1 Abresch, Uwe 1 Baake, Ellen 1 Baake, Michael 1 Barret, Florent 1 Bashiri, K. 1 Biskup, Marek 1 Bolthausen, Erwin 1 Boutet de Monvel, Anne Marie 1 Brydges, David C. 1 Campanino, Massimo 1 Champagnat, Nicolas 1 Coja-Oghlan, Amin 1 Dalibart, Jean 1 Dunlop, François 1 Fröhlich, Jürg Martin 1 Geldhauser, Carina 1 Glaus, U. 1 Hryniv, Ostap 1 Kotecký, Roman 1 Kraut, Anna 1 Lawler, Gregory Francis 1 Lechtenfeld, Olaf 1 Löwe, Matthias 1 Manzo, Francesco 1 Mason, David M. 1 Méléard, Sylvie 1 Merola, Immacolata 1 Müller, Patrick E. 1 Nardi, Francesca Romana 1 Perez, J. Fernando 1 Presutti, Errico 1 Pulvirenti, Elena 1 Rittenberg, Vladimir 1 Smadi, Charline 1 Spitoni, Cristian 1 Švejda, Adéla 1 Wang, Shidong 1 Weymans, G.
all top 5 ### Serials 18 Journal of Statistical Physics 12 Communications in Mathematical Physics 8 Markov Processes and Related Fields 7 The Annals of Probability 7 Probability Theory and Related Fields 7 Electronic Journal of Probability 6 The Annals of Applied Probability 5 Journal of Mathematical Biology 5 Journal of Mathematical Physics 5 Journal of Physics A: Mathematical and General 3 Annales de l’Institut Henri Poincaré. Probabilités et Statistiques 2 Communications on Pure and Applied Mathematics 2 Reviews in Mathematical Physics 2 Journal of the European Mathematical Society (JEMS) 2 Lecture Notes in Mathematics 2 Progress in Probability 2 ALEA. Latin American Journal of Probability and Mathematical Statistics 1 Journal of Applied Probability 1 Random Structures & Algorithms 1 Stochastic Processes and their Applications 1 Resenhas do Instituto do Matemática e Estatística da Universidade de São Paulo 1 Mathematical Physics, Analysis and Geometry 1 Advances in Theoretical and Mathematical Physics 1 International Journal of Theoretical and Applied Finance 1 Acta Physica Polonica B 1 Journal of Statistical Mechanics: Theory and Experiment 1 Cambridge Studies in Advanced Mathematics 1 Grundlehren der Mathematischen Wissenschaften 1 Cambridge Series in Statistical and Probabilistic Mathematics all top 5 ### Fields 97 Probability theory and stochastic processes (60-XX) 95 Statistical mechanics, structure of matter (82-XX) 11 Biology and other natural sciences (92-XX) 7 Quantum theory (81-XX) 6 Dynamical systems and ergodic theory (37-XX) 5 Partial differential equations (35-XX) 4 General and overarching topics; collections (00-XX) 4 Number theory (11-XX) 4 Operator theory (47-XX) 4 Computer science (68-XX) 3 Group theory and generalizations (20-XX) 3 Topological groups, Lie groups (22-XX) 3 Ordinary differential equations (34-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Combinatorics (05-XX) 1 Linear and multilinear 
algebra; matrix theory (15-XX) 1 Functional analysis (46-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Statistics (62-XX) 1 Operations research, mathematical programming (90-XX) ### Citations contained in zbMATH Open 114 Publications have been cited 1,530 times in 783 Documents Cited by Year Metastability in reversible diffusion processes. I: Sharp asymptotics for capacities and exit times. Zbl 1076.82045 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2004 Metastability and low lying spectra in reversible Markov chains. Zbl 1010.60088 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2002 Metastability in reversible diffusion processes. II: Precise asymptotics for small eigenvalues. Zbl 1105.82025 Bovier, Anton; Gayrard, Véronique; Klein, Markus 2005 Statistical mechanics of disordered systems. A mathematical perspective. Zbl 1108.82002 Bovier, Anton 2006 Metastability. A potential-theoretic approach. Zbl 1339.60002 Bovier, Anton; den Hollander, Frank 2015 Metastability in stochastic dynamics of disordered mean-field models. Zbl 1012.82015 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2001 The extremal process of branching Brownian motion. Zbl 1286.60045 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2013 Derrida’s generalised random energy models. I: Models with finitely many hierarchies. Zbl 1121.82020 Bovier, Anton; Kurkova, Irina 2004 Spectral properties of a tight binding Hamiltonian with period doubling potential. Zbl 0726.58038 Bellissard, Jean; Bovier, Anton; Ghez, Jean-Michel 1991 Fluctuations of the free energy in the REM and the $$p$$-spin SK models. Zbl 1018.60094 Bovier, Anton; Kurkova, Irina; Löwe, Matthias 2002 Derrida’s generalized random energy models. II: Models with continuous hierarchies. Zbl 1121.82021 Bovier, Anton; Kurkova, Irina 2004 Genealogy of extremal particles of branching Brownian motion. Zbl 1236.60081 Arguin, L.-P.; Bovier, A.; Kistler, N. 
2011 Metastability in Glauber dynamics in the low-temperature limit: Beyond exponential asymptotics. Zbl 1067.82041 Bovier, Anton; Manzo, Francesco 2002 Spectral properties of one-dimensional Schrödinger operators with potentials generated by substitutions. Zbl 0820.35099 Bovier, Anton; Ghez, Jean-Michel 1993 Glauber dynamics of the random energy model. I: Metastable motion on the extreme states. Zbl 1037.82038 Ben Arous, Gérard; Bovier, Anton; Gayrard, Véronique 2003 Glauber dynamics of the random energy model. II: Aging below the critical temperature. Zbl 1037.82039 Ben Arous, Gérard; Bovier, Anton; Gayrard, Véronique 2003 Poissonian statistics in the extremal process of branching Brownian motion. Zbl 1255.60152 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2012 Metastability: a potential theoretic approach. Zbl 1099.60052 Bovier, Anton 2006 Gap labelling theorems for one dimensional discrete Schrödinger operators. Zbl 0791.47009 Bellissard, Jean; Bovier, Anton; Ghez, Jean-Michel 1992 Homogeneous nucleation for Glauber and Kawasaki dynamics in large volumes at low temperatures. Zbl 1193.60114 Bovier, Anton; den Hollander, Frank; Spitoni, Cristian 2010 Universality of the REM for dynamics of mean-field spin glasses. Zbl 1208.82024 Ben Arous, Gérard; Bovier, Anton; Černý, Jiří 2008 Sharp asymptotics for metastability in the random field Curie-Weiss model. Zbl 1186.82069 Bianchi, Alessandra; Bovier, Anton; Ioffe, Dmitry 2009 Gibbs states of the Hopfield model in the regime of perfect memory. Zbl 0810.60094 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1994 Sharp asymptotics for Kawasaki dynamics on a finite box with open boundary. Zbl 1099.60066 Bovier, A.; den Hollander, F.; Nardi, F. R. 2006 Gibbs states of the Hopfield model with extensively many patterns. Zbl 1081.82570 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1995 The low-temperature phase of Kac-Ising models. 
Zbl 0929.60080 Bovier, Anton; Zahradnik, Miloś 1997 A simple inductive approach to the problem of convergence of cluster expansions of polymer models. Zbl 1055.82532 Bovier, Anton; Zahradník, Miloš 2000 Spectral characterization of aging: the REM-like trap model. Zbl 1086.60064 Bovier, Anton; Faggionato, Alessandra 2005 Hopfield models as generalized random mean field models. Zbl 0899.60087 Bovier, Anton; Gayrard, Véronique 1998 The density of states in the Anderson model at weak disorder: A renormalization group analysis of the hierarchical model. Zbl 0718.60121 Bovier, Anton 1990 Gaussian processes on trees. From spin glasses to branching Brownian motion. Zbl 1378.60004 Bovier, Anton 2017 A law of the iterated logarithm for random geometric series. Zbl 0770.60029 Bovier, Anton; Picco, Pierre 1993 Statistical mechanics of disordered systems. A mathematical perspective. Reprint of the 2006 hardback ed. Zbl 1246.82001 Bovier, Anton 2012 Convergence of clock processes in random environments and ageing in the $$p$$-spin SK model. Zbl 1267.82114 Bovier, Anton; Gayrard, Véronique 2013 The retrieval phase of the Hopfield model: A rigorous analysis of the overlap distribution. Zbl 0866.60085 Bovier, Anton; Gayrard, Véronique 1997 Smoothness of the density of states in the Anderson model at high disorder. Zbl 0644.60057 Bovier, Anton; Campanino, Massimo; Klein, Abel; Perez, J. Fernando 1988 The extremal process of two-speed branching Brownian motion. Zbl 1288.60108 Bovier, Anton; Hartung, Lisa Bärbel 2014 The thermodynamics of the Curie-Weiss model with random couplings. Zbl 1100.82515 Bovier, Anton; Gayrard, Véronique 1993 Uniform estimates for metastable transition times in a coupled bistable system. Zbl 1191.82040 Barret, Florent; Bovier, Anton; Méléard, Sylvie 2010 Rigorous bounds on the storage capacity of the dilute Hopfield model. Zbl 0900.82064 Bovier, Anton; Gayrard, Véronique 1992 A rigorous renormalization group method for interfaces in random media. 
Zbl 0802.60098 Bovier, Anton; Külske, Christof 1994 Sharp upper bounds on perfect retrieval in the Hopfield model. Zbl 0947.60092 Bovier, Anton 1999 Weak disorder expansion of the invariant measure for the one-dimensional Anderson model. Zbl 1086.82500 Bovier, Anton; Klein, Abel 1988 Stochastic symmetry-breaking in a Gaussian Hopfield model. Zbl 0964.82024 Bovier, Anton; van Enter, Aernout C. D.; Niederhauser, Beat 1999 There are no nice interfaces in $$(2+1)$$-dimensional SOS models in random media. Zbl 1081.82571 Bovier, Anton; Külske, Christof 1996 On the Gibbs phase rule in the Pirogov-Sinai regime. Zbl 1129.82304 Bovier, A.; Merola, I.; Presutti, E.; Zahradník, M. 2004 An asymptotic maximum principle for essentially linear evolution models. Zbl 1055.92038 Baake, Ellen; Baake, Michael; Bovier, Anton; Klein, Markus 2005 Much ado about Derrida’s GREM. Zbl 1116.82018 Bovier, Anton; Kurkova, Irina 2007 From stochastic, individual-based models to the canonical equation of adaptive dynamics in one step. Zbl 1371.92094 Baar, Martina; Bovier, Anton; Champagnat, Nicolas 2017 Metastability. Zbl 1180.82008 Bovier, Anton 2009 Remarks on the spectral properties of tight-binding and Kronig-Penney models with substitution sequences. Zbl 1044.82547 Bovier, Anton; Ghez, Jean-Michel 1995 An almost sure central limit theorem for the Hopfield model. Zbl 0907.60078 Bovier, A.; Gayrard, V. 1997 Convergence of a kinetic equation to a fractional diffusion equation. Zbl 1198.82052 Basile, G.; Bovier, A. 2010 Variable speed branching Brownian motion. I: Extremal processes in the weak correlation regime. Zbl 1321.60173 Bovier, Anton; Hartung, Lisa 2015 Local energy statistics in disordered systems: a proof of the local REM conjecture. Zbl 1104.82026 Bovier, Anton; Kurkova, Irina 2006 Spectral analysis of Sinai’s walk for small eigenvalues. Zbl 1154.60078 Bovier, Anton; Faggionato, Alessandra 2008 Convergence to extremal processes in random environments and extremal ageing in SK models. 
Zbl 1284.82048 Bovier, Anton; Gayrard, Véronique; Švejda, Adéla 2013 A tomography of the GREM: Beyond the REM conjecture. Zbl 1104.82027 Bovier, Anton; Kurkova, Irina 2006 Survival of a recessive allele in a Mendelian diploid model. Zbl 1370.92093 Neukirch, Rebecca; Bovier, Anton 2017 Finite subgroups of SU(3). Zbl 0493.20030 Bovier, A.; Lueling, M.; Wyler, D. 1981 Rigorous results on the thermodynamics of the dilute Hopfield model. Zbl 1096.82517 Bovier, Anton; Gayrard, Véronique 1993 Pointwise estimates and exponential laws in metastable systems via coupling methods. Zbl 1237.82039 Bianchi, Alessandra; Bovier, Anton; Ioffe, Dmitry 2012 Erratum: Spectral properties of one-dimensional Schrödinger operators with potentials generated by substitutions. Zbl 0841.35072 Bovier, Anton; Ghez, Jean-Michel 1994 The Aizenman-Sims-Starr and Guerra’s schemes for the SK model with multidimensional spins. Zbl 1205.60166 Bovier, Anton; Klimovsky, Anton 2009 Metastability and small eigenvalues in Markov chains. Zbl 0970.82035 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2000 Mathematical aspects of spin classes and neural networks. Zbl 0881.00017 1998 Large deviation principles for the Hopfield model and the Kac-Hopfield model. Zbl 0826.60090 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1995 Mathematical aspects of the physics of disordered systems. Zbl 0669.60098 Fröhlich, Jürg; Bovier, A.; Glaus, U. 1986 Metastates in the Hopfield model in the replica symmetric regime. Zbl 0910.60081 Bovier, Anton; Gayrard, Véronique 1998 Metastability and ageing in stochastic dynamics. Zbl 1085.82011 Bovier, Anton 2004 An almost sure large deviation principle for the Hopfield model. Zbl 0871.60022 Bovier, Anton; Gayrard, Véronique 1996 Fluctuations of the partition function in the generalized random energy model with external field. Zbl 1159.81307 Bovier, Anton; Klimovsky, Anton 2008 Spin glasses. 
Zbl 1103.82003 2007 An ergodic theorem for the frontier of branching Brownian motion. Zbl 1286.60082 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2013 The spin-glass phase-transition in the Hopfield model with $$p$$-spin interactions. Zbl 1011.82008 Bovier, Anton; Niederhauser, Beat 2001 Cluster expansions and Pirogov-Sinai theory for long range spin systems. Zbl 1035.82010 Bovier, A.; Zahradník, M. 2002 Extreme value behavior in the Hopfield model. Zbl 1024.82015 Bovier, Anton; Mason, David M. 2001 From spin glasses to branching Brownian motion – and back? Zbl 1337.60210 Bovier, Anton 2015 Extended convergence of the extremal process of branching Brownian motion. Zbl 1373.60145 Bovier, Anton; Hartung, Lisa 2017 Crossing a fitness valley as a metastable transition in a stochastic population model. Zbl 1433.92033 Bovier, Anton; Coquille, Loren; Smadi, Charline 2019 Gradient flow approach to local mean-field spin systems. Zbl 1471.60145 Bashiri, K.; Bovier, A. 2020 Metastability for the dilute Curie-Weiss model with Glauber dynamics. Zbl 1469.60308 Bovier, Anton; Marello, Saeda; Pulvirenti, Elena 2021 Self-averaging in a class of generalized Hopfield models. Zbl 0843.60093 Bovier, Anton 1994 Rigorous results on the Hopfield model of neural networks. Zbl 0846.92001 Bovier, Anton; Gayrard, Véronique 1994 Finite nonabelian subgroups of $$SU(n)$$ with analytic expressions for the irreducible representations and the Clebsch-Gordan coefficients. Zbl 0453.22007 Abresch, Uwe; Bovier, A.; Lechtenfeld, O.; Lüling, M.; Rittenberg, V.; Weymans, G. 1980 Limit theorems for Bernoulli convolutions. Zbl 0879.60019 Bovier, A.; Picco, P. 1996 Rigorous results on some simple spin glass models. Zbl 1031.82053 Bovier, A.; Kurkova, I. 2003 A short course on mean field spin glasses. Zbl 1209.82042 Bovier, Anton; Kurkova, Irina 2009 Local energy statistics in spin glasses. 
Zbl 1122.82047 Bovier, Anton; Kurkova, Irina 2007 Poisson convergence in the restricted $$k$$-partitioning problem. Zbl 1136.90448 Bovier, Anton; Kurkova, Irina 2007 The opinion game: stock price evolution from microscopic market modeling. Zbl 1131.91021 Bovier, Anton; Černý, Jiří; Hryniv, Ostap 2006 The recovery of a recessive allele in a Mendelian diploid model. Zbl 1415.92123 Bovier, Anton; Coquille, Loren; Neukirch, Rebecca 2018 Discrete Schrödinger operators with potentials generated by substitutions. Zbl 0790.11018 Bellissard, Jean; Bovier, Anton; Ghez, Jean-Michel 1993 Bernoulli convolutions, dynamical systems and automata. Zbl 0886.58056 Bovier, Anton 1996 Distribution of overlap profiles in the one-dimensional Kac-Hopfield model. Zbl 0884.60095 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1997 Perturbation theory for the random dimer model. Zbl 0748.60101 Bovier, Anton 1992 Statistical mechanics of neural networks: The Hopfield model and the Kac-Hopfield model. Zbl 0910.60086 Bovier, A.; Gayrard, V. 1997 An ergodic theorem for the extremal process of branching Brownian motion. Zbl 1315.60063 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2015 A conditional strong large deviation result and a functional central limit theorem for the rate function. Zbl 1321.60045 Bovier, Anton; Mayer, Hannah 2015 Representations and Clebsch-Gordan coefficients of Z-metacyclic groups. Zbl 0501.20005 Bovier, A.; Lueling, M.; Wyler, D. 1981 Metastability for the dilute Curie-Weiss model with Glauber dynamics. Zbl 1469.60308 Bovier, Anton; Marello, Saeda; Pulvirenti, Elena 2021 Gradient flow approach to local mean-field spin systems. Zbl 1471.60145 Bashiri, K.; Bovier, A. 2020 From 1 to 6 : a finer analysis of perturbed branching Brownian motion. Zbl 1445.60060 Bovier, Anton; Hartung, Lisa 2020 Crossing a fitness valley as a metastable transition in a stochastic population model. 
Zbl 1433.92033 Bovier, Anton; Coquille, Loren; Smadi, Charline 2019 From adaptive dynamics to adaptive walks. Zbl 1430.37111 Kraut, Anna; Bovier, Anton 2019 The recovery of a recessive allele in a Mendelian diploid model. Zbl 1415.92123 Bovier, Anton; Coquille, Loren; Neukirch, Rebecca 2018 The polymorphic evolution sequence for populations with phenotypic plasticity. Zbl 1415.92120 Baar, Martina; Bovier, Anton 2018 The hydrodynamic limit for local mean-field dynamics with unbounded spins. Zbl 1401.35291 Bovier, Anton; Ioffe, Dmitry; Müller, Patrick 2018 Gaussian processes on trees. From spin glasses to branching Brownian motion. Zbl 1378.60004 Bovier, Anton 2017 From stochastic, individual-based models to the canonical equation of adaptive dynamics in one step. Zbl 1371.92094 Baar, Martina; Bovier, Anton; Champagnat, Nicolas 2017 Survival of a recessive allele in a Mendelian diploid model. Zbl 1370.92093 Neukirch, Rebecca; Bovier, Anton 2017 Extended convergence of the extremal process of branching Brownian motion. Zbl 1373.60145 Bovier, Anton; Hartung, Lisa 2017 Metastability. A potential-theoretic approach. Zbl 1339.60002 Bovier, Anton; den Hollander, Frank 2015 Variable speed branching Brownian motion. I: Extremal processes in the weak correlation regime. Zbl 1321.60173 Bovier, Anton; Hartung, Lisa 2015 From spin glasses to branching Brownian motion – and back? Zbl 1337.60210 Bovier, Anton 2015 An ergodic theorem for the extremal process of branching Brownian motion. Zbl 1315.60063 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2015 A conditional strong large deviation result and a functional central limit theorem for the rate function. Zbl 1321.60045 Bovier, Anton; Mayer, Hannah 2015 The extremal process of two-speed branching Brownian motion. Zbl 1288.60108 Bovier, Anton; Hartung, Lisa Bärbel 2014 A note on metastable behaviour in the zero-range process. 
Zbl 1325.60153 Bovier, Anton; Neukirch, Rebecca 2014 The extremal process of branching Brownian motion. Zbl 1286.60045 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2013 Convergence of clock processes in random environments and ageing in the $$p$$-spin SK model. Zbl 1267.82114 Bovier, Anton; Gayrard, Véronique 2013 Convergence to extremal processes in random environments and extremal ageing in SK models. Zbl 1284.82048 Bovier, Anton; Gayrard, Véronique; Švejda, Adéla 2013 An ergodic theorem for the frontier of branching Brownian motion. Zbl 1286.60082 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2013 Trait substitution trees on two time scales analysis. Zbl 1301.92060 Bovier, A.; Wang, Shi-Dong 2013 Poissonian statistics in the extremal process of branching Brownian motion. Zbl 1255.60152 Arguin, Louis-Pierre; Bovier, Anton; Kistler, Nicola 2012 Statistical mechanics of disordered systems. A mathematical perspective. Reprint of the 2006 hardback ed. Zbl 1246.82001 Bovier, Anton 2012 Pointwise estimates and exponential laws in metastable systems via coupling methods. Zbl 1237.82039 Bianchi, Alessandra; Bovier, Anton; Ioffe, Dmitry 2012 Metastability: from mean field models to SPDEs. Zbl 1251.82042 Bovier, Anton 2012 Genealogy of extremal particles of branching Brownian motion. Zbl 1236.60081 Arguin, L.-P.; Bovier, A.; Kistler, N. 2011 Homogeneous nucleation for Glauber and Kawasaki dynamics in large volumes at low temperatures. Zbl 1193.60114 Bovier, Anton; den Hollander, Frank; Spitoni, Cristian 2010 Uniform estimates for metastable transition times in a coupled bistable system. Zbl 1191.82040 Barret, Florent; Bovier, Anton; Méléard, Sylvie 2010 Convergence of a kinetic equation to a fractional diffusion equation. Zbl 1198.82052 Basile, G.; Bovier, A. 2010 Sharp asymptotics for metastability in the random field Curie-Weiss model. Zbl 1186.82069 Bianchi, Alessandra; Bovier, Anton; Ioffe, Dmitry 2009 Metastability. 
Zbl 1180.82008 Bovier, Anton 2009 The Aizenman-Sims-Starr and Guerra’s schemes for the SK model with multidimensional spins. Zbl 1205.60166 Bovier, Anton; Klimovsky, Anton 2009 A short course on mean field spin glasses. Zbl 1209.82042 Bovier, Anton; Kurkova, Irina 2009 Universality of the REM for dynamics of mean-field spin glasses. Zbl 1208.82024 Ben Arous, Gérard; Bovier, Anton; Černý, Jiří 2008 Spectral analysis of Sinai’s walk for small eigenvalues. Zbl 1154.60078 Bovier, Anton; Faggionato, Alessandra 2008 Fluctuations of the partition function in the generalized random energy model with external field. Zbl 1159.81307 Bovier, Anton; Klimovsky, Anton 2008 Much ado about Derrida’s GREM. Zbl 1116.82018 Bovier, Anton; Kurkova, Irina 2007 Spin glasses. Zbl 1103.82003 2007 Local energy statistics in spin glasses. Zbl 1122.82047 Bovier, Anton; Kurkova, Irina 2007 Poisson convergence in the restricted $$k$$-partitioning problem. Zbl 1136.90448 Bovier, Anton; Kurkova, Irina 2007 Hydrodynamic limit for the $$A+B\rightarrow \emptyset$$ model. Zbl 1156.82022 Bovier, A.; Černý, Jiří 2007 Statistical mechanics of disordered systems. A mathematical perspective. Zbl 1108.82002 Bovier, Anton 2006 Metastability: a potential theoretic approach. Zbl 1099.60052 Bovier, Anton 2006 Sharp asymptotics for Kawasaki dynamics on a finite box with open boundary. Zbl 1099.60066 Bovier, A.; den Hollander, F.; Nardi, F. R. 2006 Local energy statistics in disordered systems: a proof of the local REM conjecture. Zbl 1104.82026 Bovier, Anton; Kurkova, Irina 2006 A tomography of the GREM: Beyond the REM conjecture. Zbl 1104.82027 Bovier, Anton; Kurkova, Irina 2006 The opinion game: stock price evolution from microscopic market modeling. Zbl 1131.91021 Bovier, Anton; Černý, Jiří; Hryniv, Ostap 2006 Metastability in reversible diffusion processes. II: Precise asymptotics for small eigenvalues. 
Zbl 1105.82025 Bovier, Anton; Gayrard, Véronique; Klein, Markus 2005 Spectral characterization of aging: the REM-like trap model. Zbl 1086.60064 Bovier, Anton; Faggionato, Alessandra 2005 An asymptotic maximum principle for essentially linear evolution models. Zbl 1055.92038 Baake, Ellen; Baake, Michael; Bovier, Anton; Klein, Markus 2005 Coarse-graining techniques for (random) Kac models. Zbl 1081.82008 Bovier, Anton; Külske, Christof 2005 Energy statistics in disordered systems: the local REM conjecture and beyond. Zbl 1371.82048 Bovier, Anton; Kurkova, Irina 2005 Metastability in reversible diffusion processes. I: Sharp asymptotics for capacities and exit times. Zbl 1076.82045 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2004 Derrida’s generalised random energy models. I: Models with finitely many hierarchies. Zbl 1121.82020 Bovier, Anton; Kurkova, Irina 2004 Derrida’s generalized random energy models. II: Models with continuous hierarchies. Zbl 1121.82021 Bovier, Anton; Kurkova, Irina 2004 On the Gibbs phase rule in the Pirogov-Sinai regime. Zbl 1129.82304 Bovier, A.; Merola, I.; Presutti, E.; Zahradník, M. 2004 Metastability and ageing in stochastic dynamics. Zbl 1085.82011 Bovier, Anton 2004 Glauber dynamics of the random energy model. I: Metastable motion on the extreme states. Zbl 1037.82038 Ben Arous, Gérard; Bovier, Anton; Gayrard, Véronique 2003 Glauber dynamics of the random energy model. II: Aging below the critical temperature. Zbl 1037.82039 Ben Arous, Gérard; Bovier, Anton; Gayrard, Véronique 2003 Rigorous results on some simple spin glass models. Zbl 1031.82053 Bovier, A.; Kurkova, I. 2003 Metastability and low lying spectra in reversible Markov chains. Zbl 1010.60088 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2002 Fluctuations of the free energy in the REM and the $$p$$-spin SK models. 
Zbl 1018.60094 Bovier, Anton; Kurkova, Irina; Löwe, Matthias 2002 Metastability in Glauber dynamics in the low-temperature limit: Beyond exponential asymptotics. Zbl 1067.82041 Bovier, Anton; Manzo, Francesco 2002 Cluster expansions and Pirogov-Sinai theory for long range spin systems. Zbl 1035.82010 Bovier, A.; Zahradník, M. 2002 Metastability in stochastic dynamics of disordered mean-field models. Zbl 1012.82015 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2001 The spin-glass phase-transition in the Hopfield model with $$p$$-spin interactions. Zbl 1011.82008 Bovier, Anton; Niederhauser, Beat 2001 Extreme value behavior in the Hopfield model. Zbl 1024.82015 Bovier, Anton; Mason, David M. 2001 A simple inductive approach to the problem of convergence of cluster expansions of polymer models. Zbl 1055.82532 Bovier, Anton; Zahradník, Miloš 2000 Metastability and small eigenvalues in Markov chains. Zbl 0970.82035 Bovier, Anton; Eckhoff, Michael; Gayrard, Véronique; Klein, Markus 2000 Sharp upper bounds on perfect retrieval in the Hopfield model. Zbl 0947.60092 Bovier, Anton 1999 Stochastic symmetry-breaking in a Gaussian Hopfield model. Zbl 0964.82024 Bovier, Anton; van Enter, Aernout C. D.; Niederhauser, Beat 1999 Hopfield models as generalized random mean field models. Zbl 0899.60087 Bovier, Anton; Gayrard, Véronique 1998 Mathematical aspects of spin classes and neural networks. Zbl 0881.00017 1998 Metastates in the Hopfield model in the replica symmetric regime. Zbl 0910.60081 Bovier, Anton; Gayrard, Véronique 1998 The Kac version of the Sherrington-Kirkpatrick model at high temperatures. Zbl 0928.60089 Bovier, Anton 1998 The low-temperature phase of Kac-Ising models. Zbl 0929.60080 Bovier, Anton; Zahradnik, Miloś 1997 The retrieval phase of the Hopfield model: A rigorous analysis of the overlap distribution. Zbl 0866.60085 Bovier, Anton; Gayrard, Véronique 1997 An almost sure central limit theorem for the Hopfield model. 
Zbl 0907.60078 Bovier, A.; Gayrard, V. 1997 Distribution of overlap profiles in the one-dimensional Kac-Hopfield model. Zbl 0884.60095 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1997 Statistical mechanics of neural networks: The Hopfield model and the Kac-Hopfield model. Zbl 0910.60086 Bovier, A.; Gayrard, V. 1997 There are no nice interfaces in $$(2+1)$$-dimensional SOS models in random media. Zbl 1081.82571 Bovier, Anton; Külske, Christof 1996 An almost sure large deviation principle for the Hopfield model. Zbl 0871.60022 Bovier, Anton; Gayrard, Véronique 1996 Limit theorems for Bernoulli convolutions. Zbl 0879.60019 Bovier, A.; Picco, P. 1996 Bernoulli convolutions, dynamical systems and automata. Zbl 0886.58056 Bovier, Anton 1996 Gibbs states of the Hopfield model with extensively many patterns. Zbl 1081.82570 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1995 Remarks on the spectral properties of tight-binding and Kronig-Penney models with substitution sequences. Zbl 1044.82547 Bovier, Anton; Ghez, Jean-Michel 1995 Large deviation principles for the Hopfield model and the Kac-Hopfield model. Zbl 0826.60090 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1995 Gibbs states of the Hopfield model in the regime of perfect memory. Zbl 0810.60094 Bovier, Anton; Gayrard, Véronique; Picco, Pierre 1994 A rigorous renormalization group method for interfaces in random media. Zbl 0802.60098 Bovier, Anton; Külske, Christof 1994 Erratum: Spectral properties of one-dimensional Schrödinger operators with potentials generated by substitutions. Zbl 0841.35072 Bovier, Anton; Ghez, Jean-Michel 1994 Self-averaging in a class of generalized Hopfield models. Zbl 0843.60093 Bovier, Anton 1994 Rigorous results on the Hopfield model of neural networks. Zbl 0846.92001 Bovier, Anton; Gayrard, Véronique 1994 Spectral properties of one-dimensional Schrödinger operators with potentials generated by substitutions. 
Zbl 0820.35099 Bovier, Anton; Ghez, Jean-Michel 1993 A law of the iterated logarithm for random geometric series. Zbl 0770.60029 Bovier, Anton; Picco, Pierre 1993 The thermodynamics of the Curie-Weiss model with random couplings. Zbl 1100.82515 Bovier, Anton; Gayrard, Véronique 1993 Rigorous results on the thermodynamics of the dilute Hopfield model. Zbl 1096.82517 Bovier, Anton; Gayrard, Véronique 1993 Discrete Schrödinger operators with potentials generated by substitutions. Zbl 0790.11018 Bellissard, Jean; Bovier, Anton; Ghez, Jean-Michel 1993 ...and 14 more Documents all top 5 ### Cited by 894 Authors 45 Bovier, Anton 22 Löwe, Matthias 19 Nardi, Francesca Romana 18 Külske, Christof 16 Landim, Claudio 14 Barra, Adriano 14 Kistler, Nicola 13 Arguin, Louis-Pierre 13 Gayrard, Véronique 12 Damanik, David 11 Ben Arous, Gérard 11 Černý, Jiří 10 Klein, Markus 10 Picco, Pierre 10 Vermet, Franck 9 Warzel, Simone 8 Agliari, Elena 8 den Hollander, Frank 8 Hartung, Lisa Bärbel 8 Jagannath, Aukosh 8 Kabluchko, Zakhar A. 8 Louidor, Oren 8 Mallein, Bastien 8 Presutti, Errico 8 Seo, Insuk 8 van Enter, Aernout C. D. 7 Contucci, Pierluigi 7 Fontes, Luiz Renato G. 7 Gaudilliere, Alexandre 7 Pavlyukevich, Ilya 7 Rosenberger, Elke 7 Tsagkarogiannis, Dimitrios K. 6 Bellissard, Jean V. 6 Berglund, Nils 6 Chatterjee, Sourav 6 Fernández, Roberto 6 Giardinà, Cristian 6 Guerra, Francesco 6 Högele, Michael Anton 6 Le Peutrec, Dorian 6 Lelièvre, Tony 6 Molchanov, Stanislav Alekseevich 6 Nectoux, Boris 6 Schütte, Christof 6 Smadi, Charline 6 Spitoni, Cristian 5 Beckus, Siegfried 5 Bendikov, Alexander D. 5 Berestycki, Julien 5 Betz, Volker 5 Borgs, Christian 5 Cassandro, Marzio 5 Chayes, Jennifer Tour 5 Cirillo, Emilio Nicola Maria 5 Cortines, Aser 5 Daletskii, Alexei 5 Faggionato, Alessandra 5 Gentz, Barbara 5 Imkeller, Peter 5 Katsoulakis, Markos A. 5 Kurkova, Irina A. 5 Lenz, Daniel H. 
5 Liu, Qinghui 5 Madaule, Thomas 5 Merola, Immacolata 5 Orlandi, Enza 5 Schlichting, André 5 Schubert, Kristina 5 Talagrand, Michel 5 Toninelli, Fabio Lucio 5 Vanden-Eijnden, Eric 5 von Soosten, Per 5 Zeitouni, Ofer 4 Baake, Ellen 4 Baake, Michael 4 Berestycki, Nathanaël 4 Bissacot, Rodrigo 4 de Oliveira, César R. 4 Di Gesù, Giacomo 4 Ding, Jian 4 Fachechi, Alberto 4 Gantert, Nina 4 Giacomin, Giambattista 4 Gorodetskii, Anton Semenovich 4 Klein, Abel 4 Kondrat’yev, Yuriĭ Grygorovych 4 Lacoin, Hubert 4 Maillard, Pascal 4 Menz, Georg 4 Ouimet, Frédéric 4 Panchenko, Dmitry 4 Perez, J. Fernando 4 Plecháč, Petr 4 Rozikov, Utkir A. 4 Schmidt, Marius Alexander 4 Schulz-Baldes, Hermann 4 Scoppola, Elisabetta 4 Sollich, Peter 4 Tantari, Daniele 4 van der Hofstad, Remco W. ...and 794 more Authors all top 5 ### Cited in 158 Serials 145 Journal of Statistical Physics 55 Communications in Mathematical Physics 40 The Annals of Probability 40 Probability Theory and Related Fields 34 Stochastic Processes and their Applications 29 The Annals of Applied Probability 27 Journal of Mathematical Physics 21 Annales Henri Poincaré 19 Annales de l’Institut Henri Poincaré. Probabilités et Statistiques 19 Electronic Journal of Probability 16 Journal of Statistical Mechanics: Theory and Experiment 11 Electronic Communications in Probability 10 ALEA. Latin American Journal of Probability and Mathematical Statistics 9 Journal of Mathematical Biology 9 Journal of Functional Analysis 8 Statistics & Probability Letters 8 Journal of Theoretical Probability 7 Journal of Mathematical Analysis and Applications 6 Reviews in Mathematical Physics 6 Bernoulli 6 Mathematical Physics, Analysis and Geometry 5 Journal of Applied Probability 5 SIAM Journal on Mathematical Analysis 5 Journal of the European Mathematical Society (JEMS) 5 Brazilian Journal of Probability and Statistics 5 Comptes Rendus. Mathématique. 
Académie des Sciences, Paris 4 Communications on Pure and Applied Mathematics 4 Letters in Mathematical Physics 4 Advances in Mathematics 4 Journal of Differential Equations 4 Proceedings of the American Mathematical Society 4 Ergodic Theory and Dynamical Systems 4 Potential Analysis 4 Chaos 4 Stochastics and Dynamics 3 Advances in Applied Probability 3 Archive for Rational Mechanics and Analysis 3 Theory of Probability and its Applications 3 Inventiones Mathematicae 3 Theoretical Computer Science 3 Random Structures & Algorithms 3 Journal de Mathématiques Pures et Appliquées. Neuvième Série 3 Annales de l’Institut Henri Poincaré. Physique Théorique 3 Journal of Nonlinear Science 3 Multiscale Modeling & Simulation 3 Stochastics 2 Nonlinearity 2 Physica A 2 Russian Mathematical Surveys 2 Bulletin of Mathematical Biology 2 The Annals of Statistics 2 Duke Mathematical Journal 2 Stochastic Analysis and Applications 2 Physica D 2 Neural Networks 2 Communications in Partial Differential Equations 2 Journal of Dynamics and Differential Equations 2 Calculus of Variations and Partial Differential Equations 2 Applied and Computational Harmonic Analysis 2 Journal of Mathematical Sciences (New York) 2 Journal of Difference Equations and Applications 2 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 2 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings 2 Acta Mathematica Sinica. English Series 2 Physical Review Letters 2 Oberwolfach Reports 2 Journal of Spectral Theory 2 Annals of PDE 1 International Journal of Modern Physics B 1 Discrete Applied Mathematics 1 Journal d’Analyse Mathématique 1 Journal of Computational Physics 1 Linear and Multilinear Algebra 1 Mathematical Methods in the Applied Sciences 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Mathematische Semesterberichte 1 Nuclear Physics. 
B 1 Physics Reports 1 Theoretical and Mathematical Physics 1 Mathematics of Computation 1 Journal of Geometry and Physics 1 Annales de l’Institut Fourier 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Applied Mathematics and Optimization 1 Automatica 1 Integral Equations and Operator Theory 1 Journal of Mathematical Economics 1 Journal of Number Theory 1 Journal of Statistical Planning and Inference 1 Mathematische Nachrichten 1 Monatshefte für Mathematik 1 Osaka Journal of Mathematics 1 Quarterly of Applied Mathematics 1 SIAM Journal on Control and Optimization 1 Transactions of the Moscow Mathematical Society 1 Advances in Applied Mathematics 1 Chinese Annals of Mathematics. Series B 1 Acta Mathematica Hungarica 1 Acta Applicandae Mathematicae ...and 58 more Serials all top 5 ### Cited in 50 Fields 479 Probability theory and stochastic processes (60-XX) 440 Statistical mechanics, structure of matter (82-XX) 70 Partial differential equations (35-XX) 68 Quantum theory (81-XX) 59 Dynamical systems and ergodic theory (37-XX) 56 Operator theory (47-XX) 48 Biology and other natural sciences (92-XX) 35 Combinatorics (05-XX) 27 Numerical analysis (65-XX) 27 Computer science (68-XX) 25 Ordinary differential equations (34-XX) 23 Linear and multilinear algebra; matrix theory (15-XX) 22 Number theory (11-XX) 21 Statistics (62-XX) 19 Functional analysis (46-XX) 17 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 13 Convex and discrete geometry (52-XX) 13 Global analysis, analysis on manifolds (58-XX) 11 Group theory and generalizations (20-XX) 11 Calculus of variations and optimal control; optimization (49-XX) 10 Difference and functional equations (39-XX) 9 Measure and integration (28-XX) 9 Operations research, mathematical programming (90-XX) 7 Classical thermodynamics, heat transfer (80-XX) 6 Topological groups, Lie groups (22-XX) 5 $$K$$-theory (19-XX) 5 
Information and communication theory, circuits (94-XX) 4 Potential theory (31-XX) 4 Mechanics of deformable solids (74-XX) 4 Fluid mechanics (76-XX) 4 Systems theory; control (93-XX) 3 General and overarching topics; collections (00-XX) 3 Real functions (26-XX) 3 Mechanics of particles and systems (70-XX) 3 Geophysics (86-XX) 2 Functions of a complex variable (30-XX) 2 Approximations and expansions (41-XX) 2 Integral equations (45-XX) 2 Differential geometry (53-XX) 2 General topology (54-XX) 2 Relativity and gravitational theory (83-XX) 1 History and biography (01-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Field theory and polynomials (12-XX) 1 Algebraic geometry (14-XX) 1 Category theory; homological algebra (18-XX) 1 Special functions (33-XX) 1 Sequences, series, summability (40-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Algebraic topology (55-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
https://par.nsf.gov/biblio/10352243-blanco-decam-bulge-survey-bdbs-cleaning-foreground-populations-from-galactic-bulge-colour-magnitude-diagrams-using-gaia-edr3
This content will become publicly available on August 1, 2023.

Blanco DECam Bulge Survey (BDBS): V. Cleaning the foreground populations from Galactic bulge colour-magnitude diagrams using Gaia EDR3

Aims. The Blanco DECam Bulge Survey (BDBS) has imaged more than 200 square degrees of the southern Galactic bulge, providing photometry in the ugrizy filters for ∼250 million unique stars. The presence of a strong foreground disk population, along with complex reddening and extreme image crowding, has made it difficult to constrain the presence of young and intermediate age stars in the bulge population.

Methods. We employed an accurate cross-match of BDBS with the latest data release (EDR3) from the Gaia mission, matching more than 140 million sources with BDBS photometry and Gaia EDR3 photometry and astrometry. We relied on Gaia EDR3 astrometry, without any photometric selection, to produce clean BDBS bulge colour-magnitude diagrams (CMDs). Gaia parallaxes were used to filter out bright foreground sources, and a Gaussian mixture model fit to Galactic proper motions could identify stars kinematically consistent with bulge membership. We applied this method to 127 different bulge fields of 1 deg² each, with |ℓ| ≤ 9.5° and −9.5° ≤ b ≤ −2.5°.

Results. The astrometric cleaning procedure removes the majority of blue stars in each field, especially near the Galactic plane, where the ratio of blue to red stars is ≲10%, increasing to values ∼20% at higher …

NSF-PAR ID: 10352243. Journal: Astronomy & Astrophysics, Vol. 664, A124. ISSN: 0004-6361.

ABSTRACT. We construct from Gaia eDR3 an extensive catalogue of spatially resolved binary stars within ≈1 kpc of the Sun, with projected separations ranging from a few au to 1 pc.
We estimate the probability that each pair is a chance alignment empirically, using the Gaia catalogue itself to calculate the rate of chance alignments as a function of observables. The catalogue contains 1.3 (1.1) million binaries with >90 per cent (>99 per cent) probability of being bound, including 16 000 white dwarf – main-sequence (WD + MS) binaries and 1400 WD + WD binaries. We make the full catalogue publicly available, as well as the queries and code to produce it. We then use this sample to calibrate the published Gaia DR3 parallax uncertainties, making use of the binary components’ near-identical parallaxes. We show that these uncertainties are generally reliable for faint stars (G ≳ 18), but are underestimated significantly for brighter stars. The underestimates are generally ≤30 per cent for isolated sources with well-behaved astrometry, but are larger (up to ∼80 per cent) for apparently well-behaved sources with a companion within ≲4 arcsec, and much larger for sources with poor astrometric fits. We provide an empirical fitting function to inflate published σϖ values for isolated sources. The public …
https://www-physics.lbl.gov/seminars/old/ligeti2.html
Title: B Physics Beyond CKM

Abstract: Recently, our knowledge of flavor physics and CP violation has increased tremendously. CP violation provides some of the most precise constraints on the flavor sector, and there are significant new bounds on the deviations from the Standard Model in $B_d$ and $B_s$ mixing, and in $b\to s$ and $b\to d$ decays. We discuss the implications of these for the Standard Model and some of its extensions. Some highlights of theoretical developments for exclusive nonleptonic decays and their implications for interpreting the data will also be discussed.
https://www.sandia.gov/quantum/Projects/QSCOUT_Jaqal.html
# The Quantum Assembly Language for QSCOUT

A. J. Landahl, D. S. Lobser, B. C. A. Morrison, K. M. Rudinger, A. E. Russo, J. W. Van Der Wall, P. Maunz
(Dated: August 6, 2020 (v1.1))

# Introduction

QSCOUT is the Quantum Scientific Computing Open User Testbed, a trapped-ion quantum computer testbed realized at Sandia National Laboratories on behalf of the Department of Energy’s Office of Science and its Advanced Scientific Computing Research (ASCR) program. As an open user testbed, QSCOUT provides the following to its users:

• Transparency: Full implementation specifications of the underlying native trapped-ion quantum gates.
• Extensibility: Pulse definitions can be programmed to generate custom trapped-ion gates.
• Schedulability: Users have full control of sequential and parallel execution of quantum gates.

## QSCOUT Hardware 1.0

The first version (1.0) of the QSCOUT hardware realizes a single register of qubits stored in the hyperfine clock states of trapped 171Yb+ ions arranged in a one-dimensional chain. Single and multi-qubit gates are realized by tightly focused laser beams that can address individual ions. The native operations available on this hardware include the following:

• Global preparation and measurement of all qubits in the $$z$$ basis.
• Parallel single-qubit rotations about any axis in the equatorial plane of the Bloch sphere.
• The Mølmer–Sørensen two-qubit gate between any pair of qubits, in parallel with no other gates.
• Single-qubit $$Z$$ gates executed virtually by adjusting the reference clocks of individual qubits.

Importantly, QSCOUT 1.0 does not support measurement of a subset of the qubits. Consequently, it also does not support classical feedback. This is because, for ions in a single chain, the resonance fluorescence measurement process destroys the quantum states of all qubits in the ion chain, so that there are no quantum states onto which feedback can be applied. Future versions of the QSCOUT hardware will support feedback.
QSCOUT 1.0 uses Just Another Quantum Assembly Language (Jaqal) (described here) to specify quantum programs executed on the testbed. On QSCOUT 1.0, every quantum computation starts with preparation of the quantum state of the entire qubit register in the $$z$$ basis. Then it executes a sequence of parallel and sequential single and two-qubit gates. After this, it executes a simultaneous measurement of all qubits in the $$z$$ basis, returning the result as a binary string. This sequence of prepare-all/do-gates/measure-all can be repeated multiple times in a Jaqal program, if desired. However, any adaptive program that uses the results of one such sequence to issue a subsequent sequence must be done with metaprogramming, because Jaqal does not currently support feedback. Once the QSCOUT platform supports classical feedback, Jaqal will be extended to support it as well.

# Gate Pulse File

The laser pulses that implement built-in or custom trapped-ion gates are defined in a Gate Pulse File (GPF). Eventually, users will be able to write their own GPF files, but that capability will not be available in our initial software release. However, users will be free to specify composite gates by defining them as sub-circuit macros. Additionally, custom native gates can be added in collaboration with Sandia scientists by specifying the pulse sequences that have to be applied to the trapped ion qubits to realize the gate.

We have provided a GPF file for the built-in gates on the QSCOUT 1.0 platform. This file is not intended to be modified by users, so we are not specifying its contents here. However, a full specification of the built-in gates will be available to users of the QSCOUT 1.0 platform.

This GPF file contains pulse-level gate definitions for the QSCOUT 1.0 built-in gates listed below. All angle arguments in this list are in the units of radians, with 40 bits of precision. The chirality of rotations is determined using the right-hand rule.
• prepare_all Prepares all qubits in the quantum register in the $$|0\rangle$$ state in the $$z$$ basis. • R <qubit> <axis angle> <rotation angle> Counter-clockwise rotation around an axis in the equatorial plane of the Bloch sphere defined by <axis-angle>, measured counter-clockwise from the $$x$$ axis, by the angle defined by <rotation angle>. • Rx <qubit> <rotation angle> Counter-clockwise rotation around the $$x$$ axis, by the angle defined by <rotation angle>. • Ry <qubit> <rotation angle> Counter-clockwise rotation around the $$y$$ axis, by the angle defined by <rotation angle>. • Rz <qubit> <angle> Counter-clockwise rotation around the $$z$$ axis, by the angle defined by <rotation angle>. • Px <qubit> Counter-clockwise rotation around the $$x$$ axis, by $$\pi$$. (Pauli $$X$$ gate.) • Py <qubit> Counter-clockwise rotation around the $$y$$ axis, by $$\pi$$. (Pauli $$Y$$ gate.) • Pz <qubit> Counter-clockwise rotation around the $$z$$ axis, by $$\pi$$. (Pauli $$Z$$ gate.) • Sx <qubit> Counter-clockwise rotation around the $$x$$ axis, by $$\pi/2$$. ($$\sqrt{X}$$ gate.) • Sy <qubit> Counter-clockwise rotation around the $$y$$ axis, by $$\pi/2$$. ($$\sqrt{Y}$$ gate.) • Sz <qubit> Counter-clockwise rotation around the $$z$$ axis, by $$\pi/2$$. ($$\sqrt{Z}$$ gate.) • Sxd <qubit> Clockwise rotation around the $$x$$ axis, by $$\pi/2$$. ($$\sqrt{X}^\dagger$$ gate.) • Syd <qubit> Clockwise rotation around the $$y$$ axis, by $$\pi/2$$. ($$\sqrt{Y}^\dagger$$ gate.) • Szd <qubit> Clockwise rotation around the $$z$$ axis, by $$\pi/2$$. ($$\sqrt{Z}^\dagger$$ gate.) • MS <qubit> <qubit> <axis angle> <rotation angle> The general two-qubit Mølmer–Sørensen gate. 
(If we let $$\theta$$ represent <rotation angle> and $$\varphi$$ represent <axis angle>, then the gate is $\exp\left(-i\left(\frac{\theta}{2}\right)(\cos \varphi\, X + \sin \varphi\, Y)^{\otimes 2}\right).$)
- Sxx <qubit> <qubit>: The XX-type two-qubit Mølmer–Sørensen gate: $\exp\left(-i\left(\frac{\pi}{4}\right) X\otimes X\right).$
- measure_all: Measures all qubits of the quantum register in the $$z$$ basis. After measurement, the ions are outside the qubit space, so the qubits must be prepared again before any further gates can be applied.

The gate pulse definitions also include idle gates with the same duration as the single- and two-qubit gates. These have the prefix I_. For example, an idle gate of the same duration as a Px can be obtained with I_Px <qubit>. Note that it is not necessary to explicitly insert idle gates on idling qubits in a parallel block; explicit idle gates are meant for performance testing and evaluation.

# Jaqal Quantum Assembly Language

The open nature of the QSCOUT testbed requires a flexible Quantum Assembly Language (QASM) that empowers QSCOUT users to extend the set of native gates and fully control the execution of the quantum program on the QSCOUT testbed. Because of the proliferation of such languages in this fledgling field, ours is named Just Another Quantum Assembly Language, or Jaqal. To realize our objectives, the Jaqal QASM language fulfills the following requirements:

- Jaqal fully specifies the allocation of qubits within the quantum register, which cannot be altered during execution.
- Jaqal requires the scheduling of sequential and parallel gate sequencing to be fully and explicitly specified.
- Jaqal can execute any native (built-in or custom) gate specified in any GPF file it references.

While Jaqal is built upon the lower-level pulse definitions in GPF files, it is the lowest-level QASM programming language exposed to users in QSCOUT.
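As a numerical sanity check on the MS and Sxx definitions above, note that $A = \cos\varphi\, X + \sin\varphi\, Y$ squares to the identity, so the matrix exponential has the closed form $\cos(\theta/2)\,I - i\sin(\theta/2)\,(A\otimes A)$. The following sketch (assuming NumPy is available; not part of the QSCOUT software) verifies that MS with $\varphi=0$, $\theta=\pi/2$ reproduces Sxx:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I4 = np.eye(4, dtype=complex)

def ms(theta, phi):
    """MS gate: exp(-i (theta/2) (cos(phi) X + sin(phi) Y)^{tensor 2}).

    Since A = cos(phi) X + sin(phi) Y satisfies A @ A = I, the exponential
    reduces to cos(theta/2) I - i sin(theta/2) (A tensor A).
    """
    A = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * I4 - 1j * np.sin(theta / 2) * np.kron(A, A)

# Sxx is the XX-type special case: exp(-i (pi/4) X tensor X).
sxx = ms(np.pi / 2, 0.0)
assert np.allclose(sxx.conj().T @ sxx, I4)  # unitary
assert np.allclose(sxx, np.cos(np.pi / 4) * I4
                   - 1j * np.sin(np.pi / 4) * np.kron(X, X))
```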
We anticipate that users will develop their own higher-level programming languages that compile down to Jaqal. We plan to release Jaqal-branded metaprogramming tools after user-driven innovation at this metaprogramming level settles down.

# Jaqal Syntax

A Jaqal file consists of gates and metadata making those gates easier to read and write. The gates that are run on the machine can be deterministically computed by inspection of the source text; this implies that there are no conditional statements at this level. This section describes the workings of each statement type.

Whitespace is largely unimportant except as a separator between statements and their elements. If it is desirable to put two statements on the same line, a ‘;’ separator may be used. In a parallel block, the pipe (‘|’) must be used instead of the ‘;’. Like the semicolon, however, the pipe is unnecessary to delimit statements on different lines. Both Windows and Linux newline styles are accepted.

## Identifiers

Gate names and qubit names have the same character restrictions. As in most programming languages, they may contain, but not start with, numerals. They are case sensitive and may contain any non-accented Latin character plus the underscore. Identifiers cannot be any of the keywords of the language.

## Comments

C/C++-style comments are allowed and treated as whitespace. A comment starting with ‘//’ runs to the end of the current line, while a comment starting with ‘/*’ runs until a ‘*/’ is encountered. These comments do not nest, matching C/C++ behavior.

## Header Statements

A properly formatted Jaqal file comprises a header and a body section. All header statements must precede all body statements. The order of header statements is otherwise arbitrary, except that all objects must be defined before their first use.

### Register Statement

A register statement declares the user’s intention to use a certain number of qubits, referred to in the file by a given name.
If the machine cannot supply this number of qubits, the entire program is rejected immediately. The following line declares a register named q which holds 7 qubits.

register q[7]

### Map Statement

While it is sufficient to refer to qubits by their offset in a single register, it is often more convenient to assign names to individual qubits. The map statement effectively provides an alias to a qubit or array of qubits under a different name. The following lines declare the single qubit q[0] to have the name ancilla and the array qubits to be an alias for q. Array indices start with 0.

register q[3]
map ancilla q[0]
map qubits q

The map statement also supports Python-style slicing, in which case it always declares an array alias. In the following line we relabel every other qubit as an ancilla qubit, starting with index 1.

register q[7]
map ancilla q[1:7:2]

After this instruction, ancilla[0] corresponds to q[1]; ancilla[1] and ancilla[2] correspond to q[3] and q[5], respectively.

### Let Statement

We allow identifiers to replace integers or floating-point numbers for convenience. There are no restrictions on capitalization. An integer defined in this way may be used in any context where an integer literal is valid, and a floating-point value may similarly be used in any context where a floating-point literal is valid. Note that the values are constant once defined. Example:

let total_count 4
let rotations 1.5

## Body Statements

### Gate Statement

Gates are listed one per statement, meaning each is terminated either by a newline or a separator. The first element of the statement is the gate name, followed by the gate’s arguments, which are whitespace-separated numbers or qubits. Elements of quantum registers, mapped aliases, and local variables (see the section on macros) may be freely interchanged as qubit arguments to each gate. The names of the gates are fixed but determined in the Gate Pulse File, except for macros.
The number of arguments (“arity”) must match the expected number. The following is an example of what a two-qubit gate may look like.

register q[3]
map ancilla q[1]

Sxx q[0] ancilla

The invocation of a macro is treated as completely equivalent to a gate statement.

### Gate Block

Multiple gates and/or macro invocations may be combined into a single block. This is similar, but not completely identical, to how C and related languages handle statement blocks. Macro definitions and header statements are not allowed in gate blocks. Additionally, statements such as macro definitions or loops syntactically expect a gate block and, unlike C, are not satisfied with a single gate.

Two different gate blocks exist: sequential and parallel. Sequential gate blocks use the standard C-style ‘{}’ brackets, while parallel blocks use angled ‘<>’ brackets, similar to C++ templates. This choice was made to avoid conflicting with ‘[]’ brackets, which are used in arrays, and to reserve ‘()’ for possible future use.

In a sequential block, each statement, macro, or gate block waits for the previous one to finish before executing. In a parallel gate block, all operations are executed at the same time. It is an error to request parallel operations that the machine is incapable of performing; however, it is not syntactically possible to forbid these, as they are determined by hardware constraints which may change with time.

Looping statements are allowed inside sequential blocks, but not inside parallel blocks. Blocks may be arbitrarily nested so long as the hardware can support the resulting sequence of operations. Blocks may not be nested directly within other blocks of the same type.

The following statement declares a parallel block with two gates.

< Sx q[0] | Sy q[1] >

This does the same but on different lines.

<
Sx q[0]
Sy q[1]
>

Here is a parallel block nested inside a sequential one.

{
Sxx q[0] q[1]
< Sx q[0] | Sy q[1] >
}

And sequential blocks may be nested inside parallel blocks.
<
Sx q[0]
{ Sx q[1] ; Sy q[1] }
>

#### Timing within a parallel block

If two gates in a parallel block have different durations (e.g., two single-qubit gates of different length), the default behavior is to start each gate within the parallel block simultaneously. The shorter gate(s) will then be padded with idles until the end of the gate block. For example, the command

< Rx q[1] 0.1 | Sx q[2] >

results in the Rx gate on q[1] with angle 0.1 radians and the Sx gate on q[2] starting at the same time; the Rx gate will finish first, and q[1] will idle while the Sx gate finishes. Once the Jaqal gate set becomes user-extensible, users may define their own scheduling within parallel blocks (e.g., so that gates all finish at the same time instead).

### Macro Statement

A macro can be used to treat a sequence of gates as a single gate. Gates inside a macro can access the same qubit registers and mapped aliases at the global level as all other gates, and additionally have zero or more arguments which are visible only inside the macro. Arguments allow the same macro to be applied to different combinations of physical qubits, much like a function in a classical programming language. A macro may use other macros that have already been declared.

A macro declaration is complete at the end of its code block. This implies that recursion is impossible. It also implies that macros can only reference other macros created earlier in the file. Due to the lack of conditional statements, recursion would always create an infinite loop and is therefore never desirable.

A macro is declared using the macro keyword, followed by the name of the macro, zero or more arguments, and a code block. Unlike C, a macro must use a code block, even if it contains only a single statement. The following example declares a macro.

macro foo a b {
Sx a
Sxx a q[0]
Sxx b q[0]
}

To simplify parsing, a line break is not allowed before the initial ‘{’, unlike C. However, statements may be placed on the same line following the ‘{’.
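Because macro invocation is equivalent to inlining the body with arguments substituted, the expansion can be pictured as simple token replacement. The following toy illustration (not part of any Jaqal toolchain; names are ours) expands the foo macro above:

```python
def expand_macro(body_lines, params, args):
    """Substitute macro arguments token-by-token into the macro body."""
    mapping = dict(zip(params, args))
    expanded = []
    for line in body_lines:
        tokens = [mapping.get(tok, tok) for tok in line.split()]
        expanded.append(" ".join(tokens))
    return expanded

# The 'foo' macro from the example above, invoked as: foo q[1] q[2]
body = ["Sx a", "Sxx a q[0]", "Sxx b q[0]"]
print(expand_macro(body, ["a", "b"], ["q[1]", "q[2]"]))
# → ['Sx q[1]', 'Sxx q[1] q[0]', 'Sxx q[2] q[0]']
```

Note how global register references like q[0] pass through untouched, while the formal arguments a and b are replaced by the actual qubits.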
### Loop Statement

A gate block may be executed a fixed number of times using the loop statement. The loop statement is intentionally restricted to running for a fixed number of iterations: this ensures it is easy to deterministically evaluate the runtime of a program, and consequently it is impossible to write a program which will not terminate. The following loop executes a sequence of statements seven times.

loop 7 {
Sx q[0]
Sz q[1]
Sxx q[0] q[1]
}

The same rules apply as in macro definitions: ‘{’ must appear on the same line as loop, but other statements may follow on the same line. Loops may appear in sequential gate blocks, but not in parallel gate blocks.

# Extensibility

As Jaqal and the QSCOUT project more broadly have extensibility as stated goals, it is important to clarify what is meant by this term. Primarily, Jaqal offers extensibility in the gates that can be performed. This occurs through the Gate Pulse File and through macros, which define composite gates usable in all contexts where a native gate can appear. Jaqal will be incrementally improved as new hardware capabilities come online and real-world use identifies areas for enhancement. The language itself, however, is not intended to have many forms of user-created extensibility as a software developer might envision the term. Features we do not intend to support include, but are not limited to, pragma statements, user-defined syntax, and a foreign function interface (i.e., using custom C or Verilog code in a Jaqal file).

# Examples

## Bell state preparation

This example prepares a Bell state using the classic Hadamard and controlled-X circuit, then measures it in the computational basis. Up to the limits of gate fidelity, the measurements of the two qubits should always match.

macro hadamard target {   // A Hadamard gate can be implemented as
Sy target                 // a pi/2 rotation around Y
Px target                 // followed by a pi rotation around X.
}

macro cnot control target {   // CNOT implementation from Maslov (2017)
Sy control
Sxx control target
<Sxd control | Sxd target>    // we can perform these in parallel
Syd control
}

register q[2]

prepare_all                   // Prepare each qubit in the computational basis.
hadamard q[1]
cnot q[1] q[0]
measure_all                   // Measure each qubit and read out the results.

However, there’s a more efficient way of preparing a Bell state that takes full advantage of the native Mølmer–Sørensen interaction of the architecture, rather than using it to replicate a controlled-X gate. The following snippet of code repeats that interaction 1024 times, measuring and resetting the ions after each repetition. All 1024 measurement results will be reported to the user.

register q[2]

loop 1024 {
prepare_all
Sxx q[0] q[1]
measure_all
}

## Single-Qubit Gate Set Tomography

register q[1]

// Fiducials
macro F0 qubit { }
macro F1 qubit { Sx qubit }
macro F2 qubit { Sy qubit }
macro F3 qubit { Sx qubit; Sx qubit }
macro F4 qubit { Sx qubit; Sx qubit; Sx qubit }
macro F5 qubit { Sy qubit; Sy qubit; Sy qubit }

// Germs
macro G0 qubit { Sx qubit }
macro G1 qubit { Sy qubit }
macro G2 qubit { I_Sx qubit }
macro G3 qubit { Sx qubit; Sy qubit }
macro G4 qubit { Sx qubit; Sx qubit; Sy qubit }
macro G5 qubit { Sx qubit; Sy qubit; Sy qubit }
macro G6 qubit { Sx qubit; Sy qubit; I_Sx qubit }
macro G7 qubit { Sx qubit; I_Sx qubit; I_Sx qubit }
macro G8 qubit { Sy qubit; I_Sx qubit; I_Sx qubit }
macro G9 qubit { Sx qubit; Sy qubit; Sy qubit; I_Sx qubit }
macro G10 qubit { Sx qubit; Sx qubit; Sy qubit; Sx qubit; Sy qubit; Sy qubit }

// Length 1
prepare_all
F0 q[0]
measure_all
prepare_all
F1 q[0]
measure_all
prepare_all
F2 q[0]
measure_all
prepare_all
F3 q[0]
measure_all
prepare_all
F4 q[0]
measure_all
prepare_all
F5 q[0]
measure_all
prepare_all
F1 q[0]; F1 q[0]
measure_all
prepare_all
F1 q[0]; F2 q[0]
measure_all
// and many more

// Repeated germs can be realized with the loop
prepare_all
F1 q[0]
loop 8 { G1 q[0] }
F1 q[0]
measure_all

# Data Output
## Format

When successfully executed, a single Jaqal file will generate a single ASCII text file (Linux line endings) in the following way:

1. Each call of measure_all at runtime will add a new line of data to the output file. (If measure_all occurs within a loop, or nested loops, then multiple lines of data will be written to the output file, one for each call of measure_all during execution.)
2. Each line of data written to the file will be a single bitstring, equal in length to the positive integer passed to register at the start of the program.
3. Each bitstring will be written in least-significant-bit order (little-endian).

For example, consider the program:

register q[2]

loop 2 {
prepare_all
Px q[0]
measure_all
}
loop 2 {
prepare_all
Px q[1]
measure_all
}

Assuming perfect execution, the output file would read:

10
10
01
01

While this output format is human-readable, it may nevertheless be unwieldy to work with directly. Therefore, a Python-based parser will be written to aid users in manipulating output data.

# Possible Future Capabilities

Jaqal is still under development and will gain new features as the QSCOUT hardware advances. While the precise feature set of future versions of Jaqal is still undetermined, we discuss some features that may be added and, in some cases, identify workarounds for the current lack of those features.

## Subset Measurement

Currently, the measurement operation of the QSCOUT hardware acts on all ions in the trap, destroying their quantum state and taking them out of the computational subspace. Future versions of the QSCOUT hardware will allow the isolation and measurement of a subset of qubits with a command of the form measure_subset <qubit> .... Similarly, a prepare_subset <qubit> ... operation will allow the reuse of measured qubits without destroying the quantum state of the remainder. These would be implemented in a Gate Pulse File and would not require a change to the Jaqal language.
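Pending the official Python parser promised in the Data Output section, the format is simple enough to read by hand. A minimal sketch (the function name is ours, not an official API):

```python
def parse_output(text, num_qubits):
    """Parse Jaqal output: one little-endian bitstring per measure_all call.

    Returns one integer per measurement, with bit k of the integer holding
    the outcome of qubit k (bitstrings are least-significant-bit first).
    """
    results = []
    for line in text.split("\n"):
        line = line.strip()
        if not line:
            continue
        assert len(line) == num_qubits
        # Little-endian: character i is qubit i, i.e. bit value 2**i.
        results.append(sum(int(bit) << i for i, bit in enumerate(line)))
    return results

# The example output above: q[0] flipped twice, then q[1] flipped twice.
print(parse_output("10\n10\n01\n01\n", 2))   # → [1, 1, 2, 2]
```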
## Measurement Feedback

The QSCOUT hardware does not currently support using measurement outcomes to conditionally execute future gates. We expect this capability to be added in a future version of the QSCOUT hardware, and Jaqal programs will be able to use it once it exists. We have chosen to delay adding syntax for measurement feedback to Jaqal until that time, to give us the flexibility to choose a syntax that best allows users to take advantage of the actual capabilities of our hardware, once those are known.

## Classical Computation

Jaqal does not currently support any form of classical computation. We understand that this is a limitation and expect future versions of Jaqal to support it. There are two relevant forms of classical computation that we are considering for Jaqal.

### Compile-Time Classical Computation

Performing classical computations at compile time, before the program is sent to the quantum computer, can vastly increase the expressiveness of the language. For example, consider the following experiment, which is not currently legal Jaqal code:

register q[1]

let pi 3.1415926536

loop 100 {
prepare_all
Ry q[0] pi/32
measure_all
prepare_all
Ry q[0] pi/16
measure_all
prepare_all
Ry q[0] 3*pi/32
measure_all
prepare_all
Ry q[0] pi/8
measure_all
}

Currently, Jaqal does not support inline parameter calculations like the above. The recommended workaround is to define additional constants as needed:

register q[1]

let pi_32 0.09817477042
let pi_16 0.1963495408
let pi_3_32 0.2945243113
let pi_8 0.3926990817

loop 100 {
prepare_all
Ry q[0] pi_32
measure_all
prepare_all
Ry q[0] pi_16
measure_all
prepare_all
Ry q[0] pi_3_32
measure_all
prepare_all
Ry q[0] pi_8
measure_all
}

Another case where compile-time classical computation could be useful is macro definition. For example, suppose you wished to define a macro for a controlled z rotation in terms of a (previously defined) CNOT macro:

...
macro CNOT control target {
...
}
macro CRz control target angle {
Rz target angle/2
CNOT control target
Rz target -angle/2
CNOT control target
}
...

Again, the above example is not currently legal Jaqal. We recommend, in such cases, that you manually unroll macros as needed, then define additional constants as above. That is, rather than using the above macro:

...
let phi 0.7853981634;
...
CRz q[0] q[1] phi;
...

you should instead call the gates the macro comprises, substituting the results of the appropriate calculations yourself:

...
let phi 0.7853981634;
let phi_2 0.3926990817;
let phi_m_2 -0.3926990817;
...
Rz q[1] phi_2;
CNOT q[0] q[1];
Rz q[1] phi_m_2;
CNOT q[0] q[1];
...

We recognize that this “manual compilation” is a significant inconvenience for writing readable and expressive code in Jaqal. We expect to include compile-time classical computation in a relatively early update to Jaqal, likely even before measurement feedback is available. Fortunately, metaprogramming (automated code generation) significantly eases the burden of the lack of classical computation features, and we highly recommend it to users of Jaqal.

### Run-Time Classical Computation

Users may also wish to do classical computation while a Jaqal program is running, based on the results of measurements. For example, in hybrid variational algorithms, a classical optimizer may use measurement results from one circuit to choose rotation angles used in the next circuit. In error-correction experiments, a decoder may need to compute which gates are necessary to restore a state based on the results of stabilizer measurements. Adaptive tomography protocols may need to perform statistical analyses on measurement results to determine which measurements will give the most information. As these examples show, run-time classical computation is useful only when measurement feedback is possible. Accordingly, we will consider this feature after we have added support for measurement feedback.
However, use cases like adaptive tomography and variational algorithms can be implemented today via metaprogramming techniques. After running a Jaqal file on the QSCOUT hardware, a metaprogram can parse the measurement results, then use that information to generate a new Jaqal file to run.

## Randomness

Executing quantum programs with gates chosen via classical randomness is desirable for a variety of reasons. Applications of randomized quantum programs include hardware benchmarking, error mitigation, and some quantum simulation algorithms. Jaqal does not currently have built-in support for randomization, although it may in the future, likely in combination with support for run-time classical computation. Our currently recommended workaround is to pre-compute any randomized elements of the algorithm, automatically generating Jaqal code to execute the random circuit selected. For example, the following program isn’t currently possible, as there’s no means of generating a random angle in Jaqal directly:

register q[1]

loop 100 {
prepare_all
// Do an X rotation on q[0] by a random angle between 0 and 2*pi.
measure_all
}

However, the same effect can be obtained by a metaprogram (written in Python, for the sake of example) that generates a Jaqal program:

from random import uniform
from math import pi

with open("randomness_example.jql", "w") as f:
    f.write("register q[1]\n\n")
    for idx in range(100):
        angle = uniform(0.0, 2.0 * pi)
        f.write("prepare_all\n")
        f.write("Rx q[0] %f\n" % angle)
        f.write("measure_all\n\n")

While the generated Jaqal program is much larger than one that could be written in a potential future version of Jaqal supporting randomized execution, the metaprogram that generates it is quite compact.
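The same metaprogramming trick covers the compile-time arithmetic limitation discussed earlier: rather than entering each fractional-pi constant by hand, a generator can compute the angles and emit the Jaqal text. A sketch (the formatting precision is our choice):

```python
from math import pi

# Generate the Ry angle-sweep experiment from the compile-time example,
# computing the fractional-pi angles instead of entering them by hand.
angles = [pi / 32, pi / 16, 3 * pi / 32, pi / 8]

lines = ["register q[1]", "", "loop 100 {"]
for angle in angles:
    lines += ["prepare_all", "Ry q[0] %.10f" % angle, "measure_all"]
lines.append("}")
program = "\n".join(lines)

print(program)
```

Writing `program` to a .jql file yields a sweep equivalent to the hand-written constants shown in the Compile-Time Classical Computation section.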
# Acknowledgements Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for DOE’s National Nuclear Security Administration under contract DE-NA0003525. # Changelog ## v1.1 (2020-08-06) • Corrected definition of Sxx. • Fixed typo in GST example. • Fixed typo in Compile-Time Classical Computation sample code. • Added syntax coloring.
# pair_style tdpd command

## Syntax

pair_style style args

- style = edpd or mdpd or mdpd/rhosum or tdpd
- args = list of arguments for a particular style

edpd args = cutoff seed
  cutoff = global cutoff for eDPD interactions (distance units)
  seed = random # seed (integer) (if <= 0, eDPD will use current time as the seed)
mdpd args = T cutoff seed
  T = temperature (temperature units)
  cutoff = global cutoff for mDPD interactions (distance units)
  seed = random # seed (integer) (if <= 0, mDPD will use current time as the seed)
mdpd/rhosum args = (none)
tdpd args = T cutoff seed
  T = temperature (temperature units)
  cutoff = global cutoff for tDPD interactions (distance units)
  seed = random # seed (integer) (if <= 0, tDPD will use current time as the seed)

## Examples

pair_style edpd 1.58 9872598
pair_coeff * * 18.75 4.5 0.41 1.58 1.42E-5 2.0 1.58
pair_coeff 1 1 18.75 4.5 0.41 1.58 1.42E-5 2.0 1.58 power 10.54 -3.66 3.44 -4.10
pair_coeff 1 1 18.75 4.5 0.41 1.58 1.42E-5 2.0 1.58 power 10.54 -3.66 3.44 -4.10 kappa -0.44 -3.21 5.04 0.00

pair_style hybrid/overlay mdpd/rhosum mdpd 1.0 1.0 65689
pair_coeff 1 1 mdpd/rhosum 0.75
pair_coeff 1 1 mdpd -40.0 25.0 18.0 1.0 0.75

pair_style tdpd 1.0 1.58 935662
pair_coeff * * 18.75 4.5 0.41 1.58 1.58 1.0 1.0E-5 2.0
pair_coeff 1 1 18.75 4.5 0.41 1.58 1.58 1.0 1.0E-5 2.0 3.0 1.0E-5 2.0

## Description

The edpd style computes the pairwise interactions and heat fluxes for eDPD particles following the formulations in (Li2014_JCP) and (Li2015_CC).
The time evolution of an eDPD particle is governed by the conservation of momentum and energy. The total force $F_i$ on a particle comprises the conservative force $F_{ij}^C$, the dissipative force $F_{ij}^D$, and the random force $F_{ij}^R$, in which the exponent of the weighting function $s$ can be defined as a temperature-dependent variable. The heat flux between particles accounts for the collisional heat flux $q^C$, the viscous heat flux $q^V$, and the random heat flux $q^R$. The mesoscopic heat friction $\kappa$ depends on the kinematic viscosity $\upsilon$. For more details, see Eq. (15) in (Li2014_JCP).

The following coefficients must be defined in an eDPD system for each pair of atom types via the pair_coeff command, as in the examples above:

- A (force units)
- gamma (force/velocity units)
- power_f (positive real)
- cutoff (distance units)
- kappa (thermal conductivity units)
- power_T (positive real)
- cutoff_T (distance units)
- optional keyword = power or kappa

The keyword power or kappa is optional.
Both “power” and “kappa” require 4 parameters $c_1, c_2, c_3, c_4$, giving the temperature dependence of the exponent

$$s(T) = \mathrm{power\_f}\times\left(1 + c_1(T-1) + c_2(T-1)^2 + c_3(T-1)^3 + c_4(T-1)^4\right)$$

and of the mesoscopic heat friction

$$s_T(T) = \mathrm{kappa}\times\left(1 + c_1(T-1) + c_2(T-1)^2 + c_3(T-1)^3 + c_4(T-1)^4\right)$$

If the keyword power or kappa is not specified, the eDPD system uses constant power_f and kappa, independent of temperature changes.

The mdpd/rhosum style computes the local particle mass density rho for mDPD particles by kernel-function interpolation. The following coefficient must be defined for each pair of atom types via the pair_coeff command, as in the examples above:

- cutoff (distance units)

The mdpd style computes the many-body interactions between mDPD particles following the formulations in (Li2013_POF). The dissipative and random forces have the same form as in classical DPD, but the conservative force is local-density dependent: the first term in $F^C$, with a negative coefficient A < 0, is an attractive force within an interaction range $r_c$, and the second term, with B > 0, is a density-dependent repulsive force within an interaction range $r_d$. The following coefficients must be defined for each pair of atom types via the pair_coeff command, as in the examples above.
- A (force units)
- B (force units)
- gamma (force/velocity units)
- cutoff_c (distance units)
- cutoff_d (distance units)

The tdpd style computes the pairwise interactions and chemical concentration fluxes for tDPD particles following the formulations in (Li2015_JCP). The time evolution of a tDPD particle is governed by the conservation of momentum and concentration. The total force $F_i$ comprises the conservative force $F_{ij}^C$, the dissipative force $F_{ij}^D$, and the random force $F_{ij}^R$. The concentration flux between two tDPD particles includes the Fickian flux $Q_{ij}^D$ and the random flux $Q_{ij}^R$, whose strengths are set by the parameters kappa and epsilon. $m_s$ is the mass of a single solute molecule; in general, $m_s$ is much smaller than the mass $m$ of a tDPD particle. For more details, see (Li2015_JCP).

The following coefficients must be defined for each pair of atom types via the pair_coeff command, as in the examples above:

- A (force units)
- gamma (force/velocity units)
- power_f (positive real)
- cutoff (distance units)
- cutoff_CC (distance units)
- kappa_i (diffusivity units)
- epsilon_i (diffusivity units)
- power_cc_i (positive real)

The last 3 values must be repeated Nspecies times, so that values for each of the Nspecies chemical species are specified, as indicated by the “_i” suffix. In the first pair_coeff example above for pair_style tdpd, Nspecies = 1. In the second example, Nspecies = 2, so 3 additional coeffs are specified (for species 2).
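As a worked example of the optional power keyword described above, the temperature-dependent exponent s(T) is simply a quartic in (T − 1) scaled by power_f. A sketch (the function name is ours; coefficients taken from the edpd pair_coeff examples):

```python
def edpd_exponent(T, power_f, c=(0.0, 0.0, 0.0, 0.0)):
    """Temperature-dependent weighting exponent for the edpd style:
    s(T) = power_f * (1 + c1*(T-1) + c2*(T-1)**2 + c3*(T-1)**3 + c4*(T-1)**4)
    """
    dT = T - 1.0
    poly = 1.0 + sum(ci * dT ** (i + 1) for i, ci in enumerate(c))
    return power_f * poly

# Coefficients from the 'power 10.54 -3.66 3.44 -4.10' example above,
# applied to power_f = 0.41 from the same pair_coeff line.
print(edpd_exponent(1.0, 0.41, (10.54, -3.66, 3.44, -4.10)))  # → 0.41 at T = 1
```

At T = 1 the polynomial collapses to 1, so s(1) = power_f; the same form with kappa in place of power_f gives the temperature dependence of the mesoscopic heat friction.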
## Example scripts

There are example scripts for using all these pair styles in examples/USER/meso. The example for an eDPD simulation models heat conduction with source terms, an analog of the periodic Poiseuille flow problem. The setup follows Fig. 12 in (Li2014_JCP). The output of the short eDPD simulation (about 2 minutes on a single core) gives temperature and density profiles (figures omitted here).

The example for an mDPD simulation models the oscillations of a liquid droplet started from a liquid film. The mDPD parameters are adopted from (Li2013_POF). The short mDPD run (about 2 minutes on a single core) generates a particle trajectory that can be visualized as an animation from the initial to the final state of the simulation (images omitted here).

The example for a tDPD simulation computes the effective diffusion coefficient of a tDPD system using a method analogous to the periodic Poiseuille flow. The tDPD system is specified with two chemical species, and the setup follows Fig. 1 in (Li2015_JCP). The output of the short tDPD simulation (about one and a half minutes on a single core) gives the concentration profiles of the two chemical species (figure omitted here).

## Mixing, shift, table, tail correction, restart, rRESPA info

The styles edpd, mdpd, mdpd/rhosum and tdpd do not support mixing. Thus, coefficients for all I,J pairs must be specified explicitly.

The styles edpd, mdpd, mdpd/rhosum and tdpd do not support the pair_modify shift, table, and tail options.

The styles edpd, mdpd, mdpd/rhosum and tdpd do not write information to binary restart files. Thus, you need to re-specify the pair_style and pair_coeff commands in an input script that reads a restart file.

## Restrictions

The pair styles edpd, mdpd, mdpd/rhosum and tdpd are part of the USER-MESO package. They are only enabled if LAMMPS was built with that package. See the Build package doc page for more info.
# Quality Water Index Polynomial (QWIP)

## 1 - Product Summary

This algorithm returns the Quality Water Index Polynomial (QWIP) score, representing a quantitative metric to evaluate the quality of ocean color remote sensing reflectance ($R_{rs}$) data (Dierssen et al. 2022). The relationship between the Apparent Visible Wavelength (AVW; Vandermeulen et al. 2020) and a multi-channel waveband index is used to identify spectra that fall outside the general trends observed in aquatic optics for optically deep waters. The approach was developed with a large global dataset representing blue, green, and brown waters and was further tested extensively with field and satellite datasets. This simple approach can provide a level of uncertainty about a retrieved spectrum and flag questionable or unusual spectra for further analysis.

Algorithm Point of Contact: Heidi Dierssen, University of Connecticut

## 2 - Algorithm Description

Inputs: $R_{rs}$ at all available wavelengths between 400–700 nm (rrs_vvv).

Outputs: qwip, Quality Water Index Polynomial score (unitless)

Approach: The Quality Water Index Polynomial (QWIP) is a mathematical model relating the calibrated Apparent Visible Wavelength (AVW; Vandermeulen et al. 2020) to a normalized difference index (NDI). The QWIP score is calculated as the difference between a measured and an AVW-predicted NDI. To initiate, the measured NDI is determined as:

$$NDI = \frac{R_{rs} (\lambda_2) - R_{rs} (\lambda_1)}{R_{rs}(\lambda_2) + R_{rs} (\lambda_1)}$$

where $\lambda_1$ = Rrs_490 and $\lambda_2$ = Rrs_665 (for multispectral sensors, the closest wavelength match is used).
The predicted NDI is related to the Apparent Visible Wavelength (AVW) as follows:

$$NDI_{predicted}=p_1AVW^{4}+p_2 AVW^{3}+p_3 AVW^{2}+p_4 AVW+p_5$$

$$p=(-8.399885\times10^{-9},1.715532 \times10^{-5},-1.301670 \times 10^{-2},4.357838 \times 10^{0},-5.449532 \times 10^{2} )$$

Finally, the QWIP score is calculated as the difference between the NDI and $NDI_{predicted}$:

$$QWIP_{Score}=NDI(490,665)-NDI_{predicted}$$

The NDI provides a means to highlight the variability of logarithmically distributed data on a linear scale such that the distance either above or below the central tendency (QWIP) can be scored with a positive or negative value. Generally, hyperspectral data with QWIP scores exceeding a value of |0.2| may be subject to additional screening to determine any evident spectral anomalies. It may be necessary to relax the nominal threshold (e.g., |0.3|) when applying QWIP to multispectral sensors.

## 3 - Implementation

Product Short Name: qwip

Level-2 Product Suite: None

Calling in L2GEN:
l2prod = qwip
qwip_coef = [p1,p2,p3,p4,p5]

## 4 - Assessment

Algorithm Development: The method was developed using a large global dataset of remote sensing reflectance (n = 1,629) compiled from different studies (CASCK-P dataset; see Dierssen et al. 2022).

Algorithm Verification: The QWIP approach was tested using several different regional field datasets collected with above-water methodology and on satellite retrievals of water-leaving reflectance data.

In situ verification:

Satellite verification:

## 5 - References

Dierssen, H. M., Vandermeulen, R. A., Barnes, B. B., Castagna, A., Knaeps, E., & Vanhellemont, Q., 2022: "QWIP: A Quantitative Metric for Quality Control of Aquatic Reflectance Spectral Shape using the Apparent Visible Wavelength," Frontiers in Remote Sensing, 32. https://doi.org/10.3389/frsen.2022.869611

Vandermeulen, R.
A., Mannino, A., Craig, S.E., Werdell, P.J., 2020: "150 shades of green: Using the full spectrum of remote sensing reflectance to elucidate color shifts in the ocean," Remote Sensing of Environment, 247, 111900. https://doi.org/10.1016/j.rse.2020.111900

Vanhellemont, Q., 2020: "Sensitivity analysis of the dark spectrum fitting atmospheric correction for metre- and decametre-scale satellite imagery using autonomous hyperspectral radiometry," Optics Express, 28(20), 29948-29965. https://doi.org/10.1364/OE.397456

TBD
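The calculation in Section 2 is small enough to sketch directly. The snippet below is an illustrative stand-alone implementation, not the operational l2gen code: the function names are mine, the reflectance values in the example are made up, and the AVW value is assumed to be precomputed from the full $R_{rs}$ spectrum per Vandermeulen et al. (2020).

```python
# Illustrative stand-alone QWIP calculation (names are mine, not l2gen's).
# Coefficients p1..p5 as given in Section 2 (Dierssen et al. 2022).
QWIP_COEF = (-8.399885e-9, 1.715532e-5, -1.301670e-2, 4.357838e0, -5.449532e2)

def ndi(rrs_490, rrs_665):
    """Measured normalized difference index, NDI(490, 665)."""
    return (rrs_665 - rrs_490) / (rrs_665 + rrs_490)

def ndi_predicted(avw):
    """NDI predicted from the Apparent Visible Wavelength (nm)."""
    p1, p2, p3, p4, p5 = QWIP_COEF
    return p1 * avw**4 + p2 * avw**3 + p3 * avw**2 + p4 * avw + p5

def qwip_score(avw, rrs_490, rrs_665):
    """QWIP score: measured NDI minus the AVW-predicted NDI."""
    return ndi(rrs_490, rrs_665) - ndi_predicted(avw)

# Example with made-up reflectances (sr^-1) and a precomputed AVW of 500 nm:
print(qwip_score(500.0, 0.005, 0.001))
```

As noted above, hyperspectral spectra with |score| > 0.2 would warrant additional screening, with the threshold relaxed to about |0.3| for multispectral sensors.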
https://ikariam.fandom.com/wiki/Building:Carpenter
Carpenter

Function: Lowers demand
Requirements: Carpentry ()
Expansion requirements: and
Use requirements: None

## Description

Only the best lumbers are used at the Carpenter's Workshop. Therefore our handy craftsmen are able to build a solid framework and our houses don't have to be repaired all the time.

• Every level of the Carpenter lowers demand for by 1% per expansion (only in the town it is built).

## Explanation of Reduction Buildings

All reduction buildings add their deduction percentage to that given by Pulley, Geometry and Spirit Level. This means the actual discount is an additional 1% per level of the reduction building, plus the reduction from the level of research you have completed. For example, a Level 1 reduction building combined with Pulley has a net of 3% reduction, with Geometry there is a net of 7%, and with Spirit Level there is a net of 15%. With the maximum level of reduction buildings (Level 32, 32% reduction) and Spirit Level (14%), the maximum reduction for that resource is 46%. This means you use only 54% of the original cost.

## New look

The Carpenter is getting a new look in patch 0.5.0.

## Expansion Details

The time (in seconds) it takes to upgrade to the next level is determined by the following formula:

${ \text{Building time (seconds)} = \left \lbrack \cfrac{125,660}{37} \times 1.06^\text{Level} - 2,808\right \rbrack }$

The accumulative time (in seconds) it takes to upgrade up to the next level is determined by the following formula:

${ \text{Accumulative building time (seconds)} = \left \lbrack \cfrac{6,659,980}{111} \times \left (\ 1.06^\text{Level} -\ 1\ \right ) - 2,808 \times \text{Level}\right \rbrack}$

## Other Reduction Buildings
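The two Expansion Details formulas above are easy to evaluate and cross-check with a short script (a sketch; the function names are mine). The accumulative formula should equal the running sum of the per-level times, since both are built from the same geometric series.

```python
def build_time(level):
    """Seconds to complete the expansion that reaches the given level."""
    return 125660 / 37 * 1.06 ** level - 2808

def total_build_time(level):
    """Accumulated seconds to build from level 0 up to the given level."""
    return 6659980 / 111 * (1.06 ** level - 1) - 2808 * level

# Cross-check: the accumulative formula is the sum of the per-level times.
for lvl in (1, 5, 10):
    running = sum(build_time(l) for l in range(1, lvl + 1))
    print(lvl, round(build_time(lvl)), round(total_build_time(lvl)), round(running))
```

For instance, the first expansion works out to roughly 792 seconds, and the two formulas agree at every level.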
https://mooseframework.inl.gov/syntax/ADBCs/index.html
## Information and Tools A BC is an object that represents a PDE boundary condition. It is applied to select boundaries of the simulation domain using the boundary parameter in the relevant sub-block of the ADBCs block of a MOOSE input file. There are two different flavors of BCs: IntegratedBCs and NodalBCs. IntegratedBCs are integrated over the domain boundary and are imposed weakly. NodalBCs are applied strongly at individual nodes and are not integrated. An AD prefix, as in ADNodalBC, indicates that the Jacobians for subclasses deriving from the parent type are computed using automatic differentiation. In an ADBC subclass the computeQpResidual() function must be overridden. To create a custom ADNodalBC, you can follow the pattern of the ADFunctionDirichletBC object implemented and included in the MOOSE framework. For demonstration of an ADIntegratedBC, you can refer to the ADRobinBC test object.
https://cryptography.fandom.com/wiki/Secure_Remote_Password_protocol
The Secure Remote Password protocol (SRP) is a password-authenticated key agreement protocol.

Overview

The SRP protocol has a number of desirable properties: it allows a user to authenticate himself to a server, it is resistant to dictionary attacks mounted by an eavesdropper, and it does not require a trusted third party. It effectively conveys a zero-knowledge password proof from the user to the server. Only one password can be guessed at per attempt in revision 6 of the protocol. One of the interesting properties of the protocol is that even if one or two of the cryptographic primitives it uses are attacked, it is still secure. The SRP protocol has been revised several times, and is currently at revision six.

The SRP protocol creates a large private key shared between the two parties in a manner similar to Diffie-Hellman, then verifies to both parties that the two keys are identical and that both sides have the user's password. In cases where encrypted communications as well as authentication are required, the SRP protocol is more secure than the alternative SSH protocol and faster than using Diffie-Hellman with signed messages. It is also independent of third parties, unlike Kerberos. The SRP protocol, version 3 is described in RFC 2945. SRP version 6 is also used for strong password authentication in SSL/TLS[1] and other standards such as EAP[2] and SAML, and is being standardized in IEEE P1363 and ISO/IEC 11770-4.

Protocol

The following notation is used in this description of the protocol, version 6:

• q and N = 2q + 1 are chosen such that both are prime (N is a safe prime and q is a Sophie Germain prime). N must be large enough so that computing discrete logarithms modulo N is infeasible.
• All arithmetic is performed in the field of integers modulo N, $\mathbb{Z}_N$.
• g is a generator of the multiplicative group.
• k is a parameter derived by both sides; for example, k = H(N, g).
• s is a small salt.
• I is an identifying username.
• p is the user's password.
• H() is a hash function; e.g., SHA-256.
• v is the host's password verifier, v = g^x, with x = H(s, p).
• u, a and b are random.
• | denotes concatenation.

All other variables are defined in terms of these.

First, to establish a password p with Steve, Carol picks a small random salt s, and computes x = H(s, p) and v = g^x. Steve stores v and s, indexed by I, as Carol's password verifier and salt. x is discarded because it is equivalent to the plaintext password p. This step is completed before the system is used.

1. Carol -> Steve: I | A, with A = g^a
2. Steve -> Carol: s | B, with B = kv + g^b
3. Both: u = H(A, B)
4. Carol: S_Carol = (B - kg^x)^(a + ux)
5. Carol: K_Carol = H(S_Carol)
6. Steve: S_Steve = (Av^u)^b
7. Steve: K_Steve = H(S_Steve)

Now the two parties have a shared, strong session key K. To complete authentication, they need to prove to each other that their keys match. One possible way is as follows:

1. Carol -> Steve: M_1 = H(H(N) XOR H(g) | H(I) | s | A | B | K_Carol). Steve verifies M_1.
2. Steve -> Carol: M_2 = H(A | M_1 | K_Steve). Carol verifies M_2.

This method requires guessing more of the shared state to be successful in impersonation than just the key. While most of the additional state is public, private information could safely be added to the inputs to the hash function, like the server private key.

The two parties also employ the following safeguards:

1. Carol will abort if she receives B == 0 (mod N) or u == 0.
2. Steve will abort if he receives A == 0 (mod N).
3. Carol must show her proof of K first. If Steve detects that Carol's proof is incorrect, he must abort without showing his own proof of K.
Implementation example in Python

# An example SRP-6a authentication (Python 3)
# based on http://srp.stanford.edu/design.html
import hashlib
import random

def global_print(*names):
    x = lambda s: "0x%x" % s if isinstance(s, int) else "%s" % s
    print("".join("%s = %s\n" % (name, x(globals()[name])) for name in names))

def H(*a):  # a one-way hash function
    return int(hashlib.sha256(str(a).encode()).hexdigest(), 16) % N

def cryptrand(n=1024):
    return random.SystemRandom().getrandbits(n) % N

# A large safe prime (N = 2q+1, where q is prime)
# All arithmetic is done modulo N
# (generated using "openssl dhparam -text 1024")
N = '''00:c0:37:c3:75:88:b4:32:98:87:e6:1c:2d:a3:32:
4b:1b:a4:b8:1a:63:f9:74:8f:ed:2d:8a:41:0c:2f:
c2:1b:12:32:f0:d3:bf:a0:24:27:6c:fd:88:44:81:
97:aa:e4:86:a6:3b:fc:a7:b8:bf:77:54:df:b3:27:
c7:20:1f:6f:d1:7f:d7:fd:74:15:8b:d3:1c:e7:72:
c9:f5:f8:ab:58:45:48:a9:9a:75:9b:5a:2c:05:32:
16:2b:7b:62:18:e8:f1:42:bc:e2:c3:0d:77:84:68:
9a:48:3e:09:5e:70:16:18:43:79:13:a8:c3:9c:3d:
d0:d4:ca:3c:50:0b:88:5f:e3'''
N = int(''.join(N.split()).replace(':', ''), 16)
g = 2  # A generator modulo N
k = H(N, g)  # Multiplier parameter (k=3 in legacy SRP-6)

print("#. H, N, g, and k are known beforehand to both client and server:")
global_print("H", "N", "g", "k")

print("0. server stores (I, s, v) in its password database")
# the server must first generate the password verifier
I = "person"        # example username (the page omits these two definitions)
p = "password1234"  # example password
s = cryptrand(64)   # Salt for the user
x = H(s, p)         # Private key
v = pow(g, x, N)    # Password verifier
global_print("I", "p", "s", "x", "v")

print("1. client sends username I and public ephemeral value A to the server")
a = cryptrand()
A = pow(g, a, N)
global_print("a", "A")  # client->server (I, A)

print("2. server sends user's salt s and public ephemeral value B to the client")
b = cryptrand()
B = (k * v + pow(g, b, N)) % N
global_print("b", "B")  # server->client (s, B)

print("3. client and server calculate the random scrambling parameter")
u = H(A, B)  # Random scrambling parameter
global_print("u")

print("4. client computes session key")
x = H(s, p)
S_c = pow(B - k * pow(g, x, N), a + u * x, N)
K_c = H(S_c)
global_print("S_c", "K_c")

print("5. server computes session key")
S_s = pow(A * pow(v, u, N), b, N)
K_s = H(S_s)  # (the original read H(S_c) here, which hid any key mismatch)
global_print("S_s", "K_s")

print("6. client sends proof of session key to server")
M_c = H(H(N) ^ H(g), H(I), s, A, B, K_c)  # the client uses its own key K_c
global_print("M_c")  # client->server (M_c) ; server verifies M_c

print("7. server sends proof of session key to client")
M_s = H(A, M_c, K_s)  # the server uses its own key K_s
global_print("M_s")  # server->client (M_s) ; client verifies M_s
https://indico.fnal.gov/event/15949/contributions/34957/
# 36th Annual International Symposium on Lattice Field Theory Jul 22 – 28, 2018 Kellogg Hotel and Conference Center EST timezone ## $J/\psi$-nucleon scattering in $P_{c}^{+}$ pentaquark channels Jul 27, 2018, 3:40 PM 20m 105 (Kellogg Hotel and Conference Center) ### 105 #### Kellogg Hotel and Conference Center 219 S Harrison Rd, East Lansing, MI 48824 ### Speaker Ms Ursa Skerbis (Jozef Stefan Institute, Ljubljana, Slovenija) ### Description Two pentaquarks $P_{c}^{+}$ were discovered by LHCb collaboration as peaks in the $J/\psi$-nucleon invariant mass. We performed the lattice QCD study of the scattering between $J/\psi$ meson and nucleon in the channels with $J^{P}=\frac{3}{2}^{+},\frac{3}{2}^{-}, \frac{5}{2}^{+}, \frac{5}{2}^{-}$, where $P_{c}^{+}$ was discovered. Energies of the eigenstates in these channels are extracted for the first time from the lattice. We consider the single-channel approximation as a first step towards understanding these challenging channels. ### Primary author Ms Ursa Skerbis (Jozef Stefan Institute, Ljubljana, Slovenija) ### Co-author Prof. Sasa Prelovsek (University of Ljubljana) Slides
http://www.ck12.org/book/CK-12-Chemistry---Second-Edition/r13/section/9.6/
CK-12 Chemistry - Second Edition

# 9.6: Periodic Trends in Electron Affinity

Difficulty Level: At Grade
Created by: CK-12

## Lesson Objectives

The student will:

• define electron affinity.
• describe the trends for electron affinity in the periodic table.

## Vocabulary

• electron affinity

## Introduction

We have talked about atomic structure, electronic configurations, size of atoms and ions, ionization energy, and electronegativity. The final periodic trend that we will examine is how atoms gain electrons.

## Electron Affinity Defined

Atoms can gain or lose electrons. When an atom gains an electron, the energy given off is known as the electron affinity. Electron affinity is defined as the energy released when an electron is added to a gaseous atom or ion.

$T_{(g)} + e^- \rightarrow T^-_{(g)}$

For most elements, the addition of an electron to a gaseous atom releases potential energy.

$\text{Br}_{(g)} + e^- \rightarrow \text{Br}^-_{(g)} \ \ \ \ \ \ \Delta H = -325 \ \text{kJ/mol}$

## Group and Period Trends in Electron Affinity

Let's look at the electron configurations of a few elements and the trend that develops within groups and periods. The table below shows the electron affinities for the halogen family.

Electron Affinities for Group 7A

| Element | Electron Configuration | Electron Affinity (kJ/mol) |
|---|---|---|
| Fluorine, F | $\text{[He]}2s^22p^5$ | $-328$ |
| Chlorine, Cl | $\text{[Ne]}3s^23p^5$ | $-349$ |
| Bromine, Br | $\text{[Ar]}4s^24p^5$ | $-325$ |
| Iodine, I | $\text{[Kr]}5s^25p^5$ | $-295$ |

Going down a group, the electron affinity generally decreases because of the increase in size of the atoms. Remember that within a family, atoms located lower on the periodic table are larger because there are more filled energy levels.
When an electron is added to a large atom, less energy is released because the electron cannot move as close to the nucleus as it can in a smaller atom. Therefore, as the atoms in a family get larger, the electron affinity gets smaller.

There are exceptions to this trend, especially when comparing the electron affinity of smaller atoms. In the table above, the electron affinity for fluorine is less than that for chlorine. This phenomenon is observed in other families as well. The electron affinity of the elements in the second period is less than the electron affinity of the elements in the third period. For instance, the electron affinity for oxygen is less than the electron affinity for sulfur. This is most likely due to the fact that the elements in the second period have such small electron clouds (n = 2) that electron repulsion in these elements is greater than that in the rest of the family.

Overall, each row in the periodic table shows a similar general trend: electron affinity increases from left to right across the period. The general trend in the electron affinity for atoms is almost the same as the trend for ionization energy. This is because both electron affinity and ionization energy are highly related to atomic size. Large atoms have low ionization energy and low electron affinity. Therefore, they tend to lose electrons. In general, the opposite is true for small atoms. Since they are small, they have high ionization energies and high electron affinities. Therefore, the small atoms tend to gain electrons.

The major exception to this rule is the noble gases. Noble gases follow the general trend for ionization energies, but do not follow the general trend for electron affinities. Even though the noble gases are small atoms, their outer energy levels are completely filled with electrons. Any added electron cannot enter their outermost energy level and would have to be the first electron in a new (larger) energy level. This causes the noble gases to have essentially zero electron affinity.
When atoms become ions, the process involves either releasing energy (through electron affinity) or absorbing energy (ionization energy). Therefore, the atoms that require a large amount of energy to release an electron will most likely be the atoms that give off the most energy while accepting an electron. In other words, nonmetals will gain electrons most easily since they have large electron affinities and large ionization energies. Metals will lose electrons since they have low ionization energies and low electron affinities.

## Lesson Summary

• Electron affinity is the energy released when an electron is added to a gaseous atom or ion.
• Electron affinity generally decreases going down a group and increases left to right across a period.
• Nonmetals tend to have the highest electron affinities.

This video shows the relationships between atomic size, ionization energy, and electron affinity.

This pdf document reviews the causes and relationships of the trends in atomic size, ionization energy, electronegativity, and electron affinity.

## Review Questions

1. Define electron affinity and write an example equation.
2. Choose the element in each pair that has the lower electron affinity.
   1. Li or N
   2. Cl or Na
   3. Ca or K
   4. Mg or F
3. Why is the electron affinity for calcium higher than that of potassium?
4. Which of the following will have the largest electron affinity?
   1. Se
   2. F
   3. Ne
   4. Br
5. Which of the following will have the smallest electron affinity?
   1. Na
   2. Ne
   3. Al
   4. Rb
6. Place the following elements in order of increasing electron affinity: Tl, Br, S, K, Al.
7. Describe the general trend for electron affinities in period 2.
8. Why does sulfur have a greater electron affinity than phosphorus does?
https://www.usgs.gov/media/videos/thermal-halemaumau-lava-lake
Thermal of Halemaumau Lava Lake

Detailed Description

This Quicktime movie shows a time-lapse sequence of the lava lake captured by a thermal camera on the rim of Halema‘uma‘u crater. The sequence is shown at a speed of about 30 times actual. By viewing the sequence at this speed, spotting the upwelling area in the lake is easier than in a still photograph.

Details

Image Dimensions: 850 x 480
Length: 00:00:28
Location Taken: HI, US
https://publications.drdo.gov.in/ojs/index.php/dsj/article/download/5034/4584
Effect of Surface Fluorination of Poly (p-Phenylene Terephthalamide) Fiber

The rail track rocket sled (RTRS) national test facility at Terminal Ballistics Research Laboratory (TBRL) has been established to provide a simulated flight environment for carrying out aerodynamic studies, terminal studies and kinematic studies of a variety of test articles. The sled velocity is a critical parameter in evaluation trials. This velocity is also used to ensure that the maximum speed and allowable g loading do not exceed the values which the test article will experience under free flight in air [1]. Overseas, facilities have been set up to attain velocities ranging from subsonic to hypersonic [2]. The rocket sled at TBRL can presently be accelerated to travel along the rail track at velocities up to 500 m/s, and capability is being built to increase velocity beyond 500 m/s. Signals acquired from the existing magneto-inductive arrangement have been analysed in the present work. The experiments indicate that with increase in velocity the rate of change of flux increases and the amplitude of induced emf also increases, but the terminal voltage decreases and the shape of the acquired pulse gets distorted. The parameters of the magneto-inductive pickup have been modified in such a way that there is improvement in amplitude and shape of the received pulse with increase in velocity. The improved signals have been analysed and simulation results validated with feasible experiments. This paper also discusses issues and challenges, and proposes recommendations for improving the sensor for measurement of velocity beyond Mach 1.5. It has been found that it is prudent to reduce the inductance by reducing the number of turns and changing the core from soft iron to air, which improves the response of the inductive pickup coil at high velocity. The problem of the motion of a magnetic field due to the motion of a permanent magnet has been a subject of scientific controversy for many decades [3,4].
The debate has been whether the field due to a permanent magnet exhibits wave-like behaviour or moves as a body. It has a bearing on our understanding of the way emf is induced. The discussion continues from old papers [7-9] to current papers [3,4]. The present work is not a work on this theory but, being the experimental outcome of a practical application wherein simple modifications enhance the capability, it will help studies in magneto-kinematics, too. The pickup coil sensors are one of the oldest, and the operating principles of coil sensors are generally known, but the technical details and practical implementation of the induction coil sensor are only known to specialists [5]. The implementation of magneto-inductive technology in an application specific integrated circuit (ASIC), which gives an inherently digital output (oscillates), can achieve a resolution of the order of 10 nT while sensing an external magnetic field [6]; such devices are commercially available. The disadvantage of such a system is that it does not offer a time reference, and it requires a board to be designed along with the ASIC, driver and multiplexer circuit. Further, 1200 such boards with power supply would need to be deployed in the field. The proposed sensor is just a pickup coil which is ruggedized for use in the field. At RTRS the test articles are mounted on specially designed sleds, which slide and accelerate down the rail track to the desired velocity using solid propellant rocket motors. The sled velocity is a critical parameter in all dynamic trials. At RTRS, the measurement of sled velocity at regular distances is carried out through a magnet and coil system. As shown in Fig. 1, a U-shaped (or horse-shoe) magnet is mounted on the sled and inductive pickups are fixed along the rail track at 10 m separation from one another. Although the use of inductive pickups for measurement of sled velocity was reported more than 50 years ago [7,8], even now the interest in these systems continues to grow.
This is mainly because the network of magneto-inductive sensors can be installed over distances of several kilometres and yet obtain large signals without any excitation or signal conditioning, and the sensor can be fabricated by the user. The use of accelerometers in combination with coils has also been reported9, an approach now largely forgotten but relevant in the present context. A feasibility study to measure velocity using interferometry10 has been carried out, but no further development has been reported. In this study the signal pulse from the network of magnet and coil arrangements has to propagate distances as large as 4 km. The sensor is passive, that is, there is no excitation or active circuit, and the arrangement is maintenance-free. The moving magnet generates an approximately sinusoidal voltage signal in the pickup coil, which is transmitted through a coaxial cable to record the corresponding 'instantaneous' event time in the data acquisition system. The sled velocity is computed from such recorded event intervals and the known separation distance between the fixed coils. The study will help the reader understand the limitations of the pickup and how to improve it so as to capture faster phenomena.
Figure 1. Deployment of sensors along the track.
1.1 Problems Faced Due to Limitations of Existing Approaches
The existing magnet-coil velocity measurement system, installed nearly 25 years ago, has served the intended purpose, but with some shortcomings; its overall precision and consistency need to be significantly improved to meet current and future challenges. The present analysis of the existing velocity measurement system has been undertaken with the objective of improving its detection beyond Mach 1.5 and removing the known shortcomings. The major challenges which still need to be addressed are:
• Significant attenuation in signal at higher sled velocities.
The attenuation is so high that the signal will not be detectable at higher velocities, beyond Mach 1.5. The signal will attenuate further when more coils are added in parallel, owing to the doubling of the length of the rail track under the augmentation programme.
• Distortion in the signal pulse shape tends to increase with increasing sled velocity.
• Problems in estimating the instantaneous sled velocity from the distorted pulse shape due to shifting of the reference point.
Figure 2. Schematic representation of the magnet moving over the coil.
A schematic representation of the existing velocity measurement system is shown at Fig. 2. Each pole face of the U-shaped magnet mounted on the sled has an area of 11.5 mm × 30 mm. The distance (d) between the two pole faces is about 32 mm. The specified maximum magnetic field strength (B0) of about 1500 gauss (0.15 T) corresponds to an equivalent pole strength (p) of about 80 A·m. Such an equivalent magnetic pole can be assumed to be located around 6 mm (half the pole width) inside the pole face. On the basis of the equivalent magnetic pole strength (+ for the north and − for the south pole), the effective magnetic flux density at a distance r from the pole (along the r vector) can be computed as
$B=\frac{\mu_0 p}{4\pi r^2}$   T            (1)
where μ0 is the permeability of free space in T·m/A. If Rn is the radial distance from the north pole to the centre of the coil and Rs is the radial distance from the south pole to the centre of the coil, with y1 as the vertical distance of the coil centre from the two poles (Fig. 3), the downward (along the Y axis) component of the resultant magnetic flux density By due to the combined effect of the two poles is given by
Figure 3. Variation of magnetic flux density By with relative location of the magnet with respect to the coil.
$B_y=\frac{\mu_0 p y_1}{4\pi}\left(\frac{1}{R_n^3}-\frac{1}{R_s^3}\right)$  T            (2)
A graph of magnetic flux density By (Eqn.
(2)) for various positions of the magnet on the X-axis is given at Fig. 4. The variation of flux density By along the X-axis is obviously independent of the sled velocity.
Figure 4. Equivalent circuit of the magneto-inductive velocity measurement system at RTRS.
The existing sensor coil consists of 10,000 turns (Nc) of fine copper wire (SWG-36) wound around an 18 mm diameter soft iron core of 38 mm length. The ohmic resistance (RL) of the coil is 650 Ω and the inductance (Lc) of the coil is 2.5 H. Let Ac denote the core cross-sectional area in m²; the total magnetic flux in the core is then By·Ac webers. Then
$E=-\frac{d}{dt}\left(N_c B_y A_c\right)=-N_c A_c\frac{dB_y}{dx}\cdot\frac{dx}{dt}=-U N_c A_c\frac{dB_y}{dx}$ Volts             (3)
Therefore, when the magnet moves over the fixed coil at velocity U along the X-axis, the induced emf (E) in the coil of Nc turns can be found from Eqn. (3).
2.1 Analysis of Existing Sensor
Ignoring any marginal effects of the lossless transmission line, the pulse voltage output of the sensor coil at the load end can be estimated from the equivalent circuit diagram shown at Fig. 4. The impedance Zsh of the other sensors in parallel and the line resistance RS have been ignored for simplicity. Here, the magnet-induced emf (Eqn. (3)) in the coil drives a current I through the coil resistance RL, the coil inductance Lc, and the output load resistance Rop. Applying Kirchhoff's voltage law to the voltage drops across the various elements in the current loop, we get the following relationships:
$E-IR_L-L_c\frac{dI}{dt}-IR_{op}=0$
$V_{op}=E-IR_L-L_c\frac{dI}{dt}$            (4)
Or,
$\frac{dI}{dt}=\frac{1}{L_c}\left(E-I\left(R_L+R_{op}\right)\right)$                         (5)
Substituting the value of E from Eqn. (3) as a function of x or t, and solving Eqn.
(5) through the improved Euler method of numerical integration, an I pulse corresponding to the E pulse of Eqn. (3) is obtained. The output voltage (Vop) pulse is then obtained through Vop = I·Rop. Rop is set to 4.7 kΩ during track testing and 1.8 kΩ while testing on the rotor. The value was chosen so that the sensor output remains within the ±5 V input range of the PC-based data acquisition card. Three sample signal pulses for E and Vop at velocities of 100 m/s, 300 m/s, and 1000 m/s, obtained using the SCILAB 5.4.0 application package, are shown in Fig. 5.
Figure 5. Variation of induced emf and the output terminal voltage with respect to the magnet crossing over the magneto-inductive sensor at 100 m/s, 300 m/s, and 1000 m/s velocity, with the soft iron core sensor of Lc = 2.5 H.
2.2 Experiment with Existing Sensor at Highest Feasible Velocity
An experiment was conducted on the rail track. The maximum velocity achieved in this experiment was 500 m/s; presently available infrastructure does not allow achieving a velocity higher than 500 m/s on the rail track. The experiment was carried out with 20 sensors connected in parallel. Table 1 and Fig. 6 show that the peak amplitude reduces with velocity. The signal from the existing sensor will not be detectable beyond 500 m/s. In this experiment an existing inductive pickup of 2.5 H with soft iron core was used, and the captured signal is shown in Fig. 7.
Figure 6. Experimental data with velocity up to 500 m/s.
Table 1. Experimental data of track testing with Rop = 4.7 kΩ.
The distortion in the terminal voltage is similar to the simulation results of Fig. 5. In the simulation the drop across the shunted coils and the transmission line is not considered, as the focus is to study the attenuation of the signal with velocity in the isolated sensor. The proposed sensor will also be fabricated in small quantity at first, as the large quantity required for the parallel network of sensors cannot be realised without first finalising the isolated sensor.
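The simulation described above (Eqn. (5) integrated by the improved Euler, i.e. Heun, method) can be sketched as follows. The geometry values are illustrative assumptions consistent with the figures quoted in the text, not measured parameters, and, as in the paper's isolated-sensor simulation, the transmission line and shunt sensors are ignored:

```python
import math

# Magnet/coil parameters; geometry values are hypothetical stand-ins.
MU0 = 4e-7 * math.pi              # permeability of free space (T·m/A)
P, D, Y1 = 80.0, 0.032, 0.033     # pole strength (A·m), pole gap (m), magnet-coil gap (m)
NC = 10_000                       # turns in the existing coil
AC = math.pi * 0.009 ** 2         # core cross-section (m²), 18 mm diameter
RL, LC, ROP = 650.0, 2.5, 4.7e3   # coil resistance (Ω), inductance (H), load (Ω)

def by(x):
    """Downward flux density at the coil centre, Eqn. (2); the two poles
    are assumed to sit at horizontal offsets x ± D/2 from the coil."""
    rn = math.hypot(x - D / 2, Y1)
    rs = math.hypot(x + D / 2, Y1)
    return MU0 * P * Y1 / (4 * math.pi) * (1 / rn ** 3 - 1 / rs ** 3)

def emf(x, u, dx=1e-5):
    """Induced emf, Eqn. (3): E = -U·Nc·Ac·dBy/dx (central difference)."""
    return -u * NC * AC * (by(x + dx) - by(x - dx)) / (2 * dx)

def peak_vop(u, x0=-0.1, x1=0.1, n=4000):
    """Integrate dI/dt = (E - I(RL + Rop))/Lc (Eqn. (5)) with the
    improved Euler (Heun) method and return the peak |Vop| = |I|·Rop."""
    dt = (x1 - x0) / (u * n)
    i, peak = 0.0, 0.0
    for k in range(n):
        x = x0 + k * u * dt
        f1 = (emf(x, u) - i * (RL + ROP)) / LC
        f2 = (emf(x + u * dt, u) - (i + dt * f1) * (RL + ROP)) / LC
        i += dt * (f1 + f2) / 2
        peak = max(peak, abs(i * ROP))
    return peak

# Output per unit velocity falls as the coil impedance grows with speed.
for u in (100.0, 300.0, 1000.0):
    print(f"{u:6.0f} m/s  peak Vop ≈ {peak_vop(u):.2f} V")
```

The sketch reproduces the qualitative behaviour reported in the paper: the induced emf grows with velocity, but the terminal voltage does not keep pace because the inductive drop across Lc grows as well.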
The decision for detection of the signal is taken on the positive amplitude, so as to have a consistent reference for the pulse-to-pulse interval. Further, a diode can be connected across each coil when a very large number of sensors is connected in parallel; these diodes will eliminate the second negative peak. The amplitude reduced from 3.2 V at 106 m/s to 1.5 V at 312 m/s and to 0.73 V at 500 m/s. The value at serial no. 6 in Table 1 is erroneous, possibly due to a relatively large separation between magnet and sensor.
Figure 7. Signals captured at 106 m/s, 312 m/s, and 500 m/s in the experiment on the rail track.
Figure 7 shows the waveforms of the signals received from the field sensors. Comparing the simulation results of Fig. 5 with the experimental results shown at Fig. 7, in the existing sensor the positive peak attenuates, and the trailing negative pulse gets distorted and reduced in amplitude as the velocity increases. As the focus is to study the sensor alone, the simulation did not take into consideration the effect of the transmission line and the shunting due to other sensors on the same line, which were present in the experiments. Inductive sensors can be manufactured directly by users5; these are the best sensors when dimensions are not a constraint5. Although the focus of this paper is analysis, with minor changes the capability of the sensor to detect higher velocities gets enhanced. Eqns. (3) and (4) show that a large induced emf is desirable but a large Lc is undesirable. In the sensor, the output falls with increase in velocity (Fig. 6) due to the high value of inductance. The inductance Lc of the sensor coil is high because of its soft iron core, which also adversely affects the high-frequency response. It may be noted that the coil inductance is directly proportional to the relative permeability μr of the core material. However, in the present set-up the magnetic flux density B linking the permanent magnet and the inductive sensor is nearly independent of μr of the core material.
Hence the provision of a soft iron core in the sensor coil merely increases the value of inductance Lc without increasing the total flux or the induced emf in the coil. Therefore, it is proposed that the soft iron core be eliminated. Tumanski5 reports that the non-linearity of a cored coil can be removed by changing to air-cored sensors, and gives the following empirical formula for the inductance:
$L_c=N_c^2\frac{\mu_0\mu_c A_c}{l_c}\left(\frac{l}{l_c}\right)^{-3/5}$
where μc is the resultant permeability of the core (much lower than the permeability of the material), and l and lc are the coil and core lengths respectively. Further, if the number of turns is halved, the emf will also be halved as per Eqn. (3), but the inductance will reduce to ¼. An attempt was also made with turns reduced to 1/4th, but results were better when turns were reduced to ½ in the case of isolated Lc. Further experiments and studies were done using the proposed sensor along with the existing sensor, so as to have an identical magnet-to-coil gap and velocity. The realised inductance was measured using an LCR meter for coils of 10,000, 5000, 4000, and 2500 turns. Except for the existing coil of 10,000 turns, all were air-core coils.
Table 2. Simulated studies of parameters of existing and proposed sensors.
It is the impedance which attenuates the output voltage with increase in velocity; therefore, a comparison of impedances was undertaken. The signal pulse produced in the sensor coil is transmitted to the output load (Rop) of about 4.7×10³ Ω through a coaxial cable of more than 2 km length. The cable used at RTRS is a twin coaxial cable (RG 58C) of characteristic impedance (Z0) 50 Ω, line capacitance 100 pF/m, and line inductance 2.4×10⁻⁷ H/m, with a velocity factor of about 68 per cent. In the kHz frequency range of the signal pulse, the coaxial cable functions as a lossless transmission line.
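The turns-scaling argument above (emf proportional to Nc by Eqn. (3), inductance proportional to Nc² with other factors held constant) can be sketched directly; the baseline emf value is illustrative, not measured:

```python
def rescale_turns(n_ratio, emf_peak_v, inductance_h):
    """Scale peak emf (proportional to N) and inductance (proportional
    to N**2) when the number of turns is multiplied by n_ratio."""
    return emf_peak_v * n_ratio, inductance_h * n_ratio ** 2

# Halving the turns: emf drops to 1/2, inductance to 1/4.
emf_v, lc_h = rescale_turns(0.5, 10.0, 2.5)  # hypothetical 10 V, 2.5 H baseline
```

Note that pure N² scaling of the 2.5 H iron-core coil would predict 0.625 H at 5000 turns; the paper's 5000-turn air-core coil measured 0.284 H, lower still, because removing the core reduces Lc further.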
As such, a high-impedance output load (Rop) is used in the present case for monitoring the signal pulse voltage; presently the output impedance is 4.7×10³ Ω. Consequently, a part of the signal power gets reflected from the output end of the cable; it would be fully absorbed at the far end of the cable only if terminated with a matched impedance of 50 Ω. Further, in the present case it is not feasible to match the sensor coil impedance with the characteristic line impedance of the cable, since the inductive reactance of the coil keeps increasing with the velocity of the rocket sled. Therefore, under the circumstances, the full power of the signal pulse does not get transferred to the transmission cable. The power transfer from the induced signal pulse in the sensor coil to the coaxial cable can, however, be maximised by minimising the inductive reactance of the coil, that is, by minimising its inductance. To calculate the impedance, the nominal frequency (fr) of the induced emf pulse can be computed from the time period of the pulse as seen in Fig. 5 above. The total impedance Zc of the coil with inductance Lc and internal resistance RL is given by
$Z_c=\sqrt{R_L^2+\left(2\pi f_r L_c\right)^2}$            (6)
Since the frequency fr of the pulse increases with sled velocity, the total impedance of the coil will also increase with the velocity. The corresponding total impedance of the coil with iron core and with air core is given in Table 2 for different values of the rocket sled velocity.
4.1 Comparative Estimation of Variation of Output Voltage with Velocity
Due to the voltage drop across the coil inductance Lc, the output voltage Vop available across the load Rop gets reduced considerably in comparison with the induced emf. The impedances of Table 2 have been taken into consideration in calculating the output. A comparison of the maximum values of this output voltage for the soft iron core (Lc = 2.5 H) and air core (Lc = 0.284 H) versions of the sensor coil is given in Table 3.
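Eqn. (6) can be evaluated directly to compare the two coils. The pulse frequencies below are hypothetical stand-ins for the velocity-dependent fr read off the pulse period; the resistance and inductance values are those quoted in the text:

```python
import math

RL = 650.0  # coil resistance (ohm), from the paper

def coil_impedance(lc_h, fr_hz, rl_ohm=RL):
    """Total coil impedance, Eqn. (6): Zc = sqrt(RL**2 + (2*pi*fr*Lc)**2)."""
    return math.hypot(rl_ohm, 2 * math.pi * fr_hz * lc_h)

# Iron-core (2.5 H) versus air-core (0.284 H) coil as the nominal pulse
# frequency rises with sled velocity (frequencies hypothetical).
for fr in (1.5e3, 4.5e3, 15e3):
    print(f"fr = {fr:8.0f} Hz  Zc(iron) = {coil_impedance(2.5, fr):10.0f} ohm"
          f"  Zc(air) = {coil_impedance(0.284, fr):9.0f} ohm")
```

At any given pulse frequency the air-core coil presents roughly an order of magnitude less reactance, which is why its output voltage holds up better as the velocity rises.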
It is observed that the voltage output from the proposed sensor is higher for any given velocity. It may be noted that these values are for isolated sensors; they will become part of a network of parallel sensors during implementation, and the output in that case will fall considerably. The experimental data show that in the case of the existing sensor the output becomes so small that it is not detectable. In the case of the proposed sensor the output remains appreciably high even at higher velocities and can be detected by the data acquisition card. The sensor impedance is much greater than the characteristic line impedance of the coaxial cable; therefore, the amplitude of the output voltage is likely to get further attenuated, because the full power of the induced pulse cannot be transferred to the transmission line due to the impedance mismatch. The sample signal pulses for E and Vop obtained for the soft iron core coil of Lc = 2.5 H and the air-cored coil of Lc = 0.284 H are given in Fig. 8. As seen from these figures, with the decrease in coil impedance, the output voltage approaches the induced emf not only in amplitude but also in shape. That is, the distortion in the output voltage pulse noted with the soft iron core gets minimised with the use of the air core due to the corresponding reduction in coil impedance. Simulation results of the variation of induced emf and output terminal voltage as the magnet crosses over the coil at 100 m/s, 300 m/s, and 1000 m/s using the existing sensor with soft iron core are shown in Fig. 5. The difference between the induced emf and the output voltage is larger at higher velocity due to the increase in coil impedance with velocity. Since it is not viable to conduct rocket sled tests repeatedly for validation, the experiments need to be conducted with an identical magnet-to-sensor gap and identical velocity. To meet this requirement, a rotating fan with a radius of 1 m was improvised (see Fig. 10). The magnets were fixed on both ends of the fan.
The sensor coil was stationary and opposite the magnet. Sensor C1 is the existing sensor and C2 is the proposed sensor. Initially, only a velocity of the order of 36 m/s could be achieved; the results are shown in Fig. 9. The gap between magnet and coil was kept at 33 mm so as to have a signal within the ±5 V range of the National Instruments (NI) data acquisition card used for this purpose. Table 4 lists the three sensors used in the experiment and their comparative data. Data set 1 was taken again on another day and recorded as set 2, so as to have adequate data and check consistency. As per the simulation results, reducing the inductance would improve the high-velocity response of the sensor. However, the inductance should not be reduced so much that the amplitude falls below the threshold at low velocity. Table 4 shows that sensor C2 with Lc = 0.284 H gives a higher output and is proposed as the choice for further work involving a larger quantity of sensors. Further improvements were carried out in the rotor apparatus so as to attain higher velocity and to check validation over a range of velocities. The results are shown in Table 5 as a comparison of the existing and proposed sensors at different velocities when tested on the rotor. The magnet-to-sensor gap was kept at 34 mm; the gap was kept fixed as the focus is on relative performance.
Figure 8. Variation of induced emf (in red) and the output terminal voltage (in blue) with respect to the magnet crossing over the magneto-inductive sensor at velocities of 100 m/s, 300 m/s, and 1000 m/s, with the proposed air core sensor having Lc = 0.284 H.
Table 3. Simulation results of the output of sensors at different velocities under identical conditions.
Figure 9. Experimentally acquired signals captured at 36 m/s velocity using the rotor.
Table 4. Comparison of amplitude at 36 m/s.
Figure 10. Rotating magnet and experimental results.
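As a quick kinematic check on the rotor rig, the magnet tip speed fixes the required rotation rate for the stated 1 m radius:

```python
import math

def rotor_rpm(tip_speed_m_s, radius_m=1.0):
    """Rotation rate (rpm) giving the stated magnet tip speed on a rotor
    of the given radius (1 m for the improvised fan rig)."""
    return tip_speed_m_s / (2 * math.pi * radius_m) * 60.0

print(rotor_rpm(36.0))  # ~344 rpm for the 36 m/s runs
```

This illustrates why pushing the rig well beyond 36 m/s required further improvements to the rotor apparatus.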
Table 5. Comparative experimental data.
The focus of the study has been the comparison of amplitude and shape of the existing and proposed sensors rather than the comparison of absolute values of amplitude. This reduces the effort in experimentation, as the gap between magnet and pickup need not be changed. The load resistance in this case had to be reduced to meet the requirement of staying within the ±5 V range of the data acquisition card. It is found that the amplitude is higher and the distortion is less in the case of the air-core-based sensor with Lc = 0.284 H, which also validates the findings of the simulation. This sensor is recommended for use in establishing the network of sensors along the rail track; experiments at higher velocity will be feasible thereafter. Future work includes the realisation of sensors in larger quantity and experimentation on the track once the capability to achieve higher sled velocity is acquired. In field implementation the proposed sensor will become part of a network of hundreds of other similar sensors. The change in sensor impedance with velocity will affect the coupling of the signal with the transmission line. There will also be an effect of impedance loading by the other sensors, especially at lower velocity. The change in reference for the calculation of time will generate error and needs to be quantified. Finding the instantaneous velocity from a single pulse will be a challenge, and this study is a step in that direction. These effects need to be studied and their remedies proposed. It is found that the 10,000-turn iron-core-based inductive sensor has a limitation of amplitude at higher velocity. As soon as the signal level falls below the threshold level of 0.7 V at Mach 1.5, it cannot be detected due to noise in the long-distance transmission lines. Moreover, the second negative peak attenuates significantly, thereby distorting the signal and shifting the reference. The proposed sensor is air-core based with lower inductance.
It has a higher output voltage for any given velocity. Unlike the existing sensor, its signal can be detected for measurement of velocity beyond 500 m/s. The distortion in the signal is also considerably lower. In this process the sensor has become capable of use at higher velocities.
We thank Gp Capt GS Sandhu (Retd) and Mrs Sonu Devi for their suggestions and improvements in presentation.
1. Nakata, Daisuke; Yajima, Jun; Nishine, Kenji; Higashino, Kazuyuki & Tanatsugu, Nobuhiro. Research and development of high speed test track facility in Japan. In Aerospace Exposition, 09-12 January 2012, Nashville, Tennessee. pp. 8.
2. Turnbull, Dennis; Hooser, Clinton & Hooser, Michael. Soft sled test capability at the Holloman high speed test track. AIAA 2010-1708, U.S. Air Force T&E Days 2010, 2-4 February 2010, Nashville, Tennessee.
3. Leus, Vladimir & Taylor, Stephen. Experimental evidence for the magneto-kinematic effect. In Progress In Electromagnetics Research Symposium (PIERS) Proceedings, Moscow, Russia, August 19-23, 2012.
4. Leus, Vladimir A. Magneto-kinematical and electro-kinematical fields. Progress In Electromagnetics Research M, 2013, 32, pp. 27-41.
5. Tumanski, Slawomir. Induction coil sensors - a review. Measurement Science and Technology, 2007, 18, R31.
6. Leuzinger, Andrew & Taylor, Andrew. Magneto-inductive technology overview. White paper, PNI Sensor Corporation; www.pnicorp.com, Feb 2010.
7. Beutler, F.J. & Rauch, L.L. Precision measurement of supersonic rocket sled velocity. J. Jet Propulsion, 1957, 27(9), pp. 1021-1024.
8. Beutler, F.J. Precision measurement of supersonic rocket sled velocity - Part II. J. Jet Propulsion, 1958, 28, pp. 809-816.
9. Stirton, J. & Glatt, B. Hybrid velocity data for the velocity measuring system of the supersonic naval ordnance research track.
In Proceedings of the IRE, 1959, 963, pp. 1.
10. Naumann, W.; Engberg, K.; Hogg, R.D.; Hunka, J. & Oliver, G. Rocket sled improved velocity measuring system feasibility study. Defense Technical Information Center, 1980.
Mr P.K. Khosla has passed his MTech (ECE) from Kurukshetra University. He is presently working as Group Director of the Rail Track Rocket Sled National Test Facility of TBRL and Group Director (Automation & Networking). His current interests include velocimetry and working on various elements to double the speed of the rocket sled from the present capability of Mach 1.5. He is a recipient of the DRDO Award for Performance Excellence - 2006 and the National Technology Day Award - 2012.
Dr Rajesh Khanna received his ME (Electrical Communication) from the Indian Institute of Science, Bangalore, in 1998 and his PhD (Wireless Communications) from Thapar University in 2006. Presently he is working as Professor in the Department of Electronics and Communication at Thapar Institute of Engineering and Technology, Patiala. He has published more than 30 papers in international journals. He has guided more than 60 ME theses and is guiding 10 PhD students. His research interests include wireless and mobile communication, and antennas.
Dr Sanjay P. Sood obtained his PhD in Information Technology. Currently, he is the Head & Principal Consultant, State eGovernance Mission (Chandigarh Administration), Chandigarh. He has been the founder Director at the C-DAC School of Advanced Computing in Mauritius. He has conceptualized and led program management, project implementations, and research and development, including academics, in the domain of healthcare IT & eGovernance. He has authored over 60 articles, including five book chapters, on cutting-edge applications of IT in healthcare.
https://www.legisquebec.gouv.qc.ca/en/version/cs/M-25.2?code=se:17_12_22&history=20221006
M-25.2 - Act respecting the Ministère des Ressources naturelles et de la Faune

17.12.22. The following sums are credited to the Fund:
(0.1) the annual contribution collected from energy distributors under section 17.1.11;
(1) the fees collected for an exploration, production or storage licence or an authorization to produce brine under the Petroleum Resources Act (chapter H-4.2) that are not credited to the fossil energy management component of the Natural Resources Fund;
(2) the royalties paid for petroleum and brine production that are determined by the Government and the fees paid for petroleum storage under the Petroleum Resources Act;
(3) the fines paid by offenders against the Act respecting energy efficiency and energy conservation standards for certain products (chapter N-1.01);
(4) the sums transferred to it by the Minister out of the appropriations allocated for that purpose by Parliament;
(5) the sums transferred to it by the Minister of Finance under sections 53 and 54 of the Financial Administration Act (chapter A‑6.001);
(6) the gifts, legacies and other contributions paid into the Fund to further the achievement of its objects; and
(7) the revenue generated by the sums credited to the Fund.
2016, c. 35, s. 23; 2020, c. 19, s. 52; I.N. 2020-12-10; 2021, c. 28, s. 11.

17.12.22. The following sums are credited to the Fund:
(0.1) the annual contribution collected from energy distributors under section 17.1.11;
(1) the fees collected for an exploration, production or storage licence or an authorization to produce brine under the Petroleum Resources Act (chapter H-4.2) that are not credited to the fossil energy management component of the Natural Resources Fund;
(2) the royalties paid for petroleum and brine production that are determined by the Government and the fees paid for petroleum storage under the Petroleum Resources Act;
(3) the fines paid by offenders against the Act respecting energy efficiency and energy conservation standards for certain electrical or hydrocarbon-fuelled appliances (chapter N-1.01);
(4) the sums transferred to it by the Minister out of the appropriations allocated for that purpose by Parliament;
(5) the sums transferred to it by the Minister of Finance under sections 53 and 54 of the Financial Administration Act (chapter A‑6.001);
(6) the gifts, legacies and other contributions paid into the Fund to further the achievement of its objects; and
(7) the revenue generated by the sums credited to the Fund.
2016, c. 35, s. 23; 2020, c. 19, s. 52; I.N. 2020-12-10.

17.12.22. The following sums are credited to the Fund:
(0.1) the annual contribution collected from energy distributors under section 17.1.11;
(1) the fees collected for an exploration, production or storage licence or an authorization to produce brine under the Petroleum Resources Act (chapter H-4.2) that are not credited to the fossil energy management component of the Natural Resources Fund;
(2) the royalties paid for petroleum and brine production that are determined by the Government and the fees paid for petroleum storage under the Petroleum Resources Act;
(3) the fines paid by offenders against the Act respecting energy efficiency and innovation (chapter E-1.3);
(4) the sums transferred to it by the Minister out of the appropriations allocated for that purpose by Parliament;
(5) the sums transferred to it by the Minister of Finance under sections 53 and 54 of the Financial Administration Act (chapter A‑6.001);
(6) the gifts, legacies and other contributions paid into the Fund to further the achievement of its objects; and
(7) the revenue generated by the sums credited to the Fund.
2016, c. 35, s. 23; 2020, c. 19, s. 52.

17.12.22. The following sums are credited to the Fund:
(1) the fees collected for an exploration, production or storage licence or an authorization to produce brine under the Petroleum Resources Act (chapter H-4.2);
(2) the royalties paid for petroleum and brine production that are determined by the Government and the fees paid for petroleum storage under the Petroleum Resources Act;
(3) the fines paid by offenders against the Act respecting energy efficiency and innovation (chapter E-1.3);
(4) the sums transferred to it by the Minister out of the appropriations allocated for that purpose by Parliament;
(5) the sums transferred to it by the Minister of Finance under sections 53 and 54 of the Financial Administration Act (chapter A‑6.001);
(6) the gifts, legacies and other contributions paid into the Fund to further the achievement of its objects; and
(7) the revenue generated by the sums credited to the Fund.
2016, c. 35, s. 23.

17.12.22. The following sums are credited to the Fund:
Not in force
(1) the fees collected for an exploration, production or storage licence or an authorization to produce brine under the Petroleum Resources Act (chapter H-4.2);
Not in force
(2) the royalties paid for petroleum and brine production that are determined by the Government and the fees paid for petroleum storage under the Petroleum Resources Act;
(3) the fines paid by offenders against the Act respecting energy efficiency and innovation (chapter E-1.3);
(4) the sums transferred to it by the Minister out of the appropriations allocated for that purpose by Parliament;
(5) the sums transferred to it by the Minister of Finance under sections 53 and 54 of the Financial Administration Act (chapter A‑6.001);
(6) the gifts, legacies and other contributions paid into the Fund to further the achievement of its objects; and
(7) the revenue generated by the sums credited to the Fund.
2016, c. 35, s. 23.
https://par.nsf.gov/biblio/10333842-iq-collaboratory-iii-empirical-dust-attenuation-frameworktaking-hydrodynamical-simulations-grain-dust
This content will become publicly available on February 1, 2023.
IQ Collaboratory. III. The Empirical Dust Attenuation Framework—Taking Hydrodynamical Simulations with a Grain of Dust
Abstract: We present the empirical dust attenuation (EDA) framework—a flexible prescription for assigning realistic dust attenuation to simulated galaxies based on their physical properties. We use the EDA to forward model synthetic observations for three state-of-the-art large-scale cosmological hydrodynamical simulations: SIMBA, IllustrisTNG, and EAGLE. We then compare the optical and UV color–magnitude relations, (g − r)−Mr and (far-UV − near-UV)−Mr, of the simulations to an Mr < −20 and UV-complete Sloan Digital Sky Survey galaxy sample using likelihood-free inference. Without dust, none of the simulations match observations, as expected. With the EDA, however, we can reproduce the observed color–magnitude relations with all three simulations. Furthermore, the attenuation curves predicted by our dust prescription are in good agreement with the observed attenuation–slope relations and attenuation curves of star-forming galaxies. However, the EDA does not predict star-forming galaxies with low AV, since simulated star-forming galaxies are intrinsically much brighter than observations. Additionally, the EDA provides, for the first time, predictions on the attenuation curves of quiescent galaxies, which are challenging to measure observationally. Simulated quiescent galaxies require shallower attenuation curves with lower amplitude than star-forming galaxies. The EDA, combined with forward …
NSF-PAR ID: 10333842; Journal Name: The Astrophysical Journal; Volume: 926; Issue: 2; Page Range or eLocation-ID: 122; ISSN: 0004-637X
2. ABSTRACT We present predictions for high redshift (z = 2−10) galaxy populations based on the IllustrisTNG simulation suite and a full Monte Carlo dust radiative transfer post-processing.
Specifically, we discuss the H α and H β + $[\rm O \,{\small III}]$ luminosity functions up to z = 8. The predicted H β + $[\rm O \,{\small III}]$ luminosity functions are consistent with present observations at z ≲ 3 with ${\lesssim} 0.1\, {\rm dex}$ differences in luminosities. However, the predicted H α luminosity function is ${\sim }0.3\, {\rm dex}$ dimmer than the observed one at z ≃ 2. Furthermore, we explore continuum spectral indices, the Balmer break at 4000 Å (D4000), and the UV continuum slope β. The median D4000 versus specific star formation rate relation predicted at z = 2 is in agreement with the local calibration despite a different distribution pattern of galaxies in this plane. In addition, we reproduce the observed $A_{\rm UV}$ versus β relation and explore its dependence on galaxy stellar mass, providing an explanation for the observed complexity of this relation. We also find a deficiency in heavily attenuated, UV red galaxies in the simulations. Finally, we provide predictions for the dust attenuation curves of galaxies at z = 2−6 and investigate their dependence on galaxy colours and more » 3. ABSTRACT The galaxy size–stellar mass and central surface density–stellar mass relationships are fundamental observational constraints on galaxy formation models. However, inferring the physical size of a galaxy from observed stellar emission is non-trivial due to various observational effects, such as the mass-to-light ratio variations that can be caused by non-uniform stellar ages, metallicities, and dust attenuation. Consequently, forward-modelling light-based sizes from simulations is desirable. In this work, we use the skirt dust radiative transfer code to generate synthetic observations of massive galaxies ($M_{*}\sim 10^{11}\, \rm {M_{\odot }}$ at z = 2, hosted by haloes of mass $M_{\rm {halo}}\sim 10^{12.5}\, \rm {M_{\odot }}$) from high-resolution cosmological zoom-in simulations that form part of the Feedback In Realistic Environments project.
The simulations used in this paper include explicit stellar feedback but no active galactic nucleus (AGN) feedback. From each mock observation, we infer the effective radius (Re), as well as the stellar mass surface density within this radius and within $1\, \rm {kpc}$ (Σe and Σ1, respectively). We first investigate how well the intrinsic half-mass radius and stellar mass surface density can be inferred from observables. The majority of predicted sizes and surface densities are within a factor of 2 of the intrinsic values. more »
2022-11-30T00:59:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6414425373077393, "perplexity": 3251.953839673162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00011.warc.gz"}
https://par.nsf.gov/biblio/10370832-observations-simulations-radio-emission-magnetic-fields-minkowski-object
Observations and Simulations of Radio Emission and Magnetic Fields in Minkowski's Object Abstract We combine new data from the Karl G. Jansky Very Large Array with previous radio observations to create a more complete picture of the ongoing interactions between the radio jet from galaxy NGC 541 and the star-forming system known as Minkowski’s Object (MO). We then compare those observations with synthetic radio data generated from a new set of magnetohydrodynamic simulations of jet–cloud interactions specifically tailored to the parameters of MO. The combination of radio intensity, polarization, and spectral index measurements convincingly supports the interaction scenario and provides additional constraints on the local dynamical state of the intracluster medium and the time since the jet–cloud interaction first began. In particular, we show that only a simulation with a bent radio jet can reproduce the observations. NSF-PAR ID: 10370832 Journal Name: The Astrophysical Journal Volume: 936 Issue: 2 Page Range or eLocation-ID: Article No. 130 ISSN: 0004-637X Publisher: DOI PREFIX: 10.3847 National Science Foundation ##### More Like this 1. ABSTRACT This is the fourth paper of a series investigating the AGN fuelling/feedback processes in a sample of 11 nearby low-excitation radio galaxies (LERGs). In this paper, we present follow-up Atacama Large Millimeter/submillimeter Array (ALMA) observations of one source, NGC 3100, targeting the 12CO(1-0), 12CO(3-2), HCO+(4-3), SiO(3-2), and HNCO(6-5) molecular transitions. 12CO(1-0) and 12CO(3-2) lines are nicely detected and complement our previous 12CO(2-1) data. By comparing the relative strength of these three CO transitions, we find extreme gas excitation conditions (i.e. Tex ≳ 50 K) in regions that are spatially correlated with the radio lobes, supporting the case for a jet–ISM interaction.
An accurate study of the CO kinematics demonstrates that although the bulk of the gas is regularly rotating, two distinct non-rotational kinematic components can be identified in the inner gas regions: one can be associated with inflow/outflow streaming motions induced by a two-armed spiral perturbation; the second one is consistent with a jet-induced outflow with vmax ≈ 200 km s−1 and $\dot{M}\lesssim 0.12$ M⊙ yr−1. These values indicate that the jet-CO coupling ongoing in NGC 3100 is only mildly affecting the gas kinematics, as opposed to what is expected from existing simulations and other observational studies of (sub-)kpc scale jet–cold gas interactions. HCO+(4-3) emission is tentatively detected more » 2. Abstract Thin synchrotron-emitting filaments are increasingly seen in the intracluster medium (ICM). We present the first example of a direct interaction between a magnetic filament, a radio jet, and a dense ICM clump in the poor cluster A194. This enables the first exploration of the dynamics and possible histories of magnetic fields and cosmic rays in such filaments. Our observations are from the MeerKAT Galaxy Cluster Legacy Survey and the LOFAR Two-Meter Sky Survey. Prominent 220 kpc long filaments extend east of radio galaxy 3C40B, with very faint extensions to 300 kpc, and show signs of interaction with its northern jet. They curve around a bend in the jet and intersect the jet in Faraday depth space. The X-ray surface brightness drops across the filaments; this suggests that the relativistic particles and fields contribute significantly to the pressure balance and evacuate the thermal plasma in a ∼35 kpc cylinder.
We explore whether the relativistic electrons could have streamed along the filaments from 3C40B, and present a plausible alternative whereby magnetized filaments are (a) generated by shear motions in the large-scale, post-merger ICM flow, (b) stretched by interactions with the jet and flows in the ICM, amplifying the embedded magnetic fields, more » 3. Abstract Recently, the Hydrogen Epoch of Reionization Array (HERA) has produced the experiment’s first upper limits on the power spectrum of 21 cm fluctuations at z ∼ 8 and 10. Here, we use several independent theoretical models to infer constraints on the intergalactic medium (IGM) and galaxies during the epoch of reionization from these limits. We find that the IGM must have been heated above the adiabatic-cooling threshold by z ∼ 8, independent of uncertainties about IGM ionization and the radio background. Combining HERA limits with complementary observations constrains the spin temperature of the z ∼ 8 neutral IGM to $27\,{\rm K} < \langle \bar{T}_S \rangle < 630\,{\rm K}$ ($2.3\,{\rm K} < \langle \bar{T}_S \rangle < 640\,{\rm K}$) at 68% (95%) confidence. They therefore also place a lower bound on X-ray heating, a previously unconstrained aspect of early galaxies. For example, if the cosmic microwave background dominates the z ∼ 8 radio background, the new HERA limits imply that the first galaxies produced X-rays more efficiently than local ones. The z ∼ 10 limits require even earlier heating if dark-matter interactions cool the hydrogen gas. If an extra radio background is produced by galaxies, we rule out (at 95% confidence) the combination of high radio and low X-ray more » 4. ABSTRACT Relativistic amplification boosts the contribution of the jet base to the total emission in blazars, thus making single-dish observations useful and practical to characterize their physical state, particularly during episodes of enhanced multiwavelength activity.
Following the detection of a new gamma-ray source by Fermi-LAT in 2017 July, we observed S4 0444+63 in order to secure its identification as a gamma-ray blazar. We conducted observations with the Medicina and Noto radio telescopes at 5, 8, and 24 GHz for a total of 12 epochs between 2017 August 1 and 2018 September 22. We carried out the observations with on-the-fly cross-scans and reduced the data with our newly developed Cross-scan Analysis Pipeline, which we present here in detail for the first time. We found the source to be in an elevated state of emission at radio wavelengths, compared to historical values, which lasted for several months. The maximum luminosity was reached on 2018 May 16 at 24 GHz, with $L_{24}=(1.7\pm 0.3)\times 10^{27}\ \mathrm{W\, Hz}^{-1}$; the spectral index was found to evolve from slightly rising to slightly steep. Besides the new observations, which have proved to be an effective and efficient tool to secure the identification of the source, additional single dish and very long more » 5. ABSTRACT The highly-substructured outskirts of the Magellanic Clouds provide ideal locations for studying the complex interaction history between both Clouds and the Milky Way (MW). In this paper, we investigate the origin of a >20° long arm-like feature in the northern outskirts of the Large Magellanic Cloud (LMC) using data from the Magellanic Edges Survey (MagES) and Gaia EDR3. We find that the arm has a similar geometry and metallicity to the nearby outer LMC disc, indicating that it is comprised of perturbed disc material. Whilst the azimuthal velocity and velocity dispersions along the arm are consistent with those in the outer LMC, the in-plane radial velocity and out-of-plane vertical velocity are significantly perturbed from equilibrium disc kinematics.
We compare these observations to a new suite of dynamical models of the Magellanic/MW system, which describe the LMC as a collection of tracer particles within a rigid potential, and the SMC as a rigid Hernquist potential. Our models indicate the tidal force of the MW during the LMC’s infall is likely responsible for the observed increasing out-of-plane velocity along the arm. Our models also suggest close LMC/SMC interactions within the past Gyr, particularly the SMC’s pericentric passage ∼150 Myr ago and a more »
2023-02-05T04:02:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5354172587394714, "perplexity": 2841.615702623823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500215.91/warc/CC-MAIN-20230205032040-20230205062040-00778.warc.gz"}
https://par.nsf.gov/biblio/10352460-constraining-type-ia-supernova-delay-time-spatially-resolved-star-formation-histories
Constraining Type Ia Supernova Delay Time with Spatially Resolved Star Formation Histories Abstract We present the delay time distribution (DTD) estimates of Type Ia supernovae (SNe Ia) using spatially resolved SN Ia host galaxy spectra from MUSE and MaNGA. By employing a grouping algorithm based on k-means and earth mover’s distances (EMDs), we separated the host galaxy stellar population age distributions (SPADs) into spatially distinct regions and used a maximum-likelihood method to constrain the DTD of SN Ia progenitors. When a power-law model of the form DTD(t) ∝ t^s (t > τ) is used, we find an SN rate decay slope $s = -1.41^{+0.32}_{-0.33}$ and a delay time $\tau = 120^{+142}_{-83}$ Myr. Moreover, we tested other DTD models, such as a broken power-law model and a two-component power-law model, and found no statistically significant support for these alternative models. NSF-PAR ID: 10352460 Journal Name: The Astrophysical Journal Volume: 922 Issue: 1 Page Range or eLocation-ID: 15 ISSN: 0004-637X 1. ABSTRACT Type Iax supernovae (SNe Iax) are the most common class of peculiar SNe. While they are thought to be thermonuclear white-dwarf (WD) SNe, SNe Iax are observationally similar to, but distinct from, SNe Ia. Unlike SNe Ia, where roughly 30 per cent occur in early-type galaxies, only one SN Iax has been discovered in an early-type galaxy, suggesting a relatively short delay time and a distinct progenitor system. Furthermore, one SN Iax progenitor system has been detected in pre-explosion images with its properties consistent with either of two models: a short-lived (<100 Myr) progenitor system consisting of a WD primary and a He-star companion, or a singular Wolf–Rayet progenitor star. Using deep Hubble Space Telescope images of nine nearby SN Iax host galaxies, we measure the properties of stars within 200 pc of the SN position.
The ages of local stars, some of which formed with the SN progenitor system, can constrain the time between star formation and SN, known as the delay time. We compare the local stellar properties to synthetic photometry of single-stellar populations, fitting to a range of possible delay times for each SN. With this sample, we uniquely constrain the delay-time distribution for SNe Iax, with a median and 1σ confidence interval delay time of $63_{-15}^{+ more » 2. ABSTRACT While conventional Type Ia supernova (SN Ia) cosmology analyses rely primarily on rest-frame optical light curves to determine distances, SNe Ia are excellent standard candles in near-infrared (NIR) light, which is significantly less sensitive to dust extinction. An SN Ia spectral energy distribution (SED) model capable of fitting rest-frame NIR observations is necessary to fully leverage current and future SN Ia data sets from ground- and space-based telescopes including HST, LSST, JWST, and RST. We construct a hierarchical Bayesian model for SN Ia SEDs, continuous over time and wavelength, from the optical to NIR (B through H, or $0.35{-}1.8\, \mu$m). We model the SED as a combination of physically distinct host galaxy dust and intrinsic spectral components. The distribution of intrinsic SEDs over time and wavelength is modelled with probabilistic functional principal components and the covariance of residual functions. We train the model on a nearby sample of 79 SNe Ia with joint optical and NIR light curves by sampling the global posterior distribution over dust and intrinsic latent variables, SED components and population hyperparameters. Photometric distances of SNe Ia with NIR data near maximum obtain a total RMS error of 0.10 mag with our BayeSN model, compared to more » 3. Abstract Type Ia supernovae (SNe Ia) are more precise standardizable candles when measured in the near-infrared (NIR) than in the optical.
With this motivation, from 2012 to 2017 we embarked on the RAISIN program with the Hubble Space Telescope (HST) to obtain rest-frame NIR light curves for a cosmologically distant sample of 37 SNe Ia (0.2 ≲ z ≲ 0.6) discovered by Pan-STARRS and the Dark Energy Survey. By comparing higher-z HST data with 42 SNe Ia at z < 0.1 observed in the NIR by the Carnegie Supernova Project, we construct a Hubble diagram from NIR observations (with only time of maximum light and some selection cuts from optical photometry) to pursue a unique avenue to constrain the dark energy equation-of-state parameter, w. We analyze the dependence of the full set of Hubble residuals on the SN Ia host galaxy mass and find Hubble residual steps of size ∼0.06–0.1 mag with 1.5σ–2.5σ significance depending on the method and step location used. Combining our NIR sample with cosmic microwave background constraints, we find 1 + w = −0.17 ± 0.12 (statistical + systematic errors). The largest systematic errors are the redshift-dependent SN selection biases and the properties of the NIR mass step. We also use these data to measure H0 = more » 4. Aims. We present a comprehensive dataset of optical and near-infrared photometry and spectroscopy of type Ia supernova (SN) 2016hnk, combined with integral field spectroscopy (IFS) of its host galaxy, MCG -01-06-070, and nearby environment. Our goal with this complete dataset is to understand the nature of this peculiar object. Methods. Properties of the SN local environment are characterized by means of single stellar population synthesis applied to IFS observations taken two years after the SN exploded. We performed detailed analyses of SN photometric data by studying its peculiar light and color curves. SN 2016hnk spectra were compared to other 1991bg-like SNe Ia, 2002es-like SNe Ia, and Ca-rich transients.
In addition, we used abundance stratification modeling to identify the various spectral features in the early phase spectral sequence and also compared the dataset to a modified non-LTE model previously produced for the subluminous SN 1999by. Results. SN 2016hnk is consistent with being a subluminous ($M_B = -16.7$ mag, $s_{BV} = 0.43 \pm 0.03$), highly reddened object. The IFS of its host galaxy reveals a significant amount of dust at the SN location, residual star formation, and a high proportion of old stellar populations in the more » 5. ABSTRACT We compare the constraints from two (2019 and 2021) compilations of H ii starburst galaxy (H iiG) data and test the model independence of quasar (QSO) angular size data using six spatially flat and non-flat cosmological models. We find that the new 2021 compilation of H iiG data generally provides tighter constraints and prefers lower values of cosmological parameters than those from the 2019 H iiG data. QSO data by themselves give relatively model-independent constraints on the characteristic linear size, lm, of the QSOs within the sample. We also use Hubble parameter [H(z)], baryon acoustic oscillation (BAO), Pantheon Type Ia supernova (SN Ia) apparent magnitude (SN-Pantheon), and DES-3 yr binned SN Ia apparent magnitude (SN-DES) measurements to perform joint analyses with H iiG and QSO angular size data, since their constraints are not mutually inconsistent within the six cosmological models we study. A joint analysis of H(z), BAO, SN-Pantheon, SN-DES, QSO, and the newest compilation of H iiG data provides almost model-independent summary estimates of the Hubble constant, $H_0=69.7\pm 1.2\ \rm {km\,s^{-1}\,Mpc^{-1}}$, the non-relativistic matter density parameter, $\Omega_{\rm m_0}=0.293\pm 0.021$, and lm = 10.93 ± 0.25 pc.
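As a side note on the power-law delay-time model quoted in the first abstract above, DTD(t) ∝ t^s for t > τ: the best-fit numbers can be plugged into a short sketch of what such a model implies. This is purely illustrative; the function name and the unnormalized form are our assumptions, not the authors' pipeline.

```python
# Illustrative sketch of the power-law delay-time distribution (DTD)
# quoted in the abstract: DTD(t) proportional to t^s for t > tau, zero otherwise.
# Best-fit central values from the abstract; normalization is left out.
S = -1.41        # SN rate decay slope
TAU_MYR = 120.0  # delay time below which no SNe Ia explode

def dtd(t_myr, s=S, tau=TAU_MYR):
    """Unnormalized DTD(t) = t^s for t > tau, else 0 (t in Myr)."""
    return t_myr ** s if t_myr > tau else 0.0

# A slope near -1.4 means the rate falls somewhat faster than 1/t:
# doubling the delay multiplies the rate by 2^s.
ratio = dtd(2000.0) / dtd(1000.0)
print(round(ratio, 3))  # 2^(-1.41) ~ 0.376
```

In words: under this fit, SNe Ia occurring 2 Gyr after star formation are roughly 2.7 times rarer than those occurring at 1 Gyr, and no SNe Ia explode before τ ≈ 120 Myr.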
2023-02-04T18:18:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7098519206047058, "perplexity": 4543.488441090223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00373.warc.gz"}
http://www.itl.nist.gov/div898/handbook/eda/section3/eda362.htm
1. Exploratory Data Analysis 1.3. EDA Techniques 1.3.6. Probability Distributions ## Related Distributions Probability distributions are typically defined in terms of the probability density function. However, there are a number of probability functions used in applications. Probability Density Function For a continuous variable, the probability density function (pdf) describes the relative likelihood that the variate takes a value near x. Since for continuous distributions the probability at any single point is zero, probabilities are expressed as an integral between two points. $$\int_{a}^{b} {f(x) dx} = Pr[a \le X \le b]$$ For a discrete distribution, the pdf is the probability that the variate takes the value x. $$f(x) = Pr[X = x]$$ The following is the plot of the normal probability density function. Cumulative Distribution Function The cumulative distribution function (cdf) is the probability that the variable takes a value less than or equal to x. That is $$F(x) = Pr[X \le x] = \alpha$$ For a continuous distribution, this can be expressed mathematically as $$F(x) = \int_{-\infty}^{x} {f(\mu) d\mu}$$ For a discrete distribution, the cdf can be expressed as $$F(x) = \sum_{i=0}^{x} {f(i)}$$ The following is the plot of the normal cumulative distribution function. The horizontal axis is the allowable domain for the given probability function. Since the vertical axis is a probability, it must fall between zero and one. It increases from zero to one as we go from left to right on the horizontal axis. Percent Point Function The percent point function (ppf) is the inverse of the cumulative distribution function. For this reason, the percent point function is also commonly referred to as the inverse distribution function. That is, for the distribution function we calculate the probability that the variable is less than or equal to x for a given x; for the percent point function, we start with the probability and compute the corresponding x from the cumulative distribution.
Mathematically, this can be expressed as $$Pr[X \le G(\alpha)] = \alpha$$ or alternatively $$x = G(\alpha) = G(F(x))$$ The following is the plot of the normal percent point function. Since the horizontal axis is a probability, it goes from zero to one. The vertical axis goes from the smallest to the largest value of the cumulative distribution function. Hazard Function The hazard function is the ratio of the probability density function to the survival function, S(x). $$h(x) = \frac {f(x)} {S(x)} = \frac {f(x)} {1 - F(x)}$$ The following is the plot of the normal distribution hazard function. Hazard plots are most commonly used in reliability applications. Note that Johnson, Kotz, and Balakrishnan refer to this as the conditional failure density function rather than the hazard function. Cumulative Hazard Function The cumulative hazard function is the integral of the hazard function. $$H(x) = \int_{-\infty}^{x} {h(\mu) d\mu}$$ This can alternatively be expressed as $$H(x) = -\ln {(1 - F(x))}$$ The following is the plot of the normal cumulative hazard function. Cumulative hazard plots are most commonly used in reliability applications. Note that Johnson, Kotz, and Balakrishnan refer to this as the hazard function rather than the cumulative hazard function. Survival Function Survival functions are most often used in reliability and related fields. The survival function is the probability that the variate takes a value greater than x. $$S(x) = Pr[X > x] = 1 - F(x)$$ The following is the plot of the normal distribution survival function. For a survival function, the y value on the graph starts at 1 and monotonically decreases to zero. The survival function should be compared to the cumulative distribution function. Inverse Survival Function Just as the percent point function is the inverse of the cumulative distribution function, the survival function also has an inverse function. The inverse survival function can be defined in terms of the percent point function. 
$$Z(\alpha) = G(1 - \alpha)$$ The following is the plot of the normal distribution inverse survival function. As with the percent point function, the horizontal axis is a probability. Therefore the horizontal axis goes from 0 to 1 regardless of the particular distribution. The appearance is similar to the percent point function. However, instead of going from the smallest to the largest value on the vertical axis, it goes from the largest to the smallest value.
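The pdf–cdf–ppf relationships described above are easy to verify numerically. A minimal sketch using Python's standard-library `statistics.NormalDist` (the standard normal distribution, matching the plots on this page):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, standard deviation 1

# The cdf gives interval probabilities: Pr[a <= X <= b] = F(b) - F(a)
a, b = -1.0, 1.0
prob_from_cdf = nd.cdf(b) - nd.cdf(a)

# Midpoint-rule integration of the pdf over [a, b] should agree
n = 100_000
h = (b - a) / n
prob_from_pdf = sum(nd.pdf(a + (i + 0.5) * h) for i in range(n)) * h

print(round(prob_from_cdf, 5))  # 0.68269 -- the familiar one-sigma interval
print(round(prob_from_pdf, 5))  # 0.68269

# The percent point function is the inverse cdf: F(G(alpha)) = alpha
alpha = 0.975
x = nd.inv_cdf(alpha)  # about 1.95996, the upper 97.5% point
assert abs(nd.cdf(x) - alpha) < 1e-9
```

`NormalDist.inv_cdf` plays the role of the percent point function G; feeding its output back through the cdf recovers the original probability, which is exactly the inverse relationship stated above.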
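Similarly, the survival, hazard, cumulative hazard, and inverse survival functions can all be derived from the pdf and cdf. A sketch for the standard normal, again via `statistics.NormalDist`; the helper names are ours, not NIST's:

```python
import math
from statistics import NormalDist

nd = NormalDist()  # standard normal

def survival(x):
    """S(x) = Pr[X > x] = 1 - F(x)."""
    return 1.0 - nd.cdf(x)

def hazard(x):
    """h(x) = f(x) / S(x)."""
    return nd.pdf(x) / survival(x)

def cumulative_hazard(x):
    """H(x) = -ln(1 - F(x))."""
    return -math.log(survival(x))

def inverse_survival(alpha):
    """Z(alpha) = G(1 - alpha), with G the percent point function."""
    return nd.inv_cdf(1.0 - alpha)

# Round trip: the upper 5% point of the standard normal
x = inverse_survival(0.05)
print(round(x, 4))  # 1.6449
assert abs(survival(x) - 0.05) < 1e-9

# Check H(x) = -ln(S(x)) against the definition at one point
assert abs(cumulative_hazard(1.0) - (-math.log(1.0 - nd.cdf(1.0)))) < 1e-12
```

Note how every one of these functions reduces to cdf/pdf evaluations, which is why statistical libraries typically expose only a small core (pdf, cdf, inverse cdf) and derive the rest.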
2016-09-26T03:37:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872918426990509, "perplexity": 211.86358946454243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660602.38/warc/CC-MAIN-20160924173740-00272-ip-10-143-35-109.ec2.internal.warc.gz"}
https://zbmath.org/authors/?q=ai%3Azadeh.lotfi-asker
# zbMATH — the first resource for mathematics Compute Distance To: Documents Indexed: 151 Publications since 1949, including 47 Books Biographic References: 23 Publications all top 5 #### Co-Authors 95 single-authored 17 Rutkowski, Leszek 17 Tadeusiewicz, Ryszard 16 Zurada, Jacek M. 14 Scherer, Rafał 12 Korytkowski, Marcin 7 Yager, Ronald R. 6 Nikravesh, Masoud 4 Kacprzyk, Janusz 3 Bellman, Richard Ernest 3 Desoer, Charles A. 2 Balakrishnan, Alampallam Venkatachalaiyer 2 Bouchon-Meunier, Bernadette 2 Loia, Vincenzo 2 Neustadt, Lucien W. 2 Reformat, Marek Z. 2 Shahbazova, Shahnaz N. 2 Sheu, Phillip 1 Abbasov, Əli Məmməd oğlu 1 Anderson, James Andrew 1 Azvine, Ben 1 Batyrshin, Il’dar Zakirzyanovich 1 Belenki, Alexander 1 Chan, Christine W. 1 Chang, Sheldon S. L. 1 Dionysiou, Dionysios D. 1 Eaton, J. H. 1 Fattah, Ahmed A. 1 Fu, King-Sun 1 Gaines, Brian R. 1 González-Concepción, Concepción 1 Gunn, Steve R. 1 Guyon, Isabelle 1 Jaberg, Helmut 1 Joshi, Aravind K. 1 Kalaba, Robert E. 1 Kaynak, Okyay 1 Kinsner, Witold 1 Korotkikh, Victor 1 Kostic, Miliovoje M. 1 Kreinovich, Vladik Yakovlevich 1 Langari, Reza 1 Latombe, Jean-Claude 1 Lin, Tsauyoung 1 Mastorakis, Nikos E. 1 Miller, Kenneth S. 1 Patel, Dilip 1 Pedrycz, Witold 1 Perlovsky, Leonid I. 1 Pouchkarev, V. 1 Ragazzini, John R. 1 Ramamoorthy, Chitoor V. 1 Rudas, Imre J. 1 Rutkowska, Danuta 1 Ryjov, Alexander 1 Sanchez, Elie 1 Sheremetov, Leonid B. 1 Shibata, Takanori 1 Shimura, Masamichi 1 Siekmann, Jörg H. 1 Sopian, Kamaruzaman 1 Tanaka, Kokichi 1 Thomas, John Bowman 1 Tsai, Jeffrey J. P. 1 Turksen, Burhan 1 Wang, Yingxu 1 Yao, Yiyu 1 Yao, Yiyu Y. 
1 Yen, John 1 Yu, Heather 1 Zaharim, Azami 1 Zhang, Du 1 Zimmermann, Hans-Jürgen all top 5 #### Serials 18 Lecture Notes in Computer Science 13 Information Sciences 11 Studies in Fuzziness and Soft Computing 10 Fuzzy Sets and Systems 8 Journal of Applied Physics 4 Nechetkie Sistemy i Myagkie Vychisleniya 3 Information and Control 3 IEEE Transactions on Systems, Man, and Cybernetics 3 Advances in Fuzzy Systems – Applications and Theory 2 Computers & Mathematics with Applications 2 International Journal of Man-Machine Studies 2 Journal of Mathematical Analysis and Applications 2 International Journal of Applied Mathematics and Computer Science 2 Journal of Mathematics and Physics 1 IEEE Transactions on Information Theory 1 Information Processing and Management 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Journal of Statistical Planning and Inference 1 Journal A 1 Synthese 1 International Journal of Intelligent Systems 1 Applied Mathematics Letters 1 Japanese Journal of Fuzzy Theory and Systems 1 IEEE Transactions on Circuits and Systems. I: Fundamental Theory and Applications 1 Automation and Remote Control 1 Problemy Peredachi Informatsii 1 Proceedings of the National Academy of Sciences of the United States of America 1 Computational Statistics and Data Analysis 1 Modeling, Identification and Control 1 Multiple-Valued Logic 1 Soft Computing 1 Fundamenta Informaticae 1 Journal of the Society for Industrial & Applied Mathematics 1 Journal of Research of the National Bureau of Standards 1 Management Science. Ser. B, Application Series 1 NATO ASI Series. Series F. 
Computer and Systems Sciences 1 Studies in Computational Intelligence 1 1 Applied and Computational Mathematics 1 Fuzzy Systems and Mathematics 1 IRE Transactions on Information Theory all top 5 #### Fields 85 Computer science (68-XX) 56 Mathematical logic and foundations (03-XX) 36 General and overarching topics; collections (00-XX) 25 Information and communication theory, circuits (94-XX) 17 Systems theory; control (93-XX) 9 History and biography (01-XX) 6 Operations research, mathematical programming (90-XX) 5 Probability theory and stochastic processes (60-XX) 5 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 4 Statistics (62-XX) 2 Numerical analysis (65-XX) 2 Biology and other natural sciences (92-XX) 1 Combinatorics (05-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Operator theory (47-XX) #### Citations contained in zbMATH 102 Publications have been cited 15,427 times in 10,488 Documents Cited by Year Fuzzy sets. Zbl 0139.24606 1965 The concept of a linguistic variable and its application to approximate reasoning. I. Zbl 0397.68071 1975 Fuzzy sets as a basis for a theory of possibility. Zbl 0377.04002 1978 Decision-making in a fuzzy environment. Zbl 0224.90032 Bellman, R. E.; Zadeh, L. A. 1970 Outline of a new approach to the analysis of complex systems and decision processes. Zbl 0273.93002 1973 Similarity relations and fuzzy orderings. Zbl 0218.02058 1971 The concept of a linguistic variable and its application to approximate reasoning. III. Zbl 0404.68075 1975 Probability measures of fuzzy events. Zbl 0174.49002 1968 Toward a generalized theory of uncertainty (GTU) – an outline. Zbl 1074.94021 2005 Linear system theory. The state space approach. Zbl 1145.93303 Zadeh, Lofti A.; Desoer, Charles A. 1963 The concept of a linguistic variable and its application to approximate reasoning. II. Zbl 0404.68074 1975 A computational approach to fuzzy quantifiers in natural languages. 
Zbl 0517.94028 1983 Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Zbl 0988.03040 1997 On fuzzy mapping and control. Zbl 0305.94001 Chang, Sheldon S. L.; Zadeh, Lofti A. 1972 Fuzzy logic and approximate reasoning. Zbl 0319.02016 1975 Is there a need for fuzzy logic? Zbl 1148.68047 2008 PRUF - a meaning representation language for natural languages. Zbl 0406.68063 1978 The role of fuzzy logic in the management of uncertainty in expert systems. Zbl 0553.68049 1983 Fuzzy algorithms. Zbl 0182.33301 1968 Quantitative fuzzy semantics. Zbl 0218.02057 1971 From computing with numbers to computing with words—from manipulation of measurements to manipulation of perceptions. Zbl 0954.68513 1999 Calculus of fuzzy restrictions. Zbl 0327.02018 1975 Abstraction and pattern classification. Zbl 0134.15305 Bellman, R.; Kalaba, R.; Zadeh, L. A. 1966 Local and fuzzy logics. Zbl 0382.03017 Bellman, R. E.; Zadeh, L. A. 1977 A fuzzy-algorithmic approach to the definition of complex or imprecise concepts. Zbl 0332.68068 1976 Generalized theory of uncertainty (GTU) – principal concepts and ideas. Zbl 1157.62312 2006 Shadows of fuzzy sets. Zbl 0263.02028 1966 Toward a perception-based theory of probabilistic reasoning with imprecise probabilities. Zbl 1010.62005 2002 Feature extraction. Foundations and applications. Papers from NIPS 2003 workshop on feature extraction, Whistler, BC, Canada, December 11–13, 2003. With CD-ROM. Zbl 1114.68059 Guyon, Isabelle (ed.); Gunn, Steve (ed.); Nikravesh, Massoud (ed.); Zadeh, Lotfi A. (ed.) 2006 A note on $$Z$$-numbers. Zbl 1217.94142 2011 Syllogistic reasoning in fuzzy logic and its application to usuality and reasoning with dispositions. Zbl 0593.03033 1985 Fuzzy sets and their applications to cognitive and decision processes. Proceedings of the U.-S.-Japan seminar on fuzzy sets and their applications, held at the University of California, Berkeley, California, July 1-4, 1974. 
Zbl 0307.00008 Zadeh, Lotfi A. (ed.); Fu, King-Sun (ed.); Tanaka, Kokichi (ed.); Shimura, Masamichi (ed.) 1975 Industrial applications of fuzzy logic and intelligent systems. Zbl 0864.00015 Yen, John (ed.); Langari, Reza (ed.); Zadeh, Lotfi A. (ed.) 1995 Computing with words in information/intelligent systems 2. Applications. Zbl 0931.00023 Zadeh, Lotfi A. (ed.); Kacprzyk, Janusz (ed.) 1999 New frontiers in fuzzy logic. Zbl 1333.03076 1996 Fuzzy sets and decision analysis. Zbl 0534.00023 Zimmermann, H.-J. (ed.); Zadeh, L. A. (ed.); Gaines, B. R. (ed.) 1984 Toward extended fuzzy logic – a first step. Zbl 1185.03042 2009 Fuzzy sets and applications. Selected papers. Ed. and with a preface by R. R. Yager, R. M. Tong, S. Ovchinnikov and H. T. Nguyen. Zbl 0671.01031 1987 Fuzzy probabilities. Zbl 0543.60007 1984 Computing with words. Principal concepts and ideas. Zbl 1267.68238 2012 Data mining, rough sets and granular computing. Zbl 0983.00027 Lin, Tsau Young (ed.); Yao, Yiyu Y. (ed.); Zadeh, Lotfi A. (ed.) 2002 An introduction to fuzzy logic applications in intelligent systems. Zbl 0755.68018 Yager, Ronald R. (ed.); Zadeh, Lotfi A. (ed.) 1992 Fuzzy logic = computing with words. Zbl 0947.03038 1999 Computing with words in information/intelligent systems 1. Foundations. Zbl 0931.00022 Zadeh, Lotfi A. (ed.); Kacprzyk, Janusz (ed.) 1999 Fuzzy sets, fuzzy logic, and fuzzy systems. Selected papers of Lotfi Asker Zadeh. Ed. by G. J. Klir and Bo Yuan. Zbl 0873.01048 1996 A computational theory of dispositions. Zbl 0641.68153 1987 Fuzzy probabilities and their role in decision analysis. Zbl 0532.90003 1982 Fuzzy logic and its application to approximate reasoning. Zbl 0361.68126 1974 From computing with numbers to computing with words – from manipulation of measurements to manipulation of perceptions. Zbl 1062.68583 2002 Fuzzy logic – a personal perspective. Zbl 1368.03037 2015 Fuzzy logic and the calculi of fuzzy rules and fuzzy graphs: A precis. 
Zbl 0906.03022 1996 Genetic algorithms and fuzzy logic systems. Soft computing perspectives. Zbl 1213.68585 Sanchez, Elie (ed.); Shibata, Takanori (ed.); Zadeh, Lotfi A. (ed.) 1997 Outline of a theory of usuality based on fuzzy logic. Zbl 0626.03016 1986 Optimum nonlinear filters. Zbl 0051.19402 1953 The determination of the impulsive response of variable networks. Zbl 0040.41704 1950 From imprecise to granular probabilities. Zbl 1106.60002 2005 A new direction in AI – toward a computational theory of perceptions. Zbl 1043.68646 2001 Fuzzy logic and the calculi of fuzzy rules, fuzzy graphs, and fuzzy probabilities. Zbl 0939.03506 1999 Fuzzy sets, neural networks, and soft computing. Zbl 0831.68080 Yager, R. R. (ed.); Zadeh, L. A. (ed.) 1994 Inference in fuzzy logic. Zbl 0546.03014 1980 Circuit analyis of linear varying-parameter networks. Zbl 0040.41705 1950 Toward a perception-based theory of probabilistic reasoning. Zbl 1014.68543 2001 Is probability theory sufficient for dealing with uncertainty in AI: A negative view. Zbl 0607.68075 1986 Stochastic finite-state systems in control theory. Zbl 1320.93094 2013 From computing with numbers to computing with words – from manipulation of measurements to manipulation of perceptions. Zbl 1103.68124 2005 A note on similarity-based definitions of possibility and probability. Zbl 1339.60005 2014 Toward a logic of perceptions based on fuzzy logic. Zbl 1096.68743 2000 Fuzzy logic, neural networks and soft computing. Zbl 0925.93483 1995 The role of fuzzy logic in modeling, identification and control. Zbl 0850.93463 1994 The birth and evolution of fuzzy logic. Zbl 0800.03015 1990 Dispositional logic. Zbl 0634.03020 1988 Possibility theory and its application to information analysis. Zbl 0484.94046 1978 Computing methods in optimization problems. 2. Papers presented at a conference held at San Remo, Italy, September 9–13, 1968. Zbl 0185.00104 Zadeh, L. A. (ed.); Neustadt, L. W. (ed.); Balakrishnan, A. V. (ed.) 
1969 The concept of state in system theory. Zbl 0196.45601 1964 On stability of linear varying-parameter systems. Zbl 0044.41404 1951 Toward a restriction-centered theory of truth and meaning (RCT). Zbl 1335.03008 2013 Linear system theory. The state space approach. Reprint of the 1993 original. Zbl 1153.93302 Zadeh, Lofti A; Desoer, Charles A. 2008 Fuzzy partial differential equations and relational equations. Reservoir characterization and modeling. Zbl 1052.68006 Nikravesh, Masoud (ed.); Zadeh, Lotfi A. (ed.); Korotkikh, Victor (ed.) 2004 Why the success of fuzzy logic is not paradoxical. Zbl 1009.03532 1994 The concept of state in system theory. Zbl 0186.01401 1968 Recent developments and new directions in soft computing. Zbl 1298.68035 Zadeh, Lotfi A. (ed.); Abbasov, Ali M. (ed.); Yager, Ronald R. (ed.); Shahbazova, Shahnaz N. (ed.); Reformat, Marek Z. (ed.) 2014 Probability theory should be based on fuzzy logic – a contentious view. Zbl 1062.60002 2004 Remark on the paper by Bellman and Kalaba. Zbl 0106.14004 1962 Note on an integral equation occuring in the prediction, detection, and analysis of multiple time series. Zbl 0099.35501 Thomas, J. B.; Zadeh, L. A. 1961 The information principle. Zbl 1360.94511 2015 A note on modal logic and possibility theory. Zbl 1354.03025 2014 Artificial intelligence and soft computing. 12th international conference, ICAISC 2013, Zakopane, Poland, June 9–13, 2013. Proceedings, Part II. Zbl 1283.68041 Rutkowski, Leszek (ed.); Korytkowski, Marcin (ed.); Scherer, Rafał (ed.); Tadeusiewicz, Ryszard (ed.); Zadeh, Lotfi A. (ed.); Zurada, Jacek M. (ed.) 2013 Artificial intelligence and soft computing. 11th international conference, ICAISC 2012, Zakopane, Poland, April 29–May 3, 2012. Proceedings, Part II. Zbl 1241.68035 Rutkowski, Leszek (ed.); Korytkowski, Marcin (ed.); Scherer, Rafał (ed.); Tadeusiewicz, Ryszard (ed.); Zadeh, Lotfi A. (ed.); Zurada, Jacek M. (ed.) 2012 My life and work a retrospective view. 
Zbl 1208.01040 2011 Semantic computing. Zbl 1192.68282 Sheu, Phillip (ed.); Yu, Heather (ed.); Ramamoorthy, C. V. (ed.); Joshi, Arvind K. (ed.); Zadeh, Lotfi A. (ed.) 2010 Fuzzy logic. Zbl 1308.03044 2009 Some reflections on information granulation and its centrality in granular computing, computing with words, the computational theory of perceptions and precisiated natural language. Zbl 1027.68703 2002 Computational intelligence: soft computing and fuzzy-neuro integration with applications. Proceedings of the NATO ASI, Manavgat, Antalya, Turkey, August 21–31, 1996. Zbl 0910.00051 Kaynak, Okyay (ed.); Zadeh, Lotfi A. (ed.); Türkşen, Burhan (ed.); Rudas, Imre J. (ed.) 1998 Fuzzy logic and soft computing. Zbl 0943.00013 1995 Linguistic cybernetics. Zbl 0361.68124 1974 On the analysis of large-scale systems. Zbl 0335.93005 1974 Linear system theory. The state space approach. Translation from the English by V. N. Varygin, A. S. Konstansov, A. A. Poduzov, M. D. Potapov, R. S. Rutman. Edited by G. S. Pospelov. Zbl 0241.93002 Zadeh, Lofti A.; Desoer, Charles A. 1970 On a class of stochastic operators. Zbl 0053.27102 1953 Nonlinear multipoles. Zbl 0051.19403 1953 On the theory of filtration of signals. Zbl 0048.20103 1952 Fuzzy logic – a personal perspective. Zbl 1368.03037 2015 The information principle. Zbl 1360.94511 2015 A note on similarity-based definitions of possibility and probability. Zbl 1339.60005 2014 Recent developments and new directions in soft computing. Zbl 1298.68035 Zadeh, Lotfi A. (ed.); Abbasov, Ali M. (ed.); Yager, Ronald R. (ed.); Shahbazova, Shahnaz N. (ed.); Reformat, Marek Z. (ed.) 2014 A note on modal logic and possibility theory. Zbl 1354.03025 2014 Stochastic finite-state systems in control theory. Zbl 1320.93094 2013 Toward a restriction-centered theory of truth and meaning (RCT). Zbl 1335.03008 2013 Artificial intelligence and soft computing. 12th international conference, ICAISC 2013, Zakopane, Poland, June 9–13, 2013. Proceedings, Part II. 
Zbl 1283.68041 Rutkowski, Leszek (ed.); Korytkowski, Marcin (ed.); Scherer, Rafał (ed.); Tadeusiewicz, Ryszard (ed.); Zadeh, Lotfi A. (ed.); Zurada, Jacek M. (ed.) 2013 Computing with words. Principal concepts and ideas. Zbl 1267.68238 2012 Artificial intelligence and soft computing. 11th international conference, ICAISC 2012, Zakopane, Poland, April 29–May 3, 2012. Proceedings, Part II. Zbl 1241.68035 Rutkowski, Leszek (ed.); Korytkowski, Marcin (ed.); Scherer, Rafał (ed.); Tadeusiewicz, Ryszard (ed.); Zadeh, Lotfi A. (ed.); Zurada, Jacek M. (ed.) 2012 A note on $$Z$$-numbers. Zbl 1217.94142 2011 My life and work a retrospective view. Zbl 1208.01040 2011 Semantic computing. Zbl 1192.68282 Sheu, Phillip (ed.); Yu, Heather (ed.); Ramamoorthy, C. V. (ed.); Joshi, Arvind K. (ed.); Zadeh, Lotfi A. (ed.) 2010 Toward extended fuzzy logic – a first step. Zbl 1185.03042 2009 Fuzzy logic. Zbl 1308.03044 2009 Is there a need for fuzzy logic? Zbl 1148.68047 2008 Linear system theory. The state space approach. Reprint of the 1993 original. Zbl 1153.93302 Zadeh, Lofti A; Desoer, Charles A. 2008 Generalized theory of uncertainty (GTU) – principal concepts and ideas. Zbl 1157.62312 2006 Feature extraction. Foundations and applications. Papers from NIPS 2003 workshop on feature extraction, Whistler, BC, Canada, December 11–13, 2003. With CD-ROM. Zbl 1114.68059 Guyon, Isabelle (ed.); Gunn, Steve (ed.); Nikravesh, Massoud (ed.); Zadeh, Lotfi A. (ed.) 2006 Toward a generalized theory of uncertainty (GTU) – an outline. Zbl 1074.94021 2005 From imprecise to granular probabilities. Zbl 1106.60002 2005 From computing with numbers to computing with words – from manipulation of measurements to manipulation of perceptions. Zbl 1103.68124 2005 Fuzzy partial differential equations and relational equations. Reservoir characterization and modeling. Zbl 1052.68006 Nikravesh, Masoud (ed.); Zadeh, Lotfi A. (ed.); Korotkikh, Victor (ed.) 
2004 Probability theory should be based on fuzzy logic – a contentious view. Zbl 1062.60002 2004 Toward a perception-based theory of probabilistic reasoning with imprecise probabilities. Zbl 1010.62005 2002 Data mining, rough sets and granular computing. Zbl 0983.00027 Lin, Tsau Young (ed.); Yao, Yiyu Y. (ed.); Zadeh, Lotfi A. (ed.) 2002 From computing with numbers to computing with words – from manipulation of measurements to manipulation of perceptions. Zbl 1062.68583 2002 Some reflections on information granulation and its centrality in granular computing, computing with words, the computational theory of perceptions and precisiated natural language. Zbl 1027.68703 2002 A new direction in AI – toward a computational theory of perceptions. Zbl 1043.68646 2001 Toward a perception-based theory of probabilistic reasoning. Zbl 1014.68543 2001 Toward a logic of perceptions based on fuzzy logic. Zbl 1096.68743 2000 From computing with numbers to computing with words—from manipulation of measurements to manipulation of perceptions. Zbl 0954.68513 1999 Computing with words in information/intelligent systems 2. Applications. Zbl 0931.00023 Zadeh, Lotfi A. (ed.); Kacprzyk, Janusz (ed.) 1999 Fuzzy logic = computing with words. Zbl 0947.03038 1999 Computing with words in information/intelligent systems 1. Foundations. Zbl 0931.00022 Zadeh, Lotfi A. (ed.); Kacprzyk, Janusz (ed.) 1999 Fuzzy logic and the calculi of fuzzy rules, fuzzy graphs, and fuzzy probabilities. Zbl 0939.03506 1999 Computational intelligence: soft computing and fuzzy-neuro integration with applications. Proceedings of the NATO ASI, Manavgat, Antalya, Turkey, August 21–31, 1996. Zbl 0910.00051 Kaynak, Okyay (ed.); Zadeh, Lotfi A. (ed.); Türkşen, Burhan (ed.); Rudas, Imre J. (ed.) 1998 Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Zbl 0988.03040 1997 Genetic algorithms and fuzzy logic systems. Soft computing perspectives. 
Zbl 1213.68585 Sanchez, Elie (ed.); Shibata, Takanori (ed.); Zadeh, Lotfi A. (ed.) 1997 New frontiers in fuzzy logic. Zbl 1333.03076 1996 Fuzzy sets, fuzzy logic, and fuzzy systems. Selected papers of Lotfi Asker Zadeh. Ed. by G. J. Klir and Bo Yuan. Zbl 0873.01048 1996 Fuzzy logic and the calculi of fuzzy rules and fuzzy graphs: A precis. Zbl 0906.03022 1996 Industrial applications of fuzzy logic and intelligent systems. Zbl 0864.00015 Yen, John (ed.); Langari, Reza (ed.); Zadeh, Lotfi A. (ed.) 1995 Fuzzy logic, neural networks and soft computing. Zbl 0925.93483 1995 Fuzzy logic and soft computing. Zbl 0943.00013 1995 Fuzzy sets, neural networks, and soft computing. Zbl 0831.68080 Yager, R. R. (ed.); Zadeh, L. A. (ed.) 1994 The role of fuzzy logic in modeling, identification and control. Zbl 0850.93463 1994 Why the success of fuzzy logic is not paradoxical. Zbl 1009.03532 1994 An introduction to fuzzy logic applications in intelligent systems. Zbl 0755.68018 Yager, Ronald R. (ed.); Zadeh, Lotfi A. (ed.) 1992 The birth and evolution of fuzzy logic. Zbl 0800.03015 1990 Dispositional logic. Zbl 0634.03020 1988 Fuzzy sets and applications. Selected papers. Ed. and with a preface by R. R. Yager, R. M. Tong, S. Ovchinnikov and H. T. Nguyen. Zbl 0671.01031 1987 A computational theory of dispositions. Zbl 0641.68153 1987 Outline of a theory of usuality based on fuzzy logic. Zbl 0626.03016 1986 Is probability theory sufficient for dealing with uncertainty in AI: A negative view. Zbl 0607.68075 1986 Syllogistic reasoning in fuzzy logic and its application to usuality and reasoning with dispositions. Zbl 0593.03033 1985 Fuzzy sets and decision analysis. Zbl 0534.00023 Zimmermann, H.-J. (ed.); Zadeh, L. A. (ed.); Gaines, B. R. (ed.) 1984 Fuzzy probabilities. Zbl 0543.60007 1984 A computational approach to fuzzy quantifiers in natural languages. Zbl 0517.94028 1983 The role of fuzzy logic in the management of uncertainty in expert systems. 
Zbl 0553.68049 1983 Fuzzy probabilities and their role in decision analysis. Zbl 0532.90003 1982 Inference in fuzzy logic. Zbl 0546.03014 1980 Fuzzy sets as a basis for a theory of possibility. Zbl 0377.04002 1978 PRUF - a meaning representation language for natural languages. Zbl 0406.68063 1978 Possibility theory and its application to information analysis. Zbl 0484.94046 1978 Local and fuzzy logics. Zbl 0382.03017 Bellman, R. E.; Zadeh, L. A. 1977 A fuzzy-algorithmic approach to the definition of complex or imprecise concepts. Zbl 0332.68068 1976 The concept of a linguistic variable and its application to approximate reasoning. I. Zbl 0397.68071 1975 The concept of a linguistic variable and its application to approximate reasoning. III. Zbl 0404.68075 1975 The concept of a linguistic variable and its application to approximate reasoning. II. Zbl 0404.68074 1975 Fuzzy logic and approximate reasoning. Zbl 0319.02016 1975 Calculus of fuzzy restrictions. Zbl 0327.02018 1975 Fuzzy sets and their applications to cognitive and decision processes. Proceedings of the U.-S.-Japan seminar on fuzzy sets and their applications, held at the University of California, Berkeley, California, July 1-4, 1974. Zbl 0307.00008 Zadeh, Lotfi A. (ed.); Fu, King-Sun (ed.); Tanaka, Kokichi (ed.); Shimura, Masamichi (ed.) 1975 Fuzzy logic and its application to approximate reasoning. Zbl 0361.68126 1974 Linguistic cybernetics. Zbl 0361.68124 1974 On the analysis of large-scale systems. Zbl 0335.93005 1974 Outline of a new approach to the analysis of complex systems and decision processes. Zbl 0273.93002 1973 On fuzzy mapping and control. Zbl 0305.94001 Chang, Sheldon S. L.; Zadeh, Lofti A. 1972 Similarity relations and fuzzy orderings. Zbl 0218.02058 1971 Quantitative fuzzy semantics. Zbl 0218.02057 1971 Decision-making in a fuzzy environment. Zbl 0224.90032 Bellman, R. E.; Zadeh, L. A. 1970 Linear system theory. The state space approach. Translation from the English by V. N. Varygin, A. S. 
Konstansov, A. A. Poduzov, M. D. Potapov, R. S. Rutman. Edited by G. S. Pospelov. Zbl 0241.93002 Zadeh, Lotfi A.; Desoer, Charles A. 1970 Computing methods in optimization problems. 2. Papers presented at a conference held at San Remo, Italy, September 9–13, 1968. Zbl 0185.00104 Zadeh, L. A. (ed.); Neustadt, L. W. (ed.); Balakrishnan, A. V. (ed.) 1969 Probability measures of fuzzy events. Zbl 0174.49002 1968 Fuzzy algorithms. Zbl 0182.33301 1968 The concept of state in system theory. Zbl 0186.01401 1968 Abstraction and pattern classification. Zbl 0134.15305 Bellman, R.; Kalaba, R.; Zadeh, L. A. 1966 Shadows of fuzzy sets. Zbl 0263.02028 1966 Fuzzy sets. Zbl 0139.24606 1965 The concept of state in system theory. Zbl 0196.45601 1964 Linear system theory. The state space approach. Zbl 1145.93303 Zadeh, Lotfi A.; Desoer, Charles A. 1963 Remark on the paper by Bellman and Kalaba. Zbl 0106.14004 1962 Note on an integral equation occurring in the prediction, detection, and analysis of multiple time series. Zbl 0099.35501 Thomas, J. B.; Zadeh, L. A. 1961 Optimum nonlinear filters. Zbl 0051.19402 1953 On a class of stochastic operators. Zbl 0053.27102 1953 Nonlinear multipoles. Zbl 0051.19403 1953 On the theory of filtration of signals. Zbl 0048.20103 1952 On stability of linear varying-parameter systems. Zbl 0044.41404 1951 Time-dependent Heaviside operators. Zbl 0043.32003 1951 The determination of the impulsive response of variable networks.
Zbl 0040.41704 1950 ...and 2 more Documents all top 5 all top 5 #### Cited in 523 Serials 2,417 Fuzzy Sets and Systems 968 Information Sciences 413 International Journal of Approximate Reasoning 355 Soft Computing 335 European Journal of Operational Research 283 Journal of Intelligent and Fuzzy Systems 228 Computers & Mathematics with Applications 226 International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 213 International Journal of General Systems 207 Journal of Mathematical Analysis and Applications 167 Applied Mathematical Modelling 149 Fuzzy Optimization and Decision Making 141 International Journal of Intelligent Systems 120 Applied Mathematics and Computation 103 Mathematical Problems in Engineering 93 Kybernetika 87 International Journal of Systems Science 84 Mathematical and Computer Modelling 84 Symmetry 83 Kybernetes 72 Computational and Applied Mathematics 68 Advances in Fuzzy Systems 66 Chaos, Solitons and Fractals 65 Journal of Applied Mathematics 64 Opsearch 63 Cybernetics and Systems 62 Fuzzy Information and Engineering 61 International Journal of Production Research 59 Journal of Computational and Applied Mathematics 57 Automatica 57 Iranian Journal of Fuzzy Systems 57 Afrika Matematika 56 Annals of Operations Research 55 New Mathematics and Natural Computation 53 Pattern Recognition 48 Computers & Operations Research 48 Journal of Applied Mathematics and Computing 47 Journal of the Franklin Institute 44 Artificial Intelligence 39 Complexity 39 Abstract and Applied Analysis 34 Mathematics and Computers in Simulation 32 Cybernetics and Systems Analysis 32 Fixed Point Theory and Applications 31 Journal of Inequalities and Applications 29 International Journal of Mathematics and Mathematical Sciences 29 Journal of Optimization Theory and Applications 28 Computational Statistics and Data Analysis 26 Computer Methods in Applied Mechanics and Engineering 25 Mathematics 25 International Journal of Systems Science. 
Principles and Applications of Systems and Integration 24 Journal of Computer and System Sciences 24 International Journal of Applied and Computational Mathematics 23 International Journal of Control 22 International Journal of Computer Mathematics 22 Discrete Dynamics in Nature and Society 21 Czechoslovak Mathematical Journal 21 Theoretical Computer Science 20 Asia-Pacific Journal of Operational Research 20 Sādhanā 20 Korean Journal of Mathematics 19 Journal of the Egyptian Mathematical Society 19 International Journal of Information Technology & Decision Making 18 Insurance Mathematics & Economics 18 International Journal of Applied Mathematics and Computer Science 17 International Journal of Theoretical Physics 17 Studia Logica 17 Journal of Information & Optimization Sciences 17 Applied Mathematics Letters 17 Algorithms 16 Journal of Computer and Systems Sciences International 16 International Transactions in Operational Research 16 Journal of Interdisciplinary Mathematics 16 Journal of Multiple-Valued Logic and Soft Computing 16 Cogent Mathematics 15 Aplikace Matematiky 15 Synthese 15 Journal of Vibration and Control 15 Axioms 15 Journal of Mathematics 14 Mathematical Social Sciences 14 Journal of Applied Statistics 14 Journal of Discrete Mathematical Sciences & Cryptography 14 RAIRO. Operations Research 14 International Journal of Control, I. Series 14 Journal of Function Spaces 12 Journal of Applied Non-Classical Logics 12 Journal of Mathematical Sciences (New York) 12 Advances in Difference Equations 12 Journal of Nonlinear Science and Applications 12 Arabian Journal for Science and Engineering 11 Stochastic Analysis and Applications 11 Automation and Remote Control 11 Applied Mathematics. 
Series B (English Edition) 11 Annals of Mathematics and Artificial Intelligence 11 Nonlinear Dynamics 11 Stochastic Environmental Research and Risk Assessment 11 Journal of Linear and Topological Algebra 11 Open Mathematics 10 Mathematical Biosciences ...and 423 more Serials all top 5 #### Cited in 59 Fields 2,888 Mathematical logic and foundations (03-XX) 2,544 Computer science (68-XX) 2,155 Operations research, mathematical programming (90-XX) 1,462 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 933 Systems theory; control (93-XX) 832 Statistics (62-XX) 712 Information and communication theory, circuits (94-XX) 699 General topology (54-XX) 468 Order, lattices, ordered algebraic structures (06-XX) 425 Probability theory and stochastic processes (60-XX) 335 Measure and integration (28-XX) 312 Group theory and generalizations (20-XX) 311 Numerical analysis (65-XX) 258 Real functions (26-XX) 231 Biology and other natural sciences (92-XX) 174 Ordinary differential equations (34-XX) 166 Combinatorics (05-XX) 157 Associative rings and algebras (16-XX) 142 General algebraic systems (08-XX) 135 Linear and multilinear algebra; matrix theory (15-XX) 130 Functional analysis (46-XX) 120 Operator theory (47-XX) 110 Category theory; homological algebra (18-XX) 96 Calculus of variations and optimal control; optimization (49-XX) 75 Sequences, series, summability (40-XX) 73 Mechanics of deformable solids (74-XX) 59 Dynamical systems and ergodic theory (37-XX) 47 Quantum theory (81-XX) 41 Mechanics of particles and systems (70-XX) 40 Difference and functional equations (39-XX) 38 Commutative algebra (13-XX) 38 Partial differential equations (35-XX) 34 Integral equations (45-XX) 33 Approximations and expansions (41-XX) 31 General and overarching topics; collections (00-XX) 31 Geophysics (86-XX) 29 History and biography (01-XX) 23 Convex and discrete geometry (52-XX) 18 Fluid mechanics (76-XX) 16 Topological groups, Lie groups (22-XX) 14 Field 
theory and polynomials (12-XX) 13 Integral transforms, operational calculus (44-XX) 10 Nonassociative rings and algebras (17-XX) 9 Geometry (51-XX) 8 Number theory (11-XX) 8 Algebraic topology (55-XX) 8 Optics, electromagnetic theory (78-XX) 7 Classical thermodynamics, heat transfer (80-XX) 7 Statistical mechanics, structure of matter (82-XX) 6 Global analysis, analysis on manifolds (58-XX) 5 Harmonic analysis on Euclidean spaces (42-XX) 4 Algebraic geometry (14-XX) 4 Functions of a complex variable (30-XX) 2 Differential geometry (53-XX) 2 Relativity and gravitational theory (83-XX) 1 Special functions (33-XX) 1 Abstract harmonic analysis (43-XX) 1 Manifolds and cell complexes (57-XX) 1 Mathematics education (97-XX)
https://physics.fandom.com/wiki/Natural_units
In physics, natural units are physical units of measurement based only on universal physical constants. For example, the elementary charge e is a natural unit of electric charge, and the speed of light c is a natural unit of speed.

## Fundamental units

A set of fundamental dimensions is a minimal set of units such that every physical quantity can be expressed in terms of this set and no quantity in the set can be expressed in terms of the others.[1] The fundamental dimensions used below are length (L), time (T), mass (M), electric charge (Q), and temperature (Θ).

Some physicists do not recognize temperature as a fundamental dimension of physical quantity, since it simply expresses the energy per particle per degree of freedom and can therefore be expressed in terms of energy.

## CGS system of units

| Quantity | Quantity symbol | CGS unit name | Unit symbol | Unit definition | Equivalent in SI units |
|---|---|---|---|---|---|
| length, position | L, x | centimetre | cm | 1/100 of metre | = 10−2 m |
| mass | m | gram | g | 1/1000 of kilogram | = 10−3 kg |
| time | t | second | s | 1 second | = 1 s |
| velocity | v | centimetre per second | cm/s | cm/s | = 10−2 m/s |
| acceleration | a | gal | Gal | cm/s2 | = 10−2 m/s2 |
| force | F | dyne | dyn | g⋅cm/s2 | = 10−5 N |
| energy | E | erg | erg | g⋅cm2/s2 | = 10−7 J |
| power | P | erg per second | erg/s | g⋅cm2/s3 | = 10−7 W |
| pressure | p | barye | Ba | g/(cm⋅s2) | = 10−1 Pa |
| dynamic viscosity | μ | poise | P | g/(cm⋅s) | = 10−1 Pa⋅s |
| kinematic viscosity | ν | stokes | St | cm2/s | = 10−4 m2/s |
| wavenumber | k | kayser | cm−1 | cm−1 | = 100 m−1 |
| charge | q | statcoulomb | statC | cm3/2⋅g1/2⋅s−1 | ≈ 3.33564×10−10 C |

## Natural units

The factor of 4π that distinguishes rationalized from non-rationalized units comes from the surface area of a sphere, $4 \pi r^2$.

In Lorentz–Heaviside units (rationalized units), Coulomb's law is:

• $F=\frac{q_1 q_2}{r^2} \frac{1}{4 \pi}$

In Gaussian units (non-rationalized units), Coulomb's law is:

• $F=\frac{q_1 q_2}{r^2}$

Planck units are defined by: c = ħ = G = ke = kB = 1

Stoney units are defined by: c = G = ke = e = kB = 1

Hartree atomic units are defined by: e = me = ħ = ke = kB = 1, c = 1/α

Rydberg atomic units are defined by: e²/2 = 2me = ħ = ke = kB = 1, c = 2/α

Quantum chromodynamics (QCD) units are defined by: c = mp = ħ = kB = 1

Natural units
generally means: ħ = c = kB = 1.

### Base units

| Dimension | Planck (L-H) | Planck (Gauss) | Stoney | Hartree | Rydberg | Natural (L-H) | Natural (Gauss) | QCD (Original) | QCD (L-H) | QCD (Gauss) |
|---|---|---|---|---|---|---|---|---|---|---|
| Length (L) | $\sqrt{4 \pi \hbar G \over c^3}$ | $\sqrt{\frac{\hbar G}{c^3}}$ | $\sqrt{\frac{G k_\text{e} e^2}{c^4}}$ | $\frac{\hbar^2 (4 \pi \epsilon_0)}{m_\text{e} e^2}$ | $\frac{\hbar^2 (4 \pi \epsilon_0)}{m_\text{e} e^2}$ | $\frac{\hbar c}{1\,\text{eV}}$ | $\frac{\hbar c}{1\,\text{eV}}$ | $\frac{\hbar}{m_\text{p} c}$ | $\frac{\hbar}{m_\text{p} c}$ | $\frac{\hbar}{m_\text{p} c}$ |
| Time (T) | $\sqrt{4 \pi \hbar G \over c^5}$ | $\frac{\hbar}{m_\text{P}c^2} = \sqrt{\frac{\hbar G}{c^5}}$ | $\sqrt{\frac{G k_\text{e} e^2}{c^6}}$ | $\frac{\hbar^3 (4 \pi \epsilon_0)^2}{m_\text{e} e^4}$ | $\frac{2 \hbar^3 (4 \pi \epsilon_0)^2}{m_\text{e} e^4}$ | $\frac{\hbar}{1\,\text{eV}}$ | $\frac{\hbar}{1\,\text{eV}}$ | $\frac{\hbar}{m_\text{p} c^2}$ | $\frac{\hbar}{m_\text{p} c^2}$ | $\frac{\hbar}{m_\text{p} c^2}$ |
| Mass (M) | $\sqrt{\hbar c \over 4 \pi G}$ | $\sqrt{\frac{\hbar c}{G}}$ | $\sqrt{\frac{k_\text{e} e^2}{G}}$ | $m_\text{e}$ | $2 m_\text{e}$ | $\frac{1\,\text{eV}}{c^2}$ | $\frac{1\,\text{eV}}{c^2}$ | $m_\text{p}$ | $m_\text{p}$ | $m_\text{p}$ |
| Electric charge (Q) | $\sqrt{\hbar c \epsilon_0}$ | $\frac{e}{\sqrt{\alpha}}$ | $e$ | $e$ | $\frac{e}{\sqrt{2}}$ | $\frac{e}{\sqrt{4\pi\alpha}}$ | $\frac{e}{\sqrt{\alpha}}$ | $e$ | $\frac{e}{\sqrt{4\pi\alpha}}$ | $\frac{e}{\sqrt{\alpha}}$ |
| Temperature (Θ), with $f=2$ | $\sqrt{\frac{\hbar c^5}{4 \pi G {k_\text{B}}^2}}$ | $\frac{m_\text{P} c^2}{k_\text{B}} = \sqrt{\frac{\hbar c^5}{G k_\text{B}^2}}$ | $\sqrt{\frac{c^4 k_\text{e} e^2}{G {k_\text{B}}^2}}$ | $\frac{m_\text{e} e^4}{\hbar^2 (4 \pi \epsilon_0)^2 k_\text{B}}$ | $\frac{m_\text{e} e^4}{2 \hbar^2 (4 \pi \epsilon_0)^2 k_\text{B}}$ | $\frac{1\,\text{eV}}{k_\text{B}}\cdot\frac{2}{f}$ | $\frac{1\,\text{eV}}{k_\text{B}}\cdot\frac{2}{f}$ | $\frac{m_\text{p} c^2}{k_\text{B}}$ | $\frac{m_\text{p} c^2}{k_\text{B}}$ | $\frac{m_\text{p} c^2}{k_\text{B}}$ |

### Summary table

| Quantity / Symbol | Planck (L-H) | Planck (Gauss) | Stoney | Hartree | Rydberg | "Natural" (L-H) | "Natural" (Gauss) | QCD (original) | QCD (L-H) | QCD (Gauss) |
|---|---|---|---|---|---|---|---|---|---|---|
| Speed of light $c$ | $1$ | $1$ | $1$ | $\frac{1}{\alpha}$ | $\frac{2}{\alpha}$ | $1$ | $1$ | $1$ | $1$ | $1$ |
| Reduced Planck constant $\hbar=\frac{h}{2 \pi}$ | $1$ | $1$ | $\frac{1}{\alpha}$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ |
| Elementary charge $e$ | $\sqrt{4\pi\alpha}$ | $\sqrt{\alpha}$ | $1$ | $1$ | $\sqrt{2}$ | $\sqrt{4\pi\alpha}$ | $\sqrt{\alpha}$ | $1$ | $\sqrt{4\pi\alpha}$ | $\sqrt{\alpha}$ |
| Vacuum permittivity $\varepsilon_0$ | $1$ | $\frac{1}{4 \pi}$ | $\frac{1}{4 \pi}$ | $\frac{1}{4 \pi}$ | $\frac{1}{4 \pi}$ | $1$ | $\frac{1}{4 \pi}$ | $\frac{1}{4 \pi \alpha}$ | $1$ | $\frac{1}{4 \pi}$ |
| Vacuum permeability $\mu_0 = \frac{1}{\epsilon_0 c^2}$ | $1$ | $4 \pi$ | $4 \pi$ | $4 \pi \alpha^2$ | $\pi \alpha^2$ | $1$ | $4 \pi$ | $4 \pi \alpha$ | $1$ | $4 \pi$ |
| Impedance of free space $Z_0 = \frac{1}{\epsilon_0 c} = \mu_0 c$ | $1$ | $4 \pi$ | $4 \pi$ | $4 \pi \alpha$ | $2 \pi \alpha$ | $1$ | $4 \pi$ | $4 \pi \alpha$ | $1$ | $4 \pi$ |
| Josephson constant $K_\text{J} =\frac{e}{\pi \hbar}$ | $\sqrt{\frac{4\alpha}{\pi}}$ | $\frac{\sqrt{\alpha}}{\pi}$ | $\frac{\alpha}{\pi}$ | $\frac{1}{\pi}$ | $\frac{\sqrt{2}}{\pi}$ | $\sqrt{\frac{4\alpha}{\pi}}$ | $\frac{\sqrt{\alpha}}{\pi}$ | $\frac{1}{\pi}$ | $\sqrt{\frac{4\alpha}{\pi}}$ | $\frac{\sqrt{\alpha}}{\pi}$ |
| von Klitzing constant $R_\text{K} =\frac{2 \pi \hbar}{e^2}$ | $\frac{1}{2\alpha}$ | $\frac{2\pi}{\alpha}$ | $\frac{2\pi}{\alpha}$ | $2\pi$ | $\pi$ | $\frac{1}{2\alpha}$ | $\frac{2 \pi}{\alpha}$ | $2\pi$ | $\frac{1}{2\alpha}$ | $\frac{2\pi}{\alpha}$ |
| Coulomb constant $k_e=\frac{1}{4 \pi \epsilon_0}$ | $\frac{1}{4 \pi}$ | $1$ | $1$ | $1$ | $1$ | $\frac{1}{4 \pi}$ | $1$ | $\alpha$ | $\frac{1}{4 \pi}$ | $1$ |
| Gravitational constant $G$ | $\frac{1}{4 \pi}$ | $1$ | $1$ | $\frac{\alpha_\text{G}}{\alpha}$ | $\frac{8 \alpha_\text{G}}{\alpha}$ | $\frac{\alpha_\text{G}}{{m_\text{e}}^2}$ | $\frac{\alpha_\text{G}}{{m_\text{e}}^2}$ | $\mu^2 \alpha_\text{G}$ | $\mu^2 \alpha_\text{G}$ | $\mu^2 \alpha_\text{G}$ |
| Boltzmann constant $k_\text{B}$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ |
| Proton rest mass $m_\text{p}$ | $\mu \sqrt{4 \pi \alpha_\text{G}}$ | $\mu \sqrt{\alpha_\text{G}}$ | $\mu \sqrt{\frac{\alpha_\text{G}}{\alpha}}$ | $\mu$ | $\frac{\mu}{2}$ | $938 \text{ MeV}$ | $938 \text{ MeV}$ | $1$ | $1$ | $1$ |
| Electron rest mass $m_\text{e}$ | $\sqrt{4 \pi \alpha_\text{G}}$ | $\sqrt{\alpha_\text{G}}$ | $\sqrt{\frac{\alpha_\text{G}}{\alpha}}$ | $1$ | $\frac{1}{2}$ | $511 \text{ keV}$ | $511 \text{ keV}$ | $\frac{1}{\mu}$ | $\frac{1}{\mu}$ | $\frac{1}{\mu}$ |

where $\alpha$ is the fine-structure constant, $\alpha_\text{G}$ is the gravitational coupling constant (both defined below), and $\mu = m_\text{p}/m_\text{e} \approx 1836.15$ is the proton-to-electron mass ratio.

#### Fine-structure constant

The fine-structure constant, α, in terms of other fundamental physical constants:

$\alpha = \frac{1}{4 \pi \varepsilon_0} \frac{e^2}{\hbar c} = \frac{\mu_0}{4 \pi} \frac{e^2 c}{\hbar} = \frac{k_\text{e} e^2}{\hbar c} = \frac{c \mu_0}{2 R_\text{K}} = \frac{e^2}{4 \pi}\frac{Z_0}{\hbar}$

where e is the elementary charge, ħ the reduced Planck constant, c the speed of light, ε0 the vacuum permittivity, μ0 the vacuum permeability, ke the Coulomb constant, RK the von Klitzing constant, and Z0 the impedance of free space.

#### Gravitational coupling constant

The gravitational coupling constant, αG, is typically defined in terms of the gravitational attraction between two electrons.
More precisely,

$\alpha_\mathrm{G} = \frac{G m_\mathrm{e}^2}{\hbar c} = \left( \frac{m_\mathrm{e}}{m_\mathrm{P}} \right)^2 \approx 1.751751596 \times 10^{-45}$

## Maxwell's equations

| Name | SI units | Gaussian units | Lorentz–Heaviside units |
|---|---|---|---|
| Gauss's law (macroscopic) | $\nabla \cdot \mathbf{D} = \rho_\text{f}$ | $\nabla \cdot \mathbf{D} = 4\pi\rho_\text{f}$ | $\nabla \cdot \mathbf{D} = \rho_\text{f}$ |
| Gauss's law (microscopic) | $\nabla \cdot \mathbf{E} = \rho/\epsilon_0$ | $\nabla \cdot \mathbf{E} = 4\pi\rho$ | $\nabla \cdot \mathbf{E} = \rho$ |
| Gauss's law for magnetism | $\nabla \cdot \mathbf{B} = 0$ | $\nabla \cdot \mathbf{B} = 0$ | $\nabla \cdot \mathbf{B} = 0$ |
| Maxwell–Faraday equation | $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$ | $\nabla \times \mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t}$ | $\nabla \times \mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t}$ |
| Ampère–Maxwell equation (macroscopic) | $\nabla \times \mathbf{H} = \mathbf{J}_{\text{f}} + \frac{\partial \mathbf{D}}{\partial t}$ | $\nabla \times \mathbf{H} = \frac{4\pi}{c}\mathbf{J}_{\text{f}} + \frac{1}{c}\frac{\partial \mathbf{D}}{\partial t}$ | $\nabla \times \mathbf{H} = \frac{1}{c}\mathbf{J}_{\text{f}} + \frac{1}{c}\frac{\partial \mathbf{D}}{\partial t}$ |
| Ampère–Maxwell equation (microscopic) | $\nabla \times \mathbf{B} = \mu_0\mathbf{J} + \frac{1}{c^2}\frac{\partial \mathbf{E}}{\partial t}$ | $\nabla \times \mathbf{B} = \frac{4\pi}{c}\mathbf{J} + \frac{1}{c}\frac{\partial \mathbf{E}}{\partial t}$ | $\nabla \times \mathbf{B} = \frac{1}{c}\mathbf{J} + \frac{1}{c}\frac{\partial \mathbf{E}}{\partial t}$ |

### Gravitoelectromagnetism

According to general relativity, the gravitational field produced by a rotating object (or any rotating mass–energy) can, in a particular limiting case, be described by equations that have the same form as in classical electromagnetism.
Starting from the basic equation of general relativity, the Einstein field equation, and assuming a weak gravitational field or reasonably flat spacetime, the gravitational analogs to Maxwell's equations for electromagnetism, called the "GEM equations", can be derived. The GEM equations compared to Maxwell's equations in SI units are:

| GEM equations | Maxwell's equations |
|---|---|
| $\nabla \cdot \mathbf{E}_\text{g} = -4 \pi G \rho_\text{g}$ | $\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}$ |
| $\nabla \cdot \mathbf{B}_\text{g} = 0$ | $\nabla \cdot \mathbf{B} = 0$ |
| $\nabla \times \mathbf{E}_\text{g} = -\frac{\partial \mathbf{B}_\text{g}}{\partial t}$ | $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$ |
| $\nabla \times \mathbf{B}_\text{g} = -\frac{4 \pi G}{c^2} \mathbf{J}_\text{g} + \frac{1}{c^2} \frac{\partial \mathbf{E}_\text{g}}{\partial t}$ | $\nabla \times \mathbf{B} = \frac{1}{\epsilon_0 c^2} \mathbf{J} + \frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}$ |

### Electromagnetism

The total energy in the electric field surrounding a hollow spherical shell of radius r and charge q is:

$E = k \frac{1}{2} \frac{q^2}{r}$

Therefore:

${\color{red}2E \cdot \frac{r}{q^2}} = k = \text{constant}$

The constant k is a property of space. It is the "stiffness" of space. (If space were stiffer, then c would be faster.)

Coulomb's law states that:

$F = k_e \frac{q_1 q_2}{d^2}$

The Coulomb constant has units of energy · distance / charge², which gives:

$F = {\color{red}F \cdot d \frac{d}{q^2}} \frac{q_1 q_2}{d^2}$

The factor of 1/2 in the first equation above comes from the fact that the field diminishes to zero as it penetrates the shell.

### Gravity

Newton's law of universal gravitation states that:

$F = G \frac{m_1 m_2}{d^2}$

By analogy with the electric case, the gravitational constant can be written in terms of force, distance, and mass:

$F = F \cdot d \frac{d}{m^2} \frac{m_1 m_2}{d^2}$

But it's probably better to say that:

$a = \frac{d}{t^2} = G \frac{m}{d^2}$

The obvious unit of charge is one electron, but there is no obvious unit of mass.
We can, however, create one by setting the electric force between two electrons equal to the gravitational force between two equal masses:

$G \frac{m_1 m_2}{d^2} = k_e \frac{q_1 q_2}{d^2}$

Solving, we get $m = \sqrt{\frac{k_e}{G}}\, e$, the Stoney mass. The Schwarzschild radius of a Stoney mass is 2 Stoney lengths.

### Boltzmann constant

| Gas | Specific heat ratio | Degrees of freedom |
|---|---|---|
| Helium | 1.667 | 3 |
| Neon | 1.667 | 3 |
| Argon | 1.667 | 3 |
| Hydrogen | 1.597[2] | 3.35 |
| Hydrogen | 1.41 | 4.88 |
| Nitrogen | 1.4 | 5 |
| Oxygen | 1.395 | 5.06 |
| Chlorine | 1.34 | 5.88 |
| Carbon dioxide | 1.289 | 6.92 |
| Methane | 1.304 | 6.58 |
| Ethane | 1.187 | 10.7 |

Source: Engineering ToolBox (2003)[3]

For monatomic gases:

$P V^{\frac{5}{3}} = \text{constant}$

The Boltzmann constant, k, is a scaling factor between macroscopic (thermodynamic temperature) and microscopic (thermal energy) physics. Macroscopically, the ideal gas law states:

$k_B T = P \frac{V}{n}$

where:

• kB is the Boltzmann constant
• T is the temperature
• P is the pressure
• V is the volume
• n is the number of molecules of gas.

#### Single particle

The pressure exerted on one face of a cube of side length d by a single particle bouncing back and forth perpendicular to the face, with mass m and velocity $v = \sqrt{v_x^2 + v_y^2 + v_z^2}$, is:

$\text{pressure} = \frac{\text{force}}{\text{area}} = \frac{\frac{\text{momentum}}{\text{time}}}{d^2} = \frac{\frac{2 m v_x}{2 d / v_x}}{d^2} = \frac{m v_x^2}{d^3} = \frac{2 E_x}{V_0} = \frac{2 \frac{E}{3}}{V_0}$

where:

• V0 = d³ is the volume occupied by a single particle.
• vx is the velocity perpendicular to the face.
• Twice the velocity means twice as much momentum transferred per collision and twice as many collisions per unit time.
• Ex is the kinetic energy per particle associated with motion perpendicular to the face
• E = Ex + Ey + Ez

Therefore:

$V_0 = \frac{V}{n}$

Therefore, in units where $k_B = 1$:

$T = P \frac{V}{n} = P V_0 = m v_x^2 = 2 E_x$

Therefore temperature is twice the energy per degree of freedom per particle:

$T = 2 E_x$

Planck's law states that

$B_\nu(\nu, T) = \frac{2h\nu^3}{c^2}\frac{1}{e^{h\nu/kT} - 1},$

where Bν(ν, T) is the spectral radiance (the power per unit solid angle and per unit of area normal to the propagation) per unit frequency at thermal equilibrium at temperature T; h is the Planck constant; c is the speed of light in a vacuum; k is the Boltzmann constant; ν is the frequency of the electromagnetic radiation; and T is the absolute temperature of the body.

Most of the electromagnetic radiation is emitted (and absorbed) during the brief but intense accelerations during the atomic collisions. For velocities that are small relative to the speed of light, the total power radiated is given by the Larmor formula:

$P = {2 \over 3} \frac{q^2 a^2}{4 \pi \varepsilon_0 c^3} = \frac{q^2 a^2}{6 \pi \varepsilon_0 c^3} \mbox{ (SI units)}$

## References

1. Wikipedia: Base unit (measurement)
2. At −181 °C.
3. Engineering ToolBox (2003). Specific Heat and Individual Gas Constant of Gases. [online] Available at: https://www.engineeringtoolbox.com/specific-heat-capacity-gases-d_159.html [Accessed 20-4-2019].
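Several of the defining relations above can be spot-checked numerically. A sketch in Python using approximate CODATA values (the numeric constants are quoted from standard reference tables, not from this page):

```python
import math

# Approximate CODATA 2018 values in SI units (assumed, not from this page)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e  = 9.1093837015e-31  # electron rest mass, kg

# Gaussian-normalized Planck base units from the Base units table
l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.616e-35 m
t_P = math.sqrt(hbar * G / c**5)   # Planck time,   ~5.39e-44 s
m_P = math.sqrt(hbar * c / G)      # Planck mass,   ~2.18e-8 kg

# Fine-structure constant: alpha = e^2 / (4 pi eps0 hbar c) ~ 1/137
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

# Gravitational coupling constant: alpha_G = G m_e^2 / (hbar c) ~ 1.75e-45
alpha_G = G * m_e**2 / (hbar * c)
```

Running this reproduces the ≈1.7518 × 10⁻⁴⁵ figure given for αG above, and α ≈ 1/137.036.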
https://www.usgs.gov/publications/how-well-can-wave-runup-be-predicted-comment-laudier-et-al-2011-and-stockdon-et-al
# How well can wave runup be predicted? Comment on Laudier et al. (2011) and Stockdon et al. (2006)

August 11, 2015

Laudier et al. (2011) suggested that there may be a systematic bias error in runup predictions using a model developed by Stockdon et al. (2006). Laudier et al. tested cases that sampled beach and wave conditions that differed from those used to develop the Stockdon et al. model. Based on our re-analysis, we found that in two of the three Laudier et al. cases observed overtopping was actually consistent with the Stockdon et al. predictions. In these cases, the revised predictions indicated substantial overtopping with, in one case, a freeboard deficit of 1 m. In the third case, the revised prediction had a low likelihood of overtopping, which reflected a large uncertainty due to wave conditions that included a broad and bi-modal frequency distribution. The discrepancy between the Laudier et al. results and our re-analysis appears to be due, in part, to simplifications made by Laudier et al. when they implemented a reduced version of the Stockdon et al. model.

## Citation Information

- Publication Year: 2015
- Title: How well can wave runup be predicted? Comment on Laudier et al. (2011) and Stockdon et al. (2006)
- DOI: 10.1016/j.coastaleng.2015.05.001
- Authors: Nathaniel G. Plant, Hilary F. Stockdon
- Publication Type: Article (Journal Article)
- Journal: Coastal Engineering
- Index ID: 70155836
- Publisher: USGS Publications Warehouse
- Science Center: St. Petersburg Coastal and Marine Science Center
https://math.libretexts.org/Bookshelves/Calculus/Book%3A_Calculus_(Apex)/7%3A_Applications_of_Integration
# 7: Applications of Integration

• 7.1: Area Between Curves. This chapter employs the following technique to a variety of applications. Suppose the value Q of a quantity is to be calculated. We first approximate the value of Q using a Riemann Sum, then find the exact value via a definite integral. This idea will make more sense after we have had a chance to use it several times. We begin with Area Between Curves.
• 7.2: Volume by Cross-Sectional Area: Disk and Washer Methods. Given an arbitrary solid, we can approximate its volume by cutting it into n thin slices. When the slices are thin, each slice can be approximated well by a general right cylinder. Thus the volume of each slice is approximately its cross-sectional area × thickness. (These slices are the differential elements.)
• 7.3: The Shell Method. The previous section introduced the Disk and Washer Methods, which computed the volume of solids of revolution by integrating the cross-sectional area of the solid. This section develops another method of computing volume, the Shell Method. Instead of slicing the solid perpendicular to the axis of rotation creating cross-sections, we now slice it parallel to the axis of rotation, creating "shells."
• 7.4: Arc Length and Surface Area. In this section, we address a simple question: Given a curve, what is its length? This is often referred to as arc length.
• 7.5: Work. Work is the scientific term used to describe the action of a force which moves an object. The SI unit of force is the Newton (N), and the SI unit of distance is the meter (m). The fundamental unit of work is one Newton-meter, or a joule (J). That is, applying a force of one Newton over one meter performs one joule of work.
• 7.6: Fluid Forces In the unfortunate situation of a car driving into a body of water, the conventional wisdom is that the water pressure on the doors will quickly be so great that they will be effectively unopenable. How can this be true? How much force does it take to open the door of a submerged car? In this section we will find the answer to this question by examining the forces exerted by fluids. • 7.E: Applications of Integration (Exercises) ### Contributors • Gregory Hartman (Virginia Military Institute). Contributions were made by Troy Siemers and Dimplekumar Chalishajar of VMI and Brian Heinold of Mount Saint Mary's University. This content is copyrighted by a Creative Commons Attribution - Noncommercial (BY-NC) License. http://www.apexcalculus.com/
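The approximate-then-integrate idea described for Section 7.1 can be sketched numerically: approximate the area between two curves with a Riemann sum, then compare against the exact definite integral. The curves below are illustrative choices, not examples from the text:

```python
def riemann_area_between(f, g, a, b, n=10_000):
    """Approximate the area between curves f and g on [a, b]
    with a midpoint Riemann sum of n slices."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx          # midpoint of the i-th slice
        total += abs(f(x) - g(x)) * dx  # slice area ~ height * width
    return total

# Example: area between y = x and y = x^2 on [0, 1].
# The exact value, from the definite integral of (x - x^2), is 1/6.
approx = riemann_area_between(lambda x: x, lambda x: x * x, 0.0, 1.0)
```

As the chapter notes, the Riemann sum approximates the quantity and the definite integral gives its exact value; with 10,000 slices the two agree to many decimal places.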
https://www.anl.gov/phy/fundamental-symmetries
# Fundamental Symmetries

Argonne's Physics Division conducts precision experiments aimed at testing the fundamental symmetries inherent in the basic laws of physics. At their core, these experiments search for signatures of phenomena that lie beyond the Standard Model of physics. While describing many aspects of the universe, from its smallest components to the largest structures, with amazing precision, the Standard Model still fails to account for some major phenomena, such as the dominance of matter over antimatter in the universe, the apparent observation of effects caused by so-called dark matter and dark energy, and the structure of gravity.

Our experiments exploit many different techniques in atomic and nuclear physics. These include the manipulation of neutral atoms and ions to make precision measurements of nuclear decays and to measure their masses and moments. Others relate to the search for exotic decay modes such as neutrinoless double beta decay. Specific projects include:

#### Electric dipole moments of Radium-225

We are investigating the electric dipole moment of Radium-225, a short-lived isotope of the element radium. According to the Standard Model, this property is expected to be unmeasurably small. However, theories beyond the Standard Model predict this property to be much larger. Our search is based on laser manipulation of neutral Radium-225 atoms to cool them to near absolute zero and trap them in a laser beam for a sensitive measurement of their electric dipole moment.

#### Beta-neutrino angular correlation

We search for signatures of physics beyond the Standard Model by measuring the relative angle between the electron and the neutrino that result from the beta decay of unstable nuclei. These experiments require precision control of the initial state and sensitive detection of all outgoing particles, since the direction of the unobserved neutrino can only be deduced from the direction and energy of all other particles.
We employ ion and atom traps to cool and capture the nuclei and to precisely know where and when the decay happened. Isotopes currently studied are Helium-6, Lithium-8, and Boron-8, all with lifetimes of less than one second. Hence, these isotopes need to be produced at accelerator facilities, captured, and detected within a fraction of a second.

#### Nuclear matrix elements for rare decays

Neutrinoless double beta decay is a hypothesized decay mode in which a nucleus undergoes two simultaneous beta decays, emitting two electrons but no neutrinos. Such a decay would imply that the two neutrinos annihilate (which is possible only if the neutrino is its own antiparticle), violating lepton number conservation. Current experimental searches place limits of around 10^26 years on the half-life of this decay, which corresponds to a neutrino mass of around 100 meV. However, the nuclear matrix elements connecting the half-life and the mass are uncertain by factors of 2-3. The calculations can be constrained by experimental data from various nuclear reactions, such as single-nucleon transfer measurements.
https://finalfantasy.fandom.com/wiki/Goblin_Punch
Attack with a thorough beating. —Final Fantasy Tactics description

Goblin Punch (ゴブリンパンチ, Goburin Panchi?), also known as GblinPnch, is a recurring ability of Goblin-type monsters and is a Blue Magic spell. It tends to have a random factor affecting its damage. It usually deals more damage if the caster's and the enemy's levels are the same. Oftentimes the spell costs no MP and deals non-elemental physical damage.

Appearances

Final Fantasy IV

Goblin Punch is used by the Goblin summon, dealing weak non-elemental damage to one enemy. In the 3D version, Goblin Punch instead deals moderate non-elemental damage.

Final Fantasy IV -Interlude-

Goblin Punch is Goblin's attack when summoned. It deals weak non-elemental damage to one enemy.

Final Fantasy IV: The After Years

Goblin Punch is used by the Goblin summon. It deals damage to an enemy with base power 8, and costs 1 MP to cast.

Final Fantasy V

Attacks one enemy with a goblin's strength. —Description

Goblin Punch costs 0 MP and deals massive damage to enemies with a level equal to the caster's. Otherwise it deals damage equal to what the Fight command would normally do, except that it ignores rows. In either case, the attack ignores evasion. There is a glitch where using the Power Drink item enhances only the power of Goblin Punch and no other attacks. There is also a glitch where, if the player has the Excalipoor (or any staff/rod with a "false" attack power listed) equipped and uses Goblin Punch, the coding that causes the Excalipoor to do 1 damage is ignored, allowing one to potentially deal massive damage. Both glitches are fixed in the mobile and Steam versions.

Final Fantasy VII

Non-elemental damage on any one opponent —Description

Goblin Punch is learned as an Enemy Skill from the Goblin enemy. It deals damage equal to 75% of Attack (eight times that if the target is of equal level) and requires no MP to use.
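As a rough sketch of the FFVII rule just described (a paraphrase of the two stated facts, not the game's full damage pipeline, which also involves defense and variance):

```python
def goblin_punch_damage(attack, user_level, target_level):
    """Sketch of the FFVII Goblin Punch rule: 75% of Attack,
    multiplied by 8 when the user's and target's levels match."""
    damage = 0.75 * attack
    if user_level == target_level:
        damage *= 8  # the level-match bonus described above
    return int(damage)

# A level match turns a 75-damage hit into a 600-damage hit
same  = goblin_punch_damage(100, 40, 40)   # 600
other = goblin_punch_damage(100, 40, 41)   # 75
```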
The skill can also be of use at the Gold Saucer Battle Square if one has their weapon broken, as Goblin Punch will still do normal damage.

Crisis Core -Final Fantasy VII-

There are five versions of Goblin Punch. They are all Command Materia. The base Goblin Punch has Zack wind up for several seconds before unleashing a powerful attack; it uses 10 AP. Iron Fist is a slightly stronger Goblin Punch that also costs 10 AP. Magical Punch costs 99 MP and inflicts more damage the closer Zack is to max MP. Hammer Punch costs 99 AP and does higher damage as Zack is closer to max AP. The strongest form, Costly Punch, expends 1/128th of Zack's max HP, doing greater damage as Zack is low on HP, and costs no AP. It will deal 0 damage if Zack's HP is greater than 1.11x his max HP. Even if he is at max HP, Costly Punch often does 9999 damage (up to 99,999 if Zack can break the damage limit). Goblin Punch can be stolen from the boss in mission 4-4-4, and one can also be obtained in the waterfall minigame near Gongaga.

Final Fantasy IX

Causes non-elemental damage to the enemy. —Description

Quina can learn Goblin Punch by eating a Goblin or Goblin Mage. The damage dealt increases the closer the caster's level is to the target's. It is also affected by the caster's Strength and the target's Defense, as well as any status effects. The ability costs 4 MP to use. It can't be reflected and works with Return Magic. If the caster's level is not the same as the target's level, the damage follows the normal Blue Magic formula. If the levels are the same, it uses the following formula instead:

$Base = 20$

$Bonus = \text{Rand}\{(Mag + Lv) \ldots [(Lv + Mag) / 8] + (Mag + Lv)\}$

$Damage = Base \cdot Bonus$

Final Fantasy XII: Revenant Wings

Goblin Punch is the Esper Goblin's attack and deals basic damage.

Final Fantasy XIII

The enemies Munchkin, Goblin, and Borgbear use the ability Goblin Punch. The ability deals moderate physical damage.
This ability is only used when Munchkin, Goblin, and Borgbear have status enhancements bestowed on them.

Final Fantasy XIII-2

Goblin Punch is an enemy-exclusive ability available to Goblin, Munchkin, Moblin, Buccaboo, and Gancanagh, usable only when the effects of Goblinhancement are stacked on them. In addition to normal damage, Goblin Punch also inflicts heavy Wound damage.

Lightning Returns: Final Fantasy XIII

This article or section is a stub about an ability in Lightning Returns: Final Fantasy XIII. You can help the Final Fantasy Wiki by expanding it.

Final Fantasy Tactics

Gobbledyguck is the only enemy that can naturally use Goblin Punch; however, both Black Goblin and Goblin can also use it when an ally with the Monster Skill ability is next to it. It does damage equal to the user's max HP minus current HP.

All-out punch. Damage varies. —Description

Goblin Punch is learned from Goblins and costs 8 MP to cast. Its base attack power is higher than a regular attack's. It is worth noting that this spell can be missed permanently after a certain point, as regular Goblins become extinct. Goblin Punch does a normal 100% damage attack, but the final damage is subject to extra variance:

$Final\ Damage = [\text{Rand}\{128..384\} \cdot Damage / 256]$

(50% to 150% of damage)

Final Fantasy Dimensions

Goblins use Goblin Punch as an attack.

This article or section is a stub about an ability in Final Fantasy Dimensions. You can help the Final Fantasy Wiki by expanding it.

Dissidia Final Fantasy

Bartz Klauser can use Goblin Punch while in EX Mode. It is a short-ranged HP attack performed by pressing and . The attack hits opponents for Bravery damage and knocks them away, causing Wall Rush. If Bartz and his opponent are the same level, Goblin Punch deals eight times as much damage.

Dissidia 012 Final Fantasy

Goblin Punch is Bartz's EX Mode attack, unchanged from Dissidia.

Dissidia Final Fantasy NT

Goblin Punch is one of Bartz's moves.
He can also chain Goblin Punch into missiles.

This article or section is a stub about an ability in Dissidia Final Fantasy NT. You can help the Final Fantasy Wiki by expanding it.

Pictlogica Final Fantasy

This article or section is a stub about an ability in Pictlogica Final Fantasy. You can help the Final Fantasy Wiki by expanding it.

This article or section is a stub about an ability in Final Fantasy Airborne Brigade. You can help the Final Fantasy Wiki by expanding it.

Final Fantasy Artniks Dive

This article or section is a stub about an ability in Final Fantasy Artniks Dive. You can help the Final Fantasy Wiki by expanding it.

Final Fantasy Record Keeper

This article or section is a stub about an ability in Final Fantasy Record Keeper. You can help the Final Fantasy Wiki by expanding it.

Final Fantasy Explorers

Punch repeatedly in the direction you are facing. —Description

This article or section is a stub about a spell in Final Fantasy Explorers. You can help the Final Fantasy Wiki by expanding it.

Final Fantasy Brave Exvius

This article or section is a stub about an ability in Final Fantasy Brave Exvius. You can help the Final Fantasy Wiki by expanding it.

Mobius Final Fantasy

This article or section is a stub about an ability in Mobius Final Fantasy. You can help the Final Fantasy Wiki by expanding it.

World of Final Fantasy

Goblin Punch is an active physical ability that inflicts neutral physical damage on a single target for 4 AP. It hits multiple times and has low topple strength. It can be used by Goblin. It is also an enemy ability used by Goblin.

Chocobo no Fushigi na Dungeon

This article or section is a stub about an ability in Chocobo no Fushigi na Dungeon. You can help the Final Fantasy Wiki by expanding it.

Gallery

This gallery is incomplete and requires Final Fantasy XI, Final Fantasy XIII, Final Fantasy XIII-2, Final Fantasy XIV and Chocobo no Fushigi na Dungeon added.
You can help the Final Fantasy Wiki by uploading images.

Etymology

A goblin is a small, mischievous creature found in many European folk tales and legends. The word "goblin" comes from the Norman French word Gobelinus, the name of a ghost that haunted the town of Évreux in the 12th century. The Goblin Punch is likely inspired by Germanic mythology, where troll and goblin creatures often fought bare-handed.
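The Final Fantasy Tactics variance formula quoted above, Final Damage = [Rand{128..384} · Damage / 256], can be sketched directly (a paraphrase of the stated formula, not the game's code):

```python
import random

def fft_goblin_punch_damage(base_damage, rng=random):
    """Apply the FFT variance roll: floor(rand(128..384) * damage / 256)."""
    roll = rng.randint(128, 384)      # inclusive on both ends
    return (roll * base_damage) // 256

# The multiplier runs from 128/256 = 50% to 384/256 = 150% of base damage
lo = (128 * 100) // 256   # minimum result for 100 base damage
hi = (384 * 100) // 256   # maximum result for 100 base damage
```

For 100 base damage this yields results between 50 and 150, matching the "(50% to 150% of damage)" note in the article.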
http://dergipark.gov.tr/ijot/issue/5753/76676
# Effect of the Structured Packing on Column Diameter, Pressure Drop and Height in a Mass Transfer Unit

Rosa Chávez, Javier Guadarrama, Abel Hernández-Guerrero

In order to determine the dimensions of a separation column, hydrodynamic and mass transfer models are necessary to evaluate the pressure drop and the mass transfer unit height. The present work evaluates the dependency of those parameters on the diameter of the column by means of an absorption column. The process within the absorption column is carried out using three different structured packings (ININ, Sulzer BX, and Mellapak) and one random packing (Raschig rings), in order to recover SO2. Structured packing has been achieving wider acceptance due to its greater efficiency in the separation process. The results show that the ININ packing performs best because it has the lowest height of the global mass transfer unit, and that the Mellapak packing has the largest capacity because it manages the largest flows.

Keywords: structured packing, mass transfer unit, absorption column

Primary Language: English
Article Type: Regular Original Research Article
Journal: International Journal of Thermodynamics, Vol. 7, No. 3 (September 2004), pp. 141-148. ISSN 1301-9724, eISSN 2146-1511.

Citation (BibTeX):

```
@article{ijot76676,
  journal = {International Journal of Thermodynamics},
  issn    = {1301-9724},
  eissn   = {2146-1511},
  year    = {2004},
  volume  = {7},
  number  = {3},
  pages   = {141-148},
  title   = {Effect of the Structured Packing on Column Diameter, Pressure Drop and Height in a Mass Transfer Unit},
  author  = {Chávez, Rosa and Guadarrama, Javier and Hernández-Guerrero, Abel}
}
```
2019-04-19T22:33:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17140626907348633, "perplexity": 6814.862026614414}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528430.9/warc/CC-MAIN-20190419220958-20190420002958-00183.warc.gz"}
https://atb.nrel.gov/transportation/2020/electricity
# Electricity

Electricity is produced from energy sources such as wind and solar energy, hydropower, nuclear energy, stored hydrogen, oil, coal, and natural gas. It is defined as an alternative fuel by the Energy Policy Act of 1992 (DOE, 2019). For additional background, see the Alternative Fuels Data Center's Electricity Basics. On this page, explore the fuel price and emissions intensity of electricity.

## Key Assumptions

The data and estimates presented here are based on the following key assumptions:

• The fuel price (e.g., Lowest Cost, Lowest Emissions) is associated with a single year. Because we do not provide a time-series trajectory, here we show fuel price at a frozen level for all years so we can offer a range of fuel price values. In the levelized cost of driving and emissions charts, this approach clearly distinguishes effects of fuels from those of vehicle technologies, because fuels remain constant while vehicle technologies change over time.
• Multiple charging and grid mix scenarios are provided, which are meant to encompass the potential variability of electricity prices and emissions.
• In the plug-in electric vehicle (PEV) charging scenarios, the electricity price represents an estimate of the average price paid by a current PEV user. This price is a weighted average of different electricity prices. We assume 81% of PEV charging happens at home (Borlaug et al., 2020). Moreover, electric vehicles are often charged at favorable time-of-use rates that provide discounted electricity during certain hours of the day (usually at night) and align well with electric vehicle charging needs (Kaluza et al., 2016). We assume 50% of home PEV charging takes advantage of time-of-use rates and that these provide a 50% price saving. We assume the remaining 19% of charging happens at workplace/public stations, for which costs and business models are variable.
We assume 14% of PEV charging pays commercial electricity prices and 5% of PEV charging pays the current DC fast charge price, estimated at $0.27/kilowatt-hour (Borlaug et al., 2020). The table below summarizes the assumptions and costs used in the PEV charging, National grid mix scenario.

Electricity Price and Charging Assumptions for Plug-in Electric Vehicle Charging

• The PEV charging high-cost scenario is included to explore the sensitivity of levelized cost of driving to electricity price. The high-cost case represents a scenario with no time-of-use rates. The price is calculated using the home, workplace, and public charging shares above (with no time-of-use discount).
• The DC fast charging scenario is based on the estimated price of DC fast charging of $0.27/kilowatt-hour under high-cost assumptions (Borlaug et al., 2020). The higher cost assumptions are used as an upper bound on electricity prices.
• For current grid mix scenarios (National, IN, and CA), residential and commercial electricity prices are estimated from 2018 electricity retail price data from EIA (EIA, 2019b). The grid mixes for the national and Indiana grid mix scenarios are based on 2018 electricity generation from EIA (EIA, 2019a), and the grid mix for California is based on 2018 in-state generation and imports from the California Energy Commission (California Energy Commission, 2019). Note that the in-state generation mix in California (without imports) includes less coal, and thus has lower emissions factors than the values shown here; for example, the CO2e emissions for in-state generation only is 67,700 g/mmBtu CO2e based on EIA (EIA, 2019a).
• The Future National Grid Mix and Future Low Renewable Energy Penetration scenarios are based on 2050 values in the Annual Energy Outlook 2020 for the Reference and High Renewable Cost cases, respectively (EIA, 2020).
The Future High Renewable Energy Penetration scenario is estimated from the 2050 results of the Low Renewable Energy Cost scenario in the 2019 NREL Standard Scenarios analysis (Cole et al., 2019). The Standard Scenarios analysis only provides wholesale electricity prices; therefore, we apply the percent change in wholesale prices from 2018 to 2050 to the 2018 residential and commercial rates from the Annual Energy Outlook Reference case to estimate electricity prices for the Future High Renewable Energy Penetration scenario.

• The emissions intensities are estimated using GREET 2018 (Argonne National Laboratory, 2018) and are based on grid mixes corresponding to each scenario described above. The table below shows the generation penetration by technology for each grid mix scenario.

Electricity Mix by Technology for Alternative Grid Mix Scenarios

• In the vehicle levelized cost of driving calculation, we include charger equipment and installation costs for battery electric vehicles and plug-in hybrid electric vehicles, based on Borlaug et al. (Borlaug et al., 2020). We assume plug-in hybrid electric vehicles use 50% Level 1 and 50% Level 2 chargers, and battery electric vehicles use 16% Level 1 and 84% Level 2 chargers. We use charger costs of $0 for Level 1 and $1,836 for Level 2. These costs are added to the capital cost of the vehicle. We assume the technology will last the life of the vehicle. We note that if charging equipment is already available, a consumer would not need to incur this cost to purchase a plug-in hybrid electric vehicle or battery electric vehicle.
• The electricity price was converted to dollars per gasoline gallon equivalent from dollars per kilowatt-hour, assuming 1 gge = 33.7 kilowatt-hours (EPA, 2011).
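The charging price weighting described in the assumptions above can be sketched in a few lines. Note that the residential and commercial rates below are placeholder assumptions chosen for illustration, not the EIA-derived values used in the ATB:

```python
# Illustrative sketch of the PEV charging price weighting described above.
# The residential and commercial rates below are placeholder assumptions
# chosen for the example, not the EIA-derived values used in the ATB.
res_rate = 0.13   # $/kWh, assumed residential retail rate
com_rate = 0.11   # $/kWh, assumed commercial retail rate
dcfc_rate = 0.27  # $/kWh, DC fast charge price (Borlaug et al., 2020)

home, workplace_public, dcfc = 0.81, 0.14, 0.05  # charging shares
tou_share, tou_saving = 0.50, 0.50  # half of home charging gets a 50% discount

# Effective home rate: discounted time-of-use portion plus full-rate portion
home_rate = res_rate * (tou_share * (1 - tou_saving) + (1 - tou_share))

# Weighted-average price paid by a PEV user, in $/kWh
price_per_kwh = home * home_rate + workplace_public * com_rate + dcfc * dcfc_rate

# Convert to $/gge using 1 gge = 33.7 kWh (EPA, 2011)
price_per_gge = price_per_kwh * 33.7
```

With these placeholder rates the weighted price works out to roughly $0.11/kWh; substituting the actual EIA residential and commercial rates reproduces the scenario prices used on this page.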
## Definitions

For detailed definitions, see: CO2e emissions, Electricity, Fuel price, Natural gas, NOX emissions, PM emissions, Scenarios, SOX emissions, Well-to-tank emissions.

## References

The following references are specific to this page; for all references in this ATB, see References.

DOE. “Alternative Fuels Data Center,” 2019. https://afdc.energy.gov/.

EIA. “Annual Energy Outlook 2020.” Washington, D.C.: U.S. Energy Information Administration, January 29, 2020. https://www.eia.gov/outlooks/aeo/.

California Energy Commission. “2018 Total System Electric Generation.” California Energy Commission, June 24, 2019. https://www.energy.ca.gov/data-reports/energy-almanac/california-electricity-data/2018-total-system-electric-generation.

Argonne National Laboratory. GREET Model: The Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation Model. Argonne, IL (United States): Argonne National Laboratory, 2018. https://greet.es.anl.gov/.

Borlaug, Brennan, Shawn Salisbury, Mindy Gerdes, and Matteo Muratori. “Levelized Cost of Charging Electric Vehicles in the United States.” Joule 4, no. 7 (July 15, 2020): 1470–85. https://doi.org/10.1016/j.joule.2020.05.013.

Cole, Wesley, Nathaniel Gates, Trieu Mai, Daniel Greer, and Paritosh Das. “2019 Standard Scenarios Report: A U.S. Electricity Sector Outlook.” National Renewable Energy Lab. (NREL), Golden, CO (United States), December 2019. https://doi.org/10.2172/1481848.

EIA. “Annual State-Level Generation and Fuel Consumption Data,” 2019a. https://www.eia.gov/electricity/data.php.

EIA. “Electricity Data Browser: Average Retail Price of Electricity,” 2019b. https://www.eia.gov/electricity/data/browser/#/topic/7?agg=0,1&geo=vvvvvvvvvvvvo&endsec=vg&freq=A&start=2001&end=2017&ctype=linechart&ltype=pin&rtype=s&pin=&rse=0&maptype=0.

Kaluza, Sebastian, David Almeida, and Paige Mullen. “BMW i ChargeForward: PG&E’s Electric Vehicle Smart Charging Pilot.” A cooperation between BMW Group and Pacific Gas and Electricity Company, 2016.
http://www.pgecurrents.com/wp-content/uploads/2017/06/PGE-BMW-iChargeForward-Final-Report.pdf.
2023-03-31T10:06:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863561749458313, "perplexity": 5621.051252975742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00364.warc.gz"}
https://www.usgs.gov/center-news/volcano-watch-keep-mauis-1938-earthquake-mind
# Volcano Watch — Keep Maui's 1938 earthquake in mind

An ancient Japanese proverb says that the most recent disaster fades from memory just before the next one strikes. Recently our friend Garret Hew of East Maui Irrigation inquired about the great 1938 Maui earthquake. That's good news; that earthquake hasn't faded from memory yet.

On January 22, 1938, at about 10:03 p.m. local time, a magnitude 6.8 earthquake struck the central part of the Hawaiian island chain. The submarine earthquake was located about 20 km (12 mi) northeast of Keanae Point (East Maui) at a depth of roughly 20 km (12 mi). For the people who lived closest, the quake might as well have been beneath their feet.

The north coast of Maui took the brunt of the damage. Landslides blocked the road to Hana and completely severed communications for several days. Two large oil tanks near Hana shattered, and 110,000 liters (30,000 gallons) of oil flowed into the sea. Ranches in southeastern Maui suffered heavy damage as water tanks and stone walls were razed. Fortunately, no lives were lost, and injuries were few. No tsunami accompanied the shock.

Central and west Maui weren't spared from damage. Concrete buildings cracked from Kahului to Lahaina. The fire station tower in Kahului shifted 13 mm (0.5 in.)

The 1938 earthquake was one of the few Hawaii earthquakes felt throughout the islands. Kauai residents reported it as the severest shock in memory, but damage was trifling. On Oahu rocks rolled onto roadways, and some plaster cracked and fell in buildings. Most damage, however, was limited to broken crockery and glassware. Molokai and Lanai had small cracks open in the ground. Water pipes broke in a few places.
Big Island residents, accustomed as they were to earthquakes, remained calm compared to others in the state. The shock was felt by most people but was no greater in intensity than the temblors that commonly occur. Dishes were broken, pictures fell from the walls, and plaster cracked.

A phenomenon called "earthquake lights" accompanied the 1938 earthquake. During and immediately after the earthquake, intense bursts, glows, or flashes of white-to-bluish light, lasting from a few seconds to about a minute, were observed in many parts of Hawaii—even by campers as far away as Halape on the Big Island's south coast, 210 km (125 mi) from the epicenter. Although there is no generally accepted scientific explanation for their occurrence, these lights apparently result from earthquake-induced oscillations or distortions of the atmosphere.

The 1938 Maui earthquake was unrelated to volcanism. It's an example of the other type of Hawaiian earthquake, the tectonic kind, that results from loading and bending of the Earth's crust by the mass of each island. These earthquakes diminish in frequency as each island moves off the hotspot and away from the zone of flexure associated with the larger islands. University of Hawaii seismologists think that, for Maui County, magnitude 6 earthquakes of this type could occur about once every 50 years, and magnitude 7 earthquakes about once every 250 years. The occurrence of magnitude 7 earthquakes on the Big Island is more frequent. With improved monitoring, we hope to better understand the mechanism of the next large deep earthquake in the Hawaiian island chain.

### Volcano Activity Update

Lava continues to erupt from Puu O`o and flow through a network of tubes from the vent to the sea near Kamokuna. A slow-moving pahoehoe flow emanating from a breach in the tube system has remained active on the coastal flats since Friday, March 26. The flow is 1.7 km (1.0 mi) long, and the distal end is about 0.9 km (0.54 mi) from the ocean.
The public is reminded that the ocean entry areas are extremely hazardous, with explosions accompanying frequent collapses of the new land. The steam clouds are also very hazardous, being highly acidic and laced with glass particles. There have been no felt earthquakes since March 18.
2019-11-22T13:12:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18802008032798767, "perplexity": 5895.960773501082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00161.warc.gz"}
https://www.abs.gov.au/statistics/microdata-tablebuilder/datalab/input-and-output-clearance
# Input and output clearance

DataLab: requesting input and output clearance, output rules. Released 19/11/2021.

Outputs from DataLab must be approved by ABS before they can be released. You must not remove anything (data, code, notes, etc.) from the DataLab yourself. Before you ask for output clearance, apply the appropriate DataLab output rules to each statistic.

## Output rules

### Rule of 10

• Each cell/statistic should have at least 10 (unweighted) contributors
• Provide unweighted counts

### Dominance rules

• (1,50) rule: the largest contributor of a cell/statistic should not exceed 50% of the total for that cell/statistic
• (2,67) rule: the two largest contributors of a cell/statistic should not exceed 67% of the total for that cell/statistic
• Replace negative values with absolute values, take the largest one (two) absolute value(s) and calculate the (1,50) and (2,67) statistics for the contribution to the total of absolute values
• Provide evidence

### Applying dominance rules

The dominance rule applies to tables that present magnitude or continuous variables such as income or turnover. This does not apply to categorical variables or counts. The rule is designed to prevent the re-identification of units that contribute a large percentage of a cell's total value, which could in turn reveal information about individuals, households or businesses.

The cell dominance rule defines the number of units that are allowed to contribute a defined percentage of the total. DataLab has a (1,50) and a (2,67) rule. This means that the top contributor cannot contribute more than 50% of the total value of a cell and the top 2 contributors cannot contribute more than 67% of the total value of a cell.

Dominance is required if any mean, total, ratio, proportion or measure of concentration statistic can be calculated for continuous or magnitude variables.
While ratios/proportions can be continuous, if the numerator and denominator of the ratios/proportions are counts, we do not need dominance statistics. Dominance testing is also required when there is a regression with a continuous dependent variable and categorical independent variables. In this case, every combination of categorical variables (crosstab) will need to be tested for dominance against the dependent variable.

The table below shows an example of the additional information that analysts need to provide for output clearance when requesting a mean, total, ratio, proportion or measure of concentration.

| LGA | Total Profit ($M) | Top 1 Contributor ($M) | Top 2 Contributors ($M) | Top 1 Contribution to Total Profit (%) | Top 2 Contribution to Total Profit (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | 1.65 | 0.51 | 0.82 | 31 | 50 |
| 2 | 0.94 | 0.11 | 0.15 | 12 | 16 |
| 3 | 3.22 | 2.51 | 3.03 | 78 | 94 |
| 4 | 2.1 | 1.52 | 1.83 | 72 | 87 |
| 5 | 2.05 | 0.5 | 0.8 | 24 | 39 |

There are multiple instances where the (1,50) and (2,67) rules are violated. The top contributor in LGA 3 contributes 2.51/3.22 = 78% of the total, which violates the (1,50) rule. The top 2 contributors in LGA 3 contribute 3.03/3.22 = 94% of the total, which violates the (2,67) rule. You may also need to apply consequential suppression to your table so suppressed values cannot be derived.

### Group disclosure rule

• In all tabular and similar outputs, no cell should contain 90% or more of the column or row total
• Provide evidence

### Minimum contributors for percentiles

| Percentile | Minimum contributors |
| --- | --- |
| 0.01 | 500 |
| 0.05 | 100 |
| 0.10 | 50 |
| 0.25 | 20 |
| 0.50 | 10 |
| 0.75 | 20 |
| 0.90 | 50 |
| 0.95 | 100 |
| 0.99 | 500 |

### Minimum 10 degrees of freedom

• All modelled output should have at least 10 degrees of freedom
• Degrees of freedom = number of observations - number of parameters - other restrictions of the model

### Consequential suppression

If one or more of the rules fail and suppression is applied, one or more additional cells should be suppressed to protect the value of the primary suppressed cell from being worked out.
In the case of the rule of 10 failing, additional cells should be suppressed so that someone with access to multiple tables drawn from the same sample cannot combine them to deduce the values of cells with fewer than 10 observations.

In the case of the dominance rules failing, if area11 + area12 + area13 = area1, and a cell in area11 is suppressed, then the same cell in area12 and/or area13 also needs to be suppressed such that both dominance rules pass for the combined suppressed cells. Likewise for any other relationships. Examples include:

• Industry11 + Industry12 + Industry13 = Industry1
• variable1 + variable2 + variable3 = variable4
• (variable1 - variable2) / variable1 = variable3
• variable1 / variable2 = variable3

## Preparing your output for clearance

### Descriptive statistics

##### Frequency tables

• Rule of 10
• Group disclosure rule
• Consequential suppression

##### Magnitude tables, means, totals, indices, indicators, proportions, measures of concentration

• Rule of 10
• Dominance rules
• Group disclosure rule
• Consequential suppression

##### Ratios

• Rule of 10
• Dominance rules
• Group disclosure rule
• Consequential suppression
• If the ratio is calculated at the business or individual level, the ratio is treated as another variable on the dataset and the (1,50) and (2,67) dominance rules apply as usual
• If the ratio is in the form of aggregate/aggregate, the (1,50) and (2,67) dominance rules apply to the numerator and denominator separately.
If either the numerator or denominator fails, the ratio is suppressed.

##### Maximums, minimums

Subject to minimum contributors for percentiles, use:

• 99th and 1st percentiles
• 95th and 5th percentiles
• 90th and 10th percentiles

##### Quantiles (including median, quartiles, quintiles, deciles, percentiles)

• Minimum contributors for percentiles

##### Box plot

• Same rules apply as per quartiles, maximums and minimums
• Minimum contributors for percentiles

##### Mode

• Rule of 10

##### Higher moments of distributions/measures of spread (including variance, covariance, kurtosis, skewness)

• Rule of 10

##### Graphs, pictorial representations of actual data

• Not normally released if showing individual observations

### Correlation and regression analysis

##### Regression coefficients, and summary and test statistics

• Minimum 10 degrees of freedom
• R-squared ≤ 0.8

For regressions that have a continuous dependent variable and only categorical independent variables, the regression will return the average of each category. In this case:

• Rule of 10
• Dominance rules
• Provide a cross-tab of the independent variables. Each cell must have at least 10 observations.
• Each cell in the cross-tab needs to be tested for the (1,50) and (2,67) dominance rules for the dependent variable.

##### Hazard models

• Rule of 10
• There must be at least 10 'failures'

##### Estimation residuals

• Not normally released
• Provide justification

##### Correlation coefficients

• Rule of 10

### How to apply dominance rule and rule of 10 for regression

#### Example 1: Linear Regression

A linear regression was run to predict income by age and health status. Age was binned into three categories: <18 years, >18 and <30, and >30 years, where <18 was the reference category. Health status was categorised as healthy or unhealthy, where unhealthy was the reference category.
Suppose the desired output was the regression summary below:

| | Beta Coefficient | P Value |
| --- | --- | --- |
| Constant | 1.5 | 0.001 |
| Age >18 and <30 | 2 | 0.004 |
| Age >30 | 3 | 0.002 |

N = 1000, R-squared = 0.67

We should provide a crosstabulation of counts and a dominance table for the output clearance team.

#### Crosstabulation of Counts

| | Unhealthy | Healthy |
| --- | --- | --- |
| Age < 18 | 15 | 30 |
| Age >18 and <30 | 40 | 70 |
| Age >30 | 60 | 89 |

Counts for each combination of variables are greater than 10, so the rule of 10 is satisfied.

#### Dominance Table

We should provide a dominance table for the output clearance team like the one below. Please note: only the Top 1 and Top 2 Contribution to Total Income columns are required. The other columns are presented to illustrate the calculation. This table is also usually presented in one long spreadsheet.

Unhealthy:

| | Total Income | Top 1 Income | Top 2 Income | Top 1 Contribution to Total Income | Top 2 Contribution to Total Income |
| --- | --- | --- | --- | --- | --- |
| Age < 18 | $1,500 | $500 | $900 | 500/1,500 = 33% | 900/1,500 = 60% |
| Age >18 and <30 | $130,000 | $55,000 | $85,000 | 55,000/130,000 = 42% | 85,000/130,000 = 65% |
| Age >30 | $1,000,000 | $520,000 | $600,000 | 520,000/1,000,000 = 52% | 600,000/1,000,000 = 60% |

Healthy:

| | Total Income | Top 1 Income | Top 2 Income | Top 1 Contribution to Total Income | Top 2 Contribution to Total Income |
| --- | --- | --- | --- | --- | --- |
| Age < 18 | $2,500 | $1,000 | $1,300 | 1,000/2,500 = 40% | 1,300/2,500 = 52% |
| Age >18 and <30 | $230,000 | $155,000 | $200,000 | 155,000/230,000 = 67% | 200,000/230,000 = 87% |
| Age >30 | $2,000,000 | $600,000 | $900,000 | 600,000/2,000,000 = 30% | 900,000/2,000,000 = 45% |

There are multiple instances where the (1,50) and (2,67) rules are violated. Adjustments to the regression output will need to be applied before it can be cleared. The most common suggestion is to suppress the constant/intercept.

### Unit records

Print, list or other commands that produce unit record level data:

• Prohibited

## Request output clearance

To request output clearance:

1. Make sure you have applied the output clearance rules.
2. Move your output to the Output drive.
3.
Use the 'Request output clearance' link at the top of this page. If the Request output button does not generate an email, use the template below to submit your request.

Outputs generally take 2-3 business days to be cleared if all the rules have been followed. Outputs where the rules have been improperly applied will take longer. Large outputs will also take longer. To minimise clearance time, ensure that requests contain only necessary outputs and that the rules have been correctly applied.

To: [email protected]
Subject: Request DataLab output clearance

Dear DataLab team

I have saved my output to the Output drive for ABS review.

Project name:
Output file name(s):
Data file(s) used (e.g. BLADE1617_CORE):
Description of the original and self-constructed variables:
Description of the analysis:

Additional requirements are listed below:

• Weighted outputs: I have included the unweighted frequencies in my output.
• Graphs/charts: I have included the underlying numbers used to produce the graphs/charts.
• I have included any relevant code and log files.

## Request input clearance

If you have your own data, code or files that you would like to use in DataLab, they need to be approved before they can be loaded. This is known as input clearance. Examples of inputs include:

• data - aggregated data, tables, microdata and classifications
• code - user written code and packages
• other files - Word documents and PDFs

To request input clearance, use the 'Request input clearance' link at the top of this page. If the Request input clearance link does not generate an email, use the template below to submit your request. We aim to respond to your input clearance request within two to three business days. It is likely to take longer if your request is large, complex or needs clarification.

To: [email protected]
Subject: Request DataLab input file load

Dear DataLab team

I would like to load the attached file(s) to my DataLab project.

Project name:
File type (e.g.
code or data):
Description of each file:

Additional information required for each data file:

• organisation/individual owner of the data:
• source of the data (include website link if applicable):
• any terms of use or licensing that applies to the data that may restrict its use in the ABS DataLab and require additional permissions or conditions:
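The rule of 10 and the (1,50)/(2,67) dominance rules above lend themselves to a simple pre-check before requesting clearance. A minimal sketch (function names are illustrative, not part of any ABS tooling):

```python
def rule_of_10_ok(contributions):
    """At least 10 unweighted contributors per cell/statistic."""
    return len(contributions) >= 10

def dominance_ok(contributions, top1_limit=0.50, top2_limit=0.67):
    """(1,50) and (2,67) rules: the largest contributor must not exceed 50%
    of the cell total, and the two largest must not exceed 67%, using
    absolute values as described in the dominance rules above."""
    abs_vals = sorted((abs(c) for c in contributions), reverse=True)
    total = sum(abs_vals)
    if total == 0:
        return True
    return (abs_vals[0] / total <= top1_limit
            and sum(abs_vals[:2]) / total <= top2_limit)

# LGA 3 from the example table: top contributor 2.51 of a 3.22 total (78%).
# The individual split below is hypothetical but consistent with the table.
lga3 = [2.51, 0.52, 0.19]
print(dominance_ok(lga3))  # False: both the (1,50) and (2,67) rules fail
```

Running such a check over every cell (and every crosstab cell for regressions with categorical predictors) before submission can save a round trip with the output clearance team.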
2022-08-09T02:46:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26841381192207336, "perplexity": 3568.4683915961114}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.37/warc/CC-MAIN-20220809003642-20220809033642-00196.warc.gz"}
http://mathonline.wikidot.com/deleted:matrix-form-of-vectors-and-the-dot-product
# Matrix Form of Vectors and The Dot Product

So far we have looked at vectors in $\mathbb{R}^n$ in the form $\vec{u} = (u_1, u_2, ..., u_n)$; however, we can also write a vector as either a $1 \times n$ row matrix or an $n \times 1$ column matrix (omitting the vector arrows to denote a vector in matrix form):

(1)
\begin{align} u = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix} \quad , \quad u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \end{align}

Sometimes this notation is useful, as many vector operations look the same when the vectors are in matrix form. For example, consider the vector operation of addition, that is $\vec{u} + \vec{v} = (u_1 + v_1, u_2 + v_2, ..., u_n + v_n)$, in matrix form:

(2)
\begin{align} \quad \begin{bmatrix}u_1 \\ u_2 \\ \vdots \\ u_n\end{bmatrix} + \begin{bmatrix}v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix}u_1 + v_1\\ u_2 + v_2\\ \vdots \\ u_n + v_n \end{bmatrix} \end{align}

# The Dot Product in Matrix Form

Let $\vec{u}, \vec{v} \in \mathbb{R}^n$, and write $u = \begin{bmatrix}u_1 \\ u_2 \\ \vdots \\ u_n\end{bmatrix}$ and $v = \begin{bmatrix}v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$. The dot product $\vec{u} \cdot \vec{v}$ can be obtained by the following formula:

(3)
\begin{align} \begin{bmatrix}v_1 & v_2 & \cdots & v_n \end{bmatrix}_{1 \times n} \begin{bmatrix}u_1 \\ u_2 \\ \vdots \\ u_n\end{bmatrix}_{n \times 1} = \begin{bmatrix} u_1 v_1 + u_2 v_2 + \cdots + u_n v_n \end{bmatrix}_{1 \times 1} = \vec{u} \cdot \vec{v} \end{align}

Therefore we obtain that $\vec{u} \cdot \vec{v} = v^T u$.
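This identity is easy to verify numerically; here is a small pure-Python check (the helper names are ours, chosen for illustration):

```python
def dot(u, v):
    """Component-wise dot product u · v."""
    return sum(ui * vi for ui, vi in zip(u, v))

def row_times_col(row, col):
    """Multiply a 1×n row matrix by an n×1 column matrix, giving a 1×1 matrix."""
    return [[sum(r * c for r, c in zip(row, col))]]

u = [1.0, 2.0, 3.0]
v = [4.0, -1.0, 0.5]

# v^T u (a 1×1 matrix) matches the scalar u · v
assert row_times_col(v, u)[0][0] == dot(u, v)  # both equal 4 - 2 + 1.5 = 3.5
```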
Furthermore, consider an $n \times n$ matrix $A$ and the following equivalences:

(4)
\begin{align} Au \cdot v = v^T(Au) \\ Au \cdot v = (v^TA)u \\ Au \cdot v = (A^Tv)^Tu \\ Au \cdot v = u \cdot A^T v \end{align}

(5)
\begin{align} u \cdot Av = (Av)^T u \\ u \cdot Av = (v^TA^T)u \\ u \cdot Av = v^T(A^Tu) \\ u \cdot Av = A^Tu \cdot v \end{align}

We thereby get the formulas $Au \cdot v = u \cdot A^T v$ and $u \cdot Av = A^T u \cdot v$.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License
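The two formulas above can likewise be spot-checked numerically with a small pure-Python sketch (helper names ours):

```python
def dot(u, v):
    """Component-wise dot product u · v."""
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    """Compute A x for a matrix A stored as a list of rows."""
    return [dot(row, x) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 4.0],
     [5.0, 6.0, 1.0]]
u = [1.0, 2.0, 3.0]
v = [4.0, -1.0, 0.5]

# Au · v = u · A^T v  and  u · Av = A^T u · v
assert dot(matvec(A, u), v) == dot(u, matvec(transpose(A), v))
assert dot(u, matvec(A, v)) == dot(matvec(transpose(A), u), v)
```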
2017-08-17T19:27:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 782.8020060839468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103910.54/warc/CC-MAIN-20170817185948-20170817205948-00073.warc.gz"}
https://phys.libretexts.org/Bookshelves/College_Physics/Book%3A_College_Physics_(OpenStax)/26%3A_Vision_and_Optical_Instruments
# 26: Vision and Optical Instruments

It is through optics and imaging that physics enables advancement in major areas of the biosciences. This chapter illustrates the enabling nature of physics through an understanding of how a human eye is able to see and how we are able to use optical instruments to see beyond what is possible with the naked eye. It is convenient to categorize these instruments on the basis of geometric optics and wave optics.

• 26.0: Introduction to Vision and Optical Instruments
Intricate images help us understand nature and are invaluable for developing techniques and technologies in order to improve the quality of life. The image of a red blood cell that almost fills the cross-sectional area of a tiny capillary makes us wonder how blood makes it through and not get stuck. We are able to see bacteria and viruses and understand their structure.

• 26.1: Physics of the Eye
The eye is perhaps the most interesting of all optical instruments. The eye is remarkable in how it forms images and in the richness of detail and color it can detect. However, our eyes commonly need some correction, to reach what is called “normal” vision, but should be called ideal rather than normal. Image formation by our eyes and common vision correction are easy to analyze with geometric optics.

• 26.2: Vision Correction
The need for some type of vision correction is very common. Nearsightedness, or myopia, is the inability to see distant objects clearly while close objects are clear. The eye overconverges the nearly parallel rays from a distant object, and the rays cross in front of the retina. Farsightedness, or hyperopia, is the inability to see close objects clearly while distant objects may be clear. A farsighted eye does not converge sufficient rays from a close object to make the rays meet on the retina.

• 26.3: Color and Color Vision
The gift of vision is made richer by the existence of color.
Objects and lights abound with thousands of hues that stimulate our eyes, brains, and emotions. Two basic questions are addressed in this brief treatment -- what does color mean in scientific terms, and how do we, as humans, perceive it? • 26.4: Microscopes In this section we will examine microscopes, instruments for enlarging the detail that we cannot see with the unaided eye. The microscope is a multiple-element system having more than a single lens or mirror. A microscope can be made from two convex lenses. The image formed by the first element becomes the object for the second element. The second element forms its own image, which is the object for the third element, and so on. Ray tracing helps to visualize the image formed. • 26.5: Telescopes Telescopes are meant for viewing distant objects, producing an image that is larger than the image that can be seen with the unaided eye. Telescopes gather far more light than the eye, allowing dim objects to be observed with greater magnification and better resolution. • 26.6: Aberrations Real lenses behave somewhat differently from how they are modeled using the thin lens equations, producing aberrations. An aberration is a distortion in an image. There are a variety of aberrations due to a lens size, material, thickness, and position of the object. • 26.E: Vision and Optical Instruments (Exercise) Thumbnail: The human eye, showing the iris. (CC-BY-SA-2.5; "Petr Novák, Wikipedia").
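The vision-correction summary above is, at heart, a thin-lens calculation: a spectacle lens must image objects onto the range the eye can focus. A minimal sketch of that calculation, assuming illustrative far-point (2.0 m) and near-point (0.80 m) values that are not taken from the chapter:

```python
def lens_power(d_o, d_i):
    """Power in diopters from the thin-lens equation P = 1/f = 1/d_o + 1/d_i.

    Distances are in metres; a virtual image has a negative d_i.
    """
    return 1.0 / d_o + 1.0 / d_i

# Myopia: distant objects (d_o -> infinity) must be imaged as a virtual
# image at the assumed far point, 2.0 m in front of the lens (d_i = -2.0 m).
myopia_power = lens_power(float("inf"), -2.0)   # negative: diverging lens

# Hyperopia: an object at the standard 25 cm reading distance must be imaged
# at the assumed near point, 0.80 m in front of the lens (d_i = -0.80 m).
hyperopia_power = lens_power(0.25, -0.80)       # positive: converging lens

print(myopia_power, hyperopia_power)
```

For these assumed eyes the prescriptions come out to -0.5 D and +2.75 D, with the signs telling a diverging lens from a converging one.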
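The multiple-element idea described above (each element's image becomes the next element's object) can be chained numerically with the thin-lens equation. The focal lengths and distances below are assumptions chosen for round numbers, not values from the text:

```python
def image_distance(f, d_o):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def linear_magnification(d_o, d_i):
    """Lateral magnification of a single thin lens."""
    return -d_i / d_o

# Objective lens: f = 1.0 cm, object placed 1.1 cm away.
d_i1 = image_distance(1.0, 1.1)        # real, inverted image at ~11 cm
m1 = linear_magnification(1.1, d_i1)   # about -10x

# The objective's image is the eyepiece's object. With the lenses 13.0 cm
# apart, the eyepiece (f = 2.5 cm) sees that image 2.0 cm away.
d_o2 = 13.0 - d_i1
d_i2 = image_distance(2.5, d_o2)       # negative: virtual, magnified image
m2 = linear_magnification(d_o2, d_i2)  # about 5x

overall = m1 * m2                      # about -50: inverted, 50x magnified

# Telescope: angular magnification is -f_objective / f_eyepiece.
telescope_M = -100.0 / 2.5             # -40 for these assumed focal lengths
```

The product rule for magnifications is the whole trick: each stage is an independent thin-lens problem, linked only by the geometry that converts one stage's image distance into the next stage's object distance.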
https://zbmath.org/authors/?q=ai%3Aolkin.ingram
## Olkin, Ingram

Author ID: olkin.ingram
Published as: Olkin, Ingram; Olkin, I.
Homepage: http://www.srsm.org/ingram-olkin-memorial-page.html
External Links: MGP · Wikidata · dblp · GND · IdRef
Documents Indexed: 162 Publications since 1951, including 10 Books
9 Contributions as Editor · 2 Further Contributions
Biographic References: 2 Publications
Co-Authors: 90 Co-Authors with 145 Joint Publications · 2,239 Co-Co-Authors

### Co-Authors

25 single-authored 37 Marshall, Albert W. 9 Sobel, Milton 8 Gleser, Leon Jay 5 Anderson, Theodore Wilbur jun. 4 Brown, Lawrence David 4 Gibbons, Jean Dickinson 4 Kiefer, Jack Carl 4 Madow, William G. 4 Sacks, Jerome 4 Shepp, Lawrence Alan 4 Tong, Yung Liang 4 Wynn, Henry P. 3 Bush, Kenneth A. 3 Cottle, Richard W. 3 Eaton, Morris L. 3 Ghurye, Sudhish G. 3 Guttman, Irwin 3 Hedges, Larry V. 3 Viana, Marlos A. G. 2 Derman, Cyrus 2 Perlman, Michael D. 2 Pratt, John W. 2 Pukelsheim, Friedrich 2 Rachev, Svetlozar T. 2 Rubin, Herman 2 Sampson, Allan R. 2 Yitzhaki, Shlomo 1 Anderson, Blair M. 1 Arnold, Barry Charles 1 Beckenbach, Edwin Ford 1 Bowker, Albert H. 1 Bravata, Dawn M. 1 Cacoullos, Theophilos 1 Chatterjee, Samprit 1 Chen, Dayue 1 Churye, Sudhish G. 1 Das Gupta, Somesh 1 de Bruijn, Nicolaas Govert 1 Deemer, Walter L. jun. 1 Diaz, Joaquin Basilio 1 Drton, Mathias 1 Eaves, B. Curtis 1 Eisenhart, Churchill 1 Fan, Ky 1 Fang, Kai-Tai 1 Friedman, Avner 1 Gel, Yulia R. 1 Golbeck, Amanda L. 1 Goldberger, Arthur S. 1 Gosselin, Richard P. 1 Greenwood, J. Arthur 1 Hoeffding, Wassily 1 Horn, Roger Alan 1 Huh, Myung-Hoe 1 Kadane, Joseph Born 1 Karlin, Samuel 1 Katehakis, Michael N. 1 Katz, Leo 1 Kim, Dai Young 1 Kraft, Charles Hall 1 Kulinskaya, Elena 1 Littlewood, John Edensor 1 Liu, Ruixue 1 Luzar, Vesna 1 Mann, Henry Berthold 1 Marcus, Marvin D. 1 Marsaglia, George 1 Massam, Helene M. 1 Metcalf, Frederic T. 1 Meza, Juan C.
1 Motzkin, Theodore Samuel 1 Nanda, Harsh 1 Nisselson, Harold 1 Ogawa, Junjiro 1 Payne, Lawrence Edward 1 Petkau, Albert John 1 Philips, Robert 1 Pólya, George 1 Press, S. James 1 Proschan, Frank 1 Ross, Sheldon Mark 1 Roy, Samarendra Nath 1 Rubin, Donald Bruce 1 Saner, Hilary 1 Santner, Thomas J. 1 Saunders, Sam Cundiff 1 Savage, I. Richard 1 Savage, Leonard Jimmie 1 Scarsini, Marco 1 Schoenberg, Isaac Jacob 1 Selliah, J. B. 1 Shafer, Glenn Ray 1 Shisha, D. 1 Shisha, Oved 1 Shrikhande, Shartchandra Shankar 1 Siotani, Minoru 1 Spiegelman, Clifford H. 1 Stephens, Michael A. 1 Sylvan, M. 1 Tate, R. ...and 13 more Co-Authors

### Serials

13 Annals of Mathematical Statistics 12 Linear Algebra and its Applications 10 Biometrika 7 The Annals of Statistics 7 Journal of the American Statistical Association 7 Statistics & Probability Letters 6 Statistical Science 5 Journal of Multivariate Analysis 4 American Mathematical Monthly 4 Journal of Statistical Planning and Inference 3 Psychometrika 3 Duke Mathematical Journal 3 Journal of Applied Probability 3 Probability in the Engineering and Informational Sciences 2 Annals of the Institute of Statistical Mathematics 2 Biometrics 2 International Statistical Review 2 Numerische Mathematik 2 Pacific Journal of Mathematics 2 SIAM Journal on Scientific and Statistical Computing 2 Annals of Operations Research 2 Aequationes Mathematicae 2 Springer Series in Statistics 2 Springer Collected Works in Mathematics 1 Advances in Applied Probability 1 The American Statistician 1 Journal of Mathematical Analysis and Applications 1 Linear and Multilinear Algebra 1 The Annals of Probability 1 Archiv der Mathematik 1 British Journal of Mathematical & Statistical Psychology 1 Econometrica 1 Sankhyā. Series A.
Methods and Techniques 1 SIAM Journal on Control and Optimization 1 American Journal of Mathematical and Management Sciences 1 Social Choice and Welfare 1 SIAM Journal on Matrix Analysis and Applications 1 Journal of Global Optimization 1 Communications in Statistics. Simulation and Computation 1 Journal of the Royal Statistical Society. Series B 1 Proceedings of the National Academy of Sciences of the United States of America 1 Statistica Sinica 1 Statistical Modelling 1 Statistical Methods in Medical Research 1 Skandinavisk Aktuarietidskrift 1 Classics in Applied Mathematics 1 Institute of Mathematical Statistics Lecture Notes - Monograph Series 1 Mathematics in Science and Engineering

### Fields

97 Statistics (62-XX) 35 Probability theory and stochastic processes (60-XX) 27 Linear and multilinear algebra; matrix theory (15-XX) 18 History and biography (01-XX) 11 Real functions (26-XX) 8 Numerical analysis (65-XX) 5 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 4 Difference and functional equations (39-XX) 3 General and overarching topics; collections (00-XX) 3 Operations research, mathematical programming (90-XX) 2 Special functions (33-XX) 1 Abstract harmonic analysis (43-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Geophysics (86-XX) 1 Information and communication theory, circuits (94-XX)

### Citations contained in zbMATH Open

142 Publications have been cited 4,804 times in 4,386 Documents.

Inequalities: theory of majorization and its applications. Zbl 0437.26007 Marshall, Albert W.; Olkin, Ingram 1979 Inequalities: theory of majorization and its applications. 2nd edition. Zbl 1219.26003 Marshall, Albert W.; Olkin, Ingram; Arnold, Barry C. 2011 A multivariate exponential distribution. Zbl 0147.38106 Marshall, Albert W.; Olkin, Ingram 1967 A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families.
Zbl 0888.62012 Marshall, Albert W.; Olkin, Ingram 1997 Life distributions. Structure of nonparametric, semiparametric, and parametric families. Zbl 1304.62019 Marshall, Albert W.; Olkin, Ingram 2007 Families of multivariate distributions. Zbl 0683.62029 Marshall, Albert W.; Olkin, Ingram 1988 Statistical methods for meta-analysis. Zbl 0666.62002 Hedges, Larry V.; Olkin, Ingram 1985 Selecting and ordering populations: A new statistical methodology. Zbl 0464.62022 Gibbons, Jean Dickinson; Olkin, Ingram; Sobel, Milton 1977 Multivariate correlation models with mixed discrete and continuous variables. Zbl 0113.35101 Olkin, I.; Tate, R. 1961 Multivariate beta distributions and independence properties of the Wishart distribution. Zbl 0128.14002 Olkin, I.; Rubin, H. 1966 A bivariate beta distribution. Zbl 1116.60309 Olkin, Ingram; Liu, Ruixue 2003 A generalized bivariate exponential distribution. Zbl 0155.24001 Marshall, A. W.; Olkin, I. 1967 Scaling of matrices to achieve specified row and column sums. Zbl 0165.17401 Marshall, A. W.; Olkin, I. 1968 Matrix versions of the Cauchy and Kantorovich inequalities. Zbl 0706.15019 Marshall, A. W.; Olkin, I. 1990 Domains of attraction of multivariate extreme value distributions. Zbl 0508.60022 Marshall, Albert W.; Olkin, Ingram 1983 The distance between two random vectors with given dispersion matrices. Zbl 0527.60015 Olkin, I.; Pukelsheim, F. 1982 A family of bivariate distributions generated by the bivariate Bernoulli distribution. Zbl 0575.60023 Marshall, Albert W.; Olkin, Ingram 1985 Inequalities on the probability content of convex regions for elliptically contoured distributions. Zbl 0253.60021 Das Gupta, S.; Olkin, I.; Savage, L. J.; Eaton, M. L.; Perlman, M.; Sobel, M. 1972 Maximum-likelihood estimation of the parameters of a multivariate normal distribution. Zbl 0586.62074 Anderson, T. W.; Olkin, I. 1985 Unbiased estimation of certain correlation coefficients. Zbl 0094.14403 Olkin, Ingram; Pratt, John W.
1958 Testing and estimation for a circular stationary model. Zbl 0186.51801 Olkin, I.; Press, S. J. 1969 Multivariate ratio estimation for finite populations. Zbl 0088.12505 Olkin, Ingram 1958 Multivariate Chebyshev inequalities. Zbl 0244.60013 Marshall, Albert W.; Olkin, Ingram 1960 The Jacobians of certain matrix transformations useful in multivariate analysis. Zbl 0043.34204 Deemer, Walter L.; Olkin, Ingram 1951 A characterization of the Wishart distribution. Zbl 0111.34202 Olkin, I.; Rubin, H. 1962 Maximum likelihood estimators and likelihood ratio criteria in multivariate components of variance. Zbl 0631.62063 Anderson, Blair M.; Anderson, T. W.; Olkin, Ingram 1986 Majorization in multivariate distributions. Zbl 0292.62037 Marshall, Albert W.; Olkin, Ingram 1974 A comparison of n estimators for the binomial distribution. Zbl 0472.62037 Olkin, Ingram; Petkau, A. John; Zidek, James V. 1981 A multivariate Tchebycheff inequality. Zbl 0085.35204 Olkin, Ingram; Pratt, John W. 1958 A semiparametric approach to density estimation. Zbl 0632.62027 Olkin, Ingram; Spiegelman, Clifford H. 1987 A characterization of the multivariate normal distribution. Zbl 0109.13401 Ghurye, S. G.; Olkin, I. 1962 Correlation analysis of extreme observations from a multivariate normal distribution. Zbl 0868.62050 Olkin, Ingram; Viana, Marlos 1995 Estimating covariances in a multivariate normal distribution. Zbl 0432.62032 Olkin, I.; Selliah, J. B. 1977 Unbiasedness of invariant tests for MANOVA and other multivariate problems. Zbl 0465.62046 Perlman, Michael D.; Olkin, Ingram 1980 Best equivariant estimators of a Cholesky decomposition. Zbl 0629.62057 Eaton, Morris L.; Olkin, Ingram 1987 Generation of random orthogonal matrices. Zbl 0637.65004 Anderson, T. W.; Olkin, I.; Underhill, L. G. 1987 Generating correlation matrices. Zbl 0552.65006 Marsaglia, George; Olkin, Ingram 1984 Gini regression analysis.
Zbl 0755.62053 Olkin, Ingram; Yitzhaki, Shlomo 1992 Reversal of the Lyapunov, Hölder, and Minkowski inequalities and other extensions of the Kantorovich inequality. Zbl 0136.04201 Marshall, A. W.; Olkin, I. 1964 Unbiased estimation of some multivariate probability densities and related functions. Zbl 0202.17103 Ghurye, S. G.; Olkin, I. 1969 Moments of minors of Wishart matrices. Zbl 1152.62343 Drton, Mathias; Massam, Hélène; Olkin, Ingram 2008 Multivariate distributions generated from mixtures of convolution and product families. Zbl 0769.62037 Marshall, Albert W.; Olkin, Ingram 1990 Concentration indices and concentration curves. Zbl 0755.90016 Yitzhaki, Shlomo; Olkin, Ingram 1991 On multivariate distribution theory. Zbl 0055.37304 Olkin, I.; Roy, S. N. 1954 Integral expressions for tail probabilities of the multinomial and negative multinomial distributions. Zbl 0129.11702 Olkin, I.; Sobel, M. 1965 Selecting and ordering populations. A new statistical methodology. Repr. of the 1977 ed. Zbl 0923.62028 Gibbons, Jean Dickinson; Olkin, Ingram; Sobel, Milton 1999 Peakedness in multivariate distributions. Zbl 0678.62057 Olkin, I.; Tong, Y. L. 1988 Multivariate exponential and geometric distributions with limited memory. Zbl 0877.62049 Marshall, Albert W.; Olkin, Ingram 1995 When does $$A^*A=B^*B$$ and why does one want to know? Zbl 0861.15011 Horn, Roger A.; Olkin, Ingram 1996 A determinantal proof of the Craig-Sakamoto theorem. Zbl 0887.15009 Olkin, Ingram 1997 The 70th anniversary of the distribution of random matrices: A survey. Zbl 1018.15022 Olkin, Ingram 2002 Norms and inequalities for condition numbers. Zbl 0166.30001 Marshall, A. W.; Olkin, I. 1965 A majorization comparison of apportionment methods in proportional representation. Zbl 1072.91533 Marshall, Albert W.; Olkin, Ingram; Pukelsheim, Friedrich 2002 Entropy of the sum of independent Bernoulli random variables and of the multinomial distribution. Zbl 0534.60020 Shepp, L. A.; Olkin, I.
1981 Inequalities: theory of majorization and its applications. (Neravenstva: teoriya mazhorizatsii i ee prilozheniya). Transl. from the English. Zbl 0544.26008 Marshall, Albert W.; Olkin, Ingram 1983 Maximum submatrix traces for positive definite matrices. Zbl 0772.15009 Olkin, I.; Rachev, S. T. 1993 Probability models and applications. Zbl 0428.60001 Olkin, Ingram; Gleser, Leon J.; Derman, Cyrus 1980 Jacobians of matrix transformations and induced functional equations. Zbl 0241.15003 Olkin, Ingram; Sampson, Allan R. 1972 A class of integral identities with matrix argument. Zbl 0086.04204 Olkin, Ingram 1959 A tale of two countries: The Craig-Sakamoto-Matusita theorem. Zbl 1152.15017 Ogawa, Junjiro; Olkin, Ingram 2008 Bivariate life distributions from Pólya’s urn model for contagion. Zbl 0786.60015 Marshall, Albert W.; Olkin, Ingram 1993 A matrix variance inequality. Zbl 1058.60013 Olkin, Ingram; Shepp, Larry 2005 Asymptotic distribution of functions of a correlation matrix. Zbl 0369.62056 Olkin, Ingram; Siotani, Minoru 1976 Admissible and minimax estimation for the multinomial distribution and for k independent binomial distributions. Zbl 0407.62005 Olkin, Ingram; Sobel, Milton 1979 Note on “The Jacobians of certain matrix transformations” useful in multivariate analysis. Zbl 0052.14904 Olkin, Ingram 1953 A convexity proof of Hadamard’s inequality. Zbl 0526.15011 Marshall, A. W.; Olkin, I. 1982 Mass transportation problems with capacity constraints. Zbl 0946.60009 Rachev, S. T.; Olkin, I. 1999 Symmetrically dependent models arising in visual assessment data. Zbl 1060.62675 Viana, Marlos; Olkin, Ingram 2000 Inequalities. Proceedings of a symposium held at Wright-Patterson Air Force Base, Ohio, August 19–27, 1965. Zbl 0178.00102 1967 Monotonicity properties of Dirichlet integrals with applications to the multinomial distribution and the analysis of variance. Zbl 0241.62037 Olkin, Ingram 1972 A bivariate Gompertz-Makeham life distribution. 
Zbl 1320.62027 Marshall, Albert W.; Olkin, Ingram 2015 Incomplete data in sample surveys. Volume 2: Theory and bibliographies. Zbl 0561.62008 1983 Constructions for a bivariate beta distribution. Zbl 1314.62043 Olkin, Ingram; Trikalinos, Thomas A. 2015 Comparison of meta-analysis versus analysis of variance of individual patient data. Zbl 1058.62641 Olkin, Ingram; Sampson, Allan 1998 On the bias of functions of characteristic roots of a random matrix. Zbl 0132.13206 Cacoullos, T.; Olkin, I. 1965 Estimation for a regression model with an unknown covariance matrix. Zbl 0232.62033 Gleser, Leon Jay; Olkin, Ingram 1972 A numerical procedure for finding the positive definite matrix closest to a patterned matrix. Zbl 0850.62486 Hu, H.; Olkin, I. 1991 Testing and estimation for structures which are circularly symmetric in blocks. Zbl 0261.62055 Olkin, Ingram 1973 Estimating a Cholesky decomposition. Zbl 0567.62041 Olkin, Ingram 1985 Statistical inference for constants of proportionality. Zbl 0597.62048 Guttman, Irwin; Kim, Dai Young; Olkin, Ingram 1985 Combining correlated unbiased estimators of the mean of a normal distribution. Zbl 1268.62052 Keller, Timothy; Olkin, Ingram 2004 A new class of multivariate tests based on the union-intersection principle. Zbl 0471.62048 Olkin, Ingram; Tomsky, Jack L. 1981 Range restrictions for product-moment correlation matrices. Zbl 0479.62047 Olkin, Ingram 1981 Maximum likelihood characterizations of distributions. Zbl 0826.60011 Marshall, Albert W.; Olkin, Ingram 1993 Extrema of quadratic forms with applications to statistics. Zbl 0090.36601 Bush, K. A.; Olkin, I. 1959 An inequality satisfied by the gamma function. Zbl 0097.35105 Olkin, Ingram 1959 Testing for equality of means, equality of variances, and equality of covariances under restrictions upon the parameter space. Zbl 0181.45602 Gleser, L. J.; Olkin, I. 1969 A minimum-distance interpretation of limited-information estimation. 
Zbl 0224.62051 Goldberger, Arthur S.; Olkin, Ingram 1971 Approximations for trimmed Fisher procedures in research synthesis. Zbl 1121.62648 Olkin, Ingram; Saner, Hilary 2001 Bivariate distributions generated from Pólya-Eggenberger urn models. Zbl 0711.62043 Marshall, Albert W.; Olkin, Ingram 1990 Functional equations for multivariate exponential distributions. Zbl 0739.62038 Marshall, Albert W.; Olkin, Ingram 1991 Bounds for a k-fold integral for location and scale parameter models with applications to statistical ranking and selection problems. Zbl 0581.62026 Olkin, I.; Sobel, Milton; Tong, Y. L. 1982 Inequalities for the trace function. Zbl 0585.15003 Marshall, Albert W.; Olkin, Ingram 1985 Characterizations of distributions through coincidences of semiparametric families. Zbl 1146.62008 Marshall, Albert W.; Olkin, Ingram 2007 An inequality for a sum of forms. Zbl 0515.15007 Olkin, Ingram 1983 Adjusting p values to account for selection over dichotomies. Zbl 0534.62076 Shafer, Glenn; Olkin, Ingram 1983 Positive dependence of a class of multivariate exponential distributions. Zbl 0796.62085 Olkin, Ingram; Tong, Y. L. 1994 Contributions to probability and statistics. Essays in honor of Harold Hotelling. Zbl 0094.31604 1960 Corrigenda. Biometrika (1959), 46, pp. 483-486. ”Extrema of quadratic forms with applications to statistics”. Zbl 0102.14802 Bush, K. A.; Olkin, I. 1961 Game theoretic proof that Chebyshev inequalities are sharp. Zbl 0118.13703 Marshall, A. W.; Olkin, I. 1961 Probability models and applications. 2nd revised edition. Zbl 06964426 Olkin, Ingram; Gleser, Leon J.; Derman, Cyrus 2020 Life distributions: a brief discussion. Zbl 1347.62219 Olkin, Ingram 2016 Collected papers. Supplementary volume. Edited by Lawrence D. Brown, Ingram Olkin, Jerome Sacks and Henry P. Wynn. Reprint of the 1986 edition. Zbl 1369.01039 Kiefer, Jack Carl 2016 A bivariate Gompertz-Makeham life distribution. 
Zbl 1320.62027 Marshall, Albert W.; Olkin, Ingram 2015 Constructions for a bivariate beta distribution. Zbl 1314.62043 Olkin, Ingram; Trikalinos, Thomas A. 2015 An overdispersion model in meta-analysis. Zbl 07257895 Kulinskaya, Elena; Olkin, Ingram 2014 A determinantal inequality for correlation matrices. Zbl 1369.62128 Olkin, Ingram 2014 On the life and work of Cyrus Derman. Zbl 1274.90007 Katehakis, Michael N.; Olkin, Ingram; Ross, Sheldon M.; Yang, Jian 2013 An inequality that subsumes the inequalities of Radon, Bohr, and Shannon. Zbl 1370.26052 Olkin, Ingram; Shepp, Larry 2013 Pao-Lu Hsu (Xu, Bao-Lu): the grandparent of probability and statistics in China. Zbl 1331.62014 Chen, Dayue; Olkin, Ingram 2012 Inequalities: theory of majorization and its applications. 2nd edition. Zbl 1219.26003 Marshall, Albert W.; Olkin, Ingram; Arnold, Barry C. 2011 Schur-convexity, gamma functions, and moments. Zbl 1266.26027 Marshall, Albert W.; Olkin, Ingram 2009 Moments of minors of Wishart matrices. Zbl 1152.62343 Drton, Mathias; Massam, Hélène; Olkin, Ingram 2008 A tale of two countries: The Craig-Sakamoto-Matusita theorem. Zbl 1152.15017 Ogawa, Junjiro; Olkin, Ingram 2008 Life distributions. Structure of nonparametric, semiparametric, and parametric families. Zbl 1304.62019 Marshall, Albert W.; Olkin, Ingram 2007 Characterizations of distributions through coincidences of semiparametric families. Zbl 1146.62008 Marshall, Albert W.; Olkin, Ingram 2007 Several colorful inequalities. Zbl 1153.60012 Olkin, Ingram; Shepp, Larry 2006 Nonparametric estimation for quadratic regression. Zbl 1090.62038 Chatterjee, Samprit; Olkin, Ingram 2006 A matrix variance inequality. Zbl 1058.60013 Olkin, Ingram; Shepp, Larry 2005 Combining correlated unbiased estimators of the mean of a normal distribution. Zbl 1268.62052 Keller, Timothy; Olkin, Ingram 2004 A bivariate beta distribution. 
Zbl 1116.60309 Olkin, Ingram; Liu, Ruixue 2003 The 70th anniversary of the distribution of random matrices: A survey. Zbl 1018.15022 Olkin, Ingram 2002 A majorization comparison of apportionment methods in proportional representation. Zbl 1072.91533 Marshall, Albert W.; Olkin, Ingram; Pukelsheim, Friedrich 2002 Approximations for trimmed Fisher procedures in research synthesis. Zbl 1121.62648 Olkin, Ingram; Saner, Hilary 2001 Symmetrically dependent models arising in visual assessment data. Zbl 1060.62675 Viana, Marlos; Olkin, Ingram 2000 Selecting and ordering populations. A new statistical methodology. Repr. of the 1977 ed. Zbl 0923.62028 Gibbons, Jean Dickinson; Olkin, Ingram; Sobel, Milton 1999 Mass transportation problems with capacity constraints. Zbl 0946.60009 Rachev, S. T.; Olkin, I. 1999 Comparison of meta-analysis versus analysis of variance of individual patient data. Zbl 1058.62641 Olkin, Ingram; Sampson, Allan 1998 The density of the inverse and pseudo-inverse of a random matrix. Zbl 1246.62134 Olkin, Ingram 1998 A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Zbl 0888.62012 Marshall, Albert W.; Olkin, Ingram 1997 A determinantal proof of the Craig-Sakamoto theorem. Zbl 0887.15009 Olkin, Ingram 1997 Correlation analysis of ordered observations from a block-equicorrelated multivariate normal distribution. Zbl 0903.62055 Viana, Marlos; Olkin, Ingram 1997 When does $$A^*A=B^*B$$ and why does one want to know? Zbl 0861.15011 Horn, Roger A.; Olkin, Ingram 1996 Correlation analysis of extreme observations from a multivariate normal distribution. Zbl 0868.62050 Olkin, Ingram; Viana, Marlos 1995 Multivariate exponential and geometric distributions with limited memory. Zbl 0877.62049 Marshall, Albert W.; Olkin, Ingram 1995 Asymptotic aspects of ordinary ridge regression. 
Zbl 0855.62058 Huh, Myung-Hoe; Olkin, Ingram 1995 Positive dependence of a class of multivariate exponential distributions. Zbl 0796.62085 Olkin, Ingram; Tong, Y. L. 1994 Maximum submatrix traces for positive definite matrices. Zbl 0772.15009 Olkin, I.; Rachev, S. T. 1993 Bivariate life distributions from Pólya’s urn model for contagion. Zbl 0786.60015 Marshall, Albert W.; Olkin, Ingram 1993 Maximum likelihood characterizations of distributions. Zbl 0826.60011 Marshall, Albert W.; Olkin, Ingram 1993 On making the shortlist for the selection of candidates. Zbl 0826.62017 Olkin, Ingram; Stephens, Michael A. 1993 Gini regression analysis. Zbl 0755.62053 Olkin, Ingram; Yitzhaki, Shlomo 1992 Concentration indices and concentration curves. Zbl 0755.90016 Yitzhaki, Shlomo; Olkin, Ingram 1991 A numerical procedure for finding the positive definite matrix closest to a patterned matrix. Zbl 0850.62486 Hu, H.; Olkin, I. 1991 Functional equations for multivariate exponential distributions. Zbl 0739.62038 Marshall, Albert W.; Olkin, Ingram 1991 A conversation with W. Allen Wallis. Zbl 0955.01528 Olkin, Ingram 1991 Matrix versions of the Cauchy and Kantorovich inequalities. Zbl 0706.15019 Marshall, A. W.; Olkin, I. 1990 Multivariate distributions generated from mixtures of convolution and product families. Zbl 0769.62037 Marshall, Albert W.; Olkin, Ingram 1990 Bivariate distributions generated from Pólya-Eggenberger urn models. Zbl 0711.62043 Marshall, Albert W.; Olkin, Ingram 1990 Inequalities for predictive ratios and posterior variances in natural exponential families. Zbl 0711.62013 Kadane, Joseph B.; Olkin, Ingram; Scarsini, Marco 1990 Interface between statistics and linear algebra. Zbl 0712.62043 Olkin, Ingram 1990 A conversation with Maurice Bartlett. Zbl 0955.01521 Olkin, Ingram 1989 Families of multivariate distributions. Zbl 0683.62029 Marshall, Albert W.; Olkin, Ingram 1988 Peakedness in multivariate distributions. Zbl 0678.62057 Olkin, I.; Tong, Y. L. 
1988 A semiparametric approach to density estimation. Zbl 0632.62027 Olkin, Ingram; Spiegelman, Clifford H. 1987 Best equivariant estimators of a Cholesky decomposition. Zbl 0629.62057 Eaton, Morris L.; Olkin, Ingram 1987 Generation of random orthogonal matrices. Zbl 0637.65004 Anderson, T. W.; Olkin, I.; Underhill, L. G. 1987 A conversation with Albert H. Bowker. Zbl 0955.01519 Olkin, Ingram 1987 Maximum likelihood estimators and likelihood ratio criteria in multivariate components of variance. Zbl 0631.62063 Anderson, Blair M.; Anderson, T. W.; Olkin, Ingram 1986 Statistical methods for meta-analysis. Zbl 0666.62002 Hedges, Larry V.; Olkin, Ingram 1985 A family of bivariate distributions generated by the bivariate Bernoulli distribution. Zbl 0575.60023 Marshall, Albert W.; Olkin, Ingram 1985 Maximum-likelihood estimation of the parameters of a multivariate normal distribution. Zbl 0586.62074 Anderson, T. W.; Olkin, I. 1985 Estimating a Cholesky decomposition. Zbl 0567.62041 Olkin, Ingram 1985 Statistical inference for constants of proportionality. Zbl 0597.62048 Guttman, Irwin; Kim, Dai Young; Olkin, Ingram 1985 Inequalities for the trace function. Zbl 0585.15003 Marshall, Albert W.; Olkin, Ingram 1985 Collected papers III: Design of experiments. Publ. with the co-operation of the Institute of Mathematical Statistics and ed. by Lawrence D. Brown, Ingram Olkin, Jerome Sacks, Henry P. Wynn. Zbl 0586.62002 Kiefer, Jack Carl 1985 Generating correlation matrices. Zbl 0552.65006 Marsaglia, George; Olkin, Ingram 1984 Domains of attraction of multivariate extreme value distributions. Zbl 0508.60022 Marshall, Albert W.; Olkin, Ingram 1983 Inequalities: theory of majorization and its applications. (Neravenstva: teoriya mazhorizatsii i ee prilozheniya). Transl. from the English. Zbl 0544.26008 Marshall, Albert W.; Olkin, Ingram 1983 Incomplete data in sample surveys. Volume 2: Theory and bibliographies. Zbl 0561.62008 1983 An inequality for a sum of forms.
Zbl 0515.15007 Olkin, Ingram 1983 Adjusting p values to account for selection over dichotomies. Zbl 0534.62076 Shafer, Glenn; Olkin, Ingram 1983 Incomplete data in sample surveys. Volume 1: Report and case studies. Zbl 0561.62007 1983 Incomplete data in sample surveys. Volume 3: Proceedings of the Symposium (held in Washington, D. C., on August 10-11, 1979). Zbl 0561.62009 1983 The distance between two random vectors with given dispersion matrices. Zbl 0527.60015 Olkin, I.; Pukelsheim, F. 1982 A convexity proof of Hadamard’s inequality. Zbl 0526.15011 Marshall, A. W.; Olkin, I. 1982 Bounds for a k-fold integral for location and scale parameter models with applications to statistical ranking and selection problems. Zbl 0581.62026 Olkin, I.; Sobel, Milton; Tong, Y. L. 1982 A comparison of n estimators for the binomial distribution. Zbl 0472.62037 Olkin, Ingram; Petkau, A. John; Zidek, James V. 1981 Entropy of the sum of independent Bernoulli random variables and of the multinomial distribution. Zbl 0534.60020 Shepp, L. A.; Olkin, I. 1981 A new class of multivariate tests based on the union-intersection principle. Zbl 0471.62048 Olkin, Ingram; Tomsky, Jack L. 1981 Range restrictions for product-moment correlation matrices. Zbl 0479.62047 Olkin, Ingram 1981 Maximum likelihood estimation in a two-way analysis of variance with correlated errors in one classification. Zbl 0484.62077 Olkin, Ingram; Vaeth, Michael 1981 The asymptotic distribution of commonality components. Zbl 0482.62007 Hedges, Larry V.; Olkin, Ingram 1981 Unbiasedness of invariant tests for MANOVA and other multivariate problems. Zbl 0465.62046 Perlman, Michael D.; Olkin, Ingram 1980 Probability models and applications. Zbl 0428.60001 Olkin, Ingram; Gleser, Leon J.; Derman, Cyrus 1980 Inequalities: theory of majorization and its applications.
Zbl 0437.26007 Marshall, Albert W.; Olkin, Ingram 1979 Admissible and minimax estimation for the multinomial distribution and for k independent binomial distributions. Zbl 0407.62005 Olkin, Ingram; Sobel, Milton 1979 Matrix extensions of Liouville-Dirichlet type integrals. Zbl 0425.15006 Olkin, Ingram 1979 A subset selection technique for scoring items on a multiple choice test. Zbl 0416.62077 Gibbons, Jean D.; Olkin, Ingram; Sobel, Milton 1979 An extremal problem for positive definite matrices. Zbl 0416.15010 Anderson, T. W.; Olkin, I. 1978 Selecting and ordering populations: A new statistical methodology. Zbl 0464.62022 Gibbons, Jean Dickinson; Olkin, Ingram; Sobel, Milton 1977 Estimating covariances in a multivariate normal distribution. Zbl 0432.62032 Olkin, I.; Selliah, J. B. 1977 Correlational analysis when some variances and covariances are known. Zbl 0364.62053 Olkin, I.; Sylvan, M. 1977 Asymptotic distribution of functions of a correlation matrix. Zbl 0369.62056 Olkin, Ingram; Siotani, Minoru 1976 A note on Box’s general method of approximation for the null distributions of likelihood criteria. Zbl 0369.62023 Gleser, Leon Jay; Olkin, Ingram 1975 Majorization in multivariate distributions. Zbl 0292.62037 Marshall, Albert W.; Olkin, Ingram 1974 Testing and estimation for structures which are circularly symmetric in blocks. Zbl 0261.62055 Olkin, Ingram 1973 Norms and inequalities for condition numbers. III. Zbl 0351.15015 Marshall, Albert W.; Olkin, Ingram 1973 Multivariate statistical inference under marginal structure. Zbl 0272.62013 Gleser, Leon Jay; Olkin, Ingram 1973 Identically distributed linear forms and the normal distribution. Zbl 0258.60017 Ghurye, S. G.; Olkin, I. 1973 ...and 42 more Documents

### Cited in 521 Serials

393 Linear Algebra and its Applications 294 Communications in Statistics.
Theory and Methods 228 Journal of Multivariate Analysis 208 Statistics & Probability Letters 196 Journal of Statistical Planning and Inference 86 Linear and Multilinear Algebra 72 Journal of Statistical Computation and Simulation 71 Annals of the Institute of Statistical Mathematics 70 Insurance Mathematics & Economics 68 Computational Statistics and Data Analysis 67 Journal of Applied Statistics 64 Communications in Statistics. Simulation and Computation 57 Statistics 54 Statistical Papers 53 Journal of Mathematical Analysis and Applications 48 Probability in the Engineering and Informational Sciences 46 Metrika 45 Journal of Inequalities and Applications 44 European Journal of Operational Research 43 Journal of Applied Probability 41 The Annals of Statistics 34 Journal of Computational and Applied Mathematics 32 American Journal of Mathematical and Management Sciences 32 Methodology and Computing in Applied Probability 31 Psychometrika 30 Biometrics 29 Journal of Economic Theory 28 The Canadian Journal of Statistics 28 Statistical Science 25 Sequential Analysis 24 Journal of Statistical Theory and Practice 23 Discrete Mathematics 22 Discrete Applied Mathematics 22 Mathematical Social Sciences 22 Annals of Operations Research 21 Applied Mathematics and Computation 21 Operations Research Letters 21 Social Choice and Welfare 21 Test 20 Journal of Econometrics 20 Bernoulli 18 Biometrical Journal 18 Journal of Mathematical Economics 18 Brazilian Journal of Probability and Statistics 18 Sankhyā. 
Series B 17 Metron 17 Computational Statistics 17 Aequationes Mathematicae 17 Scandinavian Actuarial Journal 16 Proceedings of the American Mathematical Society 16 Stochastic Processes and their Applications 16 Journal of Mathematical Sciences (New York) 15 Journal of Functional Analysis 15 Journal of Theoretical Probability 15 Filomat 15 Dependence Modeling 15 Journal of Statistical Distributions and Applications 14 Extremes 13 Advances in Applied Probability 13 Lifetime Data Analysis 13 Electronic Journal of Statistics 12 Statistical Methods and Applications 11 Mathematical Biosciences 11 Transactions of the American Mathematical Society 11 Economics Letters 11 The Annals of Applied Probability 11 Journal of the Korean Statistical Society 10 Computers & Mathematics with Applications 10 Journal of Mathematical Physics 10 The Annals of Probability 10 Information Sciences 10 Probability Theory and Related Fields 10 Queueing Systems 10 Economic Theory 10 Journal of Nonparametric Statistics 10 Statistical Methodology 9 Physica A 9 Rocky Mountain Journal of Mathematics 9 Automatica 9 Fuzzy Sets and Systems 9 Journal of Mathematical Psychology 9 Naval Research Logistics 9 Mathematical Programming. Series A. Series B 9 Stochastic Models 9 ASTIN Bulletin 9 The Annals of Applied Statistics 8 Statistica 8 ELA. 
The Electronic Journal of Linear Algebra 8 Journal of Probability and Statistics 7 International Journal of Control 7 Scandinavian Journal of Statistics 7 Kybernetika 7 Theory and Decision 7 Economic Quality Control 7 Mathematical Methods of Statistics 7 Mathematical Problems in Engineering 7 Mathematical Inequalities & Applications 7 Applied Stochastic Models in Business and Industry 7 Quantitative Finance 7 Mediterranean Journal of Mathematics ...and 421 more Serials all top 5 ### Cited in 57 Fields 2,494 Statistics (62-XX) 1,125 Probability theory and stochastic processes (60-XX) 640 Linear and multilinear algebra; matrix theory (15-XX) 401 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 316 Operations research, mathematical programming (90-XX) 281 Numerical analysis (65-XX) 229 Real functions (26-XX) 190 Combinatorics (05-XX) 149 Operator theory (47-XX) 74 Information and communication theory, circuits (94-XX) 70 Functional analysis (46-XX) 66 Computer science (68-XX) 65 Special functions (33-XX) 61 Quantum theory (81-XX) 61 Systems theory; control (93-XX) 60 Biology and other natural sciences (92-XX) 54 Convex and discrete geometry (52-XX) 43 Order, lattices, ordered algebraic structures (06-XX) 38 Difference and functional equations (39-XX) 31 Calculus of variations and optimal control; optimization (49-XX) 30 Functions of a complex variable (30-XX) 27 History and biography (01-XX) 24 Dynamical systems and ergodic theory (37-XX) 22 Measure and integration (28-XX) 20 Number theory (11-XX) 20 Ordinary differential equations (34-XX) 17 Group theory and generalizations (20-XX) 15 Partial differential equations (35-XX) 15 Approximations and expansions (41-XX) 15 Differential geometry (53-XX) 14 Statistical mechanics, structure of matter (82-XX) 13 Harmonic analysis on Euclidean spaces (42-XX) 11 Nonassociative rings and algebras (17-XX) 11 Integral transforms, operational calculus (44-XX) 10 Topological groups, Lie groups (22-XX) 
9 Abstract harmonic analysis (43-XX) 9 Geometry (51-XX) 9 Global analysis, analysis on manifolds (58-XX) 8 Mechanics of deformable solids (74-XX) 5 General and overarching topics; collections (00-XX) 5 Several complex variables and analytic spaces (32-XX) 5 Geophysics (86-XX) 4 Mathematical logic and foundations (03-XX) 4 Field theory and polynomials (12-XX) 4 Algebraic geometry (14-XX) 4 Mechanics of particles and systems (70-XX) 4 Mathematics education (97-XX) 3 Potential theory (31-XX) 3 Integral equations (45-XX) 3 General topology (54-XX) 3 Fluid mechanics (76-XX) 2 Associative rings and algebras (16-XX) 2 Classical thermodynamics, heat transfer (80-XX) 1 Sequences, series, summability (40-XX) 1 Manifolds and cell complexes (57-XX) 1 Optics, electromagnetic theory (78-XX) 1 Relativity and gravitational theory (83-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
https://itl.nist.gov/div898/software/dataplot/refman1/auxillar/coefvate.htm
# COEFFICIENT OF VARIATION TEST

Name: COEFFICIENT OF VARIATION TEST

Type: Analysis Command

Purpose: Perform either a one sample coefficient of variation test or a two sample coefficient of variation test.

Description: The coefficient of variation is defined as the ratio of the standard deviation to the mean

$$\mbox{cv} = \frac{\sigma}{\mu}$$

where σ and μ denote the population standard deviation and population mean, respectively. The sample coefficient of variation is defined as

$$\mbox{cv} = \frac{s}{\bar{x}}$$

where s and $$\bar{x}$$ denote the sample standard deviation and sample mean, respectively.

The coefficient of variation should typically only be used for ratio data. That is, the data should be continuous and have a meaningful zero. Although the coefficient of variation statistic can be computed for data that are not on a ratio scale, the interpretation of the coefficient of variation may not be meaningful.

Currently, this command is only supported for non-negative data. If the response variable contains one or more negative numbers, an error message is returned.

The one sample coefficient of variation test checks whether the coefficient of variation is equal to a given value. Note that this can be for either a single sample or for the common coefficient of variation of multiple groups of data (it is assumed the groups have equal population coefficients of variation).

H0: γ = γ0
Ha: γ ≠ γ0

The test statistic is

$$\sum_{i=1}^{k}{\frac{(n_{i} - 1) u_{i}}{\theta_{0}}}$$

where

k = the number of groups
$$u_{i} = \frac{c_{i}^{2}}{1 + c_{i}^{2} (n_{i} - 1)/n_{i}}$$
ci = coefficient of variation for the i-th group
ni = sample size for the i-th group
$$\theta_{0} = \frac{\gamma_{0}^{2}}{1 + \gamma_{0}^{2}}$$

where γ is the common coefficient of variation and γ0 is the hypothesized value. This statistic is compared to a chi-square distribution with $$\sum_{i=1}^{k}{(n_{i} - 1)}$$ degrees of freedom. The most common usage is the case of a single group (i.e., k = 1).
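As a concrete illustration of the one-sample statistic, the following Python sketch computes it from summary statistics. This is not Dataplot's own implementation, and the mean, standard deviation, sample size, and gamma0 values are hypothetical; it uses the McKay-type quantity u_i = c_i^2/(1 + c_i^2(n_i - 1)/n_i), the same form that appears in the two-sample statistic.

```python
def forkman_one_sample(means, sds, ns, gamma0):
    """Forkman one-sample test statistic for a common CV across k groups.

    Returns (statistic, degrees of freedom); the statistic is compared
    to a chi-square distribution with sum(n_i - 1) degrees of freedom.
    """
    theta0 = gamma0**2 / (1.0 + gamma0**2)
    stat = 0.0
    for m, s, n in zip(means, sds, ns):
        c = s / m                              # sample CV of group i
        u = c**2 / (1.0 + c**2 * (n - 1) / n)  # McKay-type u_i
        stat += (n - 1) * u / theta0
    return stat, sum(n - 1 for n in ns)

# Single group (k = 1): 25 hypothetical observations with mean 10.0 and
# standard deviation 1.2, testing H0: gamma = 0.10.
stat, df = forkman_one_sample([10.0], [1.2], [25], gamma0=0.10)
print(round(stat, 2), df)   # 34.43 24
```

The statistic (about 34.43 on 24 degrees of freedom) would then be compared to the chi-square critical values, just as in the Dataplot output tables shown later on this page.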
The two sample coefficient of variation test checks whether two distinct samples have equal, but unspecified, coefficients of variation. As with the single sample case, each of the two samples can consist of either a single group or multiple groups of data.

H0: γ1 = γ2
Ha: γ1 ≠ γ2

The test statistic is

$$F = \frac{\mbox{NUM}}{\mbox{DENOM}}$$

where

$$\mbox{NUM} = \frac{\sum_{i=1}^{k_{1}}{(n_{1i} - 1) u_{1i}}}{\sum_{i=1}^{k_{1}}{(n_{1i} - 1)}}$$

$$\mbox{DENOM} = \frac{\sum_{i=1}^{k_{2}}{(n_{2i} - 1) u_{2i}}}{\sum_{i=1}^{k_{2}}{(n_{2i} - 1)}}$$

and

k1 = the number of groups for sample one
k2 = the number of groups for sample two
$$u_{ri} = \frac{c_{ri}^{2}}{1 + c_{ri}^{2}(n_{ri} - 1)/n_{ri}}$$
$$c_{ri}$$ = coefficient of variation for the i-th group of the r-th sample
$$n_{ri}$$ = the sample size for the i-th group of the r-th sample
r = 1, 2 (i.e., the two samples)

When k1 = k2 = 1, the test simplifies to

$$F = \frac{c_{1}^{2}/(1 + c_{1}^{2}(n_{1} - 1)/n_{1})} {c_{2}^{2}/(1 + c_{2}^{2}(n_{2} - 1)/n_{2})}$$

This statistic is compared to the F distribution with $$\sum_{i=1}^{k_{1}}{(n_{1i} - 1)}$$ and $$\sum_{i=1}^{k_{2}}{(n_{2i} - 1)}$$ degrees of freedom.

The test implemented here was proposed by Forkman (see the References below). There are a number of alternative tests (see the paper by Krishnamoorthy and Lee in the References section). Simulations by Forkman and also by Krishnamoorthy and Lee indicate that the Forkman test has good nominal coverage and reasonable power.

Syntax 1:
ONE SAMPLE COEFFICIENT OF VARIATION TEST <y> <x> <gamma0> <SUBSET/EXCEPT/FOR qualification>
where <y> is the response variable;
<x> is the optional group-id variable;
<gamma0> is a parameter that specifies the hypothesized value;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.

This syntax performs a two-tailed test. If there are no groups in the data, the group-id variable can be omitted.
The <gamma0> can either be given on this command or specified before entering this command by entering

LET GAMMA0 = <value>

If the <x> variable is given, it should have the same number of rows as the <y> variable.

Syntax 2:
ONE SAMPLE COEFFICIENT OF VARIATION <LOWER/UPPER> TAILED TEST <y> <x> <gamma0> <SUBSET/EXCEPT/FOR qualification>
where <y> is the response variable;
<x> is the optional group-id variable;
<gamma0> is a parameter that specifies the hypothesized value;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.

This syntax performs a one-tailed test. If LOWER is entered, then the alternative hypothesis is

Ha: gamma < gamma0

If UPPER is entered, then the alternative hypothesis is

Ha: gamma > gamma0

If there are no groups in the data, the group-id variable can be omitted.

The <gamma0> can either be given on this command or specified before entering this command by entering

LET GAMMA0 = <value>

If the <x> variable is given, it should have the same number of rows as the <y> variable.

Syntax 3:
TWO SAMPLE COEFFICIENT OF VARIATION TEST <y1> <x1> <y2> <x2> <SUBSET/EXCEPT/FOR qualification>
where <y1> is the first response variable;
<x1> is the optional first group-id variable;
<y2> is the second response variable;
<x2> is the optional second group-id variable;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.

This syntax performs a two-tailed test. If there are no groups in the data, the group-id variables can be omitted. However, if a group-id variable is specified for one response variable, it should also be specified for the second response variable. If one of the response variables has groups but the other response variable does not, then a group-id variable can be created that has all values equal to 1.

The <y1> and <x1> variables should have the same number of rows. Likewise the <y2> and <x2> variables should have the same number of rows. However, <y1> and <y2> need not have the same number of rows.
Syntax 4:
TWO SAMPLE COEFFICIENT OF VARIATION <LOWER/UPPER> TAILED TEST <y1> <x1> <y2> <x2> <SUBSET/EXCEPT/FOR qualification>
where <y1> is the first response variable;
<x1> is the optional first group-id variable;
<y2> is the second response variable;
<x2> is the optional second group-id variable;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.

This syntax performs a one-tailed test. If LOWER is entered, then the alternative hypothesis is

Ha: gamma1 < gamma2

If UPPER is entered, then the alternative hypothesis is

Ha: gamma1 > gamma2

If there are no groups in the data, the group-id variables can be omitted. However, if a group-id variable is specified for one response variable, it should also be specified for the second response variable. If one of the response variables has groups but the other response variable does not, then a group-id variable can be created that has all values equal to 1.

The <y1> and <x1> variables should have the same number of rows. Likewise the <y2> and <x2> variables should have the same number of rows. However, <y1> and <y2> need not have the same number of rows.

Examples:
LET GAMMA0 = 0.1
ONE SAMPLE COEFFICIENT OF VARIATION TEST Y GAMMA0
ONE SAMPLE COEFFICIENT OF VARIATION TEST Y X GAMMA0
ONE SAMPLE COEFFICIENT OF VARIATION UPPER TAILED TEST ... Y X GAMMA0
ONE SAMPLE COEFFICIENT OF VARIATION TEST Y X GAMMA0 ... SUBSET X > 2
TWO SAMPLE COEFFICIENT OF VARIATION TEST Y1 Y2
TWO SAMPLE COEFFICIENT OF VARIATION TEST Y1 X1 Y2 X2
TWO SAMPLE COEFFICIENT OF VARIATION LOWER TAILED TEST Y1 Y2

Note: A table of confidence limits is printed for alpha levels of 50.0, 80.0, 90.0, 95.0, 99.0, and 99.9.

Note: In addition to the COEFFICIENT OF VARIATION TEST command, the following commands can also be used:

LET A = ONE SAMPLE COEFFICIENT OF VARIATION TEST ... Y X
LET A = ONE SAMPLE COEFFICIENT OF VARIATION TEST ... CDF Y X
LET A = ONE SAMPLE COEFFICIENT OF VARIATION TEST ... PVALUE Y X
LET A = ONE SAMPLE COEFFICIENT OF VARIATION LOWER ... PVALUE Y X
LET A = ONE SAMPLE COEFFICIENT OF VARIATION UPPER ... PVALUE Y X
LET A = TWO SAMPLE COEFFICIENT OF VARIATION TEST ... Y1 Y2
LET A = TWO SAMPLE COEFFICIENT OF VARIATION TEST ... CDF Y1 Y2
LET A = TWO SAMPLE COEFFICIENT OF VARIATION TEST ... PVALUE Y1 Y2
LET A = TWO SAMPLE COEFFICIENT OF VARIATION LOWER ... PVALUE Y1 Y2
LET A = TWO SAMPLE COEFFICIENT OF VARIATION UPPER ... PVALUE Y1 Y2

The LOWER PVALUE and UPPER PVALUE refer to the p-values based on lower tailed and upper tailed tests, respectively. For the one sample test, these statistics can be computed from summary data as well:

LET A = SUMMARY ONE SAMPLE COEFFICIENT OF VARIATION ... TEST YMEAN YSD YN
LET A = SUMMARY ONE SAMPLE COEFFICIENT OF VARIATION ... CDF YMEAN YSD YN
LET A = SUMMARY ONE SAMPLE COEFFICIENT OF VARIATION ... PVALUE YMEAN YSD YN

where YMEAN, YSD, and YN are arrays that contain the sample means, sample standard deviations, and sample sizes, respectively.

In addition to the above LET commands, built-in statistics are supported for 20+ different commands (enter HELP STATISTICS for details).

Note: As mentioned above, there are a number of tests that have been proposed. For the two sample test with no groups for the samples, Dataplot also supports the Miller test.
This test statistic is given by

$$\frac{c_{1} - c_{2}}{\sqrt{\frac{c^{2}}{2(n_{1} - 1)} + \frac{c^{4}}{n_{1} - 1} + \frac{c^{2}}{2(n_{2} - 1)} + \frac{c^{4}}{n_{2} - 1}}}$$

where

n1 = the sample size for sample one
c1 = the sample coefficient of variation for sample one
n2 = the sample size for sample two
c2 = the sample coefficient of variation for sample two
$$c = \frac{(n_{1} - 1)c_{1} + (n_{2} - 1)c_{2}}{n_{1} + n_{2} - 2}$$

To use the Miller test, enter the following command (before the TWO SAMPLE COEFFICIENT OF VARIATION TEST command):

SET TWO SAMPLE COEFFICIENT OF VARIATION TEST MILLER

To reset the default Forkman test, enter

SET TWO SAMPLE COEFFICIENT OF VARIATION TEST FORKMAN

Default: None

Synonyms: None

Related Commands:
COMMON COEFFICIENT OF VARIATION CONFIDENCE LIMITS = Generate a confidence interval for a common coefficient of variation.
COEFFICIENT OF VARIATION = Compute the coefficient of variation.
COEFFICIENT OF VARIATION CONFIDENCE LIMITS = Compute a confidence interval for the coefficient of variation.
CONFIDENCE LIMITS = Generate a confidence limit for the mean.
SD CONFIDENCE LIMITS = Generate a confidence limit for the standard deviation.
PREDICTION LIMITS = Generate prediction limits for the mean.
TOLERANCE LIMITS = Generate a tolerance limit.

References:
Forkman (2009), "Estimators and Tests for Common Coefficients of Variation in Normal Distributions", Communications in Statistics - Theory and Methods, Vol. 38, pp. 233-251.
Miller (1991), "Asymptotic Test Statistics for Coefficient of Variation", Communications in Statistics - Theory and Methods, Vol. 20, pp. 3351-3363.
McKay (1932), "Distributions of the Coefficient of Variation and the Extended 't' Distribution", Journal of the Royal Statistical Society, Vol. 95, pp. 695-698.
Krishnamoorthy and Lee (2014), "Improved Tests for the Equality of Normal Coefficients of Variation", Computational Statistics, Vol. 29, pp. 215-232.
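For the k1 = k2 = 1 case, both two-sample statistics are simple enough to verify directly. The Python sketch below (not Dataplot's implementation) reproduces them from the summary values printed in the Program 2 output later on this page (sample one: n = 249, mean 20.144578, sd 6.414699; sample two: n = 79, mean 30.481013, sd 6.107710):

```python
import math

def forkman_two_sample(n1, c1, n2, c2):
    """Forkman F statistic for H0: gamma1 = gamma2, one group per sample.

    Compared to an F distribution with (n1 - 1, n2 - 1) degrees of freedom.
    """
    u1 = c1**2 / (1.0 + c1**2 * (n1 - 1) / n1)
    u2 = c2**2 / (1.0 + c2**2 * (n2 - 1) / n2)
    return u1 / u2

def miller_two_sample(n1, c1, n2, c2):
    """Miller asymptotic statistic for H0: gamma1 = gamma2.

    Compared to a standard normal distribution.
    """
    c = ((n1 - 1) * c1 + (n2 - 1) * c2) / (n1 + n2 - 2)  # pooled CV
    var = (c**2 / (2 * (n1 - 1)) + c**4 / (n1 - 1)
           + c**2 / (2 * (n2 - 1)) + c**4 / (n2 - 1))
    return (c1 - c2) / math.sqrt(var)

# Summary statistics taken from the Program 2 sample output
c1 = 6.414699 / 20.144578    # sample one CV, 0.318433
c2 = 6.107710 / 30.481013    # sample two CV, 0.200378
print(forkman_two_sample(249, c1, 79, c2))   # approximately 2.384724
print(miller_two_sample(249, c1, 79, c2))    # approximately 4.100052
```

Both values agree with the Forkman (2.384724) and Miller (4.100052) test statistic values shown in the sample output.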
Applications: Confirmatory Data Analysis

Implementation Date: 2017/06

Program 1:

. Step 1: Read the data
.
skip 25
read gear.dat y x
skip 0
set write decimals 6
.
. Step 2: Define plot control
.
title case asis
title offset 2
label case asis
.
y1label Coefficient of Variation
x1label Group
title Coefficient of Variation for GEAR.DAT
let ngroup = unique x
xlimits 1 ngroup
major x1tic mark number ngroup
minor x1tic mark number 0
tic mark offset units data
x1tic mark offset 0.5 0.5
y1tic mark label decimals 3
.
character X
line blank
.
. Step 3: Plot the coefficient of variation over the batches
.
set statistic plot reference line average
coefficient of variation plot y x
.
. Step 4: Demonstrate the LET commands for the test statistics
. using raw data
.
let gamma0 = 0.005
let statval = one sample coef of variation test y x
let statcdf = one sample coef of variation test cdf y x
let pvalue = one sample coef of variation test pvalue y x
let pvall = one sample coef of variation lower pvalue y x
let pvalu = one sample coef of variation upper pvalue y x
print statval statcdf pvalue pvall pvalu
.
. Step 5: Demonstrate the LET commands for the test statistics
. using summary data
.
set let cross tabulate collapse
let ymean = cross tabulate mean y x
let ysd = cross tabulate sd y x
let yn = cross tabulate size x
.
let statval2 = summary one sample coef of variation test ymean ysd yn
let statcdf2 = summary one sample coef of variation cdf ymean ysd yn
let pvalue2 = summary one sample coef of variation pvalue ymean ysd yn
print statval2 statcdf2 pvalue2
.
. Step 6: Hypothesis test for common coefficient of variation
.
let gamma0 = 0.005
one sample coefficient of variation test y x
one sample coefficient of variation upper tail test y x
one sample coefficient of variation lower tail test y x

The following output is generated.
PARAMETERS AND CONSTANTS-- STATVAL -- 127.554980 STATCDF -- 0.994327 PVALUE -- 0.011346 PVALL -- 0.994327 PVALU -- 0.005673 PARAMETERS AND CONSTANTS-- STATVAL2-- 127.554980 STATCDF2-- 0.994327 PVALUE2 -- 0.011346 Forkman One Sample Coefficient of Variation Test Response Variable: Y Group-ID Variable: X H0: Coefficient of Variation Equal 0.005000 Ha: Coefficient of Variation Not Equal 0.005000 Summary Statistics: Total Number of Observations: 100 Number of Groups: 10 Number of Groups Included in Test: 10 Sample Common Coefficient of Variation: 0.005953 Test: Gamma0: 0.005000 Test Statistic Value: 127.554980 Degrees of Freedom: 90 CDF Value: 0.994327 P-Value (2-tailed test): 0.011346 P-Value (lower-tailed test): 0.994327 P-Value (upper-tailed test): 0.005673 Two-Tailed Test H0: Gamma = Gamma0; Ha: Gamma <> Gamma0 --------------------------------------------------------------------------- Lower Upper Null Significance Test Critical Critical Hypothesis Level Statistic Value Value Conclusion --------------------------------------------------------------------------- 50.0% 127.554980 80.624665 98.649932 REJECT 80.0% 127.554980 73.291090 107.565009 REJECT 90.0% 127.554980 69.126030 113.145270 REJECT 95.0% 127.554980 65.646618 118.135893 REJECT 99.0% 127.554980 59.196304 128.298944 ACCEPT 99.9% 127.554980 52.275778 140.782281 ACCEPT Forkman One Sample Coefficient of Variation Test Response Variable: Y Group-ID Variable: X H0: Coefficient of Variation Equal 0.005000 Ha: Coefficient of Variation > 0.005000 Summary Statistics: Total Number of Observations: 100 Number of Groups: 10 Number of Groups Included in Test: 10 Sample Common Coefficient of Variation: 0.005953 Test: Gamma0: 0.005000 Test Statistic Value: 127.554980 Degrees of Freedom: 90 CDF Value: 0.994327 P-Value (2-tailed test): 0.011346 P-Value (lower-tailed test): 0.994327 P-Value (upper-tailed test): 0.005673 Upper One-Tailed Test H0: Gamma = Gamma0; Ha: Gamma > Gamma0 
------------------------------------------------------------ Null Significance Test Critical Hypothesis Level Statistic Value (>) Conclusion ------------------------------------------------------------ 50.0% 127.554980 89.334218 REJECT 80.0% 127.554980 101.053723 REJECT 90.0% 127.554980 107.565009 REJECT 95.0% 127.554980 113.145270 REJECT 99.0% 127.554980 124.116319 REJECT 99.9% 127.554980 137.208354 ACCEPT Forkman One Sample Coefficient of Variation Test Response Variable: Y Group-ID Variable: X H0: Coefficient of Variation Equal 0.005000 Ha: Coefficient of Variation < 0.005000 Summary Statistics: Total Number of Observations: 100 Number of Groups: 10 Number of Groups Included in Test: 10 Sample Common Coefficient of Variation: 0.005953 Test: Gamma0: 0.005000 Test Statistic Value: 127.554980 Degrees of Freedom: 90 CDF Value: 0.994327 P-Value (2-tailed test): 0.011346 P-Value (lower-tailed test): 0.994327 P-Value (upper-tailed test): 0.005673 Lower One-Tailed Test H0: Gamma = Gamma0; Ha: Gamma < Gamma0 ------------------------------------------------------------ Null Significance Test Critical Hypothesis Level Statistic Value (<) Conclusion ------------------------------------------------------------ 50.0% 127.554980 89.334218 ACCEPT 80.0% 127.554980 78.558432 ACCEPT 90.0% 127.554980 73.291090 ACCEPT 95.0% 127.554980 69.126030 ACCEPT 99.0% 127.554980 61.754079 ACCEPT 99.9% 127.554980 54.155244 ACCEPT Program 2: . Step 1: Read the data . skip 25 skip 0 set write decimals 6 retain y2 subset y2 > 0 . . Test for equal coefficient of variation . let statval = two sample coef of variation test y1 y2 let statcdf = two sample coef of variation test cdf y1 y2 let pvalue = two sample coef of variation test pvalue y1 y2 let pvall = two sample coef of variation lower pvalue y1 y2 let pvalu = two sample coef of variation upper pvalue y1 y2 print statval statcdf pvalue pvall pvalu . . Test for equal coefficient of variation . 
two sample coefficient of variation test y1 y2 two sample coefficient of variation lower tail test y1 y2 two sample coefficient of variation upper tail test y1 y2 set two sample coefficient of variation test miller two sample coefficient of variation test y1 y2 two sample coefficient of variation lower tail test y1 y2 two sample coefficient of variation upper tail test y1 y2 The following output is generated. PARAMETERS AND CONSTANTS-- STATVAL -- 2.384724 STATCDF -- 0.999992 PVALUE -- 0.000015 PVALL -- 0.999992 PVALU -- 0.000008 Forkman Two Sample Test for Equal Coefficient of Variations First Response Variable: Y1 Second Response Variable: Y2 H0: Population Coefficients of Variation Are Equal (gamma1 = gamma2) Ha: gamma1 <> gamma2 Sample One Summary Statistics: Total Number of Observations: 249 Number of Groups Included: 1 Sample Mean: 20.144578 Sample Standard Deviation: 6.414699 Sample Coefficient of Variation: 0.318433 Sample Two Summary Statistics: Total Number of Observations: 79 Number of Included Groups: 1 Sample Mean: 30.481013 Sample Standard Deviation: 6.107710 Sample Coefficient of Variation: 0.200378 Forkman Test Statistic Value: 2.384724 Degrees of Freedom 248 Degrees of Freedom 78 CDF Value: 0.999992 P-Value (2-tailed test): 0.000015 P-Value (lower-tailed test): 0.999992 P-Value (upper-tailed test): 0.000008 Forkman Two Sample Test for Equal Coefficient of Variations H0: gamma1 = gamma2; Ha: gamma1 <> gamma2 --------------------------------------------------------------------------- Lower Upper Null Significance Test Critical Critical Hypothesis Level Statistic Value Value Conclusion --------------------------------------------------------------------------- 50.0% 2.384724 0.889585 1.140474 REJECT 80.0% 2.384724 0.798111 1.280145 REJECT 90.0% 2.384724 0.748580 1.373470 REJECT 95.0% 2.384724 0.708464 1.461059 REJECT 99.0% 2.384724 0.636939 1.652378 REJECT 99.9% 2.384724 0.563977 1.913576 REJECT Forkman Two Sample Test for Equal Coefficient of 
Variations First Response Variable: Y1 Second Response Variable: Y2 H0: Population Coefficients of Variation Are Equal (gamma1 = gamma2) Ha: gamma1 < gamma2 Sample One Summary Statistics: Total Number of Observations: 249 Number of Groups Included: 1 Sample Mean: 20.144578 Sample Standard Deviation: 6.414699 Sample Coefficient of Variation: 0.318433 Sample Two Summary Statistics: Total Number of Observations: 79 Number of Included Groups: 1 Sample Mean: 30.481013 Sample Standard Deviation: 6.107710 Sample Coefficient of Variation: 0.200378 Forkman Test Statistic Value: 2.384724 Degrees of Freedom 248 Degrees of Freedom 78 CDF Value: 0.999992 P-Value (2-tailed test): 0.000015 P-Value (lower-tailed test): 0.999992 P-Value (upper-tailed test): 0.000008 Lower One-Tailed Test H0: gamma1 = gamma2; Ha: gamma1 < gamma2 ------------------------------------------------------------ Null Significance Test Critical Hypothesis Level Statistic Value (<) Conclusion ------------------------------------------------------------ 50.0% 2.384724 1.005895 REJECT 80.0% 2.384724 0.863240 REJECT 90.0% 2.384724 0.798111 REJECT 95.0% 2.384724 0.748580 REJECT 99.0% 2.384724 0.664874 REJECT 99.9% 2.384724 0.583430 REJECT Forkman Two Sample Test for Equal Coefficient of Variations First Response Variable: Y1 Second Response Variable: Y2 H0: Population Coefficients of Variation Are Equal (gamma1 = gamma2) Ha: gamma1 > gamma2 Sample One Summary Statistics: Total Number of Observations: 249 Number of Groups Included: 1 Sample Mean: 20.144578 Sample Standard Deviation: 6.414699 Sample Coefficient of Variation: 0.318433 Sample Two Summary Statistics: Total Number of Observations: 79 Number of Included Groups: 1 Sample Mean: 30.481013 Sample Standard Deviation: 6.107710 Sample Coefficient of Variation: 0.200378 Forkman Test Statistic Value: 2.384724 Degrees of Freedom 248 Degrees of Freedom 78 CDF Value: 0.999992 P-Value (2-tailed test): 0.000015 P-Value (lower-tailed test): 0.999992 P-Value 
(upper-tailed test): 0.000008 Forkman Two Sample Test for Equal Coefficient of Variations H0: gamma1 = gamma2; Ha: gamma1 <> gamma2 ------------------------------------------------------------ Lower Null Significance Test Critical Hypothesis Level Statistic Value Conclusion ------------------------------------------------------------ 50.0% 2.384724 1.005895 ACCEPT 80.0% 2.384724 1.177041 ACCEPT 90.0% 2.384724 1.280145 ACCEPT 95.0% 2.384724 1.373470 ACCEPT 99.0% 2.384724 1.571455 ACCEPT 99.9% 2.384724 1.835646 ACCEPT THE FORTRAN COMMON CHARACTER VARIABLE TWO SAMP HAS JUST BEEN SET TO MILL Miller Two Sample Test for Equal Coefficient of Variations First Response Variable: Y1 Second Response Variable: Y2 H0: Population Coefficients of Variation Are Equal (gamma1 = gamma2) Ha: gamma1 <> gamma2 Sample One Summary Statistics: Number of Observations: 249 Sample Mean: 20.144578 Sample Standard Deviation: 6.414699 Sample Coefficient of Variation: 0.318433 Sample Two Summary Statistics: Number of Observations: 79 Sample Mean: 30.481013 Sample Standard Deviation: 6.107710 Sample Coefficient of Variation: 0.200378 Miller Test Statistic Value: 4.100052 CDF Value: 0.999979 P-Value (2-tailed test): 0.000041 P-Value (lower-tailed test): 0.999979 P-Value (upper-tailed test): 0.000021 Miller Two Sample Test for Equal Coefficient of Variations H0: gamma1 = gamma2; Ha: gamma1 <> gamma2 --------------------------------------------------------------------------- Lower Upper Null Significance Test Critical Critical Hypothesis Level Statistic Value Value Conclusion --------------------------------------------------------------------------- 50.0% 4.100052 -0.674490 0.674490 REJECT 80.0% 4.100052 -1.281552 1.281552 REJECT 90.0% 4.100052 -1.644854 1.644854 REJECT 95.0% 4.100052 -1.959964 1.959964 REJECT 99.0% 4.100052 -2.575829 2.575829 REJECT 99.9% 4.100052 -3.290527 3.290527 REJECT Miller Two Sample Test for Equal Coefficient of Variations First Response Variable: Y1 Second Response 
Variable: Y2 H0: Population Coefficients of Variation Are Equal (gamma1 = gamma2) Ha: gamma1 < gamma2 Sample One Summary Statistics: Number of Observations: 249 Sample Mean: 20.144578 Sample Standard Deviation: 6.414699 Sample Coefficient of Variation: 0.318433 Sample Two Summary Statistics: Number of Observations: 79 Sample Mean: 30.481013 Sample Standard Deviation: 6.107710 Sample Coefficient of Variation: 0.200378 Miller Test Statistic Value: 4.100052 CDF Value: 0.999979 P-Value (2-tailed test): 0.000041 P-Value (lower-tailed test): 0.999979 P-Value (upper-tailed test): 0.000021 Lower One-Tailed Test H0: gamma1 = gamma2; Ha: gamma1 < gamma2 ------------------------------------------------------------ Null Significance Test Critical Hypothesis Level Statistic Value (<) Conclusion ------------------------------------------------------------ 50.0% 4.100052 0.000000 REJECT 80.0% 4.100052 -0.841621 REJECT 90.0% 4.100052 -1.281552 REJECT 95.0% 4.100052 -1.644854 REJECT 99.0% 4.100052 -2.326348 REJECT 99.9% 4.100052 -3.090232 REJECT Miller Two Sample Test for Equal Coefficient of Variations First Response Variable: Y1 Second Response Variable: Y2 H0: Population Coefficients of Variation Are Equal (gamma1 = gamma2) Ha: gamma1 > gamma2 Sample One Summary Statistics: Number of Observations: 249 Sample Mean: 20.144578 Sample Standard Deviation: 6.414699 Sample Coefficient of Variation: 0.318433 Sample Two Summary Statistics: Number of Observations: 79 Sample Mean: 30.481013 Sample Standard Deviation: 6.107710 Sample Coefficient of Variation: 0.200378 Miller Test Statistic Value: 4.100052 CDF Value: 0.999979 P-Value (2-tailed test): 0.000041 P-Value (lower-tailed test): 0.999979 P-Value (upper-tailed test): 0.000021 Miller Two Sample Test for Equal Coefficient of Variations H0: gamma1 = gamma2; Ha: gamma1 <> gamma2 ------------------------------------------------------------ Lower Null Significance Test Critical Hypothesis Level Statistic Value Conclusion 
------------------------------------------------------------ 50.0% 4.100052 0.000000 ACCEPT 80.0% 4.100052 0.841621 ACCEPT 90.0% 4.100052 1.281552 ACCEPT 95.0% 4.100052 1.644854 ACCEPT 99.0% 4.100052 2.326348 ACCEPT 99.9% 4.100052 3.090232 ACCEPT NIST is an agency of the U.S. Commerce Department. Date created: 06/27/2017 Last updated: 06/27/2017
https://par.nsf.gov/biblio/10005826-measurement-higgs-boson-production-diphoton-decay-channel-pp-collisions-center-mass-energies-atlas-detector
Measurement of Higgs boson production in the diphoton decay channel in $pp$ collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector
https://mooseframework.inl.gov/getting_started/installation/test_moose.html
Compile and Test MOOSE

After libMesh has compiled, the next step is to compile and test MOOSE.

```shell
cd ~/projects/moose/test
make -j 4
./run_tests -j 4
```

If the installation was successful you should see most of the tests passing (some tests will be skipped depending on your system environment).
https://www.abs.gov.au/methodologies/causes-death-australia-methodology/2018
# Causes of Death, Australia methodology

Reference period: 2018
Released: 25/09/2019

## Explanatory notes

### Introduction

1 This publication contains statistics on causes of death for Australia, together with selected statistics on perinatal deaths.

2 Statistics on perinatal deaths for the 2007-2009 reference years were published separately in Perinatal Deaths, Australia, 2009 (cat. no. 3304.0).

3 In order to complete a death registration, the death must be certified by either a doctor, using the Medical Certificate of Cause of Death, or by a coroner. In 2018, 88.1% of deaths were certified by a doctor. The remaining 11.9% were certified by a coroner.

4 In order to complete a perinatal death registration, the death must be certified by either a doctor, using the Medical Certificate of Cause of Perinatal Death, or by a coroner. In 2018, 97.4% of perinatal deaths were certified by a doctor, with the remaining 2.6% certified by a coroner.

5 Although there is variation across jurisdictions in what constitutes a death that is reportable to a coroner, deaths are generally reported in circumstances such as:

• where the person died unexpectedly and the cause of death is unknown
• where the person died in a violent or unnatural manner
• where the person died during, or as a result of, an anaesthetic
• where the person was 'held in care' or in custody immediately before they died
• where the identity of the person who has died is unknown.

6 The registration of deaths is the responsibility of the eight individual state and territory Registrars of Births, Deaths and Marriages. As part of the registration process, information about the cause of death is supplied by the medical practitioner certifying the death or by a coroner. Other information about the deceased is supplied by a relative or other person acquainted with the deceased, or by an official of the institution where the death occurred.
The information is provided to the Australian Bureau of Statistics (ABS) by individual Registrars for coding and compilation into aggregate statistics. In addition, the ABS supplements this data with information from the National Coronial Information System (NCIS). The diagram below outlines the Australian cause of death statistics system: each death is certified by either a doctor or a coroner, and the resulting information is provided to the ABS through the Registrar of Births, Deaths and Marriages in each state or territory. Information is also provided via the National Coronial Information System for those deaths certified by a coroner. The ABS processes, codes and validates this information, which is then provided in statistical outputs.

### Australian cause of death statistics system

The flow chart begins with a death event, which has two paths: a funeral director, or a reportable death. The funeral director registers the death with the Registrar of Births, Deaths and Marriages. If the death is not reportable, it is certified by a doctor and then registered with the Registrar of Births, Deaths and Marriages. If the death is reportable, it goes to a coroner investigation, which may involve a police investigation, an autopsy, and other procedures (e.g. toxicology), and leads to certification by the coroner. From certification by the coroner, information flows to both the Registrar of Births, Deaths and Marriages and the National Coronial Information System. The next section of the flow chart, ABS processing, continues from the Registrar of Births, Deaths and Marriages and the National Coronial Information System to ABS amalgamation and record checks, which flows to the cause of death coding and validation process.
This then flows to validation and finalisation of the deaths file. The flow chart ends at the final section, statistics available to users, with the statistical outputs.

### 2018 scope and coverage

7 Ideally, for compiling annual time series, the number of deaths should be recorded and reported as those which occurred within a given reference period, such as a calendar year. However, there can be lags in the registration of deaths with the state or territory registries, and so not all deaths are registered in the year that they occur. There may also be further delays to the ABS receiving notification of the death from the registries due to processing or data transfer lags. Therefore, every death record will have:

• a date on which the death occurred (the date of occurrence)
• a date on which the death is registered with the state and territory registry (date of registration); and
• a date on which the registered death is lodged with the ABS and deemed in scope.

8 With the exception of the statistics published in the Year of Occurrence section (Data Cube 13), all deaths referred to in this publication relate to the number of deaths registered, not those which actually occurred, in the years shown.

### Scope of causes of death statistics

9 The scope for each reference year of the death registrations includes:

• deaths registered in the reference year and received by the ABS in the reference year;
• deaths registered in the reference year and received by the ABS in the first quarter of the subsequent year; and
• deaths registered in the years prior to the reference year but not received by the ABS until the reference year or the first quarter of the subsequent year, provided that these records have not been included in any statistics from earlier periods.

10 From 2007 onwards, data for a particular reference year includes all deaths registered in Australia for the reference year that are received by the ABS by the end of the March quarter of the subsequent year.
Death records received by the ABS during the March quarter of 2019 which were initially registered in 2018 (but for which registration was not fully completed until 2019) were assigned to the 2018 reference year. Any registrations relating to 2018 which were received by the ABS from April 2019 will be assigned to the 2019 reference year. Approximately 4% to 7% of deaths occurring in one year are not registered until the following year or later.

11 Prior to 2007, the scope for the reference year of the Death Registrations collection included:

• deaths registered in the reference year and received by the ABS in the reference year;
• deaths registered in the reference year and received by the ABS in the first quarter of the subsequent year; and
• deaths registered during the two years prior to the reference year but not received by the ABS until the reference year.

### Coverage of causes of death statistics

12 The ABS Causes of Death collection includes all deaths that occurred and were registered in Australia, including deaths of persons whose usual residence is overseas. Deaths of Australian residents that occurred outside Australia may be registered by individual Registrars, but are not included in ABS deaths or causes of death statistics.

13 Deaths registered on Norfolk Island from 1 July 2016 are included in this publication. This is due to the introduction of the Norfolk Island Legislation Amendment Act 2015. Norfolk Island deaths are included in statistics for "Other Territories" as well as totals for all of Australia. Deaths registered on Norfolk Island prior to 1 July 2016 were not in scope for death statistics. Prior to 1 July 2016, deaths of people that occurred in Australia with a usual residence of Norfolk Island were included in Australian totals, but assigned a usual residence of 'overseas'.
With the inclusion of Norfolk Island as a territory of Australia in the ASGS 2016, those deaths which occurred in Australia between January and June 2016 with a usual residence of Norfolk Island were allocated to the Norfolk Island SA2 code instead of the 'overseas' category.

14 The current scope of the statistics includes:

• all deaths being registered for the first time;
• deaths in Australia of temporary visitors to Australia;
• deaths occurring within Australian Territorial waters;
• deaths occurring in Australian Antarctic Territories or other external territories (including Norfolk Island);
• deaths occurring in transit (i.e. on ships or planes) if registered in the State of 'next port of call';
• deaths of Australian Nationals overseas who were employed at Australian legations and consular offices (i.e. deaths of Australian diplomats while overseas) where able to be identified; and
• deaths that occurred in earlier reference periods that have not been previously registered (late registrations).

15 The scope of the statistics excludes:

• repatriation of human remains where the death occurred overseas;
• deaths overseas of foreign diplomatic staff (where these are able to be identified); and
• stillbirths/fetal deaths (these are included in perinatal counts; see Explanatory Notes 16-20, below). In 2007-2009 these were published separately in Perinatal Deaths, Australia (cat. no. 3304.0), but are now included in this publication.

### Scope of perinatal death statistics

16 The scope of the perinatal death statistics includes all registered fetal deaths (at least 20 weeks' gestation or at least 400 grams' birth weight) and all registered neonatal deaths (all live born babies who die within 28 completed days of birth, regardless of gestation or birth weight).
The ABS scope rules for fetal deaths are consistent with the legislated requirement for all state and territory Registrars of Births, Deaths and Marriages to register all fetal deaths which meet the above-mentioned gestation and birth weight criteria. Based on this legislative requirement, in the case of missing gestation and/or birth weight data, the fetal record is considered in scope and included in the dataset. A record is only considered out of scope if both gestation and birth weight data are present, and both fall outside the scope criteria (i.e. gestation of 19 weeks or less and birth weight of 399 grams or less). This scope was adopted for the 2007 Perinatal Deaths collection, and was applied to historical data for 1999-2006. For more information on the changes in scope rules see Perinatal Deaths, Australia, 2007 (cat. no. 3304.0), Explanatory Notes 18-20. These rules have been applied to all perinatal data presented in this publication.

17 The World Health Organization (WHO) definition of a perinatal death differs from that used by the ABS. The WHO definition includes all neonatal deaths, and those fetuses weighing at least 500 grams or having a gestational age of at least 22 weeks, or body length of 25 centimetres from crown to heel. A summary table based on the WHO definition of perinatal deaths is included in the perinatal data cube in this release. See Explanatory Note 81, below, for more details on the interpretation of this table for 2018.

18 Fetal deaths are registered only as a stillbirth, and are not in scope of either the Births, Australia (cat. no. 3301.0) or Deaths, Australia (cat. no. 3302.0) collections. Fetal deaths are part of the Perinatal collection, but not the Causes of Death collection. Neonatal deaths are in scope of the Deaths, Causes of Death and Perinatal collections.

19 This publication only includes information on registered fetal and neonatal deaths.
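The fetal-death scope rule described in note 16 (and the missing-data convention above) can be expressed as a small predicate. This is a sketch only; the function name and the None-for-missing convention are illustrative, not the ABS's:

```python
def fetal_death_in_scope(gestation_weeks, birth_weight_g):
    """Apply the ABS scope rule for registered fetal deaths.

    A record is out of scope only when BOTH fields are present and BOTH
    fall below the thresholds (at least 20 weeks' gestation or at least
    400 grams' birth weight). Missing values (None) leave the record in
    scope, mirroring the legislated registration requirement.
    """
    if gestation_weeks is None or birth_weight_g is None:
        return True  # missing data: record stays in scope
    return gestation_weeks >= 20 or birth_weight_g >= 400

print(fetal_death_in_scope(19, 350))    # both below thresholds -> out of scope
print(fetal_death_in_scope(None, 350))  # missing gestation -> in scope
print(fetal_death_in_scope(22, 380))    # gestation meets threshold -> in scope
```

Note that either criterion alone is sufficient to keep a record in scope, which is why the final test uses `or` rather than `and`.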
Registered deaths are sourced through jurisdictional Registries of Births, Deaths and Marriages (see Explanatory Note 6). This scope differs from other Australian data sources on perinatal deaths. For this reason alternative datasets are not directly comparable and caution should be taken when using multiple sources for analysis.

20 Perinatal death data reported by the ABS are not comparable with the National Perinatal Mortality Data Collection (NPMDC) coordinated by the AIHW. As outlined in Explanatory Note 19, the ABS data are sourced from state and territory Registrars of Births, Deaths and Marriages. This differs from the NPMDC, whose data are sourced from health systems, including clinical records. The table below was published in the AIHW Stillbirths and neonatal deaths in Australia 2015 and 2016 report. The table shows that the ABS perinatal dataset is affected by delayed registrations, which results in an undercount of perinatal deaths, especially stillbirths. Caution should be taken when interpreting this data.

Number of perinatal deaths reported by the Australian Bureau of Statistics (ABS) and the National Perinatal Mortality Data Collection (NPMDC) by year of death, Australia, 2013-2016 (sourced from AIHW, NPMDC, 2018)

| Year | NPMDC stillbirths | ABS stillbirths | NPMDC neonatal deaths | ABS neonatal deaths |
|------|-------------------|-----------------|-----------------------|---------------------|
| 2013 | 2,194 | 1,706 | 822 | 793 |
| 2014 | 2,225 | 1,720 | 796 | 742 |
| 2015 | 2,148 | 1,714 | 688 | 691 |
| 2016 | 2,115 | 1,650 | 751 | 696 |

### Socio-demographic classifications

21 A range of socio-demographic data are available from the ABS Causes of Death collection. Standard classifications used in the presentation of causes of death statistics include age, sex, and Aboriginal and Torres Strait Islander status. Statistical standards for social and demographic variables have been developed by the ABS. Where these are not released in the Causes of Death published outputs, they can be sourced on request from the ABS.
### Geographic classifications

22 Since the publication of Causes of Death, Australia, 2011, the ABS has released data based on the Australian Statistical Geography Standard (ASGS). The ASGS is a hierarchical classification system that defines more stable, consistent and meaningful areas than those of the Australian Standard Geographical Classification (ASGC), which was used to define geographical areas for output prior to the release of 2011 reference year data. Under the ASGS, causes of death statistics are coded to Statistical Area 2 (SA2) level, and are presented at the state/territory and national level in this publication.

23 The Standard Australian Classification of Countries (SACC) groups neighbouring countries into progressively broader geographic areas on the basis of their similarity in terms of social, cultural, economic and political characteristics. ABS causes of death statistics are coded using the SACC, as the collection includes overseas residents whose death occurred while they were in Australia.

24 For further information, refer to the Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas, July 2016 (cat. no. 1270.0.55.001) and the Standard Australian Classification of Countries (SACC), 2011 (cat. no. 1269.0).

### International Classification of Diseases (ICD)

25 The International Classification of Diseases (ICD) is the international standard classification for epidemiological purposes and is designed to promote international comparability in the collection, processing, classification, and presentation of causes of death statistics. The classification is used to classify diseases and causes of disease or injury as recorded on many types of medical records as well as death records. The ICD has been revised periodically to incorporate changes in the medical field. Currently the ICD 10th revision is used for Australian causes of death statistics.
26 The ICD-10 is a variable-axis classification, meaning that the classification does not group diseases only on the basis of anatomical site, but also on the type of disease. Epidemiological and statistical data are grouped according to:

• epidemic diseases;
• constitutional or general diseases;
• local diseases arranged by site;
• developmental diseases; and
• injuries.

27 For example, a systemic disease such as sepsis is grouped with infectious diseases; a disease primarily affecting one body system, such as a myocardial infarction, is grouped with circulatory diseases; and a congenital condition, such as spina bifida, is grouped with congenital conditions.

28 For further information about the ICD refer to the WHO International Classification of Diseases (ICD).

29 The versions of the ICD 10th Revision are available online.

30 The Update and Revision Committee (URC), a WHO advisory group on updates to ICD-10, maintains the cumulative and annual lists of approved updates to the ICD-10 classification. The updates to ICD-10 are of numerous types, including the addition and deletion of codes, changes to coding instructions, and modification and clarification of terms.

31 From the 2013 reference year, the ABS implemented a new automated coding system called Iris. The 2013-2017 data coded in the Iris system applied an updated version of the ICD-10 (2013 version for 2013 data, and 2015 version for 2014-2017 data) when coding multiple causes of death, and when selecting the underlying cause of death. For details of further impacts of this change from 2013 data onwards, please see the Technical Note ABS Implementation of the Iris Software: Understanding Coding and Process Improvements in the Causes of Death, Australia, 2013 (cat. no. 3303.0) publication.

32 The 2018 reference year cause of death data presented in this publication was coded using version 5.4.0 of the Iris software. This system replaced Iris version 4.4.1, which was used to code the 2013-2017 cause of death data.
Version 5.4.0 of the Iris software applied the WHO ICD updates (2016 version), which have resulted in changes to output. For more information see the Technical Note Updates to Iris coding software: Implementing WHO updates and improvements in coding processes, in the Causes of Death, Australia, 2018 (cat. no. 3303.0) publication.

33 Prior to the 2013 reference year, the 2006 version of the ICD-10 was the most recent version used for coding deaths, with the exception of two updates that were applied after the 2006 reference year. The first update was implemented in 2007 and related to the use of mental and behavioural disorders due to psychoactive substance use, acute intoxication (F10.0, F11.0...F19.0) as an underlying cause of death. If the acute intoxication initiated the train of morbid events, it is now assigned an external accidental poisoning code (X40-X49) corresponding to the type of drug used. For example, if the death had been due to alcohol intoxication, the underlying cause before the update was F10.0; after the update the underlying cause is X45, with poisoning code T51.9. The second update, implemented from the 2009 reference year, was the addition of Influenza due to certain identified virus (J09) to the Influenza and Pneumonia block. This addition was implemented to capture deaths due to swine flu and avian flu, which were reaching health epidemic status worldwide.

34 The cumulative list of ICD-10 updates can be found online.

### Automated coding

35 From the 2013 reference year onwards, the cause of death data presented in this publication was coded using the Iris coding software. This system replaced the Mortality Medical Data System (MMDS), which was used for coding cause of death data for the 1997-2012 reference years. Like MMDS, Iris is an automated coding system. Iris assigns ICD-10 codes to the diseases and conditions listed on the death certificate and then applies decision tables to select the underlying cause of death.
Iris version 4.4.1 was used to code 2013-2017 deaths data. Iris version 5.4.0 was used to code 2018 data. For further details on the change to Iris coding software and associated impacts on data, please see the Technical Notes ABS Implementation of the Iris Software: Understanding Coding and Process Improvements, in the Causes of Death, Australia, 2013 (cat. no. 3303.0) publication, and Updates to Iris coding software: Implementing WHO updates and improvements in coding processes, in the Causes of Death, Australia, 2018 (cat. no. 3303.0) publication.

### Types of death

36 All causes of death can be grouped to describe the type of death: whether it was from a disease or condition, from an injury, or whether the cause is unknown. These are generally described as:

• Natural Causes - deaths due to diseases (for example diabetes, cancer, heart disease etc.) (A00-Q99, R00-R98)
• External Causes - deaths due to causes external to the body (for example intentional self-harm, transport accidents, falls, poisoning etc.) (V01-Y98)
• Unknown Causes - deaths where it is unable to be determined whether the cause was natural or external (R99).

### External causes of death

37 Where an accidental or violent death occurs, the underlying cause is classified according to the circumstances of the fatal injury, rather than the nature of the injury, which is coded separately. For example, a motorcyclist may crash into a tree (V27.4) and sustain multiple fractures to the skull and facial bones (S02.7), which leads to death. The underlying cause of death is the crash itself (V27.4), as it is the circumstance which led to the injuries that ultimately caused the death.

38 Ranking causes of death is a useful method of describing patterns of mortality in a population and allows comparison over time and between populations. However, different methods of grouping causes of death can result in a vastly different list of leading causes for any given population.
A ranking of leading causes of death based on broad cause groupings such as 'cancers' or 'heart disease' does not identify the leading causes within these groups, which is needed to inform policy on interventions and health advocacy. Similarly, a ranking based on very narrow cause groupings, or one including diseases that have a low frequency, can be meaningless in informing policy.

39 Tabulations of leading causes presented in this publication are based on research presented in the Bulletin of the World Health Organization, Volume 84, Number 4, April 2006, 297-304. The determination of groupings in this list is primarily driven by data from individual countries representing different regions of the world. Other groupings are based on prevention strategies, or to maintain homogeneity within the groups of cause categories. Since the aforementioned bulletin was published, a decision was made by WHO to include deaths associated with the H1N1 influenza strain (commonly known as swine flu) in the ICD-10 classification as Influenza due to certain identified influenza virus (J09). This code has been included with the Influenza and Pneumonia leading cause grouping in the Causes of Death publication since the 2009 reference year.

40 Since 2015, the ABS has included C26.0 (malignant neoplasm of the intestinal tract, part unspecified) in the WHO leading cause grouping for Malignant neoplasm of colon, sigmoid, rectum and anus (now C18-C21, C26.0). For further details on the reasoning behind the inclusion of C26.0 in this leading cause grouping, see Complexities in the measurement of bowel cancer in Australia, in Causes of Death, Australia, 2015 (cat. no. 3303.0). This change has been applied in this publication to data for all reference years that appear in tables involving leading cause tabulations.
This differs from publications prior to 2015, for which C26.0 was not included in this leading cause grouping, and also differs from the suggested WHO tabulation of leading causes for these cancers. Comparisons with data for this leading cause, and associated leading cause rankings, as they appear in previous publications should therefore be made with caution. Time-series data by leading causes has been published in Australia's leading causes of death, 2017 in this publication.

41 The ABS now includes Y87.0 (Sequelae of intentional self-harm), Y87.1 (Sequelae of assault) and Y85 (Sequelae of transport accidents) in the WHO leading cause groupings for Intentional self-harm (now X60-X84, Y87.0), Assault (now X85-Y09, Y87.1) and Land transport accidents (V01-V89, Y85). This change has been applied to harmonise data between the WHO leading cause groupings and the subject-specific data cubes for intentional self-harm, assault and transport accidents published as part of the ABS Causes of Death collection. This change applies to publication data for all reference years that appear in tables involving leading cause tabulations. This differs from previous publications, where Y87.0, Y87.1 and Y85 were not included in these leading cause groupings, and also differs from the suggested WHO tabulation of leading causes. Comparisons with data for these leading causes, and associated leading cause rankings, as they appear in previous publications should therefore be made with caution. Time-series data by leading causes has been published in Australia's leading causes of death, 2017 in this publication.

### Years of Potential Life Lost (YPLL)

42 Years of Potential Life Lost (YPLL) measures the extent of 'premature' mortality, which is assumed to be any death between the ages of 1-78 years inclusive, and aids in assessing the significance of specific diseases or trauma as a cause of premature death.
43 Estimates of YPLL are calculated for deaths of persons aged 1-78 years, based on the assumption that deaths occurring at these ages are premature. The inclusion of deaths under one year would bias the YPLL calculation because of the relatively high mortality rate at that age, and 79 years was the median age at death when this series of YPLL was calculated using 2001 as the standard year. As shown below, the calculation uses the current ABS standard population of all persons in the Australian population at 30 June 2001.

44 YPLL is derived from:

$$YPLL=\sum_{x}D_{x}\left(79-A_{x}\right)$$

where:

$$A_{x}$$ = adjusted age at death. As age at death is only available in completed years, the midpoint of the reported age is chosen (e.g. age at death 34 years is adjusted to 34.5).

$$D_{x}$$ = registered number of deaths at age $${x}$$ due to a particular cause of death.

YPLL is directly standardised for age by weighting each age's contribution, where the age correction factor $$C_{x}$$ is defined for age $${x}$$ as:

$$C_{x}=\frac{N_{xs}}{N_{s}}\cdot\frac{1}{N_{x}}\cdot N$$

where:

$${N}$$ = estimated number of persons resident in Australia aged 1-78 years at 30 June 2018

$$N_{x}$$ = estimated number of persons resident in Australia aged $${x}$$ years at 30 June 2018

$$N_{xs}$$ = estimated number of persons resident in Australia aged $${x}$$ years at 30 June 2001 (standard population)

$$N_{s}$$ = estimated number of persons resident in Australia aged 1-78 years at 30 June 2001 (standard population).

45 The data cubes contain directly standardised death rates and YPLL for males, females and persons. In some cases the summation of the results for males and females will not equate to persons. The reason for this is that different standardisation factors are applied separately for males, females and persons.

### Standardised death rates

46 Age-standardised death rates enable the comparison of death rates over time and between populations of different age-structures.
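The YPLL definitions in notes 42-44 above can be made concrete with a small computation. This is a sketch only: all populations and death counts below are toy numbers, not ABS data, and the separate per-sex standardisation factors mentioned in note 45 are omitted.

```python
def ypll_standardised(deaths, pop_current, pop_standard):
    """Directly age-standardised Years of Potential Life Lost.

    deaths[x], pop_current[x] and pop_standard[x] are dicts indexed by
    completed age x (ages 1-78). The adjusted age at death A_x is taken
    as the age midpoint x + 0.5, per note 44.
    """
    n = sum(pop_current.values())      # N:   current population aged 1-78
    n_s = sum(pop_standard.values())   # N_s: standard population aged 1-78
    total = 0.0
    for x, d_x in deaths.items():
        a_x = x + 0.5                  # adjusted age at death
        # Age correction factor C_x = (N_xs / N_s) * (1 / N_x) * N
        c_x = (pop_standard[x] / n_s) * (1.0 / pop_current[x]) * n
        total += c_x * d_x * (79 - a_x)
    return total

# Toy example: two ages, identical current and standard populations,
# so every C_x = 1 and the result equals the crude YPLL.
deaths = {34: 2, 60: 1}
pop = {34: 1000, 60: 1000}
print(ypll_standardised(deaths, pop, pop))  # 2*(79-34.5) + 1*(79-60.5) = 107.5
```

With differing current and standard populations, C_x reweights each age's contribution toward the 2001 standard age structure, which is why male, female and person results need not sum exactly (note 45).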
Along with adult, infant and child mortality rates, they are used to determine whether the mortality rate of the Aboriginal and Torres Strait Islander population is declining over time, and whether the gap between the Aboriginal and Torres Strait Islander and non-Indigenous populations is narrowing. However, there have been inconsistencies in the way different government agencies have calculated age-standardised death rates in the past. The ABS uses the direct method of age-standardisation, as it allows for valid comparisons of mortality rates between different study populations and across time. This method was agreed to by the ABS, the Australian Institute of Health and Welfare (AIHW) and other stakeholders. For further information see: AIHW (2011) Principles on the use of direct age-standardisation in administrative data collections: for measuring the gap between Indigenous and non-Indigenous Australians. Cat. no. CSI 12. Canberra: AIHW.

47 The direct method has been used throughout the publication and data cubes for age-standardised death rates. Age-standardised death rates for specific causes of death with fewer than a total of 20 deaths have not been published due to issues of robustness.

48 For further information, see the Appendix: Principles on the use of direct age-standardisation, from Deaths, Australia, 2010 (cat. no. 3302.0).

49 In this publication, age-standardised and age-specific death rates for Aboriginal and Torres Strait Islander persons for the 2009-2018 reference years have been calculated using 2016-Census-based population estimates (projections and backcasts). Non-Indigenous estimates have been derived by subtracting the 2016-Census-based Aboriginal and Torres Strait Islander population estimates from the total 2016-Census-based estimated resident population (ERP). Rates calculated from population denominators derived from different Censuses may cause artificially large rate differences.
Rate comparisons should not be made with previous publications for Aboriginal and Torres Strait Islander data. See Estimates and Projections, Aboriginal and Torres Strait Islander Australians (cat. no. 3238.0) for more information.

### State and territory data

50 Causes of death statistics for states and territories in this publication have been compiled based on the state or territory of usual residence of the deceased, regardless of where in Australia the death occurred and was registered. Deaths of persons usually resident overseas which occur in Australia are included in the state/territory in which their death was registered.

51 Statistics compiled on a state or territory of registration basis are available on request.

### Perinatal state and territory data

52 Given the small number of perinatal deaths which occur in some states and territories, some data provided on a state/territory basis in this publication have been aggregated for South Australia, Western Australia, the Northern Territory, the Australian Capital Territory and Other Territories.

### Data quality

53 In compiling causes of death statistics, the ABS employs a variety of measures to improve quality, which include:

• providing certifiers with certification booklets for guidance in reporting causes of death on medical certificates (see Information Paper: Cause of Death Certification Australia, 2008 (cat. no. 1205.0.55.001));
• seeking detailed information from the National Coronial Information System (NCIS); and
• editing checks at the individual record and aggregate levels.

### Coroner-certified deaths

54 The quality of causes of death coding can be affected by changes in the way information is reported by certifiers, and by lags in the completion of coroner cases and the processing of the findings.
While changes in reporting and lags in coronial processes can affect coding of all causes of death, those coded to Chapter XVIII: Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified and Chapter XX: External causes of morbidity and mortality are more likely to be affected, because the code assigned within the chapter may vary depending on the coroner's findings (in accordance with ICD-10 coding rules).

55 It is the role of the coroner to investigate all reportable deaths and to establish, wherever possible, the circumstances surrounding the death and the cause(s) of death. Generally, most deaths due to external causes will be referred to a coroner for investigation; this includes those deaths which are possible instances of intentional self-harm (suicide).

56 Where a case remains open on the NCIS at the time the ABS ceases processing, and insufficient information is available to code a cause of death (e.g. a coroner-certified death was yet to be finalised by the coroner), less specific ICD codes are assigned, as required by the ICD coding rules.

57 The specificity with which open cases are able to be coded is directly related to the amount and type of information available on the NCIS. The amount of information available for open cases varies considerably, from no information to detailed police, autopsy and toxicology reports. There may also be interim findings of 'intent'.

58 The manner or intent of an injury which leads to death is determined by whether the injury was inflicted purposefully or not. When it was inflicted purposefully (intentional), a determination should be made as to whether the injury was self-inflicted (suicide) or inflicted by another person (assault). However, intent cannot be determined in all cases.

### Revisions process and other quality improvements

59 These published outputs include 2018 and 2017 preliminary data, 2016 revised data and 2015 final data.
The standard ABS revisions process has not yet been applied to the 2016 and 2017 reference years, which would previously have been subject to revisions in this publication. Causes of death revisions data will be released in early 2020.

60 For coroner-certified deaths, the specificity of cause of death coding can be affected by the length of time taken for the coronial process to be finalised and the coroner case closed. To improve the quality of ICD coding, all coroner-certified deaths registered after 1 January 2006 are subject to a revisions process.

61 Up to and including deaths registered in 2005, ABS Causes of Death processing was finalised at a point in time. At this point, not all coroners' cases had been investigated, the case closed and relevant information loaded into the National Coronial Information System (NCIS). The coronial process can take several years if an inquest is being held or complex investigations are being undertaken. In these instances, the cases remain open on the NCIS and relevant reports may be unavailable. Coroners' cases that have not been closed, or have not had all information made available, can impact on data quality as less specific ICD codes often need to be applied.

62 The revisions process to date has focused on cases that remain open on the NCIS database. ABS coders investigate and use additional information from police reports, toxicology reports, autopsy reports and coroners' findings to assign more specific causes of death. The use of this additional information occurs at either 12 or 24 months after initial processing, and the specificity of the assigned ICD-10 codes increases over time. As 12 or 24 months pass after initial processing, many coronial cases are closed, with the coroner having determined a cause of death and relevant reports becoming available. This allows ABS coders to assign a more specific cause of death.
### Deaths of Aboriginal and Torres Strait Islander persons

63 The Aboriginal and Torres Strait Islander status of a deceased person is captured through the death registration process. It can be noted on the Death Registration Form and/or the Medical Certificate of Cause of Death. However, it is recognised that not all such deaths are captured through these processes, leading to under-identification. While data is provided to the ABS for the Aboriginal and Torres Strait Islander status question for around 99% of all deaths, there are concerns regarding the accuracy of the data.

64 The ABS Death Registrations collection identifies a death as being of an Aboriginal and Torres Strait Islander person where the deceased is recorded as Aboriginal, Torres Strait Islander, or both on the Death Registration Form (DRF). The Indigenous status is also derived from the Medical Certificate of Cause of Death (MCCD) for South Australia, Western Australia, Tasmania, the Northern Territory and the Australian Capital Territory from 2007. From 2015 data onwards, the Queensland Registry of Births, Deaths and Marriages also used MCCD information to derive Indigenous status. For New South Wales and Victoria, the Indigenous status of the deceased is derived from the DRF only. If the Indigenous status reported in the DRF does not agree with that in the MCCD, an identification from either source that the deceased was an Aboriginal and/or Torres Strait Islander person is given preference over non-Indigenous.

65 There are several data collection forms on which people are asked to state whether they are of Aboriginal and Torres Strait Islander origin. Due to a number of factors, the results are not always consistent. The likelihood that a person will identify, or be identified, as an Aboriginal and Torres Strait Islander person on a specific form is known as their propensity to identify.
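The preference rule described in Explanatory Note 64 can be sketched as a small function. The function name and string values are illustrative only, not an ABS system interface:

```python
# Sketch of the Explanatory Note 64 preference rule: if either the Death
# Registration Form (DRF) or, where used, the Medical Certificate of Cause of
# Death (MCCD) identifies the deceased as Aboriginal and/or Torres Strait
# Islander, that identification is preferred over non-Indigenous.
# Names and status strings are hypothetical.

INDIGENOUS = {"Aboriginal", "Torres Strait Islander", "Both"}

def derive_indigenous_status(drf_status, mccd_status=None):
    """Return the derived Indigenous status from the DRF and (where used) MCCD."""
    sources = [s for s in (drf_status, mccd_status) if s is not None]
    if any(s in INDIGENOUS for s in sources):
        return "Indigenous"                 # preference over non-Indigenous
    if all(s == "Not stated" for s in sources):
        return "Not stated"
    return "Non-Indigenous"

print(derive_indigenous_status("Non-Indigenous", "Aboriginal"))  # Indigenous
```

This mirrors why the two-source jurisdictions can capture identifications that a single form misses.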
66 Propensity to identify as an Aboriginal and Torres Strait Islander person is determined by a range of factors, including:
• how the information is collected (e.g. census, survey, or administrative data);
• who provides the information (e.g. the person in question, a relative, a health professional, or an official);
• the perception of why the information is required, and how it will be used;
• educational programs about identifying as an Aboriginal and Torres Strait Islander person; and
• cultural aspects and feelings associated with identifying as an Aboriginal and Torres Strait Islander Australian.

67 In addition to those deaths where the deceased is identified as an Aboriginal and Torres Strait Islander person, a number of deaths occur each year for which the Indigenous status is not stated on the death registration form. In 2018, there were 1,062 deaths registered in Australia for whom Indigenous status was not stated, representing 0.7% of all deaths registered, a slight increase from 2017 (0.6%). This difference is largely driven by a higher number of deaths with a not stated Indigenous status registered in New South Wales (from 600 in 2017 to 731 in 2018) and Victoria (from 144 in 2017 to 173 in 2018). All other states experienced a decrease in deaths where Indigenous status was not stated, representing an improvement in the dataset. See Explanatory Notes 63-66 for further details.

68 Data presented in this publication may therefore underestimate the level of Aboriginal and Torres Strait Islander deaths and mortality in Australia. Caution should be exercised when interpreting data for Aboriginal and Torres Strait Islander Australians presented in this publication, especially with regard to year-to-year changes.

69 Information on causes of death relating to Aboriginal and Torres Strait Islander persons is included in articles throughout this publication.
Data cube 12 also provides information on causes of death for Aboriginal and Torres Strait Islander Australians. In data cube 12, numbers and rates of death are reported by jurisdiction of usual residence for New South Wales, Queensland, South Australia, Western Australia and the Northern Territory only. Data for Victoria, Tasmania and the Australian Capital Territory have been excluded in line with national reporting guidelines. For information on issues with Aboriginal and Torres Strait Islander identification, see Explanatory Notes 63-75.

70 Individual state/territory disaggregations of deaths of Aboriginal and Torres Strait Islander Australians by WHO Leading Causes (see Explanatory Notes 38-41) for the 2018 reference year are presented for New South Wales, Queensland, Western Australia and the Northern Territory only. No data are presented for South Australia, due to the small number of deaths by WHO leading causes - most causes have a count of fewer than 20 deaths, which is too small for the production of robust Standardised Death Rates (SDRs). See Explanatory Notes 46-49 for further details.

71 In this publication, age-standardised and age-specific death rates for Aboriginal and Torres Strait Islander persons for the 2009-2018 reference years have been calculated using 2016-Census-based population estimates (projections and backcasts). Non-Indigenous estimates have been derived by subtracting the 2016-Census-based Aboriginal and Torres Strait Islander population estimates from the total 2016-Census-based estimated resident population (ERP). Rates calculated from population denominators derived from different Censuses may cause artificially large rate differences. Rate comparisons should not be made with previous publications for Aboriginal and Torres Strait Islander data. See Estimates and Projections, Aboriginal and Torres Strait Islander Australians (cat. no. 3238.0) for more information.
72 Coronial cases are more likely to be affected by a lag in registration time, especially those which are due to external causes, including suicide, homicide and drug-induced deaths. Due to small numbers, these lagged coroner-referred registrations can create large yearly variation in some causes of deaths of Aboriginal and Torres Strait Islander persons. Caution should be taken when undertaking year-to-year analysis.

73 The ABS undertakes significant work aimed at improving Aboriginal and Torres Strait Islander identification. The ABS is working closely with the state and territory registries of births, deaths and marriages through the National Civil Registration and Statistics Improvement Committee (NCRSIC) to progress towards improved identification in a nationally consistent way.

74 Quality studies conducted as part of the Census Data Enhancement (CDE) project have investigated the levels and consistency of Aboriginal and Torres Strait Islander identification between the 2011 Census and death registrations. See Information Paper: Death registrations to Census linkage project - Methodology and Quality Assessment, 2011-2012 (cat. no. 3302.0.55.004).

75 An assessment of various methods for adjusting incomplete Aboriginal and Torres Strait Islander death registration data for use in compiling Aboriginal and Torres Strait Islander life tables and life expectancy estimates is presented in Discussion Paper: Assessment of Methods for Developing Life Tables for Aboriginal and Torres Strait Islander Australians, 2006 (cat. no. 3302.0.55.002), released on 17 November 2008. Final tables based on feedback received from this discussion paper, using information from the Census Data Enhancement (CDE) study, can be found in Life Tables for Aboriginal and Torres Strait Islander Australians, 2010-2012 (cat. no. 3302.0.55.003).

### Deaths by type of certifier

76 For deaths in the 2018 reference year, 11.9% were certified by a coroner.
There are variations between jurisdictions in the proportion of deaths certified by a coroner, ranging from 8.2% of deaths registered in Queensland to 26.1% of deaths registered in the Northern Territory. The proportion of deaths certified by a coroner in 2018 is comparable to previous years.

### Issues to be considered when interpreting time-series and 2018 data

77 The 2018 publication follows the same release process as the 2017 publication. The 2017 publication was released six months earlier than usual, allowing for more timely access to mortality data in Australia. For further details on data considerations, see A more timely annual collection: changes to ABS processes (Technical Note), Causes of Death, Australia, 2015 (cat. no. 3303.0).

### Use of Iris as a new auto-coding system and implementation of updates to ICD-10

78 From the 2013 reference year onwards, the cause of death data presented in this publication was coded using the Iris coding software. This system replaced the Mortality Medical Data System (MMDS), which was used for coding cause of death data for the 1997-2012 reference years. Like MMDS, Iris is an automated coding system. Iris assigns ICD-10 codes to the diseases and conditions listed on the death certificate and then applies decision tables to select the underlying cause of death. Iris version 4.4.1 was used to code 2013-2017 deaths data. Iris version 5.4.0 was used to code 2018 data. For further details on the change to Iris coding software and associated impacts on data, please see the Technical Note ABS Implementation of the Iris Software: Understanding Coding and Process Improvements in Causes of Death, Australia, 2013 (cat. no. 3303.0) and the Technical Note Updates to Iris coding software: Implementing WHO updates and improvements in coding processes in Causes of Death, Australia, 2018 (cat. no. 3303.0).
79 Users analysing time-series or 2018 cause of death data should take into account a number of issues, as outlined below, which are unrelated to the implementation of Iris.

### Coding of perinatal deaths

80 For perinatal data output in the Causes of Death, Australia, 2013 publication, the ABS began a review of its method of coding perinatal deaths, which resulted in an interim change to how this data was output. One significant change was that neonatal deaths were not assigned an underlying cause of death when output in tables of all ages, as had previously occurred. Details of this change can be found in the Changes to Perinatal Death Coding Technical Note in Causes of Death, Australia, 2013 (cat. no. 3303.0). Further review and consultation has since been undertaken with the national and international coding community, and has resulted in the ABS applying a new method of coding perinatal deaths. The new method creates a sequence of causes on a Medical Certificate of Cause of Perinatal Death which allows for an underlying cause of death to be assigned to a neonatal death. This aligns the output for neonatal deaths with deaths of the general population, which are certified using the Medical Certificate of Cause of Death. The change in coding method reinstates the condition arising in the mother being assigned as an underlying cause of death. This method has been applied to the 2014-2018 data, and has also been applied retrospectively to the 2013 neonatal data that is output in tables of all ages in this publication, thus enabling a consistent time series. Please see the Changes to Perinatal Death Coding Technical Note in Causes of Death, Australia, 2014 (cat. no. 3303.0) for further details.

81 From the 2013 reference year onwards, process changes have led to a reduction in the number of both stillbirths and neonatal deaths where a 'main condition in mother' was recorded, compared to previous years.
This has led to a reduction in the number of records assigned within the code block P00-P04: Fetus and newborn affected by maternal factors and by complications of pregnancy, labour and delivery, as main condition in the mother. These changes will affect data output in the Perinatal data cube of this publication only.

82 Doctor-certified neonatal deaths with no cause of death information are coded to Conditions originating in the perinatal period, unspecified (P969). As these deaths have been certified by a doctor, the assumption is made that the neonate died of natural causes. Where a neonatal death is referred to a coroner, but no cause of death information is available, these deaths are coded to Other ill-defined and unspecified causes of mortality (R99). As a reportable death, it cannot be determined whether the neonate died of natural or external causes, in the absence of further information.

83 The count of fetal deaths in scope for the World Health Organization (WHO) definition of perinatal deaths differs from those previously published for 2012 and 2015. This is due to an enhancement to birth weight and gestation information, which resulted in some deaths no longer meeting the World Health Organization definition of a fetal death (that is, a gestational age of at least 22 weeks or weighing at least 500 grams). For 2012, there are two fewer fetal deaths than previously published (1 male and 1 female). For 2015, there are 38 fewer fetal deaths than previously published (18 males, 19 females, 1 death of undetermined sex). Some corresponding death rates have also been affected. Table 14.21 in the perinatal data cube presents fetal and neonatal data according to the WHO scope. No other tables in the perinatal deaths data cube are affected by these changes.
### Increased number of deaths, New South Wales

84 In September quarter 2011, the high number of death registrations in New South Wales was queried with the New South Wales Registry of Births, Deaths and Marriages. Information provided by the Registry indicates that these fluctuations may be the result of changes in processing rates. This may have contributed to the increase in the number of deaths registered in New South Wales in 2011. New South Wales deaths in 2011 (50,182) were 5.8% higher than in 2010 (47,453).

### Accident to watercraft causing drowning and submersion (V90)

85 The number of deaths attributable to Accident to watercraft causing drowning and submersion (V90) increased from 26 in 2010 to 75 in 2011. This increase is primarily due to deaths resulting from an incident in December 2010 when a boat collided with cliffs on Christmas Island. These deaths were registered with the Western Australian Registry of Births, Deaths and Marriages in January 2011, resulting in an increase in the number of deaths coded to V90 in Western Australia.

### Pneumonia, organism unspecified (J18)

86 As part of a collection-wide initiative by the ABS to improve specificity of cause of death coding in the 2008 and 2009 reference years, doctor-certified deaths due to Pneumonia, organism unspecified (J18) reduced substantially. This was a result of the ABS manually interrogating conditions located in Part 2 of the Medical Certificate of Cause of Death (MCCD), reallocating them to a more specific cause of death code.

87 In 2010 there was a shift in this pattern. The number of doctor-certified deaths assigned to J18 increased by 690 deaths, or 49.5%. The reason for the 2010 data movement was a more consistent use of coding software decision tables throughout both coding and quality assurance processes. These decision tables provide clear rules for when Pneumonia can be selected as an underlying cause of death, in relation to the information listed in Part 2 of the MCCD.
88 The 2010 increase represented a return to counts observed prior to 2008. In 2007, 2,293 doctor-certified deaths were assigned to J18; therefore the 2010 count for this cause of death (2,085) is considered a return to the trend which existed prior to the coding of 2008 and 2009 data. The data from 2011 onwards has been consistent with this trend.

### Transport accidents (V01-V79, Y32)

89 There were 1,273 deaths attributed to road crashes (V01-V79, V892, X82, Y32) in 2018. Of these, 37 were of suicidal intent (X82) and there were a further 16 where the intent could not be determined (Y32). When making comparisons between road deaths from the ABS Causes of Death collection and road deaths from other sources, the scope and coverage rules applying to each collection should be considered. It should be noted that the number of road-traffic-related deaths attributed to transport accidents for 2018 is expected to change as data is subject to the revisions process. See Explanatory Notes 59-62 and the Causes of Death Revisions, 2015 Final Data (Technical Note) in Causes of Death, Australia, 2017 for further details.

### Assault (X85-Y09, Y87.1)

90 The number of deaths recorded as Assault (X85-Y09, Y87.1), i.e. murder, manslaughter and their sequelae, published in the ABS Causes of Death publication differs from those published by the ABS in Recorded Crime - Victims, Australia, 2017 (cat. no. 4510.0). Reasons for the different counts include differences in scope and coverage between the two collections, as well as legal proceedings that are pending finalisation. It is important to note that the number of deaths attributed to assault for 2018 is expected to change as data is subject to the revisions process. See Explanatory Notes 59-62 and the Causes of Death Revisions, 2015 Final Data (Technical Note) and 2016 Revised Data (Technical Note) in Causes of Death, Australia, 2017 (cat. no. 3303.0).
### Intentional self-harm (suicide) (X60-X84, Y87.0)

91 The number of deaths attributed to intentional self-harm for 2018 is expected to increase as data is subject to the revisions process. For further information, see Explanatory Notes 59-62 and the Causes of Death Revisions, 2015 Final Data (Technical Note) and 2016 Revised Data (Technical Note) in Causes of Death, Australia, 2017 (cat. no. 3303.0).

92 From 2006 onwards, the ABS implemented a revisions process for coroner-certified deaths (such as suicides), which has enabled additional suicide deaths to be identified beyond initial processing (see Explanatory Notes 54-62). It is recognised that in the four years prior to the implementation of the revisions process (2002-2005), suicide deaths may have been understated as the ABS began using the National Coronial Information System as the sole source for coding coroner-referred deaths.

93 In addition to the revisions process, new coding guidelines were applied for deaths registered from 1 January 2007. The new guidelines improve data quality by enabling deaths to be coded to suicide if evidence indicates the death was from intentional self-harm. Previously, coding rules required a coroner to determine a death as intentional self-harm for it to be coded to suicide. However, in some instances the coroner does not make a finding on intent. The reasons for this may include legislative or regulatory barriers around the requirement to determine intent, or sensitivity to the feelings, cultural practices and religious beliefs of the family of the deceased. Further, for some mechanisms of death it may be very difficult to determine suicidal intent (e.g. single vehicle incidents, drowning). In these cases the burden of proof required for the coroner to establish that the death was as a result of intentional self-harm may make a finding of suicide less likely.
94 Under the new coding guidelines, in addition to coroner-determined suicides, deaths may also be coded to suicide following further investigation of information on the NCIS. Further investigation of a death would be initiated when the mechanism of death indicates a possible suicide and the coroner does not specifically state the intent as accidental or homicidal. Information that would support a determination of suicide includes indications by the person that they intended to take their own life, the presence of a suicide note, or knowledge of previous suicide attempts. The processes for coding open and closed coroner cases are illustrated in the below diagrams (open/closed case coding decision trees).

95 Over time, the NCIS has worked with jurisdictions to improve the timeliness and completeness of information flowing from the coronial systems to the NCIS database. These improvements lead to changes in the information available to ABS coding staff. It is therefore important that data users are aware of any significant improvements in the management of coronial data to enable better interpretation of data within, and between, reference periods.

96 In 2012, the implementation of JusticeLink in the NSW coronial system significantly changed how information is exchanged between the NSW coroners courts and the NCIS. This system enables nightly uploads of all new information to the NCIS, and as a result information pertaining to NSW coronial cases is available earlier in the investigation process and the information is more complete for the purposes of coding causes of death.

97 There is evidence that the system change in NSW has improved the quality of preliminary coding in relation to deaths due to intentional self-harm.
There has been an increase in the number of preliminary intentional self-harm deaths registered in NSW when comparing 2012-2017 counts (708, 694, 798, 819, 799 and 881, respectively) with those of 2011 (568), coupled with fewer cases of deaths of undetermined intent (Y10-Y34).

98 Coronial cases are more likely to be affected by a lag in registration time, especially those which are due to external causes, including suicide, homicide and drug-related deaths. Due to small numbers, these lagged coroner-referred registrations can create large yearly variation in some causes of deaths of Aboriginal and Torres Strait Islander persons. Caution should be taken when undertaking year-to-year analysis.

99 More broadly, change in administrative systems highlights how various factors (including administrative and system changes, certification practices, classification updates or coding rule changes) can impact on the mortality dataset. Data users should note this particular change and be cautious when making comparisons between reference periods. The change does not explain away differences between years, but is a factor to consider. It should also be noted as a factor that may influence the magnitude of any increases in suicide numbers as revisions are applied.

100 The two flow charts below highlight the guidelines used by the ABS when coding a death to intentional self-harm for open and closed coroner cases, where the intent status at the time of coding is neither intentional self-harm nor assault. In these cases, the ABS considers additional information available on NCIS, such as the mechanism and other available data (e.g. the presence of a suicide note or previous suicide attempts) when determining the intent of such deaths for coding purposes.

### Coding of Open Cases on NCIS to Intentional Self-harm

The flowchart begins with an open case on the NCIS.
• Is there any cause information available?
• No: code to ICD-10 code R99.
• Yes: is there an external cause?
• No: code to ICD-10 codes A00-Q99.
• Yes: does the record have an initial intent status of intentional self-harm or assault?
• Yes: code to the relevant intentional self-harm code (X60-X84, Y87.0) or assault code (X85-Y09, Y87.1).
• No: does the mechanism indicate a possible suicide (e.g. death due to hanging, falling from a man-made or natural structure, a firearm, a sharp or blunt object, or carbon monoxide poisoning due to exhaust fumes)?
• Yes: coders assess available data such as: mention of intent to self-inflict or self-harm; wording such as 'there is no evidence to suggest this death was accidental or suspicious'; and mention of a suicide note, previous suicide attempts or a history of mental illness in the police and pathology reports. Is there sufficient evidence to indicate the death was a suicide?
• Yes: code to the relevant intentional self-harm code (X60-X84, Y87.0).
• No: does the record have an initial intent status of accident?
• Yes: code the mechanism to the relevant accident code (V01-X59, Y85, Y86).
• No: code to the relevant undetermined intent code (Y10-Y34, Y87.2).

### Registration of outstanding deaths, Queensland

101 In November 2010, the Queensland Registrar of Births, Deaths and Marriages advised the ABS of an outstanding deaths registration initiative undertaken by the registry. This initiative resulted in the November 2010 registration of 374 previously unregistered deaths which occurred between 1992 and 2006 (including a few for which a date of death was unknown). Of these, around three-quarters (284) were deaths of Aboriginal and Torres Strait Islander Australians. A data adjustment is made for tables which include Aboriginal and Torres Strait Islander data for Queensland for 2010.
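The open-case decision flow in Explanatory Note 100 can also be sketched as a function. The parameter names and the returned code ranges are illustrative only, not an ABS system interface:

```python
# Sketch of the open-case coding decision flow (Explanatory Note 100) for
# cases on the NCIS with no final coroner finding. Parameter names and the
# returned ICD-10 code ranges are illustrative, not an ABS implementation.

def code_open_case(has_cause_info, is_external, initial_intent,
                   mechanism_suggests_suicide, evidence_supports_suicide):
    if not has_cause_info:
        return "R99"                               # ill-defined/unspecified
    if not is_external:
        return "A00-Q99"                           # natural-cause chapters
    if initial_intent in ("intentional self-harm", "assault"):
        return ("X60-X84, Y87.0" if initial_intent == "intentional self-harm"
                else "X85-Y09, Y87.1")
    if mechanism_suggests_suicide and evidence_supports_suicide:
        return "X60-X84, Y87.0"                    # coded to intentional self-harm
    if initial_intent == "accident":
        return "V01-X59, Y85, Y86"                 # accident codes
    return "Y10-Y34, Y87.2"                        # undetermined intent

# e.g. an open case of hanging with a suicide note but no coroner finding yet:
print(code_open_case(True, True, None, True, True))  # X60-X84, Y87.0
```

The ordering of the checks mirrors the flowchart: cause information first, then external cause, then any initial intent finding, then the mechanism-and-evidence assessment.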
For further information refer to Technical Notes, Registration of Outstanding Deaths, Queensland, 2010 in Deaths, Australia, 2010 (cat. no. 3302.0) and Retrospective Deaths by Causes of Death, Queensland, 2010, in Causes of Death, Australia, 2010 (cat. no. 3303.0).

### Births data

102 See the 'Data Used in Calculating Death Rates' Appendix for details of the number of live births registered which have been used to calculate the fetal, neonatal and perinatal death rates shown in this publication. This Appendix also provides data on fetal deaths used in the calculation of fetal and perinatal death rates. These also enable further rates to be calculated.

103 In 2016, the NSW Registry changed the 'proof of identity' requirements for parents registering a new birth. This led to delays in registration of births for 2016 and 2017. The ABS has been working with the NSW Registry to reduce the birth registration lag. The recent launch of online birth registration in 2018 by the NSW Registry appears to be improving birth registration time frames for both Aboriginal and Torres Strait Islander and non-Indigenous Australians.

104 In 2016 and 2017 there were lower than expected registration counts for New South Wales. The ABS worked with the NSW Registry of Births, Deaths and Marriages (NSW RBDM) to investigate these counts, highlighting that changes to identity requirements in 2016 had prevented some registrations from being finalised. The NSW RBDM worked with parents to finalise these registrations, enabling many to be included in 2018 counts. Other initiatives also contributed to the higher count of births in NSW in 2018, including the implementation of an online birth registration system and a campaign aimed at increasing registrations among Aboriginal and Torres Strait Islander parents.
105 In 2018, the Northern Territory Registry of Births, Deaths and Marriages identified a processing issue that led to delays in completing the registration of some births that occurred in previous years. These births have since been registered, resulting in 355 additional births being included in 2018 data, the majority of which (339) were of Aboriginal and Torres Strait Islander children. Care should be taken when interpreting changes in birth counts and fertility rates for the Northern Territory in recent years.

### Use of multiple cause of death data

106 Multiple causes of death include all causes and conditions reported on the death certificate (i.e. both underlying and associated causes; see the Glossary for further details). As all entries on the death certificate are taken into account, multiple cause of death statistics are valuable in recognising the impact of conditions and diseases which are less likely to be an underlying cause, highlighting relationships between concurrent disease processes, and giving an indication of injuries which occur as a result of specific external events. These features of multiple cause of death data provide a more in-depth picture of mortality in Australia.

107 When analysing data on multiple causes of death, data can be presented in two ways: by counts of deaths or by counts of mentions. When analysis is conducted by counts of deaths, the figures describe the number of people who have died with a particular disease or disorder. Multiple cause of death data derived from counts of mentions is the total number of incidences of a particular disease or disorder on the death certificate. For example, an individual may have had Breast cancer (C50) and then developed Secondary lung cancer (C78.0). This individual would be counted once if counts were by the number of deaths from cancer, but twice if the counts were by the number of mentions of cancer.
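The counts-versus-mentions distinction in Explanatory Note 107 can be illustrated with the breast cancer / secondary lung cancer example. The record layout is hypothetical:

```python
# Illustration of counts of deaths vs counts of mentions (Explanatory Note
# 107). The record structure below is hypothetical, not an ABS data format.

records = [
    {"id": 1, "causes": ["C50", "C78.0"]},   # breast cancer + secondary lung cancer
    {"id": 2, "causes": ["I21"]},            # acute myocardial infarction
]

def is_cancer(code):
    # ICD-10 neoplasm codes used here all begin with "C" (a simplification)
    return code.startswith("C")

# Count of deaths: each person counted once if any cancer code appears.
deaths_with_cancer = sum(any(is_cancer(c) for c in r["causes"]) for r in records)

# Count of mentions: every cancer code on every certificate is counted.
cancer_mentions = sum(is_cancer(c) for r in records for c in r["causes"])

print(deaths_with_cancer, cancer_mentions)   # 1 2
```

The first individual contributes one death but two mentions, matching the worked example in the note.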
Care should be taken to differentiate between counts and mentions when analysing multiple cause of death data.

108 Changes in patterns of mortality are studied by policy makers and researchers to improve health outcomes for all Australians. However, changes in patterns of mortality can occur for many reasons. Changes can reflect a real increase or decrease in the prevalence of a disease or disorder, or a change in medical treatment. Changes in mortality data can also result from administrative processes, for example, International Classification of Diseases (ICD) coding classification changes and updates, and differences in how deaths are certified. Analysis of multiple cause of death data can give a deeper understanding of how the complete dataset may be affected by both real and administrative changes. For example, in 2009, the World Health Organization (WHO) recommended introducing code J09 (Influenza due to certain identified influenza virus) to the ICD-10 in response to the worldwide epidemics of swine flu and avian flu. There were 98 people who died as a direct consequence of contracting these strains of the flu across 2009 and 2010. In addition, there were 51 people who had this flu when they died and for whom this would have been a complicating factor. Additional health risk factors may also be identified: when swine or avian flu was the underlying cause of death, multiple cause data show obesity and respiratory problems as common associated causes. In this way, multiple cause data provide policy makers and researchers with insight beyond the underlying cause of death.

### Confidentialisation of data

109 Data cells with small values have been randomly assigned to protect confidentiality. As a result, some totals will not equal the sum of their components. Cells with 0 values have not been affected by confidentialisation.
### Effects of rounding

110 Where figures have been rounded, discrepancies may occur between totals and sums of the component items.

### Emerging issues

111 The Victorian Registry of Births, Deaths and Marriages (RBDM) implemented a new registration system in February 2019. As part of this system implementation, certain policies and procedures have changed within the registry. Of note, the Victorian RBDM has changed its procedures regarding the registration of coroner-referred deaths. Previously, coroner-certified deaths were not submitted to the ABS until the case was finalised in the Victorian Coroners Court. From 2019, interim registrations (open cases) have also been submitted to the ABS. This procedural change has resulted in an additional delay to registrations in 2018 and previous years, but is expected to lead to an increased number of coroner-referred registrations in 2019.

### ABS products

112 ABS published outputs are available free of charge from the ABS website. Click on 'Statistics' to gain access to the full range of ABS statistical and reference information. For details on products scheduled for release in the coming week, click on the Future Releases link on the ABS homepage.

## Appendix - data used in calculating death rates

#### Data input

The following tables contain data used in calculating the various rates referred to in this publication. Table A1.1 presents Estimated Resident Population (ERP) as at 30 June 2018. These data have been used to calculate Standardised Death Rates, Age-specific death rates and Years of Potential Life Lost for 2018 data. These data were released in Australian Demographic Statistics, Jun 2018 (cat. no. 3101.0), on 20 December 2018. The rates produced for Aboriginal and Torres Strait Islander persons in this publication are based on estimates and projections.
Aboriginal and Torres Strait Islander population data are based on Series B population projections released in Estimates and Projections, Aboriginal and Torres Strait Islander Australians, 2006 to 2031 (cat. no. 3238.0), which include backcast estimates of the Aboriginal and Torres Strait Islander and non-Indigenous population for the period 30 June 2006 to 30 June 2016. These estimates have been derived from 2016 Census data. When comparison rates are produced for non-Indigenous persons, the denominator is derived by subtracting the Aboriginal and Torres Strait Islander population estimates/projections from the relevant total persons ERP. The rebased population is larger than the previous 2011-base population, and as a result the size and structure of the population by Indigenous status differ from those previously published. Given this difference in population estimates, caution should be used when interpreting data and comparing rates from previous publications based on 2011 estimates. Such figures have a degree of uncertainty and should be used with caution, particularly as the time from the base year of the projection series increases. See Explanatory Note 49 for further information.
#### A1.1 Estimated resident population, by age and sex: 30 June 2018

| Age group | Males | Females | Persons |
| --- | --- | --- | --- |
| Under 1 | 161,159 | 152,410 | 313,569 |
| 1-4 | 651,696 | 616,951 | 1,268,647 |
| 5-9 | 823,368 | 781,042 | 1,604,410 |
| 10-14 | 779,124 | 736,499 | 1,515,623 |
| 15-19 | 765,092 | 725,774 | 1,490,866 |
| 20-24 | 890,778 | 849,259 | 1,740,037 |
| 25-29 | 941,167 | 936,502 | 1,877,669 |
| 30-34 | 921,438 | 941,031 | 1,862,469 |
| 35-39 | 857,764 | 864,643 | 1,722,407 |
| 40-44 | 793,368 | 800,496 | 1,593,864 |
| 45-49 | 818,607 | 851,586 | 1,670,193 |
| 50-54 | 749,281 | 779,585 | 1,528,866 |
| 55-59 | 749,919 | 779,368 | 1,529,287 |
| 60-64 | 661,454 | 697,987 | 1,359,441 |
| 65-69 | 590,074 | 617,054 | 1,207,128 |
| 70-74 | 500,070 | 517,967 | 1,018,037 |
| 75-79 | 333,768 | 366,717 | 700,485 |
| 80-84 | 218,486 | 267,200 | 485,686 |
| 85-89 | 126,256 | 182,810 | 309,066 |
| 90-94 | 52,336 | 97,426 | 149,762 |
| 95 and over | 12,693 | 32,164 | 44,857 |
| All ages | 12,397,898 | 12,594,471 | 24,992,369 |

Table A1.2 presents the number of live births for Australia for 2009 to 2018. These data have been used in calculating infant death rates - the number of deaths of children under one year of age per 1,000 live births in the same period. Data for 2009 to 2017 were released in Births, Australia, 2017 (cat. no. 3301.0). At the time of this publication's release, a summary of births data for 2018 is presented in Deaths, Australia, 2018 (cat. no. 3302.0).

#### A1.2 Live births registered, Australia: 2009, 2014-2018

| Year | Males | Females | Persons |
| --- | --- | --- | --- |
| 2009 | 154,875 | 146,378 | 301,253 |
| 2014 | 153,592 | 146,105 | 299,697 |
| 2015 | 157,088 | 148,289 | 305,377 |
| 2016 | 159,537 | 151,567 | 311,104 |
| 2017 | 159,221 | 149,921 | 309,142 |
| 2018 | 162,088 | 153,059 | 315,147 |

#### Perinatal death rate

For comparison and measuring purposes, perinatal deaths in this publication have also been expressed as rates. These rates are defined as follows:

- for fetal deaths and total perinatal deaths, the rates represent the number of deaths per 1,000 all births, which comprises live births and fetal deaths combined (where gestation is at least 20 weeks or birth weight is at least 400 grams).
- for neonatal deaths, the rates represent the number of deaths per 1,000 live births.
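As a worked example (a sketch only, using the published 2018 persons counts from Tables A1.2 and A1.4), the national fetal death rate follows directly from the definitions above:

```python
# 2018 persons counts (20 weeks' gestation or 400 grams birth weight definition).
live_births = 315_147    # Table A1.2 / A1.4
fetal_deaths = 1_682     # Table A1.4, stillbirths, persons
all_births = live_births + fetal_deaths  # 316,829: live births plus fetal deaths

# Fetal and total perinatal death rates are expressed per 1,000 all births;
# neonatal (and infant) death rates are expressed per 1,000 live births.
fetal_death_rate = fetal_deaths / all_births * 1000
print(f"{fetal_death_rate:.1f} fetal deaths per 1,000 all births")  # 5.3
```

The same arithmetic, with the appropriate denominator, yields the neonatal and total perinatal rates.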
#### 20 weeks' gestation or 400 grams birth weight

The following tables contain births data used in calculating perinatal death rates. Tables A1.3 and A1.4 are used to calculate perinatal death rates based on the 20 weeks' gestation or 400 grams birth weight definition for fetal deaths. In this publication, this definition has been applied to all 2009-2018 reference year data, with the exception of one table in the Perinatal data cube, which applies the World Health Organisation definition of a perinatal death (see details further below).

#### A1.3 All births(a), by sex of child(b) and state or territory of usual residence of mother

| Year | Measure | NSW | Vic. | Qld | SA | WA | Tas. | NT | ACT | Aust.(d) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2018 | Live births, males | 55,216 | 40,354 | 31,885 | 9,828 | 17,113 | 2,855 | 2,085 | 2,732 | 162,088 |
| 2018 | Live births, females | 52,127 | 38,134 | 30,046 | 9,285 | 16,144 | 2,692 | 1,965 | 2,642 | 153,059 |
| 2018 | Live births, persons | 107,343 | 78,488 | 61,931 | 19,113 | 33,257 | 5,547 | 4,050 | 5,374 | 315,147 |
| 2018 | Stillbirths(c), males | 277 | 247 | 167 | 25 | 109 | 20 | 23 | 15 | 883 |
| 2018 | Stillbirths(c), females | 238 | 188 | 162 | 38 | 99 | 20 | 18 | 15 | 778 |
| 2018 | Stillbirths(c), persons(b) | 518 | 442 | 332 | 64 | 209 | 40 | 45 | 32 | 1,682 |
| 2018 | Total, males | 55,493 | 40,601 | 32,052 | 9,853 | 17,222 | 2,875 | 2,108 | 2,747 | 162,971 |
| 2018 | Total, females | 52,365 | 38,322 | 30,208 | 9,323 | 16,243 | 2,712 | 1,983 | 2,657 | 153,837 |
| 2018 | Total, persons(b) | 107,861 | 78,930 | 62,263 | 19,177 | 33,466 | 5,587 | 4,095 | 5,406 | 316,829 |
| 2017 | Live births, males | 49,536 | 42,308 | 31,720 | 9,867 | 17,743 | 2,843 | 1,981 | 3,208 | 159,221 |
| 2017 | Live births, females | 47,055 | 39,786 | 29,438 | 9,205 | 16,755 | 2,767 | 1,901 | 2,999 | 149,921 |
| 2017 | Live births, persons | 96,591 | 82,094 | 61,158 | 19,072 | 34,498 | 5,610 | 3,882 | 6,207 | 309,142 |
| 2017 | Stillbirths(c), males | 249 | 247 | 212 | 27 | 131 | 16 | 16 | 18 | 916 |
| 2017 | Stillbirths(c), females | 237 | 215 | 176 | 33 | 105 | 18 | 18 | 20 | 822 |
| 2017 | Stillbirths(c), persons(b) | 491 | 471 | 390 | 60 | 236 | 38 | 34 | 40 | 1,760 |
| 2017 | Total, males | 49,785 | 42,555 | 31,932 | 9,894 | 17,874 | 2,859 | 1,997 | 3,226 | 160,137 |
| 2017 | Total, females | 47,292 | 40,001 | 29,614 | 9,238 | 16,860 | 2,785 | 1,919 | 3,019 | 150,743 |
| 2017 | Total, persons(b) | 97,082 | 82,565 | 61,548 | 19,132 | 34,734 | 5,648 | 3,916 | 6,247 | 310,902 |
| 2016 | Live births, males | 49,220 | 42,438 | 31,495 | 10,240 | 18,396 | 3,052 | 2,033 | 2,647 | 159,537 |
| 2016 | Live births, females | 46,863 | 40,454 | 30,346 | 9,532 | 17,033 | 2,916 | 1,894 | 2,505 | 151,567 |
| 2016 | Live births, persons | 96,083 | 82,892 | 61,841 | 19,772 | 35,429 | 5,968 | 3,927 | 5,152 | 311,104 |
| 2016 | Stillbirths(c), males | 237 | 249 | 212 | 30 | 130 | 29 | 18 | 11 | 916 |
| 2016 | Stillbirths(c), females | 225 | 204 | 190 | 38 | 96 | 20 | 9 | 11 | 793 |
| 2016 | Stillbirths(c), persons(b) | 465 | 460 | 406 | 68 | 226 | 49 | 28 | 22 | 1,724 |
| 2016 | Total, males | 49,457 | 42,687 | 31,707 | 10,270 | 18,526 | 3,081 | 2,051 | 2,658 | 160,453 |
| 2016 | Total, females | 47,088 | 40,658 | 30,536 | 9,570 | 17,129 | 2,936 | 1,903 | 2,516 | 152,360 |
| 2016 | Total, persons(b) | 96,548 | 83,352 | 62,247 | 19,840 | 35,655 | 6,017 | 3,955 | 5,174 | 312,828 |
| 2015 | Live births, males | 51,342 | 37,832 | 31,850 | 10,018 | 18,076 | 2,932 | 2,109 | 2,910 | 157,088 |
| 2015 | Live births, females | 48,737 | 35,736 | 29,895 | 9,569 | 17,059 | 2,748 | 1,895 | 2,632 | 148,289 |
| 2015 | Live births, persons | 100,079 | 73,568 | 61,745 | 19,587 | 35,135 | 5,680 | 4,004 | 5,542 | 305,377 |
| 2015 | Stillbirths(c), males | 294 | 168 | 205 | 49 | 126 | 20 | 16 | 18 | 896 |
| 2015 | Stillbirths(c), females | 240 | 172 | 192 | 35 | 115 | 17 | 21 | 10 | 802 |
| 2015 | Stillbirths(c), persons(b) | 540 | 348 | 400 | 84 | 242 | 38 | 38 | 28 | 1,718 |
| 2015 | Total, males | 51,636 | 38,000 | 32,055 | 10,067 | 18,202 | 2,952 | 2,125 | 2,928 | 157,984 |
| 2015 | Total, females | 48,977 | 35,908 | 30,087 | 9,604 | 17,174 | 2,765 | 1,916 | 2,642 | 149,091 |
| 2015 | Total, persons(b) | 100,619 | 73,916 | 62,145 | 19,671 | 35,377 | 5,718 | 4,042 | 5,570 | 307,095 |
| 2014 | Live births, males | 46,689 | 38,057 | 32,292 | 10,370 | 18,184 | 3,029 | 2,072 | 2,883 | 153,592 |
| 2014 | Live births, females | 44,385 | 36,167 | 30,774 | 10,014 | 17,219 | 2,906 | 1,954 | 2,669 | 146,105 |
| 2014 | Live births, persons | 91,074 | 74,224 | 63,066 | 20,384 | 35,403 | 5,935 | 4,026 | 5,552 | 299,697 |
| 2014 | Stillbirths(c), males | 202 | 197 | 231 | 45 | 125 | 34 | 11 | 23 | 868 |
| 2014 | Stillbirths(c), females | 201 | 194 | 190 | 42 | 113 | 35 | 17 | 20 | 812 |
| 2014 | Stillbirths(c), persons(b) | 405 | 402 | 424 | 88 | 238 | 69 | 29 | 43 | 1,698 |
| 2014 | Total, males | 46,891 | 38,254 | 32,523 | 10,415 | 18,309 | 3,063 | 2,083 | 2,906 | 154,460 |
| 2014 | Total, females | 44,586 | 36,361 | 30,964 | 10,056 | 17,332 | 2,941 | 1,971 | 2,689 | 146,917 |
| 2014 | Total, persons(b) | 91,479 | 74,626 | 63,490 | 20,472 | 35,641 | 6,004 | 4,055 | 5,595 | 301,395 |
| 2013 | Live births, males | 51,875 | 38,056 | 32,759 | 10,435 | 17,674 | 3,035 | 2,025 | 2,831 | 158,706 |
| 2013 | Live births, females | 48,587 | 35,913 | 30,595 | 9,655 | 16,842 | 3,014 | 2,028 | 2,714 | 149,359 |
| 2013 | Live births, persons | 100,462 | 73,969 | 63,354 | 20,090 | 34,516 | 6,049 | 4,053 | 5,545 | 308,065 |
| 2013 | Stillbirths(c), males | 281 | 243 | 205 | 39 | 115 | 24 | 22 | 12 | 941 |
| 2013 | Stillbirths(c), females | 278 | 204 | 168 | 36 | 90 | 18 | 17 | 15 | 826 |
| 2013 | Stillbirths(c), persons(b) | 561 | 450 | 376 | 77 | 205 | 44 | 39 | 29 | 1,781 |
| 2013 | Total, males | 52,156 | 38,299 | 32,964 | 10,474 | 17,789 | 3,059 | 2,047 | 2,843 | 159,647 |
| 2013 | Total, females | 48,865 | 36,117 | 30,763 | 9,691 | 16,932 | 3,032 | 2,045 | 2,729 | 150,185 |
| 2013 | Total, persons(b) | 101,023 | 74,419 | 63,730 | 20,167 | 34,721 | 6,093 | 4,092 | 5,574 | 309,846 |
| 2012 | Live births, males | 50,641 | 39,656 | 32,876 | 10,517 | 17,254 | 3,144 | 2,078 | 2,803 | 158,988 |
| 2012 | Live births, females | 47,867 | 37,749 | 30,961 | 9,916 | 16,373 | 3,024 | 2,026 | 2,658 | 150,594 |
| 2012 | Live births, persons | 98,508 | 77,405 | 63,837 | 20,433 | 33,627 | 6,168 | 4,104 | 5,461 | 309,582 |
| 2012 | Stillbirths(c), males | 283 | 219 | 216 | 37 | 118 | 27 | 15 | 21 | 943 |
| 2012 | Stillbirths(c), females | 230 | 211 | 233 | 34 | 121 | 18 | 8 | 18 | 875 |
| 2012 | Stillbirths(c), persons(b) | 517 | 435 | 452 | 71 | 239 | 45 | 23 | 41 | 1,832 |
| 2012 | Total, males | 50,924 | 39,875 | 33,092 | 10,554 | 17,372 | 3,171 | 2,093 | 2,824 | 159,931 |
| 2012 | Total, females | 48,097 | 37,960 | 31,194 | 9,950 | 16,494 | 3,042 | 2,034 | 2,676 | 151,469 |
| 2012 | Total, persons(b) | 99,025 | 77,840 | 64,289 | 20,504 | 33,866 | 6,213 | 4,127 | 5,502 | 311,414 |
| 2011 | Live births, males | 50,986 | 36,900 | 32,237 | 10,146 | 16,557 | 3,405 | 2,039 | 2,708 | 154,996 |
| 2011 | Live births, females | 48,068 | 34,544 | 31,016 | 9,746 | 15,702 | 3,203 | 1,915 | 2,413 | 146,621 |
| 2011 | Live births, persons | 99,054 | 71,444 | 63,253 | 19,892 | 32,259 | 6,608 | 3,954 | 5,121 | 301,617 |
| 2011 | Stillbirths(c), males | 281 | 227 | 181 | 42 | 128 | 26 | 21 | 15 | 923 |
| 2011 | Stillbirths(c), females | 231 | 170 | 194 | 46 | 126 | 23 | 13 | 13 | 818 |
| 2011 | Stillbirths(c), persons(b) | 513 | 400 | 377 | 89 | 254 | 49 | 34 | 28 | 1,748 |
| 2011 | Total, males | 51,267 | 37,127 | 32,418 | 10,188 | 16,685 | 3,431 | 2,060 | 2,723 | 155,919 |
| 2011 | Total, females | 48,299 | 34,714 | 31,210 | 9,792 | 15,828 | 3,226 | 1,928 | 2,426 | 147,439 |
| 2011 | Total, persons(b) | 99,567 | 71,844 | 63,630 | 19,981 | 32,513 | 6,657 | 3,988 | 5,149 | 303,365 |
| 2010 | Live births, males | 51,943 | 36,141 | 33,031 | 10,397 | 16,063 | 3,317 | 2,026 | 2,662 | 155,591 |
| 2010 | Live births, females | 49,323 | 34,431 | 31,492 | 9,681 | 15,361 | 3,068 | 1,873 | 2,490 | 147,727 |
| 2010 | Live births, persons | 101,266 | 70,572 | 64,523 | 20,078 | 31,424 | 6,385 | 3,899 | 5,152 | 303,318 |
| 2010 | Stillbirths(c), males | 270 | 209 | 229 | 39 | 92 | 33 | 18 | 44 | 934 |
| 2010 | Stillbirths(c), females | 228 | 204 | 194 | 39 | 93 | 21 | 13 | 27 | 819 |
| 2010 | Stillbirths(c), persons(b) | 499 | 407 | 441 | 78 | 185 | 54 | 31 | 72 | 1,767 |
| 2010 | Total, males | 52,213 | 36,350 | 33,260 | 10,436 | 16,155 | 3,350 | 2,044 | 2,706 | 156,525 |
| 2010 | Total, females | 49,551 | 34,625 | 31,696 | 9,720 | 15,454 | 3,089 | 1,886 | 2,517 | 148,546 |
| 2010 | Total, persons(b) | 101,765 | 70,979 | 64,964 | 20,156 | 31,609 | 6,439 | 3,930 | 5,224 | 305,085 |
| 2009 | Live births, males | 50,411 | 36,284 | 34,086 | 10,233 | 15,896 | 3,392 | 1,991 | 2,574 | 154,875 |
| 2009 | Live births, females | 47,820 | 34,644 | 32,063 | 9,502 | 14,983 | 3,235 | 1,829 | 2,286 | 146,378 |
| 2009 | Live births, persons | 98,231 | 70,928 | 66,149 | 19,735 | 30,879 | 6,627 | 3,820 | 4,860 | 301,253 |
| 2009 | Stillbirths(c), males | 250 | 236 | 238 | 35 | 121 | 31 | 18 | 18 | 947 |
| 2009 | Stillbirths(c), females | 240 | 194 | 199 | 40 | 94 | 25 | 20 | 7 | 820 |
| 2009 | Stillbirths(c), persons(b) | 495 | 432 | 441 | 75 | 215 | 57 | 39 | 25 | 1,780 |
| 2009 | Total, males | 50,661 | 36,520 | 34,324 | 10,268 | 16,017 | 3,423 | 2,009 | 2,592 | 155,822 |
| 2009 | Total, females | 48,060 | 34,838 | 32,262 | 9,542 | 15,077 | 3,260 | 1,849 | 2,293 | 147,198 |
| 2009 | Total, persons(b) | 98,726 | 71,360 | 66,590 | 19,810 | 31,094 | 6,684 | 3,859 | 4,885 | 303,033 |

a. All births consists of all live births, plus all fetal deaths that conform to the 20 weeks' gestation or 400 grams birth weight definition.
b. The stillbirths count for 'Persons' includes those stillbirth deaths for which the sex could not be determined. The sum of male and female stillbirths may therefore not sum to the Persons total.
c. Includes those where it is unknown if heartbeat ceased before or after the delivery.
d. Includes Other Territories.

#### A1.4 All births(a), by sex(b)

| Year | Live births, males | Live births, females | Live births, persons | Stillbirths(c), males | Stillbirths(c), females | Stillbirths(c), persons(b) | Total, males | Total, females | Total, persons(b) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2018 | 162,088 | 153,059 | 315,147 | 883 | 778 | 1,682 | 162,971 | 153,837 | 316,829 |
| 2017 | 159,221 | 149,921 | 309,142 | 916 | 822 | 1,760 | 160,137 | 150,743 | 310,902 |
| 2016 | 159,537 | 151,567 | 311,104 | 916 | 793 | 1,724 | 160,453 | 152,360 | 312,828 |
| 2015 | 157,088 | 148,289 | 305,377 | 896 | 802 | 1,718 | 157,984 | 149,091 | 307,095 |
| 2014 | 153,592 | 146,105 | 299,697 | 868 | 812 | 1,698 | 154,460 | 146,917 | 301,395 |
| 2013 | 158,706 | 149,359 | 308,065 | 941 | 826 | 1,781 | 159,647 | 150,185 | 309,846 |
| 2012 | 158,988 | 150,594 | 309,582 | 943 | 875 | 1,832 | 159,931 | 151,469 | 311,414 |
| 2011 | 154,996 | 146,621 | 301,617 | 923 | 818 | 1,748 | 155,919 | 147,439 | 303,365 |
| 2010 | 155,591 | 147,727 | 303,318 | 934 | 819 | 1,767 | 156,525 | 148,546 | 305,085 |
| 2009 | 154,875 | 146,378 | 301,253 | 947 | 820 | 1,780 | 155,822 | 147,198 | 303,033 |

1. All births consists of all live births, plus all fetal deaths that conform to the 20 weeks' gestation or 400 grams birth weight definition.
2. The stillbirths count for 'Persons' includes those stillbirth deaths for which the sex could not be determined. The sum of male and female stillbirths may therefore not sum to the Persons total.
3. Includes those where it is unknown if heartbeat ceased before or after the delivery.

#### 22 weeks' gestation or 500 grams birth weight

Table A1.5 contains births data used in the calculation of perinatal death rates based on the WHO definition of all neonatal deaths and those fetal deaths of 22 weeks' gestation or 500 grams birth weight. A time series of perinatal death counts based on the WHO definition is presented in the Perinatal datacube.
#### A1.5 All births(a), by sex(b)(d)

| Year | Live births, males | Live births, females | Live births, persons | Stillbirths(c), males | Stillbirths(c), females | Stillbirths(c), persons(b) | Total, males | Total, females | Total, persons(b) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2018 | 162,088 | 153,059 | 315,147 | 753 | 665 | 1,433 | 162,841 | 153,724 | 316,580 |
| 2017 | 159,221 | 149,921 | 309,142 | 772 | 718 | 1,508 | 159,993 | 150,639 | 310,650 |
| 2016 | 159,537 | 151,567 | 311,104 | 767 | 675 | 1,451 | 160,304 | 152,242 | 312,555 |
| 2015 | 157,088 | 148,289 | 305,377 | 775 | 714 | 1,504 | 157,863 | 149,003 | 306,881 |
| 2014 | 153,592 | 146,105 | 299,697 | 759 | 738 | 1,513 | 154,351 | 146,843 | 301,210 |
| 2013 | 158,706 | 149,359 | 308,065 | 830 | 760 | 1,603 | 159,536 | 150,119 | 309,668 |
| 2012 | 158,988 | 150,594 | 309,582 | 784 | 718 | 1,514 | 159,772 | 151,312 | 311,096 |
| 2011 | 154,996 | 146,621 | 301,617 | 741 | 672 | 1,416 | 155,737 | 147,293 | 303,033 |
| 2010 | 155,591 | 147,727 | 303,318 | 797 | 721 | 1,524 | 156,388 | 148,448 | 304,842 |
| 2009 | 154,875 | 146,378 | 301,253 | 896 | 771 | 1,679 | 155,771 | 147,149 | 302,932 |

1. All births consists of all live births, plus all fetal deaths that conform to the 22 weeks' gestation or 500 grams birth weight definition.
2. The stillbirths count for 'Persons' includes those stillbirth deaths for which the sex could not be determined. The sum of male and female stillbirths may therefore not sum to the Persons total.
3. Includes those where it is unknown if heartbeat ceased before or after the delivery.
4. The count of fetal deaths in scope for the World Health Organisation (WHO) definition of perinatal deaths differs from those previously published for 2012 and 2015. This is due to an enhancement to birth weight and gestation information, which resulted in some deaths no longer meeting the World Health Organisation definition of a fetal death (that is, a gestational age of at least 22 weeks or weighing at least 500 grams). For 2012, there are two fewer fetal deaths than previously published (1 male and 1 female). For 2015, there are 38 fewer fetal deaths than previously published (18 males, 19 females, 1 death of undetermined sex). Some corresponding death rates have also been affected. No other tables in this data cube are affected by these changes.
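The effect of the two fetal-death definitions can be compared with a small sketch using the published 2018 persons counts from Tables A1.4 and A1.5; the narrower WHO scope (22 weeks or 500 grams) lowers both the stillbirth count and the resulting rate:

```python
# 2018 persons counts under each fetal-death definition.
definitions = {
    "20 weeks / 400 grams": {"stillbirths": 1_682, "all_births": 316_829},        # Table A1.4
    "22 weeks / 500 grams (WHO)": {"stillbirths": 1_433, "all_births": 316_580},  # Table A1.5
}

rates = {}
for name, counts in definitions.items():
    # Fetal death rates are expressed per 1,000 all births.
    rates[name] = counts["stillbirths"] / counts["all_births"] * 1000
    print(f"{name}: {rates[name]:.1f} per 1,000 all births")
```

This gives roughly 5.3 per 1,000 under the 20 weeks/400 grams definition and 4.5 per 1,000 under the WHO definition, which is why rates from the two scopes should not be compared directly.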
## Appendix - tabulation of selected causes of death

#### Introduction

There are standard ways of listing causes of death, and there are formal recommendations concerning lists for tabulation to assist international comparisons. The World Health Organization (WHO) provides a number of standard tabulation lists for the presentation of causes of death statistics that assist international comparability. WHO also recommends that, where international comparability is not required, lists should be designed to reflect local requirements. These special lists can be developed, for example, to monitor the progress of local health programmes.

#### Firearm deaths tabulation

Causes of death attributable to firearm mortality include ICD-10 codes: W32-W34, Accidental discharge of firearms; X72-X74, Intentional self-harm (suicide) by discharge of firearms; X93-X95, Assault (homicide) by discharge of firearms; Y22-Y24, Discharge of firearms, undetermined intent; and Y35.0, Legal intervention involving firearm discharge. Deaths from injury by firearms exclude deaths due to explosives and other causes indirectly related to firearms.

#### Drug-induced deaths tabulation

The data presented for drug-induced deaths in this publication are based upon a tabulation created by the United States Centers for Disease Control and Prevention (CDC).
Causes of death attributable to drug-induced mortality include ICD-10 codes: D52.1, Drug-induced folate deficiency anaemia; D59.0, Drug-induced haemolytic anaemia; D59.2, Drug-induced nonautoimmune haemolytic anaemia; D61.1, Drug-induced aplastic anaemia; D64.2, Secondary sideroblastic anaemia due to drugs and toxins; E06.4, Drug-induced thyroiditis; E16.0, Drug-induced hypoglycaemia without coma; E23.1, Drug-induced hypopituitarism; E24.2, Drug-induced Cushing’s syndrome; E66.1, Drug-induced obesity; F11.0-F11.5, Use of opioids causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F11.7-F11.9, Use of opioids causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F12.0-F12.5, Use of cannabis causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F12.7-F12.9, Use of cannabis causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F13.0-F13.5, Use of sedatives or hypnotics causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F13.7-F13.9, Use of sedatives or hypnotics causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F14.0-F14.5, Use of cocaine causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F14.7-F14.9, Use of cocaine causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F15.0-F15.5, Use of caffeine causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F15.7-F15.9, Use of caffeine causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F16.0-F16.5, Use of hallucinogens causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F16.7-F16.9, Use of hallucinogens causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F17.0, Use of tobacco causing intoxication; F17.3-F17.5, Use of tobacco causing withdrawal or psychosis; F17.7-F17.9, Use of tobacco causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F18.0-F18.5, Use of volatile solvents causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F18.7-F18.9, Use of volatile solvents causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders; F19.0-F19.5, Use of multiple drugs and other psychoactive substances causing intoxication, harmful use (abuse), dependence, withdrawal or psychosis; F19.7-F19.9, Use of multiple drugs and other psychoactive substances causing late onset psychosis, other mental and behavioural disorders and unspecified behavioural disorders;
G21.1, Other drug-induced secondary Parkinsonism; G24.0, Drug-induced dystonia; G25.1, Drug-induced tremor; G25.4, Drug-induced chorea; G25.6, Drug-induced tics and other tics of organic origin; G44.4, Drug-induced headache, not elsewhere classified; G62.0, Drug-induced polyneuropathy; G72.0, Drug-induced myopathy; I95.2, Hypotension due to drugs; J70.2, Acute drug-induced interstitial lung disorders; J70.3, Chronic drug-induced interstitial lung disorders; J70.4, Drug-induced interstitial lung disorder, unspecified; L10.5, Drug-induced pemphigus; L27.0, Generalized skin eruption due to drugs and medicaments; L27.1, Localized skin eruption due to drugs and medicaments; M10.2, Drug-induced gout; M32.0, Drug-induced systemic lupus erythematosus; M80.4, Drug-induced osteoporosis with pathological fracture; M81.4, Drug-induced osteoporosis; M83.5, Other drug-induced osteomalacia in adults; M87.1, Osteonecrosis due to drugs; R78.1, Finding of opiate drug in blood; R78.2, Finding of cocaine in blood; R78.3, Finding of hallucinogen in blood; R78.4, Finding of other drugs of addictive potential in blood; R78.5, Finding of psychotropic drug in blood; X40-X44, Accidental poisoning by and exposure to drugs, medicaments and biological substances; X60-X64, Intentional self-poisoning (suicide) by and exposure to drugs, medicaments and biological substances; X85, Assault (homicide) by drugs, medicaments and biological substances; and Y10-Y14, Poisoning by and exposure to drugs, medicaments and biological substances, undetermined intent. Drug-induced causes exclude accidents, homicides, and other causes indirectly related to drug use. Also excluded are newborn deaths associated with mother’s drug use. #### Poisoning by opioids tabulation The data presented for opioid-induced deaths in this publication is a modified version of the drug-induced deaths tabulation created by the United States Centers for Disease Control and Prevention (CDC). 
To capture opioid-induced deaths, the following poisoning codes present at the multiple cause of death level were used in combination with the CDC drug-induced underlying cause of death tabulation. Causes of death attributable to opioids include ICD-10 codes present at the multiple cause of death level: T40.0, Opium; T40.1, Heroin; T40.2, Other opioids (e.g. Codeine, Morphine); T40.4, Other synthetic narcotics (e.g. Pethidine); and T40.6, Other and unspecified narcotics.

#### Alcohol-induced deaths tabulation

Causes of death attributable to alcohol-induced mortality include ICD-10 codes: E24.4, Alcohol-induced pseudo-Cushing’s syndrome; F10, Mental and behavioural disorders due to alcohol use; G31.2, Degeneration of nervous system due to alcohol; G62.1, Alcoholic polyneuropathy; G72.1, Alcoholic myopathy; I42.6, Alcoholic cardiomyopathy; K29.2, Alcoholic gastritis; K70, Alcoholic liver disease; K85.2, Alcohol-induced acute pancreatitis; K86.0, Alcohol-induced chronic pancreatitis; R78.0, Finding of alcohol in blood; X45, Accidental poisoning by and exposure to alcohol; X65, Intentional self-poisoning by and exposure to alcohol; and Y15, Poisoning by and exposure to alcohol, undetermined intent. Alcohol-induced causes exclude accidents, homicides, and other causes indirectly related to alcohol use. This category also excludes newborn deaths associated with maternal alcohol use.

1. Miniño AM, Heron MP, Murphy SL, Kochanek KD. Deaths: Final Data for 2004. National vital statistics reports; vol 55 no 19. Hyattsville, MD: National Center for Health Statistics. 2007.

#### Deaths from Non-Communicable Diseases (NCD) tabulation

Causes of death attributable to Non-Communicable Diseases include ICD-10 codes: C00-C97, D45-D46, D47.1, D47.3-D47.5, Cancers; I00-I99, Cardiovascular diseases; E10-E14, Diabetes; and J30-J98, Chronic lower respiratory diseases.

1. World Health Organization (WHO).
Non-Communicable Diseases Global Monitoring Framework: Indicator Definitions and Specifications.

Note: The WHO Cancer tabulation for NCDs includes only C00-C97. To be consistent with ABS Causes of Death reporting, additional cancer codes (D45-D46, D47.1, D47.3-D47.5) have been included in this publication when analysing cancer-related NCDs.

## Technical note - updates to Iris coding software: implementing WHO updates and improvements in coding processes

Since 2014 the national Causes of Death dataset has been coded using Iris, a software program which automates the assignment of codes from the International Classification of Diseases, 10th Revision (ICD-10) to death records, and assists in the identification of an underlying cause of death. Iris is developed and maintained by the German Institute of Medical Documentation and Information (DIMDI), which produces regular software updates and implements World Health Organization (WHO) updates to the ICD-10 that are embedded within the Iris system. In order to recognise and statistically represent changing health trends and advances in medical science research, specialist committees within WHO meet annually to review and recommend updates to the ICD-10. Updates can be minor, consisting of spelling updates or small updates to medical terminology within existing codes, or they can be major, consisting of reclassification of medical conditions to different codes, updates to coding rules, or the addition or deletion of codes. To maintain consistency in statistical outputs, major changes are only implemented on three-year cycles. For the coding of 2018 cause of death data the ABS implemented a new version of the Iris software (version 5.4.0), which incorporates a new underlying cause of death processing system called the Multicausal and Unicausal Selection Engine (MUSE).
Like previous versions of Iris, MUSE assigns codes to medical terms on the Medical Certificate of Cause of Death (MCCD) and applies WHO coding rules to appropriately code and modify multiple causes of death and select an underlying cause of death. This version of Iris also incorporates the most recent major updates to the ICD-10 (2016 coding year updates). The implementation of MUSE, alongside the updates to the ICD-10, brings Australian mortality data up to date with international best practice. The ABS has also implemented additional validation processes alongside MUSE to ensure maximum alignment with WHO guidelines and coding rules. Several factors need to be considered when interpreting time series data where administrative changes have been made to processing. There are generally four ways in which output can change:

1. A true change in disease or external event.
2. Administrative changes, such as changes to certification or events at point of registration.
3. Updates to the WHO ICD-10 classification, Volume 2 coding rules and application of decision tables.
4. Process changes, such as the implementation of new software or changes to local coding practice.

Understandably, the focus of health policy is on true changes in patterns of mortality. The ABS uses explanatory notes to highlight administrative and process changes to enable better interpretation of trends in data over time. This technical note provides an overview of software changes, WHO updates and local coding changes to assist users in interpreting changes in the 2018 dataset. The information focusses on factors influencing the selection of underlying causes of death, although the multiple cause dataset is also acknowledged as an integral tool in tracking changes over time.
### Chapter I, certain infectious and parasitic diseases (A00-B99)

Updates to WHO guidelines have resulted in changes to causes located in Chapter I, Certain infectious and parasitic diseases:

A40 Streptococcal sepsis - A41 Other sepsis: A common certification issue is the recording of an infection such as sepsis in Part 1 of a death certificate with no preceding cause, with chronic conditions listed on the same certificate in Part 2. Previously, when sepsis was certified in Part 1 of the MCCD and selected chronic conditions were reported in Part 2 of the MCCD, the selection rules did not provide a mechanism for the condition in Part 2 to be selected as the underlying cause. An update to the selection rules now allows more chronic conditions in Part 2, such as neoplasms coded to C00-C80, to be selected as the underlying cause when sepsis appears in Part 1. In particular, this has resulted in an increase in deaths assigned to Chapter II Neoplasms and a subsequent decrease in the number of deaths assigned to A40-A41 as an underlying cause.

A90 Dengue fever [classical dengue] and A91 Dengue haemorrhagic fever: These codes are no longer valid for causes of death outputs. All deaths previously assigned to A90 or A91 are now assigned to A97 Dengue.

A97 Dengue: This is a new code with multiple fourth-digit options for coding. A97 replaces deaths previously assigned to A90 and A91.

B94 Sequelae of other and unspecified infectious and parasitic diseases: Previously, if an infectious disease was reported with a duration of greater than one year, the sequelae code (B94) was assigned. The processing of time intervals as they relate to infectious diseases now means that B94 is only assigned when late or residual effects of the infection are reported. This has resulted in a decrease in deaths assigned to B94.

### Chapter II, neoplasms (C00-D48)

Updates to WHO guidelines and ABS coding practices have resulted in an increase in deaths assigned to Chapter II Neoplasms.
There are now more causal relationships between acute conditions, including Bacterial sepsis (A40-A41), and Malignant neoplasms (C00-C80). Previously, when sepsis was mentioned in Part 1 of the MCCD and a malignant neoplasm was mentioned in Part 2, the sepsis would commonly be assigned as the underlying cause of death. In accordance with updated decision tables, there are now additional relationships by which neoplasms falling within C00-C80 can be selected as the underlying cause of death when mentioned in Part 2. This has resulted in an increase in deaths assigned to Chapter II Neoplasms and a subsequent decrease in the number of deaths assigned to A40-A41 as an underlying cause.

### Chapter V, mental and behavioural disorders (F00-F99)

F05 Delirium, not induced by alcohol and other psychoactive substances: Improvements in coding practices have resulted in a decrease in deaths coded to F05. When F05 is mentioned on the MCCD, the decision tables do not provide a mechanism by which the causal condition of the delirium can be chosen as the underlying cause of death. The ABS mortality team has implemented improved coding and validation processes to ensure the causal condition of the delirium is now selected as the underlying cause. There is a notable decrease in deaths coded to delirium as an underlying cause of death as a result of this change.

### Chapter VI, diseases of the nervous system (G00-G99)

G23 Other degenerative diseases of basal ganglia: A new fourth-digit code, G233 Multiple system atrophy, cerebellar type [MSA-C], has been added to this category, which has resulted in an increase in deaths assigned to G23. Most deaths assigned to G233 were previously coded to G903 Multiple system degeneration.

G83 Other paralytic syndromes: A new fourth-digit code, G835 Locked-in syndrome, has been added to this category. There was 1 death assigned to G835 in 2018.
G90 Disorders of autonomic nervous system: The fourth-digit code G903 Multiple system degeneration has been removed as a valid code for cause of death coding. This has resulted in a decrease in deaths coded to G90. Causes previously coded to G903 are now coded to G238 Other specified degenerative diseases of basal ganglia.

### Chapter IX, diseases of the circulatory system (I00-I99)

I67 Other cerebrovascular diseases and I69 Sequelae of cerebrovascular disease: Changes in coding processes have been made in relation to I67 and I69. Previously, if Chronic cerebrovascular disease was reported with a duration of greater than one year, the sequelae code (I69) was assigned. Further, a note under I69 dictates that Chronic cerebrovascular disease is to be coded to I60-I67. Changes have been implemented so that the sequelae codes are only used if late or residual effects of the disease are reported. This change has resulted in an increase in deaths assigned to I67 and a subsequent decrease in deaths assigned to I69.

### Chapter XII, diseases of the skin and subcutaneous tissue (L00-L99)

L98 Other disorders of skin and subcutaneous tissue, not elsewhere classified: A new fourth-digit code, L987 Excessive and redundant skin and subcutaneous tissue, has been added to this category. No conditions were assigned to this code in 2018.
### Chapter XIII, diseases of the musculoskeletal system and connective tissue (M00-M99)

M19 Other arthrosis: A relationship between J18 Pneumonia, organism unspecified and M19 Other arthrosis was removed in 2018. When J18 was reported in Part 1 of the MCCD and M19 in Part 2, previous relationships allowed M19 to be chosen as the underlying cause of death. With this relationship removed, M19 is no longer assigned as the underlying cause in these cases. This has resulted in a decrease in deaths assigned to M19.

### Chapter XV, pregnancy, childbirth and puerperium (O00-O99)

O94 Sequelae of complication of pregnancy, childbirth and the puerperium: This is a new code in 2016. O94 is used for morbidity coding only and therefore has not affected output in this publication.

### Chapter XVI, certain conditions originating in the perinatal period (P00-P99)

P91 Other disturbances of cerebral status of newborn: This category has a new fourth-digit code, P917 Acquired hydrocephalus of newborn. No conditions were assigned to this code in 2018.

### Chapter XX, external causes of morbidity and mortality (V01-Y98)

W26 Contact with other sharp object(s): There are now multiple fourth-digit options for W26. Previously, when a death occurred as a result of W26, there was no option to further specify the type of sharp object involved. ABS mortality coders are now required to choose from the following fourth-digit options to further specify the death:

• W260 Contact with knife, sword or dagger
• W268 Contact with other sharp object(s), not elsewhere classified
• W269 Contact with unspecified sharp object(s)

As a result, deaths previously assigned to W26 will now be assigned a fourth digit during coding. No conditions were assigned to W260-W269 in 2018.

## Technical note - causes of death revisions, 2016 final data

### Overview

1  Deaths that are referred to a coroner can take time to be fully investigated.
To account for this, the ABS has implemented a revisions process for those deaths where coronial investigations remained open at the time a preliminary cause of death was assigned. Data are deemed preliminary when first published, revised when published the following year and final when published after a second year. This technical note focusses specifically on final data for 2016 coroner-certified deaths.

2  The revisions process has been applied to all reference periods from 2006 onwards. Revisions are one of two measures implemented to enable timely data to be released on coroner-certified deaths (see Explanatory Notes 54-62 for further information). The second measure, referred to as 'open coding', ensures that all available documentation is taken into account when assigning a cause of death to coronial cases that are yet to be finalised. The combination of these two measures, along with ongoing enhancements in the timeliness and completeness of documentation on the National Coronial Information System (NCIS), have resulted in significant improvements to the quality of preliminary Causes of Death data.

3  There are three main improvements to the Causes of Death data which are gained through the revisions process. Firstly, for deaths from natural causes a more specified condition may be identified. For example, a death may be coded to a condition such as cardiac arrest at preliminary coding, but with the later addition of an autopsy report, an underlying ischaemic heart condition could be identified. Secondly, for deaths from external causes (accidents, assaults and suicides) more information might be provided on mechanism. For example, a death coded to an unspecified accident with a fracture of hip, may later be found to have been caused by a fall down steps. Lastly, external causes may also have the intent of death updated through revisions.
For example, a drug overdose where the intent of death was not determined at preliminary coding may be updated to an intentional drug overdose when a coronial finding has been made.

### Changes to Cause of Death processing and revisions

4 Until the 2014 reference period, the ABS released the annual Causes of Death dataset 15 months after the end of each reference period (i.e. data for the 2014 reference period was published in March 2016). The 2015 issue of Causes of Death, Australia was released 6 months earlier, representing a significant change in processing of the national mortality dataset.

5 Bringing forward the release of Causes of Death data meant that preliminary coding of coroner-certified deaths occurred approximately 6 months earlier than in previous years. Given that the timeliness of report availability on the NCIS is critical to the ABS's ability to assign specific cause of death codes, considerable analysis was undertaken to ensure the preliminary dataset would be of sufficient quality to be fit for purpose. See Technical Note 1 A More Timely Annual Collection: Changes to ABS Processes in the 2015 publication.

6  With earlier release of preliminary data, there is now a period of 30 months between the release of preliminary and final data. The table below shows the impact of this changed revisions process at the International Classification of Diseases, 10th revision (ICD-10) chapter level. The continued earlier release of data resulted in more deaths assigned at preliminary coding to the Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (Symptoms and signs) (R00-R99) chapter for the 2016 reference period. Consequently, a larger number of deaths have been reassigned from R00-R99 to other chapters over the 2016 revisions period. The redistribution of deaths to more specified ICD-10 codes is discussed in more detail below.
| Cause of death and ICD-10 code | 2012 (%) | 2013 (%) | 2014 (%) | 2015 (%) | 2016 (%) |
|---|---|---|---|---|---|
| Certain infectious and parasitic diseases (A00-B99) | 0.0 | 0.2 | 0.2 | 0.5 | 0.5 |
| Neoplasms (C00-D48) | 0.0 | 0.0 | 0.0 | 0.1 | 0.1 |
| Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism (D50-D89) | 0.0 | 0.4 | 0.0 | 0.6 | 0.2 |
| Endocrine, nutritional and metabolic diseases (E00-E90) | 0.0 | 0.1 | 0.2 | 0.8 | 0.3 |
| Mental and behavioural disorders (F00-F99) | -0.1 | -0.1 | 0.0 | 0.1 | 0.1 |
| Diseases of the nervous system (G00-G99) | 0.2 | 0.2 | 0.2 | 0.6 | 0.2 |
| Diseases of the circulatory system (I00-I99) | 0.2 | 0.0 | 0.0 | 0.6 | 0.5 |
| Diseases of the respiratory system (J00-J99) | 0.0 | 0.1 | 0.1 | 0.4 | 0.3 |
| Diseases of the digestive system (K00-K93) | 0.0 | 0.0 | 0.2 | 1.0 | 0.5 |
| Diseases of the skin and subcutaneous tissue (L00-L99) | -0.3 | 0.0 | 0.2 | 0.8 | 0.4 |
| Diseases of the musculoskeletal system and connective tissue (M00-M99) | -0.1 | 0.3 | 0.2 | 1.2 | 3.1 |
| Diseases of the genitourinary system (N00-N99) | 0.0 | -0.1 | 0.0 | 0.5 | 0.5 |
| Certain conditions originating in the perinatal period (P00-P96) | 0.4 | -0.5 | 0.6 | 0.5 | 0.5 |
| Congenital malformations, deformations and chromosomal abnormalities (Q00-Q99) | 0.4 | 0.3 | 1.0 | 1.4 | 1.2 |
| Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99) | -11.2 | -5.5 | -11.3 | -34.3 | -23.8 |
| External causes of morbidity and mortality (V01-Y98) | 0.7 | 0.4 | 0.9 | 0.8 | -0.6 |

a. Excludes deaths coded to H00-H59, H60-H95, and O00-O99 as these causes of death account for a small number of deaths and changes through revisions are minimal.
b. Since 2015 the release of Causes of Death, Australia has occurred 6 months earlier, representing a significant change in processing of the national mortality dataset. For further information regarding changes to ABS coding processes, see A More Timely Annual Collection: Changes to ABS Processes (Technical Note) in Causes of Death, Australia, 2015 (cat. no. 3303.0).
### Causes of death revisions for 2012 to 2016 - changes from preliminary to final data by percentage, by selected ICD-10 chapter, all certified deaths (a)(b)

7 The table below provides the counts of deaths by ICD-10 chapter for the 2016 reference period across the revisions process. Revisions are most likely to result in decreases in the number of deaths assigned to Symptoms and signs (R00-R99) with corresponding increases in other chapters.

8 Deaths which are originally coded to the Symptoms and signs (R00-R99) chapter can be reassigned to specific natural or external causes of death. The majority of those reassigned are subsequently found to be deaths from natural causes (76.7%), with Diseases of the circulatory system (I00-I99) being the most common natural cause. Of those reassigned to external causes of death, 18 were deaths due to intentional self-harm (suicide) and 39 were due to Accidental drug poisoning (X40-X44).

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Final (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|---|
| Certain infectious and parasitic diseases (A00-B99) | 2818 | 2829 | 2832 | 14 | 0.5 |
| Neoplasms (C00-D48) | 46307 | 46325 | 46331 | 24 | 0.1 |
| Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism (D50-D89) | 490 | 491 | 491 | 1 | 0.2 |
| Endocrine, nutritional and metabolic diseases (E00-E90) | 6750 | 6767 | 6770 | 20 | 0.3 |
| Mental and behavioural disorders (F00-F99) | 9931 | 9935 | 9940 | 9 | 0.1 |
| Diseases of the nervous system (G00-G99) | 8794 | 8805 | 8814 | 20 | 0.2 |
| Diseases of the circulatory system (I00-I99) | 43963 | 44157 | 44186 | 223 | 0.5 |
| Diseases of the respiratory system (J00-J99) | 14783 | 14822 | 14829 | 46 | 0.3 |
| Diseases of the digestive system (K00-K93) | 5753 | 5773 | 5784 | 31 | 0.5 |
| Diseases of the skin and subcutaneous tissue (L00-L99) | 532 | 533 | 534 | 2 | 0.4 |
| Diseases of the musculoskeletal system and connective tissue (M00-M99) | 1371 | 1376 | 1413 | 42 | 3.1 |
| Diseases of the genitourinary system (N00-N99) | 3458 | 3463 | 3475 | 17 | 0.5 |
| Certain conditions originating in the perinatal period (P00-P96) | 550 | 553 | 553 | 3 | 0.5 |
| Congenital malformations, deformations and chromosomal abnormalities (Q00-Q99) | 587 | 591 | 594 | 7 | 1.2 |
| Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99) | 1651 | 1308 | 1258 | -393 | -23.8 |
| External causes of morbidity and mortality (V01-Y98) | 10736 | 10746 | 10670 | -66 | -0.6 |
| Total (a) | 158504 | 158504 | 158504 | 0 | 0.0 |

a. Includes deaths coded to H00-H59, H60-H95, and O00-O99.
b. This table includes both doctor and coroner-certified deaths.

### Impact of revisions - underlying cause of death

9 The expected outcome of the revisions process is to improve data quality. Enhancements to underlying cause of death data quality may include updates to either mechanism or intent, or identifying an underlying cause where not previously possible. While the revisions process has a minimal impact on statistical output at the chapter level of the ICD-10 (with the exception of R00-R99), data improvements become more apparent when considering movements within individual chapters.

10 The table below shows selected data for coroner-certified deaths only at the sub-chapter level. There were key data improvements for specification of mechanism for external causes of deaths over the 2016 revisions period. There were 263 deaths where intent was coded but mechanism was unspecified at preliminary coding. Through the revisions process a mechanism was identified for 163 (62.0%) of these deaths. The majority of these records had no change in intent, but were assigned a more specific mechanism. For example, a suicide death where the mechanism was unspecified at preliminary coding (Intentional self-harm by unspecified means (X84)) may be reassigned to a suicidal drowning (Intentional self-harm by drowning (X71)) during the revisions process when an autopsy becomes available for analysis.

11 The table below demonstrates that, for deaths which are certified by a coroner, the number of cases assigned to Other ill-defined and unspecified causes of mortality (R99) decreased by 39.8% over the full revisions process.
| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Final (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|---|
| Other ill-defined and unspecified causes of mortality (R99) | 1006 | 659 | 606 | -400 | -39.8 |
| Unspecified mechanism (X59, X84, Y09) | 263 | 124 | 100 | -163 | -62.0 |
| Accidental exposure to other specified factors (X59) | 203 | 89 | 78 | -125 | -61.6 |
| Intentional self-harm by unspecified means (X84) | 26 | 12 | 7 | -19 | -73.1 |
| Assault by unspecified means (Y09) | 34 | 23 | 15 | -19 | -55.9 |
| Event of undetermined intent (Y10-Y34) | 140 | 122 | 115 | -25 | -17.9 |

a. This table includes coroner-certified deaths only.

### Causes of death revisions for 2016 - preliminary, revised and final, by selected causes of death, coroner-certified deaths (a)

12 The table below shows changes at the sub-chapter level for the 2016 reference period, with a focus on the External causes of morbidity and mortality (V01-Y98) chapter. Notable increases in deaths due to external causes over the full revisions process include:

• Falls (W00-W19) increased by 63 deaths. For many of the deaths reassigned to a fall, the type of injury was known at preliminary coding (e.g. neck of femur fracture), yet the mechanism was unknown (e.g. the broken hip was caused by an unspecified accident). Over the full revisions process, additional information about the nature of the mechanism became available allowing these records to be reassigned to a fall (e.g. the broken hip was identified to be due to a fall down stairs)
• Accidental drug poisoning (X40-X44) increased by 37 deaths. Many of the deaths reassigned to an Accidental drug poisoning (X40-X44) were originally assigned to Ill-defined causes of mortality (R99). Drug-induced deaths require intensive investigations to accurately determine not only the cause and manner in which the death occurred, but also the attribution of a drug(s) to the death.
Over time, as investigations are finalised, more information on the NCIS becomes available allowing these deaths to be reassigned to Accidental drug poisonings (X40-X44)
• Intentional self-harm (X60-X84, Y870) increased by 43 deaths. Of these 43 deaths, the majority were reassigned from Accidental drug poisonings (X40-X44), Undetermined intent (Y10-Y34) and Ill-defined causes of mortality (R99)
• Intentional drug poisoning (X60-X64) increased by 22 deaths. Of these 22 deaths, the majority were reassigned due to updated intent information becoming available (especially the final coronial finding). The highest number of Intentional drug poisonings (X60-X64) were reassigned from Accidental drug poisoning (X40-X44)
• Car occupant injured in transport accident (V40-V49) increased by 20 deaths. The majority of these deaths were reassigned as more specificity surrounding the circumstances of the death became available. Deaths were predominantly reassigned from Unspecified motor vehicle and transport accidents (V89, V98-V99) and Exposure to unspecified factor (X59)
• Assault (X85-Y09) increased by 18 deaths. Of these 18 deaths, the majority were reassigned due to updated intent information becoming available.
Deaths were predominantly reassigned from Ill-defined causes of mortality (R99) and Undetermined intent (Y10-Y34)

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Final (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|---|
| Transport accidents (V01-V99) | 1453 | 1467 | 1469 | 16 | 1.1 |
| Car occupant injured in transport accident (V40-V49) | 751 | 770 | 771 | 20 | 2.7 |
| Other land transport accidents (V80-V89) | 71 | 57 | 54 | -17 | -23.9 |
| Other external causes of accidental injury (W00-X59) | 5705 | 5684 | 5713 | 8 | 0.1 |
| Falls (W00-W19) | 2666 | 2719 | 2729 | 63 | 2.4 |
| Accidental drug poisoning (X40-X44) | 1289 | 1311 | 1326 | 37 | 2.9 |
| Exposure to unspecified factor (X59) (a) | 1004 | 893 | 885 | -119 | -11.9 |
| Intentional self-harm (X60-X84, Y870) (b) | 2866 | 2911 | 2909 | 43 | 1.5 |
| Intentional drug poisoning (X60-X64) | 411 | 439 | 433 | 22 | 5.4 |
| Intentional self-harm by hanging or suffocation (X70) | 1583 | 1590 | 1596 | 13 | 0.8 |
| Intentional self-harm by unspecified means (X84) | 26 | 12 | 7 | -19 | -73.1 |
| Assault (X85-Y09) | 244 | 254 | 262 | 18 | 7.4 |
| Event of undetermined intent (Y10-Y34) | 141 | 123 | 116 | -25 | -17.7 |

a. Deaths assigned to Exposure to unspecified factor (X59) are more likely to be certified by a doctor. As such, the % change shown in this table is different compared to the table above.
b. Care should be taken in interpreting figures relating to intentional self-harm. See Explanatory Notes 91-100.
c. This table consists of both doctor and coroner-certified deaths. Figures presented in this table may show differences compared to the table above.

### Causes of death revisions for 2016 - preliminary, revised and final, by ICD-10 selected causes, all certified deaths (a)(b)(c)

13 Various improvements to the availability and timeliness of national mortality information have been undertaken over several years. One major improvement undertaken by the NCIS is the more timely upload of reports and information for open coroner cases. This information can then be used at an earlier point by the ABS to improve open coding data quality.
Earlier availability of reports can reduce the number of deaths from Ill-defined and unspecified causes of mortality (R99) present in the dataset at preliminary coding. The improved timeliness of report attachment on the NCIS was a key factor in enabling the ABS to bring forward the publication of annual causes of death data. A comparison of 2014, 2015 and 2016 final Ill-defined and unspecified causes of mortality (R99) counts indicates a substantial reduction, from 956 in 2014 to 722 in 2015, then to 606 in 2016.

14 There are some specific causes of death that may be more impacted by the changed revisions process. These include Accidental drug poisoning (X40-X44), Intentional drug poisoning (X60-X64) and Sudden Infant Death Syndrome (SIDS) (R95). Deaths from these causes require intensive investigations to accurately determine the cause and manner in which the death occurred. Therefore some key reports may not be available on the NCIS when preliminary coding of these deaths occurs. These deaths are particularly sensitive to the revisions process in that more detailed information regarding the context of the death is often gained over time as information from investigations becomes available on the NCIS.

15 The number of deaths assigned to SIDS (R95) increased by 10 between preliminary and final coding. Of these 10 deaths, 9 were previously assigned to Ill-defined causes of mortality (R99). While revised data captures a significant proportion of SIDS deaths, the rules for classifying these deaths are influenced by specific terminology used in coronial findings. Data users should consider combining deaths coded to SIDS with infant deaths coded to Ill-defined and unspecified causes of mortality (R99) when seeking to understand how many sudden unexplained deaths in infants occur in total.
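The combined count recommended above can be sketched as a small calculation. This is an illustrative sketch only: the cause codes (R95, R99) come from the text, but the records and field names below are invented for the example and are not ABS data.

```python
# Invented example records - not real ABS mortality data.
deaths = [
    {"underlying_cause": "R95", "age_years": 0},   # SIDS
    {"underlying_cause": "R99", "age_years": 0},   # ill-defined, infant
    {"underlying_cause": "R99", "age_years": 54},  # ill-defined, adult: excluded
    {"underlying_cause": "I219", "age_years": 0},  # specified cause: excluded
]

def sudden_unexplained_infant_deaths(records):
    """Count infant deaths coded to SIDS (R95) or ill-defined causes (R99)."""
    return sum(
        1 for r in records
        if r["age_years"] < 1 and r["underlying_cause"] in ("R95", "R99")
    )

print(sudden_unexplained_infant_deaths(deaths))  # -> 2
```

The point of the combination is simply that neither R95 nor infant R99 alone captures all sudden unexplained infant deaths, so the two categories are counted together.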
16 Over the revisions process there was an increase of 62 drug-induced deaths (includes all intents: Accidental (X40-X44), Intentional (X60-X64) and Undetermined (Y10-Y14)). Accidental drug poisoning (X40-X44) contributed the largest increase across intent types for drug poisonings over the 2016 revisions process, accounting for 59.7% of the increase.

17 The process for determining that a death was caused by Accidental drug poisoning (X40-X44) is complex, as multiple factors such as drug type, intent and the presence of pre-existing natural disease need to be considered. Just under half (47.6%) of the deaths reassigned to an Accidental drug poisoning (X40-X44) were initially coded to Ill-defined and unspecified causes of mortality (R99). These deaths typically did not have toxicology and/or pathology reports available on the NCIS at the time of preliminary coding. A further 19.5% of those reassigned to this category were initially coded to Intentional drug poisoning (X60-X64), followed by Undetermined drug poisoning (Y10-Y14) (12.8%).

18 Determining deaths from Intentional drug poisoning (X60-X64) is similarly complex. Over one-third (37.9%) of deaths reassigned to an Intentional drug poisoning (X60-X64) were coded as Accidental drug poisoning (X40-X44) at preliminary coding. These deaths typically had only an initial police report available at preliminary coding, where details on the intent of death can be unclear. A further 20.7% of reassigned Intentional drug poisoning (X60-X64) deaths were initially coded to Ill-defined causes of mortality (R99). These deaths typically did not have police, toxicology and/or pathology reports available on the NCIS at the time of preliminary coding.

### Impact of revisions - associated causes of death

19 The revisions process has traditionally focused on improving specificity of the underlying cause of death.
More recently, there has been growing interest in associated cause statistics, which can provide a more complete picture of the diseases and/or circumstances that contributed to a death. Associated causes include the type of injuries sustained by a deceased person, the drug type in a drug-induced death (e.g. heroin, cannabis), chronic disease (e.g. cancer) and mental and behavioural disorders (e.g. depression, anxiety). The ABS has maximised the use of improved report attachment on the NCIS to enhance associated cause statistics through the revisions process. Analysis of associated causes of death can better enable targeted policy and prevention initiatives, especially for those deaths which are deemed preventable. For this reason, the revisions process typically focusses on associated cause of death enhancements for two key areas - drug specification in drug-induced deaths and mental and behavioural disorders implicated in deaths from external causes.

### Changes to drug types for drug poisoning deaths

20 There are multiple complex factors which need to be considered when a death is certified as due to a drug poisoning. The timing between the death and toxicology testing can influence the levels and types of drugs detected, making it difficult to determine the true level of a drug at the time of death. Individual tolerance levels may also vary considerably depending on multiple factors, including sex, body mass and a person's previous exposure to a drug. Contextual factors around the death must also be considered, such as pre-existing natural disease and reports from friends and families regarding the circumstances surrounding the death. For these reasons, the certification of a death as a drug poisoning can take significant time to complete, making these deaths particularly sensitive to the revisions process.

21 Policies directed at reducing deaths due to drug poisoning employ a variety of strategies depending on drug type.
Information regarding the type of drug(s) in a drug poisoning can often depend on the availability of an autopsy, toxicology or coronial finding report. When these reports are not available, the drug type is unknown and coded to Other and unspecified drugs, medicaments and biological substances (Unspecified drug) (T509). Importantly, deaths coded with an Unspecified drug (T509) are still counted as drug poisonings at preliminary output, but they may be enhanced with more specific information about drug type via the revisions process.

22 From preliminary to final, the number of drug-induced deaths in 2016 where the drug type was not specified (T509) decreased from 110 to 4. As a result there was an increase in the number of specified drug types (see table below), with Benzodiazepines (T424) recording the largest increase (134 mentions) when analysed by single drug type. This was followed by Other and unspecified antipsychotics and neuroleptics (T435) (81 mentions) and Other and unspecified antidepressants (T432) (69 mentions).

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Final (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|---|
| Benzodiazepines (T424) | 662 | 771 | 796 | 134 | 20.2 |
| Other and unspecified antipsychotics and neuroleptics (T435) | 216 | 283 | 297 | 81 | 37.5 |
| Other and unspecified antidepressants (T432) | 276 | 336 | 345 | 69 | 25.0 |
| Other opioids (T402) | 550 | 603 | 613 | 63 | 11.5 |
| Psychostimulants with abuse potential (T436) | 362 | 414 | 422 | 60 | 16.6 |
| Cannabis (T407) | 132 | 167 | 179 | 47 | 35.6 |
| Tricyclic and tetracyclic antidepressants (T430) | 164 | 191 | 206 | 42 | 25.6 |
| Aminophenol derivatives (T391) | 165 | 196 | 204 | 39 | 23.6 |
| Heroin (T401) | 361 | 397 | 399 | 38 | 10.5 |

a. This table includes coroner-certified deaths only.
b. Data in this table indicates the number of deaths with each specified drug recorded. Drug types are not mutually exclusive and deaths with multiple drugs present will be included in more than one category. As a result, categories cannot be summed to obtain the total number of drug-induced deaths.
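Footnote (b) above - that drug types are not mutually exclusive and so cannot be summed - can be illustrated with a short sketch. The records below are invented for the example; only the ICD-10 drug codes come from the table.

```python
from collections import Counter

# Invented example: each death lists the drug codes recorded for it.
deaths = [
    ["T424", "T401"],          # benzodiazepine + heroin
    ["T424"],                  # benzodiazepine only
    ["T424", "T432", "T401"],  # benzodiazepine + antidepressant + heroin
]

# Count each drug type once per death in which it appears.
mentions = Counter(code for drugs in deaths for code in set(drugs))

print(mentions["T424"])        # benzodiazepines mentioned in 3 deaths
print(sum(mentions.values()))  # 6 drug mentions in total...
print(len(deaths))             # ...across only 3 deaths
```

Summing the per-drug mention counts gives 6, double the actual number of deaths, which is why the publication warns against adding the categories together.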
### Changes to associated causes for intentional self-harm and accidental drug poisonings

23 Associated causes of death may provide important contextual information for deaths due to Intentional self-harm (X60-X84, Y870). At preliminary coding, 79.7% of suicides in 2016 had associated causes mentioned as contributory factors to death. Through the revisions process, this proportion increased to 84.0%. The table below shows the top 5 increases for associated causes of death as they relate to Intentional self-harm (X60-X84, Y870). The number of deaths with Mental and behavioural disorders due to psychoactive substance use (F10-F19) mentioned as an associated cause increased by the most over the revisions period, followed by Mood disorders (F30-F39), including depression, and Suicide ideation (R458).

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Final (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|---|
| Mental and behavioural disorders due to psychoactive substance use (F10-F19) | 668 | 843 | 870 | 202 | 30.2 |
| Mood disorders (F30-F39) | 1135 | 1318 | 1334 | 199 | 17.5 |
| Suicide ideation (R458) | 817 | 980 | 1013 | 196 | 24.0 |
| Anxiety and stress-related disorders (F40-F48) | 327 | 447 | 465 | 138 | 42.2 |
| Findings of drugs and other substances, not normally found in blood (R78) | 459 | 529 | 568 | 109 | 23.7 |

a. This table includes coroner-certified deaths only.

### Changes to intentional self-harm associated causes for 2016 - preliminary, revised and final, coroner-certified deaths (a)

24 Associated causes of death may also provide critical insights into deaths due to Accidental drug poisoning (X40-X44). The table below shows the top 5 increases for associated causes of death as they relate to Accidental drug poisoning (X40-X44). As additional evidence and documentation was added to the NCIS there were 130 accidental drug overdoses where a Mental and behavioural disorder due to psychoactive substance use (F10-F19), such as addiction or chronic substance misuse, was identified.
Mood disorders (F30-F39) were identified as being a factor in 69 accidental drug-induced deaths via the revisions process and Anxiety and stress-related disorders (F40-F48) were identified as a factor in 49 deaths.

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Final (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|---|
| Mental and behavioural disorders due to psychoactive substance use (F10-F19) | 588 | 700 | 718 | 130 | 22.1 |
| Mood disorders (F30-F39) | 263 | 313 | 331 | 68 | 25.9 |
| Anxiety and stress-related disorders (F40-F48) | 136 | 175 | 185 | 49 | 36.0 |
| Suicide ideation (R458) | 52 | 80 | 86 | 34 | 65.4 |
| Schizophrenia, schizotypal and delusional disorders (F20-F29) | 85 | 100 | 106 | 21 | 24.7 |
| Chronic pain (R522) | 56 | 71 | 77 | 21 | 37.5 |

a. This table includes coroner-certified deaths only.

## Technical note - causes of death revisions, 2017 revised data

### Overview

1  Deaths that are referred to a coroner can take time to be fully investigated. To account for this, the ABS has implemented a revisions process for those deaths where coronial investigations remained open at the time a preliminary cause of death was assigned. Data are deemed preliminary when first published, revised when published the following year and final when published after a second year. This technical note focusses specifically on revised data for 2017 coroner-certified deaths.

2  The revisions process has been applied to all reference periods from 2006 onwards. Revisions are one of two measures implemented to enable timely data to be released on coroner-certified deaths (see Explanatory Notes 54-62 for further information). The second measure, referred to as 'open coding', ensures that all available documentation is taken into account when assigning a cause of death to coronial cases that are yet to be finalised. The combination of these two measures, along with ongoing enhancements in the timeliness and completeness of documentation on the National Coronial Information System (NCIS), have resulted in significant improvements to the quality of preliminary Causes of Death data.
3  There are three main improvements to the Causes of Death data which are gained through the revisions process. Firstly, for deaths from natural causes a more specified condition may be identified. For example, a death may be coded to a condition such as cardiac arrest at preliminary coding, but with the later addition of an autopsy report, an underlying ischaemic heart condition could be identified. Secondly, for deaths from external causes (accidents, assaults and suicides) more information might be provided on mechanism. For example, a death coded to an unspecified accident with a fracture of hip, may later be found to have been caused by a fall down steps. Lastly, external causes may also have the intent of death updated through revisions. For example, a drug overdose where the intent of death was not determined at preliminary coding may be updated to an intentional drug overdose when a coronial finding has been made.

### Changes to cause of death processing and revisions

4 Until the 2014 reference period, the ABS released the annual Causes of Death dataset 15 months after the end of each reference period (i.e. data for the 2014 reference period was published in March 2016). The 2015 issue of Causes of Death, Australia was released 6 months earlier, representing a significant change in processing of the national mortality dataset.

5 Bringing forward the release of Causes of Death data meant that preliminary coding of coroner-certified deaths occurred approximately 6 months earlier than in previous years. Given that the timeliness of report availability on the NCIS is critical to the ABS's ability to assign specific cause of death codes, considerable analysis was undertaken to ensure the preliminary dataset would be of sufficient quality to be fit for purpose. See Technical Note 1 A More Timely Annual Collection: Changes to ABS Processes in the 2015 publication.
6  With earlier release of preliminary data, there is now a period of 18 months between the release of preliminary and revised data. The table below shows the impact of this changed revisions process at the International Classification of Diseases, 10th revision (ICD-10) chapter level. As anticipated, the magnitude of change is largest for deaths assigned to the Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (Symptoms and signs) (R00-R99) chapter, which decreased by 26.7% for the 2017 reference year. This is comparable to the decrease in 2015 (26.9%), the first year the publication was released 6 months earlier. The redistribution of deaths to more specified ICD-10 codes is discussed in detail below.

| Cause of death and ICD-10 code | 2013 (%) | 2014 (%) | 2015 (%) | 2016 (%) | 2017 (%) |
|---|---|---|---|---|---|
| Certain infectious and parasitic diseases (A00-B99) | 0.2 | 0.1 | 0.4 | 0.4 | 0.5 |
| Neoplasms (C00-D48) | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 |
| Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism (D50-D89) | 0.2 | 0.0 | 0.6 | 0.2 | 0.2 |
| Endocrine, nutritional and metabolic diseases (E00-E90) | 0.1 | 0.1 | 0.6 | 0.3 | 0.5 |
| Mental and behavioural disorders (F00-F99) | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 |
| Diseases of the nervous system (G00-G99) | 0.1 | 0.1 | 0.4 | 0.1 | 0.2 |
| Diseases of the circulatory system (I00-I99) | 0.0 | 0.0 | 0.5 | 0.4 | 0.5 |
| Diseases of the respiratory system (J00-J99) | 0.0 | 0.1 | 0.4 | 0.3 | 0.4 |
| Diseases of the digestive system (K00-K93) | -0.1 | 0.1 | 0.7 | 0.3 | 0.3 |
| Diseases of the skin and subcutaneous tissue (L00-L99) | 0.0 | 0.2 | 0.2 | 0.2 | 0.0 |
| Diseases of the musculoskeletal system and connective tissue (M00-M99) | 0.2 | 0.2 | 0.3 | 0.4 | 2.7 |
| Diseases of the genitourinary system (N00-N99) | -0.1 | 0.0 | 0.3 | 0.1 | 0.4 |
| Certain conditions originating in the perinatal period (P00-P96) | -0.5 | 0.0 | 0.4 | 0.5 | 0.0 |
| Congenital malformations, deformations and chromosomal abnormalities (Q00-Q99) | 0.2 | 0.5 | 1.0 | 0.7 | 1.1 |
| Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99) | -2.9 | -5.6 | -26.9 | -20.8 | -26.7 |
| External causes of morbidity and mortality (V01-Y98) | 0.3 | 0.5 | 0.8 | 0.1 | 0.4 |

a. Excludes deaths coded to H00-H59, H60-H95, and O00-O99 as these causes of death account for a small number of deaths and changes through revisions are minimal.
b. Since 2015 the release of Causes of Death, Australia has occurred 6 months earlier, representing a significant change in processing of the national mortality dataset. For further information regarding changes to ABS coding processes, see A More Timely Annual Collection: Changes to ABS Processes (Technical Note) in Causes of Death, Australia, 2015 (cat. no. 3303.0).

### Causes of death revisions for 2013 to 2017 - changes from preliminary to revised data by percentage, by selected ICD-10 chapter, all certified deaths (a)(b)

7 The table below provides the counts of deaths by ICD-10 chapter for the 2017 reference period from preliminary to revised. Revisions are most likely to result in decreases in the number of deaths assigned to the Symptoms and signs (R00-R99) chapter with corresponding increases in other chapters.

8 Deaths which are originally coded to the Symptoms and signs (R00-R99) chapter can be reassigned to specific natural or external causes of death. The majority of those reassigned are subsequently found to be deaths from natural causes (70.7%), with Diseases of the circulatory system (I00-I99) being the most common natural cause. Of those reassigned to external causes of death, 20 were found to be suicides.
| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|
| Certain infectious and parasitic diseases (A00-B99) | 2,636 | 2,650 | 14 | 0.5 |
| Neoplasms (C00-D48) | 46,399 | 46,433 | 34 | 0.1 |
| Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism (D50-D89) | 537 | 538 | 1 | 0.2 |
| Endocrine, nutritional and metabolic diseases (E00-E90) | 6,820 | 6,855 | 35 | 0.5 |
| Mental and behavioural disorders (F00-F99) | 10,157 | 10,162 | 5 | 0.0 |
| Diseases of the nervous system (G00-G99) | 9,205 | 9,222 | 17 | 0.2 |
| Diseases of the circulatory system (I00-I99) | 43,477 | 43,713 | 236 | 0.5 |
| Diseases of the respiratory system (J00-J99) | 16,203 | 16,261 | 58 | 0.4 |
| Diseases of the digestive system (K00-K93) | 5,930 | 5,949 | 19 | 0.3 |
| Diseases of the skin and subcutaneous tissue (L00-L99) | 551 | 551 | 0 | 0.0 |
| Diseases of the musculoskeletal system and connective tissue (M00-M99) | 1,401 | 1,439 | 38 | 2.7 |
| Diseases of the genitourinary system (N00-N99) | 3,698 | 3,713 | 15 | 0.4 |
| Certain conditions originating in the perinatal period (P00-P96) | 582 | 582 | 0 | 0.0 |
| Congenital malformations, deformations and chromosomal abnormalities (Q00-Q99) | 634 | 641 | 7 | 1.1 |
| Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99) | 1,938 | 1,420 | -518 | -26.7 |
| External causes of morbidity and mortality (V01-Y98) | 10,709 | 10,747 | 38 | 0.4 |
| Total(a) | 160,909 | 160,909 | 0 | 0.0 |

a. Includes deaths coded to H00-H59, H60-H95, and O00-O99.
b. This table includes both doctor and coroner-certified deaths.
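The change columns in the table above are simple differences between the preliminary (P) and revised (R) counts. A minimal sketch of that arithmetic (the function name is illustrative, not part of ABS processing; figures are taken from the R00-R99 row above):

```python
def revision_change(preliminary, revised):
    """Return (absolute change, % change) from preliminary to revised counts."""
    change = revised - preliminary
    pct = round(change / preliminary * 100, 1)
    return change, pct

# Symptoms and signs (R00-R99), 2017 reference year
change, pct = revision_change(1938, 1420)
print(change, pct)  # -518 -26.7
```

The total row is unchanged (160,909 at both stages) because revisions reassign deaths between chapters rather than adding or removing records.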
### Impact of revisions - underlying cause of death

9 The expected outcome of the revisions process is improved data quality. Enhancements to underlying cause data may include improved understanding of mechanism or intent, or identification of an underlying cause where this was not previously possible. While the revisions process has a minimal impact on statistical output at the chapter level of the ICD-10 (with the exception of R00-R99), data improvements become more apparent when considering movements within individual chapters.

10 The table below shows data for coroner-certified deaths only at the sub-chapter level. There were key data improvements in the specification of mechanism for external causes of death over the 2017 revisions period. There were 148 deaths where intent was coded but mechanism was unspecified at preliminary coding. Through the revisions process a mechanism was identified for 66 (44.6%) of these deaths. The majority of these records had no change in intent, but were assigned a more specific mechanism. For example, a suicide death where the mechanism was unspecified at preliminary coding (Intentional self-harm by unspecified means (X84)) may be reassigned to a suicidal drowning (Intentional self-harm by drowning (X71)) during the revisions process when an autopsy becomes available for analysis.

11 The table below further demonstrates that the number of coroner-certified deaths assigned to Other ill-defined and unspecified causes of mortality (R99) decreased by 43.6% from preliminary to revised.
| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|
| Other ill-defined and unspecified causes of mortality (R99) | 1,199 | 676 | -523 | -43.6 |
| Unspecified mechanism (X59, X84, Y09) | 148 | 82 | -66 | -44.6 |
| Accidental exposure to unspecified factor (X59) | 114 | 60 | -54 | -47.4 |
| Intentional self-harm by unspecified means (X84) | 19 | 10 | -9 | -47.4 |
| Assault by unspecified means (Y09) | 15 | 12 | -3 | -20.0 |
| Event of undetermined intent (Y10-Y34) | 209 | 138 | -71 | -34.0 |

a. This table includes coroner-certified deaths only.

### Causes of death revisions for 2017 - preliminary and revised, by selected causes of death, coroner-certified deaths (a)

12 The table below provides information on changes at the sub-chapter level for the 2017 reference period, with a focus on the External causes of morbidity and mortality (V01-Y98) chapter. Notable increases in deaths due to external causes include:

• Accidental drug poisoning (X40-X44) increased by 98 deaths. Many of the deaths reassigned to an accidental drug poisoning were originally assigned to Ill-defined causes of mortality (R99). Drug-induced deaths require intensive investigations to accurately determine not only the cause and manner in which the death occurred, but also the attribution of a drug(s) to the death. Over time, as investigations are finalised, more information on the NCIS becomes available, allowing these deaths to be reassigned to an accidental drug poisoning.
• Intentional self-harm (X60-X84, Y870) increased by 69 deaths. The majority of these 69 deaths were reassigned due to updated intent information becoming available (especially the final coronial finding). Deaths were predominantly reassigned from Undetermined intent (Y10-Y34), Accidental drug poisoning (X40-X44) and Other ill-defined and unspecified causes of mortality (R99).
• Intentional drug poisoning (X60-X64) increased by 36 deaths. The majority of these 36 deaths were reassigned due to updated intent information becoming available (especially the final coronial finding).
Deaths were predominantly reassigned from Accidental drug poisoning (X40-X44).

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|
| Transport accidents (V01-V99) | 1,371 | 1,394 | 23 | 1.7 |
| Pedestrian injured in transport accident (V01-V09) | 187 | 202 | 15 | 8.0 |
| Other external causes of accidental injury (W00-X59) | 5,573 | 5,659 | 86 | 1.5 |
| Falls (W00-W19) | 2,782 | 2,800 | 18 | 0.6 |
| Accidental drug poisoning (X40-X44) | 1,216 | 1,314 | 98 | 8.1 |
| Exposure to unspecified factor (X59) (c) | 938 | 888 | -50 | -5.3 |
| Intentional self-harm (X60-X84, Y870) (a) | 3,128 | 3,197 | 69 | 2.2 |
| Intentional drug poisoning (X60-X64) | 440 | 476 | 36 | 8.2 |
| Intentional self-harm by hanging or suffocation (X70) | 1,829 | 1,843 | 14 | 0.8 |
| Intentional self-harm by jumping from a high place (X80) | 159 | 174 | 15 | 9.4 |
| Intentional self-harm by unspecified means (X84) | 19 | 10 | -9 | -47.4 |
| Assault (X85-Y09) | 180 | 189 | 9 | 5.0 |
| Event of undetermined intent (Y10-Y34) | 210 | 139 | -71 | -33.8 |

a. Care should be taken in interpreting figures relating to intentional self-harm. See Explanatory Notes 91-100.
b. This table includes both doctor and coroner-certified deaths. Figures presented in this table may show differences from the table above.
c. Deaths assigned to Exposure to unspecified factor (X59) are more likely to be certified by a doctor. As such, the % change shown in this table differs from the table presented above.

### Causes of death revisions for 2017 - preliminary and revised, by ICD-10 selected causes, all certified deaths (a)(b)(c)

13 Various improvements to the national mortality system have been undertaken over several years. One major improvement undertaken by the NCIS is the more timely upload of reports and information for open coroner cases. This information can then be used at an earlier point by the ABS to improve coding quality for open cases. Specifically, earlier availability of reports can reduce the number of deaths coded to Ill-defined and unspecified causes of mortality (R99) in the dataset at preliminary coding. These improvements are now being reflected in the mortality dataset.
A comparison of 2015 and 2017 preliminary R99 counts of coroner-certified deaths indicates a substantial reduction, from 1,427 in 2015 to 1,199 in 2017.

14 There are some specific causes of death that may be more impacted by the changed revisions process. These include Accidental drug poisoning (X40-X44), Intentional drug poisoning (X60-X64) and Sudden Infant Death Syndrome (SIDS) (R95). Deaths from these causes require intensive investigations to accurately determine the cause and manner in which the death occurred, so some key reports may not be available on the NCIS when preliminary coding of these deaths occurs. These deaths are particularly sensitive to the revisions process, in that more detailed information regarding the context of the death is often gained through revisions.

15 The number of deaths assigned to SIDS (R95) increased by 6 between preliminary and revised coding. All 6 deaths were initially coded to Ill-defined causes of mortality (R99). While revised data captures a significant proportion of SIDS deaths, the rules for classifying these deaths are heavily influenced by the specific terminology used in coronial findings. Data users seeking to understand the total number of sudden unexplained deaths in infants should consider combining deaths coded to SIDS (R95) with infant deaths coded to Ill-defined and unspecified causes of mortality (R99).

16 Over the revisions process there was an increase of 100 drug-induced deaths (across all intents: Accidental (X40-X44), Intentional (X60-X64) and Undetermined (Y10-Y14)). Accidental drug poisonings (X40-X44) contributed the largest increase across intent types for drug poisonings over the 2017 revisions process.

17 The process for determining that a death was caused by an Accidental drug poisoning (X40-X44) is complex, as multiple factors such as drug type, intent and the presence of pre-existing natural disease need to be considered.
Of the deaths reassigned to Accidental drug poisoning (X40-X44), approximately 62.3% were initially coded to Other ill-defined and unspecified causes of mortality (R99). A further 23.1% of those reassigned to this category were initially coded as an Undetermined drug death (Y10-Y14). These deaths typically had only an initial police report available at preliminary coding, where circumstances surrounding the death can be unclear and often appear similar to deaths from natural causes.

18 Determining deaths from Intentional drug poisoning (X60-X64) is similarly complex. Around 37.3% of deaths reassigned to an Intentional drug poisoning (X60-X64) were initially coded as Accidental drug poisoning (X40-X44). These deaths often had only an initial police report available at preliminary coding, where details on the intent of the death can be unclear. A further 23.5% of those reassigned to this category were initially coded to Ill-defined and unspecified causes of mortality (R99). These deaths typically did not have toxicology and/or pathology reports available on the NCIS at the time of preliminary coding.

### Impact of revisions - associated causes of death

19 The revisions process has traditionally focussed on improving the specificity of the underlying cause of death. More recently, there has been growing interest in associated cause statistics, which can provide a more complete picture of the diseases and/or circumstances that contributed to a death. Associated causes include the type of injuries sustained by a deceased person, the drug type in a drug-induced death (e.g. heroin, cannabis), chronic disease (e.g. cancer) and mental and behavioural disorders (e.g. depression, anxiety). The ABS has maximised the use of improved report attachment on the NCIS to enhance associated cause statistics through the revisions process. Analysis of associated causes of death can better enable targeted policy and prevention initiatives, especially for those deaths which are deemed preventable.
For this reason, the revisions process typically focusses on associated cause enhancements in two key areas - drug specification in drug-induced deaths, and mental and behavioural disorders implicated in deaths from external causes.

### Changes to drug types for drug-induced deaths

20 There are multiple complex factors which need to be considered when a death is certified as drug-induced. The timing between the death and toxicology testing can influence the levels and types of drugs detected, making it difficult to determine the true level of a drug at the time of death. Individual tolerance levels may also vary considerably depending on multiple factors, including sex, body mass and a person's previous exposure to a drug. Contextual factors must also be considered, such as pre-existing natural disease and reports from friends and family regarding the circumstances surrounding the death. For these reasons, the certification of a death as drug-induced can take significant time to complete, making these deaths particularly sensitive to the revisions process.

21 Policies directed at reducing deaths due to drug poisoning employ a variety of strategies depending on drug type. Information regarding the type of drug(s) in a drug poisoning often depends on the availability of an autopsy, toxicology or coronial finding report. When these reports are not available, the drug type is unknown and coded to Other and unspecified drugs, medicaments and biological substances (Unspecified drug) (T509). Importantly, deaths coded with an Unspecified drug (T509) are still counted as drug-induced deaths at preliminary output, but they may be enhanced with more specific information about drug type via the revisions process.

22 From preliminary to revised, the number of drug-induced deaths in 2017 where drug type was not specified (Unspecified drug (T509)) decreased from 100 to 22.
As a result, there was an increase in the number of specified drug types (see table below), with Benzodiazepines (T424) recording the largest increase (142 additional mentions) when analysed by single drug type. This was followed by Other opioids (T402) (85 additional mentions) and Other and unspecified antipsychotics and neuroleptics (T435) (64 additional mentions).

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Change (no.) | Change (%) |
|---|---|---|---|---|
| Benzodiazepines (T424) | 822 | 964 | 142 | 17.3 |
| Other opioids (T402) | 531 | 616 | 85 | 16.0 |
| Other and unspecified antipsychotics and neuroleptics (T435) | 290 | 354 | 64 | 22.1 |
| Tricyclic and tetracyclic antidepressants (T430) | 242 | 305 | 63 | 26.0 |
| Cannabis (T407) | 184 | 245 | 61 | 33.2 |
| Other and unspecified antidepressants (T432) | 356 | 410 | 54 | 15.2 |
| 4-Aminophenol derivatives (T391) | 236 | 286 | 50 | 21.2 |
| Psychostimulants with abuse potential (T436) | 376 | 425 | 49 | 13.0 |
| Antiepileptic and sedative-hypnotic drugs, unspecified (T427) | 100 | 138 | 38 | 38.0 |
| Other synthetic narcotics (T404) | 251 | 287 | 36 | 14.3 |

a. This table includes coroner-certified deaths only.
b. Data in this table indicates the number of deaths with each specified drug recorded. Drug types are not mutually exclusive and deaths with multiple drugs present will be included in more than one category. As a result, categories cannot be summed to obtain the total number of drug-induced deaths.

### Changes to associated causes for intentional self-harm and accidental drug poisonings

23 Associated causes of death may provide important contextual information for deaths due to Intentional self-harm (X60-X84, Y870). At preliminary coding, approximately 78.7% of suicides in 2017 had associated causes mentioned as contributory factors to death. Through revisions, this proportion increased to 82.3%. The table below shows the top 5 increases in associated causes of death as they relate to Intentional self-harm (X60-X84, Y870).
Mood disorders (F30-F39), which include depression and bipolar affective disorder, were the most common associated causes of death identified during the revisions process, followed by Mental and behavioural disorders due to psychoactive substance use (F10-F19) and Suicide ideation (R458).

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Change (no.) |
|---|---|---|---|
| Mood disorders (F30-F39) | 1,345 | 1,476 | 131 |
| Mental and behavioural disorders due to psychoactive substance use (F10-F19) | 923 | 1,047 | 124 |
| Suicide ideation (R458) | 565 | 678 | 113 |
| Anxiety and stress-related disorders (F40-F48) | 546 | 654 | 108 |
| Findings of alcohol, drugs and other substances in blood (R78) | 467 | 541 | 74 |

a. This table includes coroner-certified deaths only.

### Changes to intentional self-harm associated causes for 2017 - preliminary and revised, coroner-certified deaths (a)

24 Associated causes may also provide critical insight into deaths due to Accidental drug poisoning (X40-X44). The table below shows the top 5 largest increases in associated causes for Accidental drug poisonings (X40-X44). As additional evidence and documentation was added to the NCIS, there were 101 accidental drug overdoses where a mental and behavioural disorder due to psychoactive substance use (F10-F19), such as addiction or chronic substance misuse, was identified. Mood disorders (F30-F39) were identified as a factor in 66 accidental drug-induced deaths via the revisions process, and Anxiety and stress-related disorders (F40-F48) were identified as a factor in 55 deaths.

| Cause of death and ICD-10 code | Preliminary (no.) | Revised (no.) | Change (no.) |
|---|---|---|---|
| Mental and behavioural disorders due to psychoactive substance use (F10-F19) | 733 | 834 | 101 |
| Mood disorders (F30-F39) | 316 | 382 | 66 |
| Anxiety and stress-related disorders (F40-F48) | 191 | 246 | 55 |
| Ischaemic heart disease (I20-I25) | 127 | 147 | 20 |
| Schizophrenia, schizotypal and delusional disorders (F20-F29) | 115 | 132 | 17 |

a. This table includes coroner-certified deaths only.
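A footnote to the drug-type table earlier in this section notes that drug categories are not mutually exclusive, so per-drug figures cannot be summed to a death count. A minimal sketch of the distinction between counts of mentions and counts of deaths, using entirely hypothetical certificate records (the codes are real ICD-10 drug categories, but the data is illustrative only):

```python
# Hypothetical multiple-cause records: each set holds the drug codes
# (e.g. T424 = benzodiazepines) mentioned on one death certificate.
deaths = [
    {"T424", "T402"},  # two drug types recorded against one death
    {"T424"},
    {"T407", "T424"},
]

# Counts of mentions per drug type -- categories are not mutually exclusive.
mentions = {}
for record in deaths:
    for code in record:
        mentions[code] = mentions.get(code, 0) + 1

print(sorted(mentions.items()))  # [('T402', 1), ('T407', 1), ('T424', 3)]
print(sum(mentions.values()))    # 5 mentions in total...
print(len(deaths))               # ...across only 3 underlying deaths
```

This is why summing the "mentions" rows of the drug-type table would over-count the number of drug-induced deaths.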
## Technical note - updates to 2016 and 2017 suicide data

1 As part of the ABS's revisions process for Causes of Death, the ABS updates causes for coroner-certified deaths at 12 and 24 months after initial processing, to reflect the latest available information. Revisions have now been applied to 2016 and 2017 data. As coronial investigations into deaths due to suspected suicide can be extensive, suicide is a cause of death which may be more heavily impacted by revisions. It is important from a public health perspective to have accurate counts of suicides. As such, this technical note focusses on how the revisions process has changed suicide counts in 2016 and 2017.

2 Over time there has been a reduction in the number of deaths that are reassigned to suicide through the revisions process. In 2006 and 2007, the first years for which revisions were applied, the number of suicide deaths increased by 17.7% and 18.5% respectively. In 2016, the final suicide count was 1.5% higher than the preliminary count. Several factors have contributed to the increased quality of preliminary data, including enhanced coding practices enabling greater use of documents available on the National Coronial Information System (NCIS), and more timely report attachment.

### 2016 final suicide count

3 The final number of deaths due to suicide recorded for 2016 is 2,909, a net increase of 43 deaths (1.5%) from the preliminary count of 2,866. There was an increase of 45 suicides over the first revision period and a reduction of 2 suicides in the second revision period.

4 Deaths which were reassigned to suicide through the revisions process were most likely to have been initially coded to an Accidental poisoning (X40-X44) (22 deaths) or an Event of undetermined intent (Y10-Y34) (19 deaths). There were 18 deaths that were initially coded to Other ill-defined and unspecified causes of mortality (R99) and later identified as suicide deaths through the revisions process.
There were also some minor changes in the recorded mechanism of death, associated with additional information becoming available, especially toxicology and pathology reports.

### 2017 revised suicide count

5 The revised number of suicides in 2017 is 3,197, a net increase of 69 suicide deaths (2.2%) over the first year of the revisions process. Most deaths reassigned to suicide were initially coded to an Event of undetermined intent (Y10-Y34) (32 deaths). There were also 20 deaths reassigned from each of Accidental poisoning (X40-X44) and Other ill-defined and unspecified causes of mortality (R99). The table below shows the total suicide counts for Australia at each stage of the revisions process for 2016 and 2017.

6 New South Wales (NSW) recorded the largest increase in deaths due to suicide for the 2017 revisions period. The revised number of suicides for NSW in 2017 is 929, a net increase of 49 (5.6%). Updated suicide data for jurisdictions is provided at the end of this technical note.

| Reference year | Preliminary (no.) | Revised (no.) | Final (no.) |
|---|---|---|---|
| 2016 | 2,866 | 2,911 | 2,909 |
| 2017 | 3,128 | 3,197 | na |

na = not applicable

a. Intentional self-harm includes ICD-10 codes X60-X84 and Y87.0. Care needs to be taken in interpreting figures relating to suicide. See Explanatory Notes 91-100.
b. All causes of death data from 2006 onward are subject to a revisions process - once data for a reference year are 'final', they are no longer revised. Affected data in this table are: 2016 (final) and 2017 (revised). See Explanatory Notes 59-62 and 2016 Final Data (Technical Note) and 2017 Revised Data (Technical Note) in this publication.
c. Since 2015 the release of Causes of Death, Australia has occurred 6 months earlier, representing a significant change in processing of the national mortality dataset. For further information regarding changes to ABS coding processes, see A More Timely Annual Collection: Changes to ABS Processes (Technical Note) in Causes of Death, Australia, 2015 (cat. no. 3303.0).

### Count of suicides throughout revisions - 2016 and 2017(a)(b)(c)

7 The number and age-standardised death rate of deaths due to intentional self-harm by state and territory from 2009 to 2018 are shown in the tables below. These tables provide an updated time series that includes the revisions for 2016 and 2017 and should now be used in preference to those published in September 2018. A more detailed table which includes revised suicide counts by mechanism (ICD-10 codes X60-X84 and Y87.0) is provided in the Revisions data cube in the Data downloads section of this publication. Further tabulations are available on request. Please contact the National Information and Referral Service on 1300 135 070.

**MALES**

| | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|---|
| NSW | 466 | 520 | 465 | 526 | 523 | 620 | 637 | 624 | 717 | 684 |
| Vic. | 434 | 426 | 401 | 391 | 394 | 509 | 514 | 456 | 446 | 440 |
| Qld | 415 | 441 | 438 | 477 | 519 | 498 | 579 | 532 | 611 | 618 |
| SA | 138 | 157 | 167 | 150 | 152 | 186 | 170 | 164 | 163 | 154 |
| WA | 218 | 250 | 229 | 271 | 252 | 277 | 295 | 269 | 310 | 285 |
| Tas. | 59 | 47 | 51 | 57 | 54 | 56 | 66 | 67 | 61 | 62 |
| NT | 31 | 39 | 38 | 41 | 22 | 33 | 31 | 38 | 37 | 39 |
| ACT | 23 | 34 | 23 | 17 | 28 | 28 | 36 | 20 | 44 | 38 |
| Australia | 1,785 | 1,914 | 1,812 | 1,930 | 1,944 | 2,208 | 2,329 | 2,171 | 2,390 | 2,320 |

**FEMALES**

| | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|---|
| NSW | 157 | 154 | 152 | 201 | 195 | 212 | 202 | 198 | 212 | 215 |
| Vic. | 142 | 132 | 125 | 123 | 139 | 149 | 164 | 181 | 176 | 153 |
| Qld | 110 | 147 | 140 | 154 | 157 | 160 | 182 | 156 | 201 | 168 |
| SA | 47 | 40 | 45 | 48 | 51 | 57 | 64 | 57 | 63 | 58 |
| WA | 61 | 63 | 80 | 96 | 84 | 90 | 107 | 104 | 108 | 98 |
| Tas. | 20 | 17 | 23 | 14 | 20 | 13 | 18 | 26 | 19 | 16 |
| NT | 6 | 6 | 6 | 7 | 11 | 23 | 17 | 8 | 14 | 8 |
| ACT | 9 | 7 | 10 | 7 | 9 | 10 | 10 | 8 | 14 | 9 |
| Australia | 552 | 566 | 581 | 650 | 666 | 714 | 764 | 738 | 807 | 726 |

**PERSONS**

| | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|---|
| NSW | 623 | 674 | 617 | 727 | 718 | 832 | 839 | 822 | 929 | 899 |
| Vic. | 576 | 558 | 526 | 514 | 533 | 658 | 678 | 637 | 622 | 593 |
| Qld | 525 | 588 | 578 | 631 | 676 | 658 | 761 | 688 | 812 | 786 |
| SA | 185 | 197 | 212 | 198 | 203 | 243 | 234 | 221 | 226 | 212 |
| WA | 279 | 313 | 309 | 367 | 336 | 367 | 402 | 373 | 418 | 383 |
| Tas. | 79 | 64 | 74 | 71 | 74 | 69 | 84 | 93 | 80 | 78 |
| NT | 37 | 45 | 44 | 48 | 33 | 56 | 48 | 46 | 51 | 47 |
| ACT | 32 | 41 | 33 | 24 | 37 | 38 | 46 | 28 | 58 | 47 |
| Australia | 2,337 | 2,480 | 2,393 | 2,580 | 2,610 | 2,922 | 3,093 | 2,909 | 3,197 | 3,046 |

a. Intentional self-harm includes ICD-10 codes X60-X84 and Y87.0. Care needs to be taken in interpreting figures relating to suicide. See Explanatory Notes 91-100.
b. All causes of death data from 2006 onward are subject to a revisions process - once data for a reference year are 'final', they are no longer revised.
Affected data in this table are: 2009-2016 (final), 2017 (revised), 2018 (preliminary). See Explanatory Notes 59-62 and 2016 Final Data (Technical Note) and 2017 Revised Data (Technical Note) in this publication.
c. Since 2015 the release of Causes of Death, Australia has occurred 6 months earlier, representing a significant change in processing of the national mortality dataset. For further information regarding changes to ABS coding processes, see A More Timely Annual Collection: Changes to ABS Processes (Technical Note) in Causes of Death, Australia, 2015 (cat. no. 3303.0).

### Intentional self-harm, number of deaths, states and territories of usual residence, 2009-2018(a)(b)(c)

**MALES**

| | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|---|
| NSW | 13.4 | 14.7 | 12.9 | 14.5 | 14.1 | 16.5 | 16.8 | 16.1 | 18.2 | 17.1 |
| Vic. | 16.1 | 15.6 | 14.3 | 13.9 | 13.5 | 17.1 | 17.2 | 14.6 | 14.1 | 13.6 |
| Qld | 19.5 | 20.4 | 19.9 | 21.2 | 22.7 | 21.4 | 24.9 | 22.4 | 25.3 | 25.3 |
| SA | 17.4 | 19.1 | 20.4 | 18.0 | 18.0 | 22.4 | 19.4 | 19.5 | 18.7 | 17.9 |
| WA | 19.3 | 21.7 | 19.1 | 22.1 | 19.8 | 21.7 | 22.9 | 21.0 | 24.1 | 21.9 |
| Tas. | 24.0 | 19.6 | 20.2 | 22.1 | 21.4 | 21.8 | 25.7 | 25.5 | 24.0 | 23.2 |
| NT | 28.1 | 31.6 | 30.5 | 31.0 | 18.5 | 24.6 | 27.2 | 30.7 | 27.9 | 31.3 |
| ACT | 13.1 | 19.2 | 14.0 | 8.7 | 14.6 | 14.5 | 17.9 | 10.8 | 21.8 | 18.3 |
| Australia | 16.5 | 17.5 | 16.2 | 17.0 | 16.8 | 18.8 | 19.7 | 18.0 | 19.5 | 18.6 |

**FEMALES**

| | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|---|
| NSW | 4.3 | 4.2 | 4.0 | 5.3 | 5.1 | 5.4 | 5.3 | 5.0 | 5.2 | 5.2 |
| Vic. | 5.1 | 4.6 | 4.3 | 4.4 | 4.6 | 4.9 | 5.3 | 5.7 | 5.4 | 4.7 |
| Qld | 5.0 | 6.6 | 6.2 | 6.8 | 6.7 | 6.7 | 7.5 | 6.3 | 8.0 | 6.6 |
| SA | 5.8 | 4.7 | 5.5 | 5.7 | 6.0 | 6.5 | 7.5 | 6.8 | 7.3 | 6.3 |
| WA | 5.4 | 5.4 | 6.7 | 7.9 | 6.7 | 7.2 | 8.4 | 8.0 | 8.3 | 7.6 |
| Tas. | 7.6 | np | 8.9 | np | 7.4 | np | np | 9.2 | np | np |
| NT | np | np | np | np | np | 18.5 | np | np | np | np |
| ACT | np | np | np | np | np | np | np | np | np | np |
| Australia | 5.0 | 5.0 | 5.1 | 5.6 | 5.6 | 6.0 | 6.3 | 6.0 | 6.4 | 5.7 |

**PERSONS**

| | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|---|
| NSW | 8.7 | 9.3 | 8.4 | 9.8 | 9.5 | 10.8 | 10.9 | 10.5 | 11.6 | 11.1 |
| Vic. | 10.5 | 10.1 | 9.2 | 9.0 | 8.9 | 10.9 | 11.1 | 10.1 | 9.6 | 9.1 |
| Qld | 12.1 | 13.4 | 12.9 | 13.9 | 14.6 | 14.0 | 16.0 | 14.2 | 16.5 | 15.8 |
| SA | 11.5 | 11.8 | 12.9 | 11.7 | 11.9 | 14.4 | 13.3 | 13.0 | 12.9 | 12.0 |
| WA | 12.3 | 13.6 | 12.9 | 14.9 | 13.3 | 14.4 | 15.6 | 14.5 | 16.2 | 14.7 |
| Tas. | 15.4 | 13.0 | 14.1 | 13.7 | 14.2 | 12.8 | 16.2 | 17.1 | 15.6 | 14.5 |
| NT | 17.4 | 18.8 | 18.5 | 19.2 | 14.3 | 21.7 | 20.3 | 19.2 | 20.2 | 19.5 |
| ACT | 8.9 | 11.3 | 9.3 | 6.2 | 9.6 | 9.8 | 11.4 | 7.2 | 14.1 | 11.0 |
| Australia | 10.7 | 11.2 | 10.5 | 11.2 | 11.1 | 12.3 | 12.9 | 11.9 | 12.8 | 12.1 |

a. Intentional self-harm includes ICD-10 codes X60-X84 and Y87.0. Care needs to be taken in interpreting figures relating to suicide. See Explanatory Notes 91-100.
b. All causes of death data from 2006 onward are subject to a revisions process - once data for a reference year are 'final', they are no longer revised. Affected data in this table are: 2009-2016 (final), 2017 (revised), 2018 (preliminary). See Explanatory Notes 59-62 and 2016 Final Data (Technical Note) and 2017 Revised Data (Technical Note) in this publication.
c. Since 2015 the release of Causes of Death, Australia has occurred 6 months earlier, representing a significant change in processing of the national mortality dataset. For further information regarding changes to ABS coding processes, see A More Timely Annual Collection: Changes to ABS Processes (Technical Note) in Causes of Death, Australia, 2015 (cat. no. 3303.0).
d. Age-standardised death rates (SDRs) enable the comparison of death rates between populations with different age structures. The SDRs in this table are presented on a per 100,000 population basis, using the estimated mid-year population (30 June) for each year. See Explanatory Notes 46-49 and the Glossary in Causes of Death, Australia, 2018 (cat. no. 3303.0) for further information.
e. Age-standardised death rates for the 2016 reference year have been calculated using 2016 Census-based population estimates. See Explanatory Notes 46-49.

### Intentional self-harm, age-standardised death rate, states and territories of usual residence, 2009-2018(a)(b)(c)(d)(e)

## Glossary

#### Aboriginal and/or Torres Strait Islander

Persons who identify themselves as being of Aboriginal and/or Torres Strait Islander origin.

#### Aboriginal and/or Torres Strait Islander death

The death of a person who is recorded as being an Aboriginal, Torres Strait Islander, or both on the Death Registration Form (DRF). The Indigenous status is also derived from the Medical Certificate of Cause of Death (MCCD) for South Australia, Western Australia, Tasmania, the Northern Territory and the Australian Capital Territory from 2007 and for Queensland from 2015.
If the Indigenous status reported in the DRF does not agree with that in the MCCD, an identification from either source that the deceased was an Aboriginal and/or Torres Strait Islander person is given preference over non-Indigenous. For New South Wales and Victoria, the Indigenous status of the deceased is derived from the DRF only.

#### Age-specific death rate

Age-specific death rates (ASDRs) are the number of deaths (occurred or registered) during the reference year at a specified age per 100,000 of the estimated resident population of the same age at the mid-point of the year (30 June). ASDRs for deaths under 1 year of age are calculated per 1,000 live births for that year.

#### All births

All births comprises all live births plus all fetal deaths (gestation at least 20 weeks or birth weight at least 400 grams) for a specific year. This is the denominator used in calculating perinatal and fetal death rates in this publication. For data tables pertaining to the World Health Organization definition of a perinatal death, all births comprises all live births plus all fetal deaths with a gestation of at least 22 weeks or a birth weight of at least 500 grams. See Appendix 1 (Data used in calculating death rates) for further information.

#### Associated causes of death

All causes listed on a death certificate other than the underlying cause.

#### Australian Statistical Geography Standard (ASGS)

The ASGS provides a common framework of statistical geography and thereby enables the production of statistics that are comparable and can be spatially integrated. See Explanatory Notes 22-24 in this publication for more information.

#### Cause of death

The causes of death to be entered on the Medical Certificate of Cause of Death are all those diseases, morbid conditions or injuries that either resulted in or contributed to death, and the circumstances of the accident or violence that produced any such injuries.
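The age-specific death rate defined above is a straightforward rate calculation. A minimal sketch (the function name and all figures are hypothetical, for illustration only, not ABS data):

```python
def age_specific_death_rate(deaths, population, per=100_000):
    """Deaths at a given age per `per` of the mid-year population of that age."""
    return round(deaths / population * per, 1)

# Hypothetical figures: 1,200 deaths at ages 75-79 against a mid-year
# population of 800,000 persons of that age.
print(age_specific_death_rate(1200, 800_000))  # 150.0

# Deaths under 1 year of age use live births as the denominator, per 1,000.
print(age_specific_death_rate(95, 30_000, per=1_000))  # 3.2
```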
#### Certifier type

Deaths may be certified by either a medical practitioner, using the Medical Certificate of Cause of Death or Medical Certificate of Cause of Perinatal Death, or a coroner. Natural causes are predominantly certified by doctors, whereas external and unknown causes are usually certified by a coroner. However, some deaths from natural causes are referred to coroners for investigation, for example, unaccompanied deaths. See Explanatory Notes 3-6 in this publication for more information.

#### Confidentialised

From 2006, data cells with small values have been randomly assigned to protect confidentiality. As a result some totals will not equal the sum of their components. It is important to note that cells with 0 values have not been affected by confidentialisation. Data presented at the Australia level (with the exception of youth suicide tables) is not confidentialised - the death counts presented are exact counts.

#### Coroner-certified deaths

Deaths that were certified by a coroner. Deaths certified by a coroner represent 11-14% of all deaths each year. Coroner cases remain open while cause of death investigations are undertaken, and are closed when coronial investigations are complete. Following completion, causes of death information is passed to the Registrar of Births, Deaths and Marriages, as well as to the National Coronial Information System (NCIS). All coroner-certified deaths registered after 1 January 2006 are subject to a revisions process. For more information see Explanatory Notes 59-62 and the Causes of Death Revisions, 2015 Final Data Technical Note in Causes of Death, Australia, 2017.

#### Country of birth

The classification of countries used is the Standard Australian Classification of Countries (SACC). For more detailed information refer to the Standard Australian Classification of Countries (SACC) (cat. no. 1269.0).
#### Counts of death

A form of multiple cause of death analysis that counts the number of people who have died with a particular disease/s or disorder/s.

#### Counts of mentions

A form of multiple cause of death analysis that counts the total number of times a particular disease/s or disorder/s is listed on death certificates.

#### Crude death rate

The crude death rate (CDR) is the number of deaths registered during the reference year per 100,000 estimated resident population at 30 June.

#### Data cubes

Data cubes are a series of spreadsheets which present Causes of Death data. Causes of Death data cubes can be found on the web page under the Data downloads section.

#### Death

Death is the permanent disappearance of all evidence of life after birth has taken place. The definition excludes all deaths prior to live birth. For the purposes of the Deaths and Causes of Death collections of the Australian Bureau of Statistics (ABS), a death refers to any death that occurs in, or en route to, Australia and is registered with a state or territory Registry of Births, Deaths and Marriages.

#### Doctor-certified deaths

Deaths that were certified by a doctor or medical practitioner and were not required to be referred to a coroner. Deaths certified by a doctor represent around 86-89% of all deaths each year. Doctor-certified deaths are not subject to the revisions process.

#### Early neonatal death

Death of a live born baby within seven days of birth.

#### Estimated resident population (ERP)

The official measure of the population of Australia is based on the concept of residence. It refers to all people, regardless of nationality or citizenship, who usually live in Australia, with the exception of foreign diplomatic personnel and their families. It includes usual residents who are overseas for fewer than 12 months over a 16-month period and excludes those who are in Australia for fewer than 12 months over a 16-month period.
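The crude death rate defined above follows the same per-100,000 pattern. A minimal sketch: the 160,909 registered deaths figure appears in the 2017 revisions tables earlier in this publication, but the ERP denominator here is hypothetical, so the resulting rate is illustrative only:

```python
def crude_death_rate(deaths_registered, erp_30_june, per=100_000):
    """Deaths registered in the reference year per `per` of the ERP at 30 June."""
    return round(deaths_registered / erp_30_june * per, 1)

# 160,909 registered deaths (2017, from the revisions tables above) against
# a hypothetical estimated resident population of 24.6 million.
print(crude_death_rate(160_909, 24_600_000))  # 654.1
```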
#### External causes of death

Deaths due to causes external to the body (for example, suicide, transport accidents, falls, poisoning etc.). These relate to ICD-10 codes V01-Y98.

#### External territories

Australian external territories include the Australian Antarctic Territory, Coral Sea Islands Territory, Territory of Ashmore and Cartier Islands, and Territory of Heard and McDonald Islands.

#### Fetal death

A fetal death is a death prior to the complete expulsion or extraction from its mother of a product of conception of at least 20 completed weeks of gestation or with a birth weight of at least 400 grams (or at least 22 weeks gestation or 500 grams birth weight when using the World Health Organization definition of a fetal death). The death is indicated by the fact that after such separation the fetus does not breathe or show any other evidence of life, such as beating of the heart, pulsation of the umbilical cord, or definite movement of voluntary muscles. See Explanatory Notes 16-19 for further information.

#### Fetal death rate

The number of fetal deaths in a reference year per 1,000 all births (live births plus fetal deaths of relevant scope) in the same year. See 'All births' above.

#### ICD

International Statistical Classification of Diseases and Related Health Problems. The purpose of the ICD is to permit the systematic recording, analysis, interpretation and comparison of mortality and morbidity data collected in different countries or areas and at different times. The ICD, which is endorsed by the World Health Organization (WHO), is primarily designed for the classification of diseases and injuries with a formal diagnosis. The ICD-10 is the current classification system, which is structured using an alphanumeric coding scheme. Each disease or health problem listed on the death certificate is assigned a 3-character identification code. Cause of death statistics can be produced for aggregates of these, for example, chapter level (letter), 2-character code (first two characters of the assigned code), and 3-character code (first three characters of the assigned code). See Explanatory Notes 25-29 for more information on ICD. Further information is also available from the WHO website.

#### Indirect standardised death rate (ISDR)

See Standardised death rate (SDR).

#### Infant death

An infant death is the death of a live born child who dies before reaching his/her first birthday.

#### Infant death rate

The number of deaths of children under one year of age in a reference year per 1,000 live births in the same reference year.

#### Intent

The manner or intent of an injury that leads to death is determined by whether the injury was inflicted purposefully or not (in some cases, intent cannot be determined). The determination of intent for each death is essential for assigning the appropriate ICD-10 code. See Explanatory Notes 54-58 for more information.

#### Late neonatal death

Death of a live born baby after seven completed days and within 28 completed days of birth.

#### Leading causes of death

Ranking causes of death is a useful method for describing patterns of mortality in a population and allows comparison over time and between populations. The ranking of leading causes of death in this publication is based on research presented in the Bulletin of the World Health Organization, Volume 84, Number 4, April 2006, 297-304. From 2016 reference year data onwards, an amendment has been made to the leading cause grouping for Malignant neoplasm of colon, sigmoid, rectum and anus (C18-C21) to also include Malignant neoplasm: Intestinal tract, part unspecified (C26.0). See Explanatory Note 40 for further information.
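The fetal and infant death rate definitions above are both per-1,000 rates but use different denominators: the fetal rate uses all births (live births plus in-scope fetal deaths), while the infant rate uses live births only. A minimal sketch with hypothetical figures (not ABS data):

```python
def fetal_death_rate(fetal_deaths, live_births):
    """Fetal deaths per 1,000 all births, where 'all births' means
    live births plus fetal deaths of relevant scope."""
    all_births = live_births + fetal_deaths
    return fetal_deaths / all_births * 1_000

def infant_death_rate(infant_deaths, live_births):
    """Deaths of children under one year of age per 1,000 live births
    in the same reference year."""
    return infant_deaths / live_births * 1_000

# Hypothetical figures for illustration only:
# fetal_death_rate(2_100, 300_000)  -> about 6.95 per 1,000 all births
# infant_death_rate(990, 300_000)   -> 3.3 per 1,000 live births
```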
#### Live births

A live birth is the complete expulsion or extraction of a child from its mother as a product of conception, irrespective of the duration of pregnancy, which after such separation, breathes or shows any other evidence of life, such as beating of the heart, pulsation of the umbilical cord or definite movement of voluntary muscles, whether or not the umbilical cord has been cut or the placenta is attached; each product of such a birth is considered live born. This is the denominator used in calculating neonatal and infant death rates in this publication, and contributes to the denominator used for calculating fetal and total perinatal death rates. See Explanatory Notes 102-105.

#### Mechanism of death

Mechanisms of external cause of death by which a person may die include: poisoning; hanging and other threats to breathing; drowning and submersion; firearms; contact with sharp objects; and falls.

#### Median age at death

This refers to the age at death at the 50th percentile for the relevant demographic group.

#### Morbid train of events

The events and diseases that lead to death.

#### Multiple causes of death

All morbid conditions, diseases and injuries entered on the death certificate. These include those involved in the morbid train of events leading to death, which were classified as the underlying cause, the immediate cause, or any intervening causes, and those conditions that contributed to death but were not related to the disease or condition causing death. For deaths where the underlying cause was identified as an external cause (for example, injury or poisoning), multiple causes include circumstances of injury and the nature of injury as well as any other conditions reported on the death certificate. See Explanatory Notes 106-108 for further information.
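Multiple cause of death analysis underlies the distinction between counts of deaths and counts of mentions defined earlier in this glossary: the former counts people, the latter counts listed codes. A hypothetical sketch (the records and the prefix-matching convention are invented for illustration, not the ABS's coding rules):

```python
# Each hypothetical record lists all ICD-10 codes on one death certificate.
records = [
    ["E11", "I25"],     # diabetes and ischaemic heart disease
    ["E11", "E11.2"],   # two diabetes-related codes on one certificate
    ["I25"],
]

def count_of_deaths(records, prefix):
    """Number of people who died with at least one code matching prefix."""
    return sum(any(code.startswith(prefix) for code in rec) for rec in records)

def count_of_mentions(records, prefix):
    """Total number of times codes matching prefix appear across certificates."""
    return sum(code.startswith(prefix) for rec in records for code in rec)

# count_of_deaths(records, "E11")   -> 2 (two people died with diabetes listed)
# count_of_mentions(records, "E11") -> 3 (three diabetes-related codes listed)
```

The second certificate illustrates why mentions can exceed deaths: one person contributes two diabetes-related codes.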
#### National Coronial Information System (NCIS)

The NCIS is a national data storage system which contains information about all deaths referred to a coroner since July 2000 (January 2001 for Queensland).

#### Natural cause of death

Deaths due to diseases (for example diabetes, cancer, heart disease etc.) that are not external or unknown.

#### Neonatal death

A neonatal death is the death of a live born baby within 28 completed days of birth.

#### Neonatal death rate

The number of deaths in a reference year of live born babies within 28 completed days of birth per 1,000 live births in the same reference year.

#### Neonatal period

The neonatal period commences at birth and ends 28 completed days after birth.

#### Other territories

Following the 1992 amendments to the Acts Interpretation Act, the Indian Ocean Territories of Christmas Island and the Cocos (Keeling) Islands are included as part of geographic Australia. As of 1 July 2016, Norfolk Island is also considered part of geographic Australia, due to the introduction of the Norfolk Island Legislation Amendment Act 2015. Jervis Bay Territory (previously included with the Australian Capital Territory), Christmas Island, the Cocos (Keeling) Islands and Norfolk Island appear as "Other Territories", a category created at the same level as states and territories within the Australian Statistical Geography Standard (ASGS).

#### Perinatal death

A death that is either a fetal death (i.e. a death prior to the complete expulsion or extraction from its mother of a product of conception of 20 completed weeks of gestation or with a birth weight of at least 400 grams (or 22 weeks' gestation or 500 grams' birth weight according to World Health Organization scope)), or a neonatal death (i.e. death of a live born baby within 28 completed days of birth).

#### Perinatal death rate

For comparison and measuring purposes, perinatal deaths in this publication have also been expressed as rates. Perinatal death rates are the number of perinatal deaths in a reference year (i.e. fetal and neonatal deaths) per 1,000 all births in the same reference year. See 'All births'.

#### Perinatal period

The perinatal period commences at 20 weeks of gestation and ends within 28 completed days of birth.

#### Period of gestation

Period of gestation is measured from the first day of the last normal menstrual period to the date of birth and is expressed in completed weeks.

#### Post neonatal death

Death of a live born baby after 28 days and within one year of birth.

#### Rate difference

Rate difference is calculated by subtracting the standardised death rate for one group (such as all persons with a usual residence of Queensland) from the standardised death rate for the total relevant population (such as all persons with a usual residence of Australia).

#### Rate ratio

Rate ratio is calculated by dividing the standardised death rate for one group (such as all persons with a usual residence of Queensland) by the standardised death rate for the total relevant population (such as all persons with a usual residence of Australia).

#### Reference year

The year to which the presented data refer. For example, this publication presents data for the 2018 reference year, as well as some historical data for the 2009 to 2017 reference years. Data for a particular reference year includes all deaths registered in Australia for the reference year that are received by the ABS by the end of the March quarter of the subsequent year. For example, data for the 2018 reference year includes all deaths registered in Australia in 2018 that were received by the ABS by the end of March 2019. See Explanatory Notes 7-20 for more information about scope and coverage.

#### Registration year

Data presented on a year of registration basis relate to the date the death was registered with the relevant state or territory Registrar of Births, Deaths and Marriages.
In most cases the year of registration and year of occurrence for a particular death will be the same, but in some cases there may be a delay between occurrence and registration of death.

#### Registry of Births, Deaths and Marriages

Each state and territory has a Registry of Births, Deaths and Marriages. It is a legal requirement that all deaths are recorded by the relevant Registry for the state or territory in which the death occurred.

#### Reportable deaths

Deaths which are reported to a coroner. See Explanatory Note 5 for further information on what constitutes a reportable death.

#### Revisions process

When additional information about an 'open' coroner-certified death is received by the ABS, a more specific ICD-10 code may be applied, thereby 'revising' the cause of death. See Explanatory Notes 59-62, A More Timely Annual Collection: Changes to ABS Processes (Technical Note), and the Causes of Death Revisions, 2015 Final Data (Technical Note) and 2016 Revised Data (Technical Note) in Causes of Death, Australia, 2017 (cat. no. 3303.0).

#### Sex indeterminate

Sex indeterminate refers to deaths where the deceased has not been specified as male or female. Fetal deaths where sex is indeterminate are included in person totals only, where applicable.

#### Sex ratio

The number of males per 100 females. The sex ratio is defined for the total population, at birth, at death and among age groups by appropriately selecting the numerator and denominator of the ratio.

#### Standardised death rate (SDR)

Standardised death rates (SDRs) enable the comparison of death rates between populations with different age structures by relating them to a standard population. The current standard population is all persons in the Australian population at 30 June 2001. SDRs are expressed per 100,000 persons. There are two methods of calculating standardised death rates:

• The direct method - this is used when the populations under study are large and the age-specific death rates are reliable. It is the overall death rate that would have prevailed in the standard population if it had experienced at each age the death rates of the population under study.

• The indirect method - this is used when the populations under study are small and the age-specific death rates are unreliable or not known. It is an adjustment to the crude death rate of the standard population to account for the variation between the actual number of deaths in the population under study and the number of deaths that would have occurred if the population under study had experienced the age-specific death rates of the standard population.

Throughout this publication, when SDRs are produced for comparison between the Aboriginal and Torres Strait Islander population and the non-Indigenous population, they are produced according to the principles outlined in the Appendix: Principles on the use of direct age-standardisation, from Deaths, Australia, 2010 (cat. no. 3302.0). Rates based on a total persons death count of fewer than 20 deaths are not published, in accordance with Principle 3. Standardised death rates for the total population have been produced according to the same principles, with the main exception being the use of data up to the '85 years and over' age group.

#### State or territory of registration

State or territory of registration refers to the state or territory in which the death was registered. It is the state or territory in which the death occurred, but is not necessarily the deceased's state or territory of usual residence.

#### State or territory of usual residence

State or territory of usual residence refers to the state or territory in which the person has lived or intended to live for a total of six months or more in a given reference year.

#### Stillbirth

See fetal death.

#### Underlying cause of death

The disease or injury that initiated the train of morbid events leading directly to death.
Accidental and violent deaths are classified according to the external cause, that is, to the circumstances of the accident or violence which produced the fatal injury rather than to the nature of the injury.

#### Unknown cause of death

Deaths for which it is not possible to distinguish between a natural and an external cause.

#### Usual residence

Usual residence within Australia refers to the address at which the person has lived or intended to live for a total of six months or more in a given reference year.

#### Year of occurrence

Data presented on a year of occurrence basis relate to the date the death occurred rather than when it was registered with the relevant state or territory Registrar of Births, Deaths and Marriages. See Explanatory Notes 7-8 for more information.

#### Years of Potential Life Lost (YPLL)

YPLL measures the extent of 'premature' mortality, where 'premature' mortality is assumed to be any death at age 1-78 years inclusive. By estimating YPLL for deaths of people aged 1-78 years it is possible to assess the significance of specific diseases or trauma as a cause of premature death. See Explanatory Notes 42-45 for an explanation of the calculation of YPLL.

## Quality declaration - causes of death data, summary

### Institutional environment

For information on the institutional environment of the Australian Bureau of Statistics (ABS), including the legislative obligations of the ABS, financing and governance arrangements, and mechanisms for scrutiny of ABS operations, please see ABS Institutional Environment.

Statistics presented in Causes of Death, Australia, 2018 (cat. no. 3303.0) are sourced from death registrations administered by the various state and territory Registries of Births, Deaths and Marriages. It is a legal requirement of each state and territory that all deaths are registered.
Information about the deceased is supplied by a relative or other person acquainted with the deceased, or by an official of the institution where the death occurred, on a Death Registration Form. As part of the registration process, information on the cause of death is either supplied by the medical practitioner certifying the death on a Medical Certificate of Cause of Death, or supplied as a result of a coronial investigation. Death records are provided electronically to the ABS by individual Registrars on a monthly basis. Each death record contains both demographic data and medical information from the Medical Certificate of Cause of Death, where available. Information from coronial investigations is provided to the ABS through the National Coronial Information System (NCIS).

### Relevance

The ABS Causes of Death collection includes all deaths that occurred and were registered in Australia, including deaths of persons whose usual residence is overseas. Deaths registered on Norfolk Island from 1 July 2016 are also included due to the introduction of the Norfolk Island Legislation Amendment Act 2015. See Explanatory Note 13 in Causes of Death, Australia, 2018 (cat. no. 3303.0) for more information. Deaths of Australian residents that occurred outside Australia may be registered by individual Registrars, but are not included in ABS deaths or causes of death statistics.

From the 2007 reference year, the scope of the collection is:

• all deaths registered in Australia for the reference year and received by the ABS by the end of the March quarter of the subsequent year; and

• deaths registered prior to the reference year but not previously received from the Registrar nor included in any statistics reported for an earlier period.

For example, records received by the ABS during the March quarter of 2019 which were initially registered in 2018 or prior (but not forwarded to the ABS until 2019) are assigned to the 2018 reference year.
Any death registrations relating to the 2018 reference period which are received by the ABS after the end of the March 2019 quarter are assigned to the 2019 reference year.

Data in the Causes of Death collection include causes of death information, as well as some demographic items. Causes of death information is obtained from the Medical Certificate of Cause of Death (general deaths), the Medical Certificate of Cause of Perinatal Death (perinatal deaths) and the National Coronial Information System (coroner-certified deaths). Causes of death are coded according to the International Classification of Diseases (ICD).

Issues for causes of death data:

• The primary objective of the owner of the source data can differ from the information needs of the statistical users. Registrars of Births, Deaths and Marriages and coroners have legislative and administrative obligations to meet, as well as being the source of statistics. As a result, the population covered by the source data, the time reference period for some data, and the data items available in the registration system, may not align exactly with the requirements of users of the statistics.

• There can be differences between the defined scope of the population (i.e. every death occurring in Australia) and the actual coverage achieved by the registration system. Levels of registration can be influenced by external factors and coverage achieved will be influenced by the steps taken by the owners of death registration systems to ensure all deaths are registered. For example, a death certificate may need to be produced in order to finalise certain other legal requirements, e.g. finalisation of a person's estate.

• There are eight different registration systems within Australia. Each jurisdiction's registration system, while similar in many ways, also has a number of differences.
These can include the types of data items collected, the definition of those collected data items, and business processes undertaken within Registries of Births, Deaths and Marriages, including coding and quality assurance practices.

### Timeliness

The Causes of Death, Australia dataset is released annually, approximately nine months after the end of the reference period and in conjunction with Deaths, Australia, 2018 (cat. no. 3302.0). Prior to the release of the 2015 dataset, causes of death data had been released approximately 15 months after the end of the reference period; however, changes to ABS processes allowed for more timely access to Australian mortality data. For more information see A more timely annual collection: changes to ABS processes (Technical Note) in Causes of Death, Australia, 2015 (cat. no. 3303.0).

There is a focus on fitness for purpose when causes of death statistics are released. To meet user requirements for accurate causes of death data it is necessary to obtain information from other administrative sources before all information for the reference period is available. This specifically applies to coroner-certified deaths, where extra information relating to the death is provided through police, toxicology, autopsy and coronial finding reports. A balance therefore needs to be maintained between accuracy (completeness) of data and timeliness. The ABS provides the data in a timely manner, ensuring that all coding possible can be undertaken with accuracy prior to publication.

As coroner-certified deaths can have ill-defined causes of death until a case is closed within the coronial system, a revisions process was introduced that applies to all coroner-certified deaths registered after 1 January 2006 to enhance the cause of death output for open coroner cases. This process enables the use of additional information for coding relating to coroner-certified deaths at approximately 12 and/or 24 months after initial processing.
See Explanatory Notes 59-62 in this publication and the Causes of Death Revisions, 2015 Final Data (Technical Note) and 2016 Revised Data (Technical Note) in Causes of Death, Australia, 2017 (cat. no. 3303.0) for more information on the revisions process.

Causes of Death, Australia, 2018 includes preliminary data for 2018 and 2017, revised data for 2016 and final data for 2015 and prior years. Revised output for the 2016 and 2017 data will be released in early 2020.

Issues for causes of death data:

• A balance is maintained between accuracy (completeness) and timeliness, taking into account the different needs of users and maximising the fitness for purpose of the data. Documentation including explanatory notes and technical notes is provided for causes of death statistics. These should be used to assess the fitness for purpose of the data to ensure informed decisions can be made.

• The timeliness of administrative information that supports cause of death coding can be impacted by legislative requirements, systems and the resources available to maintain/update systems.

### Accuracy

Non-sampling errors may influence accuracy in datasets which constitute a complete census of the population, such as the Causes of Death collection. Non-sampling error arises from inaccuracies in collecting, recording and processing the data. Every effort is made to minimise non-sampling error by working closely with data providers, undertaking quality checks throughout the data processing cycle, training processing staff, and using efficient data processing procedures.

The ABS has implemented a revisions process that applies to all coroner-certified deaths registered after 1 January 2006. This is a change from preceding years, where all ABS processing of causes of death data for a particular reference period was finalised approximately 13 months after the end of the reference period.
The revisions process enables the use of additional information relating to coroner-certified deaths as it becomes available over time, resulting in increased specificity of the assigned ICD-10 codes. See Explanatory Notes 59-62 in this publication and the Causes of Death Revisions, 2015 Final Data (Technical Note) and 2016 Revised Data (Technical Note) in Causes of Death, Australia, 2017 (cat. no. 3303.0) for more information on the revisions process.

Issues for causes of death data:

• Completeness of the dataset, e.g. the impact of registration lags, processing lags and duplicate records.

• Extent of coverage of the population (while all deaths are legally required to be registered, some cases may not be registered for an extended time).

• Some lack of consistency in the application of questions or forms used by administrative data providers.

• The level of specificity and completeness in coronial reports or the doctor's findings on the Medical Certificate of Cause of Death.

• Errors in the coding of the causes of a death to ICD-10. The majority of cause of death coding is undertaken through an automated coding process, which is estimated to have a very high level of accuracy. Human coding can be subject to error; however, the ABS mitigates this risk through rigorous coder training, detailed documentation and instructions for coding complex or difficult cases, and extensive data quality checks.

• Cases where coronial proceedings remain open at the end of ABS processing for a reference period are potentially assigned a less specific ICD-10 cause of death code.

• Where coroner-certified deaths become closed during the revisions process, additional information is often made available, making more specific coding possible.

### Coherence

Use of the explanatory notes and technical notes released with the statistics is important for assessing coherence within the dataset and when comparing the statistics with data from other sources.
Changing business rules over time and/or across data sources can affect consistency and hence interpretability of statistical output, especially when assessing time series data.

The ICD is the international standard classification for epidemiological purposes and is designed to promote international comparability in the collection, processing, classification, and presentation of cause of death statistics. The classification is used to classify diseases, conditions, injuries and external events as recorded on many types of medical records as well as death records. It is used for both morbidity and mortality purposes, with the morbidity version incorporating clinical modifications. The ICD is revised periodically to incorporate changes in the medical field. The 10th revision of the ICD (ICD-10) was used for coding the 2018 data.

Issues for causes of death data:

• Changes to questions, scope etc. over time can affect the consistency of data collected over the period, even when the source of the data is the same. These changes can be the result of legislative or program objective changes.

• The completeness or quality of older versus newer data can also impact on comparisons across time or domains.

• Statistical concepts for questions are not always suited to the administrative purpose or the means of collection.

### Interpretability

In 2014, the ABS implemented Iris, a new automated coding software product for assisting in the processing of cause of death data. This software has been used from the 2013 reference year cause of death data onwards. With the introduction of the new coding software, the ABS also implemented the most up-to-date versions of the ICD-10 when coding the 2013 and 2014-2017 data (using the 2013 and 2015 versions, respectively), and improved a number of coding practices to realign with international best practice.
As part of this, the ABS began a review of its method of coding perinatal deaths which, for the 2013-2018 data published in this issue, has meant a change to the method used for assigning an underlying cause of death to neonatal deaths.

The 2018 reference year cause of death data presented in this publication was coded using the 2016 version (version 5.4.0) of the Iris software. This system replaced Iris version 4.4.1, which was used to code the 2013-2017 cause of death data. Version 5.4.0 of the Iris software applied the World Health Organization (WHO) ICD-10 updates and a new underlying cause of death processing system called the Multicausal and Unicausal Selection Engine (MUSE). This has resulted in changes to the automated coding path for some causes of death. The implementation of MUSE, alongside the updates to the ICD-10, brings the Australian mortality data into line with international best practice. The ABS has also implemented additional validation processes alongside MUSE to ensure maximum alignment with WHO guidelines and coding rules. Data users are advised to refer to the technical notes in this publication for further details.

The Causes of Death publication contains detailed Explanatory Notes, Technical Notes, Appendices and a Glossary that provide information on the data sources, terminology, classifications and other technical aspects associated with these statistics.

Issues for causes of death data:

• Information on some aspects of statistical quality may be hard to obtain as information on the source data has not been kept over time. This is related to the administrative rather than statistical purpose of the collection of the source data.

• Changes to data processing practices, such as the implementation of new software, updates to causes of death classifications, or changes to local coding practices, should be taken into consideration when comparing data over time.
### Accessibility

In addition to the information provided in this publication, a series of data cubes are also available, providing detailed breakdowns by causes of death. The ABS observes strict confidentiality protocols as required by the Census and Statistics Act (1905). This may restrict access to data at the very detailed level sought by some users.

Issues for causes of death data:

• Often an administrative source can provide the basis for statistical information which has a different nature and focus to the source's principal administrative purpose. There may be a reduced focus or availability of funding within the program to ensure the accessibility of information for non-administrative uses.

• Each jurisdiction has its own legislation governing death registration as well as that governing the coronial process. Jurisdictions also have privacy legislation which governs the accessibility of the statistics.

• A national causes of death unit record file can be obtained through the Australian Coordinating Registry (which is housed at the Queensland Registry of Births, Deaths and Marriages) by sending an email to [email protected] (data available on application for legitimate research purposes only).

If the information you require is not available from the publication or the data cubes, then the ABS may also have other relevant data available on request. Inquiries should be made to the National Information and Referral Service on 1300 135 070 or by sending an email to [email protected]. The ABS Privacy Policy outlines how the ABS will handle any personal information that you provide to the ABS.
## Quality declaration - perinatal data, summary

### Definition

Perinatal deaths statistics refer to all fetal (stillbirth) deaths of at least 20 weeks gestation or at least 400 grams birth weight, and neonatal deaths (all live born babies who die within 28 days of birth, regardless of gestation or weight).

### Institutional environment

For further information on the institutional environment of the ABS, including the legislative obligations of the ABS, financing and governance arrangements, and mechanisms for scrutiny of ABS operations, please see ABS Institutional Environment.

Statistics on perinatal deaths presented in Causes of Death, Australia, 2018 (cat. no. 3303.0) are sourced from death registrations administered by the various state and territory Registries of Births, Deaths and Marriages. It is a legal requirement of each state and territory that all neonatal deaths and those fetal deaths of at least 20 weeks gestation or 400 grams birth weight are registered. As part of the registration process, information on the cause of death is either supplied by the medical practitioner certifying the death on a Medical Certificate of Cause of Perinatal Death, or supplied as a result of a coronial investigation.

Death records are provided electronically and/or in paper form to the Australian Bureau of Statistics (ABS) by individual Registrars on a monthly basis. Each death record contains both demographic data and medical information from the Medical Certificate of Cause of Perinatal Death, where available. Information from coronial investigations is provided to the ABS through the National Coronial Information System (NCIS).

### Relevance

Perinatal statistics provide valuable information for the analysis of fetal, neonatal and perinatal deaths in Australia.
This publication presents data at the national and state level on registered perinatal deaths by sex, state of usual residence, main condition in fetus/infant, main condition in mother, and Aboriginal and Torres Strait Islander status. Fetal, neonatal and perinatal death rates are also provided.

The ABS Causes of Death collection includes all perinatal deaths that occurred and were registered in Australia, including deaths of persons whose usual residence is overseas. Deaths registered on Norfolk Island from 1 July 2016 are also included due to the introduction of the Norfolk Island Legislation Amendment Act 2015. See Explanatory Note 13 in Causes of Death, Australia, 2018 (cat. no. 3303.0) for more information. Deaths of Australian residents that occurred outside Australia may be registered by individual Registrars, but are not included in ABS deaths or perinatal deaths statistics.

This publication only includes information on registered fetal and neonatal deaths. This scope differs from other Australian data sources on perinatal deaths, including the National Perinatal Mortality Collection, which sources data via state and territory perinatal committees directly through hospitals and health centres. More information about the scope of the perinatal deaths statistics can be found in Explanatory Notes 16 to 20 in this publication.

Since the 2006 reference year, the scope of the perinatal death statistics has included all fetal deaths of at least 20 weeks gestation or at least 400 grams birth weight, and all neonatal deaths (all live born babies who die within 28 days of birth, regardless of gestation or weight) which are:

• all deaths registered in Australia for the reference year and received by the ABS by the end of the March quarter of the subsequent year; and
• deaths registered prior to the reference year but not previously received from the Registrar nor included in any statistics reported for an earlier period.
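As a hedged illustration only, the scope rule in the two bullet points above reduces to a simple cutoff computation: receipts up to the end of the March quarter belong to the previous year's reference period, and later receipts roll forward. The function and argument names below are the editor's, not the ABS's.

```python
from datetime import date

def reference_year(registered: date, received: date) -> int:
    """Assign a reference year under the cutoff rule sketched above:
    records received by the end of the March quarter fall in the
    previous year's reference period; later receipts roll forward."""
    ref = received.year - 1 if received.month <= 3 else received.year
    # A record is never assigned a reference year earlier than its
    # own registration year.
    return max(ref, registered.year)

# A 2018 registration forwarded to the ABS in February 2019 falls in
# the 2018 reference year; one received after the end of the March
# 2019 quarter rolls into the 2019 reference year.
print(reference_year(date(2018, 6, 1), date(2019, 2, 15)))  # 2018
print(reference_year(date(2018, 6, 1), date(2019, 5, 10)))  # 2019
```

This is only a sketch of the published rule; the actual ABS processing also excludes records already reported in an earlier period.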
For example, records received by the ABS during the March quarter of 2019 which were initially registered in 2018 or prior (but not forwarded to the ABS until 2019) are assigned to the 2018 reference year. Any death registrations relating to the 2018 reference period which are received by the ABS after the end of the March 2019 quarter are assigned to the 2019 reference year. Data in the Perinatal deaths collection include causes of death information, as well as some demographic items. Causes of death information is obtained from the Medical Certificate of Cause of Perinatal Death (perinatal deaths) and the National Coronial Information System (coroner-certified deaths). Causes of death are coded according to the International Classification of Diseases (ICD). Issues for perinatal deaths data: • The primary objective of the owner of the source data can differ from the information needs of the statistical users. Registrars of Births, Deaths and Marriages and coroners have legislative and administrative obligations to meet, as well as being the source of statistics. As a result, the population covered by the source data, the time reference period for some data, and the data items available in the registration system, may not align exactly with the requirements of users of the statistics. • There can be differences between the defined scope of the population (i.e. every death occurring in Australia) and the actual coverage achieved by the registration system. Levels of registration can be influenced by external factors and coverage achieved will be influenced by the steps taken by the owners of death registration systems to ensure all deaths are registered. • There are eight different registration systems within Australia. Each jurisdiction's registration system, while similar in many ways, also has a number of differences. 
These can include the types of data items collected, the definition of those collected data items, and business processes undertaken within Registries of Births, Deaths and Marriages, including coding and quality assurance practices.

### Timeliness

The Causes of Death, Australia dataset is released annually, approximately nine months after the end of the reference period and in conjunction with Deaths, Australia (cat. no. 3302.0). Perinatal deaths data are included in the Causes of Death, Australia, 2018 (cat. no. 3303.0) published outputs. Prior to the release of the 2015 dataset, Causes of Death data had been released approximately 15 months after the end of the reference period; however, changes to ABS processes allowed for more timely access to Australian mortality data. For more information see A more timely annual collection: changes to ABS processes (Technical Note) in Causes of Death, Australia, 2015 (cat. no. 3303.0).

There is a focus on fitness for purpose when causes of death statistics are released. To meet user requirements for accurate causes of death data it is necessary to obtain information from other administrative sources before all information for the reference period is available. This applies specifically to coroner-certified deaths, where extra information relating to the death is provided through police, toxicology, autopsy and coronial finding reports. A balance therefore needs to be maintained between accuracy (completeness) of data and timeliness. The ABS provides the data in a timely manner, ensuring that as much coding as possible can be undertaken accurately prior to publication.

As coroner-certified deaths can have ill-defined causes of death until a case is closed within the coronial system, a revisions process was introduced that applies to all neonatal coroner-certified deaths registered after 1 January 2006 to enhance the cause of death output for open coroner cases (causes of death for fetal deaths are not revised).
This process enables the use of additional information for coding relating to coroner-certified deaths at approximately 12 and/or 24 months after initial processing. See Explanatory Notes 59-62 in this publication and Causes of Death Revisions, 2015 Final Data (Technical Note) and 2016 Revised Data (Technical Note) in Causes of Death, Australia, 2017 (cat. no. 3303.0) for more information on the revisions process. Causes of Death, Australia, 2018 includes preliminary neonatal data for 2018 and 2017, revised data for 2016 and final data for 2015 and prior years. Revised output for the 2016 and 2017 data will be released in early 2020. All data relating to fetal deaths are final.

Issues for perinatal deaths data:

• A balance is maintained between accuracy (completeness) and timeliness, taking into account the different needs of users and maximising the fitness for purpose of the data. Documentation, including explanatory notes and technical notes, is provided for causes of death statistics. These should be used to assess the fitness for purpose of the data so that informed decisions can be made.
• The timeliness of administrative information that supports cause of death coding can be impacted by legislative requirements and by the systems and resources available to maintain/update systems.

### Accuracy

Non-sampling errors may influence accuracy in datasets which constitute a complete census of the population, such as the Causes of Death collection. Non-sampling error arises from inaccuracies in collecting, recording and processing the data. Every effort is made to minimise non-sampling error by working closely with data providers, undertaking quality checks throughout the data processing cycle, training processing staff, and using efficient data processing procedures. The ABS has implemented a revisions process that applies to all coroner-certified neonatal deaths registered after 1 January 2006.
This is a change from preceding years, where all ABS processing of causes of death data for a particular reference period was finalised approximately 13 months after the end of the reference period. The revisions process enables the use of additional information relating to coroner-certified deaths as it becomes available over time, resulting in increased specificity of the assigned ICD-10 codes. See Explanatory Notes 59-62 in this publication and Causes of Death Revisions, 2015 Final Data (Technical Note) and 2016 Revised Data (Technical Note) in Causes of Death, Australia, 2017 (cat. no. 3303.0) for more information on the revisions process.

Issues for perinatal deaths data:

• Completeness of an individual record at a given point in time (e.g. incomplete causes of death information due to non-finalisation of coronial proceedings).
• Completeness of the dataset, e.g. the impact of registration lags, processing lags and duplicate records.
• Extent of coverage of the population (whilst all deaths are legally required to be registered, some cases may not be registered for an extended time, if at all).
• Some lack of consistency in the application of questions or forms used by administrative data providers.
• Question and 'interviewer' biases, given that information for death registrations is supplied about the person by someone else. For example, Aboriginal and Torres Strait Islander identification as reported by a third party can be different from self-reported responses on a form.
• The level of specificity and completeness in coronial reports or the doctor's findings on the Medical Certificate of Cause of Perinatal Death will impact on the accuracy of coding.
• Errors in the coding of the causes of a death to ICD-10. The majority of cause of death coding is undertaken through an automated coding process, which is estimated to have a very high level of accuracy.
Human coding can be subject to error; however, the ABS mitigates this risk through rigorous coder training, detailed documentation and instructions for coding complex or difficult cases, and extensive data quality checks.

• Cases where coronial proceedings remain open at the end of ABS processing for a reference period are potentially assigned a less specific ICD-10 cause of death code.
• Where coroner-certified deaths become closed during the revisions process, additional information is often made available, making more specific coding possible.

### Coherence

Use of the explanatory notes and technical notes released with the statistics is important for assessing coherence within the dataset and when comparing the statistics with data from other sources. Changing business rules over time and/or across data sources can affect consistency, and hence the interpretability of statistical output, especially when assessing time series data.

The ICD is the international standard classification for epidemiological purposes and is designed to promote international comparability in the collection, processing, classification and presentation of cause of death statistics. The classification is used to classify diseases, conditions, injuries and external events as recorded on many types of medical records as well as death records. It is used for both morbidity and mortality purposes, with the morbidity version incorporating clinical modifications. The ICD is revised periodically to incorporate changes in the medical field. The 10th revision of the ICD (ICD-10) is used for the 2018 data.

Issues for perinatal deaths data:

• Changes to questions, scope etc. over time can affect the consistency of data collected over the period, even when the source of the data is the same. These changes can be the result of legislative or program objective changes.
• The completeness or quality of older versus newer data can also impact on comparisons across time or domains.
• Statistical concepts for questions are not always suited to the administrative purpose or the means of collection.

### Interpretability

In 2014, the ABS implemented Iris, a new automated coding software product for assisting in the processing of cause of death data, and improved a number of coding practices to realign with international best practice. As part of this, the ABS began a review of its method of coding neonatal deaths which, for the 2013-2018 data published in this issue, has meant a change to the method used for assigning an underlying cause of death to neonatal deaths. Data users are advised to refer to the Changes to Perinatal Death Coding Technical Note in Causes of Death, Australia, 2014, for further information on changes to the perinatal dataset.

The 2018 reference year cause of death data presented in this publication was coded using the 2016 version (version 5.4.0) of the Iris software. This system replaced Iris version 4.4.1, which was used to code the 2013-2017 cause of death data. Version 5.4.0 of the Iris software applied the World Health Organization (WHO) ICD-10 updates and a new underlying cause of death processing system called the Multicausal and Unicausal Selection Engine (MUSE). This has resulted in changes to the automated coding path for some causes of death. The implementation of MUSE, alongside the updates to the ICD-10, brings the Australian mortality data into line with international best practice. The ABS has also implemented extra validation processes with the implementation of MUSE to ensure maximum alignment with WHO guidelines and coding rules. Data users are advised to refer to the relevant technical notes for further details.

The Causes of Death, Australia (cat. no. 3303.0) publication contains detailed Explanatory Notes, Appendices and a Glossary in each issue that provide information on the data sources, terminology, classifications and other technical aspects associated with these statistics.
Issues for perinatal deaths data:

• Information on some aspects of statistical quality may be hard to obtain, as information on the source data has not been kept over time. This is related to the issue of the administrative rather than statistical purpose of the collection of the source data.
• Changes to data processing practices, such as the implementation of new software, updates to causes of death classifications, or changes to local coding practices, should be taken into consideration when comparing data over time.

### Accessibility

In addition to the information provided in the commentary, a series of data cubes is also available providing detailed breakdowns by cause of death. The ABS observes strict confidentiality protocols as required by the Census and Statistics Act 1905. This may restrict access to data at a very detailed level which is sought by some users.

Issues for causes of death data:

• Often an administrative source can provide the basis for statistical information which has a different nature and focus to the source's principal administrative purpose. There may be a reduced focus or availability of funding within the program to ensure the accessibility of information for non-administrative uses.
• Each jurisdiction has its own legislation governing death registration as well as that governing the coronial process. Jurisdictions also have privacy legislation which governs the accessibility of the statistics.
• The ABS observes strict confidentiality protocols as required by the Census and Statistics Act 1905. This may restrict access to data at a very detailed level which is sought by some users.
• A national causes of death unit record file which contains neonatal deaths data (but does not include fetal deaths data) can be obtained through the Australian Coordinating Registry (which is housed at the Queensland Registry of Births, Deaths and Marriages) by sending an email to [email protected] (data available on application for legitimate research purposes only).

If the information you require is not available from the publication or the data cubes, the ABS may also have other relevant data available on request. Inquiries should be made to the National Information and Referral Service on 1300 135 070 or by sending an email to [email protected]. The ABS Privacy Policy outlines how the ABS will handle any personal information that you provide to the ABS.

## Abbreviations

• ABS: Australian Bureau of Statistics
• ACS: automated coding system
• ACT: Australian Capital Territory
• AIDS: Acquired Immune Deficiency Syndrome
• AIHW: Australian Institute of Health and Welfare
• ASDR: age-specific death rate
• ASGC: Australian Standard Geographical Classification
• ASGS: Australian Statistical Geography Standard
• Aust.: Australia
• cat. no.: catalogue number
• CDR: crude death rate
• CM: Clinical Modification
• COAD: chronic obstructive airways disease
• DRF: death registration form
• ERP: estimated resident population
• HIV: Human Immunodeficiency Virus
• ICD-10: International Classification of Diseases, 10th Revision
• IHD: ischaemic heart disease
• IMR: infant mortality rate
• ISDR: indirect standardised death rate
• MCCD: medical certificate of cause of death
• MCCPD: medical certificate of cause of perinatal death
• METeOR: Metadata Online Registry
• MMDS: Mortality Medical Data System
• no.: number
• NCHS: National Center for Health Statistics
• NCIS: National Coronial Information System
• NCRSIC: National Civil Registration and Statistics Improvement Committee
• NSW: New South Wales
• NT: Northern Territory
• QLD: Queensland
• SA: South Australia
• SA2: Statistical Area Level 2
• SACC: Standard Australian Classification of Countries
• SDR: standardised death rate
• SIDS: Sudden Infant Death Syndrome
• TAS: Tasmania
• URC: Update and Revision Committee
• VIC: Victoria
• WA: Western Australia
• WHO: World Health Organization
• YPLL: years of potential life lost
http://dlmf.nist.gov/19.28
# §19.28 Integrals of Elliptic Integrals

In (19.28.1)–(19.28.3) we assume $\Re\sigma>0$. Also, $\mathrm{B}$ again denotes the beta function (§5.12).

19.28.1 $\int_{0}^{1}t^{\sigma-1}R_{F}(0,t,1)\,\mathrm{d}t=\tfrac{1}{2}\left(\mathrm{B}\left(\sigma,\tfrac{1}{2}\right)\right)^{2},$

19.28.2 $\int_{0}^{1}t^{\sigma-1}R_{G}(0,t,1)\,\mathrm{d}t=\frac{\sigma}{4\sigma+2}\left(\mathrm{B}\left(\sigma,\tfrac{1}{2}\right)\right)^{2},$

19.28.3 $\int_{0}^{1}t^{\sigma-1}(1-t)R_{D}(0,t,1)\,\mathrm{d}t=\frac{3}{4\sigma+2}\left(\mathrm{B}\left(\sigma,\tfrac{1}{2}\right)\right)^{2}.$

19.28.4 $\int_{0}^{1}t^{\sigma-1}(1-t)^{c-1}R_{-a}(b_{1},b_{2};t,1)\,\mathrm{d}t=\frac{\Gamma(c)\,\Gamma(\sigma)\,\Gamma(\sigma+b_{2}-a)}{\Gamma(\sigma+c-a)\,\Gamma(\sigma+b_{2})},$ where $c=b_{1}+b_{2}>0$ and $\Re\sigma>\max(0,a-b_{2})$.

In (19.28.5)–(19.28.9) we assume $x,y,z$, and $p$ are real and positive.

19.28.5 $\int_{z}^{\infty}R_{D}(x,y,t)\,\mathrm{d}t=6R_{F}(x,y,z),$

19.28.6 $\int_{0}^{1}R_{D}\left(x,y,v^{2}z+(1-v^{2})p\right)\mathrm{d}v=R_{J}(x,y,z,p).$

19.28.7 $\int_{0}^{\infty}R_{J}(x,y,z,r^{2})\,\mathrm{d}r=\tfrac{3}{2}\pi R_{F}(xy,xz,yz),$

19.28.8 $\int_{0}^{\infty}R_{J}(tx,y,z,tp)\,\mathrm{d}t=\frac{6}{\sqrt{p}}R_{C}(p,x)R_{F}(0,y,z).$

19.28.9 $\int_{0}^{\pi/2}R_{F}\left(\sin^{2}\theta\cos^{2}(x+y),\,\sin^{2}\theta\cos^{2}(x-y),\,1\right)\mathrm{d}\theta=R_{F}\left(0,\cos^{2}x,1\right)R_{F}\left(0,\cos^{2}y,1\right),$

19.28.10 $\int_{0}^{\infty}R_{F}\left((ac+bd)^{2},\,(ad+bc)^{2},\,4abcd\cosh^{2}z\right)\mathrm{d}z=\tfrac{1}{2}R_{F}(0,a^{2},b^{2})R_{F}(0,c^{2},d^{2}),$ $a,b,c,d>0$.

See also (19.16.24). To replace a single component of $\mathbf{z}$ in $R_{-a}(\mathbf{b};\mathbf{z})$ by several different variables (as in (19.28.6)), see Carlson (1963, (7.9)).
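Identities like (19.28.5) and (19.28.6) are easy to spot-check numerically. The sketch below assumes SciPy (version 1.8 or later, which exposes the Carlson symmetric forms as `elliprf`, `elliprd` and `elliprj`) and verifies both identities at one sample point; it is a sanity check, not a proof.

```python
from scipy.integrate import quad
from scipy.special import elliprd, elliprf, elliprj

# Sample point with x, y, z, p real and positive, as the section assumes.
x, y, z, p = 1.0, 2.0, 3.0, 4.0

# (19.28.5): integrating R_D over its third argument gives 6 R_F.
# The integrand decays like t^(-3/2), so the improper integral converges.
lhs, _ = quad(lambda t: elliprd(x, y, t), z, float("inf"))
assert abs(lhs - 6 * elliprf(x, y, z)) < 1e-7

# (19.28.6): averaging R_D along v^2*z + (1 - v^2)*p over v in [0, 1]
# gives R_J.
lhs, _ = quad(lambda v: elliprd(x, y, v**2 * z + (1 - v**2) * p), 0.0, 1.0)
assert abs(lhs - elliprj(x, y, z, p)) < 1e-8
```

The same pattern extends to the other identities, e.g. (19.28.7) with an infinite upper limit in $r$.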
http://pavpanchekha.com/esp/physics-c-2011/ps1.html
## By Pavel Panchekha, Jeffrey Prouty

Share under CC-BY-SA. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

# Problem Set 1 (Given 9/18/11)

## Level I

1. Give the $$x$$ and $$y$$ components of the vector of magnitude 15 directed 135 degrees counterclockwise from the positive $$x$$-axis.
2. Graph and find the sum of the following three vectors in component form and give the magnitude and direction: ⟨+4, +2⟩, ⟨-7, 2⟩, and the vector of magnitude 8 directed 30 degrees counterclockwise from the positive $$x$$-axis.
3. Find the angle between the following two vectors: ⟨+9,+4,-3⟩ and ⟨-5,-2,+7⟩.

## Level II

1. One can derive a kinematics equation describing the motion of an object with constant acceleration that contains no acceleration term. Starting with the equations you know, find this equation in the form with the change in displacement on the left hand side.
2. Superman jumps from a building and flies in a straight line at speed $$v_s$$. At $$t=0$$, he sees the batcave straight ahead of him at a ground distance of $$D$$ meters. At the same time, the batmobile leaves the batcave, starting from rest and accelerating with constant acceleration $$a$$ in the same direction. For what values of $$a$$ will superman and the batmobile have the same ground position twice? Once? Never?
3. At $$t=0$$, a man stands 5000 meters below a bird, which in turn is flying 6000 meters below a plane. The bird flies at 10 m/s and the plane flies at 100 m/s. Both the plane and the bird fly parallel to the ground (assumed to be flat), but the bird flies at an angle of 20 degrees north of east, while the plane flies at 20 degrees east of north. Find an equation describing the position of the bird in the reference frame of each of the three observers.

## Level III

1. A pebble is lodged in the rim of a tire of radius $$r$$ rolling forward without slipping.
The center of the tire moves in a straight line with speed $$v$$. Suppose that at $$t=0$$, the pebble lies at the origin directly under the center of the tire. Find the position of the pebble at an arbitrary time later.
2. A hiker wants to lift a package of food to keep it out of reach of bears. She tosses a rope of length $$l$$ over a tree such that one end rests tied to the package on the ground and the other in her hand at height $$h$$ ($$h < l/2$$). She then walks back at constant speed $$v$$ holding the rope (starting directly under the tree at $$t=0$$). Find the height of the package as a function of time.
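For the rolling-wheel setup in Level III, problem 1, the following sketch encodes one standard way to set it up (the cycloid parametrization is this editor's illustration, not the official solution): rolling without slipping fixes the rotation angle at $$\phi = vt/r$$, so the pebble sits at the center position $$(vt, r)$$ plus a rotating offset.

```python
import math

def pebble_position(t, v, r):
    """Position of the pebble at time t: wheel center at (v*t, r),
    rotation angle phi = v*t/r from rolling without slipping."""
    phi = v * t / r
    x = v * t - r * math.sin(phi)
    y = r - r * math.cos(phi)
    return x, y

v, r = 3.0, 0.5
# At t = 0 the pebble is at the origin, directly under the center.
assert pebble_position(0.0, v, r) == (0.0, 0.0)
# After one full revolution it touches the ground again, one
# circumference (2*pi*r) ahead of where it started.
T = 2 * math.pi * r / v
xT, yT = pebble_position(T, v, r)
assert abs(xT - 2 * math.pi * r) < 1e-9 and abs(yT) < 1e-12
```

The asserted checkpoints (origin at $$t=0$$, ground contact one circumference ahead after a full turn) are the standard signatures of a cycloid.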
https://eng.wra.gov.tw/cp.aspx?n=24371
Announced Date: 2016-8-11 Legislative History: A total of 21 articles were promulgated on August 11, 2016 by the ordinance of the Ministry of Economic Affairs Ref. No. Ching-Shui-Tzu 10504603860

Article 1  These Regulations are enacted pursuant to Paragraph 7 of Article 9 of the Reclaimed Water Resources Development Act (hereinafter referred to as the "Act").

Article 2  To apply for the establishment of the Reclaimed Water Operator, the promoter or the shareholder shall apply for the establishment approval with the municipal or county (city) competent authority where the company is located or the competent authority governing the Specific Area (hereinafter referred to as the "Competent Authority") by submitting the application form (see Appendix) and the certification of capital amount stating the following matters:
1. Company's name, address and responsible person.
2. Articles of Association draft.
3. Business plan.
4. Name list of all promoters or shareholders and photocopies of their identification documents.
5. The capital amount and the value of shares subscribed by each promoter or shareholder.

Those applying for an addition to the Reclaimed Water Operator's business items shall apply for the establishment approval with the Competent Authority where the company is located by submitting the application form (stating the matters in Subparagraphs 1 and 3 of the preceding Paragraph), certification of capital amount, and the following documents:
1. Company's proof of registration.
2. Articles of Association amended draft.
3. Minutes of the board meeting and the shareholders' meeting, or letter of agreement from all shareholders, of the Reclaimed Water Operator approving the addition.
4. For those applying for a capital increase, the capital amount following the capital increase.

If the applicant's company in the preceding two Paragraphs is located in a Specific Area, the application shall be made to the competent authority governing such Specific Area.

The business plan in Paragraph 1, Subparagraph 3 shall state the company's organization, operation policies, financial plan, and the performance of the shareholders or the promoters in relation to water treatment construction or operation. The personnel composition and certification of their expertise shall also be attached.

Where the Competent Authority approves the contents of the documents in Paragraphs 1 and 2 after review, it shall issue an establishment approval and specify the following matters:
1. Basic information for the applicant: company's name, address, responsible person and estimated capital amount.
2. Description of establishment approval for the Reclaimed Water Operator.
3. Date of issuance.
4. The company's establishment registration or registration of changes to the company's business items shall be completed within one year from the date of issuance. If not completed, the establishment approval shall become invalid.

Article 3  If the Reclaimed Water Operator is a company limited by shares, the paid-in capital at the time of its establishment shall not be less than NT$100 million; for companies whose organization is not a company limited by shares, the aggregate capital shall not be less than NT$100 million.

Article 4  When accepting a construction permit application for a Reclaimed Water Development Project and an application for a wastewater (sewage) or effluent water use permit in accordance with Article 8, Paragraph 1 or 3 of the Act, the Competent Authority shall process it in accordance with the following procedures:
1.
Phase one: When accepting the planning of the reclaimed water construction and operation submitted by the Reclaimed Water Operator, the Competent Authority shall, after assessing it to be feasible, announce for one to three months the subject sewer system's available wastewater (sewage) or effluent water intake quantity for other private institution applicants to submit their reclaimed water construction and operation planning. All applications will be reviewed together when the announcement period expires.
2. Phase two: Applicants passing phase one of the review shall submit the reclaimed water construction and operation plan within six months from the date of passing the review.

In the event that a Reclaimed Water Development Project is planned and initiated in accordance with the Government Procurement Act, the Act for Promotion of Private Participation in Infrastructure Projects or other laws, the Competent Authority's processing is not subject to Paragraph 1; the construction and operation plans, investment implementation plans or other related documents submitted pursuant to such laws or contracts shall be deemed to be the reclaimed water construction and operation plans in Paragraph 3 of Article 9 of the Act if such documentation contains content meeting the requirements in Paragraph 2 of Article 5.

Article 5  The planning of the reclaimed water construction and operation in Paragraph 1, Subparagraph 1 of the preceding Article shall state the following matters:
1. Basic information for the Reclaimed Water Operator.
2. The planned location for the development project and the methods by which the land is obtained and used.
3. The name of the wastewater (sewage) or effluent water sewer system and the water quantity planned for use.
4. Planned water supply area and water users' Letter of Intent.
5.
Feasibility assessment for the financial plan.

The reclaimed water construction and operation plan in Paragraph 1, Subparagraph 2 of the preceding Article shall state the following matters:
1. Initiating entity.
2. Description of the development project.
3. The plot number, area, land ownership, use rights or other rights, and the zoning descriptions for the land where the reclaimed water facility area and water supply facilities are planned to be located; and the land transcript, cadastral map transcription and land use consent certificates and documentation shall also be attached.
4. Name of the sewer system used, water intake quantity, and period of effect for the use permit applied for.
5. Floor plan and basic design of the reclaimed water facilities and water supply facilities.
6. Construction schedule and quality control plan for the construction audit, inspection, verification and certification.
7. Trial run (commissioning) plan.
8. Water supply quantity, water quality standards, water supply area, and water supply schedule.
9. Water supply contract when making application for water intake quantity of 50% or more of the water quantity.
10. The portion of the sewer system involved that needs to be changed or added.
11. Operational planning.
12. Financial plan and economic analysis.
13. Summary of water pollution prevention plan.
14. Personnel (staffing) assignment.
15. Water quality monitoring mechanism.
16. Automatic water quantity monitoring mechanism.
17. Pollution and environmental impact control.
18. Regular inspections, maintenance and management, and annual repairs, including items, frequency and methods.
19. Emergency response measures, drill plans, and backup planning.
20. Expected benefits and impact.

Matters required in Subparagraph 19 of the preceding Paragraph shall refer to measures, plans and backup planning in response to the following incidents:
1. Inability to obtain sufficient wastewater (sewage) or effluent water.
2.
Occurrence of windstorms, floods, earthquakes, droughts, landslides, or other natural disasters, the scale of which reaches Class C or above specified in the Emergency Notification Guidelines issued by the Executive Yuan. 3. Occurrence of fire, explosions, violent disturbances, power outages, or other accidents which cause damage to the facilities or equipment or render it impossible to supply water normally. 4. Occurrence of water pollution due to abnormal conditions on the water intake construction, water supply facilities, or water treatment equipment. 5.  Other incidents specified by the Competent Authority. Article 6   When the Competent Authority accepts the approval applications in Paragraph 1 of Article 4, if the content of the documentation is incomplete, the applicant shall be notified a maximum of one time to make supplements and corrections within a time limit; if the applicant fails to make supplements and corrections within the time limit or the supplements and corrections are not in compliance with the regulations, the application shall be rejected. The supplement and correction period of the preceding Paragraph will not be included in the review period. Article 7  The Competent Authority shall, in order to review applications stipulated in Paragraph 1 of Article 4, be joined by the central competent authority and the central competent authority for sewer systems, and may invite representatives of relevant authorities, experts and scholars to form review meetings to conduct the review. When necessary, the applicant may be notified to attend and provide explanation. 
When reviewing a phase one application in Article 4, Paragraph 1, Subparagraph 1, the Competent Authority shall complete the review within three months after the announcement period expires; when necessary, the review may be extended once for a maximum of three months, and the applicant shall be notified of the review result. When reviewing a phase two application in Article 4, Paragraph 1, Subparagraph 2, the Competent Authority shall complete the review within six months from the date of receiving the reclaimed water construction and operation plan; when necessary, the review may be extended once for a maximum of six months.

Article 8  If the Competent Authority reviews and approves a reclaimed water construction and operation plan, it shall issue a construction permit and a wastewater (sewage) or effluent water use permit. The wastewater (sewage) or effluent water use permit's period of effect shall be 10 to 15 years, starting from the operation permit's date of effect. For Reclaimed Water Development Projects initiated by the Competent Authority in accordance with the Act for Promotion of Private Participation in Infrastructure Projects, the previous Paragraph shall not apply.

Article 9  The Reclaimed Water Operator may appoint professional consulting agencies to carry out quality control work for construction audits, inspection, verification and certification regarding the Reclaimed Water Development Project. Professional consulting agencies in the preceding Paragraph shall be limited to professional engineering consulting firms holding registration certificates for professional engineering consulting firms. For the appointment in Paragraph 1, the professional consulting agency implementation plan shall be submitted to the Competent Authority for reference before the construction of the Reclaimed Water Development Project. 
Once construction begins, if there is any change, such shall also be reported for reference. The implementation plan in the preceding Paragraph shall specify the categories, items, methods, schedules, report format and other information required for the construction engineering audit, inspection, verification and certification.

Article 10  If the Reclaimed Water Operator needs to extend the construction period, it shall, within six months before the expiration of the construction schedule, attach the following materials and apply to the Competent Authority:
1. Basic information: name, responsible person, address, and contact information.
2. Photocopies of the Reclaimed Water Development Project construction permit and wastewater (sewage) or effluent water use permit.
3. Reasons and impact of the extension.
4. Extended construction period.

Extensions in the preceding Paragraph shall be limited to a maximum of two times; the total extended construction period shall not exceed half of the originally permitted construction period. This shall not apply to causes not attributable to the Reclaimed Water Operator that have been approved by the Competent Authority. For Reclaimed Water Development Projects initiated according to the Act for Promotion of Private Participation in Infrastructure Projects, the preceding Paragraph shall not apply.

Article 11  After completing the construction of the Reclaimed Water Development Project, the Reclaimed Water Operator shall report to the Competent Authority for inspection by attaching the following documents:
1. Work completion report.
2. Trial run (commissioning) report: the Reclaimed Water Operator shall conduct the trial run (commissioning) for at least 30 consecutive days.
3. Quality control reports for the construction audit, inspection, verification and certification.
4. Inspection and maintenance manual.
5.
Professional engineer's supervision and certification report.

The completion report in Subparagraph 1 of the preceding Paragraph shall include the as-built drawings of the facilities and equipment, computer image files and relevant operating manuals. The trial run (commissioning) report in Paragraph 1, Subparagraph 2 shall include the following matters:
1. Unit testing results for the main equipment in the water intake construction, water treatment facilities, and water supply facilities.
2. System test results.
3. Treatment efficiency test results.

The quality control reports for the construction audit, inspection and certification in Paragraph 1, Subparagraph 3 may be replaced by the professional consulting agency's report on the work implemented under Article 9. Inspection and maintenance manuals in Paragraph 1, Subparagraph 4 shall include the contents specified in Article 5, Paragraph 2, Subparagraphs 14 to 19, and shall be amended in accordance with the actual conditions after completion; no further approval for the change in accordance with Article 16, Paragraph 1 is required. Professional engineer's supervision and certification reports in Paragraph 1, Subparagraph 5 shall comply with the Regulations Governing Professional Engineer Certification for Reclaimed Water Development Project Water Intake Constructions, Water Treatment Facilities and Water Supply Facilities. The treatment efficiency test results in Paragraph 3, Subparagraph 3 must be water quality test results conducted by an environmental test and determination organization that has been issued a permit by the Environmental Protection Administration of the Executive Yuan, according to the standard testing methods announced by the National Institute of Environmental Analysis under the Environmental Protection Administration of the Executive Yuan, and shall comply with Article 2 of the Regulations Governing the Use and Quality Standards of Reclaimed Water. 
Article 12  The Competent Authority may, for purposes of reviewing the documents in the preceding Article and conducting on-site inspections, invite representatives of relevant authorities, experts and scholars to form review meetings, or authorize relevant institutions or groups to assist in handling such. When conducting on-site inspections in the preceding Paragraph, the Competent Authority shall notify the Reclaimed Water Operator and its professional engineers who have handled the design and supervision certification to be present. If necessary, the Reclaimed Water Operator's field staff may be ordered to perform the drills.

Article 13  When the Competent Authority conducts the inspections, in the event that the inspection results are inconsistent with the contents of the permit, the Reclaimed Water Operator shall be notified to rectify such within a time limit. After the Reclaimed Water Operator makes rectifications within the time limit, it shall state the rectification status in writing and report to the Competent Authority for re-inspection.

Article 14  After the Reclaimed Water Development Project has passed inspection, the Reclaimed Water Operator shall attach the licenses, permits and approvals obtained in accordance with the building laws and regulations, firefighting laws and regulations and the environmental pollution laws and regulations, and apply to the Competent Authority for the issuance of the operation permit.

Article 15  The following items shall be stated in an operation permit issued by the Competent Authority:
1. Basic information for the Reclaimed Water Operator.
2. Description of the development project.
3. Name and location of the sewer system used, and water intake quantity.
4. Start and end dates for the wastewater (sewage) or effluent water use permit's and operation permit's effective periods.
5. Water supply quantity, area and schedule.
6. Other required information. 
Article 16  If the Reclaimed Water Operator applies for approval of a change to a reclaimed water construction and operation plan in accordance with Paragraph 5 of Article 9 of the Act, the reasons for the change, a comparison table for the changed contents, and the relevant certificates and documentation shall be submitted to the Competent Authority for approval. Where the change in the preceding Paragraph involves Article 5, Paragraph 2, Subparagraphs 4 to 8 and Subparagraph 10, or is due to a change of name or organization of the Reclaimed Water Operator, a change plan shall be additionally attached to explain the contents of the change and relevant drawings or certifying documents; where it involves Article 5, Paragraph 2, Subparagraphs 5 to 7, it shall be handled in accordance with Articles 9 to 11 and Article 14.

Article 17  When the Reclaimed Water Operator applies for transfer of the Reclaimed Water Development Project, a transfer plan specifying the following items and related supporting documents shall be attached:
1. Reason for transfer.
2. The transferee's Reclaimed Water Operator proof of registration, personnel composition, certification of their expertise, and performance related to water treatment construction or operations.
3. Consent by the water user of the existing water supply contract to the transfer of such contract.
4. The transferor's approved reclaimed water construction and operation plan, the amended contents of such, and a difference comparison table.

For transfer applications for operation permits in the preceding Paragraph, the amended contents and difference comparison table of the inspection and maintenance manual shall also be attached. If the Competent Authority reviews and approves a transfer plan and related supporting documents in the preceding two Paragraphs, it shall issue the transfer permit for the construction or operation permit or the wastewater (sewage) or effluent water use permit. 
The validity period is limited to the remaining validity period of the original permit. If Paragraph 1, Subparagraph 4 involves matters in Article 5, Paragraph 2, Subparagraphs 5 to 7, such shall be handled in accordance with Articles 9 to 11 and Article 14. The transferee shall generally assume the transferor's relevant rights and obligations regarding the reclaimed water construction and operation plan based on the permit in Paragraph 3. This Article shall be applicable mutatis mutandis to the consolidation, merger, or split-up of the Reclaimed Water Operator.

Article 18  The Reclaimed Water Operator may reapply in accordance with the provisions of Article 4, Paragraph 1, Subparagraph 2 and Article 5, Paragraph 2 within 6 months from 2 years prior to the expiration of the Reclaimed Water Development Project wastewater (sewage) or effluent water use permit and operation permit.

Article 19  If the Reclaimed Water Operator fails to apply in accordance with the provisions of the preceding Article, or the application in accordance with the preceding Article does not pass, the Competent Authority shall give notice to the Reclaimed Water Operator 6 months before the expiration of the operation permit that restoration to the original condition shall be made or that appropriate measures shall be taken within a time limit. Where there is failure to do so within such time limit, the disposal shall be handled by the Competent Authority in accordance with relevant laws and regulations.

Article 20  Article 6 and Article 7, Paragraphs 1 and 3 shall be applicable mutatis mutandis when the Competent Authority reviews applications in Article 16, Paragraph 2, Article 17 and Article 18.

Article 21  These Regulations become effective as of the date of promulgation.
http://utmost-sage-cell.org/sage:piecewise-function
# Piecewise Function

## Description

Piecewise functions in a single variable can be defined in Sage with the command `piecewise`. For example, suppose that we wish to define the function $f$ by

(1)
\begin{align} f(x) = \begin{cases} \sin x + 1, & x > 0 \\ x^2, & -1 \leq x \leq 0. \end{cases} \end{align}

## Sage Cell

#### Code

    f = piecewise([((0,oo), sin(x) + 1), ([-1,0], x^2)]); f

## Options

#### Option

We can plot piecewise functions. Since plot uses a line plot, notice that the vertical line from $(0,0)$ to $(0,1)$ is plotted.

#### Code

    f = piecewise([((0,oo), sin(x) + 1), ([-1,0], x^2)])
    plot(f, (x,-1,3))

#### Option

We can find the domain of a function.

#### Code

    f = piecewise([((0,oo), sin(x) + 1), ([-1,0], x^2)])
    f.domain()

#### Option

We can evaluate functions at points on the domain.

#### Code

    f = piecewise([((0,oo), sin(x) + 1), ([-1,0], x^2)])
    f(0)

## Tags

Primary Tags: Precalculus: Functions.
Secondary Tags: Functions: Piecewise functions.

## Attribute

Author: T. W. Judson
Date: 03 Jan 2019 22:32
Submitted by: Tom Judson

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License
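For readers without a Sage installation, the same function from equation (1) can be sketched in plain Python. This is an illustrative stand-in, not the Sage `piecewise` object; the function name `f` and the domain-error behaviour are choices made for this sketch:

```python
import math

def f(x):
    """Piecewise function from equation (1): sin(x) + 1 for x > 0, x**2 on [-1, 0]."""
    if x > 0:
        return math.sin(x) + 1
    if -1 <= x <= 0:
        return x ** 2
    raise ValueError("x is outside the domain [-1, oo)")

print(f(0))            # x**2 branch at 0 -> 0
print(f(-1))           # (-1)**2 -> 1
print(f(math.pi / 2))  # sin(pi/2) + 1 -> 2.0
```

Unlike Sage's `f.domain()`, the domain here is only implicit in the branch conditions; evaluating outside it raises an exception rather than returning an unevaluated expression.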
https://math.libretexts.org/Bookshelves/Calculus/Book%3A_Calculus_(OpenStax)/12%3A_Vectors_in_Space
# 12: Vectors in Space

A quantity that has magnitude and direction is called a vector. Vectors have many real-life applications, including situations involving force or velocity. For example, consider the forces acting on a boat crossing a river. The boat's motor generates a force in one direction, and the current of the river generates a force in another direction. Both forces are vectors. We must take both the magnitude and direction of each force into account if we want to know where the boat will go.

• 12.0: Prelude to Vectors in Space
• 12.1: Vectors in the Plane
When measuring a force, such as the thrust of the plane's engines, it is important to describe not only the strength of that force, but also the direction in which it is applied. Some quantities, such as velocity or force, are defined in terms of both size (also called magnitude) and direction. A quantity that has magnitude and direction is called a vector.
• 12.2: Vectors in Three Dimensions
To expand the use of vectors to more realistic applications, it is necessary to create a framework for describing three-dimensional space. This section presents a natural extension of the two-dimensional Cartesian coordinate plane into three dimensions.
• 12.3: The Dot Product
In this section, we develop an operation called the dot product, which allows us to calculate work in the case when the force vector and the motion vector have different directions. The dot product essentially tells us how much of the force vector is applied in the direction of the motion vector. The dot product can also help us measure the angle formed by a pair of vectors and the position of a vector relative to the coordinate axes.
• 12.4: The Cross Product
In this section, we develop an operation called the cross product, which allows us to find a vector orthogonal to two given vectors. Calculating torque is an important application of cross products, and we examine torque in more detail later in the section. 
• 12.5: Equations of Lines and Planes in Space
To write an equation for a line, we must know two points on the line, or we must know the direction of the line and at least one point through which the line passes. In two dimensions, we use the concept of slope to describe the orientation, or direction, of a line. In three dimensions, we describe the direction of a line using a vector parallel to the line. In this section, we examine how to use equations to describe lines and planes in space.
• 12.6: Quadric Surfaces
We have been exploring vectors and vector operations in three-dimensional space, and we have developed equations to describe lines, planes, and spheres. In this section, we use our knowledge of planes and spheres, which are examples of three-dimensional figures called surfaces, to explore a variety of other surfaces that can be graphed in a three-dimensional coordinate system.
• 12.7: Cylindrical and Spherical Coordinates
In this section, we look at two different ways of describing the location of points in space, both of them based on extensions of polar coordinates. As the name suggests, cylindrical coordinates are useful for dealing with problems involving cylinders, such as calculating the volume of a round water tank or the amount of oil flowing through a pipe. Similarly, spherical coordinates are useful for dealing with problems involving spheres, such as finding the volume of domed structures.
• 12R: Chapter 12 Review Exercises

## Contributors and Attributions

Gilbert Strang (MIT) and Edwin "Jed" Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
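The two key facts summarized in sections 12.3 and 12.4 — the dot product measures the aligned component of one vector along another (e.g. work), and the cross product is orthogonal to both inputs — can be checked with a small numeric sketch. The force and displacement values here are made up for illustration, not taken from the chapter:

```python
# Dot and cross products for 3-D vectors.

def dot(u, v):
    """Sum of componentwise products: measures the aligned magnitude."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Vector orthogonal to both u and v (right-hand rule)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

force = (3.0, 4.0, 0.0)          # hypothetical force, in newtons
displacement = (2.0, 0.0, 0.0)   # hypothetical displacement, in metres

work = dot(force, displacement)  # only the x-component of force contributes
normal = cross(force, displacement)

print(work)                       # 6.0
print(dot(normal, force))         # 0.0: the cross product is orthogonal to force
print(dot(normal, displacement))  # 0.0: and orthogonal to displacement
```

Verifying `dot(cross(u, v), u) == 0` numerically is a quick sanity check for any hand-coded cross product, since it holds for every pair of 3-D vectors.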
http://pdglive.lbl.gov/ParticleGroup.action?init=0&node=MXXX040
# CHARMED, STRANGE MESONS ($\boldsymbol C$ = $\boldsymbol S$ = $\pm1$)

${{\mathit D}_{{s}}^{+}}$ = ${\mathit {\mathit c}}$ ${\mathit {\overline{\mathit s}}}$, ${{\mathit D}_{{s}}^{-}}$ = ${\mathit {\overline{\mathit c}}}$ ${\mathit {\mathit s}}$, similarly for ${{\mathit D}_{{s}}^{*}}$'s

• ${{\mathit D}_{{s}}^{\pm}}$ $0(0^{-})$
• ${{\mathit D}_{{s}}^{*\pm}}$ $0(?^{?})$
• ${{\mathit D}_{{s0}}^{*}{(2317)}^{\pm}}$ $0(0^{+})$
• ${{\mathit D}_{{s1}}{(2460)}^{\pm}}$ $0(1^{+})$
• ${{\mathit D}_{{s1}}{(2536)}^{\pm}}$ $0(1^{+})$
• ${{\mathit D}_{{s2}}^{*}{(2573)}}$ $0(2^{+})$
• ${{\mathit D}_{{s1}}^{*}{(2700)}^{\pm}}$ $0(1^{-})$
${{\mathit D}_{{s1}}^{*}{(2860)}^{\pm}}$ $0(1^{-})$
${{\mathit D}_{{s3}}^{*}{(2860)}^{\pm}}$ $0(3^{-})$
${{\mathit D}_{{sJ}}{(3040)}^{\pm}}$ $0(?^{?})$

• Indicates established particles.
https://googology.wikia.org/wiki/Category:Higher_computable_level
# Category: Higher computable level

This category contains computable numbers that cannot be estimated using the fast-growing hierarchy in the form $$f_{\alpha}(n)$$ with $$\alpha$$ made from Bachmann's ordinal collapsing function and reasonably small values of $$n$$, because they go beyond the fast-growing hierarchy with Bachmann's ordinal collapsing function. The lower bound of this category is $$f_{\psi(\varphi(1,\Omega+1))}(10^6)$$.

All items (281)

Community content is available under CC-BY-SA unless otherwise noted.
http://popflock.com/learn?s=Union_(set_theory)
# Union (set theory)

Union of two sets: ${\displaystyle ~A\cup B}$
Union of three sets: ${\displaystyle ~A\cup B\cup C}$
The union of A, B, C, D, and E is everything except the white area.

In set theory, the union (denoted by ∪) of a collection of sets is the set of all elements in the collection.[1] It is one of the fundamental operations through which sets can be combined and related to each other. For explanation of the symbols used in this article, refer to the table of mathematical symbols.

## Union of two sets

The union of two sets A and B is the set of elements which are in A, in B, or in both A and B. In symbols, ${\displaystyle A\cup B=\{x:x\in A{\text{ or }}x\in B\}}$.[2]

For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6, 7} then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. A more elaborate example (involving two infinite sets) is:

A = {x is an even integer larger than 1}
B = {x is an odd integer larger than 1}

${\displaystyle A\cup B=\{2,3,4,5,6,\dots \}}$

As another example, the number 9 is not contained in the union of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of even numbers {2, 4, 6, 8, 10, ...}, because 9 is neither prime nor even.

Sets cannot have duplicate elements,[2][3] so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}. Multiple occurrences of identical elements have no effect on the cardinality of a set or its contents.

## Algebraic properties

Binary union is an associative operation; that is, for any sets A, B, and C,

${\displaystyle A\cup (B\cup C)=(A\cup B)\cup C.}$

The operations can be performed in any order, and the parentheses may be omitted without ambiguity (i.e., either of the above can be expressed equivalently as A ∪ B ∪ C). 
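The definition of binary union and its associativity can be checked directly with Python's built-in `set` type, reusing the article's example sets A and B (a quick illustration, not part of the article):

```python
A = {1, 3, 5, 7}
B = {1, 2, 4, 6, 7}
C = {2, 3, 4}

# Union of two sets: elements in A, in B, or in both; duplicates collapse.
print(A | B == {1, 2, 3, 4, 5, 6, 7})  # True

# Associativity: the grouping of unions does not matter.
print((A | B) | C == A | (B | C))      # True
```

Python's `|` operator (or the `set.union` method) implements exactly the membership condition given above: `x in A | B` holds if and only if `x in A or x in B`.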
Similarly, union is commutative, so the sets can be written in any order.[4] The empty set is an identity element for the operation of union. That is, A ∪ ∅ = A, for any set A. This follows from analogous facts about logical disjunction.

Since sets with unions and intersections form a Boolean algebra, intersection distributes over union

${\displaystyle A\cap (B\cup C)=(A\cap B)\cup (A\cap C)}$

and union distributes over intersection

${\displaystyle A\cup (B\cap C)=(A\cup B)\cap (A\cup C)}$ .

Within a given universal set, union can be written in terms of the operations of intersection and complement as

${\displaystyle A\cup B=\left(A^{C}\cap B^{C}\right)^{C}}$

where the superscript C denotes the complement with respect to the universal set.

## Finite unions

One can take the union of several sets simultaneously. For example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C. A finite union is the union of a finite number of sets; the phrase does not imply that the union set is a finite set.[5][6]

## Arbitrary unions

The most general notion is the union of an arbitrary collection of sets, sometimes called an infinitary union. If M is a set or class whose elements are sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A.[7] In symbols:

${\displaystyle x\in \bigcup \mathbf {M} \iff \exists A\in \mathbf {M} ,\ x\in A.}$

This idea subsumes the preceding sections: for example, A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set.

### Notations

The notation for the general concept can vary considerably. 
For a finite union of sets ${\displaystyle S_{1},S_{2},S_{3},\dots ,S_{n}}$ one often writes ${\displaystyle S_{1}\cup S_{2}\cup S_{3}\cup \dots \cup S_{n}}$ or ${\displaystyle \bigcup _{i=1}^{n}S_{i}}$. Various common notations for arbitrary unions include ${\displaystyle \bigcup \mathbf {M} }$, ${\displaystyle \bigcup _{A\in \mathbf {M} }A}$, and ${\displaystyle \bigcup _{i\in I}A_{i}}$, the last of which refers to the union of the collection ${\displaystyle \left\{A_{i}:i\in I\right\}}$ where I is an index set and ${\displaystyle A_{i}}$ is a set for every ${\displaystyle i\in I}$. In the case that the index set I is the set of natural numbers, one uses a notation ${\displaystyle \bigcup _{i=1}^{\infty }A_{i}}$ analogous to that of the infinite sums in series.[7]

When the symbol "∪" is placed before other symbols instead of between them, it is usually rendered as a larger size.

## Notes

1. ^ Weisstein, Eric W. "Union". Wolfram's Mathworld. Archived from the original on 2009-02-07.
2. ^ a b Vereshchagin, Nikolai Konstantinovich; Shen, Alexander (2002-01-01). Basic Set Theory. American Mathematical Soc. ISBN 9780821827314.
3. ^ deHaan, Lex; Koppelaars, Toon (2007-10-25). Applied Mathematics for Database Professionals. Apress. ISBN 9781430203483.
4. ^ Halmos, P. R. (2013-11-27). Naive Set Theory. Springer Science & Business Media. ISBN 9781475716450.
5. ^ Dasgupta, Abhijit (2013-12-11). Set Theory: With an Introduction to Real Point Sets. Springer Science & Business Media. ISBN 9781461488545.
6. ^ "Finite Union of Finite Sets is Finite - ProofWiki". proofwiki.org. Archived from the original on 11 September 2014. Retrieved 2018.
7. ^ a b Smith, Douglas; Eggen, Maurice; Andre, Richard St (2014-08-01). A Transition to Advanced Mathematics. Cengage Learning. ISBN 9781285463261.

This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
https://www.abs.gov.au/statistics/concepts-sources-methods/australian-system-national-accounts-concepts-sources-and-methods/2020-21/chapter-6-price-and-volume-measures/annex-deriving-chain-volume-indexes
# Annex A Deriving chain volume indexes

Australian System of National Accounts: Concepts, Sources and Methods
Reference period: 2020-21 financial year

6A.1    The following provides a detailed description of the various chain volume measures and the issues associated with using them.

## Different index formulae

6A.2    The general formula for a Laspeyres volume index from year $$y-1$$ to year $$y$$ is given by:

$$\large {L_Q} = \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^y} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }},$$         - - - - - - - (1)

where $$P_i^y$$ and $$Q_i^y$$ are prices and quantities of the $$i^{th}$$ product in year $$y$$ and there are $$n$$ products. The denominator is the current price value of the aggregate in year $$y-1$$ and the numerator is the value of the aggregate in year $$y$$ at year $$y-1$$ average prices.

6A.3    A Paasche volume index from year $$y-1$$ to year $$y$$ is defined as:

$$\large {P_Q} = \frac{{\sum\limits_{i = 1}^n {P_i^yQ_i^y} }}{{\sum\limits_{i = 1}^n {P_i^yQ_i^{y - 1}} }},$$         - - - - - - - (2)

6A.4    A Fisher index is derived as the geometric mean of a Laspeyres and a Paasche index:

$$\large {F_Q} = {\left( {{L_Q}{P_Q}} \right)^{1/2}}$$         - - - - - - - (3)

6A.5    A Paasche price index from year $$y-1$$ to year $$y$$ is defined as:

$$\large {P_P} = \frac{{\sum\limits_{i = 1}^n {P_i^yQ_i^y} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^y} }},$$         - - - - - - - (4)

6A.6    When this Paasche price index is divided into the current price index from year $$y-1$$ to year $$y$$, a Laspeyres volume index is produced:

$$\large \frac{{\frac{{\sum\limits_{i = 1}^n {P_i^yQ_i^y} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }}}}{{{P_P}}} = \frac{{\frac{{\sum\limits_{i = 1}^n {P_i^yQ_i^y} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }}}}{{\frac{{\sum\limits_{i = 1}^n {P_i^yQ_i^y} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^y} }}}} = \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^y} 
}}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y-1}} }} = {L_Q}$$         - - - - - - - (5) 6A.7    Evidently, Laspeyres volume indexes and Paasche price indexes complement each other, and vice versa Table 6A.1 Comparison of Laspeyres, Paasche and Fisher volume indexes Sales of beef and chicken Quantity (kilos)Year 1Year 2Year 3Year 4 Beef20181617 Chicken10121417 Price per kilo ($) Beef1.001.101.201.30 Chicken2.002.002.102.15 Value ($) Beef20.0019.8019.2022.10 Chicken20.0024.0029.4036.55 Total40.0043.8048.6058.65 Laspeyres volume index: year 1 to year 2 using year 1 prices Values at year 1 prices ($) Year 1Year 2Volume indexGrowth rate Beef20.0018.000.900-10.0% Chicken20.0024.001.20020.0% Total40.0042.001.0505.0% Laspeyres volume index: year 2 to year 3 using year 2 prices Values at year 2 prices ($) Year 2Year 3Volume indexGrowth rate Beef19.8017.600.889-11.1% Chicken24.0028.001.16716.7% Total43.8045.601.0414.1% Laspeyres volume index: year 3 to year 4 using year 3 prices Values at year 3 prices ($) Year 3Year 4Volume indexGrowth rate Beef19.2020.401.0636.3% Chicken29.4035.701.21421.4% Total48.6056.101.15415.4% Paasche volume index: year 1 to year 2 using year 2 prices Values at year 2 prices ($) Year 1Year 2Volume indexGrowth rate Beef22.0019.800.090-10.0% Chicken20.0024.001.20020.0% Total42.0043.801.0434.3% Paasche volume index: year 2 to year 3 using year 3 prices Values at year 3 prices ($) Year 2Year 3Volume indexGrowth rate Beef21.6019.200.089-11.1% Chicken25.2029.401.16716.7% Total46.8048.601.0383.8% Paasche volume index: year 3 to year 4 using year 4 prices Values at year 4 prices ($) Year 3Year 4Volume indexGrowth rate Beef20.8022.101.0636.3% Chicken30.1036.551.21421.4% Total50.9058.651.15215.2% Comparisons of the volume indexes Year 1 to 2Year 2 to 3Year 3 to 4 Laspeyres1.0501.0411.154 Paasche1.0431.0381.152 Fisher1.0461.0401.153 6A.8    The following table provides an example of deriving Laspeyres volume indexes by deflation. 
Table 6A.2 Derivation of Laspeyres volume indexes by deflation: sales of beef and chicken

Paasche price index: year 1 to year 2 using year 2 quantities

| Values at year 2 quantities ($) | Year 1 | Year 2 | Price index | Growth rate |
| --- | --- | --- | --- | --- |
| Beef | 18.00 | 19.80 | 1.100 | 10.0% |
| Chicken | 24.00 | 24.00 | 1.000 | 0.0% |
| Total | 42.00 | 43.80 | 1.043 | 4.3% |

Paasche price index: year 2 to year 3 using year 3 quantities

| Values at year 3 quantities ($) | Year 2 | Year 3 | Price index | Growth rate |
| --- | --- | --- | --- | --- |
| Beef | 17.60 | 19.20 | 1.091 | 9.1% |
| Chicken | 28.00 | 29.40 | 1.050 | 5.0% |
| Total | 45.60 | 48.60 | 1.066 | 6.6% |

Paasche price index: year 3 to year 4 using year 4 quantities

| Values at year 4 quantities ($) | Year 3 | Year 4 | Price index | Growth rate |
| --- | --- | --- | --- | --- |
| Beef | 20.40 | 22.10 | 1.083 | 8.3% |
| Chicken | 35.70 | 36.55 | 1.024 | 2.4% |
| Total | 56.10 | 58.65 | 1.045 | 4.5% |

Laspeyres volume indexes derived by deflation

|  | Year 1 to 2 | Year 2 to 3 | Year 3 to 4 |
| --- | --- | --- | --- |
| Value index | 1.095 | 1.110 | 1.207 |
| Paasche price index | 1.043 | 1.066 | 1.045 |
| Laspeyres volume index | 1.050 | 1.041 | 1.154 |

## Chain volume indexes

6A.9    Annual chain Laspeyres and Paasche volume indexes can be formed by multiplying consecutive year-to-year indexes:

$$\large L_Q^y = \frac{{\sum\limits_{i = 1}^n {P_i^0Q_i^1} }}{{\sum\limits_{i = 1}^n {P_i^0Q_i^0} }} \times \frac{{\sum\limits_{i = 1}^n {P_i^1Q_i^2} }}{{\sum\limits_{i = 1}^n {P_i^1Q_i^1} }} \times \frac{{\sum\limits_{i = 1}^n {P_i^2Q_i^3} }}{{\sum\limits_{i = 1}^n {P_i^2Q_i^2} }} \times ..... \times \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^y} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }}$$ - - - - - - - (6)

$$\large P_Q^y = \frac{{\sum\limits_{i = 1}^n {P_i^1Q_i^1} }}{{\sum\limits_{i = 1}^n {P_i^1Q_i^0} }} \times \frac{{\sum\limits_{i = 1}^n {P_i^2Q_i^2} }}{{\sum\limits_{i = 1}^n {P_i^2Q_i^1} }} \times \frac{{\sum\limits_{i = 1}^n {P_i^3Q_i^3} }}{{\sum\limits_{i = 1}^n {P_i^3Q_i^2} }} \times ..... \times \frac{{\sum\limits_{i = 1}^n {P_i^yQ_i^y} }}{{\sum\limits_{i = 1}^n {P_i^yQ_i^{y - 1}} }},$$ - - - - - - - (7)

6A.10    Chain Fisher indexes can be derived by taking their geometric mean:

$$\large F_Q^y = {\left( {L_Q^yP_Q^y} \right)^{1/2}}$$ - - - - - - - (8)

6A.11    All of these indexes can be re-referenced by dividing them by the index value in the chosen reference year and multiplying by 100 to produce an indexed series, or by multiplying by the current price value in the reference year to obtain a series in monetary values.

## The case for using chain indexes

6A.12    Frequent linking is beneficial when price and volume relativities progressively change. For example, volume estimates of gross fixed capital formation are much better derived as chain indexes than as fixed-weighted indexes (i.e. constant price estimates), mainly because of the steady decline in the relative prices of computer equipment and the corresponding increase in their relative volumes. While chain Fisher indexes perform best in such circumstances and are a much better indicator than fixed-weighted indexes, chain Laspeyres indexes capture much of the improvement from frequent linking.

6A.13    Conversely, frequent chaining is least beneficial when price and volume relativities are volatile. All chained series are subject to drift (see box below) when there is price and volume instability, but chain Fisher indexes usually drift less than either chain Laspeyres or chain Paasche indexes.

### Drift and long-term accuracy

Suppose the prices and quantities are $$p_i^t$$ and $$q_i^t$$ at time $$t$$, and $$p_i^{t+n}$$ and $$q_i^{t+n}$$ $$n$$ periods later at time $$t+n$$. Further suppose that the price in year $$t+n$$ returns to the same level as in year $$t$$ after having diverged from it during the intervening years ($$t+1$$ to $$t+n-1$$). Similarly, the quantity in year $$t+n$$ also returns to its original level after having diverged between those years.
Direct Laspeyres, Paasche and Fisher volume indexes from year $$t$$ to year $$t+n$$ would equal 1. However, it is unlikely that the values of a chain volume index would be identical in these years because of the cumulative effects of the changes in prices and volumes during the intervening years. The extent of the difference (usually expressed as the quotient of the two values) is a measure of the "drift" in the chain volume index between the two time periods. In reality it is very uncommon for prices and volumes to return to the values observed in an earlier period. Therefore, in practice, the drift and long-term accuracy of a chain or fixed-weighted index can be assessed over a period of time by comparing it with a direct Fisher index; that is, a Fisher index calculated directly from the first to the last observation in a period.

6A.14    Table 6A.3 below compares the chain Laspeyres, chain Paasche and chain Fisher indexes of meat sales. It shows that in this example:

• the chain Fisher index and the Fisher index calculated directly from the first year to the fourth year show almost the same growth rate over the four-year period; that is, the chain Fisher index shows very little drift; and
• both the chain Laspeyres and chain Paasche indexes come much closer to the two Fisher indexes than their fixed-weighted counterparts.

6A.15    It is important to note that this is just an example. In the real world, the differences between the different indexes are usually much smaller.

6A.16    For aggregates such as gross value added of mining and agriculture, and perhaps exports and imports, where volatility in price and volume relativities is common, the advantages of frequent linking may be doubtful, particularly using the Laspeyres (or Paasche) formula. For reasons of practicality and consistency, the same approach to volume aggregation has to be followed throughout the accounts.
So when choosing which formula to use, it is necessary to make an overall assessment of drift, accuracy and practical matters.

6A.17    In considering the benefits of chain volume indexes against fixed-weighted indexes, the 2008 SNA concludes that:

. . . it is generally recommended that annual indexes be chained. The price and volume components of monthly and quarterly data are usually subject to much greater variation than their annual counterparts due to seasonality and short-term irregularities. Therefore, the advantages of chaining at these higher frequencies are less and chaining should definitely not be applied to seasonal data that are not adjusted for seasonal fluctuations.³⁶

Table 6A.3 Illustration of chain volume indexes, direct indexes and drift

Chain volume indexes

| Laspeyres | Paasche | Fisher |
| --- | --- | --- |
| $$L_{CV}^1 = 100.0$$ | $$P_{CV}^1 = 100.0$$ | $$F_{CV}^1 = 100.0$$ |
| $$L_{CV}^2 = 100.0 \times 1.050 = 105.0$$ | $$P_{CV}^2 = 100.0 \times 1.043 = 104.3$$ | $$F_{CV}^2 = {\left( {105.0 \times 104.3} \right)^{0.5}} = 104.6$$ |
| $$L_{CV}^3 = 105.0 \times 1.041 = 109.3$$ | $$P_{CV}^3 = 104.3 \times 1.038 = 108.3$$ | $$F_{CV}^3 = {\left( {109.3 \times 108.3} \right)^{0.5}} = 108.8$$ |
| $$L_{CV}^4 = 109.3 \times 1.154 = 126.2$$ | $$P_{CV}^4 = 108.3 \times 1.152 = 124.8$$ | $$F_{CV}^4 = {\left( {126.2 \times 124.8} \right)^{0.5}} = 125.5$$ |

Direct volume indexes

| Laspeyres | Paasche | Fisher |
| --- | --- | --- |
| $$L_{DV}^4 = \frac{{17 \times 1.00 + 17 \times 2.00}}{{40.00}} \times 100 = 127.5$$ | $$P_{DV}^4 = \frac{{58.65}}{{20 \times 1.30 + 10 \times 2.15}} \times 100 = 123.5$$ | $$F_{DV}^4 = {\left( {127.5 \times 123.5} \right)^{0.5}} = 125.5$$ |

## Deriving annual chain volume indexes in the national accounts

6A.18    It is recommended in the 2008 SNA that the annual national accounts should be balanced in both current prices and in volume terms using S-U tables. In most cases, the volume estimates are best derived in the average prices of the previous year rather than some distant base year.
This is for two key reasons:

• assumptions of fixed relationships in volume terms are usually more likely to hold in the previous year's average prices than in the prices of some distant base year; and
• so that the growth rates of volumes and prices are less affected by compositional change.

6A.19    The compilation of annual S-U tables in current prices and in the average prices of the previous year lends itself to the compilation of annual Laspeyres indexes and to the formation of annual chain Laspeyres indexes.

6A.20    In order to compute annual Fisher indexes from data balanced in a S-U table, it is conceptually desirable to derive both Laspeyres and Paasche indexes from that data. The former requires balancing the S-U tables of the current year $$(y)$$ in current prices and in the average prices of the previous year $$(y-1)$$, and the latter requires balancing the S-U tables of the previous year $$(y-1)$$ in the average prices of that year and in the average prices of the current year $$(y)$$. Thus, the compilation of annual chain Fisher indexes, at least in concept, is somewhat more demanding than compiling annual chain Laspeyres indexes.

## Deriving quarterly chain indexes in the national accounts

6A.21    Computationally, the derivation of quarterly chain indexes from quarterly data with quarterly base periods is no different to compiling annual chain indexes from annual data with annual base periods. As recommended by the 2008 SNA, if quarterly volume indexes are to have quarterly base periods and be linked each quarter, then it should only be done using seasonally adjusted data. Furthermore, if the quarterly seasonally adjusted data are subject to substantial volatility in relative prices and relative volumes, then chain indexes should not be formed from indexes with quarterly base periods at all.
Even if the quarterly volatility is not so severe, quarterly base periods and quarterly linking are not recommended using the Laspeyres formula because of its greater susceptibility to drift than the Fisher formula. 6A.22 A way round this problem is to derive quarterly volume indexes from a year to quarters. In other words, use annual base years (i.e. annual weights) to derive quarterly volume indexes. Consider the Laspeyres annual volume index in formula 1. It can be expressed as a weighted average of elemental volume indexes: $$\large {L_Q} = \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^y} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }} = \sum\limits_{i = 1}^n {\left( {\frac{{Q_i^y}}{{Q_i^{y - 1}}}} \right)} s_i^{y - 1},\;where\;s_i^{y - 1} = \frac{{P_i^{y - 1}Q_i^{y - 1}}}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }}$$ - - - - - - - (9) $$s_i^{y - 1}$$ is the share, or weight, of the $$i^{th}$$ item in year $$y-1$$. 6A.23 Paasche volume indexes can also be expressed in terms of a weighted average of the elemental volume indexes, but as the harmonic, rather than arithmetic, mean. 6A.24 A Laspeyres-type³⁷ volume index from year $$y-1$$ to quarter $$c$$ in year $$y$$ takes the form: $$\large L_Q^{(y - 1) \to (c,y)} = \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}4q_i^{c,y}} }}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }} = \sum\limits_{i = 1}^n {\frac{{4q_i^{c,y}}}{{Q_i^{y - 1}}}} s_i^{y - 1},$$ - - - - - - - (10) where $${q_i^{c,y}}$$ is the volume of product $$i$$ in the $$c^{th}$$ quarter of year $$y$$. In this case the annual current price data in year $$y-1$$ are used to weight together elemental volume indexes from year $$y-1$$ to each of the quarters in year $$y$$. The “4” in formula 10 is to put the quarterly data onto a comparable basis with the annual data. Note that constant price (or fixed-weighted) volume indexes are traditionally formed in this way, but the weights are kept constant for many years. 
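The equivalence in formula (9) — the Laspeyres index as a direct ratio of values and as a value-share-weighted arithmetic mean of elemental volume indexes — can be checked numerically. The following sketch (Python, written for this annex rather than taken from it) uses the year 1 and year 2 beef and chicken figures from Table 6A.1.

```python
# Formula (9) two ways: the Laspeyres volume index as a direct ratio of
# values, and as a value-share-weighted arithmetic mean of elemental
# (product-level) volume indexes. Data: years 1 and 2 of Table 6A.1.
p0 = {"beef": 1.00, "chicken": 2.00}   # year 1 prices per kilo
q0 = {"beef": 20.0, "chicken": 10.0}   # year 1 quantities (kilos)
q1 = {"beef": 18.0, "chicken": 12.0}   # year 2 quantities (kilos)

# Direct form: sum(P^{y-1} Q^y) / sum(P^{y-1} Q^{y-1})
direct = sum(p0[i] * q1[i] for i in p0) / sum(p0[i] * q0[i] for i in p0)

# Weighted form: sum over i of (Q^y / Q^{y-1}) times the base-period value share
base_value = sum(p0[i] * q0[i] for i in p0)
weighted = sum((q1[i] / q0[i]) * (p0[i] * q0[i] / base_value) for i in p0)

print(round(direct, 3), round(weighted, 3))  # 1.05 1.05 -- Table 6A.1's year 1 to 2 index
```

Both forms give 1.050, the Laspeyres movement shown in Table 6A.1; the weighted form is the one that generalises to the year-to-quarter indexes of formula (10), where annual value shares weight quarterly elemental indexes.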
6A.25 2008 SNA describes how chain Fisher-type indexes of quarterly data with annual base periods can be derived: "Just as it is possible to derive annually chained Laspeyres-type quarterly indices, so it is possible to derive annually chained Fisher-type quarterly indices. For each pair of consecutive years, Laspeyres-type and Paasche-type quarterly indices are constructed for the last two quarters of the first year, year $$y-1$$ and the first two quarters of the second year, year $$y$$. The Paasche-type quarterly indices are constructed as backward-looking Laspeyres-type quarterly indices and then inverted. This is done to ensure that the Fisher-type quarterly indices are derived symmetrically. In the forward-looking Laspeyres-type indices the annual value shares relate to the first of the two years, whereas in the backward-looking Laspeyres-type indices the annual value shares relate to the second of the two years. For each of the four quarters a Fisher-type index is derived as the geometric mean of the corresponding Laspeyres-type and Paasche-type indices. Consecutive spans of four quarters can then be linked using the one-quarter overlap technique. The resulting annually chained Fisher-type quarterly indices need to be benchmarked to annual chain Fisher indices to achieve consistency with the annual estimates."³⁸ ## Choosing between chain Laspeyres and chain Fisher indexes 6A.26 There are several advantages in using the Laspeyres formula: • its adoption is consistent with compiling additive S-U tables in both current prices and in the prices of the previous year; • quarterly chain volume estimates of both seasonally adjusted and unadjusted data can be derived; • it is unnecessary to seasonally adjust volume data at the most detailed level, if desired; and • it is simpler and lower risk to construct chain Laspeyres indexes than Fisher indexes. 
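Before weighing these advantages against those of the Fisher formula, the size of the drift at stake can be checked directly. This sketch (Python, illustrative only) rebuilds the chain Fisher index of Table 6A.3 from the Table 6A.1 prices and quantities and compares it with a direct Fisher index computed straight from year 1 to year 4.

```python
# Drift check of paragraph 6A.14: chain Fisher vs direct Fisher from
# year 1 to year 4, using the Table 6A.1 data (years indexed 0..3).
prices = {"beef": [1.00, 1.10, 1.20, 1.30], "chicken": [2.00, 2.00, 2.10, 2.15]}
qty    = {"beef": [20.0, 18.0, 16.0, 17.0], "chicken": [10.0, 12.0, 14.0, 17.0]}

def laspeyres(y):
    """Laspeyres volume index from year y-1 to year y (formula 1)."""
    return (sum(prices[i][y - 1] * qty[i][y] for i in qty) /
            sum(prices[i][y - 1] * qty[i][y - 1] for i in qty))

def paasche(y):
    """Paasche volume index from year y-1 to year y (formula 2)."""
    return (sum(prices[i][y] * qty[i][y] for i in qty) /
            sum(prices[i][y] * qty[i][y - 1] for i in qty))

# Chain Fisher: multiply consecutive year-to-year Fisher indexes (formula 8)
chain_fisher = 100.0
for y in (1, 2, 3):
    chain_fisher *= (laspeyres(y) * paasche(y)) ** 0.5

# Direct Fisher calculated straight from year 1 to year 4
l_direct = (sum(prices[i][0] * qty[i][3] for i in qty) /
            sum(prices[i][0] * qty[i][0] for i in qty))
p_direct = (sum(prices[i][3] * qty[i][3] for i in qty) /
            sum(prices[i][3] * qty[i][0] for i in qty))
direct_fisher = 100.0 * (l_direct * p_direct) ** 0.5

print(round(chain_fisher, 1), round(direct_fisher, 1))  # 125.5 125.5 -- negligible drift
```

Both round to 125.5, matching Table 6A.3: for these data the chain Fisher index drifts by only about 0.01 of an index point over the three links.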
6A.27    The advantages of using the Fisher formula are:

• it is more accurate than the Laspeyres formula; and
• it is more robust and less susceptible to drift when price and volume relativities are volatile.

6A.28    In practice, it is generally found that there is little difference between chain Laspeyres and Fisher indexes for most aggregates. The major threat to the efficacy of the Laspeyres formula in the national accounts has been computer equipment. The quality-adjusted prices of computer equipment have been falling rapidly and the volumes of production and expenditure have been rising rapidly for many years. Consequently, the chain Laspeyres and chain Fisher indexes for aggregates in which computer equipment is a significant component are likely to show differences. Until now, these differences have been insufficient to cause concern and have not been considered to outweigh the advantages of using the Laspeyres formula. This is largely because a country such as Australia does not produce a large volume of computers domestically, and so GDP is largely unaffected.

6A.29    There is one other reason why the ABS has chosen to derive chain volume estimates using the Laspeyres formula. A requirement of using quarterly base periods is the availability of quarterly current price data (see formula 9). While there are quarterly current price estimates of final expenditures in the ASNA, there are currently no quarterly current price estimates of gross value added by industry. Hence, it is not possible to derive chain volume estimates with quarterly base periods for the production measure of GDP.

## Deriving annually-linked quarterly Laspeyres-type volume indexes

6A.30    While there are different ways of linking annual Laspeyres volume indexes, they all produce the same result. But this is not true when it comes to linking annual-to-quarter Laspeyres-type volume indexes for consecutive years.
Paragraphs 15.46-15.50 of the 2008 SNA discuss three methods for linking these Laspeyres-type volume indexes; they are:

• Annual overlap;
• One-quarter overlap; and
• Over the year.

6A.31    When a Laspeyres-type quarterly volume index from year $$y-1$$ to quarter $$c$$ in year $$y$$ is multiplied by the current price value for year $$y-1$$ divided by four, then a value for quarter $$c$$ is obtained in the average prices of year $$y-1$$:

$$\large \sum\limits_{i = 1}^n {\frac{{4q_i^{c,y}}}{{Q_i^{y - 1}}}} s_i^{y - 1}\frac{1}{4}\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} = \sum\limits_{i = 1}^n {\frac{{4q_i^{c,y}}}{{Q_i^{y - 1}}}} \frac{{P_i^{y - 1}Q_i^{y - 1}}}{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }}\frac{1}{4}\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} = \sum\limits_{i = 1}^n {q_i^{c,y}P_i^{y - 1}}$$ - - - - - - - (11)

6A.32    Hence, the task of linking quarterly Laspeyres-type volume indexes for two consecutive years, year $$y-1$$ and year $$y$$, amounts to linking the quarterly values of year $$y-1$$ in year $$y-2$$ average prices with the values of year $$y$$ in year $$y-1$$ average prices.

## Annual overlap method

6A.33    One way of putting the eight quarters described in the previous paragraph onto a comparable valuation basis is to calculate and apply a link factor from an annual overlap. Values for year $$y-1$$ are derived in both year $$y-1$$ prices and year $$y-2$$ prices, and the former is divided by the latter, giving an annual link factor for linking year $$y-1$$ to year $$y$$ equal to:

$$\large \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}Q_i^{y - 1}} }}{{\sum\limits_{i = 1}^n {P_i^{y - 2}Q_i^{y - 1}} }}$$ - - - - - - - (12)

6A.34    Multiplying the quarterly values for year $$y-1$$ at year $$y-2$$ average prices by this link factor puts them on to a comparable valuation basis with the quarterly estimates for year $$y$$ at year $$y-1$$ prices.
Note that this link factor is identical to the one that can be used to link the annual value for year $$y-1$$ at year $$y-2$$ average prices with the annual value for year $$y$$ at year $$y-1$$ average prices. Therefore, if the quarterly values for every year $$m$$ at year $$m-1$$ average prices sum to the corresponding annual value, then the chain-linked quarterly series will be temporally consistent with the corresponding chain-linked annual series.

## One-quarter overlap method

6A.35    The one-quarter overlap method, as its name suggests, involves calculating a link factor using overlap values for a single quarter. To link the four quarters of year $$y-1$$ at year $$y-2$$ average prices with the four quarters of year $$y$$ at year $$y-1$$ average prices, a one-quarter overlap can be created for either the fourth quarter of year $$y-1$$ or the first quarter of year $$y$$. The link factor derived from an overlap for the fourth quarter of year $$y-1$$ is equal to:

$$\large \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}q_i^{4,(y - 1)}} }}{{\sum\limits_{i = 1}^n {P_i^{y - 2}q_i^{4,(y - 1)}} }}$$ - - - - - - - (13)

6A.36    Multiplying the quarterly values for year $$y-1$$ at year $$y-2$$ average prices by this link factor puts them on to a comparable valuation basis with the quarterly estimates for year $$y$$ at year $$y-1$$ prices.

6A.37    A key property of the one-quarter overlap method is that it preserves the quarter-to-quarter growth rate between the fourth quarter of year $$y-1$$ and the first quarter of year $$y$$ - unlike the annual overlap method. The "damage" done to that growth rate by the annual overlap method is determined by the difference between the annual and quarterly link factors. Conversely, this difference also means that the sum of the linked quarterly values in year $$y-1$$ differs from the annual-linked data by the ratio of the two link factors.
Temporal consistency can be achieved by benchmarking the quarterly chain volume estimates to their annual counterparts.

6A.38    The following table illustrates the methods used to derive link factors:

Table 6A.4 Comparison of the methods to derive link factors: sales of beef and chicken

Annual overlap method

| Year 2 to Year 3 | Year 3 to Year 4 |
| --- | --- |
| $$\Large \frac{{\sum\limits_{i = 1}^2 {P_i^2Q_i^2} }}{{\sum\limits_{i = 1}^2 {P_i^1Q_i^2} }}$$ | $$\Large \frac{{\sum\limits_{i = 1}^2 {P_i^3Q_i^3} }}{{\sum\limits_{i = 1}^2 {P_i^2Q_i^3} }}$$ |
| $$\Large \frac{{(1.1 \times 18) + (2.0 \times 12)}}{{(1.0 \times 18) + (2.0 \times 12)}} = 1.043$$ | $$\Large \frac{{(1.2 \times 16) + (2.1 \times 14)}}{{(1.1 \times 16) + (2.0 \times 14)}} = 1.066$$ |

One-quarter overlap method

| Quarter 4 in Year 2 | Quarter 4 in Year 3 |
| --- | --- |
| $$\Large \frac{{\sum\limits_{i = 1}^2 {P_i^2q_i^{4,2}} }}{{\sum\limits_{i = 1}^2 {P_i^1q_i^{4,2}} }}$$ | $$\Large \frac{{\sum\limits_{i = 1}^2 {P_i^3q_i^{4,3}} }}{{\sum\limits_{i = 1}^2 {P_i^2q_i^{4,3}} }}$$ |
| $$\Large \frac{{(1.1 \times 6) + (2.0 \times 3)}}{{(1.0 \times 6) + (2.0 \times 3)}} = 1.05$$ | $$\Large \frac{{(1.2 \times 4) + (2.1 \times 3)}}{{(1.1 \times 4) + (2.0 \times 3)}} = 1.0673$$ |

## Over the year method

6A.39    The over-the-year method requires compiling a separate link factor for each type of quarter. Each of the quarterly values in year $$y-1$$ at year $$y-2$$ average prices is multiplied by its own link factor. The over-the-year quarterly link factor for quarter $$c$$, from year $$y-1$$ at average year $$y-2$$ prices to year $$y$$ at average year $$y-1$$ prices, is equal to:

$$\large \frac{{\sum\limits_{i = 1}^n {P_i^{y - 1}q_i^{c,(y - 1)}} }}{{\sum\limits_{i = 1}^n {P_i^{y - 2}q_i^{c,(y - 1)}} }}$$ - - - - - - - (14)

6A.40    The over-the-year method does not distort quarter-on-same-quarter-of-previous-year growth rates, since the chain links refer to the volumes of the same quarter in the respective previous year valued at the average prices of that year. However, it does distort quarter-to-quarter growth rates.
In addition, the linked quarterly data are temporally inconsistent with the annual-linked data and so benchmarking is needed. Given these shortcomings, the over-the-year method is best avoided.

6A.41    The following tables provide examples of using the annual and one-quarter overlap methods.

Table 6A.5 Quarterly chain volume measures – annual overlap method: referenced to year 2. Sales of beef and chicken

|  | Y2 Q1 | Y2 Q2 | Y2 Q3 | Y2 Q4 | Y3 Q1 | Y3 Q2 | Y3 Q3 | Y3 Q4 | Y4 Q1 | Y4 Q2 | Y4 Q3 | Y4 Q4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Beef (kilos) | 5 | 4 | 3 | 6 | 4 | 5 | 3 | 4 | 4 | 4 | 5 | 4 |
| Chicken (kilos) | 2 | 3 | 4 | 3 | 2 | 4 | 5 | 3 | 3 | 4 | 6 | 4 |
| Price of beef in previous year ($) | 1.00 | 1.00 | 1.00 | 1.00 | 1.10 | 1.10 | 1.10 | 1.10 | 1.20 | 1.20 | 1.20 | 1.20 |
| Price of chicken in previous year ($) | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.10 | 2.10 | 2.10 | 2.10 |
| Value of beef at previous year's prices ($) | 5.00 | 4.00 | 3.00 | 6.00 | 4.40 | 5.50 | 3.30 | 4.40 | 4.80 | 4.80 | 6.00 | 4.80 |
| Value of chicken at previous year's prices ($) | 4.00 | 6.00 | 8.00 | 6.00 | 4.00 | 8.00 | 10.00 | 6.00 | 6.30 | 8.40 | 12.60 | 8.40 |
| Total sales of meat at previous year's prices ($) | 9.00 | 10.00 | 11.00 | 12.00 | 8.40 | 13.50 | 13.30 | 10.40 | 11.10 | 13.20 | 18.60 | 13.20 |
| Link factor year 2 to 3 | 1.0429 | 1.0429 | 1.0429 | 1.0429 |  |  |  |  |  |  |  |  |
| Linking year 2 to year 3 ($) | 9.39 | 10.43 | 11.47 | 12.51 | 8.40 | 13.50 | 13.30 | 10.40 |  |  |  |  |
| Link factor year 3 to 4 | 1.0658 | 1.0658 | 1.0658 | 1.0658 | 1.0658 | 1.0658 | 1.0658 | 1.0658 |  |  |  |  |
| Linking years 2 and 3 to year 4 ($) | 10.00 | 11.12 | 12.23 | 13.34 | 8.95 | 14.39 | 14.18 | 11.08 | 11.10 | 13.20 | 18.60 | 13.20 |
| Factor to reference to year 2 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 | 0.9383 |
| Referenced to year 2 ($) | 9.39 | 10.43 | 11.47 | 12.51 | 8.40 | 13.50 | 13.30 | 10.40 | 10.41 | 12.39 | 17.45 | 12.39 |
| Annualised ($) | 43.80 |  |  |  | 45.60 |  |  |  | 52.64 |  |  |  |
| Quarterly growth rate (%) |  | 11.11 | 10.00 | 9.09 | -32.88 | 60.71 | -1.48 | -21.80 | 0.14 | 18.92 | 40.91 | -29.03 |

Table 6A.6 Quarterly chain volume measures – one-quarter overlap method: referenced to year 2. Sales of beef and chicken

|  | Y2 Q1 | Y2 Q2 | Y2 Q3 | Y2 Q4 | Y3 Q1 | Y3 Q2 | Y3 Q3 | Y3 Q4 | Y4 Q1 | Y4 Q2 | Y4 Q3 | Y4 Q4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Beef (kilos) | 5 | 4 | 3 | 6 | 4 | 5 | 3 | 4 | 4 | 4 | 5 | 4 |
| Chicken (kilos) | 2 | 3 | 4 | 3 | 2 | 4 | 5 | 3 | 3 | 4 | 6 | 4 |
| Price of beef in previous year ($) | 1.00 | 1.00 | 1.00 | 1.00 | 1.10 | 1.10 | 1.10 | 1.10 | 1.20 | 1.20 | 1.20 | 1.20 |
| Price of chicken in previous year ($) | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.10 | 2.10 | 2.10 | 2.10 |
| Value of beef at previous year's prices ($) | 5.00 | 4.00 | 3.00 | 6.00 | 4.40 | 5.50 | 3.30 | 4.40 | 4.80 | 4.80 | 6.00 | 4.80 |
| Value of chicken at previous year's prices ($) | 4.00 | 6.00 | 8.00 | 6.00 | 4.00 | 8.00 | 10.00 | 6.00 | 6.30 | 8.40 | 12.60 | 8.40 |
| Total sales of meat at previous year's prices ($) | 9.00 | 10.00 | 11.00 | 12.00 | 8.40 | 13.50 | 13.30 | 10.40 | 11.10 | 13.20 | 18.60 | 13.20 |
| Link factor year 2 to 3 | 1.05 | 1.05 | 1.05 | 1.05 |  |  |  |  |  |  |  |  |
| Linking year 2 to year 3 ($) | 9.45 | 10.50 | 11.55 | 12.60 | 8.40 | 13.50 | 13.30 | 10.40 |  |  |  |  |
| Link factor year 3 to 4 | 1.0673 | 1.0673 | 1.0673 | 1.0673 | 1.0673 | 1.0673 | 1.0673 | 1.0673 |  |  |  |  |
| Linking years 2 and 3 to year 4 ($) | 10.09 | 11.21 | 12.33 | 13.45 | 8.97 | 14.41 | 14.20 | 11.10 | 11.10 | 13.20 | 18.60 | 13.20 |
| Factor to reference to year 2 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 | 0.9306 |
| Referenced to year 2 ($) | 9.39 | 10.43 | 11.47 | 12.51 | 8.34 | 13.41 | 13.21 | 10.33 | 10.33 | 12.28 | 17.31 | 12.28 |
| Annualised ($) | 43.80 |  |  |  | 45.29 |  |  |  | 52.20 |  |  |  |
| Quarterly growth rate (%) |  | 11.11 | 10.00 | 9.09 | -33.33 | 60.71 | -1.48 | -21.80 | 0.00 | 18.92 | 40.91 | -29.03 |

## Deriving chain volume estimates of time series that are not strictly positive

6A.42    Some quarterly national accounts series can take positive, negative or zero values, and so it is not possible to derive true chain volume estimates for them. The best-known example is changes in inventories, but any variable which is a net measure is susceptible. While it is not possible to derive true chain volume estimates for variables that can change sign or take zero values, it is possible to derive proxy chain volume estimates. The most commonly used approach is to:

• identify two strictly positive series that, when differenced, yield the target series;
• derive chain volume estimates of these two series expressed in currency units; and
• difference the two chain volume series.

6A.43    The same approach can be used to derive seasonally adjusted proxy chain volume estimates, except that after step 2 the two series are seasonally adjusted before proceeding to step 3.

6A.44    In the case of changes in inventories, the obvious candidates for the two strictly positive series are the opening and closing inventory levels.
The chain volume index of opening inventories is referenced to the opening value of inventories in the reference year expressed at the average prices of the reference year. Likewise, the chain volume index of closing inventories is referenced to the closing value of inventories expressed at the average prices of the reference year. This ensures that the value of the proxy chain volume measure of changes in inventories is equal to the current price value in the reference year.

6A.45    Seasonally adjusted current price estimates of changes in inventories are obtained by inflating the proxy chain volume estimates by a suitable price index centred on the middle of each quarter and with the same reference year as the volume estimates.

### Endnotes

36. SNA, 2008, para. 15.44.
37. The terms Laspeyres-type and Fisher-type indexes are used to describe quarterly indexes with annual weights.
38. SNA, 2008, paras. 15.53-15.54.
https://www.pnnl.gov/news?news%5B0%5D=research-topic%3AAdvanced%20Hydrocarbon%20Conversion&news%5B1%5D=research-topic%3ABioenergy%20Technologies&news%5B2%5D=research-topic%3AComputing%20%26%20Analytics&news%5B3%5D=research-topic%3AData%20Analytics%20%26%20Machine%20Learning&news%5B4%5D=research-topic%3ADistribution&news%5B5%5D=research-topic%3AHuman%20Health&news%5B6%5D=research-topic%3AIntegrative%20Omics&news%5B7%5D=research-topic%3ANuclear%20Energy&news%5B8%5D=research-topic%3ARenewable%20Energy&news%5B9%5D=research-topic%3ASolar%20Energy&news%5B10%5D=research-topic%3AWater%20Power
News & Media: Latest Stories. 309 results found. Filtered by Advanced Hydrocarbon Conversion, Bioenergy Technologies, Computing & Analytics, Data Analytics & Machine Learning, Distribution, Human Health, Integrative Omics, Nuclear Energy, Renewable Energy, Solar Energy, and Water Power.

JULY 14, 2020, Web Feature: Turning the Tides. Their consistency and predictability make tidal energy attractive, not only as a source of electricity but, potentially, as a mechanism to provide reliability and resilience to regional or local power grids.

JUNE 25, 2020, Web Feature: Mapping the Molecular Health Benefits of Exercise. Researchers from 25 institutions around the country, including PNNL, are working to find out how exercise changes the molecular makeup of our cells to generate health benefits.

JUNE 9, 2020, News Release: PNNL Waives Fee to Test-Drive Portfolio of Intellectual Property. To help spur economic development and assist in the battle against COVID-19, PNNL is making available its entire portfolio of patented technologies on a research trial basis, at no cost, through the end of 2020.

JUNE 9, 2020, Web Feature

MAY 19, 2020, News Release: New Study Confirms Important Clues to Fight Ovarian Cancer. A new study using proteogenomics to compare cancerous tissue with normal fallopian tube samples advances insights about the molecular machinery that underlies ovarian cancer.

MAY 15, 2020, Web Feature: The recent coronavirus pandemic shows just how quickly a deadly pathogen can sweep across the globe, killing tens of thousands in the U.S. and disrupting daily life for millions more in the span of a few months.

MAY 4, 2020, News Release: Software Flaws Sometimes First Reported on Social Media. Software vulnerabilities are likely to be discussed on social media before they're revealed on a government reporting site, a practice that could pose a national security threat, according to computer scientists at PNNL.

APRIL 28, 2020, Web Feature: The Quantum Gate Hack. PNNL quantum algorithm theorist and developer Nathan Wiebe is applying ideas from data science and gaming hacks to quantum computing.

APRIL 21, 2020, Web Feature: A Tree-mendous Study: Biomass from Forest Restoration. PNNL and the U.S. Forest Service used a combination of data, models, analytical techniques and software to evaluate forest restoration impacts on the environment, while also assessing the economics of resulting biomass.

APRIL 17, 2020, Web Feature: Identifying the Dark Matter of the Molecular World. Artificial intelligence helps researchers identify metabolites, the small molecules that underlie life.

APRIL 16, 2020, Web Feature: Nuclear Process Science Initiative researchers at PNNL gain insights into the formation and rupture of high-pressure gas bubbles in nuclear fuel.

MARCH 31, 2020, Web Feature: Scientists Take Aim at the Coronavirus Toolkit. A PNNL scientist is studying the structures of the proteins on the surface of the novel coronavirus, using NMR spectroscopy to reveal information about the molecular toolkit that holds the keys to a vaccine or treatment.

MARCH 23, 2020, Web Feature: Defense against Coronavirus Starts with Prevention. PNNL scientists are on the front lines fighting against biological threats through early detection, diagnosis, and prevention strategies.

MARCH 9, 2020, News Release: PNNL, Verizon bring 5G to National Laboratory. Verizon recently announced a partnership that will make Pacific Northwest National Laboratory the U.S. Department of Energy's first national laboratory with Verizon 5G ultra wideband wireless technology.

MARCH 4, 2020, Web Feature: PNNL Researchers Make a "Beer Run" in the Name of Science. A chemical engineer by day at PNNL, Dan Howe is an ardent home brewer by night. The connection resulted in production of biocrude oil from brewery waste.

MARCH 2, 2020, Director's Column: PNNL Scientists Defend Against New Threats like Coronavirus. Combining its strength in biological sciences and data analytics, researchers at the Department of Energy's PNNL are working to enable a quick response to a biological incident, whether intentional, accidental or natural.
2020-07-15T02:54:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1748894304037094, "perplexity": 12031.127632728949}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657154789.95/warc/CC-MAIN-20200715003838-20200715033838-00155.warc.gz"}
https://pos.sissa.it/055/109/
Volume 055 - Physics at LHC 2008 (2008LHC) - Poster session MC free calibration of LHCb RICH detectors using the $\Lambda \to p \pi$ B.P. Popovici Full text: Not available Open Access: Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike license.
2021-09-27T21:42:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19502265751361847, "perplexity": 7017.351980303111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00187.warc.gz"}
https://gea.esac.esa.int/archive/documentation/GEDR3/Data_processing/chap_cu3pre/sec_cu3pre_process/ssec_cu3pre_process_bvc.html
3.4.6 Barycentric radial velocity correction (BVC) Author(s): Javier Castañeda The barycentric radial velocity correction in Gaia is obtained from: • the Gaia ephemeris or orbit, which gives Gaia's position and velocity in the BCRS at any moment of time covered by the observations (see Section 4.2.3); • the spacecraft attitude, which provides the orientation of the satellite and thus the viewing direction of both fields of view (see Section 3.4.5). In practice, the module in charge of computing the barycentric radial velocity correction populates a table with values for the two fields of view of Gaia at regular time knots (typically every 5 minutes). For each time knot, the Gaia velocity is retrieved from the ephemeris and projected onto the viewing directions of the two fields of view at the reference zero point of the RVS instrument (measured before launch). These viewing directions are obtained from the attitude and some basic transformations between the different Gaia reference systems, accounting for the basic angle between the two fields of view and the focal plane geometry, among others (see Figure 3.15). The precision of even the most preliminary orbit data ($\sim$mm/s) and attitude data (5-mas level) exceeds the needs of the barycentric velocity correction consumers by orders of magnitude (see Chapter 6). This scheme provides an RMS precision of about 0.05 km/s. The corrections range from $+30$  km s${}^{-1}$ to $-30$  km s${}^{-1}$ over Gaia's six-hour revolution, as shown in Figure 6.6. The barycentric radial velocity correction table was initially produced by IDT just after the computation of OGA1 (see Section 3.4.5). However, its production in the Daily pipeline was discontinued around mid-2017 (when the third data segment started), as it was considered more practical to compute the correction on the fly given its small computational cost. Furthermore, with this new approach, consumers of these data benefit from the latest orbit and attitude updates.
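The core of the computation described above is a single vector projection of the spacecraft velocity onto the line of sight. A minimal sketch follows (this is an illustrative reconstruction, not DPAC code; the function name and sign convention are assumptions made here):

```python
import math

def bvc_kms(velocity_kms, viewing_direction):
    """Project the spacecraft's barycentric velocity (km/s, BCRS) onto
    the viewing direction of a field of view, giving the radial velocity
    correction for sources seen along that direction. The direction is
    normalized first in case the input is not a unit vector."""
    norm = math.sqrt(sum(c * c for c in viewing_direction))
    los = [c / norm for c in viewing_direction]
    return sum(v * c for v, c in zip(velocity_kms, los))

# Toy example: 30 km/s motion, line of sight 60 degrees from the velocity
v = (30.0, 0.0, 0.0)
los = (math.cos(math.pi / 3), math.sin(math.pi / 3), 0.0)
correction = bvc_kms(v, los)  # 15.0 km/s; magnitude never exceeds |v|
```

Evaluated at 5-minute time knots, with the velocity taken from the ephemeris and the viewing direction from the attitude, this reproduces the kind of table described above; the +/-30 km/s range quoted in the text is simply the bound set by the spacecraft speed.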
2021-07-26T15:45:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 5, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7553878426551819, "perplexity": 1737.4110096550291}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00644.warc.gz"}
http://xxx.lanl.gov/list/astro-ph/new
Astrophysics New submissions [ total of 69 entries: 1-69 ] New submissions for Wed, 23 Apr 14 [1] Title: ALFALFA Discovery of the Nearby Gas-rich Dwarf Galaxy Leo P. V. Neutral Gas Dynamics and Kinematics Comments: 38 pages, 11 figures, Accepted for publication in the Astronomical Journal Subjects: Astrophysics of Galaxies (astro-ph.GA) We present new HI spectral line imaging of the extremely metal-poor, star-forming dwarf irregular galaxy Leo P. Our HI images probe the global neutral gas properties and the local conditions of the interstellar medium (ISM). The HI morphology is slightly elongated along the optical major axis. We do not find obvious signatures of interaction or infalling gas at large spatial scales. The neutral gas disk shows obvious rotation, although the velocity dispersion is comparable to the rotation velocity. The rotation amplitude is estimated to be V_c = 15 +/- 5 km/s. Within the HI radius probed by these observations, the mass ratio of gas to stars is roughly 2:1, while the ratio of the total mass to the baryonic mass is ~15:1. We use this information to place Leo P on the baryonic Tully-Fisher relation, testing the baryonic content of cosmic structures in a sparsely populated portion of parameter space that has hitherto been occupied primarily by dwarf spheroidal galaxies. We detect the signature of two temperature components in the neutral ISM of Leo P; the cold and warm components have characteristic velocity widths of 4.2 +/- 0.9 km/s and 10.1 +/- 1.2 km/s, corresponding to kinetic temperatures of ~1100 K and ~6200 K, respectively. The cold HI component is unresolved at a physical resolution of 200 pc. The highest HI surface densities are observed in close physical proximity to the single HII region.
A comparison of the neutral gas properties of Leo P with other extremely metal-deficient (XMD) galaxies reveals that Leo P has the lowest neutral gas mass of any known XMD, and that the dynamical mass of Leo P is more than two orders of magnitude smaller than any known XMD with comparable metallicity. [2] Title: A Simple Technique for Predicting High-Redshift Galaxy Evolution Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO) We show that the ratio of galaxies' specific star formation rates (SSFRs) to their host halos' specific mass accretion rates (SMARs) strongly constrains how the galaxies' stellar masses, specific star formation rates, and host halo masses evolve over cosmic time. This evolutionary constraint provides a simple way to probe z>8 galaxy populations without direct observations. Tests of the method with galaxy properties at z=4 successfully reproduce the known evolution of the stellar mass--halo mass (SMHM) relation, galaxy SSFRs, and the cosmic star formation rate (CSFR) for 5<z<8. We then predict the continued evolution of these properties for 8<z<15. In contrast to the non-evolution in the SMHM relation at z<4, the median galaxy mass at fixed halo mass increases strongly at z>4. We show that this result is closely linked to the flattening in galaxy SSFRs at z>2 compared to halo specific mass accretion rates; we expect that average galaxy SSFRs at fixed stellar mass will continue their mild evolution to z~15. The expected CSFR shows no breaks or features at z>8.5; this constrains both reionization and the possibility of a steep falloff in the CSFR at z=9-10. Finally, we make predictions for the James Webb Space Telescope (JWST), which should be able to observe one galaxy with M* > ~10^8 Msun per 10^3 Mpc^3 at z=9.6 and one such galaxy per 10^4 Mpc^3 at z=15. 
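The constraint at the heart of the method in [2] is that the ratio of a galaxy's SSFR to its halo's specific mass accretion rate (SMAR) ties stellar and halo growth together. A toy forward integration can illustrate the idea; all rates and masses below are purely illustrative and are not the paper's fits:

```python
def evolve_masses(mstar, mhalo, ssfr, smar, dt_gyr, steps):
    """Toy forward integration: stellar mass grows at the specific star
    formation rate (SSFR) and halo mass at the specific mass accretion
    rate (SMAR), both in 1/Gyr. When SSFR > SMAR the stellar-to-halo
    mass ratio rises, and vice versa."""
    for _ in range(steps):
        mstar *= 1.0 + ssfr * dt_gyr
        mhalo *= 1.0 + smar * dt_gyr
    return mstar, mhalo

# Illustrative values: SSFR = 2/Gyr, SMAR = 1/Gyr, evolved over 0.5 Gyr
ms, mh = evolve_masses(1e8, 1e11, 2.0, 1.0, 0.01, 50)
```

Because SSFR exceeds SMAR in this example, the stellar mass at fixed halo mass increases with time, the qualitative behavior the abstract reports for z > 4.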
[3] Title: The properties of the cool circumgalactic gas probed with the SDSS, WISE and GALEX surveys Comments: 14 pages, 11 figures, 1 table, submitted to ApJ, comments are welcome Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO) We explore the distribution of cool (~$10^4$K) gas around galaxies and its dependence on galaxy properties. By cross-correlating about 50,000 MgII absorbers with millions of sources from the SDSS (optical), WISE (IR), and GALEX (UV) surveys we effectively extract about 2,000 galaxy-absorber pairs at z~0.5 and probe relations between absorption strength and galaxy type, impact parameter and azimuthal angle. We find that cool gas traced by MgII absorbers exists around both star-forming and passive galaxies with a similar incidence rate on scales greater than 100 kpc but each galaxy type exhibits a different behavior on smaller scales: MgII equivalent width does not correlate with the presence of passive galaxies whereas stronger MgII absorbers tend to be found in the vicinity of star-forming galaxies. This effect is preferentially seen along the minor axis of these galaxies, suggesting that some of the gas is associated with outflowing material. In contrast, the distribution of cool gas around passive galaxies is consistent with being isotropic on the same scales. We quantify the average excess MgII equivalent width $<\delta W_{0}^{\rm MgII}>$ as a function of galaxy properties and find $<\delta W_0^{\rm MgII}>\propto SFR^{0.6}, sSFR^{0.4}$ and $M_\ast^{0.4}$ for star-forming galaxies. This work demonstrates that the dichotomy between star-forming and passive galaxies is reflected in the CGM traced by low-ionized gas. We also measure the covering fraction of MgII absorption and find it to be about 2-10 times higher for star-forming galaxies than passive ones within 50 kpc. 
We estimate the amount of neutral gas in the halo of $<\log M_\ast/{\rm M_\odot}>$~10.8 galaxies to be a few x$10^9 \rm M_\odot$ for both types of galaxies. Finally, we find that correlations between absorbers and sources detected in the UV and IR lead to physical trends consistent with those measured in the optical. [4] Title: The relation between gas density and velocity power spectra in galaxy clusters: high-resolution hydrodynamic simulations and the role of conduction Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA); High Energy Astrophysical Phenomena (astro-ph.HE); Fluid Dynamics (physics.flu-dyn) Exploring the ICM power spectrum can help us to probe the physics of galaxy clusters. Using high-resolution 3D plasma simulations, we study the statistics of the velocity field and its relation with the thermodynamic perturbations. The normalization of the ICM spectrum (density, entropy, or pressure) is linearly tied to the level of large-scale motions, which excite both gravity and sound waves due to stratification. For low 3D Mach number M~0.25, gravity waves mainly drive entropy perturbations, traced by preferentially tangential turbulence. For M>0.5, sound waves start to significantly contribute, passing the leading role to compressive pressure fluctuations, associated with isotropic turbulence (or a slight radial bias). Density and temperature fluctuations are then characterized by the dominant process: isobaric (low M), adiabatic (high M), or isothermal (strong conduction). Most clusters reside in the intermediate regime, showing a mixture of gravity and sound waves, hence drifting towards isotropic velocities. Remarkably, regardless of the regime, the variance of density perturbations is comparable to the 1D Mach number. This linear relation allows to easily convert between gas motions and ICM perturbations, which can be exploited by Chandra, XMM data and by the forthcoming Astro-H. 
At intermediate and small scales (10-100 kpc), the turbulent velocities develop a Kolmogorov cascade. The thermodynamic perturbations act as effective tracers of the velocity field, broadly consistent with the Kolmogorov-Obukhov-Corrsin advection theory. Thermal conduction acts to damp the gas fluctuations, washing out the filamentary structures and steepening the spectrum, while leaving the velocity cascade unaltered. The ratio of the velocity and density spectra thus inverts the downtrend shown by the non-diffusive models, allowing us to probe the presence of significant conductivity in the ICM. [5] Title: Winds of low-metallicity OB-type stars: HST-COS spectroscopy in IC1613 Comments: ApJ, accepted. 50 pages, 13 figures Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Astrophysics of Galaxies (astro-ph.GA) We present the first quantitative UV spectroscopic analysis of resolved OB stars in IC1613. Because of its alleged very low metallicity (<~1/10 Zo, from HII regions), studies in this Local Group dwarf galaxy could become a significant step forward from the SMC towards the extremely metal-poor massive stars of the early Universe. We present HST-COS data covering the ~1150-1800{\AA} wavelength range with resolution R~2500. We find that the targets do exhibit wind features, and these are similar in strength to those of SMC stars. Wind terminal velocities were derived from the observed P Cygni profiles with the SEI method. The vinf-Z relationship has been revisited. The terminal velocity of IC1613 O-stars is clearly lower than that of Milky Way counterparts, but there is no clear difference between IC1613 and SMC or LMC analogue stars. We find no clear segregation with host galaxy in the terminal velocities of B-supergiants, nor in the vinf/vesc ratio of the whole OB star sample in any of the studied galaxies. Finally, we present the first evidence that the Fe-abundance of IC1613 OB stars is similar to that of the SMC, in agreement with previous results on red supergiants.
With the confirmed ~1/10 solar oxygen abundances of B-supergiants, our results indicate that IC1613's [alpha/Fe] ratio is sub-solar. [6] Title: The relation between gas density and velocity power spectra in galaxy clusters: qualitative treatment and cosmological simulations Authors: I. Zhuravleva (Stanford), E. Churazov (MPA, IKI), A. A. Schekochihin (Oxford), E. T. Lau (Yale), D. Nagai (Yale), M. Gaspari (MPA), S. W. Allen (Stanford, SLAC), K. Nelson (Yale), I. J. Parrish (CITA) Comments: 6 pages, 3 figures, submitted to ApJ Letters Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA) We address the problem of evaluating the power spectrum of the velocity field of the ICM using only information on the plasma density fluctuations, which can be measured today by Chandra and XMM-Newton observatories. We argue that for relaxed clusters there is a linear relation between the rms density and velocity fluctuations across a range of scales, from the largest ones, where motions are dominated by buoyancy, down to small, turbulent scales: $(\delta\rho_k/\rho)^2 = \eta_1^2 (V_{1,k}/c_s)^2$, where $\delta\rho_k/\rho$ is the spectral amplitude of the density perturbations at wave number $k$, $V_{1,k}^2=V_k^2/3$ is the mean square component of the velocity field, $c_s$ is the sound speed, and $\eta_1$ is a dimensionless constant of order unity. Using cosmological simulations of relaxed galaxy clusters, we calibrate this relation and find $\eta_1\approx 1 \pm 0.3$. We argue that this value is set at large scales by buoyancy physics, while at small scales the density and velocity power spectra are proportional because the former are a passive scalar advected by the latter. This opens an interesting possibility to use gas density power spectra as a proxy for the velocity power spectra in relaxed clusters, across a wide range of scales. 
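The linear relation in [6], $(\delta\rho_k/\rho)^2 = \eta_1^2 (V_{1,k}/c_s)^2$, can be inverted directly: a measured density fluctuation amplitude and the sound speed yield the one-component velocity amplitude. A minimal sketch with illustrative values and the calibrated eta_1 = 1:

```python
import math

def v1k_from_density(delta_rho_over_rho, sound_speed_kms, eta1=1.0):
    """Invert (delta rho_k / rho)^2 = eta1^2 (V_1k / c_s)^2 for the
    one-component velocity amplitude V_1k at wavenumber k (km/s)."""
    return delta_rho_over_rho * sound_speed_kms / eta1

# 10% density fluctuations in gas with c_s = 1000 km/s (illustrative):
v1k = v1k_from_density(0.10, 1000.0)   # 100 km/s per component
v_total = math.sqrt(3.0) * v1k         # V_k^2 = 3 V_1k^2
```

This is exactly the conversion the abstract proposes for turning Chandra or XMM-Newton density power spectra into velocity estimates, with the quoted ~30% uncertainty on eta_1 propagating linearly into the velocity.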
[7] Title: CMB with the background primordial magnetic field Authors: Dai G. Yamazaki Journal-ref: Phys. Rev. D 89, 083528(2014) Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc) We investigate the effects of the background primordial magnetic field (PMF) on the cosmic microwave background (CMB). The sound speed of the tightly coupled photon-baryon fluid is increased by the background PMF. The increased sound speed causes the odd peaks of the CMB temperature fluctuations to be suppressed and the CMB peak positions to be shifted to a larger scale. The background PMF causes a stronger decaying potential and increases the amplitude of the CMB. These two effects of the background PMF on a smaller scale cancel out, and the overall effects of the background PMF are the suppression of the CMB around the first peak and the shifting of peaks to a large scale. We also discuss obtaining information about the PMF generation mechanisms, and we examine the nonlinear evolution of the PMF by the constraint on the maximum scale for the PMF distributions. Finally, we discuss degeneracies between the PMF parameters and the standard cosmological parameters. [8] Title: Multi-frequency radiation hydrodynamics simulations of H2 line emission in primordial, star-forming clouds Authors: Thomas H. Greif Comments: 18 pages, 1 table, 9 figures, submitted to MNRAS Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) We investigate the collapse of primordial gas in a minihalo with three-dimensional radiation hydrodynamics simulations that accurately model the transfer of H2 line emission. For this purpose, we have implemented a multi-line, multi-frequency ray-tracing scheme in the moving-mesh code Arepo that is capable of adaptively refining rays based on the Healpix algorithm, as well as a hybrid equilibrium/non-equilibrium primordial chemistry solver. 
We find that the chemical and thermal evolution of the central gas cloud is similar to the case where an escape probability formalism with a fit to detailed one-dimensional calculations is used, with the exception that the suppression of density perturbations due to the diffusion of radiation is only present in the full radiation hydrodynamics simulations. A multi-frequency treatment of the individual H2 lines is essential, since for high optical depths the smaller cross section in the wings of the lines greatly increases the amount of energy that can escape. The influence of Doppler shifts due to bulk velocities is comparatively small, since systematic velocity differences in the cloud are typically smaller than the sound speed. The radially averaged escape fraction agrees well with the fit of Ripamonti & Abel 2004, while for high optical depths the Sobolev method overestimates the escape fraction by more than an order of magnitude. This is due to the violation of the Sobolev condition, which states that the Sobolev length must be much smaller than the scale on which the properties of the gas change. The resulting discrepancy in the escape fraction explains the differences between primordial gas clouds found in previous studies. [9] Title: Constraining galaxy cluster temperatures and redshifts with eROSITA survey data Comments: accepted for publication in A&A; 17 pages, 20 figures Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) The nature of dark energy is imprinted in the large-scale structure of the Universe and thus in the mass and redshift distribution of galaxy clusters. The upcoming eROSITA mission will exploit this method of probing dark energy by detecting roughly 100,000 clusters of galaxies in X-rays. For a precise cosmological analysis the various galaxy cluster properties need to be measured with high precision and accuracy. 
To predict these characteristics of eROSITA galaxy clusters and to optimise optical follow-up observations, we estimate the precision and the accuracy with which eROSITA will be able to determine galaxy cluster temperatures and redshifts from X-ray spectra. Additionally, we present the total number of clusters for which these two properties will be available from the eROSITA survey directly. During its four years of all-sky surveys, eROSITA will determine cluster temperatures with relative uncertainties of Delta(T)/T<10% at the 68%-confidence level for clusters up to redshifts of z~0.16 which corresponds to ~1,670 new clusters with precise properties. Redshift information itself will become available with a precision of Delta(z)/(1+z)<10% for clusters up to z~0.45. Additionally, we estimate how the number of clusters with precise properties increases with a deepening of the exposure. Furthermore, the biases in the best-fit temperatures as well as in the estimated uncertainties are quantified and shown to be negligible in the relevant parameter range in general. For the remaining parameter sets, we provide correction functions and factors. The eROSITA survey will increase the number of galaxy clusters with precise temperature measurements by a factor of 5-10. Thus the instrument presents itself as a powerful tool for the determination of tight constraints on the cosmological parameters. [10] Title: Too Big to Fail in the Local Group Comments: 16 pages, 14 figures, 2 tables, submitted to MNRAS Subjects: Astrophysics of Galaxies (astro-ph.GA) We compare the dynamical masses of dwarf galaxies in the Local Group (LG) to the predicted masses of halos in the ELVIS suite of $\Lambda$CDM simulations, a sample of 48 Galaxy-size hosts, 24 of which are in paired configuration similar to the LG. 
We enumerate unaccounted-for dense halos ($V_\mathrm{max} \gtrsim 25$ km s$^{-1}$) in these volumes that at some point in their histories were massive enough to have formed stars in the presence of an ionizing background ($V_\mathrm{peak} > 30$ km s$^{-1}$). Within 300 kpc of the Milky Way, the number of unaccounted-for massive halos ranges from 2 - 25 over our full sample. Moreover, this "too big to fail" count grows as we extend our comparison to the outer regions of the Local Group: within 1.2 Mpc of either giant we find that there are 12-40 unaccounted-for massive halos. This count excludes volumes within 300 kpc of both the MW and M31, and thus should be largely unaffected by any baryonically-induced environmental processes. According to abundance matching -- specifically abundance matching that reproduces the Local Group stellar mass function -- all of these missing massive systems should have been quite bright, with $M_\star > 10^6M_\odot$. Finally, we use the predicted density structure of outer LG dark matter halos together with observed dwarf galaxy masses to derive an $M_\star-V_\mathrm{max}$ relation for LG galaxies that are outside the virial regions of either giant. We find that there is no obvious trend in the relation over three orders of magnitude in stellar mass (a "common mass" relation), from $M_\star \sim 10^8 - 10^5 M_\odot$, in drastic conflict with the tight relation expected for halos that are unaffected by reionization. Solutions to the too big to fail problem that rely on ram pressure stripping, tidal effects, or statistical flukes appear less likely in the face of these results. [11] Title: AGN Feedback in the Hot Halo of NGC 4649 Comments: 23 pages, 10 figures, 3 tables. 
Accepted for publication in ApJ Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Astrophysical Phenomena (astro-ph.HE) Using the deepest available $\textit{Chandra}$ observations of NGC 4649 we find strong evidence of cavities, ripples and ring-like structures in the hot interstellar medium (ISM) that appear to be morphologically related to the central radio emission. These structures show no significant temperature variations in correspondence with higher pressure regions ($0.5\mbox{kpc}<r<3\mbox{kpc}$). On the same spatial scale, a discrepancy between the mass profiles obtained from stellar dynamics and $\textit{Chandra}$ data represents the telltale evidence of a significant non-thermal pressure component in this hot gas, which is related to the radio jet and lobes. On larger scales we find agreement between the mass profile obtained from $\textit{Chandra}$ data and planetary nebulae and globular cluster dynamics. The nucleus of NGC 4649 appears to be extremely radiatively inefficient, with a highly sub-Bondi accretion flow. Consistently with this finding, the jet power evaluated from the observed X-ray cavities implies that only a small fraction of the accretion power calculated for the Bondi mass accretion rate emerges as kinetic energy. Comparing the jet power to the radio and nuclear X-ray luminosity, the observed cavities show behavior similar to those of other giant elliptical galaxies.
[12] Title: Different X-ray spectral evolution for black hole X-ray binaries in dual tracks of radio-X-ray correlation Authors: Xiao-Feng Cao (HUST), Qingwen Wu (Huazhong Univ of Sci and Tech), Ai-Jun Dong (HUST) Comments: Accepted for publication in ApJ; 10 pages, 5 figures, 5 tables Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) Recently, an outliers' track of the radio-X-ray correlation was found that is much steeper than the former universal correlation; the dual tracks were speculated to be triggered by different accretion processes. In this work, we test this issue by exploring the hard X-ray spectral evolution in four black-hole X-ray binaries (XRBs) with multiple, quasi-simultaneous radio and X-ray observations. Firstly, we find that the hard X-ray photon indices, $\Gamma$, are anti- and positively correlated with the X-ray flux when the X-ray flux, $F_{\rm 3-9keV}$, is below and above a critical flux, $F_{\rm X,crit}$, respectively, which is consistent with the predictions of the advection-dominated accretion flow (ADAF) and disk-corona models. Secondly, and most importantly, we find that the radio-X-ray correlations are also clearly different when the X-ray fluxes are higher and lower than the critical flux defined by the X-ray spectral evolution. The data points with $F_{\rm 3-9keV}\gtrsim F_{\rm X,crit}$ have a steeper radio-X-ray correlation ($F_{\rm X}\propto F_{\rm R}^{b}$ with $b\sim 1.1-1.4$), which roughly form the outliers' track. However, the data points with an anti-correlation of $\Gamma-F_{\rm 3-9keV}$ either stay in the universal track with $b\sim0.61$ or stay in a transition track (from the universal to the outliers' track or vice versa). Therefore, our results support the picture that the universal and outliers' tracks of the radio-X-ray correlation are regulated by radiatively inefficient and radiatively efficient accretion models, respectively.
[13] Title: A new fundamental plane for radiatively efficient black-hole sources Authors: Ai-Jun Dong (HUST), Qingwen Wu (Huazhong Univ of Sci and Tech), Xiao-Feng Cao (HUST) Comments: Accepted for publication in ApJ Letters; emulateapj format; 6 pages, 3 figures, 1 table Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) In recent years, it was found that several X-ray binaries (XRBs) in the low/hard state follow an outliers' track of the radio--X-ray correlation ($L_{\rm R}\propto L_{\rm X}^{b}$ with $b\sim1.4$), which is much steeper than the former universal track with $b\sim0.6$. In this work, we compile a sample of bright radio-quiet active galactic nuclei (AGNs) and find that their hard X-ray photon indices and Eddington ratios are positively correlated, similar to the behavior of the XRB outliers; both the bright AGNs and the XRB outliers have bolometric luminosities $\gtrsim 1\%\,L_{\rm Edd}$ ($L_{\rm Edd}$ is the Eddington luminosity). The Eddington-scaled radio--X-ray correlation of these AGNs is also similar to that of the XRB outliers, with the form $L_{\rm 5 GHz}/L_{\rm Edd}\propto (L_{\rm 2-10 keV}/L_{\rm Edd})^{c}$, where $c\simeq1.59$ and 1.53 for the AGNs and XRBs, respectively. Both the positively correlated X-ray spectral evolution and the steeper radio--X-ray correlation can be regulated by a radiatively efficient accretion flow (e.g., a disk-corona). Based on these similarities, we further present a new fundamental plane for the XRB outliers and bright AGNs in black-hole (BH) mass, radio and X-ray luminosity space: $\log L_{\rm R}=1.59^{+0.28}_{-0.22} \log L_{\rm X}- 0.22^{+0.19}_{-0.20}\log M_{\rm BH}-28.97^{+0.45}_{-0.45}$, with a scatter of $\sigma_{\rm R}=0.51\,\rm dex$. This fundamental plane is suitable for radiatively efficient BH sources, while the former plane proposed by Merloni et al. and Falcke et al. may be most suitable for radiatively inefficient sources. 
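The best-fit plane quoted in the abstract above can be evaluated directly. This short helper (our own sketch; it uses only the central values of the coefficients and assumes the usual cgs/solar-mass conventions, which should be checked against the paper) returns the predicted log radio luminosity:

```python
def fundamental_plane_log_lr(log_lx, log_mbh):
    """Predicted log L_R from the plane quoted above:
    log L_R = 1.59 log L_X - 0.22 log M_BH - 28.97 (central values only;
    log_lx is log10 of the 2-10 keV luminosity, log_mbh log10 of the BH mass)."""
    return 1.59 * log_lx - 0.22 * log_mbh - 28.97
```

For example, a source with $\log L_{\rm X}=44$ and $\log M_{\rm BH}=8$ is predicted to sit near $\log L_{\rm R}=39.23$, with the quoted 0.51 dex scatter around that value.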
[14] Title: Photospheric emission from long duration gamma-ray bursts powered by variable engines Authors: Diego López-Cámara (NCSU), Brian Morsony (UWi, Madison), Davide Lazzati (NCSU, Oregon State) Comments: 7 pages, 6 figures, submitted to MNRAS Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO) We present the results of a set of numerical simulations of long-duration gamma-ray burst jets aimed at studying the effect of a variable engine on the peak frequency of the photospheric emission. Our simulations follow the propagation of the jet inside the progenitor star, its break-out, and the subsequent expansion in the environment out to the photospheric radius. A constant-luminosity model and two step-function models are considered for the engine. We show that our synthetic light curves follow a luminosity-peak frequency correlation analogous to the Golenetskii correlation found in long-duration gamma-ray burst observations. Within the parameter space explored, the central engine luminosity profile does not appear to have a significant effect on the location of a gamma-ray burst in the luminosity-peak frequency plane, bursts from different central engines being indistinguishable from each other. [15] Title: Spatial variations in the spectral index of polarized synchrotron emission in the 9-year WMAP sky maps Comments: 10 pages, 9 figures, submitted to ApJ Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) We estimate the spectral index, $\beta$, of polarized synchrotron emission as observed in the 9-year WMAP sky maps using two different methods: linear regression ("T--T plot") and maximum likelihood. We partition the sky into 24 disjoint regions, and evaluate the spectral index for all polarization angles between 0 and $85^{\circ}$ in steps of $5^{\circ}$. Averaging over polarization angles, we derive a mean spectral index of $\beta^{\rm all-sky} = -2.99\pm0.01$. 
Considering the Galactic plane and high-latitude regions separately, we find that the synchrotron spectral index steepens by $\sim0.15$ from low to high latitudes, in agreement with previous studies, with mean spectral indices of $\beta^{\rm plane} = -2.98\pm0.01$ and $\beta^{\rm high-lat} = -3.12\pm0.04$, respectively. In addition, we find a significant longitudinal spectral index variation along the Galactic plane that is well modelled by an offset sinusoid, $\beta(l) = -2.85 + 0.17\sin(2l - 90^{\circ})$. Finally, we study synchrotron emission in the BICEP2 field, in an attempt to understand whether the recently claimed detection of large-scale B-mode polarization could be explained in terms of synchrotron contamination. We estimate that the standard deviation of synchrotron emission in this field is $2.4\,\mu K$ at K-band on the angular scales of interest. Adopting a spectral index of $\beta=-3.12$, typical of high Galactic latitudes, this corresponds to a synchrotron amplitude of $0.011\,\mu K$ at 150 GHz, equivalent to a primordial B-mode signal of $r=0.003$. The flattest index allowed by the data in this very low signal-to-noise region is $\beta=-2.5$, for which the projected amplitude is $0.036\,\mu K$. Thus, under the assumption of a straight power-law frequency spectrum, we find that synchrotron emission can, in the worst-case scenario, account for at most 20% of the reported BICEP2 signal. [16] Title: An Analysis of the SEEDS High-Contrast Exoplanet Survey: Massive Planets or Low-Mass Brown Dwarfs? Comments: 21 pages, 5 figures, submitted to ApJ Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Earth and Planetary Astrophysics (astro-ph.EP) We conduct a statistical analysis of a combined sample of direct imaging data, totalling nearly 250 stars observed by HiCIAO on the Subaru Telescope, NIRI on Gemini North, and NICI on Gemini South. 
The stars cover a wide range of ages and spectral types, and include five detections (kap And b, two ~60 M_J brown dwarf companions in the Pleiades, PZ Tel B, and CD-35 2722 B). We conduct a uniform, Bayesian analysis of the ages of our entire sample, using both membership in a kinematic moving group and activity/rotation age indicators, to obtain posterior age distributions. We then present a new statistical method for computing the likelihood of a substellar distribution function. By performing most integrals analytically, we achieve an enormous speedup over brute-force Monte Carlo. We use this method to place upper limits on the maximum semimajor axis beyond which the distribution function for radial-velocity planets cannot extend, finding model-dependent values of ~30--100 AU. Finally, we treat our entire substellar sample together, modeling it as a single power-law distribution. After including GJ 758 B and GJ 504 b, two other HiCIAO detections, a distribution $p(M, a) \propto M^{-0.7 \pm 0.6} a^{-0.8 \pm 0.4}$ (1-sigma errors), from massive brown dwarfs down to a theoretically motivated cutoff at ~5 M_J, provides an adequate fit to our data. This suggests that many of the known directly imaged exoplanets, including most (if not all) of the low-mass companions in our sample, formed by fragmentation in a cloud or disk and represent the low-mass tail of the brown dwarfs. [17] Title: X-rays from Magnetically Confined Wind Shocks: Effect of Cooling-Regulated Shock Retreat Comments: Accepted for publication in MNRAS Subjects: Solar and Stellar Astrophysics (astro-ph.SR) We use 2D MHD simulations to examine the effects of radiative cooling and inverse Compton (IC) cooling on X-ray emission from magnetically confined wind shocks (MCWS) in magnetic massive stars with radiatively driven stellar winds. 
For the standard dependence of mass-loss rate on luminosity, $\dot{M} \sim L^{1.7}$, the scaling of IC cooling with $L$ and of radiative cooling with $\dot{M}$ means that IC cooling becomes formally more important for lower-luminosity stars. However, because the sense of the trends is similar, we find the overall effect of including IC cooling is quite modest. More significantly, for stars with high enough mass loss to keep the shocks radiative, the MHD simulations indicate a linear scaling of X-ray luminosity with mass-loss rate; but for lower-luminosity stars with weak winds, X-ray emission is reduced and softened by a {\em shock retreat} resulting from the larger post-shock cooling length, which within the fixed length of a closed magnetic loop forces the shock back to lower pre-shock wind speeds. A semi-analytic scaling analysis that accounts both for the wind magnetic confinement and for this shock retreat yields X-ray luminosities with a similar scaling trend, but values a factor of a few higher, compared to time averages computed from the MHD simulations. The simulation and scaling results here thus provide a good basis for interpreting available X-ray observations from the growing list of massive stars with confirmed large-scale magnetic fields. [18] Title: Prospects for Detecting Oxygen, Water, and Chlorophyll in an Exo-Earth Comments: 6 pages, 6 figures, submitted to PNAS Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Instrumentation and Methods for Astrophysics (astro-ph.IM) The goal of finding and characterizing nearby Earth-like planets is driving many NASA high-contrast flagship mission concepts, the latest of which is known as the Advanced Technology Large-Aperture Space Telescope (ATLAST). 
In this article, we calculate the optimal spectral resolution $R=\lambda/\delta\lambda$ and the minimum signal-to-noise ratio per spectral bin (SNR), two central design requirements for a high-contrast space mission, needed to detect signatures of water, oxygen, and chlorophyll on an Earth twin. We first develop a minimally parametric model and demonstrate its ability to fit model Earth spectra; this allows us to measure the statistical evidence for each component's presence. We find that water is the most straightforward to detect, requiring a resolving power R>~20, while the optimal resolving power for oxygen is likely to be closer to R=150, somewhat higher than the canonical value in the literature. At these resolutions, detecting oxygen will require ~3 times the SNR needed for water. Chlorophyll, should it also be used by alien plants in photosynthesis, requires ~6 times the SNR needed for oxygen on an Earth twin, falling to oxygen-like levels of detectability only for very low cloud cover and/or a very large vegetation covering fraction. This suggests designing a mission for sensitivity to oxygen and adopting a multi-tiered observing strategy: first targeting water, then oxygen on the more favorable planets, and finally chlorophyll on only the most promising worlds. [19] Title: The HNC/HCN Ratio in Star-Forming Regions Comments: 14 pages, 11 figures, Accepted for publication in ApJ Subjects: Astrophysics of Galaxies (astro-ph.GA) HNC and HCN, typically used as dense gas tracers in molecular clouds, are a pair of isomers with great potential as a temperature probe because of temperature-dependent, isomer-specific formation and destruction pathways. Previous observations of the HNC/HCN abundance ratio show that the ratio decreases with increasing temperature, something that standard astrochemical models cannot reproduce. 
We have undertaken a detailed parameter study of which environmental characteristics and chemical reactions affect the HNC/HCN ratio and can thus contribute to the observed dependence. Using existing gas and gas-grain models updated with new reactions and reaction barriers, we find that in static models the H + HNC gas-phase reaction regulates the HNC/HCN ratio under all conditions, except at very early times. We quantitatively constrain the combinations of H abundance and H + HNC reaction barrier that can explain the observed HNC/HCN temperature dependence and discuss the implications in light of new quantum chemical calculations. In warm-up models, gas-grain chemistry contributes significantly to the predicted HNC/HCN ratio, and understanding the dynamics of star formation is therefore key to modelling the HNC/HCN system. [20] Title: A New Method for Measuring Metallicities of Young Super Star Clusters Comments: 6 pages, 6 figures. Accepted for publication in ApJ Subjects: Astrophysics of Galaxies (astro-ph.GA) We demonstrate how the metallicities of young super star clusters can be measured using novel spectroscopic techniques in the J-band. The near-infrared flux of super star clusters older than ~6 Myr is dominated by tens to hundreds of red supergiant stars. Our technique is designed to harness the integrated light of that population and produces accurate metallicities for new observations in galaxies above (M83) and below (NGC 6946) solar metallicity. In M83 we find [Z] = +0.28 +/- 0.14 dex using a moderate-resolution (R~3500) J-band spectrum, and in NGC 6946 we report [Z] = -0.32 +/- 0.20 dex from a low-resolution spectrum of R~1800. Recently commissioned low-resolution multiplexed spectrographs on the VLT (KMOS) and Keck (MOSFIRE) will allow accurate measurements of super star cluster metallicities across the disks of star-forming galaxies out to distances of 70 Mpc with single-night observing campaigns using the method presented in this letter. 
[21] Title: Identification of Two Radio Loud Narrow Line Seyfert 1 Galaxies at Gamma-Ray Energies Comments: 18 pages, 3 tables, 2 figures Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) We report the discovery of gamma-ray emission from two radio-loud narrow-line Seyfert 1 galaxies using data from Fermi/LAT: J0804+3853 (z = 0.211) and J1443+4725 (z = 0.502). The objects were discovered through singular, separate, and brief brightening events of a few months' duration during the first 66 months of Fermi observations. Also presented are our efforts thus far to monitor the optical photopolarimetric variability of these targets. This work brings the total number of objects of this class identified at gamma-ray energies from seven to nine, representing a significant increase in this population of AGN. These findings have strong implications for our understanding of systems with relativistic jets. [22] Title: Planets Transiting Non-Eclipsing Binaries Comments: 19 pages, 14 figures, submitted November 2013 to A&A Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Solar and Stellar Astrophysics (astro-ph.SR) The majority of binary stars do not eclipse. Current searches for transiting circumbinary planets concentrate on eclipsing binaries, and are therefore restricted to a small fraction of potential hosts. We investigate the concept of finding planets transiting non-eclipsing binaries, whose geometry would require mutually inclined planes. Using an N-body code, we explore how the number and sequence of transits vary as functions of observing time and orbital parameters. The concept is then generalised with a suite of simulated circumbinary systems. Binaries are constructed from radial-velocity surveys of the solar neighbourhood. They are then populated with orbiting gas giants, drawn from a range of distributions. 
The binary population is shown to be compatible with the Kepler eclipsing binary catalogue, indicating that the properties of binaries may be as universal as the initial mass function. These synthetic systems produce transiting circumbinary planets on both eclipsing and non-eclipsing binaries. Simulated planets transiting eclipsing binaries are compared with published Kepler detections. We find 1) that planets transiting non-eclipsing binaries probably exist in the Kepler data, 2) that observational biases alone cannot account for the observed over-density of circumbinary planets near the stability limit, implying a physical pile-up, and 3) that the distributions of gas giants orbiting single and binary stars are likely different. Estimating the frequency of circumbinary planets is degenerate with the spread in mutual inclination. Only a minimum occurrence rate can be produced, which we find to be compatible with 9%. Searching for inclined circumbinary planets may significantly increase the population of known objects and will test our conclusions. Their existence, or absence, will reveal the true occurrence rate and help develop circumbinary planet formation theories. [23] Title: Observational discrimination of Eddington-inspired Born-Infeld gravity from general relativity Authors: Hajime Sotani Comments: accepted for publication in PRD Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Nuclear Theory (nucl-th) Direct observations of neutron stars could reveal an imprint of modified gravity. However, it is generally difficult to resolve the degeneracy due to the uncertainties in the equation of state (EOS) of neutron star matter and in gravitational theories. In this paper, we succeed in finding an observational means to distinguish Eddington-inspired Born-Infeld (EiBI) gravity from general relativity. 
We show that the radii of neutron stars with $0.5M_{\odot}$ are strongly correlated with the neutron skin thickness of ${}^{208}$Pb independently of the EOS, while this correlation depends on the coupling constant in EiBI. As a result, via direct observations of the radius of a $0.5M_{\odot}$ neutron star and measurements of the neutron skin thickness of ${}^{208}$Pb in terrestrial experiments, one could not only discriminate EiBI from general relativity but also estimate the coupling constant in EiBI. [24] Title: Radial Velocities of Stars with Multiple Co-orbital Planets Comments: 16 pages, 3 figures, 1 table Subjects: Earth and Planetary Astrophysics (astro-ph.EP) To date, well over a thousand planets have been discovered orbiting other stars, hundreds of them in multi-planet systems. Most of these exoplanets have been detected by either the transit method or the radial velocity method, rather than by other methods such as astrometry or direct imaging. Both the radial velocity and astrometric methods rely upon the reflex motion of the parent star induced by the gravitational attraction of its planets. However, this reflex motion is subject to misinterpretation when a star has two or more planets with the same orbital period. Such co-orbital planets may effectively "hide" from detection by current algorithms. In principle, any number of planets can share the same orbit; the case where they all have the same mass has been studied most. Salo and Yoder (A&A 205, 309--327, 1988) have shown that more than 8 planets of equal mass sharing a circular orbit must be equally spaced for dynamical stability, while fewer than 7 equal-mass planets are stable only in a configuration where all of the planets remain on the same side of their parent star. For 7 or 8 equal-mass planets, both configurations are stable. By symmetry, it is clear that the equally spaced systems produce no reflex motion or radial velocity signal at all in their parent stars. 
This could lead to their being overlooked entirely, unless they happen to be detected by the transit method. It is equally clear that the lopsided systems produce a greater radial velocity signal than a single such planet would, but a smaller signal than if all of the planets were combined into one. This could seriously mislead estimates of exoplanet masses and densities. Transit data and ellipsoidal (tidal) brightness variations in such systems are also subject to misinterpretation. This behavior is also representative of more natural systems, with co-orbital planets of different masses. [25] Title: Dust Production Factories in the Early Universe: Formation of Carbon Grains in Red-supergiant Winds of Very Massive Population III Stars Comments: 1 table, 4 figures, accepted for publication in the ApJ Letters Subjects: Solar and Stellar Astrophysics (astro-ph.SR) We investigate the formation of dust in a stellar wind during the red-supergiant (RSG) phase of a very massive Population III star with a zero-age main sequence mass of 500 M_sun. We show that, in a carbon-rich wind with a constant velocity, carbon grains can form with a lognormal-like size distribution, and that all of the carbon available for dust formation finally condenses into dust for wide ranges of the mass-loss rate ((0.1-3)x10^{-3} M_sun yr^{-1}) and wind velocity (1-100 km s^{-1}). We also find that the acceleration of the wind driven by newly formed dust suppresses grain growth but still allows more than half of the gas-phase carbon to be finally locked up in dust grains. These results indicate that at most 1.7 M_sun of carbon grains can form in total during the RSG phase of a 500 M_sun Population III star. Such a high dust yield could make very massive primordial stars important sources of dust at very early epochs of the universe if the initial mass function of Population III stars was top-heavy. 
We also briefly discuss a new formation scenario for carbon-rich ultra-metal-poor stars considering the feedback from very massive Population III stars. [26] Title: Quantifying the impact of future Sandage-Loeb test data on dark energy constraints Comments: 5 pages, 3 figures, 1 table Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th) The Sandage-Loeb (SL) test is a unique method to probe dark energy in the "redshift desert" of $2\lesssim z\lesssim 5$, and thus it provides an important supplement to the other dark energy probes. It is therefore of great importance to quantify how future SL test data would impact the dark energy constraints. To avoid potential inconsistencies in the data, we use the best-fitting model based on the other geometric measurements as the fiducial model to produce 30 mock SL test data points. The 10-yr, 20-yr, and 30-yr observations of the SL test are analyzed and compared in detail. We show that, compared to the current combined data of type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background, and the Hubble constant, the 30-yr observation of the SL test could improve the constraint on $\Omega_m$ by about 80% and the constraint on $w$ by about 25%. Furthermore, the SL test can also improve the measurement of a possible direct interaction between dark energy and dark matter. We show that the SL test 30-yr data could improve the constraint on $\gamma$ by about 30% and 10% for the $Q=\gamma H\rho_c$ and $Q=\gamma H\rho_{de}$ models, respectively. 
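The observable behind the mock data in the abstract above is the redshift drift of comoving sources. For flat $\Lambda$CDM the standard expression is $\Delta z = [(1+z)H_0 - H(z)]\,\Delta t$; a minimal sketch follows (our own illustration, not the authors' mock-data machinery, and all example parameter values are assumptions):

```python
import math

def redshift_drift(z, h0_per_s, omega_m, dt_s):
    """Sandage-Loeb redshift drift over an interval dt_s for flat LambdaCDM:
    dz = [(1+z) H0 - H(z)] dt, with H(z) = H0 sqrt(Om (1+z)^3 + 1 - Om).
    h0_per_s is H0 in 1/s, dt_s the observing baseline in seconds."""
    hz = h0_per_s * math.sqrt(omega_m * (1.0 + z) ** 3 + 1.0 - omega_m)
    return ((1.0 + z) * h0_per_s - hz) * dt_s
```

The sign change of the drift (positive at low redshift, negative in the matter-dominated "redshift desert") is part of what makes the high-$z$ measurement a distinctive probe.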
[27] Title: Results from the Wilkinson Microwave Anisotropy Probe Comments: 24 pages, 7 figures, invited review for Special Section "CMB Cosmology" of Progress of Theoretical and Experimental Physics (PTEP) Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) The Wilkinson Microwave Anisotropy Probe (WMAP) mapped the distribution of temperature and polarization over the entire sky in five microwave frequency bands. These full-sky maps were used to obtain measurements of the temperature and polarization anisotropy of the cosmic microwave background with unprecedented accuracy and precision. The analysis of two-point correlation functions of the temperature and polarization data gives determinations of fundamental cosmological parameters such as the age and composition of the universe, as well as the key parameters describing the physics of inflation, which is further constrained by three-point correlation functions. WMAP observations alone reduced the flat $\Lambda$ cold dark matter ($\Lambda$CDM) cosmological model's six-dimensional parameter volume by a factor of >68,000 compared with pre-WMAP measurements. The WMAP observations (sometimes in combination with other astrophysical probes) convincingly show the existence of non-baryonic dark matter, the cosmic neutrino background, the flatness of the spatial geometry of the universe, a deviation from a scale-invariant spectrum of initial scalar fluctuations, and that the current universe is undergoing an accelerated expansion. The WMAP observations provide the strongest support yet for inflation; namely, the structures we see in the universe originate from quantum fluctuations generated during inflation. [28] Title: The many sides of RCW 86: a type Ia supernova remnant evolving in its progenitor's wind bubble Comments: Accepted for publication in MNRAS. 
16 pages, 13 figures Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) We present the results of a detailed investigation of the Galactic supernova remnant RCW 86 using the XMM-Newton X-ray telescope. RCW 86 is the probable remnant of SN 185 (the supernova of A.D. 185), which likely exploded inside a wind-blown cavity. We use the XMM-Newton Reflection Grating Spectrometer (RGS) to derive precise temperatures and ionization ages of the plasma, which are an indication of the interaction history of the remnant with the presumed cavity. We find that the spectra are well fitted by two non-equilibrium ionization models, which enables us to constrain the properties of the ejecta and interstellar matter plasma. Furthermore, we performed a principal component analysis on EPIC MOS and pn data to find regions with particular spectral properties. We present evidence that the shocked ejecta, emitting Fe-K and Si line emission, are confined to a shell of approximately 2 pc width with an oblate spheroidal morphology. Using detailed hydrodynamical simulations, we show that the general dynamical and emission properties at different portions of the remnant can be well reproduced by a type Ia supernova that exploded in a non-spherically symmetric wind-blown cavity. We also show that this cavity can be created using general wind properties for a single-degenerate system. Our data and simulations provide further evidence that RCW 86 is indeed the remnant of SN 185, and is the likely result of a type Ia explosion of single-degenerate origin. [29] Title: Planet Traps and First Planets: the Critical Metallicity for Gas Giant Formation Comments: 10 pages, 3 figures, 5 tables, accepted for publication in ApJ Subjects: Earth and Planetary Astrophysics (astro-ph.EP) The ubiquity of planets poses an interesting question: when were the first planets formed in galaxies? We investigate this problem by adopting a theoretical model developed for understanding the statistical properties of exoplanets. 
Our model combines planet traps with the standard core accretion scenario, in which the efficiency of forming planetary cores directly relates to the dust density in disks, or the metallicity ([Fe/H]). We statistically compute planet formation frequencies (PFFs) as well as the orbital radius ($<R_{rapid}>$) within which gas accretion becomes efficient enough to form Jovian planets. Three characteristic exoplanetary populations are considered: hot Jupiters, exo-Jupiters densely populated around 1 AU, and low-mass planets such as super-Earths. We explore the behavior of the PFFs as well as $<R_{rapid}>$ for the three different populations as a function of metallicity ($-2 \leq$[Fe/H]$\leq -0.6$). We show that the total PFFs increase steadily with metallicity, which is the direct outcome of the core accretion picture. For the entire range of metallicity considered here, the population of low-mass planets dominates over the Jovian planets. The Jovian planets contribute to the PFFs above [Fe/H]$\simeq-1$. We find that the hot Jupiters form at lower metallicities than the exo-Jupiters. This arises from the radially inward transport of planetary cores by their host traps, which is more effective in lower-metallicity disks due to the slower growth of the cores. The PFFs for the exo-Jupiters exceed those for the hot Jupiters around [Fe/H]$\simeq-0.7$. Finally, we show that the critical metallicity for forming Jovian planets is [Fe/H]$\simeq-1.2$, which is evaluated by comparing the values of $<R_{rapid}>$ between the hot Jupiters and the low-mass planets. This comparison is intrinsically linked to the different gas accretion efficiencies of the two populations. [30] Title: Collective outflow from a small multiple stellar system Comments: ApJ in press, movies: this http URL Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Astrophysics of Galaxies (astro-ph.GA) The formation of high-mass stars is usually accompanied by powerful protostellar outflows. 
Such high-mass outflows are not simply scaled-up versions of their lower-mass counterparts, since observations suggest that the degree of collimation degrades with stellar mass. Theoretically, the origin of massive outflows remains open to question because radiative feedback and fragmentation of the accretion flow around the most massive stars, with M > 15 M_sun, may impede the driving of magnetic disk winds. Here we present a three-dimensional simulation of the early stages of core fragmentation and massive star formation that includes a subgrid-scale model for protostellar outflows. We find that stars that form in a common accretion flow tend to have aligned outflow axes, so that the individual jets of multiple stars can combine to form a collective outflow. We compare our simulation to observations using synthetic H_2 and CO observations and find that the morphology and kinematics of such a collective outflow resemble some observed massive outflows, such as Cepheus A and DR 21. Finally, we compare physical quantities derived from simulated observations of our models to the actual values in the models to examine the reliability of standard methods for deriving physical quantities, demonstrating that those methods indeed recover the actual values to within a factor of 2-3. [31] Title: Monte Carlo simulations of post-common-envelope white dwarf + main sequence binaries: comparison with the SDSS DR7 observed sample Comments: 15 pages, 7 figures, accepted for publication in A&A Subjects: Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR) Detached white dwarf + main sequence (WD+MS) systems represent the simplest population of post-common-envelope binaries (PCEBs). Since the ensemble properties of this population carry important information about the characteristics of the common-envelope (CE) phase, they deserve close scrutiny. 
However, most population synthesis studies do not fully take into account the observational selection biases of the samples used for comparison with the theoretical simulations. Here we present the results of a set of detailed Monte Carlo simulations of the population of WD+MS binaries in the Sloan Digital Sky Survey (SDSS) Data Release 7. We used up-to-date stellar evolutionary models, a complete treatment of the Roche-lobe overflow episode, and a full implementation of the orbital evolution of the binary systems. Moreover, in our treatment we took into account the selection criteria and all the known observational biases. Our population synthesis study allowed us to make a meaningful comparison with the available observational data. In particular, we examined the CE efficiency, the possible contribution of internal energy, and the initial mass ratio distribution (IMRD) of the binary systems. We found that our simulations correctly reproduce the properties of the observed distribution of WD+MS PCEBs. In particular, we found that once the observational biases are carefully taken into account, the distributions of orbital periods and of the masses of the WD and MS stars can be correctly reproduced for several choices of the free parameters and different IMRDs, although models in which a moderate fraction (<=10%) of the internal energy is used to eject the CE and in which a low value of the CE efficiency is adopted (<=0.3) seem to fit the observational data better. We also found that systems with He-core WDs are over-represented in the observed sample, due to selection effects. [32] Title: CFHTLenS: Cosmological constraints from a combination of cosmic shear two-point and three-point correlations Comments: Accepted by MNRAS. 21 pages, 14 figures, 8 tables. The data is available at this http URL . 
The software used for the cosmological analysis can be downloaded from this http URL Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) Higher-order, non-Gaussian aspects of the large-scale structure carry valuable information on structure formation and cosmology, which is complementary to second-order statistics. In this work we measure second- and third-order weak-lensing aperture-mass moments from CFHTLenS and combine those with CMB anisotropy probes. The third moment is measured with a significance of $2\sigma$. The combined constraint on $\Sigma_8 = \sigma_8 (\Omega_{\rm m}/0.27)^\alpha$ is improved by 10% in comparison to the second-order-only analysis, and the allowed ranges for $\Omega_{\rm m}$ and $\sigma_8$ are substantially reduced. Including general triangles of the lensing bispectrum yields tighter constraints than probing mainly equilateral triangles. Second- and third-order CFHTLenS lensing measurements improve Planck CMB constraints on $\Omega_{\rm m}$ and $\sigma_8$ by 26% for flat $\Lambda$CDM. For a model with free curvature, the joint CFHTLenS-Planck result is $\Omega_{\rm m} = 0.28 \pm 0.02$ (68% confidence), an improvement of 43% compared to Planck alone. We test how our results are potentially subject to three astrophysical sources of contamination: source-lens clustering, the intrinsic alignment of galaxy shapes, and baryonic effects. We explore future limitations of the cosmological use of third-order weak lensing, such as the nonlinear model and the Gaussianity of the likelihood function. 
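The degeneracy parameter $\Sigma_8$ used in the abstract above is a simple power-law combination of $\sigma_8$ and $\Omega_{\rm m}$. A one-line helper (illustrative only; $\alpha$ is left free since its fitted value is not quoted here, and the 0.6 in the example is an assumed placeholder):

```python
def sigma_8_combined(sigma8, omega_m, alpha):
    """Sigma_8 = sigma_8 * (Omega_m / 0.27)**alpha, the parameter
    combination best constrained along the cosmic-shear degeneracy
    direction; the pivot 0.27 is the one quoted in the abstract."""
    return sigma8 * (omega_m / 0.27) ** alpha
```

At the pivot $\Omega_{\rm m}=0.27$ the combination reduces to $\sigma_8$ itself, which is why the pivot is chosen near the best-fit matter density.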
[33] Title: Ion-molecule reactions involving HCO$^+$ and N$_2$H$^+$: Isotopologue equilibria from new theoretical calculations and consequences for interstellar isotope fractionation Comments: to appear in A&A with 14 pages, 2 figures, and 10 tables Subjects: Astrophysics of Galaxies (astro-ph.GA); Chemical Physics (physics.chem-ph) $Aims$: We revisit with new augmented accuracy the theoretical dynamics of basic isotope exchange reactions involved in the $^{12}$C/$^{13}$C, $^{16}$O/$^{18}$O, and $^{14}$N/$^{15}$N balance because these reactions have already been studied experimentally in great detail. $Methods$: Electronic structure methods were employed to explore potential energy surfaces, full-dimensional rovibrational calculations to compute rovibrational energy levels that are numerically exact, and chemical network models to estimate the abundance ratios under interstellar conditions. $Results$: New exothermicities, derived for HCO$^+$ reacting with CO, provide rate coefficients markedly different from previous theoretical values in particular at low temperatures, resulting in new abundance ratios relevant for carbon chemistry networks. In concrete terms, we obtain a reduction in the abundance of H$^{12}$C$^{18}$O$^+$ and an increase in the abundance of H$^{13}$C$^{16}$O$^+$ and D$^{13}$C$^{16}$O$^+$. In all studied cases, the reaction of the ion with a neutral polarizable molecule proceeds through the intermediate proton-bound complex found to be very stable. For the complexes OCH$^+$...CO, OCH$^+$...OC, COHOC$^+$, N$_2$...HCO$^+$, N$_2$H$^+$...OC, and N$_2$HN$_2^+$, we also calculated vibrational frequencies and dissociation energies. $Conclusions$: The linear proton-bound complexes possess sizeable dipole moments, which may facilitate their detection. 
[34] Title: Inducing chaos by breaking axial symmetry in a black hole magnetosphere Comments: 13 pages, 5 figures; accepted to ApJ Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) While the motion of particles near a rotating, electrically neutral (Kerr) and a charged (Kerr-Newman) black hole is always strictly regular, a perturbation to the gravitational or the electromagnetic field generally leads to chaos. Transition from regular to chaotic dynamics is relatively gradual if the system preserves axial symmetry, whereas non-axisymmetry induces chaos more efficiently. Here we study the development of chaos in an oblique (electro-vacuum) magnetosphere of a magnetized black hole. Besides the strong gravity of the massive source represented by the Kerr metric we consider the presence of a weak, ordered large-scale magnetic field. An axially symmetric model consisting of a rotating black hole embedded in an aligned magnetic field is generalized by allowing an oblique direction of the field having a general inclination with respect to the rotation axis of the system. Inclination of the field acts as an additional perturbation to the motion of charged particles as it breaks the axial symmetry of the system and cancels the related integral of motion. The axial component of angular momentum is no longer conserved and the resulting system thus has three degrees of freedom. Our primary concern within this contribution is to find out how sensitive the system of bound particles is to the inclination of the field. We employ the method of the maximal Lyapunov exponent to distinguish between regular and chaotic orbits and to quantify their chaoticity. We find that even a small misalignment induces chaotic motion. [35] Title: Cross-Correlation of Cosmic Shear and Extragalactic Gamma-ray Background: Constraints on the Dark Matter Annihilation Cross-Section Comments: 32 pages, 8 figures, submitted to Phys. Rev. 
D Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) We present the first measurement of the cross-correlation of weak gravitational lensing and the extragalactic gamma-ray background emission using data from the Canada-France-Hawaii Lensing Survey and the Fermi Large Area Telescope. The cross-correlation is a powerful probe of signatures of dark matter annihilation, because both cosmic shear and gamma-ray emission originate directly from the same DM distribution in the universe, and it can be used to derive constraints on dark matter annihilation cross-section. We show that the measured lensing-gamma correlation is consistent with a null signal. Comparing the result to theoretical predictions, we exclude dark matter annihilation cross sections of <sigma v> =10^{-24}-10^{-25} cm^3 s^-1 for a 100 GeV dark matter. If dark matter halos exist down to the mass scale of 10^-6 M_sun, we are able to place constraints on the thermal cross sections <sigma v> ~ 3 x 10^{-26} cm^3 s^-1 for a 10 GeV dark matter annihilation into tau^{+} tau^{-}. Future gravitational lensing surveys will increase sensitivity to probe annihilation cross sections of <sigma v> ~ 3 x 10^{-26} cm^3 s^-1 even for a 100 GeV dark matter. Detailed modeling of the contributions from astrophysical sources to the cross correlation signal could further improve the constraints by ~ 40-70 %. [36] Title: VLBI Observations of H2O Maser Annual Parallax and Proper Motion in IRAS 20143+3634: Reflection on the Galactic Constants Comments: Intended for PASJ VERA special issue Subjects: Astrophysics of Galaxies (astro-ph.GA) We report the results of VLBI observations of H$_{2}$O masers in the IRAS 20143+3634 star forming region using VERA (VLBI Exploration of Radio Astronomy). By tracking masers for a period of over two years we measured a trigonometric parallax of $\pi = 0.367 \pm 0.037$ mas, corresponding to a source distance of $D = 2.72 ^{+0.31}_{-0.25}$ kpc and placing it in the Local spiral arm. 
Our trigonometric distance is just 60% of the previous estimation based on radial velocity, significantly impacting the astrophysics of the source. We measured proper motions of $-2.99 \pm 0.16$ mas yr$^{-1}$ and $-4.37 \pm 0.43$ mas yr$^{-1}$ in R.A. and Decl. respectively, which were used to estimate the peculiar motion of the source as $(U_{s},V_{s},W_{s}) = (-0.9 \pm 2.9, -8.5 \pm 1.6, +8.0 \pm 4.3)$ km s$^{-1}$ for $R_0=8$ kpc and $\Theta_0=221$ km s$^{-1}$, and $(U_{s},V_{s},W_{s}) = (-1.0 \pm 2.9, -9.3 \pm 1.5, +8.0 \pm 4.3)$ km s$^{-1}$ for $R_0=8.5$ kpc and $\Theta_0=235$ km s$^{-1}$. IRAS 20143+3634 was found to be located near the tangent point in the Cygnus direction. Using our observations we derived the angular velocity of Galactic rotation of the local standard of rest (LSR), $\Omega_{0} = 27.3 \pm 1.6$ km s$^{-1}$ kpc$^{-1}$, which is consistent with previous values derived using VLBI astrometry of SFRs at the tangent points and Solar circle. It is higher than the value recommended by the IAU of $\Omega_{0} = 25.9$ km s$^{-1}$ kpc$^{-1}$ which was calculated using the Galactocentric distance of the Sun and circular velocity of the LSR. [37] Title: Discovery of new magnetic early-B stars within the MiMeS HARPSpol survey Comments: 19 pages, 8 figures, accepted for publication in A&A Subjects: Solar and Stellar Astrophysics (astro-ph.SR) To understand the origin of the magnetic fields in massive stars as well as their impact on stellar internal structure, evolution, and circumstellar environment, within the MiMeS project, we searched for magnetic objects among a large sample of massive stars, and build a sub-sample for in-depth follow-up studies required to test the models and theories of fossil field origins, magnetic wind confinement and magnetospheric properties, and magnetic star evolution. 
We obtained high-resolution spectropolarimetric observations of a large number of OB stars thanks to three large programs allocated on the high-resolution spectropolarimeters ESPaDOnS, Narval, and the polarimetric module HARPSpol of the HARPS spectrograph. We report here on the methods and first analysis of the HARPSpol magnetic detections. We identified the magnetic stars using a multi-line analysis technique. Then, when possible, we monitored the new discoveries to derive their rotation periods, which are critical for follow-up and magnetic mapping studies. We also performed a first-look analysis of their spectra and identified obvious spectral anomalies (e.g., abundance peculiarities, Halpha emission), which are also of interest for future studies. In this paper, we focus on eight of the 11 stars in which we discovered or confirmed a magnetic field from the HARPSpol LP sample (the remaining three were published in a previous paper). Seven of these are early-type Bp stars, while the last is the Ap companion of a normal early B-type star. We report obvious spectral and multiplicity properties, as well as our measurements of their longitudinal field strengths, and their rotation periods when we are able to derive them. We also discuss the presence or absence of Halpha emission with respect to the theory of centrifugally-supported magnetospheres. (Abridged) [38] Title: The COS/UVES Absorption Survey of the Magellanic Stream. III: Ionization, Total Mass, and Inflow Rate onto the Milky Way Comments: Accepted for publication in ApJ, 32 pages, 7 figures, 3 tables, Figure 1 shown at low resolution to reduce file size Subjects: Astrophysics of Galaxies (astro-ph.GA) Dynamic interactions between the two Magellanic Clouds have flung large quantities of gas into the halo of the Milky Way, creating the Magellanic Stream, the Magellanic Bridge, and the Leading Arm (collectively referred to as the Magellanic System).
In this third paper of a series studying the Magellanic gas in absorption, we analyze the gas ionization level using a sample of 69 Hubble Space Telescope/Cosmic Origins Spectrograph sightlines that pass through or within 30 degrees of the 21 cm-emitting regions. We find that 81% (56/69) of the sightlines show UV absorption at Magellanic velocities, indicating that the total cross section of the Magellanic System is ~11 000 square degrees, or around a quarter of the entire sky. Using observations of the Si III/Si II ratio together with Cloudy photoionization modeling, we calculate that the total mass (atomic plus ionized) of the Magellanic System is ~2.0 billion solar masses, with the ionized gas contributing over twice as much mass as the atomic gas. This is larger than the current-day interstellar H I mass of both Magellanic Clouds combined, indicating that they have lost most of their initial gas mass. If the gas in the Magellanic System survives to reach the Galactic disk over its inflow time of ~0.5-1.5 Gyr, it will represent an average inflow rate of ~3.7-6.7 solar masses per year, potentially raising the Galactic star formation rate. However, multiple signs of an evaporative interaction with the hot Galactic corona indicate that the Stream may not survive its journey to the disk fully intact, and will instead add material to (and cool) the corona. [39] Title: Cloud angular momentum and effective viscosity in global SPH simulations with feedback Subjects: Astrophysics of Galaxies (astro-ph.GA) We examine simulations of isolated galaxies to analyse the effects of localised feedback on the formation and evolution of molecular clouds. Feedback contributes to turbulence and the destruction of clouds, leading to a population of clouds that is younger, less massive, and with more retrograde rotation. 
We investigate the evolution of clouds as they interact with each other and the diffuse ISM, and determine that the role of cloud interactions differs strongly with the presence of feedback: in models with feedback, mergers between clouds decrease the number-fraction of prograde clouds, while in models without feedback, scattering events increase the prograde fraction. We also produce an estimate of the viscous time-scale due to cloud-cloud collisions, which increases with the inclusion of feedback (~20 Gyr vs ~10 Gyr), but is still much smaller than previous estimates (~1000 Gyr); although collisions become more frequent with feedback, less energy is lost in each collision. [40] Title: Variable Speed of Light Cosmology, Primordial Fluctuations and BICEP2 Authors: J. W. Moffat Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) A variable speed of light cosmology is formulated in which the speed of light is described in the action by a dynamical field $\phi$. The initial value problems of cosmology: the horizon and flatness problems are solved. The model predicts primordial scalar and tensor fluctuation spectral indices $n_s=0.96$ and $n_t=- 0.04$, respectively. The BICEP2 observation of $r=0.2$ yields $r/n_t=-5$ which is close to the single field inflationary consistency condition $r/n_t=-8$. [41] Title: Fe I Oscillator Strengths for the Gaia-ESO Survey Comments: Accepted for publication in Mon. Not. R. Astron. Soc Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Instrumentation and Methods for Astrophysics (astro-ph.IM) The Gaia-ESO Public Spectroscopic Survey (GES) is conducting a large-scale study of multi-element chemical abundances of some 100 000 stars in the Milky Way with the ultimate aim of quantifying the formation history and evolution of young, mature and ancient Galactic populations. 
However, in preparing for the analysis of GES spectra, it has been noted that atomic oscillator strengths of important Fe I lines required to correctly model stellar line intensities are missing from the atomic database. Here, we present new experimental oscillator strengths derived from branching fractions and level lifetimes, for 142 transitions of Fe I between 3526 {\AA} and 10864 {\AA}, of which at least 38 are urgently needed by GES. We also assess the impact of these new data on solar spectral synthesis and demonstrate that for 36 lines that appear unblended in the Sun, Fe abundance measurements yield a small line-by-line scatter (0.08 dex) with a mean abundance of 7.44 dex in good agreement with recent publications. [42] Title: Model-Independent Measurements of Cosmic Expansion and Growth at z=0.57 Using the Anisotropic Clustering of CMASS Galaxies From the Sloan Digital Sky Survey Data Release 9 Authors: Yun Wang Comments: 7 pages, 4 figures. Submitted Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) We analyze the anisotropic two dimensional galaxy correlation function (2DCF) of the CMASS galaxy sample from the Sloan Digital Sky Survey Data Release 9 (DR9) of the Baryon Oscillation Spectroscopic Survey (BOSS) data. Modeling the 2DCF fully including nonlinear effects and redshift space distortions (RSD) in the scale range of 30 to 120 h^{-1}Mpc, we find H(0.57)r_s(z_d)/c=0.0444 +/- 0.0019, D_A(0.57)/r_s(z_d)=9.01 +/- 0.23, and f_g(0.57)\sigma_8(0.57)=0.474 +/- 0.075, where r_s(z_d) is the sound horizon at the drag epoch computed using a simple integral, and f_g(z) is the growth rate at redshift z, and \sigma_8(z) represents the matter power spectrum normalization on 8h^{-1}Mpc scale at z. We find that the scales larger than 120 h^{-1}Mpc are dominated by noise in the 2DCF analysis, and that the inclusion of scales 30-40 h^{-1}Mpc significantly tightens the RSD measurement. 
Our measurements are consistent with previous results using the same data, but have significantly better precision since we are using all the information from the 2DCF in the scale range of 30 to 120 h^{-1}Mpc. Our measurements have been marginalized over sufficiently wide priors for the relevant parameters; they can be combined with other data to probe dark energy and gravity. [43] Title: Large-scale magnetic fields in Bok globules Comments: 9 pages, 7 figures, accepted by A&A Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Instrumentation and Methods for Astrophysics (astro-ph.IM) Context: The role of magnetic fields in the star formation process is a contentious matter of debate. In particular, no clear observational proof exists of a general influence by magnetic fields during the initial collapse of molecular clouds. Aims: Our aim is to examine magnetic fields and their influence on a wide range of spatial scales in low-mass star-forming regions. Method: We trace the large-scale magnetic field structure on scales of 10^3-10^5 AU in the local environment of Bok globules through optical and near-infrared polarimetry and combine these measurements with existing submillimeter measurements, thereby characterizing the small-scale magnetic field structure on scales of 10^2-10^3 AU. Results: For the first time, we present polarimetric observations in the optical and near-infrared of the three Bok globules B335, CB68, and CB54, combined with archival observations in the submillimeter and the optical. We find a significant polarization signal (P>=2%, P/sigma(P)>3) in the optical and near-infrared for all three globules. Additionally, we detect a connection between the structure on scales of 10^2-10^3 AU to 10^3-10^4 AU for both B335 and CB68. Furthermore, for CB54, we trace ordered polarization vectors on scales of ~10^5 AU. 
We determine a magnetic field orientation that is aligned with the CO outflow in the case of CB54, but nearly perpendicular to the CO outflow for CB68. For B335 we find a change in the magnetic field oriented toward the outflow direction, from the inner core to the outer regions. Conclusion: We find strongly aligned polarization vectors that indicate dominant magnetic fields on a wide range of spatial scales. Cross-lists for Wed, 23 Apr 14 [44]  arXiv:1404.4883 (cross-list from hep-th) [pdf, ps, other] Title: A small cosmological constant from Abelian symmetry breaking Authors: Gianmassimo Tasinato (ICG, Portsmouth) Subjects: High Energy Physics - Theory (hep-th); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph) We investigate some cosmological consequences of a vector-tensor theory where an Abelian symmetry in the vector sector is slightly broken by a mass term and by ghost-free derivative self-interactions. When studying cosmological expansion in the presence of large bare cosmological constant $\Lambda_{cc}$, we find that the theory admits branches of de Sitter solutions in which the scale of the Hubble parameter is inversely proportional to a power of $\Lambda_{cc}$. Hence, a large value of $\Lambda_{cc}$ leads to a small size for the Hubble scale. In an appropriate limit, in which the symmetry breaking parameters are small, the theory recovers the Abelian symmetry plus an additional Galileon symmetry acting on the longitudinal vector polarization. The approximate Galileon symmetry can make the structure of this theory stable at the energy scales we are interested in. We also analyze the dynamics of linearized cosmological fluctuations around the de Sitter solutions, showing that no manifest instabilities arise, and that the transverse vector polarizations become massless around these configurations. 
[45]  arXiv:1404.5446 (cross-list from hep-ph) [pdf, other] Title: Magnetic dark matter for the X-ray line at 3.55 keV Authors: Hyun Min Lee Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO) We consider a decaying magnetic dark matter explaining the X-ray line at 3.55 keV identified recently from XMM-Newton observations. We introduce two singlet Majorana fermions that have almost degenerate masses and fermion-portal couplings with a charged scalar of weak scale mass. In our model, an approximate $Z_2$ symmetry gives rise to a tiny transition magnetic moment between the Majorana fermions at one loop. The heavier Majorana fermion becomes a thermal dark matter due to the sizable fermion-portal coupling to the SM charged fermions. We find the parameter space for the masses of dark matter and charged scalar and their couplings, being consistent with both the relic density and the X-ray line. Various phenomenological constraints on the model are also discussed. [46]  arXiv:1404.5518 (cross-list from hep-ph) [pdf, ps, other] Title: Axion monodromy inflation with sinusoidal corrections Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th) We study the axion monodromy inflation with a non-perturbatively generated sinusoidal term. The potential form is a mixture between the natural inflation and the axion monodromy inflation potentials. The sinusoidal term is subdominant in the potential, but leaves significant effects on the resultant fluctuation generated during inflation. A larger tensor-to-scalar ratio can be obtained in our model. We study two scenarios, single inflation scenario and the double inflation scenario. In the first scenario, the axion monodromy inflation with a sufficient number of e-fold generates a larger tensor-to-scalar ratio about $0.1 - 0.15$ but also a tiny running of spectral index. 
In the second scenario of double inflation, axion monodromy inflation is the first stage and, we assume, another inflation follows. In this case, our model can realize a larger tensor-to-scalar ratio and a large negative running of spectral index simultaneously. Replacements for Wed, 23 Apr 14 [47]  arXiv:1305.7264 (replaced) [pdf, other] Title: The Moving Group Targets of the SEEDS High-Contrast Imaging Survey of Exoplanets and Disks: Results and Observations from the First Three Years Comments: 26 pages, 4 figures, 6 tables. Replaced with published ApJ version Subjects: Earth and Planetary Astrophysics (astro-ph.EP) [48]  arXiv:1308.3197 (replaced) [pdf, ps, other] Title: "Self-absorbed" GeV light-curves of Gamma-Ray Burst afterglows Comments: 8 pages, to appear in the ApJ Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) [49]  arXiv:1308.5705 (replaced) [pdf, other] Title: Light NMSSM Neutralino Dark Matter in the Wake of CDMS II and a 126 GeV Higgs Comments: v2: Several significant revisions, but overall conclusions unchanged.
Matches version published in PRD Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO) [50]  arXiv:1310.4177 (replaced) [pdf, ps, other] Title: Direct measurements of dust attenuation in z~1.5 star-forming galaxies from 3D-HST: Implications for dust geometry and star formation rates Comments: Accepted for publication in the Astrophysical Journal (13 pages, 9 figures) Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) [51]  arXiv:1310.6362 (replaced) [pdf, ps, other] Title: An ALMA Survey of Submillimetre Galaxies in the Extended Chandra Deep Field South: The Far-Infrared Properties of SMGs Authors: Mark Swinbank (1), James Simpson (1), Ian Smail (1), Chris Harrison (1), Jacqueline Hodge (2), Alex Karim (3), Fabian Walter (2), Dave Alexander (1), Niel Brandt (4), Carlos de Breuck (5), Elizabete da Cunha (2), Scott Chapman (6), Kristen Coppin (7), Alice Danielson (1), Helmut Dannerbauer (8), Roberto Decarli (2) Thomas Greve (9), Rob Ivison (10), Kirsten Knudsen (11), Claudia Lagos (5), Eva Schinnerer (2), Alasdair Thomson (1), Julie Wardlow (12), Axel Weiss (3), Paul van der Werf (13) ((1) ICC, Durham, (2) Heidelberg, (3) Bonn, (4) Penn State, (5) ESO, (6) Dalhousie, (7) Hertfordshire, (8) Vienna, (9) UCL, (10) IfA, Edinburgh, (11) Chalmers, (12) UC Irvine, (13) Leiden) Comments: Accepted for publication in MNRAS. 24 pages, 12 figures Journal-ref: Swinbank et al. 2014, MNRAS, 423, 1267 Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) [52]  arXiv:1310.7328 (replaced) [pdf, ps, other] Title: HD 285507b: An Eccentric Hot Jupiter in the Hyades Open Cluster Authors: S. N. Quinn (1 and 4), R. J. White (1), D. W. Latham (2), L. A. Buchhave (2 and 3), G. Torres (2), R. P. Stefanik (2), P. Berlind (2), A. Bieryla (2), M. C. Calkins (2), G. A. Esquerdo (2), G. Fürész (2), J. C. Geary (2), A. H. 
Szentgyorgyi (2) ((1) Georgia State University, (2) Harvard-Smithsonian Center for Astrophysics, (3) Centre for Star and Planet Formation, Natural History Museum of Denmark, University of Copenhagen, (4) NSF Graduate Research Fellow) Comments: 11 pages, 6 figures, 3 tables. Accepted for publication in ApJ. Minor changes from v1: updated to match published version Subjects: Earth and Planetary Astrophysics (astro-ph.EP) [53]  arXiv:1311.1600 (replaced) [pdf, other] Title: Distinguishing between inhomogeneous model and $Λ\textrm{CDM}$ model with the cosmic age method Comments: 10 pages, 2 figures, accepted by Physics Letters B. arXiv admin note: text overlap with arXiv:0911.3852 by other authors Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) [54]  arXiv:1311.5761 (replaced) [pdf, ps, other] Title: Rest-frame ultra-violet spectra of massive galaxies at z=3: evidence of high-velocity outflows Comments: 17 pages, 14 figures, Accepted for publication in A&A Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) [55]  arXiv:1312.0273 (replaced) [pdf, ps, other] Title: Impact of Dark Matter Velocity Distributions on Capture Rates in the Sun Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) [56]  arXiv:1312.0566 (replaced) [pdf, other] Title: Late evolution of relic gravitational waves in coupled dark energy models Comments: 22 pages, 8 figures, article Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) [57]  arXiv:1312.3967 (replaced) [pdf, ps, other] Title: Identifying high-redshift GRBs with RATIR Comments: Accepted by AJ. 
15 pages, 8 figures, 4 tables Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO) [58]  arXiv:1401.1082 (replaced) [pdf, ps, other] Title: PolarBase: a data base of high resolution spectropolarimetric stellar observations Comments: Accepted for publication in PASP Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Instrumentation and Methods for Astrophysics (astro-ph.IM) [59]  arXiv:1401.3965 (replaced) [pdf, ps, other] Title: Mirror dark matter: Cosmology, galaxy structure and direct detection Authors: R. Foot Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph) [60]  arXiv:1401.7031 (replaced) [pdf, ps, other] Title: Probing Quintessence Potential with Future Cosmological Surveys Comments: 36 pages, 10 figures, 6 tables, minor changes to the footnote and figures; published in JCAP Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th) [61]  arXiv:1403.5560 (replaced) [pdf, ps, other] Title: Prospecting in Ultracool Dwarfs: Measuring the Metallicities of Mid- and Late-M Dwarfs Comments: Accepted to AJ. 13 pages, 6 figures, 5 tables. NIR spectra of companions included. IDL program to apply calibration is available at this http URL Subjects: Solar and Stellar Astrophysics (astro-ph.SR) [62]  arXiv:1403.7534 (replaced) [pdf, ps, other] Title: Multifrequency Studies of the Peculiar Quasar 4C +21.35 During the 2010 Flaring Activity Authors: M. Ackermann (1), M. Ajello (29), A. Allafort (39), E. Antolini (4,5), G. Barbiellini (6,7), D. Bastieri (8,9), R. Bellazzini (10), E. Bissaldi (11), E. Bonamente (5,4), J. Bregeon (10), M. Brigida (12,13), P. Bruel (14), R. Buehler (1), S. Buson (8,9), G. A. Caliandro (3), R. A. Cameron (3), P. A. Caraveo (15), E. Cavazzuti (16), C. Cecchi (5,4), R. C. G. Chaves (17), A. Chekhtman (18), J. Chiang (3), G. Chiaro (9), S. 
Ciprini (16,19), R. Claus (3), J. Cohen-Tanugi (20), J. Conrad (21,22,23,24), S. Cutini (16,19), F. D'Ammando (25), F. de Palma (12,13), C. D. Dermer (26), E. do Couto e Silva (3), D. Donato (27,28), P. S. Drell (3), C. Favuzzi (12,13), J. Finke (26), W. B. Focke (3), A. Franckowiak (3), Y. Fukazawa (29), P. Fusco (12,13), F. Gargano (13), D. Gasparrini (16,19), N. Gehrels (30), et al. (247 additional authors not shown) Comments: 46 pages, 10 figures, 4 tables. Astrophysical Journal, in press. Contact Authors: Filippo D'Ammando, Justin Finke, Davide Donato, Josefa Becerra Gonzalez, Tomislav Terzic Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA) [63]  arXiv:1404.2996 (replaced) [pdf, ps, other] Title: Inert Dark Matter in Type-II Seesaw Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Experiment (hep-ex) [64]  arXiv:1404.3556 (replaced) [pdf, ps, other] Title: Intrinsic $γ$-ray luminosity, black hole mass, jet and accretion in Fermi blazars Comments: 24 pages, 1 table, 18 figures, Accepted for publication in MNRAS. arXiv admin note: text overlap with arXiv:1209.4702, arXiv:1012.0308 by other authors. 
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) [65]  arXiv:1404.3690 (replaced) [pdf, ps, other] Title: Reconstruction of the primordial power spectra with Planck and BICEP2 Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th) [66]  arXiv:1404.4709 (replaced) [pdf, ps, other] Title: The Higgs vacuum is unstable Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th) [67]  arXiv:1404.4817 (replaced) [pdf, ps, other] Title: Faint Population III supernovae as the origin of the most iron-poor stars Comments: Submitted to the Astrophysical Journal Letters, The list of references is corrected Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Solar and Stellar Astrophysics (astro-ph.SR) [68]  arXiv:1404.4870 (replaced) [pdf, other] Title: Towards precision distances and 3D dust maps using broadband Period--Magnitude relations of RR Lyrae stars Comments: 21 pages, 29 figures, 2 tables, abstract abridged for arXiv Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Instrumentation and Methods for Astrophysics (astro-ph.IM) [69]  arXiv:1404.5276 (replaced) [pdf, other] Title: RF heating efficiency of the terahertz superconducting hot-electron bolometer Subjects: Instrumentation and Detectors (physics.ins-det); Instrumentation and Methods for Astrophysics (astro-ph.IM); Superconductivity (cond-mat.supr-con); Computational Physics (physics.comp-ph)
# Proving Set Theorems

We have looked at various equalities between sets and proven them. We will now formally demonstrate how to prove that two sets are equal. Recall that for two sets $A$ and $B$, $A = B$ if and only if $A \subseteq B$ and $B \subseteq A$. Therefore, we must show that both inclusions hold for two sets to be equal.

Before we look at proving some set equalities, or even proving that one set is a subset of another, let's first review some important properties regarding sets. Notice the difference between "or" and "and" in the following:

• If $A \subseteq B$, then $x \in A$ implies $x \in B$.
• $x \in A \cup B$ implies $x \in A$ or $x \in B$.
• $x \not \in A \cup B$ implies $x \not \in A$ and $x \not \in B$.
• $x \in A \cap B$ implies $x \in A$ and $x \in B$.
• $x \not \in A \cap B$ implies $x \not \in A$ or $x \not \in B$.
• $x \in A \setminus B$ implies $x \in A$ and $x \not \in B$.
• $x \not \in A \setminus B$ implies $x \not \in A$ or $x \in B$.

More examples can be found on the Proving Set Theorems Examples 1 page.

## Example 1

Prove that $A \cup (A \cap B) = A$ (one of the absorption laws of sets).

• Proof: $\Rightarrow$ We first show that $A \cup (A \cap B) \subseteq A$. If $A \cup (A \cap B) = \emptyset$ then we are done, since the empty set is a subset of any set. If not, let $x \in A \cup (A \cap B)$. Then $x \in A$ or $x \in A \cap B$. If $x \in A$ we are done. If $x \in A \cap B$ then $x \in A$ as well, so in either case $x \in A$. Therefore $A \cup (A \cap B) \subseteq A$.
• $\Leftarrow$ We now show that $A \subseteq A \cup (A \cap B)$. If $A = \emptyset$ then we are done. If not, let $x \in A$. Then $x$ belongs to the first set in the union, so immediately $x \in A \cup (A \cap B)$. Therefore $A \subseteq A \cup (A \cap B)$.

## Example 2

Prove that $A \setminus (A \setminus B) \subseteq B$. Notice that in this example we are not asked to prove a set equality.
In fact, we're only asked to prove that one set is a subset of another! • Proof: We want to show $A \setminus ( A \setminus B ) \subset B$. Let $x \in A \setminus (A \setminus B)$. Then $x \in A$ and $x \not \in (A \setminus B)]]. If [[$ x \not \in (A \setminus B)$then$x \not in A$or$x \in B$. Therefore$A \setminus (A \setminus B) \subseteq B$. ## Example 3 Prove that if$A \subseteq C$and$B \subseteq C$and$C \setminus A \subseteq B$, then$C = A \cup B$. • Proof:$\Rightarrow$We want to first show that$C \subseteq A \cup B$. If$C = \emptyset$then we are done since the empty set is always a subset of any set. Now if$C \neq \emptyset$, then let some element$x \in C$. So either$x \in A$or$x \not \in A$. If$x \in A$then$x \in A \cup B$and we are done. If$x \not \in A$then$x \in (C \setminus A)$. But if$x \in (C \setminus A)$then$x \in B$so$x \in A \cup B$or rather,$C \subseteq A \cup B$. •$\Leftarrow$We want to now show that$A \cup B \subseteq C$. If$A \cup B = \emptyset$then we are done once again. If not, let$x \in A \cup B$. Then$x \in A$or$x \in B$. If$x \in A$then$x \in C$since$A \subseteq C$. If$x \in B$then$x \in C$since$B \subseteq C$. Therefore$x \in C$or rather,$A \cup B \subseteq C\$. Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License
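The three identities above can also be spot-checked (though of course not proved) on concrete sets using Python's built-in set type; the sample sets below are arbitrary:

```python
# Spot-check (not a proof!) of the set identities above on sample sets.
A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {1, 2, 3, 4, 5}

# Example 1: absorption law  A ∪ (A ∩ B) = A
assert A | (A & B) == A

# Example 2: A \ (A \ B) ⊆ B   (<= tests the subset relation)
assert (A - (A - B)) <= B

# Example 3: if A ⊆ C, B ⊆ C, and C \ A ⊆ B, then C = A ∪ B
assert A <= C and B <= C and (C - A) <= B   # hypotheses hold for these sets
assert C == A | B                            # ... so the conclusion holds
print("all identities check out on these sample sets")
```

A passing check on one choice of sets is only evidence, not a proof; the element-chasing arguments above are what establish the identities for all sets.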
2019-04-26T10:16:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999951124191284, "perplexity": 332.1022395393336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578765115.93/warc/CC-MAIN-20190426093516-20190426115516-00107.warc.gz"}
https://pos.sissa.it/283/085/
Volume 283 - Neutrino Oscillation Workshop (NOW2016) - Session V: Particle Physics in the Cosmos

The effective number of neutrinos: standard and non-standard scenarios

S. Pastor

Full text: pdf
Pre-published on: February 28, 2017
Published on: June 20, 2017

Abstract: We study the decoupling process of neutrinos in the early universe in the presence of three-flavour oscillations. The evolution of the neutrino spectra is found by solving the corresponding momentum-dependent kinetic equations for the neutrino density matrix, including for the first time the proper collision integrals for both diagonal and off-diagonal elements. We find that the contribution of neutrinos to the cosmological energy density in the form of radiation, in terms of the effective number of neutrinos, is $N_{\rm eff}=3.045$. This result does not depend on the ordering of neutrino masses; it is in agreement with previous theoretical calculations and consistent with the latest analysis of Planck data. We also calculate the effect of non-standard neutrino-electron interactions (NSI), predicted in many theoretical models where neutrinos acquire mass. For two sets of NSI parameters allowed by present data, we find that $N_{\rm eff}$ can be reduced down to $3.040$ or enhanced up to $3.059$. Finally, we consider the case of very low reheating scenarios ($T_{\rm RH}\sim\mathcal{O}({\rm MeV})$), where the thermalization of neutrinos can be incomplete ($N_{\rm eff}<3$), leading to a lower bound on the reheating temperature, $T_{\rm RH} > 4.7$ MeV, from Planck data (95% CL).

DOI: https://doi.org/10.22323/1.283.0085

Open Access. Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
2023-02-04T04:54:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5720579028129578, "perplexity": 1524.216774284549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500094.26/warc/CC-MAIN-20230204044030-20230204074030-00267.warc.gz"}
https://phys.libretexts.org/Bookshelves/College_Physics/Book%3A_Conceptual_Physics_(Crowell)/08._Atoms_and_Electromagnetism/8.1_The_Electric_Glue
# 8.1 The Electric Glue

*Where the telescope ends, the microscope begins. Which of the two has the grander view?* -- Victor Hugo

His father died during his mother's pregnancy. Rejected by her as a boy, he was packed off to boarding school when she remarried. He himself never married, but in middle age he formed an intense relationship with a much younger man, a relationship that he terminated when he underwent a psychotic break. Following his early scientific successes, he spent the rest of his professional life mostly in frustration over his inability to unlock the secrets of alchemy.

The man being described is Isaac Newton, but not the triumphant Newton of the standard textbook hagiography. Why dwell on the sad side of his life? To the modern science educator, Newton's lifelong obsession with alchemy may seem an embarrassment, a distraction from his main achievement, the creation of the modern science of mechanics. To Newton, however, his alchemical researches were naturally related to his investigations of force and motion. What was radical about Newton's analysis of motion was its universality: it succeeded in describing both the heavens and the earth with the same equations, whereas previously it had been assumed that the sun, moon, stars, and planets were fundamentally different from earthly objects. But Newton realized that if science was to describe all of nature in a unified way, it was not enough to unite the human scale with the scale of the universe: he would not be satisfied until he fit the microscopic universe into the picture as well.

It should not surprise us that Newton failed. Although he was a firm believer in the existence of atoms, there was no more experimental evidence for their existence than there had been when the ancient Greeks first posited them on purely philosophical grounds. Alchemy labored under a tradition of secrecy and mysticism.
Newton had already almost single-handedly transformed the fuzzy-headed field of “natural philosophy” into something we would recognize as the modern science of physics, and it would be unjust to criticize him for failing to change alchemy into modern chemistry as well. The time was not ripe. The microscope was a new invention, and it was cutting-edge science when Newton's contemporary Hooke discovered that living things were made out of cells.

### 8.1.1 The quest for the atomic force

*Newton was not the first of the age of reason. He was the last of the magicians.* -- John Maynard Keynes

#### Newton's quest

Nevertheless it will be instructive to pick up Newton's train of thought and see where it leads us with the benefit of modern hindsight. In uniting the human and cosmic scales of existence, he had reimagined both as stages on which the actors were objects (trees and houses, planets and stars) that interacted through attractions and repulsions. He was already convinced that the objects inhabiting the microworld were atoms, so it remained only to determine what kinds of forces they exerted on each other.

His next insight was no less brilliant for his inability to bring it to fruition. He realized that the many human-scale forces --- friction, sticky forces, the normal forces that keep objects from occupying the same space, and so on --- must all simply be expressions of a more fundamental force acting between atoms. Tape sticks to paper because the atoms in the tape attract the atoms in the paper. My house doesn't fall to the center of the earth because its atoms repel the atoms of the dirt under it.

Here he got stuck. It was tempting to think that the atomic force was a form of gravity, which he knew to be universal, fundamental, and mathematically simple. Gravity, however, is always attractive, so how could he use it to explain the existence of both attractive and repulsive atomic forces?
The gravitational force between objects of ordinary size is also extremely small, which is why we never notice cars and houses attracting us gravitationally. It would be hard to understand how gravity could be responsible for anything as vigorous as the beating of a heart or the explosion of gunpowder. Newton went on to write a million words of alchemical notes filled with speculation about some other force, perhaps a “divine force” or “vegetative force” that would for example be carried by the sperm to the egg.

a / Four pieces of tape are prepared, 1, as described in the text. Depending on which combination is tested, the interaction can be either repulsive, 2, or attractive, 3.

Luckily, we now know enough to investigate a different suspect as a candidate for the atomic force: electricity. Electric forces are often observed between objects that have been prepared by rubbing (or other surface interactions), for instance when clothes rub against each other in the dryer. A useful example is shown in figure a/1: stick two pieces of tape on a tabletop, and then put two more pieces on top of them. Lift each pair from the table, and then separate them. The two top pieces will then repel each other, a/2, as will the two bottom pieces. A bottom piece will attract a top piece, however, a/3.

Electrical forces like these are similar in certain ways to gravity, the other force that we already know to be fundamental:

• Electrical forces are universal. Although some substances, such as fur, rubber, and plastic, respond more strongly to electrical preparation than others, all matter participates in electrical forces to some degree. There is no such thing as a “nonelectric” substance. Matter is both inherently gravitational and inherently electrical.

• Experiments show that the electrical force, like the gravitational force, is an inverse square force.
That is, the electrical force between two spheres is proportional to $$1/r^2$$, where $$r$$ is the center-to-center distance between them.

Furthermore, electrical forces make more sense than gravity as candidates for the fundamental force between atoms, because we have observed that they can be either attractive or repulsive.

### 8.1.2 Charge, electricity and magnetism

#### Charge

“Charge” is the technical term used to indicate that an object has been prepared so as to participate in electrical forces. This is to be distinguished from the common usage, in which the term is used indiscriminately for anything electrical. For example, although we speak colloquially of “charging” a battery, you may easily verify that a battery has no charge in the technical sense, e.g., it does not exert any electrical force on a piece of tape that has been prepared as described in the previous section.

#### Two types of charge

We can easily collect reams of data on electrical forces between different substances that have been charged in different ways. We find for example that cat fur prepared by rubbing against rabbit fur will attract glass that has been rubbed on silk. How can we make any sense of all this information? A vast simplification is achieved by noting that there are really only two types of charge. Suppose we pick cat fur rubbed on rabbit fur as a representative of type A, and glass rubbed on silk for type B. We will now find that there is no “type C.” Any object electrified by any method is either A-like, attracting things A attracts and repelling those it repels, or B-like, displaying the same attractions and repulsions as B. The two types, A and B, always display opposite interactions. If A displays an attraction with some charged object, then B is guaranteed to undergo repulsion with it, and vice-versa.

#### The coulomb

Although there are only two types of charge, each type can come in different amounts.
The metric unit of charge is the coulomb (rhymes with “drool on”), defined as follows:

One coulomb (C) is the amount of charge such that a force of $$9.0\times10^9$$ N occurs between two point-like objects with charges of 1 C separated by a distance of 1 m.

The notation for an amount of charge is $$q$$. The numerical factor in the definition is historical in origin, and is not worth memorizing. The definition is stated for point-like, i.e., very small, objects, because otherwise different parts of them would be at different distances from each other.

#### A model of two types of charged particles

Experiments show that all the methods of rubbing or otherwise charging objects involve two objects, and both of them end up getting charged. If one object acquires a certain amount of one type of charge, then the other ends up with an equal amount of the other type. Various interpretations of this are possible, but the simplest is that the basic building blocks of matter come in two flavors, one with each type of charge. Rubbing objects together results in the transfer of some of these particles from one object to the other. In this model, an object that has not been electrically prepared may actually possess a great deal of both types of charge, but the amounts are equal and they are distributed in the same way throughout it. Since type A repels anything that type B attracts, and vice versa, the object will make a total force of zero on any other object. The rest of this chapter fleshes out this model and discusses how these mysterious particles can be understood as being internal parts of atoms.

#### Use of positive and negative signs for charge

Because the two types of charge tend to cancel out each other's forces, it makes sense to label them using positive and negative signs, and to discuss the total charge of an object. It is entirely arbitrary which type of charge to call negative and which to call positive.
Benjamin Franklin decided to describe the one we've been calling “A” as negative, but it really doesn't matter as long as everyone is consistent with everyone else. An object with a total charge of zero (equal amounts of both types) is referred to as electrically *neutral*.

self-check: Criticize the following statement: “There are two types of charge, attractive and repulsive.”

A large body of experimental observations can be summarized as follows:

Coulomb's law: The magnitude of the force acting between pointlike charged objects at a center-to-center distance $$r$$ is given by the equation

$\begin{equation*} |\mathbf{F}| = k\frac{|q_1||q_2|}{r^2} , \end{equation*}$

where the constant $$k$$ equals $$9.0\times10^9\ \text{N}\!\cdot\!\text{m}^2/\text{C}^2$$. The force is attractive if the charges are of different signs, and repulsive if they have the same sign.

Clever modern techniques have allowed the $$1/r^2$$ form of Coulomb's law to be tested to incredible accuracy, showing that the exponent is in the range from 1.9999999999999998 to 2.0000000000000002.

Note that Coulomb's law is closely analogous to Newton's law of gravity, where the magnitude of the force is $$Gm_1m_2/r^2$$, except that there is only one type of mass, not two, and gravitational forces are never repulsive. Because of this close analogy between the two types of forces, we can recycle a great deal of our knowledge of gravitational forces. For instance, there is an electrical equivalent of the shell theorem: the electrical forces exerted externally by a uniformly charged spherical shell are the same as if all the charge was concentrated at its center, and the forces exerted internally are zero.

#### Conservation of charge

An even more fundamental reason for using positive and negative signs for electrical charge is that experiments show that charge is conserved according to this definition: in any closed system, the total amount of charge is a constant.
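As a quick numerical sanity check on Coulomb's law, the sketch below evaluates $|\mathbf{F}| = k|q_1||q_2|/r^2$ for the defining 1 C / 1 m case and for a pair of small laboratory-scale charges (the 2 μC and 10 cm values are made up for illustration):

```python
# Coulomb's law: |F| = k * |q1| * |q2| / r**2
k = 9.0e9  # N·m²/C²

def coulomb_force(q1, q2, r):
    """Magnitude of the electrical force between two point charges (in N).
    The force is attractive if q1 and q2 have opposite signs, repulsive otherwise."""
    return k * abs(q1) * abs(q2) / r**2

# The defining case: two 1 C charges 1 m apart feel 9.0e9 N.
print(coulomb_force(1.0, 1.0, 1.0))      # prints 9000000000.0

# Two (hypothetical) 2 μC charges 10 cm apart -- a laboratory-scale situation.
print(coulomb_force(2e-6, 2e-6, 0.10))   # a few newtons

# Inverse-square behavior: doubling r cuts the force by a factor of 4.
assert coulomb_force(1.0, 1.0, 2.0) == coulomb_force(1.0, 1.0, 1.0) / 4
```

Note how enormous the 1 C / 1 m force is; everyday static-electricity experiments involve charges many orders of magnitude smaller than a coulomb.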
This is why we observe that rubbing initially uncharged substances together always has the result that one gains a certain amount of one type of charge, while the other acquires an equal amount of the other type. Conservation of charge seems natural in our model in which matter is made of positive and negative particles. If the charge on each particle is a fixed property of that type of particle, and if the particles themselves can be neither created nor destroyed, then conservation of charge is inevitable.

#### Electrical forces involving neutral objects

b / A charged piece of tape attracts uncharged pieces of paper from a distance, and they leap up to it.

As shown in figure b, an electrically charged object can attract objects that are uncharged. How is this possible? The key is that even though each piece of paper has a total charge of zero, it has at least some charged particles in it that have some freedom to move. Suppose that the tape is positively charged, c. Mobile particles in the paper will respond to the tape's forces, causing one end of the paper to become negatively charged and the other to become positive. The attraction between the paper and the tape is now stronger than the repulsion, because the negatively charged end is closer to the tape.

c / The paper has zero total charge, but it does have charged particles in it that can move.

self-check: What would have happened if the tape was negatively charged? (answer in the back of the PDF version of the book)

We have begun to encounter complex electrical behavior that we would never have realized was occurring just from the evidence of our eyes. Unlike the pulleys, blocks, and inclined planes of mechanics, the actors on the stage of electricity and magnetism are invisible phenomena alien to our everyday experience. For this reason, the flavor of the second half of your physics education is dramatically different, focusing much more on experiments and techniques.
Even though you will never actually see charge moving through a wire, you can learn to use an ammeter to measure the flow.

Students also tend to get the impression from their first semester of physics that it is a dead science. Not so! We are about to pick up the historical trail that leads directly to the cutting-edge physics research you read about in the newspaper. The atom-smashing experiments that began around 1900, which we will be studying in this chapter, were not that different from the ones of the year 2000 --- just smaller, simpler, and much cheaper.

#### Magnetic forces

A detailed mathematical treatment of magnetism won't come until much later in this book, but we need to develop a few simple ideas about magnetism now because magnetic forces are used in the experiments and techniques we come to next. Everyday magnets come in two general types. Permanent magnets, such as the ones on your refrigerator, are made of iron or substances like steel that contain iron atoms. (Certain other substances also work, but iron is the cheapest and most common.) The other type of magnet, an example of which is the ones that make your stereo speakers vibrate, consists of coils of wire through which electric charge flows. Both types of magnets are able to attract iron that has not been magnetically prepared, for instance the door of the refrigerator.

A single insight makes these apparently complex phenomena much simpler to understand: magnetic forces are interactions between moving charges, occurring in addition to the electric forces. Suppose a permanent magnet is brought near a magnet of the coiled-wire type. The coiled wire has moving charges in it because we force charge to flow. The permanent magnet also has moving charges in it, but in this case the charges naturally swirl around inside the iron. (What makes a magnetized piece of iron different from a block of wood is that the motion of the charge in the wood is random rather than organized.)
The moving charges in the coiled-wire magnet exert a force on the moving charges in the permanent magnet, and vice-versa.

The mathematics of magnetism is significantly more complex than the Coulomb force law for electricity, which is why we will wait until chapter 11 before delving deeply into it. Two simple facts will suffice for now:

1. If a charged particle is moving in a region of space near where other charged particles are also moving, their magnetic force on it is directly proportional to its velocity.

2. The magnetic force on a moving charged particle is always perpendicular to the direction the particle is moving.

Example 1: A magnetic compass

The Earth is molten inside, and like a pot of boiling water, it roils and churns. To make a drastic oversimplification, electric charge can get carried along with the churning motion, so the Earth contains moving charge. The needle of a magnetic compass is itself a small permanent magnet. The moving charge inside the earth interacts magnetically with the moving charge inside the compass needle, causing the compass needle to twist around and point north.

Example 2: A television tube

A TV picture is painted by a stream of electrons coming from the back of the tube to the front. The beam scans across the whole surface of the tube like a reader scanning a page of a book. Magnetic forces are used to steer the beam. As the beam comes from the back of the tube to the front, up-down and left-right forces are needed for steering. But magnetic forces cannot be used to get the beam up to speed in the first place, since they can only push perpendicular to the electrons' direction of motion, not forward along it.

Discussion Questions

◊ An electrically charged piece of tape will be attracted to your hand. Does that allow us to tell whether the mobile charged particles in your hand are positive or negative, or both?
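The two facts above can be checked numerically. The sketch below uses the magnetic force law $\mathbf{F} = q\mathbf{v}\times\mathbf{B}$, which this book does not develop until chapter 11, purely to verify that the force comes out perpendicular to the velocity and proportional to the speed; the particular charge, velocity, and field values are arbitrary illustrative numbers.

```python
# Illustrating the two facts about magnetic forces using F = q v × B
# (the full force law, introduced later in the book).
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def magnetic_force(q, v, B):
    """Force (N) on charge q (C) moving with velocity v (m/s) in field B (T)."""
    return tuple(q*c for c in cross(v, B))

q = 1.6e-19               # an electron-sized charge, C
v = (2.0e6, 0.0, 0.0)     # velocity, m/s (illustrative)
B = (0.0, 0.0, 0.5)       # magnetic field, T (illustrative)

F = magnetic_force(q, v, B)

# Fact 2: the force is perpendicular to the velocity (dot product is zero).
assert dot(F, v) == 0.0

# Fact 1: doubling the speed doubles the force.
F2 = magnetic_force(q, tuple(2*c for c in v), B)
assert all(f2 == 2*f1 for f1, f2 in zip(F, F2))
```

This is also why, as Example 2 notes, magnetic forces can steer an electron beam but can never speed it up: a force perpendicular to the motion changes direction, not speed.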
◊ If the electrical attraction between two pointlike objects at a distance of 1 m is $$9\times10^9$$ N, why can't we infer that their charges are $$+1$$ and $$-1$$ C? What further observations would we need to do in order to prove this?

$\begin{equation*} \frac{m_\text{He}}{m_\text{H}}=3.97 \end{equation*}$

$\begin{equation*} \frac{m_\text{Ne}}{m_\text{H}}=20.01 \end{equation*}$

$\begin{equation*} \frac{m_\text{Sc}}{m_\text{H}}=44.60 \end{equation*}$

Examples of masses of atoms compared to that of hydrogen. Note how some, but not all, are close to integers.

### 8.1.3 Atoms

*I was brought up to look at the atom as a nice, hard fellow, red or grey in color according to taste.* -- Rutherford

#### Atomism

The Greeks have been kicked around a lot in the last couple of millennia: dominated by the Romans, bullied during the crusades by warlords going to and from the Holy Land, and occupied by Turkey until recently. It's no wonder they prefer to remember their salad days, when their best thinkers came up with concepts like democracy and atoms. Greece is democratic again after a period of military dictatorship, and an atom is proudly pictured on one of their coins. That's why it hurts me to have to say that the ancient Greek hypothesis that matter is made of atoms was pure guesswork. There was no real experimental evidence for atoms, and the 18th-century revival of the atom concept by Dalton owed little to the Greeks other than the name, which means “unsplittable.” Subtracting even more cruelly from Greek glory, the name was shown to be inappropriate in 1897 when physicist J.J. Thomson proved experimentally that atoms had even smaller things inside them, which could be extracted. (Thomson called them “electrons.”) The “unsplittable” was splittable after all.

But that's getting ahead of our story. What happened to the atom concept in the intervening two thousand years?
Educated people continued to discuss the idea, and those who were in favor of it could often use it to give plausible explanations for various facts and phenomena. One fact that was readily explained was conservation of mass. For example, if you mix 1 kg of water with 1 kg of dirt, you get exactly 2 kg of mud, no more and no less. The same is true for a variety of processes such as freezing of water, fermenting beer, or pulverizing sandstone. If you believed in atoms, conservation of mass made perfect sense, because all these processes could be interpreted as mixing and rearranging atoms, without changing the total number of atoms. Still, this is nothing like a proof that atoms exist.

If atoms did exist, what types of atoms were there, and what distinguished the different types from each other? Was it their sizes, their shapes, their weights, or some other quality? The chasm between the ancient and modern atomisms becomes evident when we consider the wild speculations that existed on these issues until the present century. The ancients decided that there were four types of atoms, earth, water, air and fire; the most popular view was that they were distinguished by their shapes. Water atoms were spherical, hence water's ability to flow smoothly. Fire atoms had sharp points, which was why fire hurt when it touched one's skin. (There was no concept of temperature until thousands of years later.)

The drastically different modern understanding of the structure of atoms was achieved in the course of the revolutionary decade stretching from 1895 to 1905. The main purpose of this chapter is to describe those momentous experiments.

#### Atoms, light, and everything else

Although I tend to ridicule ancient Greek philosophers like Aristotle, let's take a moment to praise him for something.
If you read Aristotle's writings on physics (or just skim them, which is all I've done), the most striking thing is how careful he is about classifying phenomena and analyzing relationships among phenomena. The human brain seems to naturally make a distinction between two types of physical phenomena: objects and motion of objects. When a phenomenon occurs that does not immediately present itself as one of these, there is a strong tendency to conceptualize it as one or the other, or even to ignore its existence completely. For instance, physics teachers shudder at students' statements that “the dynamite exploded, and force came out of it in all directions.” In these examples, the nonmaterial concept of force is being mentally categorized as if it was a physical substance. The statement that “winding the clock stores motion in the spring” is a miscategorization of potential energy as a form of motion. An example of ignoring the existence of a phenomenon altogether can be elicited by asking people why we need lamps. The typical response that “the lamp illuminates the room so we can see things” ignores the necessary role of light coming into our eyes from the things being illuminated.

If you ask someone to tell you briefly about atoms, the likely response is that “everything is made of atoms,” but we've now seen that it's far from obvious which “everything” this statement would properly refer to. For the scientists of the early 1900s who were trying to investigate atoms, this was not a trivial issue of definitions. There was a new gizmo called the vacuum tube, of which the only familiar example today is the picture tube of a TV.
In short order, electrical tinkerers had discovered a whole flock of new phenomena that occurred in and around vacuum tubes, and given them picturesque names like “x-rays,” “cathode rays,” “Hertzian waves,” and “N-rays.” These were the types of observations that ended up telling us what we know about matter, but fierce controversies ensued over whether these were themselves forms of matter.

Let's bring ourselves up to the level of classification of phenomena employed by physicists in the year 1900. They recognized three categories:

• Matter has mass, can have kinetic energy, and can travel through a vacuum, transporting its mass and kinetic energy with it. Matter is conserved, both in the sense of conservation of mass and conservation of the number of atoms of each element. Atoms can't occupy the same space as other atoms, so a convenient way to prove something is not a form of matter is to show that it can pass through a solid material, in which the atoms are packed together closely.

• Light has no mass, always has energy, and can travel through a vacuum, transporting its energy with it. Two light beams can penetrate through each other and emerge from the collision without being weakened, deflected, or affected in any other way. Light can penetrate certain kinds of matter, e.g., glass.

• The third category is everything that doesn't fit the definition of light or matter. This catch-all category includes, for example, time, velocity, heat, and force.

#### The chemical elements

How would one find out what types of atoms there were? Today, it doesn't seem like it should have been very difficult to work out an experimental program to classify the types of atoms. For each type of atom, there should be a corresponding element, i.e., a pure substance made out of nothing but that type of atom.
Atoms are supposed to be unsplittable, so a substance like milk could not possibly be elemental, since churning it vigorously causes it to split up into two separate substances: butter and whey. Similarly, rust could not be an element, because it can be made by combining two substances: iron and oxygen. Despite its apparent reasonableness, no such program was carried out until the eighteenth century. The ancients presumably did not do it because observation was not universally agreed on as the right way to answer questions about nature, and also because they lacked the necessary techniques or the techniques were the province of laborers with low social status, such as smiths and miners. Alchemists were hindered by atomism's reputation for subversiveness, and by a tendency toward mysticism and secrecy. (The most celebrated challenge facing the alchemists, that of converting lead into gold, is one we now know to be impossible, since lead and gold are both elements.)

By 1900, however, chemists had done a reasonably good job of finding out what the elements were. They also had determined the ratios of the different atoms' masses fairly accurately. A typical technique would be to measure how many grams of sodium (Na) would combine with one gram of chlorine (Cl) to make salt (NaCl). (This assumes you've already decided based on other evidence that salt consisted of equal numbers of Na and Cl atoms.) The masses of individual atoms, as opposed to the mass ratios, were known only to within a few orders of magnitude based on indirect evidence, and plenty of physicists and chemists denied that individual atoms were anything more than convenient symbols.

#### Making sense of the elements

As the information accumulated, the challenge was to find a way of systematizing it; the modern scientist's aesthetic sense rebels against complication. This hodgepodge of elements was an embarrassment.
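As an illustration of the mass-ratio technique, here is the salt calculation run in reverse using modern atomic masses (numbers the chemists of 1900 knew only as approximate ratios): given that salt contains equal numbers of Na and Cl atoms, the combining masses must be in the same ratio as the atomic masses.

```python
# Combining-mass calculation for salt, NaCl, assuming (as in the text)
# equal numbers of Na and Cl atoms.
# Atomic masses in atomic mass units (modern values, for illustration):
m_Na = 22.99
m_Cl = 35.45

# Grams of sodium that combine with one gram of chlorine:
grams_Na_per_gram_Cl = m_Na / m_Cl
print(round(grams_Na_per_gram_Cl, 3))   # about 0.649 g of Na per g of Cl
```

Measuring this ratio in the lab, and repeating the exercise for many compounds, is what let chemists assemble a consistent table of relative atomic masses without ever knowing the mass of a single atom.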
One contemporary observer, William Crookes, described the elements as extending “before us as stretched the wide Atlantic before the gaze of Columbus, mocking, taunting and murmuring strange riddles, which no man has yet been able to solve.” It wasn't long before people started recognizing that many atoms' masses were nearly integer multiples of the mass of hydrogen, the lightest element. A few excitable types began speculating that hydrogen was the basic building block, and that the heavier elements were made of clusters of hydrogen. It wasn't long, however, before their parade was rained on by more accurate measurements, which showed that not all of the elements had atomic masses that were near integer multiples of hydrogen, and even the ones that were close to being integer multiples were off by one percent or so.

e / A modern periodic table. Elements in the same column have similar chemical properties. The modern atomic numbers, discussed in section 8.2, were not known in Mendeleev's time, since the table could be flipped in various ways.

Chemistry professor Dmitri Mendeleev, preparing his lectures in 1869, wanted to find some way to organize his knowledge for his students to make it more understandable. He wrote the names of all the elements on cards and began arranging them in different ways on his desk, trying to find an arrangement that would make sense of the muddle. The row-and-column scheme he came up with is essentially our modern periodic table. The columns of the modern version represent groups of elements with similar chemical properties, and each row is more massive than the one above it. Going across each row, this almost always resulted in placing the atoms in sequence by weight as well.

What made the system significant was its predictive value. There were three places where Mendeleev had to leave gaps in his checkerboard to keep chemically similar elements in the same column.
He predicted that elements would exist to fill these gaps, and extrapolated or interpolated from other elements in the same column to predict their numerical properties, such as masses, boiling points, and densities. Mendeleev's professional stock skyrocketed when his three elements (later named gallium, scandium, and germanium) were discovered and found to have very nearly the properties he had predicted.

One thing that Mendeleev's table made clear was that mass was not the basic property that distinguished atoms of different elements. To make his table work, he had to deviate from ordering the elements strictly by mass. For instance, iodine atoms are lighter than tellurium, but Mendeleev had to put iodine after tellurium so that it would lie in a column with chemically similar elements.

#### Direct proof that atoms existed

The success of the kinetic theory of heat was taken as strong evidence that, in addition to the motion of any object as a whole, there is an invisible type of motion all around us: the random motion of atoms within each object. But many conservatives were not convinced that atoms really existed. Nobody had ever seen one, after all. It wasn't until generations after the kinetic theory of heat was developed that it was demonstrated conclusively that atoms really existed and that they participated in continuous motion that never died out.

The smoking gun to prove atoms were more than mathematical abstractions came when some old, obscure observations were reexamined by an unknown Swiss patent clerk named Albert Einstein. A botanist named Brown, using a microscope that was state of the art in 1827, observed tiny grains of pollen in a drop of water on a microscope slide, and found that they jumped around randomly for no apparent reason. Wondering at first if the pollen he'd assumed to be dead was actually alive, he tried looking at particles of soot, and found that the soot particles also moved around.
The same results would occur with any small grain or particle suspended in a liquid. The phenomenon came to be referred to as Brownian motion, and its existence was filed away as a quaint and thoroughly unimportant fact, really just a nuisance for the microscopist.

It wasn't until 1906 that Einstein found the correct interpretation for Brown's observation: the water molecules were in continuous random motion, and were colliding with the particle all the time, kicking it in random directions. After all the millennia of speculation about atoms, at last there was solid proof. Einstein's calculations dispelled all doubt, since he was able to make accurate predictions of things like the average distance traveled by the particle in a certain amount of time. (Einstein received the Nobel Prize not for his theory of relativity but for his work on the photoelectric effect.)

##### Discussion Questions

◊ How could knowledge of the size of an individual aluminum atom be used to infer an estimate of its mass, or vice versa?

◊ How could one test Einstein's interpretation of Brownian motion by observing it at different temperatures?

## 8.1.4 Quantization of charge

Proving that atoms actually existed was a big accomplishment, but demonstrating their existence was different from understanding their properties. Note that the Brown-Einstein observations had nothing at all to do with electricity, and yet we know that matter is inherently electrical, and we have been successful in interpreting certain electrical phenomena in terms of mobile positively and negatively charged particles. Are these particles atoms? Parts of atoms? Particles that are entirely separate from atoms? It is perhaps premature to attempt to answer these questions without any conclusive evidence in favor of the charged-particle model of electricity.

f / A young Robert Millikan.
(Contemporary)

Strong support for the charged-particle model came from a 1911 experiment by physicist Robert Millikan at the University of Chicago. Consider a jet of droplets of perfume or some other liquid made by blowing it through a tiny pinhole. The droplets emerging from the pinhole must be smaller than the pinhole, and in fact most of them are even more microscopic than that, since the turbulent flow of air tends to break them up. Millikan reasoned that the droplets would acquire a little bit of electric charge as they rubbed against the channel through which they emerged, and if the charged-particle model of electricity was right, the charge might be split up among so many minuscule liquid drops that a single drop might have a total charge amounting to an excess of only a few charged particles --- perhaps an excess of one positive particle on a certain drop, or an excess of two negative ones on another.

g / A simplified diagram of Millikan's apparatus.

Millikan's ingenious apparatus, g, consisted of two metal plates, which could be electrically charged as needed. He sprayed a cloud of oil droplets into the space between the plates, and selected one drop through a microscope for study. First, with no charge on the plates, he would determine the drop's mass by letting it fall through the air and measuring its terminal velocity, i.e., the velocity at which the force of air friction canceled out the force of gravity. The force of air drag on a slowly moving sphere had already been found by experiment to be $$bvr^2$$, where $$b$$ was a constant. Setting the total force equal to zero when the drop is at terminal velocity gives

$\begin{equation*} bvr^2 - mg = 0 , \end{equation*}$

and setting the known density of oil equal to the drop's mass divided by its volume gives a second equation,

$\begin{equation*} \rho = \frac{m}{\frac{4}{3}\pi r^3} .
\end{equation*}$

Everything in these equations can be measured directly except for $$m$$ and $$r$$, so these are two equations in two unknowns, which can be solved in order to determine how big the drop is.

Next Millikan charged the metal plates, adjusting the amount of charge so as to exactly counteract gravity and levitate the drop. If, for instance, the drop being examined happened to have a total charge that was negative, then positive charge put on the top plate would attract it, pulling it up, and negative charge on the bottom plate would repel it, pushing it up. (Theoretically only one plate would be necessary, but in practice a two-plate arrangement like this gave electrical forces that were more uniform in strength throughout the space where the oil drops were.) The amount of charge on the plates required to levitate the charged drop gave Millikan a handle on the amount of charge the drop carried. The more charge the drop had, the stronger the electrical forces on it would be, and the less charge would have to be put on the plates to do the trick.

Unfortunately, expressing this relationship using Coulomb's law would have been impractical, because it would require a perfect knowledge of how the charge was distributed on each plate, plus the ability to perform vector addition of all the forces being exerted on the drop by all the charges on the plate. Instead, Millikan made use of the fact that the electrical force experienced by a pointlike charged object at a certain point in space is proportional to its charge,

$\begin{equation*} \frac{F}{q} = \text{constant} . \end{equation*}$

With a given amount of charge on the plates, this constant could be determined for instance by discarding the oil drop, inserting between the plates a larger and more easily handled object with a known charge on it, and measuring the force with conventional methods.
(Millikan actually used a slightly different set of techniques for determining the constant, but the concept is the same.) The amount of force on the actual oil drop had to equal $$mg$$, since it was just enough to levitate it, and once the calibration constant had been determined, the charge of the drop could then be found based on its previously determined mass.

| $$q$$ (C) | $$q\,/\,(1.64\times10^{-19}\ \text{C})$$ |
| --- | --- |
| $$-1.970\times10^{-18}$$ | $$-12.02$$ |
| $$-0.987\times10^{-18}$$ | $$-6.02$$ |
| $$-2.773\times10^{-18}$$ | $$-16.93$$ |

A few samples of Millikan's data.

The table above shows a few of the results from Millikan's 1911 paper. (Millikan took data on both negatively and positively charged drops, but in his paper he gave only a sample of his data on negatively charged drops, so these numbers are all negative.) Even a quick look at the data leads to the suspicion that the charges are not simply a series of random numbers. For instance, the second charge is almost exactly equal to half the first one. Millikan explained the observed charges as all being integer multiples of a single number, $$1.64\times10^{-19}$$ C. In the second column, dividing by this constant gives numbers that are essentially integers, allowing for the random errors present in the experiment. Millikan states in his paper that these results were

> a ... direct and tangible demonstration ... of the correctness of the view advanced many years ago and supported by evidence from many sources that all electrical charges, however produced, are exact multiples of one definite, elementary electrical charge, or in other words, that an electrical charge instead of being spread uniformly over the charged surface has a definite granular structure, consisting, in fact, of ... specks, or atoms of electricity, all precisely alike, peppered over the surface of the charged body.

In other words, he had provided direct evidence for the charged-particle model of electricity and against models in which electricity was described as some sort of fluid.
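The arithmetic above can be checked with a short sketch. The oil density, drag constant $$b$$, and terminal velocity below are made-up illustrative values (not Millikan's actual numbers), but the charges in the loop are the ones from the table:

```python
from math import pi

# Hypothetical measured quantities (illustrative only):
rho = 840.0    # oil density, kg/m^3
g = 9.8        # gravitational acceleration, m/s^2
b = 345.0      # assumed drag constant in F_drag = b*v*r**2
v = 1.0e-4     # observed terminal velocity, m/s

# Combining b*v*r**2 = m*g with m = rho*(4/3)*pi*r**3 eliminates m,
# giving the drop's radius directly, and then its mass:
r = 3 * b * v / (4 * pi * rho * g)    # drop radius, m (about a micron here)
m = rho * (4.0 / 3.0) * pi * r**3     # drop mass, kg

# Dividing the charges from the table by Millikan's unit charge
# gives numbers that are nearly integers:
e_millikan = 1.64e-19
ratios = [q / e_millikan for q in (-1.970e-18, -0.987e-18, -2.773e-18)]
```

With these inputs the ratios come out near −12, −6, and −17, reproducing the second column of the table to within the experiment's random errors.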
The basic charge is notated $$e$$, and the modern value is $$e=1.60\times10^{-19}$$ C. The word “quantized” is used in physics to describe a quantity that can only have certain numerical values, and cannot have any of the values between those. In this language, we would say that Millikan discovered that charge is quantized. The charge $$e$$ is referred to as the quantum of charge.

#### A historical note on Millikan's fraud

Very few undergraduate physics textbooks mention the well-documented fact that although Millikan's conclusions were correct, he was guilty of scientific fraud. His technique was difficult and painstaking to perform, and his original notebooks, which have been preserved, show that the data were far less perfect than he claimed in his published scientific papers. In his publications, he stated categorically that every single oil drop observed had had a charge that was a multiple of $$e$$, with no exceptions or omissions. But his notebooks are replete with notations such as “beautiful data, keep,” and “bad run, throw out.”

Millikan, then, appears to have earned his Nobel Prize by advocating a correct position with dishonest descriptions of his data. Why do textbook authors fail to mention Millikan's fraud? It may be that they think students are too unsophisticated to correctly evaluate the implications of the fact that scientific fraud has sometimes existed and even been rewarded by the scientific establishment. Maybe they are afraid students will reason that fudging data is OK, since Millikan got the Nobel Prize for it. But falsifying history in the name of encouraging truthfulness is more than a little ironic. English teachers don't edit Shakespeare's tragedies so that the bad characters are always punished and the good ones never suffer!

self-check: Is money quantized? What is the quantum of money?
## 8.1.5 The electron

#### Cathode rays

Nineteenth-century physicists spent a lot of time trying to come up with wild, random ways to play with electricity. The best experiments of this kind were the ones that made big sparks or pretty colors of light. One such parlor trick was the cathode ray. To produce it, you first had to hire a good glassblower and find a good vacuum pump. The glassblower would create a hollow tube and embed two pieces of metal in it, called the electrodes, which were connected to the outside via metal wires passing through the glass. Before letting him seal up the whole tube, you would hook it up to a vacuum pump, and spend several hours huffing and puffing away at the pump's hand crank to get a good vacuum inside. Then, while you were still pumping on the tube, the glassblower would melt the glass and seal the whole thing shut.

Finally, you would put a large amount of positive charge on one wire and a large amount of negative charge on the other. Metals have the property of letting charge move through them easily, so the charge deposited on one of the wires would quickly spread out because of the repulsion of each part of it for every other part. This spreading-out process would result in nearly all the charge ending up in the electrodes, where there is more room to spread out than there is in the wire. For obscure historical reasons a negative electrode is called a cathode and a positive one is an anode.

i / Cathode rays observed in a vacuum tube.

Figure i shows the light-emitting stream that was observed. If, as shown in this figure, a hole was made in the anode, the beam would extend on through the hole until it hit the glass. Drilling a hole in the cathode, however, would not result in any beam coming out on the left side, and this indicated that the stuff, whatever it was, was coming from the cathode.
The rays were therefore christened “cathode rays.” (The terminology is still used today in the term “cathode ray tube” or “CRT” for the picture tube of a TV or computer monitor.)

#### Were cathode rays a form of light, or of matter?

Were cathode rays a form of light, or matter? At first no one really cared what they were, but as their scientific importance became more apparent, the light-versus-matter issue turned into a controversy along nationalistic lines, with the Germans advocating light and the English holding out for matter. The supporters of the material interpretation imagined the rays as consisting of a stream of atoms ripped from the substance of the cathode.

One of our defining characteristics of matter is that material objects cannot pass through each other. Experiments showed that cathode rays could penetrate at least some small thickness of matter, such as a metal foil a tenth of a millimeter thick, implying that they were a form of light.

Other experiments, however, pointed to the contrary conclusion. Light is a wave phenomenon, and one distinguishing property of waves is demonstrated by speaking into one end of a paper towel roll. The sound waves do not emerge from the other end of the tube as a focused beam. Instead, they begin spreading out in all directions as soon as they emerge. This shows that waves do not necessarily travel in straight lines. If a piece of metal foil in the shape of a star or a cross was placed in the way of the cathode ray, then a “shadow” of the same shape would appear on the glass, showing that the rays traveled in straight lines. This straight-line motion suggested that they were a stream of small particles of matter.

These observations were inconclusive, so what was really needed was a determination of whether the rays had mass and weight. The trouble was that cathode rays could not simply be collected in a cup and put on a scale.
When the cathode ray tube is in operation, one does not observe any loss of material from the cathode, or any crust being deposited on the anode. Nobody could think of a good way to weigh cathode rays, so the next most obvious way of settling the light/matter debate was to check whether the cathode rays possessed electrical charge. Light was known to be uncharged. If the cathode rays carried charge, they were definitely matter and not light, and they were presumably being made to jump the gap by the simultaneous repulsion of the negative charge in the cathode and attraction of the positive charge in the anode. The rays would overshoot the anode because of their momentum. (Although electrically charged particles do not normally leap across a gap of vacuum, very large amounts of charge were being used, so the forces were unusually intense.)

#### Thomson's experiments

j / J.J. Thomson in the lab.

Physicist J.J. Thomson at Cambridge carried out a series of definitive experiments on cathode rays around the year 1897. By turning them slightly off course with electrical forces, k, he showed that they were indeed electrically charged, which was strong evidence that they were material. Not only that, but he proved that they had mass, and measured the ratio of their mass to their charge, $$m/q$$. Since their mass was not zero, he concluded that they were a form of matter, and presumably made up of a stream of microscopic, negatively charged particles. When Millikan published his results fourteen years later, it was reasonable to assume that the charge of one such particle equaled minus one fundamental charge, $$q=-e$$, and from the combination of Thomson's and Millikan's results one could therefore determine the mass of a single cathode ray particle.

k / Thomson's experiment proving cathode rays had electric charge (redrawn from his original paper). The cathode, C, and anode, A, are as in any cathode ray tube.
The rays pass through a slit in the anode, and a second slit, B, is interposed in order to make the beam thinner and eliminate rays that were not going straight. Charging plates D and E shows that cathode rays have charge: they are attracted toward the positive plate D and repelled by the negative plate E.

The basic technique for determining $$m/q$$ was simply to measure the angle through which the charged plates bent the beam. The electric force acting on a cathode ray particle while it was between the plates would be proportional to its charge,

$\begin{equation*} F_{elec} = \text{(known constant)} \cdot q . \end{equation*}$

Application of Newton's second law, $$a=F/m$$, would allow $$m/q$$ to be determined:

$\begin{equation*} \frac{m}{q} = \frac{\text{known constant}}{a} \end{equation*}$

There was just one catch. Thomson needed to know the cathode ray particles' velocity in order to figure out their acceleration. At that point, however, nobody had even an educated guess as to the speed of the cathode rays produced in a given vacuum tube. The beam appeared to leap across the vacuum tube practically instantaneously, so it was no simple matter of timing it with a stopwatch!

Thomson's clever solution was to observe the effect of both electric and magnetic forces on the beam. The magnetic force exerted by a particular magnet would depend on both the cathode ray's charge and its velocity:

$\begin{equation*} F_{mag} = \text{(known constant #2)} \cdot qv \end{equation*}$

Thomson played with the electric and magnetic forces until either one would produce an equal effect on the beam, allowing him to solve for the velocity,

$\begin{equation*} v = \frac{\text{(known constant)}}{\text{(known constant #2)}} . \end{equation*}$

Knowing the velocity (which was on the order of 10% of the speed of light for his setup), he was able to find the acceleration and thus the mass-to-charge ratio $$m/q$$.
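The whole chain of reasoning can be run through numerically. The field strengths, plate length, and deflection angle below are made-up illustrative values for a Thomson-style setup, not numbers from his paper:

```python
# Hypothetical values for a Thomson-style measurement (illustrative only):
E = 1.5e4           # electric field between the plates, V/m
B = 5.0e-4          # magnetic field, T
L = 0.05            # length of the deflecting plates, m
tan_theta = 0.1464  # observed deflection with the magnet switched off (assumed)

# Balancing the two forces, q*E = q*v*B, gives the beam speed:
v = E / B                              # about 3e7 m/s, roughly 10% of c

# With only the electric force acting, the transverse acceleration
# a = (q/m)*E lasts for a time L/v, so tan(theta) = (q/m)*E*L/v**2.
# Rearranging gives the mass-to-charge ratio:
m_over_q = E * L / (v**2 * tan_theta)  # kg/C

# Combining with Millikan's quantum of charge (published years later)
# then gives the mass of a single cathode ray particle:
e = 1.60e-19
m_electron = m_over_q * e              # roughly 9.1e-31 kg
```

Note how the charge $$q$$ cancels out of the velocity balance: the method works without knowing the charge of an individual particle, which is exactly why only the ratio $$m/q$$ could be measured.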
Thomson's techniques were relatively crude (or perhaps more charitably we could say that they stretched the state of the art of the time), so with various methods he came up with $$m/q$$ values that ranged over about a factor of two, even for cathode rays extracted from a cathode made of a single material. The best modern value is $$m/q=5.69\times10^{-12}$$ kg/C, which is consistent with the low end of Thomson's range.

#### The cathode ray as a subatomic particle: the electron

What was significant about Thomson's experiment was not the actual numerical value of $$m/q$$, however, so much as the fact that, combined with Millikan's value of the fundamental charge, it gave a mass for the cathode ray particles that was thousands of times smaller than the mass of even the lightest atoms. Even without Millikan's results, which were 14 years in the future, Thomson recognized that the cathode rays' $$m/q$$ was thousands of times smaller than the $$m/q$$ ratios that had been measured for electrically charged atoms in chemical solutions. He correctly interpreted this as evidence that the cathode rays were smaller building blocks --- he called them electrons --- out of which atoms themselves were formed. This was an extremely radical claim, coming at a time when atoms had not yet been proven to exist! Even those who used the word “atom” often considered them no more than mathematical abstractions, not literal objects. The idea of searching for structure inside of “unsplittable” atoms was seen by some as lunacy, but within ten years Thomson's ideas had been amply verified by many more detailed experiments.

##### Discussion Questions

◊ Thomson started to become convinced during his experiments that the “cathode rays” observed coming from the cathodes of vacuum tubes were building blocks of atoms --- what we now call electrons.
He then carried out observations with cathodes made of a variety of metals, and found that $$m/q$$ was roughly the same in every case, considering his limited accuracy. Given his suspicion, why did it make sense to try different metals? How would the consistent values of $$m/q$$ serve to test his hypothesis?

◊ My students have frequently asked whether the $$m/q$$ that Thomson measured was the value for a single electron, or for the whole beam. Can you answer this question?

◊ Thomson found that the $$m/q$$ of an electron was thousands of times smaller than that of charged atoms in chemical solutions. Would this imply that the electrons had more charge? Less mass? Would there be no way to tell? Explain. Remember that Millikan's results were still many years in the future, so $$q$$ was unknown.

◊ Can you guess any practical reason why Thomson couldn't just let one electron fly across the gap before disconnecting the battery and turning off the beam, and then measure the amount of charge deposited on the anode, thus allowing him to measure the charge of a single electron directly?

◊ Why is it not possible to determine $$m$$ and $$q$$ themselves, rather than just their ratio, by observing electrons' motion in electric and magnetic fields?

## 8.1.6 The raisin cookie model of the atom

Based on his experiments, Thomson proposed a picture of the atom which became known as the raisin cookie model. In the neutral atom, l, there are four electrons with a total charge of $$-4e$$, sitting in a sphere (the “cookie”) with a charge of $$+4e$$ spread throughout it. It was known that chemical reactions could not change one element into another, so in Thomson's scenario, each element's cookie sphere had a permanently fixed radius, mass, and positive charge, different from those of other elements. The electrons, however, were not a permanent feature of the atom, and could be tacked on or pulled out to make charged ions.
Although we now know, for instance, that a neutral atom with four electrons is the element beryllium, scientists at the time did not know how many electrons the various neutral atoms possessed.

l / The raisin cookie model of the atom with four units of charge, which we now know to be beryllium.

This model is clearly different from the one you've learned in grade school or through popular culture, where the positive charge is concentrated in a tiny nucleus at the atom's center. An equally important change in ideas about the atom has been the realization that atoms and their constituent subatomic particles behave entirely differently from objects on the human scale. For instance, we'll see later that an electron can be in more than one place at one time. The raisin cookie model was part of a long tradition of attempts to make mechanical models of phenomena, and Thomson and his contemporaries never questioned the appropriateness of building a mental model of an atom as a machine with little parts inside. Today, mechanical models of atoms are still used (for instance the tinker-toy-style molecular modeling kits like the ones used by Watson and Crick to figure out the double helix structure of DNA), but scientists realize that the physical objects are only aids to help our brains' symbolic and visual processes think about atoms.

Although there was no clear-cut experimental evidence for many of the details of the raisin cookie model, physicists went ahead and started working out its implications. For instance, suppose you had a four-electron atom. All four electrons would be repelling each other, but they would also all be attracted toward the center of the “cookie” sphere. The result should be some kind of stable, symmetric arrangement in which all the forces canceled out.
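A toy version of this force balance can be worked out numerically. The sketch below is my own illustration (not a calculation from the text): it uses two electrons instead of four, in units where the Coulomb constant, the electron charge, and the cookie's radius are all 1, and uses the fact that inside a uniformly charged sphere the attraction toward the center grows linearly with distance:

```python
# Two electrons sit symmetrically at distance r from the center of a
# uniformly charged "cookie" of total charge +2e and radius R = 1.
def net_inward(r):
    # Attraction toward the center (linear in r for a uniform sphere: 2*r)
    # minus the repulsion from the other electron at separation 2r,
    # which is 1/(2r)**2, with the common factor k*e**2 divided out.
    return 2.0 * r - 1.0 / (4.0 * r**2)

# Find the equilibrium radius by bisection: net_inward is negative
# (electron pushed outward) near the center and positive near the edge.
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if net_inward(mid) < 0.0:
        lo = mid
    else:
        hi = mid
equilibrium_r = 0.5 * (lo + hi)
```

The bisection converges to $$r = 0.5$$: in this two-electron toy model the electrons settle at exactly half the cookie's radius, illustrating how a stable, symmetric arrangement emerges from the competing forces.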
People sufficiently clever with math soon showed that the electrons in a four-electron atom should settle down at the vertices of a pyramid with one less side than the Egyptian kind, i.e., a regular tetrahedron. This deduction turns out to be wrong because it was based on incorrect features of the model, but the model also had many successes, a few of which we will now discuss.

Example 3: Flow of electrical charge in wires

One of my former students was the son of an electrician, and had become an electrician himself. He related to me how his father had refused all his life to believe that electrons really flowed through wires. If they had, he reasoned, the metal would have gradually become more and more damaged, eventually crumbling to dust. His opinion is not at all unreasonable based on the fact that electrons are material particles, and that matter cannot normally pass through matter without making a hole through it. Nineteenth-century physicists would have shared his objection to a charged-particle model of the flow of electrical charge. In the raisin-cookie model, however, the electrons are very low in mass, and therefore presumably very small in size as well. It is not surprising that they can slip between the atoms without damaging them.

Example 4: Flow of electrical charge across cell membranes

Your nervous system is based on signals carried by charge moving from nerve cell to nerve cell. Your body is essentially all liquid, and atoms in a liquid are mobile. This means that, unlike the case of charge flowing in a solid wire, entire charged atoms can flow in your nervous system.

Example 5: Emission of electrons in a cathode ray tube

Why do electrons detach themselves from the cathode of a vacuum tube?
Certainly they are encouraged to do so by the repulsion of the negative charge placed on the cathode and the attraction from the net positive charge of the anode, but these are not strong enough to rip electrons out of atoms by main force --- if they were, then the entire apparatus would have been instantly vaporized as every atom was simultaneously ripped apart!

The raisin cookie model leads to a simple explanation. We know that heat is the energy of random motion of atoms. The atoms in any object are therefore violently jostling each other all the time, and a few of these collisions are violent enough to knock electrons out of atoms. If this occurs near the surface of a solid object, the electron may come loose. Ordinarily, however, this loss of electrons is a self-limiting process; the loss of electrons leaves the object with a net positive charge, which attracts the lost sheep home to the fold. (For objects immersed in air rather than vacuum, there will also be a balanced exchange of electrons between the air and the object.)

This interpretation explains the warm and friendly yellow glow of the vacuum tubes in an antique radio. To encourage the emission of electrons from the vacuum tubes' cathodes, the cathodes are intentionally warmed up with little heater coils.

##### Discussion Questions

◊ Today many people would define an ion as an atom (or molecule) with missing electrons or extra electrons added on. How would people have defined the word “ion” before the discovery of the electron?

◊ Since electrically neutral atoms were known to exist, there had to be positively charged subatomic stuff to cancel out the negatively charged electrons in an atom. Based on the state of knowledge immediately after the Millikan and Thomson experiments, was it possible that the positively charged stuff had an unquantized amount of charge? Could it be quantized in units of +e? In units of +2e? In units of +5/7e?

### Contributor

Benjamin Crowell (Fullerton College).
Conceptual Physics is copyrighted with a CC-BY-SA license.
https://pdglive.lbl.gov/DataBlock.action?node=B151W&home=sumtabB
# ${{\boldsymbol \Lambda}_{{c}}{(2880)}^{+}}$ WIDTH INSPIRE search VALUE (MeV) CL% EVTS DOCUMENT ID TECN  COMMENT $\bf{ 5.6 {}^{+0.8}_{-0.6}}$ OUR AVERAGE $5.43$ ${}^{+0.77}_{-0.71}$ ${}^{+0.81}_{-0.29}$ 1 2017 S LHCB in ${{\mathit \Lambda}_{{b}}^{0}}$ $\rightarrow$ ${{\mathit D}^{0}}{{\mathit p}}{{\mathit \pi}^{-}}$ $5.8$ $\pm1.5$ $\pm1.1$ 2.8k 2007 BABR in ${{\mathit p}}{{\mathit D}^{0}}$ $5.8$ $\pm0.7$ $\pm1.1$ 690 2007 BELL in ${{\mathit \Sigma}_{{c}}{(2455)}^{0}}{}^{,++}{{\mathit \pi}^{\pm}}$ • • • We do not use the following data for averages, fits, limits, etc. • • • $\text{<8}$ 90 2001 CLEO in ${{\mathit \Lambda}_{{c}}^{+}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ 1  AAIJ 2017S reports $5.43$ ${}^{+0.77}_{-0.71}$ $\pm0.29$ ${}^{+0.75}_{-0.00}$ MeV value where the third uncertainty comes from modeling the resonant shape of the ${{\mathit \Lambda}_{{c}}{(2880)}^{+}}$ and the background (non-resonant) amplitudes. We have combined in quadrature the systematic uncertainties. References: AAIJ 2017S JHEP 1705 030 Study of the ${{\mathit D}^{0}}{{\mathit p}}$ Amplitude in ${{\mathit \Lambda}_{{b}}^{0}}$ $\rightarrow$ ${{\mathit D}^{0}}{{\mathit p}}{{\mathit \pi}^{-}}$ Decays AUBERT 2007 PRL 98 012001 Observation of a Charmed Baryon Decaying to ${{\mathit D}^{0}}{{\mathit p}}$ at a Mass Near 2.94 GeV/$\mathit c{}^{2}$ MIZUK 2007 PRL 98 262001 Experimental Constraints on the Spin and Parity of the ${{\mathit \Lambda}_{{c}}{(2880)}^{+}}$ ARTUSO 2001 PRL 86 4479 Observation of New States Decaying into ${{\mathit \Lambda}_{{c}}^{+}}{{\mathit \pi}^{-}}{{\mathit \pi}^{+}}$
https://large-numbers.fandom.com/wiki/Nonagonal_number
A nonagonal number is a figurate number that extends the concept of triangular and square numbers to the nonagon (a nine-sided polygon). However, unlike the triangular and square numbers, the patterns involved in the construction of nonagonal numbers are not rotationally symmetrical. Specifically, the nth nonagonal number counts the number of dots in a pattern of n nested nonagons, all sharing a common corner, where the ith nonagon in the pattern has sides made of i dots spaced one unit apart from each other. The nonagonal number for n is given by the formula:

$\frac{n(7n - 5)}{2}.$

The first few nonagonal numbers are:

1, 9, 24, 46, 75, 111, 154, 204, 261, 325, 396, 474, 559, 651, 750, 856, 969, 1089, 1216, 1350, 1491, 1639, 1794, 1956, 2125, 2301, 2484, 2674, 2871, 3075, 3286, 3504, 3729, 3961, 4200, 4446, 4699, 4959, 5226, 5500, 5781, 6069, 6364, 6666, 6975, 7291, 7614, 7944, 8281, 8625, 8976, 9334, 9699...

The parity of nonagonal numbers follows the pattern odd-odd-even-even. Letting $N(n)$ give the nth nonagonal number and $T(n)$ the nth triangular number, ${7N(n) + 3 = T(7n - 3)}.$

## Test for nonagonal numbers

$\mathsf{Let}~x = \frac{\sqrt{56n+25}+5}{14}.$ If $x$ is an integer, then $n$ is the $x$-th nonagonal number. If $x$ is not an integer, then $n$ is not nonagonal.
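The formula and the nonagonality test above can be sketched in a few lines of Python (an illustrative sketch; the function names are our own):

```python
import math

def nonagonal(n: int) -> int:
    """Return the n-th nonagonal number, n(7n - 5)/2."""
    return n * (7 * n - 5) // 2  # n(7n - 5) is always even, so // is exact

def nonagonal_index(m: int):
    """Apply the test x = (sqrt(56m + 25) + 5)/14: return the index x if m
    is nonagonal, otherwise None."""
    s = 56 * m + 25
    root = math.isqrt(s)
    if root * root != s:           # 56m + 25 must be a perfect square
        return None
    x, rem = divmod(root + 5, 14)  # x must come out as an integer
    return x if rem == 0 else None
```

For example, `nonagonal(4)` gives 46 and `nonagonal_index(46)` gives 4, while `nonagonal_index(47)` gives `None`.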
http://www.itl.nist.gov/div898/possolo/TutorialWEBServer/TutorialMetrologists2011Nov09.xht
## Tutorial for Metrologists on the probabilistic and statistical apparatus underlying the GUM and related documents

November 9, 2011

### 1 Preamble

Familiarity with the basic concepts and techniques from probability theory and mathematical statistics can best be gained by studying suitable textbooks and exercising those concepts and techniques by solving instructive problems. The books by DeGroot and Schervish [2011], Hoel et al. [1971a], Hoel et al. [1971b], Feller [1968], Lindley [1965a], and Lindley [1965b] are highly recommended for this purpose and appropriate for readers who will have studied mathematical calculus in university courses for science and engineering majors. This document aims to provide an overview of some of these concepts and techniques that have proven useful in applications to the characterization, propagation, and interpretation of measurement uncertainty as described, for example, by Morgan and Henrion [1992] and Taylor and Kuyatt [1994], and in guidance documents produced by international organizations, including the Guide to the expression of uncertainty in measurement (GUM) [Joint Committee for Guides in Metrology 2008a] and its supplements [Joint Committee for Guides in Metrology 2008b]. However, the examples do not all necessarily show a direct connection to measurement science. Our basic premises are that probability is best suited to express uncertainty quantitatively, and that Bayesian statistical methods afford the best means to exploit information about quantities of interest that originates from multiple sources, including empirical data gathered for the purpose, and preexisting expert knowledge.
Although there is nothing particularly controversial about the calculus of probability or about the mathematical methods of statistics, both the meaning of probability and the interpretation of the products of statistical inference continue to be subjects of debate. This debate is meta-probabilistic and meta-statistical, in the same sense as metaphysics employs methods different from the methods of physics to study the world. In fact, the debate is liveliest and most scholarly among professional philosophers [Fitelson2007]. However, probabilists and statisticians often participate in it when they take off their professional hats and become philosophers [Neyman1977], as any inquisitive person is wont to do, at one time or another. For this reason, we begin with an overview of some of the meanings that have been assigned to probability (§2) before turning to the calculus of probability (§3). In applications, the devices of this calculus are typically brought into play when considering random variables and probability distributions (§4), in particular to characterize the probability distribution of functions of random variables (§5). Statistical inference (§6) uses all of these devices to produce probabilistic statements about unknown quantity values. ### 2 Probability #### 2.1 Meaning In Truth and Probability [Ramsey1926, 1931], Frank Ramsey takes the view that probability is “a branch of logic, the logic of partial belief and inconclusive argument”. In this vein, and more generally, probabilities serve to quantify uncertainty. For example, when one states that, with $99\phantom{\rule{0.3em}{0ex}}%$ confidence, the distance between two geodesic marks is within 0.07 m of 936.84 m, one believes that the actual distance most likely lies between 936.77 m and 936.91 m, but still entertains the possibility that it may lie elsewhere. 
Similarly, a weather service announcement of $20\phantom{\rule{0.3em}{0ex}}%$ chance of rain tomorrow for a particular region summarizes an assessment of uncertainty about what will come to pass. Although relevant to the interpretation of measurement uncertainty, and generally to all applications of probability and statistics, the meaning of probability really is a philosophical issue [Gillies2000Hájek2007Mellor2005]. And while there is much disagreement about what probabilities mean, and how they are created to begin with (interpretation and elicitation of probability), there also is essentially universal agreement about how numerical assessments of probability should be manipulated and combined (calculus of probability). #### 2.2 Chance and Propensity Chances arise in connection with games of chance, and with phenomena that conceivably can recur under essentially the same circumstances. Thus one speaks of the chances of a pair of Kings in a poker hand, or of the chances that the nucleus of an atom of a particular uranium isotope will emit an alpha particle within a given time interval, or of the chances that a person born in France will have blood of type AB. Chances seem to be intrinsic properties of objects or processes in specific environments, maybe propensities for something to happen: their most renowned theorists have been Hans Reichenbach [Reichenbach1949], Richard von Mises [von Mises1981], and Karl Popper [Popper1959]. #### 2.3 Credence and Belief Credences measure subjective beliefs. They are best illustrated in relation with betting on the outcomes of events one is uncertain about. For example, in this most memorable of bets offered when the thoroughbred Eclipse was about to run against Gower, Tryal, Plume, and Chance in the second heat of the races on May 3rd, 1769, at Epsom Downs: “Eclipse first, the rest nowhere”, with odds of 6-to-4 [Clee2007]. 
The strength or degree of these beliefs can be assessed numerically by techniques that include the observation of betting behavior (actual or virtual), and this assessment can be gauged, and improved, by application of scoring rules [Lindley1985]. Dennis Lindley [Lindley1985] suggests that degrees of belief can be measured by comparison with a standard, similarly to how length or mass are measured. In general, subjective probabilities can be revealed by judicious application of elicitation methods [Garthwaite et al.2005]. Consider an urn that contains 100 balls that are identical but for their colors: $\beta$ are black and $100-\beta$ are white. The urn’s contents are thoroughly mixed, and the standard is the probability of the event $\mathit{B}$ of drawing a black ball. Now, given an event $\mathit{E}$, for example that, somewhere in London, it will rain tomorrow, whose probability he wishes to gauge, Peter will select a value for $\beta$ such that he regards gambling on $\mathit{B}$ as equivalent to gambling on $\mathit{E}$ (for the same prize): in these circumstances, $\beta ∕100$ is Peter’s credence on $\mathit{E}$. The beliefs that credences measure are subjective and personal, hence the probabilities that gauge them purport to a relationship between a particular knowing subject and the object of this subject’s interest. These beliefs certainly are informed by such knowledge as one may have about a situation, but they also are tempered by one’s preferences or tastes, and do not require that a link be drawn explicitly between that knowledge or sentiment and the corresponding bet. Bruno de Finetti [de Finetti19371990], Jimmie Savage [Savage1972], and Dennis Lindley [Lindley2006] have been leading developers of the subjective, personalistic viewpoint. 
#### 2.4 Epistemic Probability Logical (or epistemic, that is, involving or relating to knowledge) probabilities measure the degree to which the truth of a proposition justifies, warrants, or rationally supports the truth of another [Carnap1962]. For example, when a medical doctor concludes that a positive result in a tuberculin sensitivity test indicates tuberculosis with $62.5\phantom{\rule{0.3em}{0ex}}%$ probability (§3.6), or when measurements made during a total eclipse of the sun overwhelmingly favor Einstein’s theory of gravitation over Newton’s [Dyson et al.1920]. The fact that scientists or judges may not necessarily or explicitly use probabilities to convey their confidence in theories or in arguments [Glymour1980] does not reduce the value that probabilities have in models for the rational process of learning from experience, either for human subjects or for reasoning machines that are programmed to make decisions in situations of uncertainty. In this fashion, probability is an extension of deductive logic, and measures degree of confirmation: it does this objectively because it does not involve subjective personal opinion, hence is as incontrovertible as deduction by any of the forms of classical logic. The difficulty lies in specifying a starting point, a state of a priori ignorance that is similarly objective and hence universally acceptable. Harold Jeffreys [Jeffreys1961] provided maybe the first modern, thorough account of how this may be done. He argued, and illustrated in many substantive examples, that it is fit to address the widest range of scientific problems where one wishes to exploit the information in observational data. 
The interpretation of probability as an extension of logic makes it particularly well-suited to applications in measurement science, where it is desirable to be able to treat different uncertainty components, which may have been evaluated using different methods, simultaneously, using a uniform vocabulary, and a single set of technical tools. This concept underlies the treatment of measurement uncertainty in the GUM and in its supplements. Richard Cox [Cox19461961] and Edwin Jaynes [Jaynes19582003] have articulated cogent arguments in support of this view, and José Bernardo [Bernardo1979] and James Berger [Berger2006] have greatly expanded it. #### 2.5 Difficulties Even in situations where, on first inspection, chances seem applicable, closer inspection reveals that something else really is needed. There may be no obvious reason to doubt that the chance is ½ that a coin tossed to assign sides before a football game will land Heads up. However, if the coin instead is spun on its edge on a table, that chance will be closer to either $1∕3$ or $2∕3$ than to ½ [Diaconis and Ylvisaker1985]. And when it is the magician Persi Warren [DeGroot1986] who tosses the coin, then all bets are off because he can manage to toss it so that it always lands Heads up on his hand after he flips it in the air: while the unwary may be willing to bet at even odds on the outcome, for Persi the probability is 1. And there are situations that naturally lend themselves to, or that even seem to require, multiple interpretations. Take the 20 % chance of rain: does this mean that, of all days when the weather conditions have been similar to today’s in the region the forecast purports to, it has rained some time during the following day with historical frequency of 20 %? Or is this the result of a probabilistic forecast that is to be interpreted epistemically? Maybe it means something else altogether, like: of all things that will fall from the sky tomorrow, 1 in 5 will be a raindrop. 
#### 2.6 Role of Context The use of probability as an extension of logic ensures that different people who have the same information (empirical or other) about a measurand should produce the same inferences about it. Example §2.7 illustrates the fact that contextual information relating to a proposition, situation, or event, will influence probabilistic assessments to the extent that different people with different information may, while all acting rationally, produce different uncertainty assessments, and hence different probabilities, for the same proposition, situation, or event. When probabilities express subjective beliefs, or when they express states of incomplete or imperfect knowledge, different people typically will assign different probabilities to the same statements or events. If they have to reach a consensus on a course of action that is informed by their plural, varied assessments, then they have to engage in a harmonization exercise that preserves the internal coherence of their individual positions. Both statisticians [Stone1961Morris1977Lindley1983Clemen and Winkler1999] and philosophers [Bovens and Rabinowicz2006Hartmann and Sprenger2011] have addressed this topic. #### 2.7 example: Prospecting James and Claire, who both make investments in mining prospects, have been told that samples from a region surveyed recently have mass fractions of titanium averaging 3 g kg-1, give or take 1 g kg-1 (where “give or take” means that the true mass fraction of titanium in the region sampled is between 2 g kg-1 and 4 g kg-1 with 95 % probability). James, however, has also been told that the samples are of a sandstone with grains of ilmenite. On this basis, James may assign a much higher probability than Claire to the proposition that asserts that the region sampled includes an economically viable deposit of titanium ore. 
### 3 Probability Calculus

#### 3.1 Axioms

Once numeric probabilities are in hand, irrespective of how they may be interpreted, the same set of rules, or axioms, is used to combine them. We formulate these axioms in the context where probability is regarded as measuring degree of (rational) belief in the truth of propositions, given a particular body of knowledge and universe of discourse, $H$, that makes all the participating elements meaningful. Let $A$ and $B$ denote propositions whose probabilities $Pr(A|H)$ and $Pr(B|H)$ express degrees of belief about their truth given (or, conditionally upon) the context defined by $H$. The notation $Pr(B|A,H)$ denotes the conditional probability of $B$, assuming that $A$ is true and given the context defined by $H$. Note that $Pr(B|A,H)$ is not necessarily 0 when $Pr(A|H)=0$: for example, the probability is 0 that a point chosen uniformly at random over the surface of the earth will be on the equator; yet the probability is ½ that, conditionally on its being on the equator, its longitude is between 0° and 180° West of the prime meridian at Greenwich, UK. The axioms for the calculus of probability are these:

Convexity: $Pr(A|H)$ is a number between 0 and 1, and it is 1 if and only if $H$ logically implies $A$;

Addition: $Pr(A \text{ or } B|H) = Pr(A|H) + Pr(B|H) - Pr(A \text{ and } B|H)$, where the expression "$A$ or $B$" is true if and only if $A$, $B$, or both are true;

Multiplication: $Pr(A \text{ and } B|H) = Pr(B|A,H)\,Pr(A|H)$.

Since the roles of $A$ and $B$ are interchangeable, the multiplication axiom obviously can also be written as $Pr(A \text{ and } B|H) = Pr(A|B,H)\,Pr(B|H)$.
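Because the axioms are simply arithmetic constraints, they can be verified directly on a finite universe of discourse. The sketch below (an example of our own, not from the text) uses two fair dice and exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Universe of discourse H: two fair dice; all 36 outcomes equally likely.
outcomes = list(product(range(1, 7), repeat=2))

def pr(event):
    """Probability, given H, of an event expressed as a predicate on outcomes."""
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

A = lambda w: w[0] == 6           # the first die shows 6
B = lambda w: w[0] + w[1] >= 10   # the total is at least 10

# Convexity: Pr(A|H) lies between 0 and 1.
assert 0 <= pr(A) <= 1
# Addition: Pr(A or B|H) = Pr(A|H) + Pr(B|H) - Pr(A and B|H).
assert pr(lambda w: A(w) or B(w)) == pr(A) + pr(B) - pr(lambda w: A(w) and B(w))
# Multiplication: Pr(A and B|H) = Pr(B|A,H) Pr(A|H).
pr_B_given_A = pr(lambda w: A(w) and B(w)) / pr(A)
assert pr(lambda w: A(w) and B(w)) == pr_B_given_A * pr(A)
```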
Most accounts of mathematical probability theory use an additional rule (countable additivity) that ensures that the probability that at least one proposition is true, among countably infinitely many mutually exclusive propositions, equals the sum of their individual probabilities [Casella and Berger 2002, Definition 1.2.4]. ("Countably infinitely many" means "as many as there are integer numbers".) When the context that $H$ defines is obvious, often one suppresses explicit reference to it, as in this derivation: if $\tilde{A}$ denotes the negation of $A$, then Convexity and the Addition Rule imply that $1 = Pr(A) + Pr(\tilde{A})$ because one but not both of $A$ and $\tilde{A}$ must be true.

#### 3.2 Independence

The concept of independence pervades much of probability theory. Two propositions $A$ and $B$ are independent if the probability that both are true equals the product of their individual probabilities of being true. If $A$ asserts that there is a Queen in Alexandra's poker hand, and $B$ asserts that Beatrice's comprises red cards only, both hands having been dealt from the same deck, then $A$ and $B$ are independent. Intuitively, if knowledge of the truth of one proposition influences the assessment of probability of another, then they are dependent: in particular, two mutually exclusive propositions are dependent. If $Pr(A \text{ and } B|H) = Pr(A|H)\,Pr(B|H)$, then $A$ and $B$ are independent given $H$.

#### 3.3 Extending the Conversation

When considering the probability of a proposition, it often proves advantageous to consider the truth or falsity of another one, somehow related to the first [Lindley 2006, §5.6].
To assess the probability $Pr(+)$ of a positive tuberculin skin test (§3.6), it is convenient to consider how the test performs separately in persons infected or not infected with Mycobacterium tuberculosis: if $I$ denotes infection, then $Pr(+) = Pr(+|I)\,Pr(I) + Pr(+|\tilde{I})\,Pr(\tilde{I})$, where $Pr(+|\tilde{I})$ is the probability of a false positive, and $1 - Pr(+|I)$ is the probability of a false negative, both more accessible than $Pr(+)$.

#### 3.4 Coherence

If an ideal reasoning agent (human or machine) assigns probabilities to events or to the truth of propositions according to the foregoing axioms, then this agent's beliefs are said to be coherent. In these circumstances, if probabilities are used to inform bets concerning the truth of propositions in the universe of discourse where these probabilities are meaningful, then it is impossible (for a "bookie") to devise a collection of bets that bring an assured loss to this agent (a so-called "Dutch Book"). Now, suppose that, having ascertained the truth of a proposition $A$, one produces $Pr(C|A)$ as assessment of $C$'s truth on the evidence provided by $A$. Next, one determines that $B$, too, is true and revises this last assessment of $C$'s truth to become $Pr(C|A \text{ and } B)$. The process whereby probabilities are updated is coherently extensible if the resulting assessment is the same irrespective of whether the evidence provided by $A$ and $B$ is brought to bear either sequentially, as just considered, or simultaneously.
The incorporation of information from multiple sources, and the corresponding propagation of uncertainty, that is carried out by application of Bayes' formula, which is described next and illustrated in examples §3.6 and §6.7, is coherently extensible.

#### 3.5 Bayes's Formula

If exactly one (that is, one and one only) among propositions $A_1, \dots, A_n$ can be true, and $B$ is another proposition with positive probability, then

$$Pr(A_j|B) = \frac{Pr(B|A_j)\,Pr(A_j)}{Pr(B|A_1)\,Pr(A_1) + \dots + Pr(B|A_n)\,Pr(A_n)}. \tag{1}$$

This follows from the axioms above because $Pr(A_j|B) = Pr(A_j \text{ and } B)/Pr(B)$ (Multiplication axiom), whose numerator equals $Pr(B|A_j)\,Pr(A_j)$ (Multiplication axiom), and whose denominator equals $Pr(B|A_1)\,Pr(A_1) + \dots + Pr(B|A_n)\,Pr(A_n)$ ("extending the conversation", as in §3.3).

#### 3.6 example: Tuberculin Test

Richard has been advised that his tuberculin skin test has returned a positive result. The tuberculin skin test has a reported false-negative rate of 25 % during the initial evaluation of persons with active tuberculosis [American Thoracic Society 1999, Holden et al. 1971]: this means that the probability is 0.25 that the test will yield a negative ($-$) response when administered to an infected person ($I$), $Pr(-|I) = 0.25$. Therefore, the probability is only 0.75 that infection will yield a positive test result. In populations where cross-reactivity with other mycobacteria is common, the test's false-positive rate is 5 %: that is, the conditional probability of a positive result ($+$) for a person that is not infected ($\tilde{I}$) is $Pr(+|\tilde{I}) = 0.05$. Richard happens to live in an area where tuberculosis has a prevalence of 10 %.
Given the positive result of the test he underwent, the probability that he is infected is

$$Pr(I|+) = \frac{Pr(+|I)\,Pr(I)}{Pr(+|I)\,Pr(I) + Pr(+|\tilde{I})\,Pr(\tilde{I})} = \frac{0.75 \times 0.10}{0.75 \times 0.10 + 0.05 \times 0.90} = 0.625.$$

Common sense suggests that the diagnostic value of the test should depend on its false-negative and false-positive rates, as well as on the prevalence of the disease: Bayes' formula states exactly how these ingredients should be combined to produce $Pr(I|+)$, which expresses that diagnostic value quantitatively. Richard has the tuberculin skin test repeated, and this second test also turns out positive. To incorporate this additional piece of evidence into the probability that Richard is infected, first we summarize the state of knowledge (about whether he is infected) determined by the result from the first test. This is done by defining $Q(I) = Pr(I|+) = 0.625$ and $Q(\tilde{I}) = 1 - Q(I) = 0.375$, and using them in the role that the overall probability of infection (10 %) or non-infection (90 %) played prior to Richard's first test, when all one knew about his condition was that he was a member of a population where the prevalence of tuberculosis was 10 %.
Again applying Bayes' theorem, and assuming that the two tests are independent, the revised probability that Richard is infected after two positive tests is

$$Q(I|+) = \frac{Pr(+|I)\,Q(I)}{Pr(+|I)\,Q(I) + Pr(+|\tilde{I})\,Q(\tilde{I})} = \frac{0.75 \times 0.625}{0.75 \times 0.625 + 0.05 \times 0.375} = 0.962.$$

If, instead, one had been initially told that Richard had had two independent, positive tuberculin skin tests, then the calculation would have been:

$$Pr(I|{+}{+}) = \frac{Pr({+}{+}|I)\,Pr(I)}{Pr({+}{+}|I)\,Pr(I) + Pr({+}{+}|\tilde{I})\,Pr(\tilde{I})} = \frac{0.75^2 \times 0.10}{0.75^2 \times 0.10 + 0.05^2 \times 0.90} = 0.962.$$

This example illustrates the fact that Bayes' theorem produces the same probability irrespective of whether the information is incorporated sequentially, or all at once.

#### 3.7 Growth of Knowledge

The example in §3.6 illustrated how the probability of Richard being infected increased (relative to the overall probability of infection in the town where he lives) as a first, and then a second tuberculin test turned out positive. However, even if he is infected, by chance alone a test may turn out negative.
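The sequential and batch updates in the tuberculin example are easy to reproduce numerically; the following sketch (our own illustration of Bayes' formula, using the numbers from the text) confirms that they agree:

```python
def bayes_update(prior, p_pos_infected, p_pos_clear):
    """Posterior probability of infection after a positive test (Bayes' formula)."""
    numerator = p_pos_infected * prior
    return numerator / (numerator + p_pos_clear * (1 - prior))

SENS = 0.75  # Pr(+|I) = 1 - false-negative rate
FPR = 0.05   # Pr(+|not I), the false-positive rate
PREV = 0.10  # prevalence of tuberculosis in Richard's area

one_test = bayes_update(PREV, SENS, FPR)        # 0.625 after one positive test
sequential = bayes_update(one_test, SENS, FPR)  # update again on the second test
batch = bayes_update(PREV, SENS**2, FPR**2)     # both positives at once
# sequential and batch agree: both equal 0.9615..., i.e. 0.962 after rounding
```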
In a sequence of tests, therefore, the probability of his being infected may oscillate, increasing when a test turns out positive, decreasing when some subsequent test turns out negative. Therefore, the question naturally arises of whether a person employing the Bayesian method of exploiting information, and incorporating it into the current state of knowledge, ever will, in situations of uncertainty, arrive at conclusions with overwhelming confidence. Jimmie Savage proved rigorously that the answer is "yes" with great generality: "with observation of an abundance of relevant data, the person is almost certain to become highly convinced of the truth, and […] he himself knows this to be the case" [Savage 1972, §3.6]. The restriction to "relevant data" is critical: in relation with the tuberculin test, if it happened that $Pr(+|I) = Pr(+|\tilde{I})$, then the test would have no discriminatory power, and in fact would be irrelevant to learning about disease status.

### 4 Random Variables and Probability Distributions

#### 4.1 Random Variables

The notion of random variable originates in games of chance, like roulette, whose outcomes are unpredictable. Its rigorous mathematical definition (measurable function from one probability space into another) is unlikely to be of great interest to the metrologist. Instead, one may like to keep in mind its heuristic meaning: the value of a quantity that has a probability distribution as an attribute whose role is to describe the uncertainty associated with that value.

#### 4.2 example: Roulette

In the version of roulette played in Monte Carlo, the possible outcomes are numbers in the set $\mathcal{X} = \{0, 1, \dots, 36\}$ (usually one disregards other possible, but "uninteresting" outcomes, including those where the ball exits the wheel and lands elsewhere, or where it lands inside the wheel but in none of its numbered pockets).
Once those 37 numbers are deemed to be equally likely, one can speak of a random variable that is equal to $0$ with probability $1/37$, or that is odd with probability $18/37$. (Note that these statements are meaningful irrespective of whether the event in question will happen in the future, or has happened already, provided one does not know its actual outcome yet.)

#### 4.3 example: Light Bulb

The GE A19 Party Light 25 W incandescent light bulb has expected lifetime 2000 h: this is usually taken to mean that, if a brand new bulb is turned on and left on supplied with constant 120 V electrical current until it burns out, its actual lifetime may be described as a realized value (realization, or outcome) of a random variable with an exponential probability distribution (§4.10) whose expected value is 2000 h (this expected value is denoted $\eta$ in §4.10, and in general it needs to be estimated from experimental data). The concept of random variable applies just as well to domains of discourse unrelated to games of chance, hence can be used to suggest uncertainty about the value of a quantity, irrespective of the source of this uncertainty, including situations where there is nothing "random" (in the sense of "chancy") in play.

#### 4.4 Notational Convention

For the most part, upper case letters (Roman or Greek) denote generic quantity values modeled as random variables, and their lowercase counterparts denote particular values. Upper case letters like $X$ or $X_1, X_2, \dots$, and $Y$, denote generic random variables, without implying that any of the former necessarily play the role of input quantity values (as defined in the international vocabulary of metrology (VIM) [Joint Committee for Guides in Metrology 2008c], VIM 2.50), or that the latter necessarily plays the role of output quantity values (VIM 2.51) in a measurement model (VIM 2.48).
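Both random variables described above, the Monte Carlo roulette outcome and the exponential light-bulb lifetime, are easy to simulate; a sketch (illustrative only, with an arbitrary fixed seed):

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

# Roulette: the 37 outcomes 0..36 deemed equally likely; Pr(odd) = 18/37.
spins = [rng.randrange(37) for _ in range(100_000)]
frac_odd = sum(1 for s in spins if s % 2 == 1) / len(spins)

# Light bulb: exponential lifetime with expected value 2000 h.
lifetimes = [rng.expovariate(1 / 2000) for _ in range(100_000)]
mean_lifetime = sum(lifetimes) / len(lifetimes)

print(f"fraction odd ~ {frac_odd:.3f} (theory {18/37:.3f})")
print(f"mean lifetime ~ {mean_lifetime:.0f} h (theory 2000 h)")
```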
The probabilities most commonly encountered in metrological practice concern sets of numbers that a quantity value may take: in this case, if $\mathit{X}$ denotes a random variable whose values belong to a set $\mathcal{𝒳}$, and $\mathit{A}$ is a subset of $\mathcal{𝒳}$, then $Pr\left(\mathit{X}\in \mathit{A}\right)$ denotes the probability that $\mathit{X}$’s value lies in $\mathit{A}$. For example, if $\mathit{X}$ represents the length (expressed in meter, say) of a gauge block, then $\mathcal{𝒳}$ would be the set of all possible values of length, and $\mathit{A}$ could be the subset of such values between 0.0423 m and 0.0427 m, say. #### 4.5 Probability Distributions Given a random variable $\mathit{X}$ one can then define a function ${\mathit{P}}_{\mathit{X}}$ such that ${\mathit{P}}_{\mathit{X}}\left(\mathit{A}\right)=Pr\left(\mathit{X}\in \mathit{A}\right)$ for all $\mathit{A}\subset \mathcal{𝒳}$ to which a probability can be assigned. This ${\mathit{P}}_{\mathit{X}}$ is called $\mathit{X}$’s probability distribution. If $\mathcal{𝒳}$ is countable (that is, either finite or infinite but with as many elements as there are positive integers), then one says that $\mathit{X}$ has a discrete distribution, which is fully specified by the probability it assigns to each value in $\mathcal{𝒳}$. For example, the outcome of a roulette wheel is a random variable whose probability distribution is discrete. If $\mathcal{𝒳}$ is uncountable (that is, it has as many elements as there are real numbers) and $Pr\left(\mathit{X}=\mathit{x}\right)=0$ for all $\mathit{x}\in \mathcal{𝒳}$, then one says that $\mathit{X}$ has a continuous distribution. For example, the lifetime of an incandescent light bulb that does light up and then is constantly left on until it burns out is a random variable with a continuous distribution. 
A distribution may be neither discrete nor continuous, but of a mixed type instead: for example, when a random variable is equal to 0 with probability $ϵ>0$, and has an exponential distribution (see §4.10) with probability $1-ϵ$. Since a brand new light bulb has a positive probability of burning out the instant it is turned on, its lifetime may more realistically be modeled as a random variable that has an “atom” of probability at 0, and is exponential with the complementary probability. #### 4.6 Probability Distribution Function The probability distribution of a random variable $\mathit{X}$ whose possible values are real numbers, can be succinctly described by its probability distribution function, which is the function ${\mathit{P}}_{\mathit{X}}$ such that ${\mathit{P}}_{\mathit{X}}\left(\mathit{x}\right)=Pr\left(\mathit{X}\le \mathit{x}\right)$ for every real number $\mathit{x}$. Note that the symbol we use here to denote the probability distribution function, is the same that we used in §4.5 to denote the probability distribution itself. Any confusion this may cause will be promptly resolved by examining the argument of ${\mathit{P}}_{\mathit{X}}$: if it is a set, then we mean the distribution itself, while if it is a number or a vector with numerical components, then we mean the probability distribution function. For example, if $\mathit{X}$ is real-valued and $\mathit{x}$ is a particular real number, then in ${\mathit{P}}_{\mathit{X}}\left(\mathit{x}\right)={\mathit{P}}_{\mathit{X}}\left(\left(-\infty ,\mathit{x}\right]\right)$ the ${\mathit{P}}_{\mathit{X}}$ on the left hand side refers to the probability distribution function, while the ${\mathit{P}}_{\mathit{X}}$ on the right hand side refers to the distribution itself because $\left(-\infty ,\mathit{x}\right]$ denotes the set of all real numbers no greater than $\mathit{x}$. Since the distribution function determines the distribution, the confusion is harmless. 
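The correspondence between the distribution function and interval probabilities can be verified numerically. The sketch below is illustrative (the mean $\eta = 2$ is an assumption, not a value from the text) and uses the exponential distribution of §4.10:

```python
import math

# Exponential probability distribution function (section 4.10) with an
# illustrative mean eta = 2 (an assumption for this example):
# P_X(x) = Pr(X <= x) = 1 - exp(-x/eta) for x > 0.
eta = 2.0

def P_X(x):
    return 1.0 - math.exp(-x / eta) if x > 0 else 0.0

# The distribution function determines the probability of any interval:
# Pr(a < X <= b) = P_X(b) - P_X(a).
a, b = 1.0, 3.0
prob = P_X(b) - P_X(a)
print(round(prob, 4))  # 0.3834
```

Since the distribution function determines the distribution, every probability of the form $\Pr(a < X \le b)$ is recoverable from $P_X$ alone, as the difference computed above.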
#### 4.7 Probability Density Function

If $X$ has a discrete distribution (§4.5), then its probability density (also known as its probability mass function) is the function $p_X$ such that $p_X(x)=\Pr(X=x)$ for $x\in\mathcal{X}$. If $\mathcal{X}$ is uncountable and $X$’s distribution is continuous and sufficiently smooth (in the sense described next), then the corresponding probability density function (PDF) is defined similarly to a material object’s mass density, as follows. Consider the simplest case, where $\mathcal{X}$ is an interval of real numbers, and suppose that $x$ is one point in the interior of this interval. Now suppose that $\delta_1>\delta_2>\dots$ is an infinite sequence of positive numbers decreasing to zero. If $X$’s probability distribution is sufficiently smooth, then the limit $$p_X(x)=\lim_{n\to\infty}\frac{P_X(x+\delta_n)-P_X(x-\delta_n)}{2\delta_n}$$ exists. The function $p_X$ so defined is $X$’s probability density function. If the distribution function is differentiable, then the probability density is the derivative of the probability distribution function, $p_X=P_X'$. Both the probability distribution function and the probability density function have multivariate counterparts.
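The limit defining the density can be watched converging numerically. The following sketch (with an assumed mean $\eta = 2$, an illustrative choice) compares the finite-difference quotient of the exponential distribution function with the analytic density of §4.10:

```python
import math

# Finite-difference illustration of the density as a limit: for the
# exponential distribution with an illustrative mean eta = 2, the
# quotient (P_X(x + d) - P_X(x - d)) / (2 d) approaches the density
# p_X(x) = (1/eta) exp(-x/eta) as d decreases to zero.
eta = 2.0

def P_X(x):
    return 1.0 - math.exp(-x / eta)

def p_X(x):
    return math.exp(-x / eta) / eta

x = 1.0
for d in (0.1, 0.01, 0.001):
    quotient = (P_X(x + d) - P_X(x - d)) / (2 * d)
    # the discrepancy shrinks roughly like d**2
    print(d, abs(quotient - p_X(x)))
```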
#### 4.8 Expected Value, Variance, and Standard Deviation

The expectation (expected value, or mean value) of a (scalar or vector valued) function $\phi$ of a random variable $X$ is $\mathbb{E}(\phi(X))=\int_{\mathcal{X}}\phi(x)\,p_X(x)\,dx$ if $X$ has a continuous probability distribution with density $p_X$, or $\mathbb{E}(\phi(X))=\sum_{x\in\mathcal{X}}\phi(x)\,p_X(x)$ if $X$ has a discrete distribution. Note that $\mathbb{E}(\phi(X))$ can be computed without determining the probability distribution of the random variable $\phi(X)$ explicitly. $\mathbb{E}(X)$ indicates $X$’s location, or the center of its probability distribution: therefore it is a most succinct summary of this distribution, and it is the best estimate of $X$’s value in the sense that it has the smallest mean squared error. The median is another indication of location for a scalar random variable: it is any value $\xi$ such that $\Pr(X\le\xi)\ge\text{½}$ and $\Pr(X\ge\xi)\ge\text{½}$, and it need not be unique. The median is the best estimate of $X$’s value in the sense that it has the smallest expected absolute deviation. Neither the mean nor the median need be “representative” values of the distribution. For example, when $X$ denotes a proportion whose most common values are close to 0 or to 1 and its mean is close to ½, then values close to the mean are very unlikely. $\mathbb{E}(X)$ need not exist (in the sense that the defining integral or sum may fail to converge). $\mathbb{E}(X^k)$, where $k$ is a positive integer, is called $X$’s $k$th moment.
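The discrete form of the expectation can be illustrated with the roulette wheel of §4.2; the sketch below computes $\mathbb{E}(\phi(X))$ for several choices of $\phi$ directly from the probability mass function:

```python
# The discrete expectation formula applied to the roulette wheel of
# section 4.2: X is uniform on {0, 1, ..., 36}, with p_X(x) = 1/37.
# E(phi(X)) is computed directly from the probability mass function,
# without determining the distribution of phi(X) explicitly.
outcomes = range(37)

mean = sum(x * (1 / 37) for x in outcomes)               # E(X), phi(x) = x
second_moment = sum(x * x * (1 / 37) for x in outcomes)  # E(X**2), the 2nd moment
pr_odd = sum((x % 2) * (1 / 37) for x in outcomes)       # E of the indicator of "odd"

print(round(mean, 6))           # 18.0
print(round(second_moment, 6))  # 438.0
print(round(pr_odd, 6))         # 0.486486
```

The last quantity reproduces the probability $18/37$ of an odd outcome quoted in §4.2, obtained here as the expectation of an indicator function.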
The variance of $X$ is $\sigma^2=\mathbb{V}(X)=\mathbb{E}\left[(X-\mathbb{E}(X))^2\right]$, or, equivalently, the difference between its second moment and the square of its first moment. The positive square root of the variance, $\sigma$, is the standard deviation of $X$.

#### 4.9 example: Poisson Distribution

The only values that a Poisson distributed random variable $X$ can take are the non-negative integers: 0, 1, 2, …, and the probability that its value is $x$ is $p_X(x)=\lambda^x e^{-\lambda}/x!$, where $\lambda$ is some given positive number, and $x!=x(x-1)\dots 1$. This model distributes its unit of probability into infinitely many lumps, one at each non-negative integer, so that $p_X(x)$ decreases rapidly with increasing $x$, and $p_X(0)+p_X(1)+p_X(2)+\cdots=1$. Both the expected value and the variance equal $\lambda$. The number of alpha particles emitted by a sample containing the radionuclide $^{210}$Po, during a period of $t$ seconds that is a small fraction of this isotope’s half-life (138 days), is a value of a Poisson random variable with mean proportional to $t$.

#### 4.10 example: Exponential Distribution

Suppose that $X$ represents the lifetime (thousands of hours) of an incandescent light bulb, such that, for $0<a<b$, $\Pr(a<X<b)=\exp(-a/\eta)-\exp(-b/\eta)$, for some given number $\eta>0$: note that, as $a$ decreases toward 0, and $b$ increases without limit, $\Pr(a<X<b)$ approaches 1.
Focus on a particular number $x>0$, and consider the ratio $\Pr(x-\delta<X<x+\delta)/(2\delta)=\exp(-x/\eta)\left[\exp(\delta/\eta)-\exp(-\delta/\eta)\right]/(2\delta)$ for some $\delta>0$. As $\delta$ decreases to $0$ this ratio approaches $(1/\eta)\exp(-x/\eta)$. Therefore, the function $p_X$ such that $p_X(x)=(1/\eta)\exp(-x/\eta)$ is the probability density of the exponential distribution. In this case, the probability distribution function is $P_X$ such that $P_X(x)=\Pr(X\le x)=1-\exp(-x/\eta)$. $X$’s mean value is $\mathbb{E}(X)=\eta$, and its variance is $\mathbb{V}(X)=\eta^2$. Figure 1 illustrates both the distribution function and the density for a particular value of $\eta$.

#### 4.11 Joint, Marginal, and Conditional Distributions

Suppose that $X$ represents a bivariate quantity value, for example, the Cartesian coordinates $(U,V)$ of a point inside a circle of unit radius centered at $(0,0)$. In this case the range $\mathcal{X}$ of $X=(U,V)$ is this unit circle. The joint probability distribution of $U$ and $V$ describes a state of knowledge about the location of $X$: for example, that more likely than not $X$ is less than ½ away from the center of the circle: statements of this kind involve $U$ and $V$ together (that is, jointly).
The marginal probability distributions of $U$ and $V$ are the probability distributions that characterize the state of knowledge about each of them separately from the other: for example, a statement about the likely values of $U$ alone, irrespective of $V$. Clearly, the marginal distributions have to be consistent with the joint distribution, and while it is true that the joint distribution determines the marginal distributions, the reverse is not true, in that typically there are many joint distributions consistent with given marginal distributions [Possolo 2010]. Now, suppose one knows that $U=2/3$. This implies that $-\sqrt{5}/3<V<\sqrt{5}/3$, hence that $X=(U,V)$ is somewhere on a particular chord $\mathcal{C}$ of the unit circle. The conditional probability distribution of $V$ given that $U=2/3$ is a (univariate) probability distribution over this chord.

#### 4.12 example: Shark’s Fin

The random variables $X$ and $Y$ take values in the interval $(1,2)$ and have joint probability density function $p_{X,Y}$ such that $p_{X,Y}(x,y)=(x+y)/3$ for $1\le x,y\le 2$ and is zero otherwise (Figure 2): since $p_{X,Y}\ge 0$ and $\int_1^2\int_1^2 p_{X,Y}(x,y)\,dx\,dy=1$, $p_{X,Y}$ is a bona fide (bivariate) probability density. The density of the marginal distribution of $X$ is $p_X(x)=\int_1^2 p_{X,Y}(x,y)\,dy=(x+\text{3/2})/3$ for $1\le x\le 2$, and similarly for $Y$. And the density of the conditional distribution of $Y$ given $X=x$ is $p_{Y|X}(y|x)=p_{X,Y}(x,y)/p_X(x)=(x+y)/(x+\text{3/2})$.
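These integrals can be checked numerically. The sketch below uses a midpoint Riemann sum (the grid size is an arbitrary choice) to confirm that the joint density integrates to 1, and to recover, at one point, the marginal density of $X$ that the conditional density quoted above presupposes:

```python
# Numerical check of the shark's fin example: the joint density
# p_{X,Y}(x, y) = (x + y)/3 on the square 1 <= x, y <= 2 integrates
# to 1, and integrating out y at fixed x gives the marginal density
# p_X(x) = (x + 3/2)/3, which equals 1 at x = 1.5.
n = 200
h = 1.0 / n  # midpoint grid spacing on (1, 2)

def p_joint(x, y):
    return (x + y) / 3.0

total = sum(p_joint(1 + (i + 0.5) * h, 1 + (j + 0.5) * h) * h * h
            for i in range(n) for j in range(n))

x0 = 1.5
marginal = sum(p_joint(x0, 1 + (j + 0.5) * h) * h for j in range(n))

print(round(total, 6))     # 1.0
print(round(marginal, 6))  # 1.0
```

The midpoint rule is exact here because the integrand is linear in each variable, so even a coarse grid reproduces the analytic values.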
To determine the probability density of $R=Y/X$, also depicted in Figure 2, note that $\text{½}<R<2$, because both $X$ and $Y$ take values in $(1,2)$. First, consider the case $\text{½}<r\le 1$: $\Pr(R\le r)=\left(4r^2+8r+1/r+1/(2r^2)-9\right)/9$ and $p_R(r)=(1+r)(8r^3-1)/(9r^3)$. Next, consider the case $1<r\le 2$: $\Pr(R\le r)=\left(18-r^2/2-r-8/r-4/r^2\right)/9$ and $p_R(r)=(1+r)(8-r^3)/(9r^3)$.

#### 4.13 Independent Random Variables

The (scalar or vectorial) random variables $X$ and $Y$ are independent if and only if $\Pr(X\in A\ \text{and}\ Y\in B)=\Pr(X\in A)\Pr(Y\in B)$ for all subsets $A$ and $B$ in their respective ranges (to which probabilities can be coherently assigned). Suppose $X$ and $Y$ have joint probability distribution with probability density function $p_{X,Y}$, and marginal density functions $p_X$ and $p_Y$: the random variables are independent if and only if $p_{X,Y}=p_X\,p_Y$.

#### 4.14 example: Unit Circle

Suppose that the probability distribution of a point is uniform inside the circle of unit radius centered at the origin $(0,0)$ of the Euclidean plane. This means that the probability that a point with Cartesian coordinates $(X,Y)$ should lie in a subset $S$ of this circle is proportional to $S$’s area, but is otherwise independent of $S$’s shape or location within the circle.
The probability density function of the joint distribution of $X$ and $Y$ is the function $p_{X,Y}$ such that $p_{X,Y}(x,y)=1/\pi$ if $x^2+y^2<1$, and $p_{X,Y}(x,y)=0$ otherwise. The random variables $X$ and $Y$ are dependent (§4.13): for example, if one is told that $X=\text{½}$, then one can surely conclude that $-\sqrt{3}/2<Y<\sqrt{3}/2$. The marginal distribution of $X$ has density $p_X$ such that $p_X(x)=(2/\pi)\sqrt{1-x^2}$ for $-1<x<1$. $X$ has expected value 0 and standard deviation ½. Owing to symmetry, $X$ and $Y$ have identical marginal distributions.

#### 4.15 Correlations and Copulas

If two or more of the random variables are dependent, then modeling their individual probability distributions will not suffice to specify their joint behavior: their joint probability distribution is needed. One commonly used metric of dependence between two random variables $X$ and $Y$ is Pearson’s product-moment correlation coefficient, defined as $\rho(X,Y)=\mathbb{E}\left[(X-\mathbb{E}(X))(Y-\mathbb{E}(Y))\right]/\sqrt{\mathbb{V}(X)\mathbb{V}(Y)}$. However, it is possible for the variables to be dependent and still have $\rho(X,Y)=0$.
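The last remark can be made concrete with a small computation: below, $Y=X^2$ is completely determined by $X$, which takes equally likely values on a symmetric grid in $(-1,1)$ (an illustrative discrete distribution), yet the product-moment covariance vanishes:

```python
# Dependent but uncorrelated: X ranges over a symmetric grid of
# equally likely values in (-1, 1) (an illustrative choice), and
# Y = X**2 is completely determined by X; yet Pearson's covariance
# is zero, because E[(X - E(X))(Y - E(Y))] = E(X**3) = 0 by symmetry.
n = 1001
xs = [-1 + 2 * (i + 0.5) / n for i in range(n)]
ys = [x * x for x in xs]

mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n

print(abs(cov) < 1e-9)  # True
```

Knowing $X$ here determines $Y$ exactly, so the variables are as dependent as possible, while $\rho(X,Y)=0$: Pearson's coefficient detects only linear association.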
When the only information in hand comprises the expected values, standard deviations, and correlations, and still one needs a joint distribution consistent with this information, then the usual course of action is to assign distributions to the variables individually, and then to manufacture a joint distribution using a copula [Possolo 2010] — there is, however, a multitude of different copulas that can be used for this purpose, and the choice that must be made generally is influential.

### 5 Functions of Random Variables

#### 5.1 Overview

If a random variable $Y$ is a function of other random variables, $Y=\phi(X_1,\dots,X_n)$, then $\phi$ and the joint probability distribution of $X_1,\dots,X_n$ determine the probability distribution of $Y$. If only the means, standard deviations, and correlations of $X_1,\dots,X_n$ are known, then it still is possible to derive approximations to the mean and standard deviation of $Y$, by application of the Delta Method. If the joint probability distribution of $X_1,\dots,X_n$ is known, then it may be possible to determine the probability distribution of $Y$ analytically, using the change of variables formula. In general, it is possible to obtain a sample from $Y$’s distribution by taking a sample from the joint distribution of $X_1,\dots,X_n$ and applying $\phi$ to each element of this sample (§5.8). The results may then be summarized in several different ways: one of them is an estimate of the probability density of $Y$ [Silverman 1986], a procedure that is implemented in function density of the R environment for statistical programming and graphics [R Development Core Team 2010].
#### 5.2 Delta Method

If $X$ is a random variable with mean $\mu$ and variance $\sigma^2$, $\phi$ is a differentiable real-valued function of a real variable whose first derivative does not vanish at $\mu$, and $Y=\phi(X)$, then $\mathbb{E}(Y)\approx\phi(\mu)$, and $\mathbb{V}(Y)\approx\left[\phi'(\mu)\right]^2\sigma^2$. (This results from the so-called Taylor approximation that replaces $\phi$ by a straight line tangent to its graph at $\mu$.) If $X=(V_1+\cdots+V_m)/m$ is an average of independent, identically distributed random variables with finite variance, then $\sqrt{m}\,(\phi(X)-\phi(\mu))$ also is approximately Gaussian with mean $0$ and standard deviation $|\phi'(\mu)|\sigma$, where $|\phi'(\mu)|$ denotes the absolute value of the first derivative of $\phi$ evaluated at $\mu$. The quality of the approximation improves with increasing $m$.

#### 5.3 Delta Method — Degeneracy

When $\phi'(\mu)=0$ and $\phi''(\mu)$ exists and is not zero, $X=(V_1+\cdots+V_m)/m$ is an average of independent, identically distributed random variables with finite variance, and $m$ is large, then the probability distribution of $m(\phi(X)-\phi(\mu))$ is approximately like that of $\sigma^2\phi''(\mu)Z^2/2$, where $Z$ denotes a Gaussian (or, normal) random variable with mean 0 and standard deviation 1.
Since the variance of $Z^2$ is 2, the standard deviation of $\phi(X)$ is approximately $\sigma^2|\phi''(\mu)|/(\sqrt{2}\,m)$, rather different from what applies in the conditions of §5.2.

#### 5.4 example: Lambertian Surface

Consider a surface whose reflectance is Lambertian: that is, light falling on it is scattered in such a way that the surface’s brightness apparent to an observer is the same regardless of the observer’s angle of view. The radiant power $W$ emitted by such a surface that is measured by a sensor aimed at angle $A$ to the surface’s normal is proportional to $\cos(A)$, hence one writes $W=\kappa\cos(A)$ [Cannon 1998; Köhler 1998]. If knowledge about the value of $A$ is modeled by a Gaussian distribution with mean $\alpha>0$ and standard measurement uncertainty $u(A)$ (both expressed in radians), then §5.2 (with $m=1$) suggests that knowledge of $W$ should be described approximately by a Gaussian distribution with mean $\kappa\cos(\alpha)$ and standard measurement uncertainty $\kappa u(A)\sin(\alpha)$. For particular values of $\kappa$, $\alpha$, and $u(A)$, the approximation to $W$’s distribution that the Delta Method suggests is remarkably accurate (Figure 3). When the detector is aimed squarely at the target, that is $\alpha=0$, this approximation no longer works because the first derivative of the cosine vanishes at $0$, which is the degenerate case that §5.3 contemplates. In this case, $W$’s standard measurement uncertainty is approximately $\kappa u^2(A)/\sqrt{2}$, and for the values of $\kappa$ and $u(A)$ used in Figure 3 this approximation is accurate to two significant digits. However, Figure 3 shows that, in this case, the Delta Method produces a poor approximation to the distribution itself.
When $\alpha=0$, the probability density of $W$ is markedly asymmetrical, and the meaning of $W$’s standard deviation is rather different from its meaning when $\alpha>0$. Indeed, when $\alpha=0$ the probability that $W$ should lie within one standard deviation of its expected value is approximately $88\,\%$.

#### 5.5 Delta Method — Multivariate

The Delta Method can be extended to apply to a function of several random variables. Suppose that $X_1=(V_{1,1}+\cdots+V_{m_1,1})/m_1$, …, $X_n=(V_{1,n}+\cdots+V_{m_n,n})/m_n$ are averages of sets of random variables whose variances are finite. The variables in each set are independent and identically distributed, those in set $j$ (for $1\le j\le n$) having mean $\mu_j$ and variance $\sigma_j^2$. However, variables in different sets may be dependent, hence the $\{X_j\}$ may be dependent, too. Let $\Sigma$ denote the $n\times n$ symmetrical matrix whose element $\sigma_{j_1 j_2}=\mathbb{E}\left[(V_{i,j_1}-\mu_{j_1})(V_{i,j_2}-\mu_{j_2})\right]$ captures the covariance between $X_{j_1}$ and $X_{j_2}$, for $1\le j_1,j_2\le n$. Now, consider the random variable $Y=\phi(X_1,\dots,X_n)$, where $\phi$ denotes a real-valued function of $n$ variables whose first partial derivatives are continuous and none vanishes at $\mu_1,\dots,\mu_n$.
If $\tau^2=\sum_{j_1=1}^{n}\sum_{j_2=1}^{n}\sigma_{j_1 j_2}\,(\partial\phi/\partial\mu_{j_1})(\mu)\,(\partial\phi/\partial\mu_{j_2})(\mu)$ is finite, then $\sqrt{m}\,(Y-\phi(\mu_1,\dots,\mu_n))$ also is approximately Gaussian with mean $0$ and variance $\tau^2$. If $X_1,\dots,X_n$ are uncorrelated and have means $\mu_1,\dots,\mu_n$ and standard deviations $\sigma_1,\dots,\sigma_n$, then the Delta Method approximation reduces to a well-known formula first presented by Gauss [Gauss 1823, §18, Problema]: $\mathbb{V}(Y)\approx c_1^2\sigma_1^2+\cdots+c_n^2\sigma_n^2$, where the sensitivity coefficient $c_j=\partial\phi(x_1,\dots,x_n)/\partial x_j$ is the value at $(x_1,\dots,x_n)$ of the $j$th partial derivative of $\phi$ with respect to $x_j$.

#### 5.6 example: Beer-Lambert-Bouguer Law

If a beam of monochromatic light of power $I_0$ (W) travels a path of length $L$ (m) through a solution containing a solute whose molar absorptivity for that light is $E$ (L mol$^{-1}$ m$^{-1}$), and whose molar concentration is $C$ (mol L$^{-1}$), then the beam’s power is reduced to $I$ (W) such that $I=I_0 10^{-ELC}$.
Application of Gauss’s formula (§5.5) to $C=\log_{10}(I_0/I)/(EL)$ yields: $$\mathbb{V}(C)\approx\frac{\sigma_{I_0}^2/I_0^2+\sigma_I^2/I^2}{(EL\ln 10)^2}+\left(\frac{\sigma_E^2}{E^2}+\frac{\sigma_L^2}{L^2}\right)\frac{\log_{10}^2(I_0/I)}{(EL)^2}$$

#### 5.7 example: Darcy’s Law

Darcy’s law relates the dynamic viscosity $H$ of a fluid to the volumetric rate of discharge $Q$ (volume per unit of time) when the fluid flows through a permeable cylindrical medium of cross-section $A$ and intrinsic permeability $K$ under a pressure drop of $\Delta$ along a length $L$, as follows: $H=KA\Delta/(QL)$. To compute an approximation to the standard deviation of $H$, one may use the formula from §5.5 directly, or first take the logarithm of both sides, which linearizes the relationship: $\log(H)=\log(K)+\log(A)+\log(\Delta)-\log(Q)-\log(L)$. Applied to these logarithms, the formula from §5.5 is exact. The approximation is then done for each term separately, using the univariate Delta Method. Since $\mathbb{V}(\log(H))\approx\mathbb{V}(H)/H^2$, and similarly for the other logarithmic terms, $\mathbb{V}(H)/H^2\approx\mathbb{V}(K)/K^2+\mathbb{V}(A)/A^2+\mathbb{V}(\Delta)/\Delta^2+\mathbb{V}(Q)/Q^2+\mathbb{V}(L)/L^2$. In other words, the square of the coefficient of variation of $H$ is approximately equal to the sum of the squares of the coefficients of variation of the other variables.
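Gauss's formula and a Monte Carlo simulation can be compared on Darcy's law; in the sketch below all means and standard deviations are illustrative assumptions, not data from the text:

```python
import random

# Gauss's formula for H = K*A*Delta/(Q*L): with uncorrelated inputs,
# the squared coefficient of variation of H is approximately the sum
# of the squared coefficients of variation of the inputs.  All means
# and standard deviations below are illustrative assumptions.
means = {"K": 2.0, "A": 1.5, "Delta": 3.0, "Q": 4.0, "L": 1.2}
sds = {"K": 0.02, "A": 0.015, "Delta": 0.03, "Q": 0.04, "L": 0.012}

cv2 = sum((sds[v] / means[v]) ** 2 for v in means)  # squared CV of H
h0 = means["K"] * means["A"] * means["Delta"] / (means["Q"] * means["L"])
gauss_sd = h0 * cv2 ** 0.5

# Monte Carlo check: draw the inputs independently (Gaussian, for
# illustration) and propagate them through the measurement equation.
random.seed(1)
hs = []
for _ in range(100_000):
    k, a, d, q, l = (random.gauss(means[v], sds[v])
                     for v in ("K", "A", "Delta", "Q", "L"))
    hs.append(k * a * d / (q * l))
mc_mean = sum(hs) / len(hs)
mc_sd = (sum((h - mc_mean) ** 2 for h in hs) / (len(hs) - 1)) ** 0.5

print(round(gauss_sd, 4), round(mc_sd, 4))  # the two agree closely
```

With coefficients of variation of about 1 %, the linearization behind Gauss's formula is very good, and the two standard deviations agree to within sampling noise.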
#### 5.8 Monte Carlo Method The Monte Carlo method offers several important advantages over the Delta Method described in §5.2 and §5.5: (i) it can produce as many correct significant digits in its results as may be required; (ii) it does not involve the computation of derivatives, either analytically or numerically; (iii) it is applicable in many situations where the Delta Method is not; (iv) it provides a picture of the whole probability distribution of a function of several random variables, not just an approximation to it, or to its mean and standard deviation. The Monte Carlo method in general dates back to the middle of the twentieth century [Metropolis and Ulam1949Metropolis et al.1953]. A variant used in mathematical statistics is known as the parametric bootstrap [Efron and Tibshirani1993]. This involves using random draws from a (possibly multivariate) probability distribution whose parameters have been replaced by estimates thereof (for example, means of posterior probability distributions, §6) to ascertain the probability distribution of a function of one (or more) random variables. Morgan and Henrion [1992] and Joint Committee for Guides in Metrology [2008b] describe how it may be employed to evaluate measurement uncertainty, and provide illustrative examples. The procedure comprises the following steps: MC1 Define the joint probability distribution of ${\mathit{X}}_{1}$, …, ${\mathit{X}}_{\mathit{n}}$. MC2 Choose a suitably large positive integer $\mathit{K}$ and draw a sample of size $\mathit{K}$ from this joint distribution to obtain $\left({\mathit{x}}_{11},\dots ,{\mathit{x}}_{\mathit{n}1}\right)$, …, $\left({\mathit{x}}_{1\mathit{K}},\dots ,{\mathit{x}}_{\mathit{n}\mathit{K}}\right)$. (If ${\mathit{X}}_{1}$, …, ${\mathit{X}}_{\mathit{n}}$ happen to be independent, then this amounts to drawing a sample of size $\mathit{K}$ from the distribution of each of them separately.) 
MC3 Compute ${\mathit{y}}_{1}=\phi \left({\mathit{x}}_{11},\dots ,{\mathit{x}}_{\mathit{n}1}\right)$, …, ${\mathit{y}}_{\mathit{K}}=\phi \left({\mathit{x}}_{1\mathit{K}},\dots ,{\mathit{x}}_{\mathit{n}\mathit{K}}\right)$, which are a sample from $\mathit{Y}$’s distribution. MC4 Summarize this sample in one or more of these different ways: MC4.a — Probability Density The most inclusive summarization is in the form of an estimate of $\mathit{Y}$’s probability density function: this may be either a simple histogram, or a kernel density estimate [Silverman1986]. MC4.b — Mean and Standard Deviation The mean and standard deviation of $\mathit{Y}$ are estimated by the mean and the standard deviation of $\left\{{\mathit{y}}_{1},\dots ,{\mathit{y}}_{\mathit{K}}\right\}$. (To ascertain the number of significant digits in this mean and standard deviation, hence to decide whether $\mathit{K}$ is large enough for the intended purpose, or should be increased, one may employ either the adaptive procedure explained in the Supplement 1 to the GUM [Joint Committee for Guides in Metrology2008b, 7.9], or resort to the non-parametric statistical bootstrap or to other resampling methods [Davison and Hinkley1997].) MC4.c — Probability Interval If ${\mathit{y}}_{\left(1\right)}\le {\mathit{y}}_{\left(2\right)}\le \cdots \le {\mathit{y}}_{\left(\mathit{K}\right)}$ denote the result of ordering ${\mathit{y}}_{1},\dots ,{\mathit{y}}_{\mathit{K}}$ from smallest to largest, then the interval $\left({\mathit{y}}_{\left(\mathit{K}\alpha ∕2\right)},{\mathit{y}}_{\left(\mathit{K}\left(1-\alpha ∕2\right)\right)}\right)$ includes $\mathit{Y}$’s true value with probability $1-\alpha$. (Since $\mathit{K}\alpha ∕2$ and $\mathit{K}\left(1-\alpha ∕2\right)$ need not be integers, the end-points of this coverage interval may be calculated by interpolation of adjacent ${\mathit{y}}_{\left(\mathit{i}\right)}$s.) 
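Steps MC1–MC4 can be sketched for a simple measurement function; here $Y=X_1+X_2$ with independent exponential inputs of mean 1 (an illustrative choice, for which $Y$ has mean 2 and standard deviation $\sqrt{2}$):

```python
import random

# MC1: the joint distribution of (X1, X2): independent exponentials
# with mean 1 (an illustrative choice).
random.seed(2)
K = 200_000  # MC2: a suitably large sample size

samples = [(random.expovariate(1.0), random.expovariate(1.0))
           for _ in range(K)]

# MC3: apply the measurement function to each draw.
ys = [x1 + x2 for x1, x2 in samples]

# MC4.b: mean and standard deviation of Y.
mean = sum(ys) / K
sd = (sum((y - mean) ** 2 for y in ys) / (K - 1)) ** 0.5

# MC4.c: a 95 % probability interval from the order statistics.
ys.sort()
lo, hi = ys[int(0.025 * K)], ys[int(0.975 * K) - 1]

print(round(mean, 1), round(sd, 1))  # close to 2 and sqrt(2) = 1.414...
print(lo < mean < hi)                # True
```

A histogram or kernel density estimate of the sorted sample would complete step MC4.a; the summaries above suffice to report a standard uncertainty and a coverage interval.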
#### 5.9 example: Volume of Cylinder

The radius $R$ and the height $H$ of a cylinder are values of independent random variables with exponential probability distributions with mean 1 m. To characterize the probability distribution of its volume $V=\pi R^2 H$, draw a sample of size $10^7$ from the joint distribution of $R$ and $H$, and compute the volume corresponding to each pair of sampled values $(r,h)$ to obtain a sample of the same size from the distribution of $V$. The average and standard deviation of these values are 6.3 m³ and 21 m³, and they are estimates (whose two most significant digits are exact) of the mean and standard deviation of $V$. Figure 4 depicts an estimate of the corresponding probability density.

#### 5.10 Change-of-Variable Formula — Univariate

Suppose that $X$ is a random variable with a continuous distribution and values in $\mathcal{X}$, with probability distribution function $P_X$ and probability density function $p_X$, and consider the random variable $Y=\phi(X)$ where $\phi$ denotes a real-valued function of a real variable. Let $\mathcal{Y}$ denote the set where $Y$ takes its values, and let $P_Y$ and $p_Y$ denote $Y$’s probability distribution function and probability density function, respectively. In these circumstances [Casella and Berger 2002, Chapter 2]:
• If $\phi$ is increasing on $\mathcal{X}$ and $\psi$ denotes its inverse, then $P_Y(y)=\Pr(Y\le y)=\Pr(X\le\psi(y))=P_X[\psi(y)]$ for $y\in\mathcal{Y}$; and if $\phi$ is decreasing, then $P_Y(y)=1-P_X[\psi(y)]$.
• If $\phi$ is either increasing or decreasing on $\mathcal{X}$ (but not both), and its inverse $\psi$ has a continuous first derivative $\dot{\psi}$, then $p_Y(y)=p_X[\psi(y)]\,|\dot{\psi}(y)|$ for $y\in\mathcal{Y}$, where $|\dot{\psi}(y)|$ denotes the absolute value of the derivative of $\psi$ at $y$.

#### 5.11 example: Oscillating Mirror

A horizontal beam of light emerges from a tiny hole in a wall and travels along a 1 m long path at right angles to the wall, towards a flat mirror that oscillates freely around a vertical axis. When the mirror’s surface normal makes an angle $A$ with the beam, its reflection hits the wall at distance $D=\tan(A)$ from the hole (positive to the right of the hole and negative to the left). If $A$ is uniformly (or, rectangularly) distributed between $-\pi/2$ and $\pi/2$, then $P_D(d)=\Pr(D\le d)=\Pr(A\le\arctan(d))=(\arctan(d)+\pi/2)/\pi$, and $D$’s probability density is $p_D$ such that $p_D(d)=1/[\pi(1+d^2)]$ for $-\infty<d<\infty$.
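The distribution function just derived can be checked by simulation; the sketch below assumes, as the arctangent formula presupposes, that $A$ is uniform between $-\pi/2$ and $\pi/2$:

```python
import math
import random

# If A is uniform on (-pi/2, pi/2), then D = tan(A) has distribution
# function P_D(d) = (arctan(d) + pi/2)/pi; the empirical fraction of
# simulated values at or below d reproduces this formula.
random.seed(3)
n = 100_000
ds = [math.tan(random.uniform(-math.pi / 2, math.pi / 2))
      for _ in range(n)]

for d in (-1.0, 0.0, 2.0):
    empirical = sum(1 for v in ds if v <= d) / n
    analytic = (math.atan(d) + math.pi / 2) / math.pi
    print(round(empirical, 3), round(analytic, 3))
```

The agreement is to within sampling noise at every $d$, even though the distribution in question (the standard Cauchy) has no finite moments.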
As it turns out, both the mean and the standard deviation of $\mathit{D}$ are infinite [Feller, 1971, Page 51].

#### 5.12 Change-of-Variable Formula — Multivariate

Suppose that ${\mathit{X}}_{1},\dots ,{\mathit{X}}_{\mathit{n}}$ are random variables and consider ${\mathit{Y}}_{\mathit{j}}={\phi }_{\mathit{j}}\left({\mathit{X}}_{1},\dots ,{\mathit{X}}_{\mathit{n}}\right)$ for $\mathit{j}=1,\dots ,\mathit{n}$, where ${\phi }_{1},\dots ,{\phi }_{\mathit{n}}$ are real-valued functions of $\mathit{n}$ real variables each. Suppose also that (i) the vector $\mathit{X}=\left({\mathit{X}}_{1},\dots ,{\mathit{X}}_{\mathit{n}}\right)$ takes values in an open subset $\mathcal{𝒳}$ of $\mathit{n}$-dimensional Euclidean space, and has a continuous joint probability distribution with probability density function ${\mathit{p}}_{\mathit{X}}$; (ii) the vector-valued function $\phi =\left({\phi }_{1},\dots ,{\phi }_{\mathit{n}}\right)$ is invertible, and the inverse $\psi =\left({\psi }_{1},\dots ,{\psi }_{\mathit{n}}\right)$ has a Jacobian determinant ${\mathit{J}}_{\psi }$ that does not vanish on $\mathcal{𝒴}$, the set where $\mathit{Y}=\left({\mathit{Y}}_{1},\dots ,{\mathit{Y}}_{\mathit{n}}\right)$ takes its values. The probability density of $\mathit{Y}$ may then be derived in the following four steps.

CV1 Solve the $\mathit{n}$ equations ${\mathit{y}}_{1}={\phi }_{1}\left({\mathit{x}}_{1},\dots ,{\mathit{x}}_{\mathit{n}}\right)$, …, ${\mathit{y}}_{\mathit{n}}={\phi }_{\mathit{n}}\left({\mathit{x}}_{1},\dots ,{\mathit{x}}_{\mathit{n}}\right)$, for ${\mathit{x}}_{1},\dots ,{\mathit{x}}_{\mathit{n}}$, to obtain the inverse transformation such that ${\mathit{x}}_{1}={\psi }_{1}\left({\mathit{y}}_{1},\dots ,{\mathit{y}}_{\mathit{n}}\right)$, …, ${\mathit{x}}_{\mathit{n}}={\psi }_{\mathit{n}}\left({\mathit{y}}_{1},\dots ,{\mathit{y}}_{\mathit{n}}\right)$.
CV2 Find ${\stackrel{̇}{\psi }}_{\mathit{i}\mathit{j}}$, the partial derivative of ${\psi }_{\mathit{i}}$ with respect to its $\mathit{j}$th argument, for $\mathit{i},\mathit{j}=1,\dots ,\mathit{n}$, and compute the Jacobian determinant of the inverse transformation at $\mathit{y}$ $=\left({\mathit{y}}_{1},\dots ,{\mathit{y}}_{\mathit{n}}\right)$: ${\mathit{J}}_{\psi }\left(\mathit{y}\right)=det\left[\begin{array}{cccc}\hfill {\stackrel{̇}{\psi }}_{11}\left(\mathit{y}\right)\hfill & \hfill {\stackrel{̇}{\psi }}_{12}\left(\mathit{y}\right)\hfill & \hfill \dots \hfill & \hfill {\stackrel{̇}{\psi }}_{1\mathit{n}}\left(\mathit{y}\right)\hfill \\ \hfill {\stackrel{̇}{\psi }}_{21}\left(\mathit{y}\right)\hfill & \hfill {\stackrel{̇}{\psi }}_{22}\left(\mathit{y}\right)\hfill & \hfill \dots \hfill & \hfill {\stackrel{̇}{\psi }}_{2\mathit{n}}\left(\mathit{y}\right)\hfill \\ \hfill ⋮\hfill & \hfill ⋮\hfill & \hfill \ddots \hfill & \hfill ⋮\hfill \\ \hfill {\stackrel{̇}{\psi }}_{\mathit{n}1}\left(\mathit{y}\right)\hfill & \hfill {\stackrel{̇}{\psi }}_{\mathit{n}2}\left(\mathit{y}\right)\hfill & \hfill \dots \hfill & \hfill {\stackrel{̇}{\psi }}_{\mathit{n}\mathit{n}}\left(\mathit{y}\right)\hfill \end{array}\right]$

CV3 The density of the joint probability distribution of the random vector $\mathit{Y}$ is ${\mathit{p}}_{\mathit{Y}}$ such that ${\mathit{p}}_{\mathit{Y}}\left(\mathit{y}\right)={\mathit{p}}_{\mathit{X}}\left[\psi \left(\mathit{y}\right)\right]\left|{\mathit{J}}_{\psi }\left(\mathit{y}\right)\right|.$ (2) Note that ${\mathit{J}}_{\psi }\left(\mathit{y}\right)$ is a scalar, and $\left|{\mathit{J}}_{\psi }\left(\mathit{y}\right)\right|$ denotes its absolute value.

CV4 The probability density of ${\mathit{Y}}_{1}$ is ${\mathit{p}}_{{\mathit{Y}}_{1}}\left({\mathit{y}}_{1}\right)=\int \cdots \int {\mathit{p}}_{\mathit{Y}}\left({\mathit{y}}_{1},{\mathit{y}}_{2},\dots ,{\mathit{y}}_{\mathit{n}}\right)\phantom{\rule{0.3em}{0ex}}\mathit{d}{\mathit{y}}_{2}\cdots \mathit{d}{\mathit{y}}_{\mathit{n}}$, where the $\mathit{n}-1$ integrals are over the ranges of ${\mathit{Y}}_{2},\dots ,{\mathit{Y}}_{\mathit{n}}$.
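As a concrete illustration of steps CV1–CV4 (an example of our own, with all choices merely illustrative), take ${\mathit{Y}}_{1}={\mathit{X}}_{1}+{\mathit{X}}_{2}$ and ${\mathit{Y}}_{2}={\mathit{X}}_{2}$ for independent, exponentially distributed ${\mathit{X}}_{1}$ and ${\mathit{X}}_{2}$ with mean 1: the inverse transformation is $\psi \left({\mathit{y}}_{1},{\mathit{y}}_{2}\right)=\left({\mathit{y}}_{1}-{\mathit{y}}_{2},{\mathit{y}}_{2}\right)$, its Jacobian determinant equals 1, and carrying out CV3 and CV4 yields the marginal density ${\mathit{y}}_{1}exp\left(-{\mathit{y}}_{1}\right)$ for ${\mathit{Y}}_{1}$. A short Python sketch can confirm this by simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
size = 10**6

# X1, X2 independent and exponentially distributed with mean 1.
x1 = rng.exponential(1.0, size)
x2 = rng.exponential(1.0, size)

# CV1: the inverse of (y1, y2) = (x1 + x2, x2) is (x1, x2) = (y1 - y2, y2).
# CV2: the Jacobian determinant of that inverse equals 1.
# CV3: p_Y(y1, y2) = exp(-(y1 - y2)) * exp(-y2) = exp(-y1) for 0 < y2 < y1.
# CV4: integrating over y2 gives the marginal density y1 * exp(-y1).
y1 = x1 + x2

# Compare a histogram of the simulated values of Y1 with the derived density.
hist, edges = np.histogram(y1, bins=50, range=(0.0, 10.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
derived = mid * np.exp(-mid)
print("largest discrepancy:", float(np.max(np.abs(hist - derived))))
```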
#### 5.13 example: Linear Combinations of Gaussian Random Variables

Suppose that $\mathit{U}$ and $\mathit{V}$ are independent, Gaussian random variables with mean 0 and variance 1, and let $\mathit{S}=\mathit{a}\mathit{U}+\mathit{b}\mathit{V}$, and $\mathit{T}=\mathit{b}\mathit{U}-\mathit{a}\mathit{V}$, for given real numbers $\mathit{a}$ and $\mathit{b}$. The inverse transformation maps $\left(\mathit{s},\mathit{t}\right)$ onto $\left(\left(\mathit{a}\mathit{s}+\mathit{b}\mathit{t}\right)∕\left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right),\left(\mathit{b}\mathit{s}-\mathit{a}\mathit{t}\right)∕\left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right)\right)$, and has Jacobian determinant $det\left[\begin{array}{cc}\mathit{a}& \mathit{b}\\ \mathit{b}& -\mathit{a}\end{array}\right]∕{\left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right)}^{2}$ whose absolute value is $1∕\left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right)$. Since the density of the joint probability distribution of $\mathit{U}$ and $\mathit{V}$ is ${\mathit{p}}_{\mathit{U},\mathit{V}}\left(\mathit{u},\mathit{v}\right)=exp\left(-\left({\mathit{u}}^{2}+{\mathit{v}}^{2}\right)∕2\right)∕\left(2\pi \right)$, application of the multivariate change-of-variable formula yields ${\mathit{p}}_{\mathit{S},\mathit{T}}\left(\mathit{s},\mathit{t}\right)=\frac{exp\left\{-\frac{{\mathit{s}}^{2}}{2\left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right)}\right\}}{\sqrt{2\pi \left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right)}}\frac{exp\left\{-\frac{{\mathit{t}}^{2}}{2\left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right)}\right\}}{\sqrt{2\pi \left({\mathit{a}}^{2}+{\mathit{b}}^{2}\right)}}.$ But this means that $\mathit{S}$ and $\mathit{T}$ also are independent and Gaussian with mean 0 and variance ${\mathit{a}}^{2}+{\mathit{b}}^{2}$. On the one hand, this result is surprising because $\mathit{S}$ and $\mathit{T}$ both are functions of the same random variables.
On the other hand, it is hardly surprising because the transformation amounts to a rotation of the coordinate axes, followed by a global dilation. Since the joint distribution of $\mathit{U}$ and $\mathit{V}$ is circularly symmetric relative to $\left(0,0\right)$, so will the joint distribution of $\mathit{S}$ and $\mathit{T}$ be, which implies independence and the same functional form for the density, up to a difference in scale.

#### 5.14 example: Ratio of Exponential Lifetimes

To compute the probability density of the ratio $\mathit{R}=\mathit{X}∕\mathit{Y}$ of two independent and exponentially distributed random variables $\mathit{X}$ and $\mathit{Y}$ with mean $1∕\lambda$, define the function $\phi$ such that $\phi \left(\mathit{x},\mathit{y}\right)=\left(\mathit{x}∕\mathit{y},\mathit{y}\right)$, whose inverse is $\psi \left(\mathit{r},\mathit{s}\right)=\left(\mathit{r}\mathit{s},\mathit{s}\right)$, with Jacobian determinant ${\mathit{J}}_{\psi }\left(\mathit{r},\mathit{s}\right)=det\left[\begin{array}{cc}\mathit{s}& \mathit{r}\\ 0& 1\end{array}\right]=\mathit{s}>0$. The multivariate change-of-variable formula then yields ${\mathit{p}}_{\mathit{R},\mathit{S}}\left(\mathit{r},\mathit{s}\right)=\mathit{s}{\lambda }^{2}exp\left[-\lambda \left(1+\mathit{r}\right)\mathit{s}\right]$ for the density of the joint distribution of $\mathit{R}=\mathit{X}∕\mathit{Y}$ and $\mathit{S}=\mathit{Y}$. The (marginal) density of $\mathit{R}$ is ${\mathit{p}}_{\mathit{R}}\left(\mathit{r}\right)={\int }_{0}^{\infty }{\mathit{p}}_{\mathit{R},\mathit{S}}\left(\mathit{r},\mathit{s}\right)\phantom{\rule{0.3em}{0ex}}\mathit{d}\mathit{s}=1∕{\left(1+\mathit{r}\right)}^{2}$, for $\mathit{r}>0$, being 0 otherwise. ${\mathit{p}}_{\mathit{R}}$ indeed is a probability density function because it is a non-negative function and ${\int }_{0}^{\infty }\mathit{d}\mathit{r}∕{\left(1+\mathit{r}\right)}^{2}=1$. However, since ${\int }_{0}^{\infty }\mathit{r}\phantom{\rule{0.3em}{0ex}}\mathit{d}\mathit{r}∕{\left(1+\mathit{r}\right)}^{2}=\infty$, neither the mean nor the variance of $\mathit{R}$ is finite. The Delta Method, however, would have suggested that $\mathbb{𝔼}\left(\mathit{X}∕\mathit{Y}\right)\approx 1$ and that the coefficient of variation (ratio of the standard deviation to the mean) of $\mathit{X}∕\mathit{Y}$ is $\sqrt{2}$ approximately.
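These conclusions, too, lend themselves to numerical verification. Integrating the density $1∕{\left(1+\mathit{r}\right)}^{2}$ gives $Pr\left(\mathit{R}\le \mathit{q}\right)=\mathit{q}∕\left(1+\mathit{q}\right)$ for $\mathit{q}>0$, which the following sketch (ours; $\lambda =1$ suffices, since the ratio does not depend on $\lambda$) checks by simulation, while also exhibiting the instability of the sample average that the infinite mean produces:

```python
import numpy as np

rng = np.random.default_rng(3)
size = 10**6

# R = X/Y for independent exponential lifetimes X and Y; the value of
# lambda cancels in the ratio, so lambda = 1 serves for the simulation.
x = rng.exponential(1.0, size)
y = rng.exponential(1.0, size)
r = x / y

# The density 1/(1 + r)^2 integrates to Pr(R <= q) = q/(1 + q).
for q in (1.0, 9.0):
    print(f"Pr(R <= {q}): empirical {float(np.mean(r <= q)):.3f}, "
          f"exact {q / (1 + q):.3f}")

# The sample average keeps drifting as the sample grows, betraying the
# infinite mean that the Delta Method approximation fails to detect.
print("running averages:", [float(np.round(r[:k].mean(), 2))
                            for k in (10**2, 10**4, 10**6)])
```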
#### 5.15 example: Polar Coordinates

The inverse of the transformation that maps the polar coordinates $\left(\mathit{r},\alpha \right)$ of a point in the Euclidean plane to its Cartesian coordinates $\left(\mathit{x},\mathit{y}\right)$ is $\psi$ such that $\psi \left(\mathit{r},\alpha \right)=\left(\mathit{r}cos\alpha ,\mathit{r}sin\alpha \right)$ for $\mathit{r}>0$ and $0<\alpha <2\pi$, with Jacobian determinant ${\mathit{J}}_{\psi }\left(\mathit{r},\alpha \right)=\mathit{r}$. If $\mathit{X}$ and $\mathit{Y}$ are independent Gaussian random variables with mean 0 and variance 1, then the probability density of $\left(\mathit{R},\mathit{A}\right)$ is ${\mathit{p}}_{\mathit{R},\mathit{A}}\left(\mathit{r},\alpha \right)=\mathit{r}exp\left(-{\mathit{r}}^{2}∕2\right)∕\left(2\pi \right)$. Since ${\int }_{0}^{\infty }\mathit{r}exp\left(-{\mathit{r}}^{2}∕2\right)\phantom{\rule{0.3em}{0ex}}\mathit{d}\mathit{r}=1$, it follows that $\mathit{R}$ and $\mathit{A}$ are independent, the former having a Rayleigh distribution with mean $\sqrt{\pi ∕2}$, the latter a uniform distribution between $0$ and $2\pi$.

### 6 Statistical Inference

The statistical inferences we are primarily interested in are probabilistic statements about the unknown value of a quantity, produced by application of a statistical method. In the example of §6.6, one of the inferences is this statement: the probability is 95 % that the difference between the numbers of hours gained with the two soporifics is between 0.7 h and 2.5 h. Another common inference is an estimate of the value of a quantity, which must be qualified with an assessment of the associated uncertainty. In the example of §6.8, a typical inference of this kind would be this: the difference in mean levels of thyroxine in the serum of two groups of children diagnosed with hypothyroidism is estimated as with standard uncertainty (Figure 6).
In our treatment of this example, the inference is based entirely on a small set of empirical data, and on a particular choice of statistical model used to describe the dispersion of the data, and to characterize the fact that no knowledge other than the data was brought into play. Statistical methods different from the one we used could have been employed: some of these would produce the same result (in particular those illustrated when this dataset was first described [Student, 1908; Fisher, 1973]), while others would have produced different results. Even when the result is the same, it may be variously interpreted:

• For some, that statement means that if the same sampling and study method is used repeatedly, and each time the resulting dataset is modeled and analyzed in the same way to produce an interval like the one above, then about 95 % of the resulting intervals will include the true difference sought — with no guarantee or implication that the interval that was obtained is one of these;

• For others (among whom we stand), that statement expresses the degree of belief one is entitled to have about the true difference lying between 0.7 h and 2.5 h specifically, in light of all the relevant information in hand.

#### 6.1 Bayesian Inference

Bayesian inference [Bernardo and Smith, 2000; Lindley, 2006; Robert, 2007] is a class of statistical procedures that serve to blend preexisting information about the value of a quantity with fresh information in empirical data.
The defining traits of a Bayesian procedure are these: (i) All quantity values that are the objects of interest but are not accessible to direct observation (non-observables) are modeled as values of non-observable random variables whose (prior, or a priori) distributions encode and convey states of incomplete knowledge about those values; (ii) The empirical data (observables) are modeled as realized values of random variables whose probability distributions depend on those objects of interest; (iii) Preexisting information about those objects of interest is updated in light of the fresh empirical data by application of Bayes rule, and the results are encapsulated in a (posterior, or a posteriori) probability distribution; (iv) Selected aspects of this distribution are then abstracted from it and used to characterize the objects of interest and to describe the state of knowledge about them.

#### 6.2 Prior Distribution

Let $\theta$ denote the value of the quantity of interest, which we model as realized value of a random variable $\Theta$ with probability density function ${\mathit{p}}_{\Theta }$ that encodes the state of knowledge about $\theta$ prior to obtaining fresh data, and which must be defined even if there is no prior knowledge. Defining such ${\mathit{p}}_{\Theta }$ often is a challenging task. If in fact there exists substantial prior knowledge about $\theta$, then it needs to be elicited from experts in the matter and encapsulated in the form of a particular probability density: Garthwaite et al. [2005] review how this may be done. For example, when measuring the mass fraction of titanium in a mineral specimen, knowledge of the species (ilmenite, titanite, rutile, etc.) of the specimen is highly informative about that mass fraction. Familiarity with the process of analytical chemistry employed to make the measurement may indicate the dispersion of values to be expected.
In some cases, essentially no prior knowledge exists about $\theta$, or none is deemed reliable enough to be taken into account. In such cases, a so-called non-informative prior distribution needs to be produced and assigned to $\Theta$ that reflects this state of affairs: if $\theta$ is univariate (that is, a single number), then the rules developed by Jeffreys [1961] often prove satisfactory; if $\theta$ is multivariate (that is, a numerical vector), then the so-called reference prior distributions are recommended [Bernardo and Smith, 2007] (these reduce to Jeffreys’s in the univariate case). These rules often produce a ${\mathit{p}}_{\Theta }$ that is improper, in the sense that ${\int }_{\mathcal{ℋ}}{\mathit{p}}_{\Theta }\left(\theta \right)\phantom{\rule{0.3em}{0ex}}\mathit{d}\theta$ diverges to infinity, where $\mathcal{ℋ}$ denotes the range of $\Theta$. (If $\Theta$ should have a discrete distribution then this integral is replaced by a sum.) Fortunately, once used in Bayes Rule (§3.5 and §6.4), improper priors often lead to proper posterior probability distributions.

#### 6.3 Likelihood Function

The empirical data $\mathit{x}$ (which may be a single number, a numerical vector, or a data structure of still greater complexity) are modeled as realized values of a random variable $\mathit{X}$ whose probability density describes the corresponding dispersion of values. This density must depend on $\theta$, which is another way of saying that the data are informative about $\theta$ (otherwise there would be nothing to be gained by observing them). In fact, this is the density of the conditional probability distribution of $\mathit{X}$ given that $\Theta =\theta$. Choosing a specific functional form for it generally is a non-trivial exercise: it involves defining a statistical model that correctly captures the dispersion of values likely to be obtained in the experiment that produces them.
Once the data $\mathit{x}$ are in hand, ${\mathit{p}}_{\mathit{X}|\Theta }\left(\mathit{x}|\theta \right)$ becomes a function of $\theta$ alone, being largest for values of $\theta$ that make the data appear most likely. Interpreted in this way, it is called the likelihood function, written ${\mathit{L}}_{\mathit{x}}\left(\theta \right)={\mathit{p}}_{\mathit{X}|\Theta }\left(\mathit{x}|\theta \right)$. As such, it still is non-negative, but its integral (or sum, if $\mathit{X}$’s distribution should be discrete) over the range of $\Theta$ need not be 1.

#### 6.4 Posterior Distribution

Suppose that both $\mathit{X}$ given that $\Theta =\theta$ and $\Theta$ have continuous distributions with densities ${\mathit{p}}_{\mathit{X}|\Theta }$ and ${\mathit{p}}_{\Theta }$. In these circumstances, Bayes rule becomes ${\mathit{p}}_{\Theta |\mathit{X}}\left(\theta |\mathit{x}\right)=\frac{{\mathit{p}}_{\mathit{X}|\Theta }\left(\mathit{x}|\theta \right)\phantom{\rule{0.3em}{0ex}}{\mathit{p}}_{\Theta }\left(\theta \right)}{{\int }_{\mathcal{ℋ}}{\mathit{p}}_{\mathit{X}|\Theta }\left(\mathit{x}|\eta \right)\phantom{\rule{0.3em}{0ex}}{\mathit{p}}_{\Theta }\left(\eta \right)\phantom{\rule{0.3em}{0ex}}\mathit{d}\eta }.$ (3) The function ${\mathit{p}}_{\Theta |\mathit{X}}$, which is defined over the range of $\Theta$ for each fixed value $\mathit{x}$, is the density of the posterior distribution of the value of the quantity of interest $\Theta$ given the data. In some cases this can be computed in closed form, in many others it cannot. In all cases it is possible to obtain a sample from this posterior distribution by application of a procedure known as Markov Chain Monte Carlo (MCMC) [Gelman et al., 2003]. This sample can then be summarized as described in §5.8.

#### 6.5 example: Influenza

Once infected by influenza A virus, an epithelial cell of the upper respiratory tract releases $\theta$ virions on average, which may then go on to infect other cells. This number $\theta$ depends on the volume of the cell, and we will treat it as realized value of a non-observable random variable with an exponential distribution whose expected value, $1∕\gamma$ for some $0<\gamma <1$, is known. Given $\theta$, the actual number of virions that are released is $\mathit{x}$, and this is like a realized value of a Poisson random variable with mean $\theta$.
Suppose that the prior density is ${\mathit{p}}_{\Theta }\left(\theta \right)=\gamma exp\left(-\gamma \theta \right)$, and the likelihood function is ${\mathit{L}}_{\mathit{x}}\left(\theta \right)={\mathit{p}}_{\mathit{X}|\Theta }\left(\mathit{x}|\theta \right)={\theta }^{\mathit{x}}exp\left(-\theta \right)∕\mathit{x}!$, for $\theta >0$. The posterior distribution of $\Theta$ given $\mathit{x}$ belongs to the gamma family, and has expected value $\left(\mathit{x}+1\right)∕\left(\gamma +1\right)$, variance $\left(\mathit{x}+1\right)∕{\left(\gamma +1\right)}^{2}$, and density ${\mathit{p}}_{\Theta |\mathit{X}}\left(\theta |\mathit{x}\right)={\left(\gamma +1\right)}^{\mathit{x}+1}{\theta }^{\mathit{x}}exp\left[-\left(\gamma +1\right)\theta \right]∕\mathit{x}!$ for $\theta >0$.

#### 6.6 example: Sleep Hours

The differences between the numbers of additional hours of sleep that ten patients gained when using two soporific drugs, described in examples given by Student [1908] and Fisher [1973, §24], were 1.2 h, 2.4 h, 1.3 h, 1.3 h, 0.0 h, 1.0 h, 1.8 h, 0.8 h, 4.6 h, and 1.4 h. Suppose that, given $\mu$ and $\sigma$, these are realized values of independent Gaussian random variables with mean $\mu$ and variance ${\sigma }^{2}$. Let $\overline{\mathit{x}}$ denote their average, and ${\mathit{s}}^{2}$ denote the sum of their squared deviations from $\overline{\mathit{x}}$ divided by $\mathit{n}-1=9$, where $\mathit{n}=10$ denotes the number of differences. In these circumstances, the likelihood function is ${\mathit{L}}_{\overline{\mathit{x}},{\mathit{s}}^{2}}\left(\mu ,{\sigma }^{2}\right)={\left(2\pi {\sigma }^{2}\right)}^{-\mathit{n}∕2}exp\left\{-\left[\mathit{n}{\left(\overline{\mathit{x}}-\mu \right)}^{2}+\left(\mathit{n}-1\right){\mathit{s}}^{2}\right]∕\left(2{\sigma }^{2}\right)\right\}$. Assume, in addition, that $\mu$ and $\sigma$ are realized values of non-observable random variables $\mathit{M}$ and $\Sigma$ that are independent a priori and such that $\mathit{M}$ and $log\Sigma$ are uniformly distributed between $-\infty$ and $+\infty$ (both improper prior distributions).
Then, given $\overline{\mathit{x}}$ and $\mathit{s}$, $\left(\mu -\overline{\mathit{x}}\right)∕\left(\mathit{s}∕\sqrt{\mathit{n}}\right)$ is like a realized value of a random variable with a Student’s $\mathit{t}$ distribution with $\mathit{n}-1=9$ degrees of freedom, and $\left(\mathit{n}-1\right){\mathit{s}}^{2}∕{\sigma }^{2}$ is like a realized value of a random variable with a chi-squared distribution with $\mathit{n}-1$ degrees of freedom [Box and Tiao, 1973, Theorem 2.2.1]. Therefore, the expected value of the posterior distribution of the mean difference of hours of sleep gained is $\overline{\mathit{x}}=1.58$ h, and the standard deviation is $\left(\mathit{s}∕\sqrt{\mathit{n}}\right)\sqrt{\left(\mathit{n}-1\right)∕\left(\mathit{n}-3\right)}=0.44$ h. A 95 % probability interval for $\mu$ ranges from 0.7 h to 2.5 h, and a similar one for $\sigma$ ranges from 0.8 h to 2.2 h. Suppose that ${\mathit{M}}_{\overline{\mathit{x}},\mathit{s}}$ and ${\Sigma }_{\overline{\mathit{x}},\mathit{s}}$ are the counterparts of $\mathit{M}$ and $\Sigma$ once the information in the data has been taken into account: that is, their probability distribution is the joint (or, bivariate) posterior probability distribution given the data. Even though $\mathit{M}$ and $\Sigma$ were assumed to be independent a priori, ${\mathit{M}}_{\overline{\mathit{x}},\mathit{s}}$ and ${\Sigma }_{\overline{\mathit{x}},\mathit{s}}$ turn out to be dependent a posteriori (that is, given the data), but their correlation is zero [Lindley, 1965b, §5.4].

#### 6.7 example: Hurricanes

A major hurricane is one of category 3, 4, or 5 on the Saffir-Simpson Hurricane Scale [Simpson, 1974]: its central pressure is no more than 945 mbar (94 500 Pa), it has winds of at least 111 mph (49.6 m s⁻¹), generates sea surges of 9 feet (2.7 m) or greater, and has the potential to cause extensive damage. The numbers of major hurricanes that struck the U.S. mainland directly, in each decade starting with 1851–1860 and ending with 2001–2010, are: 6, 1, 7, 5, 8, 4, 7, 5, 8, 10, 8, 6, 4, 4, 5, 7 [Blake et al., 2011].
Let $\mathit{n}=16$ denote the number of decades, ${\mathit{x}}_{1},\dots ,{\mathit{x}}_{\mathit{n}}$ denote the corresponding counts, and $\mathit{s}={\mathit{x}}_{1}+\cdots +{\mathit{x}}_{\mathit{n}}$. Suppose that one wishes to predict $\mathit{y}$, the number of such hurricanes in the decade 2011–2020. Assume that the mean number of such hurricanes per decade will have remained constant between 1851 and 2010 (certainly a questionable assumption), with unknown value $\lambda$, and that, conditionally on this value, ${\mathit{x}}_{1}$, …, ${\mathit{x}}_{\mathit{n}}$, and $\mathit{y}$ are realized values of independent Poisson random variables ${\mathit{X}}_{1}$, …, ${\mathit{X}}_{\mathit{n}}$ (observable), $\mathit{Y}$ (non-observable), all with mean value $\lambda$: their common probability density is ${\mathit{p}}_{\mathit{X}|\Lambda }\left(\mathit{k}|\lambda \right)={\lambda }^{\mathit{k}}exp\left(-\lambda \right)∕\mathit{k}!$ for $\mathit{k}=0,1,2,\dots \phantom{\rule{0.3em}{0ex}}$. This model is commonly used for phenomena that result from the cumulative effect of many improbable events [Feller, 1968, XI.6b]. Even though the goal is to predict $\mathit{Y}$, the fact that there is no a priori knowledge about $\lambda$ other than that it must be positive requires that this be modeled as the (non-observable) value of a random variable $\Lambda$ whose probability distribution must reflect this ignorance. (According to the Bayesian paradigm, all states of knowledge, even complete ignorance, have to be modeled using probability distributions.) If the prior distribution chosen for $\Lambda$ is the reference prior distribution [Berger, 2006; Bernardo, 1979; Bernardo and Smith, 2000], then the value of its probability density ${\mathit{p}}_{\Lambda }$ at $\lambda$ should be proportional to $1∕\sqrt{\lambda }$ [Bernardo and Smith, 2000, A.2], an improper prior probability density.
However, the corresponding posterior distribution for $\Lambda$ is proper: in fact it is a gamma distribution with expected value $\left(\mathit{s}+\frac{1}{2}\right)∕\mathit{n}$ and probability density function ${\mathit{p}}_{\Lambda |{\mathit{X}}_{1},\dots ,{\mathit{X}}_{\mathit{n}}}$ such that ${\mathit{p}}_{\Lambda |{\mathit{X}}_{1},\dots ,{\mathit{X}}_{\mathit{n}}}\left(\lambda |{\mathit{x}}_{1},\dots ,{\mathit{x}}_{\mathit{n}}\right)={\mathit{n}}^{\mathit{s}+1∕2}{\lambda }^{\mathit{s}-1∕2}exp\left(-\mathit{n}\lambda \right)∕\Gamma \left(\mathit{s}+1∕2\right)$ for $\lambda >0$. (4) However, what is needed for the aforementioned prediction is the conditional distribution of $\mathit{Y}$ given the observed counts: the so-called predictive distribution [Schervish, 1995, Page 18]. If $\pi$ denotes the corresponding density, then $\pi \left(\mathit{y}\right)={\int }_{0}^{\infty }{\mathit{p}}_{\mathit{X}|\Lambda }\left(\mathit{y}|\lambda \right)\phantom{\rule{0.3em}{0ex}}{\mathit{p}}_{\Lambda |{\mathit{X}}_{1},\dots ,{\mathit{X}}_{\mathit{n}}}\left(\lambda |{\mathit{x}}_{1},\dots ,{\mathit{x}}_{\mathit{n}}\right)\phantom{\rule{0.3em}{0ex}}\mathit{d}\lambda$ for $\mathit{y}=0,1,2,\dots \phantom{\rule{0.3em}{0ex}}$. This defines a discrete probability distribution on the non-negative integers, often called a Poisson-gamma mixture distribution [Bernardo and Smith, 2000, §3.2.2]. For our data, since $\pi$ achieves a maximum at $\mathit{y}=5$ (Figure 5), this is the (a posteriori) most likely number $\mathit{Y}$ of major hurricanes that will hit the U.S. mainland in 2011–2020. The mean of the posterior distribution is $6$. Since the probability is 0.956 that $\mathit{Y}$’s value lies between 2 and 11 (inclusive), the interval whose end-points are 2 and 11 is a $95.6\phantom{\rule{0.3em}{0ex}}%$ coverage interval for $\mathit{Y}$.

#### 6.8 example: Hypothyroidism

Altman [1991, Table 9.6] lists measurement results from Hulse et al. [1979], for the concentration of thyroxine in the serum of sixteen children diagnosed with hypothyroidism, of which nine had slight or no symptoms, and the other seven had marked symptoms. The values measured for the former, all in units of nmol/L, were 34, 45, 49, 55, 58, 59, 60, 62, and 86; and for the latter they were 5, 8, 18, 24, 60, 84, and 96. The averages are $\overline{\mathit{x}}=56.4$ and $\overline{\mathit{y}}=42.1$, and the standard deviations are $\mathit{s}=14.2$ and $\mathit{t}=37.5$.
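The group summaries just quoted can be reproduced directly from the data (a short sketch of ours; note that the standard deviations use the divisor $\mathit{n}-1$):

```python
import statistics

# Thyroxine concentrations (nmol/L) reported by Hulse et al. (1979).
slight = [34, 45, 49, 55, 58, 59, 60, 62, 86]   # slight or no symptoms
marked = [5, 8, 18, 24, 60, 84, 96]             # marked symptoms

for label, values in (("slight or none", slight), ("marked", marked)):
    average = statistics.fmean(values)
    sd = statistics.stdev(values)               # sample sd, divisor n - 1
    print(f"{label}: average {average:.1f}, standard deviation {sd:.1f}")
```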
Our goal is to produce a probability interval for the difference between the corresponding means, $\mu$ and $\nu$, say, when nothing is assumed known a priori either about these means or about the corresponding standard deviations, $\sigma$ and $\tau$, which may be different. Given the values of these four parameters, suppose that the values measured in the $\mathit{m}=9$ children with slight or no symptoms are observed values of independent Gaussian random variables ${\mathit{U}}_{1},\dots ,{\mathit{U}}_{\mathit{m}}$ with common mean $\mu$ and standard deviation $\sigma$, and that those measured in the $\mathit{n}=7$ children with marked symptoms are observed values of independent Gaussian random variables ${\mathit{V}}_{1},\dots ,{\mathit{V}}_{\mathit{n}}$, also independent of the $\left\{{\mathit{U}}_{\mathit{i}}\right\}$, with common mean $\nu$ and standard deviation $\tau$. The problem of constructing a probability interval for $\mu -\nu$ under these circumstances is known as the Behrens-Fisher problem [Ghosh and Kim, 2001]. For the Bayesian solution, we regard $\mu$, $\nu$, $\sigma$, and $\tau$ as realized values of non-observable random variables $\mathit{M}$, $\mathit{N}$, $\Sigma$, and $\mathit{T}$, assumed independent a priori and such that $\mathit{M}$, $\mathit{N}$, $log\Sigma$, and $log\mathit{T}$ all are uniformly distributed over the real numbers (hence have improper prior distributions). The corresponding posterior distributions all are proper provided $\mathit{m}\ge 2$ and $\mathit{n}\ge 2$. However, the density of the posterior probability distribution of $\mathit{M}-\mathit{N}$ given the data cannot be computed in closed form. This problem in Bayesian inference, and other problems much more demanding than this, can be solved using the MCMC sampling technique mentioned in §6.4, for which there exist several generic software implementations: we obtained the results presented below using function metrop of the R package mcmc [Geyer, 2010].
Typically, all that is needed is the logarithm of the numerator of Bayes formula (3). Leaving out constants that do not involve $\mu$, $\nu$, $\sigma$ or $\tau$, this is $-\left(\mathit{m}+1\right)log\left(\sigma \right)-\left(\mathit{n}+1\right)log\left(\tau \right)-\frac{\mathit{m}{\left(\mu -\overline{\mathit{x}}\right)}^{2}+\left(\mathit{m}-1\right){\mathit{s}}^{2}}{2{\sigma }^{2}}-\frac{\mathit{n}{\left(\nu -\overline{\mathit{y}}\right)}^{2}+\left(\mathit{n}-1\right){\mathit{t}}^{2}}{2{\tau }^{2}}.$ MCMC produces a sample of suitably large size $\mathit{K}$ from the joint posterior distribution of $\mathit{M}$, $\mathit{N}$, $\Sigma$, and $\mathit{T}$, given the data, say $\left({\mu }_{1},{\nu }_{1},{\sigma }_{1},{\tau }_{1}\right)$, …, $\left({\mu }_{\mathit{K}},{\nu }_{\mathit{K}},{\sigma }_{\mathit{K}},{\tau }_{\mathit{K}}\right)$. The 95 % probability interval for the difference in mean levels of thyroxine in the serum of the two groups, which extends from to , and Figure 6, are based on a sample of size $\mathit{K}=4.5×1{0}^{6}$. The probability density in this figure, and that probability interval, were computed as described in MC4.c and MC4.d of §5.8, only applied to the differences $\left\{{\mu }_{\mathit{k}}-{\nu }_{\mathit{k}}\right\}$. In this particular case it is possible to ascertain the correctness of the results owing to an interesting, albeit surprising, result: $\mathit{M}$ and $\mathit{N}$ are independent a posteriori, and have probability distributions that are re-scaled, shifted versions of Student’s $\mathit{t}$ distributions with $\mathit{m}-1$ and $\mathit{n}-1$ degrees of freedom [Box and Tiao, 1973, 2.5.2]. Therefore, by application of the Monte Carlo method of §5.8, one may obtain a sample from the posterior distribution of the difference $\mathit{M}-\mathit{N}$ independently of the MCMC procedure: the results are depicted in Figure 6 (where they are labeled “Jeffreys (Exact)”), and are essentially indistinguishable from the results of MCMC.
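The sampler itself need not be elaborate. The following bare-bones random-walk Metropolis sketch (ours, written in Python rather than with the R function metrop that the text uses; the starting point, proposal scales, burn-in, and chain length are all illustrative choices, not tuned values) targets exactly the log-density displayed above:

```python
import math
import random

# Data summaries from the thyroxine example (nmol/L).
m, xbar, s = 9, 56.4, 14.2
n, ybar, t = 7, 42.1, 37.5

def log_post(mu, nu, sigma, tau):
    """Logarithm of the numerator of Bayes formula for this model."""
    if sigma <= 0 or tau <= 0:
        return -math.inf
    return (-(m + 1) * math.log(sigma) - (n + 1) * math.log(tau)
            - (m * (mu - xbar) ** 2 + (m - 1) * s ** 2) / (2 * sigma ** 2)
            - (n * (nu - ybar) ** 2 + (n - 1) * t ** 2) / (2 * tau ** 2))

random.seed(4)
state = [xbar, ybar, s, t]            # start at the sample summaries
lp = log_post(*state)
steps = (3.0, 9.0, 2.5, 8.0)          # proposal scales (illustrative)
diffs = []                            # retained samples of mu - nu

for i in range(200_000):
    prop = [v + random.gauss(0.0, w) for v, w in zip(state, steps)]
    lp_prop = log_post(*prop)
    if math.log(random.random()) < lp_prop - lp:   # Metropolis acceptance
        state, lp = prop, lp_prop
    if i >= 20_000:                   # discard burn-in
        diffs.append(state[0] - state[1])

diffs.sort()
k = len(diffs)
print("posterior mean of M - N:", round(sum(diffs) / k, 1))
print("95 % interval:", round(diffs[int(0.025 * k)], 1),
      "to", round(diffs[int(0.975 * k)], 1))
```

A production run would, as the text notes, use a far longer chain (and a tuned proposal) than this sketch does.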
This same figure shows yet another posterior density that differs hardly at all from the posterior density corresponding to Jeffreys’s prior: this alternative result corresponds to the “matching” prior distribution (also improper) derived by Ghosh and Kim [2001], whose density is proportional to $\left({\sigma }^{2}∕\mathit{m}+{\tau }^{2}∕\mathit{n}\right)∕{\left(\sigma \tau \right)}^{3}$. This illustrates a generally good practice: that the sensitivity of the results of Bayesian analysis should be evaluated by comparing how they vary when different but comparably acceptable priors are used.

### 7 Acknowledgments

Antonio Possolo thanks colleagues and former colleagues from the Working Group 1 of the Joint Committee for Guides in Metrology, who offered valuable comments and suggestions in multiple discussions of an early draft of a document that he had submitted to the consideration of this Working Group and that included material now in the present document: Walter Bich, Maurice Cox, René Dybkær, Charles Ehrlich, Clemens Elster, Tyler Estler, Brynn Hibbert, Hidetaka Imai, Willem Kool, Lars Nielsen, Leslie Pendrill, Lorenzo Peretto, Steve Sidney, Adriaan van der Veen, Graham White, and Wolfgang Wöger. Antonio Possolo is particularly grateful to Tyler Estler for suggestions regarding §2.3 and §3.1, and to Graham White for suggestions and corrections that greatly improved §3.6, on the tuberculin test. Both authors thank their common NIST colleagues Andrew Rukhin and Jack Wang for suggesting many corrections and improvements. Mary Dal-Favero and Alan Heckert, both from NIST, kindly facilitated the deployment of this material on the World Wide Web.

### References

D. G. Altman. Practical Statistics for Medical Research. Chapman & Hall/CRC, Boca Raton, FL, 1991. Reprinted 1997. American Thoracic Society. Diagnostic standards and classification of tuberculosis in adults and children. American Journal of Respiratory and Critical Care Medicine, 161:1376–1395, 1999. J.
Berger. The case for objective Bayesian analysis. Bayesian Analysis, 1(3):385–402, 2006. URL http://ba.stat.cmu.edu/. J. Bernardo and A. Smith. Bayesian Theory. John Wiley & Sons, New York, 2000. J. Bernardo and A. Smith. Bayesian Theory. John Wiley & Sons, Chichester, England, 2nd edition, 2007. J. M. Bernardo. Reference posterior distributions for Bayesian inference. Journal of the Royal Statistical Society, 41:113–128, 1979. E. S. Blake, C. W. Landsea, and E. J. Gibney. The deadliest, costliest, and most intense United States tropical cyclones from 1851 to 2010 (and other frequently requested hurricane facts). Technical Report Technical Memorandum NWS NHC-6, NOAA, National Weather Service, National Hurricane Center, Miami, Florida, August 2011. L. Bovens and W. Rabinowicz. Democratic answers to complex questions — an epistemic perspective. Synthese, 150:131–153, 2006. G. E. P. Box and G. C. Tiao. Bayesian Inference in Statistical Analysis. Addison-Wesley, Reading, Massachusetts, 1973. T. W. Cannon. Light and radiation. In C. DeCusatis, editor, Handbook of Applied Photometry, chapter 1, pages 1–32. Springer Verlag, New York, New York, 1998. R. Carnap. Logical Foundations of Probability. University of Chicago Press, Chicago, Illinois, 2nd edition, 1962. G. Casella and R. L. Berger. Statistical Inference. Duxbury, Pacific Grove, California, 2nd edition, 2002. N. Clee. Who’s the daddy of them all? In Observer Sport Monthly. Guardian News and Media Limited, Manchester, UK, Sunday March 4, 2007. R. T. Clemen and R. L. Winkler. Combining probability distributions from experts in risk analysis. Risk Analysis, 19:187–203, 1999. R. T. Cox. Probability, frequency and reasonable expectation. American Journal of Physics, 14:1–13, 1946. R. T. Cox. The Algebra of Probable Inference. The Johns Hopkins Press, Baltimore, Maryland, 1961. A. C. Davison and D. Hinkley. Bootstrap Methods and their Applications. Cambridge University Press, New York, NY, 1997. B. de Finetti. 
La prévision: ses lois logiques, ses sources subjectives. Annales de l’Institut Henri Poincaré, 7:1–68, 1937. B. de Finetti. Theory of Probability: A critical introductory treatment. John Wiley & Sons, Chichester, 1990. Two volumes, translated from the Italian and with a preface by Antonio Machì and Adrian Smith, with a foreword by D. V. Lindley, Reprint of the 1975 translation. M. H. DeGroot. A conversation with Persi Diaconis. Statistical Science, 1(3):319–334, August 1986. M. H. DeGroot and M. J. Schervish. Probability and Statistics. Addison-Wesley, 4th edition, 2011. P. Diaconis and D. Ylvisaker. Quantifying prior opinion. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley, and A. Smith, editors, Bayesian Statistics, volume 2, pages 163–175. North-Holland, Amsterdam, 1985. F. W. Dyson, A. S. Eddington, and C. Davidson. A determination of the deflection of light by the sun’s gravitational field, from observations made at the total eclipse of May 29, 1919. Philosophical Transactions of the Royal Society of London, Series A, Containing Papers of a Mathematical or Physical Character, 220:291–333, 1920. B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall, London, UK, 1993. W. Feller. An Introduction to Probability Theory and Its Applications, volume I. John Wiley & Sons, New York, 3rd edition, 1968. Revised Printing. W. Feller. An Introduction to Probability Theory and Its Applications, volume II. John Wiley & Sons, New York, 2nd edition, 1971. R. A. Fisher. Statistical Methods for Research Workers. Hafner Publishing Company, New York, NY, 14th edition, 1973. B. Fitelson. Likelihoodism, Bayesianism, and relational confirmation. Synthese, 156(3):473–489, 2007. P. H. Garthwaite, J. B. Kadane, and A. O’Hagan. Statistical methods for eliciting probability distributions. Journal of the American Statistical Association, 100:680–701, June 2005. C. F. Gauss. Theoria combinationis observationum erroribus minimis obnoxiae. In Werke, Band IV.
Könighlichen Gesellschaft der Wissenschaften, Göttingen, 1823. A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall / CRC, 2nd edition, 2003. C. J. Geyer. mcmc: Markov Chain Monte Carlo, 2010. URL http://CRAN.R-project.org/package=mcmc. R package version 0.8. M. Ghosh and Y.-H. Kim. The Behrens-Fisher problem revisited: A Bayes-Frequentist synthesis. The Canadian Journal of Statistics, 29(1): 5–17, March 2001. D. Gillies. Philosophical Theories of Probability. Routledge, London, UK, 2000. C. Glymour. Theory and evidence. Princeton University Press, Princeton, New Jersey, 1980. A. Hájek. Interpretations of probability. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University, Stanford, California, 2007. URL http://plato.stanford.edu/archives/win2007/entries/probability-interpret/. S. Hartmann and J. Sprenger. Judgment aggregation and the problem of tracking the truth. Synthese, pages 1–13, 2011. URL http://dx.doi.org/10.1007/s11229-011-0031-5. P.G. Hoel, S. C. Port, and C. J. Stone. Introduction to Probability Theory. Houghton Mifflin, 1971a. P.G. Hoel, S. C. Port, and C. J. Stone. Introduction to Statistical Theory. Houghton Mifflin, 1971b. M. Holden, M. R. Dubin, and P. H. Diamond. Frequency of negative intermediate-strength tuberculin sensitivity in patients with active tuberculosis. New England Journal of Medicine, 285:1506–1509, 1971. J. A. Hulse, D. Jackson, D. B. Grant, P. G. H. Byfield, and R. Hoffenberg. Different measurements of thyroid function in hypothyroid infants diagnosed by screening. Acta Pædiatrica, 68:21–25, 1979. E. T. Jaynes. Probability Theory in Science and Engineering. Colloquium Lectures in Pure and Applied Science, No. 4. Socony Mobil Oil Company, Dallas, Texas, 1958. E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, Cambridge, UK, 2003. G. L. 
Bretthorst, Editor. H. Jeffreys. Theory of Probability. Oxford University Press, London, 3rd edition, 1961. Corrected Impression, 1967. Joint Committee for Guides in Metrology. Evaluation of measurement data — Guide to the expression of uncertainty in measurement. International Bureau of Weights and Measures (BIPM), Sèvres, France, September 2008a. URL http://www.bipm.org/en/publications/guides/gum.html. BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, JCGM 100:2008, GUM 1995 with minor corrections. Joint Committee for Guides in Metrology. Evaluation of measurement data — Supplement 1 to the “Guide to the expression of uncertainty in measurement” — Propagation of distributions using a Monte Carlo method. International Bureau of Weights and Measures (BIPM), Sèvres, France, 2008b. URL http://www.bipm.org/en/publications/guides/gum.html. BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, JCGM 101:2008. Joint Committee for Guides in Metrology. International vocabulary of metrology — Basic and general concepts and associated terms (VIM). International Bureau of Weights and Measures (BIPM), Sèvres, France, 2008c. URL http://www.bipm.org/en/publications/guides/vim.html. BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, JCGM 200:2008. R. Köhler. Photometric and radiometric quantities. In C. DeCusatis, editor, Handbook of Applied Photometry, chapter 2, pages 33–54. Springer Verlag, New York, New York, 1998. D. Lindley. Understanding Uncertainty. John Wiley & Sons, Hoboken, New Jersey, 2006. D. V. Lindley. Introduction to Probability and Statistics from a Bayesian Viewpoint — Part 1, Probability. Cambridge University Press, Cambridge, UK, 1965a. D. V. Lindley. Introduction to Probability and Statistics from a Bayesian Viewpoint — Part 2, Inference. Cambridge University Press, Cambridge, UK, 1965b. D. V. Lindley. Reconciliation of probability distributions. Operations Research, 31(5):866–880, September-October 1983. D. V. Lindley. Making Decisions. 
John Wiley & Sons, London, 2nd edition, 1985. D. H. Mellor. Probability: A Philosophical Introduction. Routledge, New York, 2005. N. Metropolis and S. Ulam. The Monte Carlo Method. Journal of the American Statistical Association, 44:335–341, September 1949. N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1091, 1953. M. G. Morgan and M. Henrion. Uncertainty — A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, New York, NY, first paperback edition, 1992. 10th printing, 2007. P. A. Morris. Combining expert judgments: A bayesian approach. Management Science, 23(7):679–693, March 1977. J. Neyman. Frequentist probability and frequentist statistics. Synthese, 36(1):97–131, 1977. K. R. Popper. The propensity interpretation of probability. British Journal of the Philosophy of Science, 10:25–42, 1959. A. Possolo. Copulas for uncertainty analysis. Metrologia, 47:262–271, 2010. R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2010. URL http://www.R-project.org. ISBN 3-900051-07-0. F. P. Ramsey. Truth and probability. In R.B. Braithwaite, editor, The Foundations of Mathematics and other Logical Essays, chapter VII, pages 156–198. Harcourt, Brace and Company, New York, 1999 electronic edition, 1926, 1931. URL http://homepage.newschool.edu/het/texts/ramsey/ramsess.pdf. H. Reichenbach. The Theory of Probability: An Inquiry into the Logical and Mathematical Foundations of the Calculus of Probability. University of California Press, Berkeley, California, 1949. English translation of the 1935 German edition. C. P. Robert. The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. Springer, New York, NY, second edition, 2007. L. J. Savage. The Foundations of Statistics. 
Dover Publications, New York, New York, 1972. M. J. Schervish. Theory of Statistics. Springer Series in Statistics. Springer Verlag, New York, NY, 1995. B. W. Silverman. Density Estimation. Chapman and Hall, London, 1986. R. H. Simpson. The hurricane disaster potential scale. Weatherwise, 27: 169–186, 1974. M. Stone. The opinion pool. The Annals of Mathematical Statistics, 32: 1339–1342, December 1961. Student. The probable error of a mean. Biometrika, 6(1):1–25, March 1908. B. N. Taylor and C. E. Kuyatt. Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results. National Institute of Standards and Technology, Gaithersburg, MD, 1994. URL http://physics.nist.gov/Pubs/guidelines/TN1297/tn1297s.pdf. NIST Technical Note 1297. R. von Mises. Probability, Statistics and Truth. Dover Publications, New York, 2nd revised edition, 1981. ISBN 0486242145. Translation of the 3rd German edition.
2016-09-24T22:37:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 732, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8800507187843323, "perplexity": 480.8471175060894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659512.19/warc/CC-MAIN-20160924173739-00307-ip-10-143-35-109.ec2.internal.warc.gz"}
https://lammps.sandia.gov/doc/pair_cs.html
pair_style born/coul/wolf/cs/gpu command

Syntax

pair_style style args

• style = born/coul/long/cs or buck/coul/long/cs or born/coul/dsf/cs or born/coul/wolf/cs
• args = list of arguments for a particular style

born/coul/long/cs args = cutoff (cutoff2)
  cutoff = global cutoff for non-Coulombic (and Coulombic if only 1 arg) (distance units)
  cutoff2 = global cutoff for Coulombic (optional) (distance units)
buck/coul/long/cs args = cutoff (cutoff2)
  cutoff = global cutoff for Buckingham (and Coulombic if only 1 arg) (distance units)
  cutoff2 = global cutoff for Coulombic (optional) (distance units)
born/coul/dsf/cs args = alpha cutoff (cutoff2)
  alpha = damping parameter (inverse distance units)
  cutoff = global cutoff for non-Coulombic (and Coulombic if only 1 arg) (distance units)
  cutoff2 = global cutoff for Coulombic (distance units)
born/coul/wolf/cs args = alpha cutoff (cutoff2)
  alpha = damping parameter (inverse distance units)
  cutoff = global cutoff for Buckingham (and Coulombic if only 1 arg) (distance units)
  cutoff2 = global cutoff for Coulombic (optional) (distance units)

Examples

pair_style born/coul/long/cs 10.0 8.0
pair_coeff 1 1 6.08 0.317 2.340 24.18 11.51

pair_style buck/coul/long/cs 10.0
pair_style buck/coul/long/cs 10.0 8.0
pair_coeff * * 100.0 1.5 200.0
pair_coeff 1 1 100.0 1.5 200.0 9.0

pair_style born/coul/dsf/cs 0.1 10.0 12.0
pair_coeff * * 0.0 1.00 0.00 0.00 0.00
pair_coeff 1 1 480.0 0.25 0.00 1.05 0.50

pair_style born/coul/wolf/cs 0.25 10.0 12.0
pair_coeff * * 0.0 1.00 0.00 0.00 0.00
pair_coeff 1 1 480.0 0.25 0.00 1.05 0.50

Description

These pair styles are designed to be used with the adiabatic core/shell model of (Mitchell and Fincham). See the Howto coreshell doc page for an overview of the model as implemented in LAMMPS.
The styles with a coul/long term are identical to the pair_style born/coul/long and pair_style buck/coul/long styles, except that they correctly treat the special case where the distance between two charged core and shell atoms in the same core/shell pair approaches r = 0.0. This needs special treatment when a long-range solver for Coulombic interactions is also used, i.e. via the kspace_style command.

More specifically, the short-range Coulomb interaction between a core and its shell should be turned off using the special_bonds command by setting the 1-2 weight to 0.0, which works because the core and shell atoms are bonded to each other. This induces a long-range correction approximation which fails at small distances (r < ~1.0e-8). Therefore, the Coulomb term used to calculate the correction factor is extended by a minimal distance (r_min = 1.0e-6) when the interaction between a core/shell pair is treated:

E = C * Qi * Qj / (epsilon * (r + r_min))

where C is an energy-conversion constant, Qi and Qj are the charges on the core and shell, epsilon is the dielectric constant and r_min is the minimal distance.

The pair style born/coul/dsf/cs is identical to the pair_style born/coul/dsf style, which uses the damped shifted force model as in coul/dsf to compute the Coulomb contribution. This approach does not require a long-range solver, so the only correction is the addition of a minimal distance to avoid the possible r = 0.0 case for a core/shell pair.

The pair style born/coul/wolf/cs is identical to the pair_style born/coul/wolf style, which uses the Wolf summation as in coul/wolf to compute the Coulomb contribution. This approach does not require a long-range solver, so the only correction is the addition of a minimal distance to avoid the possible r = 0.0 case for a core/shell pair.

Styles with a gpu, intel, kk, omp, or opt suffix are functionally the same as the corresponding style without the suffix.
They have been optimized to run faster, depending on your available hardware, as discussed on the Speed packages doc page. The accelerated styles take the same arguments and should produce the same results, except for round-off and precision issues.

These accelerated styles are part of the GPU, USER-INTEL, KOKKOS, USER-OMP and OPT packages, respectively. They are only enabled if LAMMPS was built with those packages. See the Build package doc page for more info.

You can specify the accelerated styles explicitly in your input script by including their suffix, or you can use the -suffix command-line switch when you invoke LAMMPS, or you can use the suffix command in your input script. See the Speed packages doc page for more instructions on how to use the accelerated styles effectively.

Mixing, shift, table, tail correction, restart, rRESPA info: See the corresponding doc pages for pair styles without the "cs" suffix to see how mixing, shifting, tabulation, tail correction, restarting, and rRESPA are handled by these pair styles.

Restrictions

These pair styles are part of the CORESHELL package. They are only enabled if LAMMPS was built with that package. See the Build package doc page for more info.
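The effect of the r_min shift can be seen in a few lines outside of LAMMPS. The sketch below (plain Python with arbitrary illustrative values for C, the charges and epsilon; these are not LAMMPS internals) evaluates the shifted Coulomb term from the Description above and shows that it stays finite as r approaches 0.

```python
# Sketch of the r_min-shifted Coulomb term used for core/shell pairs:
# E = C * Qi * Qj / (epsilon * (r + r_min)).
# All parameter values here are illustrative placeholders.

def coulomb_shifted(r, qi=1.0, qj=-1.0, C=1.0, eps=1.0, r_min=1.0e-6):
    """Coulomb energy with the minimal-distance shift."""
    return C * qi * qj / (eps * (r + r_min))

print(coulomb_shifted(0.0))  # finite: C*qi*qj/(eps*r_min) instead of a divergence
print(coulomb_shifted(1.0))  # at typical distances the shift is negligible
```

With r_min = 1.0e-6 the energy at r = 0 is capped at C·Qi·Qj/(epsilon·r_min) rather than diverging, while at ordinary interaction distances the shift changes the value only at the sixth decimal.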
2018-10-22T01:22:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5783002972602844, "perplexity": 5599.099993638467}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514443.85/warc/CC-MAIN-20181022005000-20181022030500-00360.warc.gz"}
https://pvpmc.sandia.gov/modeling-steps/2-dc-module-iv/diode-equivalent-circuit-models/de-soto-5-parameter-model/
The De Soto model (De Soto et al., 2006), also known as the five-parameter model, uses the following equations to express each of the five primary parameters as a function of cell temperature $T_c$ and total absorbed irradiance $S$:

• $I_L = \frac{S}{S_{ref}} \frac{M}{M_{ref}} \left[ I_{L,ref} + \alpha_{Isc} \left( T_c - T_{c,ref} \right) \right]$
• $I_0 = I_{0,ref} \left( \frac{T_c}{T_{c,ref}} \right)^3 \exp \left[ \frac{1}{k} \left( \frac{E_g(T_{ref})}{T_{ref}} - \frac{E_g(T_c)}{T_c} \right) \right]$
• $E_g(T_c) = E_g(T_{ref}) \left[ 1 - 0.0002677 \left( T_c - T_{ref} \right) \right]$
• $R_s = \text{constant}$
• $R_{sh} = R_{sh,ref} \frac{S_{ref}}{S}$
• $n = \text{constant}$

Absorbed irradiance, $S$, is equal to POA irradiance reaching the PV cells (including incident angle reflection losses but not spectral mismatch). In each equation, the subscript "ref" refers to a value at reference conditions. In De Soto et al., 2006, the modified ideality factor $a$ is used, and expressed as a linear function of cell temperature $T_c$, which is equivalent to a constant diode ideality factor $n$. $M$, termed the "air mass modifier", represents the spectral effect, from changing atmospheric air mass and corresponding absorption, on the light current.
$M$ is the polynomial in air mass from the Sandia PV Array Performance Model (SAPM). The term $\alpha_{Isc}$ is the temperature coefficient (A/K) of short-circuit current, set equal to the temperature coefficient of the light current. The term $E_g(T_c)$ is the temperature-dependent bandgap (eV), given as a simplified first-order Taylor expansion of the experimental bandgap temperature dependence. The empirical constant 0.0002677 is representative of silicon cells at typical operating temperatures, and it is used for all cell technologies. Content for this page was contributed by Matthew Boyd (NIST) and Clifford Hansen (Sandia)
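To make the parameter translation concrete, the equations above can be evaluated directly. The sketch below is a minimal plain-Python illustration: the reference values (IL_ref, I0_ref, Rsh_ref, alpha_isc, Eg_ref) are made-up numbers rather than fitted module parameters, the air mass modifier M is held at its reference value, and temperatures are in kelvin.

```python
import math

# Minimal sketch of the De Soto temperature/irradiance corrections.
# All reference values below are illustrative placeholders, not real
# module parameters fitted per De Soto et al., 2006.

K_BOLTZ = 8.617333262e-5  # Boltzmann constant, eV/K

def desoto_params(S, T_c, S_ref=1000.0, T_ref=298.15,
                  IL_ref=5.0, I0_ref=1.0e-9, Rsh_ref=300.0,
                  alpha_isc=0.003, Eg_ref=1.121, M=1.0, M_ref=1.0):
    """Return (I_L, I_0, R_sh) at absorbed irradiance S (W/m^2), cell temp T_c (K)."""
    IL = (S / S_ref) * (M / M_ref) * (IL_ref + alpha_isc * (T_c - T_ref))
    Eg = Eg_ref * (1.0 - 0.0002677 * (T_c - T_ref))       # bandgap at T_c, eV
    I0 = I0_ref * (T_c / T_ref) ** 3 * math.exp((Eg_ref / T_ref - Eg / T_c) / K_BOLTZ)
    Rsh = Rsh_ref * S_ref / S
    return IL, I0, Rsh

# At reference conditions the corrections reduce to the reference values:
print(desoto_params(1000.0, 298.15))  # (5.0, 1e-09, 300.0)
```

Halving the irradiance doubles $R_{sh}$ (since $R_{sh} = R_{sh,ref} S_{ref}/S$), while $I_L$ scales linearly with $S$; $R_s$ and $n$ are constants and need no evaluation.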
2019-08-20T13:38:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7962932586669922, "perplexity": 2195.2431285825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315544.11/warc/CC-MAIN-20190820133527-20190820155527-00009.warc.gz"}
http://www.lanl.gov/projects/dense-plasma-theory/background/quark-gluon-plasma.php
Los Alamos National Laboratory, Dense Plasma Theory
Microphysical properties of dense, strongly coupled, and quantum plasmas

# Quark-Gluon Plasma

The quark-gluon plasma created in very high-energy nucleus-nucleus collisions exhibits a remarkably small viscosity and strong collective behavior.

Systems consisting of deconfined quarks and gluons, the fundamental constituents of matter and the mediators of the strong force, are produced under controlled laboratory conditions in reactions of heavy nuclei at ultra-relativistic energies. This so-called "quark-gluon plasma" (QGP) exists at very high temperatures and energy densities similar to those found a few microseconds after the Big Bang. The quest to discover and characterize the properties of this new state of matter via ultra-relativistic collisions of large nuclei is an active research thrust at many experimental facilities such as the Bevalac, the CERN Super Proton Synchrotron (SPS), the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC).

Thousands of tracks are recorded as the quark-gluon plasma created in a central Au+Au collision at RHIC cools down and transforms into ordinary elementary particles, which reach the detectors.

Contrary to early expectations, the data collected have revealed that the constituents of the quark-gluon plasma (QGP) created in very high-energy nucleus-nucleus collisions appear to be strongly coupled, in analogy to the strongly coupled electron-ion plasma (SCP) state. Specifically, "elliptic flow" - an almond-shaped expansion characteristic of asymmetric collisions - is consistent with hydrodynamic simulations with zero or very small viscosity \(\eta\). This suggests that the mean free paths of the quarks and gluons in the QGP are very small and the system is strongly coupled. Another important piece of evidence is the strong suppression of energetic particle and jet production rates.
These energetic particles and jets (collimated showers of particles) carry momenta hundreds of times larger than the temperature of the plasma. Nevertheless, they couple strongly to the medium and lose energy via collisional and radiative processes. The differences between the quark-gluon plasma and strongly coupled Coulomb plasmas notwithstanding, the scientific community quickly recognized the potential of strongly-coupled plasma physics to help understand the fundamental properties of the QGP, but to date the analogy has remained largely unexploited. Despite the fact that the latter interacts electromagnetically and the former interacts through the strong nuclear force, there is tremendous commonality in the intellectual approach to the theoretical and experimental tools for their characterization.
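Elliptic flow is commonly quantified by the second Fourier coefficient \(v_2\) of the azimuthal particle distribution, \(dN/d\phi \propto 1 + 2 v_2 \cos 2(\phi - \Psi)\). As a hedged illustration (plain Python with synthetic angles; it assumes a known reaction plane at \(\Psi = 0\) and ignores event-plane resolution, unlike a real heavy-ion analysis), the estimator \(v_2 = \langle \cos 2\phi \rangle\) can be checked on a sample drawn from that idealized distribution:

```python
import math, random

def sample_angles(v2, n, seed=0):
    """Draw azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2*phi) by rejection."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        # accept with probability proportional to the anisotropic distribution
        if rng.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            out.append(phi)
    return out

def v2_estimate(angles):
    """Simplest estimator, with the reaction plane fixed at Psi = 0: <cos 2*phi>."""
    return sum(math.cos(2.0 * p) for p in angles) / len(angles)

angles = sample_angles(0.1, 20000)
print(round(v2_estimate(angles), 3))  # statistically close to the input v2 = 0.1
```

A sample with no anisotropy (v2 = 0) yields an estimate consistent with zero, which is the sense in which a nonzero measured \(v_2\) signals collective, hydrodynamic-like expansion.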
2017-11-18T19:11:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41921743750572205, "perplexity": 1330.9764055558283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805023.14/warc/CC-MAIN-20171118190229-20171118210229-00180.warc.gz"}
https://par.nsf.gov/biblio/10154251-multiple-intertwined-pairing-states-temperature-sensitive-gap-anisotropy-superconductivity-nematic-quantum-critical-point
Multiple intertwined pairing states and temperature-sensitive gap anisotropy for superconductivity at a nematic quantum-critical point

Abstract
The proximity of many strongly correlated superconductors to density-wave or nematic order has led to an extensive search for fingerprints of pairing mediated by dynamical quantum-critical (QC) fluctuations of the corresponding order parameter. Here we study anisotropic $s$-wave superconductivity induced by anisotropic QC dynamical nematic fluctuations. We solve the non-linear gap equation for the pairing gap $\Delta(\theta, \omega_m)$ and show that its angular dependence strongly varies below $T_c$. We show that this variation is a signature of QC pairing and comes about because there are multiple $s$-wave pairing instabilities with closely spaced transition temperatures $T_{c,n}$. Taken alone, each instability would produce a gap $\Delta(\theta, \omega_m)$ that changes sign $8n$ times along the Fermi surface. We show that the equilibrium gap $\Delta(\theta, \omega_m)$ is a superposition of multiple components that are nonlinearly induced below the actual $T_c = T_{c,0}$, and get resonantly enhanced at $T = T_{c,n} < T_c$. This gives rise to strong temperature variation of the angular dependence of $\Delta(\theta, \omega_m)$. This variation progressively disappears away from a QC point.

Authors: ; ;
Publication Date:
NSF-PAR ID: 10154251
Journal Name: npj Quantum Materials
Volume: 4
Issue: 1
ISSN: 2397-4648
Publisher: Nature Publishing Group
National Science Foundation
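The counting statement above (each instability taken alone produces a gap that changes sign $8n$ times along the Fermi surface) can be illustrated with a toy angular form factor $\cos(4n\theta)$. This basis function is our own illustrative choice, not the paper's actual eigenfunctions:

```python
import math

# Toy check: a gap component proportional to cos(4*n*theta) changes sign
# 8n times as theta winds once around the Fermi surface.

def sign_changes(n, samples=9999):
    """Count sign changes of cos(4*n*theta) over one loop theta in [0, 2*pi)."""
    vals = [math.cos(4 * n * 2 * math.pi * i / samples) for i in range(samples)]
    pairs = zip(vals, vals[1:] + vals[:1])  # include the wrap-around pair
    return sum(1 for a, b in pairs if a * b < 0)

print([sign_changes(n) for n in (1, 2, 3)])  # [8, 16, 24]
```

A superposition of several such components with temperature-dependent weights then naturally produces an angular gap structure that varies strongly with temperature, which is the qualitative effect the abstract describes.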
2022-08-20T03:25:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 41, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.730307400226593, "perplexity": 2751.50542998148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00053.warc.gz"}
https://eva.ecdc.europa.eu/mod/forum/discuss.php?d=403
## Announcements' board

### Routine upgrade of EVA

Dear EVA users,

Please be informed that on Friday 07.10.2016 ECDC ICT will perform a routine upgrade of EVA starting at 18:00h. The upgrade will take approximately 3 hours, and during this time the platform will not be available.

We are sorry for the inconvenience and thank you for your understanding.

Liliya (on behalf of the EVA team)
2021-09-18T11:17:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375414609909058, "perplexity": 8196.203429300856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056392.79/warc/CC-MAIN-20210918093220-20210918123220-00444.warc.gz"}
https://lammps.sandia.gov/doc/compute_smd_tlsph_num_neighs.html
# compute smd/tlsph/num/neighs command

## Syntax

compute ID group-ID smd/tlsph/num/neighs

- ID, group-ID are documented in compute command
- smd/tlsph/num/neighs = style name of this compute command

## Examples

compute 1 all smd/tlsph/num/neighs

## Description

Define a computation that calculates the number of particles inside of the smoothing kernel radius for particles interacting via the Total-Lagrangian SPH pair style. See this PDF guide to using Smooth Mach Dynamics in LAMMPS.

## Output info

This compute calculates a per-particle vector, which can be accessed by any command that uses per-particle values from a compute as input. See the Howto output doc page for an overview of LAMMPS output options. The per-particle values are dimensionless; see the units command.

## Restrictions

This compute is part of the USER-SMD package. It is only enabled if LAMMPS was built with that package. See the Build package doc page for more info. This quantity will be computed only for particles which interact with the Total-Lagrangian pair style.
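The quantity this compute reports can be mimicked in post-processing: for each particle, count the other particles that lie within the smoothing-kernel radius h. The sketch below is a plain-Python brute-force illustration with O(N^2) cost, not part of the LAMMPS API (LAMMPS itself uses neighbor lists):

```python
# Brute-force neighbor count within a smoothing-kernel radius h.
# positions is a list of (x, y, z) tuples; returns one count per particle.

def num_neighbors(positions, h):
    counts = []
    for i, (xi, yi, zi) in enumerate(positions):
        c = 0
        for j, (xj, yj, zj) in enumerate(positions):
            # a particle is not its own neighbor; compare squared distances
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= h * h:
                c += 1
        counts.append(c)
    return counts

pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(num_neighbors(pts, 1.0))  # [1, 1, 0]
```

The resulting per-particle vector is dimensionless, matching the output of the compute.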
2018-12-18T15:32:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5997738838195801, "perplexity": 7341.180282666447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829429.94/warc/CC-MAIN-20181218143757-20181218165757-00531.warc.gz"}
https://par.nsf.gov/biblio/10153643-optimization-molecules-via-deep-reinforcement-learning
Optimization of Molecules via Deep Reinforcement Learning

Abstract We present a framework, which we call Molecule Deep Q-Networks (MolDQN), for molecule optimization by combining domain knowledge of chemistry and state-of-the-art reinforcement learning techniques (double Q-learning and randomized value functions). We directly define modifications on molecules, thereby ensuring 100% chemical validity. Further, we operate without pre-training on any dataset to avoid possible bias from the choice of that set. MolDQN achieves comparable or better performance against several other recently published algorithms for benchmark molecular optimization tasks. However, we also argue that many of these tasks are not representative of real optimization problems in drug discovery. Inspired by problems faced during medicinal chemistry lead optimization, we extend our model with multi-objective reinforcement learning, which maximizes drug-likeness while maintaining similarity to the original molecule. We further show the path through chemical space to achieve optimization for a molecule to understand how the model works.

NSF-PAR ID: 10153643. Journal: Scientific Reports, Volume 9, Issue 1, ISSN 2045-2322. Publisher: Nature Publishing Group.

More Like this

1. With the recent explosion in the size of libraries available for screening, virtual screening is positioned to assume a more prominent role in early drug discovery's search for active chemical matter. In typical virtual screens, however, only about 12% of the top-scoring compounds actually show activity when tested in biochemical assays. We argue that most scoring functions used for this task have been developed with insufficient thoughtfulness into the datasets on which they are trained and tested, leading to overly simplistic models and/or overtraining.
These problems are compounded in the literature because studies reporting new scoring methods have not validated their models prospectively within the same study. Here, we report a strategy for building a training dataset (D-COID) that aims to generate highly compelling decoy complexes that are individually matched to available active complexes. Using this dataset, we train a general-purpose classifier for virtual screening (vScreenML) that is built on the XGBoost framework. In retrospective benchmarks, our classifier shows outstanding performance relative to other scoring functions. In a prospective context, nearly all candidate inhibitors from a screen against acetylcholinesterase show detectable activity; beyond this, 10 of 23 compounds have IC50 better than 50 μM. Without any medicinal chemistry optimization, more »

2. Abstract Our career-forward approach to general chemistry laboratory for engineers involves the use of design challenges (DCs), an innovation that employs authentic professional context and practice to transform traditional tasks into developmentally appropriate career experiences. These challenges are scaled-down engineering problems related to the US National Academy of Engineering's Grand Challenges that engage students in collaborative problem solving via the modeling process. With task features aligned with professional engineering practice, DCs are hypothesized to support student motivation for the task as well as for the profession. As an evaluation of our curriculum design process, we use expectancy–value theory to test our hypotheses by investigating the association between students' task value beliefs and self-confidence with their user experience, gender and URM status.
Using stepwise multiple regression analysis, the results reveal that students find value in completing a DC (F(5, 2430) = 534.96, p < .001) and are self-confident (F(8, 2427) = 154.86, p < .001) when they feel like an engineer, are satisfied, perceive collaboration, are provided help from a teaching assistant, and the tasks are not too difficult. We highlight that although female and URM students felt less self-confidence in completing a DC, these feelings were moderated by their perceptions of feeling like an engineer and collaboration in the learning process (F(10, 2425) = 127.06, p < .001). more »

3. Study of the permeability of small organic molecules across lipid membranes plays a significant role in designing potential drugs in the field of drug discovery. Approaches to design promising drug molecules have gone through many stages, from experiment-based trial-and-error approaches, to the well-established avenue of the quantitative structure–activity relationship, and currently to the stage guided by machine learning (ML) and artificial intelligence techniques. In this work, we present a study of the permeability of small drug-like molecules across lipid membranes by two types of ML models, namely the least absolute shrinkage and selection operator (LASSO) and deep neural network (DNN) models. Molecular descriptors and fingerprints are used for featurization of organic molecules. Using molecular descriptors, the LASSO model uncovers that the electro-topological, electrostatic, polarizability, and hydrophobicity/hydrophilicity properties are the most important physical properties to determine the membrane permeability of small drug-like molecules. Additionally, with molecular fingerprints, the LASSO model suggests that certain chemical substructures can significantly affect the permeability of organic molecules, which closely connects to the identified main physical properties.
Moreover, the DNN model using molecular fingerprints can help develop a more accurate mapping between molecular structures and their membrane permeability than LASSO models. Our results provide deep understanding more »

4. Abstract Motivation The crux of molecular property prediction is to generate meaningful representations of the molecules. One promising route is to exploit the molecular graph structure through graph neural networks (GNNs). Both atoms and bonds significantly affect the chemical properties of a molecule, so an expressive model ought to exploit both node (atom) and edge (bond) information simultaneously. Inspired by this observation, we explore the multi-view modeling with GNN (MVGNN) to form a novel paralleled framework, which considers both atoms and bonds equally important when learning molecular representations. In specific, one view is atom-central and the other view is bond-central, then the two views are circulated via specifically designed components to enable more accurate predictions. To further enhance the expressive power of MVGNN, we propose a cross-dependent message-passing scheme to enhance information communication of different views. The overall framework is termed as CD-MVGNN. Results We theoretically justify the expressiveness of the proposed model in terms of distinguishing non-isomorphism graphs. Extensive experiments demonstrate that CD-MVGNN achieves remarkably superior performance over the state-of-the-art models on various challenging benchmarks. Meanwhile, visualization results of the node importance are consistent with prior knowledge, which confirms the interpretability power of CD-MVGNN. Availability and implementation The code and data underlying more » Supplementary information Supplementary data are available at Bioinformatics online.

5. Abstract The quantum simulation of quantum chemistry is a promising application of quantum computers.
However, for $N$ molecular orbitals, the $\mathcal{O}(N^4)$ gate complexity of performing Hamiltonian and unitary Coupled Cluster Trotter steps makes simulation based on such primitives challenging. We substantially reduce the gate complexity of such primitives through a two-step low-rank factorization of the Hamiltonian and cluster operator, accompanied by truncation of small terms. Using truncations that incur errors below chemical accuracy allow one to perform Trotter steps of the arbitrary basis electronic structure Hamiltonian with $\mathcal{O}(N^3)$ gate complexity in small simulations, which reduces to $\mathcal{O}(N^2)$ gate complexity in the asymptotic regime; and unitary Coupled Cluster Trotter steps with $\mathcal{O}(N^3)$ gate complexity as a function of increasing basis size for a given molecule. In the case of the Hamiltonian Trotter step, these circuits have $\mathcal{O}(N^2)$ depth on a linearly connected array, an improvement over the $\mathcal{O}(N^3)$ scaling assuming no truncation. As a practical example, we show that a chemically accurate Hamiltonian Trotter step for a 50 qubit molecular simulation can be carried out in the molecular orbital basis with as few as 4000 layers of parallel nearest-neighbor two-qubit gates, consisting of fewer than $10^5$ non-Clifford rotations. We also apply our algorithm to iron–sulfur clusters relevant for elucidating the mode of action of metalloenzymes.
https://www.usgs.gov/media/files/gypsum-2019-tables-only-release
# Gypsum in 2019, tables-only release ### Detailed Description Advance data tables (XLSX format) for the gypsum chapter of the Minerals Yearbook 2019. A version with an embedded text document and also a PDF of text and tables will follow. Public Domain.
https://fermatslibrary.com/s/q-2020-02-06-226
Data re-uploading for a universal quantum classifier

Adrián Pérez-Salinas 1,2, Alba Cervera-Lierta 1,2, Elies Gil-Fuster 3, and José I. Latorre 1,2,4,5

1 Barcelona Supercomputing Center
2 Institut de Ciències del Cosmos, Universitat de Barcelona, Barcelona, Spain
3 Dept. Física Quàntica i Astrofísica, Universitat de Barcelona, Barcelona, Spain.
4 Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands.
5 Center for Quantum Technologies, National University of Singapore, Singapore.

A single qubit provides sufficient computational capabilities to construct a universal quantum classifier when assisted with a classical subroutine. This fact may be surprising since a single qubit only offers a simple superposition of two states and single-qubit gates only make a rotation in the Bloch sphere. The key ingredient to circumvent these limitations is to allow for multiple data re-uploading. A quantum circuit can then be organized as a series of data re-uploading and single-qubit processing units. Furthermore, both data re-uploading and measurements can accommodate multiple dimensions in the input and several categories in the output, to conform to a universal quantum classifier. The extension of this idea to several qubits enhances the efficiency of the strategy as entanglement expands the superpositions carried along with the classification. Extensive benchmarking on different examples of the single- and multi-qubit quantum classifier validates its ability to describe and classify complex data.

1 Introduction

Quantum circuits that make use of a small number of quantum resources are of most importance to the field of quantum computation. Indeed, algorithms that need few qubits may prove relevant even if they do not attempt any quantum advantage, as they may be useful parts of larger circuits. A reasonable question to ask is what is the lower limit of quantum resources needed to achieve a given computation.
A naive estimation for the quantum cost of a new proposed quantum algorithm is often made based on analogies with classical algorithms. But this may be misleading, as classical computation can play with memory in a rather different way than quantum computers do. The question then turns to the more refined problem of establishing the absolute minimum of quantum resources for a problem to be solved. We shall here explore the power and minimal needs of quantum circuits assisted with a classical subroutine to carry out a general supervised classification task, that is, the minimum number of qubits, quantum operations and free parameters to be optimized classically. Three elements in the computation need renewed attention. The obvious first concern is to find a way to upload data in a quantum computer. Then, it is necessary to find the optimal processing of information, followed by an optimal measurement strategy. We shall revisit these three issues in turn. The non-trivial step we take here is to combine the first two, that is, data uploading and processing.

There exist several strategies to design a quantum classifier. In general, they are inspired by well-known classical techniques such as artificial neural networks [1-3] or kernel methods used in classical machine learning [4-10]. Some of these proposals [4-6] encode the data values into a quantum state amplitude, which is manipulated afterward. These approaches need an efficient way to prepare and access these amplitudes. State preparation algorithms are in general costly in terms of quantum gates and circuit depth, although some of these proposals use a specific state preparation circuit that only requires few single-qubit gates. The access to the states that encode the data can be done efficiently by using a quantum random access memory (QRAM) [11]. However, this is experimentally challenging and the construction of a QRAM is still under development.
Other proposals exploit hybrid quantum-classical strategies [7-10]. The classical parts can be used to construct the correct encoding circuit or as a minimization method to extract the optimal parameters of the quantum circuit, such as the angles of the rotational gates. In the first case, the quantum circuit computes the hardest instances of the classical classification algorithm as, for example, the inner products needed to obtain a kernel matrix. In the second case, the data is classified directly by using a parametrized quantum circuit, whose variables are used to construct a cost function that should be minimized classically. This last strategy is more convenient for Noisy Intermediate-Scale Quantum (NISQ) computation since, in general, it requires short-depth circuits, and its variational core makes it more resistant to experimental errors. Our proposal belongs to this last category, the parametrized quantum classifiers.

Accepted in Quantum 2020-01-27. Published under CC-BY 4.0. arXiv:1907.02085v2 [quant-ph] 30 Jan 2020

A crucial part of a quantum classification algorithm is how data is encoded into the circuit. Proposals based on kernel methods design an encoding circuit which implements a feature map from the data space to the qubits' Hilbert space. The construction of this quantum feature map may vary depending on the algorithm, but common strategies make use of the quantum Fourier transform or introduce data in multiple qubits using one- and two-qubit gates [9, 10]. Both the properties of the tensor product and the entanglement generated in those encoding circuits capture the non-linearities of the data. In contrast, we argue that there is no need to use highly sophisticated encoding circuits nor a significant number of qubits to introduce these non-linearities. Single-qubit rotations applied multiple times along the circuit generate highly non-trivial functions of the data values. The main difference between our approach and the ones described above is that the circuit is not divided between the encoding and processing parts, but implements both multiple times along the algorithm.

Data re-uploading is considered as a manner of solving the limitations established by the no-cloning theorem. Quantum computers cannot copy data, but classical devices can. For instance, a neural network takes the same input many times when processing the data in the hidden layer neurons. An analogous quantum neural network can only use quantum data once. Therefore, it makes sense to re-upload classical data along a quantum computation to bypass this limitation on the quantum circuit. By following this line of thought, we present an equivalence between data re-uploading and the Universal Approximation Theorem applied to artificial neural networks [12]. Just as a network composed of a single hidden layer with enough neurons can reproduce any continuous function, a single-qubit classifier can, in principle, achieve the same by re-uploading the data enough times.

The single-qubit classifier illustrates the computational power that a single qubit can handle. This proposal is to be added to other few-qubit benchmarks in machine learning [13]. The input redundancy has also been proposed to construct complex encoding in parametrized quantum circuits and in the construction of quantum feature maps [10, 14]. These and other proposals mentioned in the previous paragraphs are focused on representing classically intractable or very complex kernel functions with few qubits. On the contrary, the focus of this work is to distill the minimal amount of quantum resources, i.e., the number of qubits and gates, needed for a given classification task quantified in terms of the number of qubits and unitary operations. The main result of this work is, indeed, to show that there is a trade-off between the number of qubits needed to perform classification and multiple data re-uploading.
That is, we may use fewer qubits at the price of re-entering data several times along the quantum computation.

We shall illustrate the power of single- and multi-qubit classifiers with data re-uploading with a series of examples. First, we classify points in a plane that is divided into two areas. Then, we extend the number of regions on a plane to be classified. Next, we consider the classification of multi-dimensional patterns and, finally, we benchmark this quantum classifier with non-convex figures. For every example, we train a parametrized quantum circuit that carries out the task and we analyze its performance in terms of the circuit architecture, i.e., for single- and multi-qubit classifiers with and without entanglement between qubits.

This paper is structured as follows. First, in Section 2, we present the basic structure of a single-qubit quantum classifier. Data and processing parameters are uploaded and re-uploaded using one-qubit general rotations. For each data point, the final state of the circuit is compared with the target state assigned to its class, and the free parameters of the circuit are updated accordingly using a classical minimization algorithm. Next, in Section 3, we motivate the data re-uploading approach by using the Universal Approximation Theorem of artificial neural networks. In Section 4, we introduce the extension of this classifier to multiple qubits. Then, in Section 5, we detail the minimization methods used to train the quantum classifiers. Finally, in Section 6, we benchmark single- and multi-qubit quantum classifiers defined previously with problems of different dimensions and complexity and compare their performance with respect to classical classification techniques. The conclusions of this proposal for a quantum classifier are exposed in Section 7.
2 Structure of a single-qubit quantum classifier

The global structure of any quantum circuit can be divided into three elements: uploading of information onto a quantum state, processing of the quantum state, and measurement of the final state. It is far from obvious how to implement each of these elements optimally to perform a specific operation. We shall now address them one at a time for the task of classification.

To load classical information onto a quantum circuit is a highly non-trivial task [4]. A critical example is the processing of big data. While there is no in-principle obstruction to upload large amounts of data onto a state, it is not obvious how to do it. The problem we address here is not related to a large amount of data. It is thus possible to consider a quantum circuit where all data are loaded in the coefficients of the initial wave function [8, 9, 13-15]. In the simplest of cases, data are uploaded as rotations of qubits in the computational basis. A quantum circuit would then follow that should perform some classification.

This strategy would be insufficient to create a universal quantum classifier with a single qubit. A first limitation is that a single qubit only has two degrees of freedom, thus only allowing to represent data in a two-dimensional space. No quantum classifier in higher dimensions can be created if this architecture is to be used. A second limitation is that, once data is uploaded, the only quantum circuit available is a rotation in the Bloch sphere. It is easy to prove that a single rotation cannot capture any non-trivial separation of patterns in the original data.

We need to turn to a different strategy, which turns out to be inspired by neural networks. In the case of feed-forward neural networks, data are entered in a network in such a way that they are processed by subsequent layers of neurons.
The key idea is to observe that the original data are processed several times, one for each neuron in the first hidden layer. Strictly speaking, data are re-uploaded onto the neural network. If neural networks were affected by some sort of no-cloning theorem, they could not work as they do. Coming back to the quantum circuit, we need to design a new architecture where data can be introduced several times into the circuit.

The central idea to build a universal quantum classifier with a single qubit is thus to re-upload classical data along with the computation. Following the comparison with an artificial neural network with a single hidden layer, we can represent this re-upload diagrammatically, as it is shown in Figure 1. Data points in a neural network are introduced in each processing unit, represented with squares, which are the neurons of the hidden layer. After the neurons process these data, a final neuron is necessary to construct the output to be analyzed. Similarly, in the single-qubit quantum classifier, data points are introduced in each processing unit, which this time corresponds to a unitary rotation. However, each processing unit is affected by the previous ones and re-introduces the input data. The final output is a quantum state to be analyzed as it will be explained in the next subsections.

The explicit form of this single-qubit classifier is shown in Figure 2. Classical data are re-introduced several times in a sequence interspaced with processing units. We shall consider the introduction of data as a rotation of the qubit. This means that data from a three-dimensional space, $\vec{x}$, can be re-uploaded using unitaries that rotate the qubit, $U(\vec{x})$. Later processing units will also be rotations, as discussed later on. The whole structure needs to be trained in the classification of patterns.
As we shall see, the performance of the single-qubit quantum classifier will depend on the number of re-uploads of classical data. This fact will be explored in the results section.

Figure 1: (a) Neural network. (b) Quantum classifier. Simplified working schemes of a neural network and a single-qubit quantum classifier with data re-uploading. In the neural network, every neuron receives input from all neurons of the previous layer. In contrast with that, the single-qubit classifier receives information from the previous processing unit and the input (introduced classically). It processes everything all together and the final output of the computation is a quantum state encoding several repetitions of input uploads and processing parameters.

The single-qubit classifier belongs to the category of parametrized quantum circuits. The performance of the circuit is quantified by a figure of merit, some specific $\chi^2$ to be minimized and defined later. We need, though, to specify the processing gates present in the circuit in terms of a classical set of parameters. Given the simple structure of a single-qubit circuit presented in Figure 2, the data is introduced in a simple rotation of the qubit, which is easy to characterize. We just need to use arbitrary single-qubit rotations $U(\phi_1, \phi_2, \phi_3) \in SU(2)$. We will write $U(\vec{\phi})$ with $\vec{\phi} = (\phi_1, \phi_2, \phi_3)$. Then, the structure of the universal quantum classifier made with a single qubit is

$$U(\vec{\phi}, \vec{x}) \equiv U(\vec{\phi}_N) U(\vec{x}) \cdots U(\vec{\phi}_1) U(\vec{x}), \qquad (1)$$

which acts as

$$|\psi\rangle = U(\vec{\phi}, \vec{x}) |0\rangle. \qquad (2)$$

The final classification of patterns will come from the results of measurements on $|\psi\rangle$. We may introduce the concept of processing layer as the combination

$$L(i) \equiv U(\vec{\phi}_i) U(\vec{x}), \qquad (3)$$

so that the classifier corresponds to

$$U(\vec{\phi}, \vec{x}) = L(N) \cdots L(1), \qquad (4)$$

where the depth of the circuit is $2N$. The more layers, the more representation capabilities the circuit will have, and the more powerful the classifier will become. Again, this follows from the analogy to neural networks, where the size of the intermediate hidden layer of neurons is critical to represent complex functions.

Figure 2: Single-qubit classifier with data re-uploading, shown as (a) an original scheme and (b) a compressed scheme. The quantum circuit is divided into layer gates $L(i)$, which constitute the classifier building blocks. In the upper circuit, each of these layers is composed of a $U(\vec{x})$ gate, which uploads the data, and a parametrized unitary gate $U(\vec{\phi})$. We apply this building block $N$ times and finally compute a cost function that is related to the fidelity of the final state of the circuit with the corresponding target state of its class. This cost function may be minimized by tuning the $\vec{\phi}_i$ parameters. Eventually, data and tunable parameters can be introduced with a single unitary gate, as illustrated in the bottom circuit.

There is a way to compactify the quantum circuit into a shorter one. This can be done if we incorporate data and processing angles in a single step. Then, a layer would only need a single rotation to introduce data and tunable parameters, i.e. $L(i) = U(\vec{\phi}, \vec{x})$. In addition, each data point can be uploaded with some weight $\vec{w}_i$. These weights will play a similar role as weights in artificial neural networks, as we will see in the next section. Altogether, each layer gate can be taken as

$$L(i) = U\left(\vec{\theta}_i + \vec{w}_i \circ \vec{x}\right), \qquad (5)$$

where $\vec{w}_i \circ \vec{x} = (w_i^1 x^1, w_i^2 x^2, w_i^3 x^3)$ is the Hadamard (element-wise) product of two vectors. In case the data points have dimension lesser than three, the rest of the $\vec{x}$ components are set to zero. Such an approach reduces the depth of the circuit by half.
Further combinations of layers into fewer rotations are also possible, but the nonlinearity inherent to subsequent rotations would be lost, and the circuit would not perform well.

Notice that data points are introduced linearly into the rotational gate. Non-linearities will come from the structure of these gates. We chose this encoding function as we believe it is one of the lesser biased ways to encode data with unknown properties. Due to the structure of single-qubit unitary gates, we will see that this encoding is particularly suited for data with rotational symmetry. Still, it can also classify other kinds of data structures. We could also apply other encoding techniques, e.g. the ones proposed in Ref. [10], but for the scope of this work, we have just tested the linear encoding strategy as a proof of concept of the performance of this quantum classifier.

It is also possible to enlarge the dimensionality of the input space in the following way. Let us extend the definition of the i-th layer to

$$L(i) = U\left(\vec{\theta}_i^{(k)} + \vec{w}_i^{(k)} \circ \vec{x}^{(k)}\right) \cdots U\left(\vec{\theta}_i^{(1)} + \vec{w}_i^{(1)} \circ \vec{x}^{(1)}\right), \qquad (6)$$

where each data point is divided into $k$ vectors of dimension three. In general, each unitary $U$ could absorb as many variables as the degrees of freedom of an SU(2) unitary. Each set of variables acts at a time, and all of them have been shown to the circuit after $k$ iterations. Then, the layer structure follows. The complexity of the circuit only increases linearly with the size of the input space.

2.3 Measurement

The quantum circuit characterized by a series of processing angles $\{\theta_i\}$ and weights $\{w_i\}$ delivers a final state $|\psi\rangle$, which needs to be measured. The results of each measurement are used to compute a $\chi^2$ that quantifies the error made in the classification. The minimization of this quantity in terms of the classical parameters of the circuit can be organized using any preferred supervised machine learning technique.
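Going back to Eq. (6), the splitting of a $d$-dimensional point into $k$ zero-padded three-vectors is purely mechanical. A small helper (the function name is ours, for illustration only) makes the bookkeeping explicit:

```python
import numpy as np

def chunk_input(x, dim=3):
    """Split a d-dimensional data point into k vectors of dimension 3,
    zero-padding the tail, as in the extended layer of Eq. (6)."""
    x = np.asarray(x, dtype=float)
    k = -(-x.size // dim)          # ceiling division: number of chunks
    padded = np.zeros(k * dim)
    padded[:x.size] = x
    return padded.reshape(k, dim)
```

Each row of the result would then feed one of the $k$ rotations inside a single layer, so the circuit grows linearly with the input dimension, as the text notes.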
The critical point in the quantum measurement is to find an optimal way to associate outputs from the observations to target classes. The fundamental guid- ing principle to be used is given by the idea of max- imal orthogonality of outputs [16]. This is easily es- tablished for a dichotomic classification, where one of two classes A and B have to be assigned to the final measurement of the single qubit. In such a case it is possible to measure the output probabilities P (0) for |0i and P (1) for |1i. A given pattern could be classified into the A class if P (0) > P (1) and into B otherwise. We may refine this criterium by introduc- ing a bias. That is, the pattern is classified as A if P (0) > λ, and as B otherwise. The λ is chosen to op- timize the success of classification on a training set. Results are then checked on an independent validation set. The assignment of classes to the output reading of a single qubit becomes an involved issue when many classes are present. For the sake of simplicity, let us mention two examples for the case of classification to four distinct classes. One possible strategy consists on comparing the probability P(0) to four sectors with three thresholds: 0 λ 1 λ 2 λ 3 1. Then, the value of P(0) will fall into one of them, and classifi- cation is issued. A second, more robust assignment is obtained by computing the overlap of the final state to one of the states of a label states-set. This states- set is to be chosen with maximal orthogonality among Accepted in Quantum 2020-01-27, click title to verify. Published under CC-BY 4.0. 4 Figure 3: Representation in the Bloch sphere of four and six maximally orthogonal points, corresponding to the vertices of a tetrahedron and an octahedron respectively. The single- qubit classifier will be trained to distribute the data points in one of these vertices, each one representing a class. all of them. This second method needs from the max- imally orthogonal points in the Bloch sphere. 
Figure 3 shows the particular cases that can be applied to a classification task of four and six classes. In general, a good measurement strategy may need some prior computational effort and a refined tomography of the final state. Since we are proposing a single-qubit classifier, the tomography protocol will only require three measurements.

It is possible to interpret the single-qubit classifier in terms of geometry. The classifier operates on a 2-dimensional Hilbert space, i.e., the Bloch sphere. As we encode data and classification within the parameters defining rotations, this Hilbert space is enough to achieve classification. Every operation $L(i)$ is a rotation on the Bloch sphere surface. With this point of view in mind, it is easy to see that a single point can be classified using only one unitary operation: we can transport any point to any other point on the Bloch sphere simply by choosing the angles of rotation properly. However, this does not work for several data points, as the optimal rotation for some points may be very inconvenient for others. If more layers are applied, each one performs a different rotation, and many different rotations together have the capability of enabling a feature map. Data embedded in this feature space can then be separated into classes by means of regions on the Bloch sphere.

2.3.1 A fidelity cost function

We propose a very simple cost function motivated by the geometrical interpretation introduced above. We want to force the quantum states $|\psi(\vec{\theta}, \vec{w}, \vec{x})\rangle$ to be as close as possible to one particular state on the Bloch sphere. The angular distance between the label state and the data state can be measured with the relative fidelity between the two states [17]. Thus, our aim is to maximize the average fidelity between the states at the end of the quantum circuit and the label states corresponding to their class.
We define the following cost function that carries out this task,

$$\chi_f^2(\vec{\theta}, \vec{w}) = \sum_{\mu=1}^{M} \left(1 - |\langle \tilde{\psi}_s | \psi(\vec{\theta}, \vec{w}, \vec{x}_\mu)\rangle|^2\right), \qquad (7)$$

where $|\tilde{\psi}_s\rangle$ is the correct label state of the $\mu$-th data point, which corresponds to one of the classes, and $M$ is the total number of training points.

2.3.2 A weighted fidelity cost function

We shall next define a refined version of the previous fidelity cost function to be minimized. The set of maximally orthogonal states in the Bloch sphere, i.e., the label states, are written as $|\tilde{\psi}_c\rangle$, where $c$ is the class. Each of these label states represents one class for the classifier. We will now follow the lead usually taken in neural network classification. Let us define the quantity

$$F_c(\vec{\theta}, \vec{w}, \vec{x}) = |\langle \tilde{\psi}_c | \psi(\vec{\theta}, \vec{w}, \vec{x})\rangle|^2, \qquad (8)$$

where $|\tilde{\psi}_c\rangle$ is the label state of the class $c$ and $|\psi(\vec{\theta}, \vec{w}, \vec{x})\rangle$ is the final state of the qubit at the end of the circuit. This fidelity is to be compared with the expected fidelity of a successful classification, $Y_c(\vec{x})$. For example, given a four-class classification task that uses the vertices of a tetrahedron as label states (as shown in Figure 3), one expects $Y_s(\vec{x}) = 1$, where $s$ is the correct class, and $Y_r(\vec{x}) = 1/3$ for the other $r$ classes. In general, $Y_c(\vec{x})$ can be written as a vector with one entry equal to 1, the one corresponding to the correct class, while the other entries contain the overlap between the correct class label state and the remaining label states.

With these definitions, we can construct a cost function inspired by conventional cost functions in artificial neural networks. By weighting the fidelities of the final state of the circuit with all label states, we define the weighted fidelity cost function as

$$\chi_{wf}^2(\vec{\alpha}, \vec{\theta}, \vec{w}) = \frac{1}{2} \sum_{\mu=1}^{M} \sum_{c=1}^{C} \left(\alpha_c F_c(\vec{\theta}, \vec{w}, \vec{x}_\mu) - Y_c(\vec{x}_\mu)\right)^2, \qquad (9)$$

where $M$ is the total number of training points, $C$ is the total number of classes, $\vec{x}_\mu$ are the training points and $\vec{\alpha} = (\alpha_1, \cdots, \alpha_C)$ are class weights to be optimized together with the $\vec{\theta}$ and $\vec{w}$ parameters. The weighted fidelity thus has more parameters than the fidelity cost function, namely the weights for the fidelities.

The main difference between the weighted fidelity cost function of Eq. (9) and the fidelity cost function of Eq. (7) lies in how many overlaps need to be computed. The $\chi_{wf}^2$ requires as many fidelities as classes every time we run the optimization subroutine, while the $\chi_f^2$ needs just one. This is not a big difference for a few classes and only one qubit: it is possible to measure any state with a full tomography process, which is achievable for one qubit. However, for many different classes, we expect one measurement to be more efficient than many.

Besides being costlier than the fidelity cost function, the weighted fidelity cost function differs from it in another qualitative way. The fidelity cost function forces the parameters to reach the maximum in fidelities; loosely speaking, it moves the qubit state to where it should be. The weighted fidelity forces the parameters to be close to a specified configuration of fidelities: it moves the qubit state to where it should be and away from where it should not. We therefore expect the weighted fidelity to work better than the fidelity cost function. Moreover, the extra cost in terms of the number of parameters of the weighted fidelity cost function only affects the classical minimization part of the algorithm. In a sense, we are increasing the classical processing to reduce the quantum resources required for the algorithm, i.e. the number of quantum operations (layers).
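With the fidelities of Eq. (8) collected in an $M \times C$ array, both cost functions take only a few lines. This is our own sketch with our own function names; the reference implementation is in Ref. [24].

```python
import numpy as np

def chi2_f(F_correct):
    """Fidelity cost, Eq. (7): sum over training points of one minus the
    fidelity with the correct label state. F_correct has shape (M,)."""
    return np.sum(1.0 - F_correct)

def chi2_wf(alpha, F, Y):
    """Weighted fidelity cost, Eq. (9).
    alpha: (C,) class weights; F, Y: (M, C) fidelities and targets."""
    return 0.5 * np.sum((alpha[None, :] * F - Y) ** 2)
```

When the fidelities already match the targets and all weights are one, the weighted cost vanishes; any deviation of the weights away from that configuration is penalized, which is what drives their optimization.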
This fact gains importance in the NISQ computation era.

3 Universality of the single-qubit classifier

After analyzing several classification problems, we obtain evidence that the single-qubit classifier introduced above can approximate any classification function up to arbitrary precision. In this section, we provide the motivation for this statement based on the Universal Approximation Theorem (UAT) of artificial neural networks [12].

3.1 Universal Approximation Theorem

Theorem: Let $I_m = [0,1]^m$ be the $m$-dimensional unit cube and $C(I_m)$ the space of continuous functions on $I_m$. Let $\varphi: \mathbb{R} \to \mathbb{R}$ be a nonconstant, bounded and continuous function and let $f \in C(I_m)$. Then, for every $\varepsilon > 0$, there exist an integer $N$ and a function $h: I_m \to \mathbb{R}$, defined as

$$h(\vec{x}) = \sum_{i=1}^{N} \alpha_i \, \varphi\left(\vec{w}_i \cdot \vec{x} + b_i\right), \qquad (10)$$

with $\alpha_i, b_i \in \mathbb{R}$ and $\vec{w}_i \in \mathbb{R}^m$, such that $h$ is an approximate realization of $f$ with precision $\varepsilon$, i.e.,

$$|h(\vec{x}) - f(\vec{x})| < \varepsilon \qquad (11)$$

for all $\vec{x} \in I_m$.

In artificial neural networks, $\varphi$ is the activation function, $\vec{w}_i$ are the weights for each neuron, $b_i$ are the biases and $\alpha_i$ are the neuron weights that construct the output function. Thus, this theorem establishes that it is possible to reconstruct any continuous function with a single-layer neural network of $N$ neurons. The proof of this theorem for the sigmoidal activation function can be found in Ref. [18]. The theorem was generalized to any nonconstant, bounded and continuous activation function in Ref. [12]. Moreover, Ref. [12] presents the following corollary: $\varphi$ could be a nonconstant finite linear combination of periodic functions; in particular, $\varphi$ could be a nonconstant trigonometric polynomial.

3.2 Universal Quantum Circuit Approximation

The single-qubit classifier is divided into several layers which are general SU(2) rotational matrices. There exist many possible decompositions of an SU(2) rotational matrix.
In particular, we use

$$U(\vec{\phi}) = U(\phi_1, \phi_2, \phi_3) = e^{i\phi_2 \sigma_z / 2} \, e^{i\phi_1 \sigma_y / 2} \, e^{i\phi_3 \sigma_z / 2}, \qquad (12)$$

where $\sigma_i$ are the conventional Pauli matrices. Using the SU(2) group composition law, we can rewrite the above parametrization as a single exponential,

$$U(\vec{\phi}) = e^{i\,\vec{\omega}(\vec{\phi}) \cdot \vec{\sigma}}, \qquad (13)$$

with $\vec{\omega}(\vec{\phi}) = \left(\omega_1(\vec{\phi}), \omega_2(\vec{\phi}), \omega_3(\vec{\phi})\right)$ and

$$\omega_1(\vec{\phi}) = d\,N \sin\left((\phi_2 - \phi_3)/2\right) \sin\left(\phi_1/2\right), \qquad (14)$$

$$\omega_2(\vec{\phi}) = d\,N \cos\left((\phi_2 - \phi_3)/2\right) \sin\left(\phi_1/2\right), \qquad (15)$$

$$\omega_3(\vec{\phi}) = d\,N \sin\left((\phi_2 + \phi_3)/2\right) \cos\left(\phi_1/2\right), \qquad (16)$$

where $N = \left(\sqrt{1 - \cos^2 d}\,\right)^{-1}$ and $\cos d = \cos\left((\phi_2 + \phi_3)/2\right) \cos\left(\phi_1/2\right)$.

The single-qubit classifier codifies the data points into the $\vec{\phi}$ parameters of the unitary gate $U$. In particular, we can re-upload data together with the tunable parameters as defined in Eq. (5), i.e.

$$\vec{\phi}(\vec{x}) = \left(\phi_1(\vec{x}), \phi_2(\vec{x}), \phi_3(\vec{x})\right) = \vec{\theta} + \vec{w} \circ \vec{x}. \qquad (17)$$

Thus,

$$\mathcal{U}(\vec{x}) = U_N(\vec{x})\, U_{N-1}(\vec{x}) \cdots U_1(\vec{x}) = \prod_{i=1}^{N} e^{i\,\vec{\omega}(\vec{\phi}_i(\vec{x})) \cdot \vec{\sigma}}. \qquad (18)$$

Next, we apply the Baker-Campbell-Hausdorff (BCH) formula [19] to the above equation,

$$\mathcal{U}(\vec{x}) = \exp\left[i \sum_{i=1}^{N} \vec{\omega}(\vec{\phi}_i(\vec{x})) \cdot \vec{\sigma} + \mathcal{O}_{\mathrm{corr}}\right]. \qquad (19)$$

Notice that the remaining BCH terms $\mathcal{O}_{\mathrm{corr}}$ are also proportional to Pauli matrices due to $[\sigma_i, \sigma_j] = 2i\,\epsilon_{ijk}\,\sigma_k$.

Each of the $\vec{\omega}$ terms is a trigonometric function: nonconstant, bounded and continuous. Then

$$\sum_{i=1}^{N} \vec{\omega}(\vec{\phi}_i(\vec{x})) = \sum_{i=1}^{N} \left(\omega_1(\vec{\theta}_i + \vec{w}_i \circ \vec{x}),\ \omega_2(\vec{\theta}_i + \vec{w}_i \circ \vec{x}),\ \omega_3(\vec{\theta}_i + \vec{w}_i \circ \vec{x})\right) = \left(f_1(\vec{x}), f_2(\vec{x}), f_3(\vec{x})\right). \qquad (20)$$

We still have to deal with the remaining terms $\mathcal{O}_{\mathrm{corr}}$ of the BCH expansion. Instead of applying this expansion, we can use again the SU(2) group composition law to obtain the analytical formula $\mathcal{U}(\vec{x}) = e^{i\,\vec{\xi}(\vec{x}) \cdot \vec{\sigma}}$, where $\vec{\xi}(\vec{x})$ will be an inextricable trigonometric function of $\vec{x}$. The $\mathcal{O}_{\mathrm{corr}}$ terms are proportional to $\vec{\sigma}$ matrices, so $\mathcal{O}_{\mathrm{corr}} = i\,\vec{\varrho}(\vec{x}) \cdot \vec{\sigma}$ for some function $\vec{\varrho}(\vec{x})$.
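The decomposition of Eqs. (12)-(16) can be verified numerically: composing the three rotations of Eq. (12) and exponentiating $i\,\vec{\omega}\cdot\vec{\sigma}$ must yield the same matrix. The check script below is our own, not part of the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def u_zyz(p1, p2, p3):
    # Eq. (12): e^{i p2 sz/2} e^{i p1 sy/2} e^{i p3 sz/2}
    rz = lambda a: np.diag([np.exp(1j * a / 2), np.exp(-1j * a / 2)])
    ry = lambda b: np.array([[np.cos(b / 2), np.sin(b / 2)],
                             [-np.sin(b / 2), np.cos(b / 2)]])
    return rz(p2) @ ry(p1) @ rz(p3)

def omega(p1, p2, p3):
    # Eqs. (14)-(16), with N = (sqrt(1 - cos^2 d))^{-1}
    cos_d = np.cos((p2 + p3) / 2) * np.cos(p1 / 2)
    d = np.arccos(cos_d)
    N = 1.0 / np.sqrt(1.0 - cos_d ** 2)
    return d * N * np.array([np.sin((p2 - p3) / 2) * np.sin(p1 / 2),
                             np.cos((p2 - p3) / 2) * np.sin(p1 / 2),
                             np.sin((p2 + p3) / 2) * np.cos(p1 / 2)])

def exp_i_omega_sigma(w):
    # e^{i w.sigma} = cos|w| I + i sin|w| (unit(w).sigma)
    th = np.linalg.norm(w)
    n = w / th
    return (np.cos(th) * np.eye(2)
            + 1j * np.sin(th) * (n[0] * sx + n[1] * sy + n[2] * sz))
```

For generic angles (away from the branch points $\cos d = \pm 1$), `u_zyz(p1, p2, p3)` and `exp_i_omega_sigma(omega(p1, p2, p3))` agree to machine precision.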
Then,

$$\mathcal{U}(\vec{x}) = e^{i\,\vec{\xi}(\vec{x}) \cdot \vec{\sigma}} = e^{i\,\vec{f}(\vec{x}) \cdot \vec{\sigma} + i\,\vec{\varrho}(\vec{x}) \cdot \vec{\sigma}}. \qquad (21)$$

Thus, the $\mathcal{O}_{\mathrm{corr}}$ terms can be absorbed in $\vec{f}(\vec{x})$. For each data point $\vec{x}$, we obtain a final state that contains these $\vec{\xi}(\vec{x})$ functions. With all training points, we construct a cost function that may include new parameters $\alpha_c$ for each class if we use the weighted fidelity cost function of Eq. (9). The function obtained from the combination of $\vec{\xi}(\vec{x})$ and $\alpha_c$ is expected to be complex enough to represent almost any continuous function. However, more parameters are necessary to map this argument onto the UAT expression. Comparing the parameters of the UAT with those of the single-qubit circuit, the $\vec{w}_i$ correspond to the weights, the $\vec{\theta}_i$ to the biases $b_i$, the number of layers $N$ of the quantum classifier to the number of neurons in the hidden layer, and the $\vec{\omega}$ functions to the activation functions $\varphi$.

We have explained why it is necessary to re-upload the data at each layer and why a single qubit could be a universal classifier. As stated before, an artificial neural network introduces the data points in each hidden neuron, weights them and adds some bias. Here we cannot simply copy each data point, because of the no-cloning theorem, so we have to re-upload it at each layer.

4 From single- to multi-qubit quantum classifier

The single-qubit classifier cannot carry any quantum advantage with respect to classical classification techniques such as artificial neural networks. In the previous sections, we have defined a quantum mechanical version of a neural network with a single hidden layer. In general, a huge number of hidden neurons is necessary to approximate a target function with a single layer. To circumvent this inconvenience, more hidden layers are introduced, leading eventually to the concept of deep neural networks.
Using the single-qubit classifier formalism introduced in the previous sections, we propose its generalization to more qubits. The introduction of multiple qubits to this quantum classifier may improve its performance, much as more hidden layers improve the classification capability of an artificial neural network. With the introduction of entanglement between these qubits, we reduce the number of layers of our classifier and propose a quantum classification method that could achieve quantum advantage.

Figure 1 shows the analogy between a neural network with a single hidden layer and a single-qubit classifier. The generalization of this analogy is not so obvious. A multi-qubit classifier without entanglement could have some similarities with a convolutional neural network, where each qubit could represent a neural network by itself. However, it is not clear whether the introduction of entanglement between qubits can be understood as a deep neural network architecture. The discussion of this analogy, as well as an extended study of the performance of a multi-qubit classifier, is beyond the scope of this work. In the next subsections, we present a general proposal for a multi-qubit classifier, which we compare with the single-qubit one in Section 6.

4.1 Measurement strategy and cost function for a multi-qubit classifier

With a single-qubit classifier, the measurement strategy consisting of comparing the final state of the circuit with a pre-defined target state was achievable: experimentally, one only needs to perform a quantum state tomography protocol of three measurements. However, if more qubits are considered, tomography protocols become exponentially expensive in terms of the number of measurements.

We propose two measurement strategies for a multi-qubit classifier. The first one is the natural generalization of the single-qubit strategy, although it will become unrealizable for a large number of qubits.
We compare the final state of the circuit with one of the states of the computational basis, one for each class. The second strategy consists of focusing on one qubit and, depending on its state, associating one class or another. This is similar to previous proposals of binary multi-qubit classifiers [7], although we add the possibility of multiclass classification by introducing several thresholds (see Section 2).

Another ingredient that should be adapted is the definition of the cost function. In particular, we use a different function for each strategy explained above.

For the first strategy, we use the fidelity cost function of Eq. (7). Its generalization to more qubits is straightforward. However, the orthogonal states used for a multi-qubit classifier are taken as the computational-basis states; a more sophisticated set of states could be considered to improve the performance of this method.

Figure 4: Two-qubit quantum classifier circuit without entanglement (top circuit, panel (a)) and with entanglement (bottom circuit, panel (b)). Here, each layer includes a rotation with data re-uploading in both qubits plus a CZ gate if there is entanglement. The exception is the last layer, which does not have any CZ gate associated to it. For a fixed number of layers, the number of parameters to be optimized doubles the one needed for a single-qubit classifier.

For the second strategy, we use the weighted fidelity cost function. As stated above, we focus on just one qubit, thus

$$F_{c,q}(\vec{\theta}, \vec{w}, \vec{x}) = \langle \tilde{\psi}_c | \rho_q(\vec{\theta}, \vec{w}, \vec{x}) | \tilde{\psi}_c \rangle, \qquad (22)$$

where $\rho_q$ is the reduced density matrix of the qubit to be measured.
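For a simulated pure state, the reduced density matrix entering Eq. (22) is a partial trace, which can be sketched as follows. The helper names are our own, and we assume the usual convention in which the first qubit is the most significant index of the state vector.

```python
import numpy as np

def reduced_density_matrix(psi, q, n_qubits):
    """rho_q: trace out all qubits of the pure state psi except qubit q."""
    psi = np.asarray(psi, dtype=complex).reshape([2] * n_qubits)
    psi = np.moveaxis(psi, q, 0).reshape(2, -1)   # qubit q first, rest flattened
    return psi @ psi.conj().T

def fidelity_cq(label, psi, q, n_qubits):
    """F_{c,q} of Eq. (22): <label| rho_q |label> for the class-c label state."""
    rho = reduced_density_matrix(psi, q, n_qubits)
    return float(np.real(np.vdot(label, rho @ label)))
```

For a product state the reduced matrix is pure, while for a maximally entangled state it is $\mathbb{1}/2$ and every label-state fidelity equals $1/2$.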
Then, the weighted fidelity cost function can be adapted as

$$\chi_{wf}^2(\vec{\alpha}, \vec{\theta}, \vec{w}) = \frac{1}{2} \sum_{\mu=1}^{M} \sum_{c=1}^{C} \left(\frac{1}{Q}\sum_{q=1}^{Q} \alpha_{c,q}\, F_{c,q}(\vec{\theta}, \vec{w}, \vec{x}_\mu) - Y_c(\vec{x}_\mu)\right)^2, \qquad (23)$$

where we average over all $Q$ qubits that form the classifier. Eventually, we can measure just one of these qubits, reducing the number of parameters to be optimized.

4.2 Quantum circuits examples

The definition of a multi-qubit quantum classifier circuit can be as free as the definition of a multi-layer neural network. In artificial neural networks, it is far from obvious what the number of hidden layers and neurons per layer should be to perform a given task; besides, the answer is, in general, problem-dependent. For a multi-qubit quantum classifier, there is an extra degree of freedom in the circuit design: how to introduce the entanglement. This is precisely an open problem in parametrized quantum circuits: finding a correct ansatz for the entangling structure of the circuit.

Figure 5: Four-qubit quantum classifier circuits. Without entanglement (top circuit, panel (a)), each layer is composed of four parallel rotations. With entanglement (bottom circuit, panel (b)), each layer includes a parallel rotation and two parallel CZ gates. The order of CZ gates alternates in each layer between the (1)-(2) and (3)-(4) qubit pairs and the (2)-(3) and (1)-(4) qubit pairs. The exception is the last layer, which does not contain any CZ gate. For a fixed number of layers, the number of parameters to be optimized quadruples the ones needed for a single-qubit classifier.

Figures 4 and 5 show the explicit circuits used in this work.
For a two-qubit classifier without entanglement, and similarly for a four-qubit classifier, we identify each layer with parallel rotations on all qubits. We introduce the entanglement using CZ gates between rotations, which are absorbed into the definition of a layer. For the two-qubit classifier with entanglement, we apply a CZ gate after each rotation, with the exception of the last layer. For the four-qubit classifier, two CZ gates are applied after each rotation, alternating between the (1)-(2) and (3)-(4) qubit pairs and the (2)-(3) and (1)-(4) qubit pairs.

The number of parameters needed to perform the optimization doubles the one needed for a single-qubit classifier in the two-qubit case and quadruples it in the four-qubit case. For $N$ layers, the circuit depth is $N$ for the non-entangling classifiers and $2N$ for the entangling classifiers.

5 Minimization methods

The practical training of a parametrized single-qubit or multi-qubit quantum classifier requires a minimization in the parameter space describing the circuit. This is often referred to as a hybrid algorithm, where classical and quantum logic coexist and benefit from one another. To be precise, the set of $\{\theta_i\}$ angles and $\{w_i\}$ weights, together with the $\alpha_{q,l}$ parameters if applicable, forms a space to be explored in search of a minimum of $\chi^2$. In parameter landscapes as large as the ones treated here, or in regular neural network classification, the appearance of local minima is ultimately unavoidable. The composition of rotation gates renders a large product of independent trigonometric functions; it is thus clear that our problem will be overly populated with minima, and the classical minimizer can easily get trapped in a non-optimal one.

Our problem reduces to minimizing a function of many parameters. For a single-qubit classifier, the number of parameters is $(3 + d)N$, where $d$ is the dimension of the problem, i.e.
the dimension of $\vec{x}$, and $N$ is the number of layers. Three of these parameters are the rotational angles and the other $d$ correspond to the $\vec{w}_i$ weights. If the weighted fidelity cost function is used, we should add $C$ extra parameters, one for each class.

In principle, one does not know the parameter landscape of the cost function to be minimized. If the cost function were, for example, convex, a downhill strategy would be likely to work properly. The pure downhill strategy is known as gradient descent. In machine learning, the method commonly used is Stochastic Gradient Descent (SGD) [20]. There is another minimization method known as L-BFGS-B [21], which has been used in classical machine learning with very good results [22].

The results we present from now on were obtained with the L-BFGS-B algorithm, as we found it accurate and relatively fast. We used open-source software [23] as the core of the minimization, with our own functions to be minimized. The minimizer is taken as a black box whose parameters are set to their defaults. As this is a first attempt at constructing a single- or multi-qubit classifier, further improvements can be made on the minimization hyperparameters.

Nevertheless, we have also tested an SGD algorithm for the fidelity cost function. This algorithm was developed by us following the steps of Ref. [17]; the details can be read in Appendix A. In general, we found that the L-BFGS-B algorithm performs better than SGD, something already observed in classical neural networks: when the training set is small, it is often more convenient to use an L-BFGS-B strategy than SGD. We were forced to use small training sets due to the computational capabilities available for our simulations. Numerical evidence for this arises when solving the problems we face for these single- and multi-qubit classifiers with standard classical machine learning libraries [22]. This can be understood with a simple argument.
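A minimal sketch of this hybrid loop, using SciPy's L-BFGS-B (the open-source minimizer core) on a toy two-layer, one-angle-per-layer circuit, is given below. This is our own toy illustration, not the training code of Ref. [24]; the gradient is estimated by finite differences inside SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def ry(b):
    # e^{i b sigma_y / 2}
    return np.array([[np.cos(b / 2), np.sin(b / 2)],
                     [-np.sin(b / 2), np.cos(b / 2)]])

def cost(params, x, target):
    """Fidelity cost (Eq. (7)) of a toy 2-layer circuit on one data point.
    Each layer re-uploads the scalar x as an RY angle theta + w*x."""
    psi = np.array([1.0, 0.0])
    thetas, ws = params[:2], params[2:]
    for t, w in zip(thetas, ws):
        psi = ry(t + w * x) @ psi
    return 1.0 - abs(np.vdot(target, psi)) ** 2

# Black-box L-BFGS-B over the classical parameters (theta_1, theta_2, w_1, w_2).
res = minimize(cost, x0=np.full(4, 0.5), args=(0.3, np.array([0.0, 1.0])),
               method='L-BFGS-B')
```

In the real classifier, `cost` is replaced by the $\chi^2$ of Eq. (7) or Eq. (9) evaluated over the whole training set, with the circuit simulation playing the role of the quantum part of the hybrid loop.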
Neural networks, like our quantum classifier, are expected to have plenty of local minima: neural networks contain huge products of nonlinear functions, and on the quantum circuit side there is nothing but trigonometric functions, so the odds of encountering local minima are large in both cases. If there are many training points, it is more likely that some of them can pull the optimizer out of a local minimum; in that case, SGD is more useful because it is faster. On the contrary, when the training set is small, we have to pick an algorithm less sensitive to local minima, such as L-BFGS-B.

6 Benchmark of a single- and multi-qubit classifier

We can now tackle some classification problems. We will show that a single-qubit classifier can perform a multi-class classification for multi-dimensional data, and that a multi-qubit classifier, in general, improves these results.

We construct several classifiers with different numbers of layers. We then train the circuits with a training set of random data points to obtain the values of the free parameters $\{\theta_i\}$ and $\{w_i\}$ for each layer, and $\{\alpha_i\}$ when applicable, using the cost functions defined in Eq. (9) and Eq. (7). Then, we test the performance of each classifier with a test set, independently generated and one order of magnitude greater than the training set. For the sake of reproducibility, we have fixed the same seed to generate all data points; for this reason, the test and training set points are the same for all problems. For more details, we provide the explicit code used in this work [24].

We run single-, two- and four-qubit classifiers, with and without entanglement, using the two cost functions described above. We benchmark several classifiers formed by $L = 1, 2, 3, 4, 5, 6, 8$ and $10$ layers. In the following subsections, we describe the particular problems addressed with these single- and multi-qubit classifiers with data re-uploading.
We choose four problem types: a simple binary classification, a classification of a figure with multiple patterns, a multi-dimensional classification and a non-convex figure. The code used to define and benchmark the single- and multi-qubit quantum classifiers is open and can be found in Ref. [24].

6.1 Simple example: classification of a circle

Let us start with a simple example. We create a random set of data on a plane with coordinates $\vec{x} = (x_1, x_2)$, with $x_i \in [-1, 1]$. Our goal is to classify these points according to $x_1^2 + x_2^2 < r^2$, i.e. whether they are inside or outside a circle of radius $r$. The value of the radius is chosen in such a way that the areas inside and outside it are equal, that is, $r = \sqrt{2/\pi}$, so the probability of success when labeling each data point at random is 50%.
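The dataset for this first benchmark can be generated in a few lines. The sketch below is our own minimal equivalent; the paper's actual fixed-seed data is produced by the code of Ref. [24].

```python
import numpy as np

def circle_dataset(n, seed=0):
    """n random points in [-1, 1]^2, labeled 1 inside the circle of radius
    r = sqrt(2/pi) (which splits the square into two equal areas) and 0
    otherwise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (x[:, 0] ** 2 + x[:, 1] ** 2 < 2.0 / np.pi).astype(int)
    return x, y
```

Since the circle of radius $\sqrt{2/\pi}$ has area $2$, exactly half of the square $[-1,1]^2$, roughly half of the generated labels fall in each class.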
5 Bouc, Serge 5 Fan, Yun 5 Friedlander, Eric Mark 5 Guralnick, Robert Michael 5 Kessar, Radha 5 Kunugi, Naoko 5 Murai, Masafumi 5 Nakano, Daniel K. 5 Taussky-Todd, Olga 5 Turull, Alexandre 5 Witherspoon, Sarah J. 4 Assadi, Amir H. 4 Broué, Michel 4 Cegarra, Antonio Martínez 4 Gómez Pardo, José Luis 4 Himstedt, Frank 4 Khukhro, Evgeniĭ I. 4 Luckas, Melissa R. 4 Malle, Gunter 4 Nauman, Syed Khalid 4 Torrecillas Jover, Blas 4 Wang, Baoshan 4 Zhou, Yuanyang 3 Albu, Toma 3 Cline, Edward T. 3 Cohen, Miriam 3 Craven, David A. 3 Dăscălescu, Sorin 3 Ercan, Gülin 3 Garzón, Antonio R. 3 Haefner, Jeremy 3 Hartley, Brian 3 Herman, Allen 3 Holloway, Miles 3 Hüttemann, Thomas 3 Khosravi, Bahman 3 Khosravi, Behnam 3 Khosravi, Behrooz 3 Kurzweil, Hans 3 Leon, Jeffrey S. 3 Leonard, Henry S. jun. 3 Lu, Ziqun 3 Lundström, Patrik 3 Mikhalëv, Aleksandr Vasil’evich 3 Miyachi, Hyohe 3 Montgomery, Susan 3 Müller, Jürgen 3 Öinert, Johan 3 Öztürk Kaptanoğlu, Semra 3 Peacock, R. M. 3 Plesken, Wilhelm 3 Sehgal, Surinder K. 3 Srinivasan, Bhama 3 Tasaka, Fuminori 3 Uno, Katsuhiro 3 Wales, David B. 3 Wiegand, Sylvia M. 3 Yang, Sheng 3 Zhang, Yinhuo 2 Alperin, Jonathan L. 2 Aschbacher, Michael George 2 Baeth, Nicholas R. 2 Balmer, Paul 2 Berger, Thomas R. 2 Biland, Erwan ...and 416 more Authors all top 5 #### Cited in 95 Serials 258 Journal of Algebra 71 Communications in Algebra 49 Journal of Pure and Applied Algebra 32 Mathematische Zeitschrift 22 Archiv der Mathematik 18 Transactions of the American Mathematical Society 18 Journal of Group Theory 13 Advances in Mathematics 12 Journal of Number Theory 11 Proceedings of the American Mathematical Society 10 Inventiones Mathematicae 10 Algebras and Representation Theory 10 Journal of Algebra and its Applications 9 Israel Journal of Mathematics 8 Bulletin of the American Mathematical Society 7 Bulletin of the American Mathematical Society. 
New Series 6 Journal of Soviet Mathematics 6 Manuscripta Mathematica 6 Linear Algebra and its Applications 6 Algebra Colloquium 5 Journal für die Reine und Angewandte Mathematik 5 Nagoya Mathematical Journal 5 Osaka Journal of Mathematics 5 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 5 Journal of the European Mathematical Society (JEMS) 5 Acta Mathematica Sinica. English Series 4 Mathematical Proceedings of the Cambridge Philosophical Society 4 Glasgow Mathematical Journal 3 Mathematics of Computation 3 Algebra and Logic 3 Monatshefte für Mathematik 3 Proceedings of the Japan Academy. Series A 3 International Journal of Algebra and Computation 3 Journal of Algebraic Combinatorics 3 Representation Theory 3 LMS Journal of Computation and Mathematics 3 Journal of Commutative Algebra 2 Bulletin of the Australian Mathematical Society 2 Mathematische Annalen 2 Proceedings of the Edinburgh Mathematical Society. Series II 2 European Journal of Combinatorics 2 Forum Mathematicum 2 Annales Scientifiques de l’Université Blaise Pascal Clermont-Ferrand II. Mathématiques 2 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 2 Journal of Mathematical Sciences (New York) 2 Annals of Mathematics. Second Series 2 Science China. Mathematics 1 Discrete Mathematics 1 Indian Journal of Pure & Applied Mathematics 1 Reports on Mathematical Physics 1 Beiträge zur Algebra und Geometrie 1 Acta Mathematica 1 Annales Scientifiques de l’École Normale Supérieure. 
Quatrième Série 1 Compositio Mathematica 1 Computing 1 Duke Mathematical Journal 1 Geometriae Dedicata 1 Indiana University Mathematics Journal 1 Publications Mathématiques 1 Journal of the Mathematical Society of Japan 1 Memoirs of the American Mathematical Society 1 Michigan Mathematical Journal 1 Numerische Mathematik 1 Pacific Journal of Mathematics 1 Rendiconti del Seminario Matematico della Università di Padova 1 Results in Mathematics 1 Semigroup Forum 1 Siberian Mathematical Journal 1 Cybernetics 1 Acta Applicandae Mathematicae 1 Journal of Symbolic Computation 1 Journal of the American Mathematical Society 1 Elemente der Mathematik 1 Expositiones Mathematicae 1 Indagationes Mathematicae. New Series 1 Selecta Mathematica. New Series 1 Wuhan University Journal of Natural Sciences (WUJNS) 1 Algebraic & Geometric Topology 1 Dynamical Systems 1 Journal of the Australian Mathematical Society 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 JP Journal of Algebra, Number Theory and Applications 1 Journal of Applied Mathematics and Computing 1 International Journal of Number Theory 1 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 1 Complex Analysis and Operator Theory 1 Algebra & Number Theory 1 European Journal of Pure and Applied Mathematics 1 Journal of $$K$$-Theory 1 Acta Universitatis Sapientiae. Mathematica 1 Quantum Topology 1 Kyoto Journal of Mathematics 1 Forum of Mathematics, Pi 1 Forum of Mathematics, Sigma 1 Journal of Siberian Federal University. 
Mathematics & Physics

#### Cited in 25 Fields

517 Group theory and generalizations (20-XX)
231 Associative rings and algebras (16-XX)
42 Commutative algebra (13-XX)
34 Number theory (11-XX)
31 Category theory; homological algebra (18-XX)
28 $$K$$-theory (19-XX)
16 Combinatorics (05-XX)
16 Algebraic geometry (14-XX)
15 Nonassociative rings and algebras (17-XX)
10 Linear and multilinear algebra; matrix theory (15-XX)
9 Field theory and polynomials (12-XX)
8 Manifolds and cell complexes (57-XX)
7 Algebraic topology (55-XX)
5 History and biography (01-XX)
3 Dynamical systems and ergodic theory (37-XX)
3 Geometry (51-XX)
3 Information and communication theory, circuits (94-XX)
1 Order, lattices, ordered algebraic structures (06-XX)
1 Topological groups, Lie groups (22-XX)
1 Harmonic analysis on Euclidean spaces (42-XX)
1 Functional analysis (46-XX)
1 Convex and discrete geometry (52-XX)
1 Differential geometry (53-XX)
1 Global analysis, analysis on manifolds (58-XX)
1 Probability theory and stochastic processes (60-XX)

#### Wikidata Timeline

The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
http://dlmf.nist.gov/13.18
# §13.18 Relations to Other Functions

## §13.18(ii) Incomplete Gamma Functions

For the notation see §§6.2(i), 7.2(i), and 8.2(i). When an appropriate combination of the parameters is an integer, the Whittaker functions can be expressed as incomplete gamma functions (or generalized exponential integrals). Special cases are the error functions (13.18.7).

## §13.18(iii) Modified Bessel Functions

When κ = 0 the Whittaker functions can be expressed as modified Bessel functions. For the notation see §§10.25(ii) and 9.2(i).

## §13.18(v) Orthogonal Polynomials

Special cases of §13.18(iv) are as follows. For the notation see §18.3.
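For orientation, two classical identities of this kind, stated in Kummer-function notation $M(a,b,z)$ (which is equivalent to the Whittaker form): these are the standard textbook relations for the error function and the modified Bessel function, quoted here as representatives of the families above, not reconstructions of the numbered DLMF equations themselves:

$$\operatorname{erf}(z)=\frac{2z}{\sqrt{\pi}}\,M\!\left(\tfrac12,\tfrac32,-z^{2}\right),\qquad M\!\left(\nu+\tfrac12,\,2\nu+1,\,2z\right)=\Gamma(1+\nu)\,e^{z}\left(\frac{z}{2}\right)^{-\nu}I_{\nu}(z).$$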
https://forum.step.esa.int/t/export-safe-to-beam-dimap/9586
# Export SAFE to BEAM DIMAP

Hi, I got Sentinel-1 data in the .SAFE format that I need to export to the BEAM-DIMAP format. I used SNAP to do so. However, I think I'm losing metadata during the process. Although running gdalinfo on the GeoTiff files in the .SAFE product shows the Coordinate Reference System, this is not the case when I run it on the .img files within the .data directory of the BEAM-DIMAP product. How can I retrieve this metadata? (I know that there is a Coordinate_Reference_System tag in the .dim file, but it does not seem complete.) Thanks, FX

Instead of running gdalinfo you could use gpt to directly access the S1 GRD product with all its metadata, based on the manifest.safe file:

```
gpt Write -Ssource='C:\path\to\manifest.safe' -Pfile='C:\path\to\output.dim'
```

I just tested it and it wrote a dim file of Amplitudes, no geocoding, of course.

Thanks for your answer. `gpt Write` actually does the conversion, but the output is the same as when using the export feature in SNAP. This might be a dumb question, but I need to extract the Coordinate System in the WKT format, something like this:

```
GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.01745329251994328,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]]
```

and all I can find is the Coordinate_Reference_System tag in the .dim file:

```xml
<Coordinate_Reference_System>
  <Horizontal_CS>
    <HORIZONTAL_CS_TYPE>GEOGRAPHIC</HORIZONTAL_CS_TYPE>
    <Geographic_CS>
      <Horizontal_Datum>
        <HORIZONTAL_DATUM_NAME>WGS-84</HORIZONTAL_DATUM_NAME>
        <Ellipsoid>
          <ELLIPSOID_NAME>WGS-84</ELLIPSOID_NAME>
          <Ellipsoid_Parameters>
            <ELLIPSOID_MAJ_AXIS unit="M">6378137.0</ELLIPSOID_MAJ_AXIS>
            <ELLIPSOID_MIN_AXIS unit="M">6356752.3</ELLIPSOID_MIN_AXIS>
          </Ellipsoid_Parameters>
        </Ellipsoid>
      </Horizontal_Datum>
    </Geographic_CS>
  </Horizontal_CS>
</Coordinate_Reference_System>
```

and I can't see how to do that, as I don't find any information about the prime meridian or units in the XML.

I don't quite get what you are trying to achieve by that. Maybe you can shortly describe your aim so we can better understand and suggest how to help you with it.

I need to use datacube, and prior to data ingestion I need to write a preparation script which generates a metadata file; the doc is available here. In this file there should be, among other things, the CRS. A script is provided as an example in the datacube, but it tries to get the CRS from somewhere where there's nothing, so I get an error. The script assumes that the data is in the BEAM-DIMAP format, which is why I'd like to perform the conversion. I know I could write a script from scratch that uses the SAFE format, but it seems easier to fix the error in the provided script. Hope I've been clear enough; if I haven't, please let me know. Thank you

That looks interesting. Sorry for asking, but I thought that maybe there was an easier way to achieve things. In your case I think you'll have to go the hard way. You can get the WKT in SNAP Desktop: select the product and then, from the menu, Analysis / Geo-Coding. You can get it via the API too. For this you would need to write a piece of code in Java or Python. But it is not possible to get it via gpt.

Thank you for your answer. I managed indeed to do that in SNAP Desktop. Could you please give me a hint of how to do it in Java? I've never used the API.
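Until the SNAP API route is sorted out, the WKT needed by the datacube preparation script can also be derived from the .dim header itself: since the header above declares a WGS-84 geographic CS, that datum can simply be mapped to the stock EPSG:4326 WKT string shown earlier in the thread. A minimal stdlib-only sketch of that idea (`wkt_from_dim` is a hypothetical helper, not part of SNAP or datacube, and it assumes the datum name in the header is enough to pick the CRS):

```python
import xml.etree.ElementTree as ET

# Stock OGC WKT1 for EPSG:4326, as printed by gdalinfo; used whenever the
# .dim header declares the WGS-84 datum.
WGS84_WKT = (
    'GEOGCS["WGS 84",DATUM["WGS_1984",'
    'SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],'
    'AUTHORITY["EPSG","6326"]],'
    'PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],'
    'UNIT["degree",0.01745329251994328,AUTHORITY["EPSG","9122"]],'
    'AUTHORITY["EPSG","4326"]]'
)

def wkt_from_dim(dim_xml: str) -> str:
    """Return a WKT CRS string for a BEAM-DIMAP header (WGS-84 only).

    Works on a full .dim document or on the bare
    <Coordinate_Reference_System> fragment quoted above.
    """
    root = ET.fromstring(dim_xml)
    datum = root.findtext(".//HORIZONTAL_DATUM_NAME")
    if datum == "WGS-84":
        return WGS84_WKT
    raise ValueError(f"unsupported datum: {datum!r}")
```

This sidesteps SNAP entirely, but only covers plain WGS-84 geographic products; for anything projected, the SNAP Java/Python API route suggested above is the safer option.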
https://community.kde.org/index.php?title=Solid/Projects/LibBlueDevil&diff=prev&oldid=6402
# Difference between revisions of "Solid/Projects/LibBlueDevil"

## Description

LibBlueDevil is a Qt-based library which makes it easier to develop Qt applications that use BlueZ, abstracting DBus away from the developer.

## How to collaborate

LibBlueDevil is a KDE project, so we use the usual ways to communicate with each other and to work together.

### Project Page

Currently LibBlueDevil is under playground/libs; a move to kdesupport can be expected in the future.

### Mailing List

We're using the kde-hardware-devel mailing list.

Content is available under Creative Commons License SA 4.0 unless otherwise noted.
https://www.zbmath.org/authors/?q=ai%3Aabhyankar.shreeram-shankar
zbMATH — the first resource for mathematics Abhyankar, Shreeram Shankar Compute Distance To: Author ID: abhyankar.shreeram-shankar Published as: Abhyankar, S.; Abhyankar, S. S.; Abhyankar, Shreeram; Abhyankar, Shreeram S.; Abhyankar, Shreeram Shankar External Links: MGP · Wikidata · dblp · GND · MacTutor Documents Indexed: 187 Publications since 1951, including 13 Books Biographic References: 6 Publications all top 5 Co-Authors 118 single-authored 11 Heinzer, William J. 5 Moh, Tzuong-Tsieng 4 Bajaj, Chanderjit L. 4 Joshi, Sanjeevani B. 4 Sathaye, Avinash 3 Assi, Abdallah 3 Inglis, Nicholas F. J. 3 Kulkarni, Devadatta M. 3 Loomis, Paul A. 3 Yie, Ikkwon 2 Artal Bartolo, Enrique 2 Chandrasekar, Srinivasan 2 Chandru, Vijaya 2 Luengo, Ignacio 2 Popp, Herbert 2 Seiler, Wolfgang K. 2 Sundaram, Ganapathy S. 2 van der Put, Marius 1 Chern, Shiing-Shen 1 Christensen, Chris 1 Cohen, Stephen D. 1 Eakin, Paul M. jun. 1 Feit, Walter 1 Fireman, Nicholas J. 1 Fried, Michael David 1 Ghorpade, Sudhir R. 1 Gu, Nan 1 Igusa, Jun-ichi 1 Ihara, Yasutaka 1 Keskar, Pradipkumar H. 1 Kravitz, Ben 1 Kumar, Manish 1 Lang, Serge 1 Li, Wei 1 Morin, Thomas L. 1 Moses, Nathan C. 1 Ou, Jun 1 Risk, C. 1 Rubel, Lee Albert 1 Singh, Balwant 1 Trafalis, Theodore B. 1 Völklein, Helmut 1 Wiegand, Sylvia M. 1 Wilson, W. Stephen 1 Yalcin, Umud D. 1 Zariski, Oscar 1 Zieve, Michael E. all top 5 Serials 24 Proceedings of the American Mathematical Society 17 American Journal of Mathematics 10 Journal of Algebra 8 Discrete Mathematics 7 Mathematische Annalen 7 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 6 Transactions of the American Mathematical Society 4 Journal für die Reine und Angewandte Mathematik 2 American Mathematical Monthly 2 Advances in Mathematics 2 Publications Mathématiques 2 The Journal of the Indian Mathematical Society. New Series 2 The Mathematics Student 2 ACM Transactions on Graphics 2 CAD. 
Computer-Aided Design 2 Proceedings of the National Academy of Sciences of the United States of America 2 Bulletin of the American Mathematical Society. New Series 2 Comptes Rendus de l’Académie des Sciences. Série I 2 Notices of the American Mathematical Society 2 Finite Fields and their Applications 2 Revista Matemática Complutense 2 Annals of Mathematics. Second Series 2 Lecture Notes in Mathematics 1 Discrete Applied Mathematics 1 Indian Journal of Pure & Applied Mathematics 1 Israel Journal of Mathematics 1 Mathematics Magazine 1 The Mathematical Intelligencer 1 Acta Mathematica Vietnamica 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Bulletin of the Calcutta Mathematical Society 1 Duke Mathematical Journal 1 Journal of Pure and Applied Algebra 1 Journal of Statistical Planning and Inference 1 Le Matematiche 1 Rendiconti del Seminario Matematico 1 Computer Aided Geometric Design 1 Revista Matemática Iberoamericana 1 Atti della Accademia Nazionale dei Lincei. Serie Ottava. Rendiconti. Classe di Scienze Fisiche, Matematiche e Naturali 1 Historia Mathematica 1 Linear Algebra and its Applications 1 Nieuw Archief voor Wiskunde. Derde Serie 1 Comptes Rendus de l’Académie des Sciences. Série I. Mathématique 1 Bulletin of the Brazilian Mathematical Society. 
New Series 1 Memoirs of the College of Science, University of Kyoto, Series A 1 Annals of Mathematics Studies 1 Contemporary Mathematics 1 Mathematical Surveys and Monographs 1 Pure and Applied Mathematics, Marcel Dekker 1 Wiadomości Matematyczne 1 Springer Monographs in Mathematics all top 5 Fields 111 Algebraic geometry (14-XX) 50 Field theory and polynomials (12-XX) 42 Commutative algebra (13-XX) 25 Group theory and generalizations (20-XX) 10 History and biography (01-XX) 10 Several complex variables and analytic spaces (32-XX) 8 Combinatorics (05-XX) 8 Number theory (11-XX) 6 General and overarching topics; collections (00-XX) 6 Geometry (51-XX) 4 Numerical analysis (65-XX) 4 Computer science (68-XX) 3 Linear and multilinear algebra; matrix theory (15-XX) 3 Differential geometry (53-XX) 1 Convex and discrete geometry (52-XX) 1 Operations research, mathematical programming (90-XX) Citations contained in zbMATH 150 Publications have been cited 2,038 times in 1,120 Documents Cited by Year Embeddings of the line in the plane. Zbl 0332.14004 Abhyankar, Shreeram S.; Moh, Tzuong-tsieng 1975 On the valuations centered in a local domain. Zbl 0074.26301 Abhyankar, Shreeram 1956 On the uniqueness of the coefficient ring in a polynomial ring. Zbl 0255.13008 Abhyankar, Shreeram S.; Heinzer, William; Eakin, Paul 1972 Local rings of high embedding dimension. Zbl 0159.33202 Abhyankar, S. S. 1967 Resolution of singularities of embedded algebraic surfaces. Zbl 0147.20504 Abhyankar, S. S. 1966 Algebraic geometry for scientists and engineers. Zbl 0709.14001 Abhyankar, Shreeram S. 1990 Local analytic geometry. Zbl 0205.50401 Abhyankar, Shreeram Shankar 1964 Lectures on expansion techniques in algebraic geometry. With notes by Balwant Singh. Zbl 0818.14001 Abhyankar, Shreeram Shankar 1977 Local uniformization on algebraic surfaces over ground fields of characteristic $$p\neq 0$$. Zbl 0108.16803 Abhyankar, S. S. 1956 Ramification theoretic methods in algebraic geometry. 
Zbl 0101.38201 Abhyankar, S. S. 1959 On the ramification of algebraic functions. Zbl 0064.27501 Abhyankar, Shreeram 1955 Galois theory on the line in nonzero characteristic. Zbl 0760.12002 Abhyankar, Shreeram S. 1992 Newton-Puiseux expansion and generalized Tschirnhausen transformation. I. II. Zbl 0272.12102 Abhyankar, Shreeram S.; Moh, Tzuong-tsieng 1973 Coverings of algebraic curves. Zbl 0087.03603 Abhyankar, Shreeram 1957 Resolution of singularities of embedded algebraic surfaces. 2nd, enl. ed. Zbl 0914.14006 Abhyankar, S. S. 1998 Automatic parameterization of rational curves and surfaces. III: Algebraic plane curves. Zbl 0655.65019 Abhyankar, Shreeram S.; Bajaj, Chanderjit L. 1988 Enumerative combinatorics of Young tableaux. Zbl 0643.05001 Abhyankar, Shreeram S. 1988 Historical ramblings in algebraic geometry and related algebra. Zbl 0339.14001 Abhyankar, Shreeram S. 1976 Desingularization of plane curves. Zbl 0521.14005 Abhyankar, Shreeram S. 1983 Tame coverings and fundamental groups of algebraic varieties. I: Branch loci with normal crossings; applications: Theorems of Zariski and Picard. II: Branch curves with higher singularities. III: Some other sets of conditions for the fundamental group to be abelian. IV: Product theorems. V: Three cuspidal plane quartics. VI: Plane curves of order at most four. Zbl 0100.16401 Abhyankar, Shreeram 1960 Resolution of singularities of arithmetical surfaces. Zbl 0147.20503 Abhyankar, S. S. 1965 On the semigroup of a meromorphic curve. I. Zbl 0408.14010 Abhyankar, Shreeram S. 1977 Nice equations for nice groups. Zbl 0828.14014 Abhyankar, Shreeram S. 1994 Lectures on algebra. Vol. 1. Zbl 1121.13001 Abhyankar, S. S. 2006 Automatic parameterization of rational curves and surfaces. IV: Algebraic space curves. Zbl 0746.68104 Abhyankar, Shreeram S.; Bajaj, Chanderjit L. 1989 Good points of a hypersurface. Zbl 0657.14008 Abhyankar, Shreeram S. 1988 Concepts of order and rank on a complex space, and a condition for normality. 
Zbl 0107.15001 Abhyankar, S. S. 1960 Two notes on formal power series. Zbl 0073.02601 Abhyankar, Shreeram 1956 Projective polynomials. Zbl 0912.12004 Abhyankar, Shreeram S. 1997 Simultaneous resolution for algebraic surfaces. Zbl 0073.37902 Abhyankar, Shreeram 1956 Dicritical divisors and Jacobian problem. Zbl 1197.14061 Abhyankar, Shreeram S. 2010 Automatic parametrization of rational curves and surfaces. II: Cubics and cubicoids. Zbl 0655.65018 Abhyankar, Shreeram S.; Bajaj, Chanderjit 1987 Automatic parameterization of rational curves and surfaces. I: Conics and conicoids. Zbl 0655.65017 Abhyankar, Shreeram S.; Bajaj, Chanderjit 1987 Homomorphisms of analytic local rings. Zbl 0193.00501 Abhyankar, Shreeram Shankar; van der Put, Marius 1970 On the ramification of algebraic functions. II: Unaffected equations for characteristic two. Zbl 0093.04501 Abhyankar, Shreeram 1959 Irreducibility criterion for germs of analytic functions of two complex variables. Zbl 0683.14001 Abhyankar, Shreeram S. 1989 A remark on the nonnormal locus of an analytic space. Zbl 0146.17202 Abhyankar, S. S. 1964 Resolution of singularities and modular Galois theory. Zbl 0999.12003 Abhyankar, Shreeram S. 2001 Uniformization of Jungian local domains. Zbl 0127.01702 Abhyankar, S. S. 1965 Existence of dicritical divisors. Zbl 1248.13021 Abhyankar, Shreeram S.; Heinzer, William J. 2012 Algebraic theory of dicritical divisors. Zbl 1232.13012 Abhyankar, Shreeram S.; Luengo, Ignacio 2011 Combinatoire des tableaux de Young, variétés determinantielles et calcul de fonctions de Hilbert. (Combinatoric of Young tableaux, determinantal varieties and calculus of Hilbert functions). (Rédigé par A. Galligo). Zbl 0614.14017 Abhyankar, S. 1984 Quasirational singularities. Zbl 0425.14009 Abhyankar, Shreeram S. 1979 Geometric theory of algebraic space curves. Zbl 0357.14008 Abhyankar, S. S.; Sathaye, A. M. 1974 Algebraic space curves. Edited by P. Russell and A. Sathaye. Zbl 0245.14009 Abhyankar, Shreeram S. 
1971 Inversion and invariance of characteristic pairs. Zbl 0162.34103 Abhyankar, Shreeram Shankar 1967 Fundamental group of the affine line in positive characteristic. Zbl 0914.14014 Abhyankar, Shreeram S. 1995 Singularities of algebraic curves. Zbl 0197.47203 Abhyankar, S. S. 1970 Pillars and towers of quadratic transformations. Zbl 1227.14004 Abhyankar, Shreeram S. 2011 Alternating group coverings of the affine line for characteristic two. Zbl 0833.14017 Abhyankar, Shreeram S.; Ou, Jun; Sathaye, Avinash 1994 On Hilbertian ideals. Zbl 0719.13005 1989 A reduction theorem for divergent power series. Zbl 0191.04403 Abhyankar, S. S.; Moh, T. T. 1970 More nice equations for nice groups. Zbl 0866.12005 Abhyankar, Shreeram S. 1996 Alternating group coverings of the affine line for characteristic greater than two. Zbl 0801.12003 Abhyankar, Shreeram S. 1993 Abhyankar, Shreeram S. 2011 Bivariate factorizations connecting Dickson polynomials and Galois theory. Zbl 0989.12001 Abhyankar, Shreeram S.; Cohen, Stephen D.; Zieve, Michael E. 2000 Galois theory of semilinear transformations. Zbl 0986.12002 Abhyankar, Shreeram S. 1999 Again nice equations for nice groups. Zbl 0866.12004 Abhyankar, Shreeram S. 1996 Small degree coverings of the affine line in characteristic two. Zbl 0833.14018 Abhyankar, Shreeram S.; Yie, Ikkwon 1994 Embeddings of certain curves in the affine plane. Zbl 0383.14007 Abhyankar, Shreeram S.; Singh, Balwant 1978 Quadratic transforms inside their generic incarnations. Zbl 1307.14001 Abhyankar, Shreeram S. 2012 Existence of dicritical divisors revisited. Zbl 1282.14105 Abhyankar, Shreeram S.; Heinzer, William J. 2011 Inversion and invariance of characteristic terms. I. Zbl 1322.14054 Abhyankar, Shreeram S. 2010 Once more nice equations for nice groups. Zbl 0889.12005 Abhyankar, Shreeram S.; Loomis, Paul A. 1998 Polynomial expansion. Zbl 0942.14008 Abhyankar, Shreeram S. 1998 Further nice equations for nice groups. Zbl 0860.12001 Abhyankar, Shreeram S. 
1996 Some remarks on the Jacobian question. (Notes by Marius van der Put and William Heinzer, updated by Avinash Sathaye). Zbl 0812.13013 Abhyankar, Shreeram S. 1994 What is the difference between a parabola and a hyperbola ? Zbl 0698.14061 Abhyankar, Shreeram S. 1988 Every difference polynomial has a connected zero-set. Zbl 0532.12021 Abhyankar, Shreeram S.; Rubel, Lee A. 1980 Some thoughts on the Jacobian conjecture. II. Zbl 1139.14047 Abhyankar, Shreeram S. 2008 Some thoughts on the Jacobian conjecture. I. Zbl 1139.14046 Abhyankar, Shreeram S. 2008 Translates of polynomials. Zbl 1053.14011 Abhyankar, Shreeram S.; Heinzer, William J.; Sathaye, Avinash 2003 Twice more nice equations for nice groups. Zbl 0982.12002 Abhyankar, Shreeram S.; Loomis, Paul A. 1999 Semilinear transformations. Zbl 0934.12002 Abhyankar, Shreeram S. 1999 Local fundamental groups of algebraic varieties. Zbl 0891.12003 Abhyankar, Shreeram S. 1997 Factorizations over finite fields. Zbl 0912.11052 Abhyankar, Shreeram S. 1996 Uniqueness of plane embeddings of special curves. Zbl 0880.14012 Abhyankar, Shreeram S.; Sathaye, Avinash 1996 Some more Mathieu group coverings in characteristic two. Zbl 0837.12001 Abhyankar, Shreeram S.; Yie, Ikkwon 1994 Wreath products and enlargements of groups. Zbl 0787.20019 Abhyankar, Shreeram S. 1993 Young tableaux and linear independence of standard monomials in multiminors of a multimatrix. Zbl 0781.15003 Abhyankar, Shreeram S.; Ghorpade, Sudhir R. 1991 Bijection between indexed monomials and standard bitableaux. Zbl 0767.05094 Abhyankar, Shreeram S.; Kulkarni, Devadatta M. 1990 Determinantal loci and enumerative combinatorics of Young tableaux. Zbl 0688.14044 Abhyankar, Shreeram S. 1988 Dicriticals of pencils and Dedekind’s Gauss lemma. Zbl 1334.14004 Abhyankar, Shreeram S. 2013 Algebraic theory of curvettes and dicriticals. Zbl 1303.13002 Abhyankar, Shreeram S.; Artal Bartolo, Enrique 2013 Some thoughts on the Jacobian conjecture. III. 
Zbl 1162.14041 Abhyankar, Shreeram S. 2008 Galois embeddings for linear groups. Zbl 0999.12005 Abhyankar, Shreeram S. 2000 Galois theory of Moore-Carlitz-Drinfeld modules. Zbl 0884.12001 Abhyankar, Shreeram S.; Sundaram, Ganapathy S. 1997 Mathieu group coverings and linear group coverings. Zbl 0930.14019 Abhyankar, Shreeram S. 1995 Efficient faces of polytopes: Interior point algorithms, parametrization of algebraic varieties, and multiple objective optimization. Zbl 0725.90056 Abhyankar, S. S.; Morin, T. L.; Trafalis, T. 1990 Note on coefficient fields. Zbl 0184.06901 Abhyankar, Shreeram Shankar 1968 Uniformization on $$p$$-cyclic extensions of algebraic surfaces over ground fields of characteristic $$p$$. Zbl 0121.37901 Abhyankar, S. S. 1964 On the field of definition of a nonsingular birational transform of an algebraic surface. Zbl 0108.16901 Abhyankar, S. S. 1957 Spiders and multiplicity sequences. Zbl 1285.14002 Abhyankar, Shreeram S.; Luengo, Ignacio 2013 Mathieu-group coverings of the affine line. Zbl 0788.14022 Abhyankar, Shreeram S.; Seiler, Wolfgang K.; Popp, Herbert 1992 Generalized codeletion and standard multitableaux. Zbl 0703.14034 Abhyankar, Shreeram S.; Joshi, Sanjeevani B. 1989 On Macaulay’s examples. Notes by A. Sathaye. Zbl 0259.13010 Abhyankar, Shreeram 1973 Nonprefactorial local rings. Zbl 0159.33201 Abhyankar, S. S. 1967 An algorithm on polynomials in one indeterminate with coefficients in a two dimensional regular local domain. Zbl 0158.04201 Abhyankar, S. S. 1966 Generic incarnations of quadratic transforms. Zbl 1311.13004 Abhyankar, Shreeram S. 2013 Rees valuations. Zbl 1275.13015 Abhyankar, Shreeram S.; Heinzer, William J. 2012 Analytic theory of curvettes and dicriticals. Zbl 1317.14002 Abhyankar, Shreeram S.; Bartolo, Enrique Artal 2014 Dicriticals of pencils and Dedekind’s Gauss lemma. Zbl 1334.14004 Abhyankar, Shreeram S. 2013 Algebraic theory of curvettes and dicriticals. 
Zbl 1303.13002 Abhyankar, Shreeram S.; Artal Bartolo, Enrique 2013 Spiders and multiplicity sequences. Zbl 1285.14002 Abhyankar, Shreeram S.; Luengo, Ignacio 2013 Generic incarnations of quadratic transforms. Zbl 1311.13004 Abhyankar, Shreeram S. 2013 Existence of dicritical divisors. Zbl 1248.13021 Abhyankar, Shreeram S.; Heinzer, William J. 2012 Quadratic transforms inside their generic incarnations. Zbl 1307.14001 Abhyankar, Shreeram S. 2012 Rees valuations. Zbl 1275.13015 Abhyankar, Shreeram S.; Heinzer, William J. 2012 Algebraic theory of dicritical divisors. Zbl 1232.13012 Abhyankar, Shreeram S.; Luengo, Ignacio 2011 Pillars and towers of quadratic transformations. Zbl 1227.14004 Abhyankar, Shreeram S. 2011 Abhyankar, Shreeram S. 2011 Existence of dicritical divisors revisited. Zbl 1282.14105 Abhyankar, Shreeram S.; Heinzer, William J. 2011 Dicritical divisors and Jacobian problem. Zbl 1197.14061 Abhyankar, Shreeram S. 2010 Inversion and invariance of characteristic terms. I. Zbl 1322.14054 Abhyankar, Shreeram S. 2010 Some thoughts on the Jacobian conjecture. II. Zbl 1139.14047 Abhyankar, Shreeram S. 2008 Some thoughts on the Jacobian conjecture. I. Zbl 1139.14046 Abhyankar, Shreeram S. 2008 Some thoughts on the Jacobian conjecture. III. Zbl 1162.14041 Abhyankar, Shreeram S. 2008 Two counterexamples in normalization. Zbl 1128.13003 Abhyankar, Shreeram S.; Kravitz, Ben 2007 Lectures on algebra. Vol. 1. Zbl 1121.13001 Abhyankar, S. S. 2006 Abhyankar, Shreeram S.; Kumar, Manish 2005 Translates of polynomials. Zbl 1053.14011 Abhyankar, Shreeram S.; Heinzer, William J.; Sathaye, Avinash 2003 Geometry and Galois theory. Zbl 1034.12002 Abhyankar, S. S. 2003 Jacobian pairs. Zbl 1101.13303 Abhyankar, Shreeram S.; Assi, Abdallah 2003 Two step descent in modular Galois theory, theorems of Burnside and Cayley, and Hilbert’s thirteenth problem. Zbl 1024.12003 Abhyankar, Shreeram S. 2002 Semidirect products: $$x\mapsto ax+b$$ as a first example. 
Zbl 1065.15034 Abhyankar, Shreeram S.; Christensen, Chris 2002 Symplectic groups and permutation polynomials. I. Zbl 1045.12002 Abhyankar, Shreeram S. 2002 Desingularization and modular Galois theory (with an appendix by David Harbater). Zbl 1037.12004 Abhyankar, Shreeram S. 2002 Resolution of singularities and modular Galois theory. Zbl 0999.12003 Abhyankar, Shreeram S. 2001 Descent principle in modular Galois theory. Zbl 1022.12002 Abhyankar, Shreeram S.; Keskar, Pradipkumar H. 2001 Galois groups of some vectorial polynomials. Zbl 0992.12002 Abhyankar, Shreeram S.; Inglis, Nicholas F. J. 2001 Galois groups of generalized iterates of generic vectorial polynomials. Zbl 1069.12001 Abhyankar, Shreeram S.; Sundaram, Ganapathy S. 2001 Local analytic geometry. Zbl 0974.32003 Abhyankar, Shreeram Shankar 2001 Bivariate factorizations connecting Dickson polynomials and Galois theory. Zbl 0989.12001 Abhyankar, Shreeram S.; Cohen, Stephen D.; Zieve, Michael E. 2000 Galois embeddings for linear groups. Zbl 0999.12005 Abhyankar, Shreeram S. 2000 Factoring the Jacobian. Zbl 1056.14506 Abhyankar, Shreeram S.; Assi, Abdallah 2000 Galois theory of semilinear transformations. Zbl 0986.12002 Abhyankar, Shreeram S. 1999 Twice more nice equations for nice groups. Zbl 0982.12002 Abhyankar, Shreeram S.; Loomis, Paul A. 1999 Semilinear transformations. Zbl 0934.12002 Abhyankar, Shreeram S. 1999 Jacobian of meromorphic curves. Zbl 0952.14021 Abhyankar, Shreeram S.; Assi, Abdallah 1999 Resolution of singularities of embedded algebraic surfaces. 2nd, enl. ed. Zbl 0914.14006 Abhyankar, S. S. 1998 Once more nice equations for nice groups. Zbl 0889.12005 Abhyankar, Shreeram S.; Loomis, Paul A. 1998 Polynomial expansion. Zbl 0942.14008 Abhyankar, Shreeram S. 1998 Projective polynomials. Zbl 0912.12004 Abhyankar, Shreeram S. 1997 Local fundamental groups of algebraic varieties. Zbl 0891.12003 Abhyankar, Shreeram S. 1997 Galois theory of Moore-Carlitz-Drinfeld modules. 
Zbl 0884.12001 Abhyankar, Shreeram S.; Sundaram, Ganapathy S. 1997 More nice equations for nice groups. Zbl 0866.12005 Abhyankar, Shreeram S. 1996 Again nice equations for nice groups. Zbl 0866.12004 Abhyankar, Shreeram S. 1996 Further nice equations for nice groups. Zbl 0860.12001 Abhyankar, Shreeram S. 1996 Factorizations over finite fields. Zbl 0912.11052 Abhyankar, Shreeram S. 1996 Uniqueness of plane embeddings of special curves. Zbl 0880.14012 Abhyankar, Shreeram S.; Sathaye, Avinash 1996 Fundamental group of the affine line in positive characteristic. Zbl 0914.14014 Abhyankar, Shreeram S. 1995 Mathieu group coverings and linear group coverings. Zbl 0930.14019 Abhyankar, Shreeram S. 1995 Hilbert’s thirteenth problem. Zbl 0886.12002 Abhyankar, Shreeram S. 1995 Small Mathieu group coverings in characteristic two. Zbl 0860.14026 Abhyankar, Shreeram S.; Yie, Ikkwon 1995 Recent developments in the inverse Galois problem. A joint summer research conference, July 17-23, 1993, University of Washington, Seattle, WA, USA. Zbl 0823.00012 Fried, Michael D. (ed.); Abhyankar, Shreeram S. (ed.); Feit, Walter (ed.); Ihara, Yasutaka (ed.); Vöklein, Helmut (ed.) 1995 Nice equations for nice groups. Zbl 0828.14014 Abhyankar, Shreeram S. 1994 Alternating group coverings of the affine line for characteristic two. Zbl 0833.14017 Abhyankar, Shreeram S.; Ou, Jun; Sathaye, Avinash 1994 Small degree coverings of the affine line in characteristic two. Zbl 0833.14018 Abhyankar, Shreeram S.; Yie, Ikkwon 1994 Some remarks on the Jacobian question. (Notes by Marius van der Put and William Heinzer, updated by Avinash Sathaye). Zbl 0812.13013 Abhyankar, Shreeram S. 1994 Some more Mathieu group coverings in characteristic two. Zbl 0837.12001 Abhyankar, Shreeram S.; Yie, Ikkwon 1994 Square-root parametrization of plane curves. Appendix by J.-P. Serre. Zbl 0830.12002 Abhyankar, Shreeram S. 1994 Ramification in infinite integral extensions. 
Zbl 0832.13008 Abhyankar, Shreeram S.; Heinzer, William J. 1994 Polynomial maps and Zariski’s main theorem. Zbl 0819.13007 Abhyankar, Shreeram S. 1994 Alternating group coverings of the affine line for characteristic greater than two. Zbl 0801.12003 Abhyankar, Shreeram S. 1993 Wreath products and enlargements of groups. Zbl 0787.20019 Abhyankar, Shreeram S. 1993 Mathieu group coverings in characteristic two. Zbl 0809.14020 Abhyankar, Shreeram S. 1993 Generalized coinsertion and standard multitableaux. Zbl 0768.05096 Abhyankar, Shreeram S.; Joshi, Sanjeevani B. 1993 Galois theory on the line in nonzero characteristic. Zbl 0760.12002 Abhyankar, Shreeram S. 1992 Mathieu-group coverings of the affine line. Zbl 0788.14022 Abhyankar, Shreeram S.; Seiler, Wolfgang K.; Popp, Herbert 1992 Linear disjointness of polynomials. Zbl 0760.12003 Abhyankar, Shreeram S. 1992 Young tableaux and linear independence of standard monomials in multiminors of a multimatrix. Zbl 0781.15003 Abhyankar, Shreeram S.; Ghorpade, Sudhir R. 1991 Group enlargements. Zbl 0793.20019 Abhyankar, Shreeram S. 1991 On the compositum of two power series rings. Zbl 0742.13010 Abhyankar, Shreeram S.; Heinzer, William; Wiegand, Sylvia 1991 Generalized rodeletive correspondence between multitableaux and multimonomials. Zbl 0754.05073 Abhyankar, Shreeram S.; Joshi, Sanjeevani B. 1991 Generalized roinsertive correspondence between multitableaux and multimonomials. Zbl 0754.05072 Abhyankar, Shreeram S.; Joshi, Sanjeevani B. 1991 Intersection of algebraic space curves. Zbl 0746.14013 Abhyankar, Shreeram S.; Chandrasekar, Srinivasan; Chandru, Vijaya 1991 Derivativewise unramified infinite integral extensions. Zbl 0815.13004 Abhyankar, Shreeram S.; Heinzer, William J. 1991 Algebraic geometry for scientists and engineers. Zbl 0709.14001 Abhyankar, Shreeram S. 1990 Bijection between indexed monomials and standard bitableaux. Zbl 0767.05094 Abhyankar, Shreeram S.; Kulkarni, Devadatta M. 
1990 Efficient faces of polytopes: Interior point algorithms, parametrization of algebraic varieties, and multiple objective optimization. Zbl 0725.90056 Abhyankar, S. S.; Morin, T. L.; Trafalis, T. 1990 Coinsertion and standard bitableaux. Zbl 0747.05100 Abhyankar, Shreeram S.; Kulkarni, Devadatta M. 1990 Improper intersection of algebraic curves. Zbl 0726.68072 Abhyankar, Shreeram S.; Chandrasekar, Srinivasan; Chandru, Vijaya 1990 Automatic parameterization of rational curves and surfaces. IV: Algebraic space curves. Zbl 0746.68104 Abhyankar, Shreeram S.; Bajaj, Chanderjit L. 1989 Irreducibility criterion for germs of analytic functions of two complex variables. Zbl 0683.14001 Abhyankar, Shreeram S. 1989 On Hilbertian ideals. Zbl 0719.13005 1989 Generalized codeletion and standard multitableaux. Zbl 0703.14034 Abhyankar, Shreeram S.; Joshi, Sanjeevani B. 1989 On the Jacobian conjecture: A new approach via Gröbner bases. Zbl 0693.13008 Abhyankar, Shreeram S.; Li, Wei 1989 Automatic parameterization of rational curves and surfaces. III: Algebraic plane curves. Zbl 0655.65019 Abhyankar, Shreeram S.; Bajaj, Chanderjit L. 1988 Enumerative combinatorics of Young tableaux. Zbl 0643.05001 Abhyankar, Shreeram S. 1988 Good points of a hypersurface. Zbl 0657.14008 Abhyankar, Shreeram S. 1988 What is the difference between a parabola and a hyperbola ? Zbl 0698.14061 Abhyankar, Shreeram S. 1988 Determinantal loci and enumerative combinatorics of Young tableaux. Zbl 0688.14044 Abhyankar, Shreeram S. 1988 Automatic parametrization of rational curves and surfaces. II: Cubics and cubicoids. Zbl 0655.65018 Abhyankar, Shreeram S.; Bajaj, Chanderjit 1987 Automatic parameterization of rational curves and surfaces. I: Conics and conicoids. Zbl 0655.65017 Abhyankar, Shreeram S.; Bajaj, Chanderjit 1987 Combinatoire des tableaux de Young, variétés determinantielles et calcul de fonctions de Hilbert. 
(Combinatoric of Young tableaux, determinantal varieties and calculus of Hilbert functions). (Rédigé par A. Galligo). Zbl 0614.14017 Abhyankar, S. 1984 Desingularization of plane curves. Zbl 0521.14005 Abhyankar, Shreeram S. 1983 Weighted expansions for canonical desingularization. With foreword by U. Orbanz. Zbl 0479.14009 Abhyankar, Shreeram S. 1982 Every difference polynomial has a connected zero-set. Zbl 0532.12021 Abhyankar, Shreeram S.; Rubel, Lee A. 1980 Quasirational singularities. Zbl 0425.14009 Abhyankar, Shreeram S. 1979 Embeddings of certain curves in the affine plane. Zbl 0383.14007 Abhyankar, Shreeram S.; Singh, Balwant 1978 ...and 50 more Documents all top 5 Cited by 988 Authors 74 Abhyankar, Shreeram Shankar 22 Heinzer, William J. 21 Cutkosky, Steven Dale 20 Yu, Jie-Tai 13 Olberding, Bruce M. 13 Pérez-Díaz, Sonia 12 Villamayor Uriburu, Orlando Eugenio 11 Granja, Angel 10 Galindo Pastor, Carlos 10 Gupta, Neena 10 Gurjar, Rajendra Vasant 10 Makar-Limanov, Leonid G. 9 Bhatwadekar, Srikant M. 9 Cossart, Vincent 9 El Kahoui, M’hammed 9 Kaliman, Shulim I. 9 Verma, Jugal Kishore 8 Becker, Joseph A. 8 Daigle, Daniel 8 Moh, Tzuong-Tsieng 8 Piltant, Olivier 8 Russell, Peter K. 8 Sally, Judith D. 8 Sendra, Juan Rafael 8 Wang, Stuart Sui-Sheng 8 Wang, Wenping 7 Dubouloz, Adrien 7 García Barroso, Evelia Rosa 7 Monserrat, Francisco 7 Shpilrain, Vladimir 7 Valla, Giuseppe 6 Connell, Edwin H. 6 Dutta, Amartya Kumar 6 González Pérez, Pedro Daniel 6 Herzog, Jürgen 6 Kuhlmann, Franz-Viktor 6 Kulkarni, Devadatta M. 6 Miyanishi, Masayoshi 6 Mulay, Shashikant B. 6 Płoski, Arkadiusz 6 Sathaye, Avinash 6 Toeniskoetter, Matthew 5 Assi, Abdallah 5 Chèze, Guillaume 5 Fried, Michael David 5 Ghorpade, Sudhir R. 5 Gutierrez, Jaime 5 Jia, Xiaohong 5 Kashcheyeva, Olga 5 Kim, Mee-Kyoung 5 Kuhlmann, Norbert 5 Martínez, Ma. Carmen 5 McKay, James H. 
5 Ngô Viêt Trung 5 Popp, Herbert 5 Reguera, Ana-José 5 Rossi, Maria Evelina 5 van den Essen, Arno 4 Angermüller, Gerhard 4 Cassou-Noguès, Pierrette 4 Conca, Aldo 4 Crachiola, Anthony J. 4 Elias, Juan 4 Forstnerič, Franc 4 Furter, Jean-Philippe 4 Gao, Xiaoshan 4 Gwoździewicz, Janusz 4 Hauser, Herwig 4 Jelonek, Zbigniew 4 Johnston, Bernard L. 4 Kanel’-Belov, Alekseĭ Yakovlevich 4 Kawakita, Masayuki 4 Kustin, Andrew R. 4 Lang, Jeffrey John 4 Moyano Fernández, Julio José 4 Parusiński, Adam 4 Polini, Claudia 4 Popescu-Pampu, Patrick 4 Rodríguez, Cristina 4 Rond, Guillaume 4 Rotthaus, Christel 4 Schicho, Josef 4 Shen, Liyong 4 Spivakovsky, Mark 4 Ulrich, Bernd 4 Wiegand, Sylvia M. 4 Winkler, Franz 4 Wright, David L. 4 Yie, Ikkwon 4 Yoshihara, Hisao 4 Zhao, Wenhua 4 Zieve, Michael E. 3 Aroca, Fuensanta 3 Artal Bartolo, Enrique 3 Bajaj, Chanderjit L. 3 Benito, Angélica 3 Blasco, Ángel 3 Busé, Laurent 3 Campillo, Antonio 3 Chakraborty, Sagnik ...and 888 more Authors all top 5 Cited in 164 Serials 147 Journal of Algebra 97 Journal of Pure and Applied Algebra 80 Proceedings of the American Mathematical Society 49 Communications in Algebra 48 Mathematische Annalen 44 Transactions of the American Mathematical Society 34 Journal of Symbolic Computation 31 Computer Aided Geometric Design 28 Inventiones Mathematicae 26 Compositio Mathematica 23 Advances in Mathematics 22 Annales de l’Institut Fourier 20 Mathematische Zeitschrift 16 Duke Mathematical Journal 15 Israel Journal of Mathematics 14 Manuscripta Mathematica 14 Finite Fields and their Applications 12 Journal of Algebra and its Applications 11 Discrete Mathematics 11 Journal of Number Theory 11 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 11 Journal of Commutative Algebra 10 Archiv der Mathematik 9 Mathematical Proceedings of the Cambridge Philosophical Society 9 Bulletin of the American Mathematical Society. 
New Series 8 Journal of Mathematical Analysis and Applications 8 Rocky Mountain Journal of Mathematics 8 Revista Matemática Iberoamericana 8 Revista Matemática Complutense 7 Publications of the Research Institute for Mathematical Sciences, Kyoto University 6 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 6 Bulletin de la Société Mathématique de France 6 Publications Mathématiques 6 Journal of Computational and Applied Mathematics 6 Journal of the Mathematical Society of Japan 6 Nagoya Mathematical Journal 6 Rendiconti del Seminario Matematico della Università di Padova 6 Journal of the American Mathematical Society 6 Applicable Algebra in Engineering, Communication and Computing 6 Journal of Algebraic Geometry 5 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 5 Transformation Groups 5 Annali della Scuola Normale Superiore di Pisa. Scienze Fisiche e Matematiche. III. Ser 5 Kyoto Journal of Mathematics 5 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 4 Indian Journal of Pure & Applied Mathematics 4 Mathematics of Computation 4 Annali di Matematica Pura ed Applicata. Serie Quarta 4 Tohoku Mathematical Journal. Second Series 4 Linear Algebra and its Applications 3 Arkiv för Matematik 3 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 3 Acta Mathematica 3 Geometriae Dedicata 3 Journal of Combinatorial Theory. Series A 3 Journal of Differential Equations 3 Journal of Soviet Mathematics 3 Kodai Mathematical Journal 3 Monatshefte für Mathematik 3 Theoretical Computer Science 3 International Journal of Mathematics 3 Journal of Systems Science and Complexity 3 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Computer Methods in Applied Mechanics and Engineering 2 Mathematical Notes 2 Beiträge zur Algebra und Geometrie 2 Glasgow Mathematical Journal 2 Information Sciences 2 Journal of the London Mathematical Society. 
Second Series 2 Journal für die Reine und Angewandte Mathematik 2 Journal of Statistical Planning and Inference 2 Osaka Journal of Mathematics 2 Proceedings of the Edinburgh Mathematical Society. Series II 2 Semigroup Forum 2 Physica D 2 Journal of Complexity 2 Algorithmica 2 Applied Mathematics Letters 2 International Journal of Computational Geometry & Applications 2 Designs, Codes and Cryptography 2 Indagationes Mathematicae. New Series 2 Journal of Algebraic Combinatorics 2 The Electronic Journal of Combinatorics 2 Journal of the European Mathematical Society (JEMS) 2 Journal of the Institute of Mathematics of Jussieu 2 Bulletin of the American Mathematical Society 2 Functional Analysis and Other Mathematics 2 Algebra & Number Theory 1 Bulletin of the Australian Mathematical Society 1 Discrete Applied Mathematics 1 International Journal of Theoretical Physics 1 Journal d’Analyse Mathématique 1 Moscow University Mathematics Bulletin 1 Nuclear Physics. B 1 Theoretical and Mathematical Physics 1 Chaos, Solitons and Fractals 1 The Mathematical Intelligencer 1 Acta Arithmetica 1 Acta Mathematica Vietnamica 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. 
Serie IV ...and 64 more Serials all top 5 Cited in 51 Fields 640 Algebraic geometry (14-XX) 477 Commutative algebra (13-XX) 134 Several complex variables and analytic spaces (32-XX) 115 Field theory and polynomials (12-XX) 73 Number theory (11-XX) 54 Computer science (68-XX) 46 Numerical analysis (65-XX) 45 Group theory and generalizations (20-XX) 35 Combinatorics (05-XX) 31 Associative rings and algebras (16-XX) 15 Mathematical logic and foundations (03-XX) 13 Real functions (26-XX) 13 Ordinary differential equations (34-XX) 12 Global analysis, analysis on manifolds (58-XX) 11 Linear and multilinear algebra; matrix theory (15-XX) 11 Dynamical systems and ergodic theory (37-XX) 10 Functions of a complex variable (30-XX) 9 Manifolds and cell complexes (57-XX) 8 Nonassociative rings and algebras (17-XX) 8 Geometry (51-XX) 8 Information and communication theory, circuits (94-XX) 7 Partial differential equations (35-XX) 7 Differential geometry (53-XX) 6 Order, lattices, ordered algebraic structures (06-XX) 6 Functional analysis (46-XX) 5 Category theory; homological algebra (18-XX) 5 Operations research, mathematical programming (90-XX) 4 $$K$$-theory (19-XX) 3 Difference and functional equations (39-XX) 3 Convex and discrete geometry (52-XX) 3 Quantum theory (81-XX) 2 History and biography (01-XX) 2 Special functions (33-XX) 2 Approximations and expansions (41-XX) 2 Abstract harmonic analysis (43-XX) 2 Integral transforms, operational calculus (44-XX) 2 Operator theory (47-XX) 2 Mechanics of deformable solids (74-XX) 1 General and overarching topics; collections (00-XX) 1 Potential theory (31-XX) 1 Sequences, series, summability (40-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Algebraic topology (55-XX) 1 Probability theory and stochastic processes (60-XX) 1 Statistics (62-XX) 1 Mechanics of particles and systems (70-XX) 1 Fluid mechanics (76-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Geophysics (86-XX) 1 Game theory, economics, finance, and 
other social and behavioral sciences (91-XX) 1 Biology and other natural sciences (92-XX)
https://www.aimsciences.org/article/doi/10.3934/proc.2015.0775
# An Application of an Avery Type Fixed Point Theorem to a Second Order Antiperiodic Boundary Value Problem

In this article, we show the existence of an antisymmetric solution to the second order boundary value problem $x''+f(x(t))=0,\; t\in(0,n)$, satisfying the antiperiodic boundary conditions $x(0)+x(n)=0,\; x'(0)+x'(n)=0$, using a fixed point theorem of Avery et al., itself an extension of the traditional Leggett-Williams fixed point theorem. The antisymmetric solution satisfies $x(t)=-x(n-t)$ for $t\in[0,n]$ and is nonnegative, nonincreasing, and concave for $t\in[0,n/2]$. To conclude, we present an example.

Mathematics Subject Classification: Primary: 34B15.

Open Access under a Creative Commons license
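As a sanity check on the form of the problem (not the paper's concluding example, which is not reproduced here), a simple linear instance can be verified by hand: for $f(x)=(\pi/n)^2 x$, the function $x(t)=\cos(\pi t/n)$ satisfies $x''+f(x)=0$, both antiperiodic conditions, and the antisymmetry $x(t)=-x(n-t)$, and it is nonnegative, nonincreasing, and concave on $[0,n/2]$. A short numerical confirmation, with the interval length chosen arbitrarily:

```python
import numpy as np

n = 3.0  # length of the interval (0, n); arbitrary illustrative choice
x = lambda t: np.cos(np.pi * t / n)                 # candidate solution
dx = lambda t: -np.pi / n * np.sin(np.pi * t / n)   # its derivative

# antiperiodic boundary conditions: x(0) + x(n) = 0 and x'(0) + x'(n) = 0
assert abs(x(0) + x(n)) < 1e-12
assert abs(dx(0) + dx(n)) < 1e-12

# antisymmetry x(t) = -x(n - t) on a grid over [0, n]
t = np.linspace(0.0, n, 101)
assert np.allclose(x(t), -x(n - t))

# x'' + (pi/n)^2 x = 0, checked with a central difference quotient
h = 1e-4
tm = t[1:-1]
xpp = (x(tm + h) - 2 * x(tm) + x(tm - h)) / h**2
assert np.allclose(xpp + (np.pi / n) ** 2 * x(tm), 0.0, atol=1e-6)
```

On $[0, n/2]$ the same candidate is visibly nonnegative, nonincreasing, and concave, matching the qualitative description in the abstract.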
https://ftp.aimsciences.org/article/doi/10.3934/proc.2007.2007.495
# Lie group study of finite difference schemes

Differential equations arising in fluid mechanics are usually derived from the intrinsic properties of mechanical systems, in the form of conservation laws, and bear symmetries which are not generally preserved by a finite difference approximation, leading to inaccurate numerical results. This paper deals with the analysis of the symmetry group of finite difference equations, based on the differential approximation. We develop a new scheme whose associated differential approximation is invariant under the symmetries of the original differential equations. The numerical performance of this scheme is compared with that of standard schemes and of a higher-order scheme on the Burgers equation.

Mathematics Subject Classification: 22E70.

Open Access under a Creative Commons license
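The paper's invariant scheme is not reproduced here, but as a point of reference, one of the "standard" schemes such a comparison would involve can be sketched: a Lax-Friedrichs discretization of the inviscid Burgers equation $u_t + (u^2/2)_x = 0$ in conservative flux form with periodic boundary conditions (all parameter values below are assumptions for the demonstration, not taken from the paper). Written in flux form, the update preserves the discrete mass $\sum_i u_i$ exactly, a discrete counterpart of the conservation-law structure mentioned above.

```python
import numpy as np

def lax_friedrichs_burgers(u0, dx, dt, steps):
    """Advance u_t + (u^2/2)_x = 0 with the Lax-Friedrichs scheme,
    periodic boundary conditions, conservative (flux) form."""
    u = u0.copy()
    for _ in range(steps):
        flux = 0.5 * u**2
        # u_i^{n+1} = (u_{i+1} + u_{i-1})/2 - dt/(2 dx) (F_{i+1} - F_{i-1})
        u = 0.5 * (np.roll(u, -1) + np.roll(u, 1)) \
            - dt / (2.0 * dx) * (np.roll(flux, -1) - np.roll(flux, 1))
    return u

N = 200
xg = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = xg[1] - xg[0]
u0 = np.sin(xg)                      # smooth initial profile
dt = 0.5 * dx / np.abs(u0).max()     # CFL-limited time step
u = lax_friedrichs_burgers(u0, dx, dt, steps=100)

# discrete mass is conserved exactly (to rounding) by the flux form
assert abs(u.sum() - u0.sum()) < 1e-10
```

The point of an invariant scheme, by contrast, is that the symmetries of the continuous equation (e.g. its Galilean invariance) also survive discretization, which a scheme like the one above does not guarantee.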
http://repository.aust.edu.ng/xmlui/handle/1721.1/98878
# Network Maximal Correlation

Unknown author (2015-09-21)

Identifying nonlinear relationships in large datasets is a daunting task, particularly when the form of the nonlinearity is unknown. Here, we introduce Network Maximal Correlation (NMC) as a fundamental measure to capture nonlinear associations in networks without knowledge of the underlying nonlinearity shapes. NMC infers, possibly nonlinear, transformations of variables with zero means and unit variances by maximizing total nonlinear correlation over the underlying network. In the case of two variables, NMC is equivalent to the standard Maximal Correlation. We characterize a solution of the NMC optimization using geometric properties of Hilbert spaces for both discrete and jointly Gaussian variables. For discrete random variables, we show that the NMC optimization is an instance of the Maximum Correlation Problem and provide necessary conditions for its global optimal solution. Moreover, we propose an efficient algorithm based on Alternating Conditional Expectation (ACE) which converges to a local NMC optimum. For this algorithm, we provide guidelines for choosing appropriate starting points to jump out of local maximizers. We also propose a distributed algorithm to compute a $(1-\epsilon)$ approximation of the NMC value for large and dense graphs using graph partitioning. For jointly Gaussian variables, under some conditions, we show that the NMC optimization can be simplified to a Max-Cut problem, and we provide conditions under which an NMC solution can be computed exactly. Under some general conditions, we show that NMC can infer the underlying graphical model for functions of latent jointly Gaussian variables. These functions are unknown, bijective, and can be nonlinear. This result broadens the family of continuous distributions whose graphical models can be characterized efficiently.
We illustrate the robustness of NMC in real world applications by showing its continuity with respect to small perturbations of joint distributions. We also show that sample NMC (NMC computed using empirical distributions) converges exponentially fast to the true NMC value. Finally, we apply NMC to different cancer datasets including breast, kidney and liver cancers, and show that NMC infers gene modules that are significantly associated with survival times of individuals while they are not detected using linear association measures.
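The ACE-based iteration mentioned above can be sketched for the two-variable (standard maximal correlation) case on a discrete joint distribution: alternate the conditional-expectation updates and re-standardize to zero mean and unit variance. The function and toy distribution below are illustrative assumptions, not the paper's distributed algorithm; the binary example is chosen because, for a pair of binary variables, the maximal correlation equals the absolute Pearson correlation (here 0.6), which gives a known value to check against.

```python
import numpy as np

def maximal_correlation(P, iters=100, seed=0):
    """Maximal correlation of two discrete variables via the ACE
    iteration. P is the joint pmf, P[i, j] = Pr(X = i, Y = j)."""
    P = np.asarray(P, dtype=float)
    px, py = P.sum(axis=1), P.sum(axis=0)

    def standardize(h, p):
        hc = h - p @ h                   # zero mean under marginal p
        return hc / np.sqrt(p @ hc**2)   # unit variance under p

    g = np.random.default_rng(seed).standard_normal(P.shape[1])
    for _ in range(iters):
        f = standardize((P @ g) / px, px)    # f(x) = E[g(Y) | X = x]
        g = standardize((P.T @ f) / py, py)  # g(y) = E[f(X) | Y = y]
    return float(f @ P @ g)                  # E[f(X) g(Y)] at the fixed point

# binary toy example: maximal correlation = |Pearson correlation| = 0.6
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
rho = maximal_correlation(P)
assert abs(rho - 0.6) < 1e-9
```

This is the classical two-variable building block; NMC, as described above, couples many such transformations across the edges of a network.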
2023-02-04T16:14:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5572123527526855, "perplexity": 661.6600286519335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00766.warc.gz"}
http://dergipark.gov.tr/gujs/issue/35772/323926
## A New Distribution Family Constructed by Polynomial Rank Transmutation

#### Mehmet YILMAZ

In this study, a new polynomial rank transmutation is proposed with the help of the bivariate Farlie-Gumbel-Morgenstern distribution family. The distribution family obtained by this transmutation is offered as an alternative to the families obtained by quadratic rank transmutation. Various properties of the introduced family are studied. Two real-data examples are considered to illustrate this contribution.

Keywords: Transmuted Distribution, Polynomial Rank Transmutation, Quadratic Rank Transmutation, Transmuted-Weibull

Subjects: Statistics. Author: Mehmet YILMAZ, Ankara University, Turkey.

Citation: YILMAZ, M. (2018). A New Distribution Family Constructed by Polynomial Rank Transmutation. GAZI UNIVERSITY JOURNAL OF SCIENCE, 31(1), 282-294.
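The quadratic rank transmutation that this paper generalizes maps a base CDF $F$ to $G(x)=(1+\lambda)F(x)-\lambda F(x)^2$ for $|\lambda|\le 1$ (Shaw and Buckley). A sketch applying it to a Weibull base distribution; the paper's polynomial transmutation itself is not reproduced here, and the parameter values below are illustrative only:

```python
import math

def weibull_cdf(x, shape=2.0, scale=1.0):
    """Base distribution: two-parameter Weibull CDF."""
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-((x / scale) ** shape))

def qrt_cdf(x, lam, base_cdf=weibull_cdf):
    """Quadratic rank transmutation (Shaw & Buckley):
    G(x) = (1 + lam) * F(x) - lam * F(x)**2, valid for |lam| <= 1."""
    u = base_cdf(x)
    return (1.0 + lam) * u - lam * u * u

# Sanity checks that G is a genuine CDF for lam = 0.5:
xs = [i * 0.05 for i in range(200)]
vals = [qrt_cdf(x, 0.5) for x in xs]
assert all(b >= a for a, b in zip(vals, vals[1:]))  # nondecreasing
print(round(qrt_cdf(10.0, 0.5), 6))  # -> 1.0 (upper tail reaches 1)
```

Monotonicity holds because $G'(x)=f(x)\,[(1+\lambda)-2\lambda F(x)]$ stays nonnegative for $|\lambda|\le 1$.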
2019-04-20T16:34:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5109752416610718, "perplexity": 12869.548970190071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529898.48/warc/CC-MAIN-20190420160858-20190420182858-00183.warc.gz"}
http://www.itl.nist.gov/div898/handbook/ppc/section5/ppc523.htm
## Analysis of Variance

**Analysis of variance using all factors.** We can confirm our interpretation of the box plots by running an analysis of variance in which all four factors are included.

| Source | DF | Sum of Squares | Mean Square | F Statistic | Prob > F |
| --- | --- | --- | --- | --- | --- |
| Machine | 2 | 0.000111 | 0.000055 | 29.3159 | 1.3e-11 |
| Day | 2 | 0.000004 | 0.000002 | 0.9884 | 0.37 |
| Time | 1 | 0.000002 | 0.000002 | 1.2478 | 0.27 |
| Sample | 9 | 0.000009 | 0.000001 | 0.5205 | 0.86 |
| Residual | 165 | 0.000312 | 0.000002 | | |
| Corrected Total | 179 | 0.000437 | 0.000002 | | |

**Interpretation of ANOVA output.** We fit the model

$$Y_{ijklm} = \mu + \alpha_i + \beta_j + \tau_k + \phi_l + \epsilon_{ijklm}$$

which has an overall mean, as opposed to the model

$$Y_{ijklm} = A_i + B_j + C_k + D_l + \epsilon_{ijklm}$$

These models are mathematically equivalent: the effect estimates in the first model are relative to the overall mean, and the effect estimates for the second model can be obtained by simply adding the overall mean to the effect estimates from the first model. Only the machine factor is statistically significant. This confirms what the box plots in the previous section indicated graphically.

**Analysis of variance using only machine.** The previous analysis of variance indicated that only the machine factor was statistically significant. The following table displays the ANOVA results using only the machine factor.

| Source | DF | Sum of Squares | Mean Square | F Statistic | Prob > F |
| --- | --- | --- | --- | --- | --- |
| Machine | 2 | 0.000111 | 0.000055 | 30.0094 | 6.0e-12 |
| Residual | 177 | 0.000327 | 0.000002 | | |
| Corrected Total | 179 | 0.000437 | 0.000002 | | |

**Interpretation of ANOVA output.** At this stage, we are interested in the level means for the machine variable. These can be summarized in the following table.
| Level | Number | Mean | Standard Error | Lower 95% CI | Upper 95% CI |
| --- | --- | --- | --- | --- | --- |
| 1 | 60 | 0.124887 | 0.00018 | 0.12454 | 0.12523 |
| 2 | 60 | 0.122968 | 0.00018 | 0.12262 | 0.12331 |
| 3 | 60 | 0.124022 | 0.00018 | 0.12368 | 0.12437 |

**Model validation.** As a final step, we validate the model by generating a 4-plot of the residuals. The 4-plot does not indicate any significant problems with the ANOVA model.
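The machine-only F statistic can be recomputed directly from the sums of squares reported above. A quick check (the inputs are the rounded table entries, so the result matches the reported 30.0094 only to that rounding):

```python
# Recompute the machine-only F statistic from the ANOVA table entries.
ss_machine, df_machine = 0.000111, 2
ss_residual, df_residual = 0.000327, 177

ms_machine = ss_machine / df_machine      # mean square = SS / DF
ms_residual = ss_residual / df_residual
f_stat = ms_machine / ms_residual         # F = MS_machine / MS_residual

print(round(f_stat, 1))  # -> 30.0 (table reports 30.0094 from unrounded sums)
```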
2017-10-23T20:46:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7319201231002808, "perplexity": 1370.836457140376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826642.70/warc/CC-MAIN-20171023202120-20171023222120-00825.warc.gz"}
https://lammps.sandia.gov/doc/fix_neb_spin.html
# fix neb/spin command

## Syntax

fix ID group-ID neb/spin Kspring

• ID, group-ID are documented in fix command
• neb/spin = style name of this fix command
• Kspring = spring constant for the parallel nudging force (force/distance units or force units, see parallel keyword)

## Examples

fix 1 active neb/spin 1.0

## Description

Add nudging forces to spins in the group for a multi-replica simulation run via the neb/spin command, to perform a geodesic nudged elastic band (GNEB) calculation for finding the transition state. High-level explanations of GNEB are given with the neb/spin command and on the Howto replica doc page. The fix neb/spin command must be used with the "neb/spin" command and defines how inter-replica nudging forces are computed. A GNEB calculation is divided into two stages. In the first stage, n replicas are relaxed toward a minimum-energy path (MEP) until convergence. In the second stage, the climbing-image scheme is enabled, so that the replica having the highest energy relaxes toward the saddle point (i.e. the point of highest energy along the MEP), and a second relaxation is performed. The nudging forces are calculated as explained in (Bessarab); see this reference for the full expressions. Restart, fix_modify, output, run start/stop, minimize info: No information about this fix is written to binary restart files. None of the fix_modify options are relevant to this fix. No global or per-atom quantities are stored by this fix for access by various output commands. No parameter of this fix can be used with the start/stop keywords of the run command. The forces due to this fix are imposed during an energy minimization, as invoked by the minimize command via the neb/spin command.

## Restrictions

This command can only be used if LAMMPS was built with the SPIN package. See the Build package doc page for more info.

## Default

none

(Bessarab) Bessarab, Uzdin, Jonsson, Comp Phys Comm, 196, 335-347 (2015).
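As a usage sketch, a GNEB run combines this fix with a spin-style minimizer and the neb/spin command across several partitions. The fragment below is hypothetical: the fix line follows the syntax on this page, but the min_style and neb/spin lines are assumptions patterned on the related neb command, and all tolerances and file names are placeholders to be checked against the LAMMPS manual.

```
# Hypothetical GNEB input fragment (verify against the LAMMPS manual).
# Assumes one replica per partition, launched e.g. as:
#   mpirun -np 4 lmp -partition 4x1 -in in.gneb

fix        1 all neb/spin 1.0    # parallel nudging spring constant (this page)
min_style  spin                  # assumed: spin minimizer from the SPIN package
neb/spin   0.1 0.01 10000 10000 10 final final.spins   # assumed argument order
```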
2019-06-27T04:19:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7664972543716431, "perplexity": 5972.512210616427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000613.45/warc/CC-MAIN-20190627035307-20190627061307-00021.warc.gz"}
https://pdglive.lbl.gov/DataBlock.action?node=M117M&home=MXXX005
${{\boldsymbol f}_{{2}}{(1640)}}$ MASS

| VALUE (MeV) | YEAR | TECN | COMMENT |
| --- | --- | --- | --- |
| $\bf{1639 \pm 6}$ OUR AVERAGE (error includes scale factor of 1.2) | | | |
| $1620 \pm 16$ | 1995 | MRK3 | ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ |
| $1647 \pm 7$ | 1992 | OBLX | ${{\overline{\mathit n}}}$ ${{\mathit p}}$ $\rightarrow$ 3${{\mathit \pi}^{+}}$ 2${{\mathit \pi}^{-}}$ |
| $1635 \pm 7$ | 1990 | GAM2 | 38 ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \omega}}{{\mathit \omega}}{{\mathit n}}$ |

The following data are not used for averages, fits, limits, etc.:

| VALUE (MeV) | YEAR | TECN | COMMENT |
| --- | --- | --- | --- |
| $1640 \pm 5$ | 2006 | CBAR | 0.9 ${{\overline{\mathit p}}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}^{+}}{{\mathit K}^{-}}{{\mathit \pi}^{0}}$ |
| $1659 \pm 6$ | 2006 | SPEC | 40 ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ ${{\mathit n}}$ |
| $1643 \pm 7$ [1] | 1989B | GAM2 | 38 ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \omega}}{{\mathit \omega}}{{\mathit n}}$ |

[1] Superseded by ALDE 1990.

References:

- AMSLER 2006, PL B639 165, "Study of ${{\mathit K}}{{\overline{\mathit K}}}$ resonances in ${{\overline{\mathit p}}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}^{+}}{{\mathit K}^{-}}{{\mathit \pi}^{0}}$ at 900 and 1640 MeV/$\mathit c$"
- PAN 69 493, "Analysis of the ${{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ System from the Reaction ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ ${{\mathit n}}$ at 40 GeV"
- PL B353 378, "Further Amplitude Analysis of ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \gamma}}$ (${{\mathit \pi}^{+}}{{\mathit \pi}^{-}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$)"
- PL B241 600, "Further Study of Mesons which Decay into ${{\mathit \omega}}{{\mathit \omega}}$"
- PL B216 451, "Study of ${{\mathit \omega}}{{\mathit \omega}}$ Systems Produced in 38 ${\mathrm {GeV/}}\mathit c$ ${{\mathit \pi}^{-}}{{\mathit p}}$ Collisions"
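The "OUR AVERAGE" entry can be reproduced from the three measurements used in the average with the standard PDG prescription: an error-weighted mean whose uncertainty is inflated by the scale factor $S=\sqrt{\chi^2/(N-1)}$ when $S>1$. A minimal sketch (the function name is ours; the procedure is the PDG's documented one):

```python
import math

def pdg_average(measurements):
    """Error-weighted average with PDG scale factor.

    measurements: list of (value, error) pairs.
    Returns (mean, scaled_error, scale_factor)."""
    w = [1.0 / e**2 for _, e in measurements]
    mean = sum(wi * x for wi, (x, _) in zip(w, measurements)) / sum(w)
    err = 1.0 / math.sqrt(sum(w))
    chi2 = sum(wi * (x - mean)**2 for wi, (x, _) in zip(w, measurements))
    scale = math.sqrt(chi2 / (len(measurements) - 1))
    return mean, err * max(scale, 1.0), scale

# The three measurements entering the f2(1640) mass average:
mean, err, s = pdg_average([(1620, 16), (1647, 7), (1635, 7)])
print(round(mean), round(err), round(s, 1))  # -> 1639 6 1.2
```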
2021-03-06T10:27:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8616056442260742, "perplexity": 3778.105445950361}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374686.69/warc/CC-MAIN-20210306100836-20210306130836-00100.warc.gz"}
http://dlmf.nist.gov/18.32
# §18.32 OP’s with Respect to Freud Weights

A Freud weight is a weight function of the form

18.32.1 $w(x)=\exp\left(-Q(x)\right),\qquad -\infty<x<\infty,$

where $Q(x)$ is real, even, nonnegative, and continuously differentiable. Of special interest are the cases $Q(x)=x^{2m}$, $m=1,2,\dots$. No explicit expressions for the corresponding OP’s are available. However, for asymptotic approximations in terms of elementary functions for the OP’s, and also for their largest zeros, see Levin and Lubinsky (2001) and Nevai (1986). For a uniform asymptotic expansion in terms of Airy functions (§9.2) for the OP’s in the case $Q(x)=x^{4}$ see Bo and Wong (1999). For asymptotic approximations to OP’s that correspond to Freud weights with more general functions $Q(x)$ see Deift et al. (1999a, b), Bleher and Its (1999), and Kriecherbauer and McLaughlin (1999).
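Although no closed form exists for these OP's, their three-term-recurrence coefficients can be generated from moments. For $Q(x)=x^{4}$ the moments are $\mu_{2n}=\int_{-\infty}^{\infty}x^{2n}e^{-x^{4}}\,dx=\tfrac{1}{2}\Gamma\!\left(\tfrac{2n+1}{4}\right)$, so the first monic recurrence coefficient is $\beta_{1}=\mu_{2}/\mu_{0}=\Gamma(3/4)/\Gamma(1/4)$. This is a standard computation, not stated in the section above; a numerical check:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def freud_weight(x):
    """Freud weight with Q(x) = x**4."""
    return math.exp(-x**4)

# The integrand is negligible beyond |x| ~ 6, so [-6, 6] suffices.
mu0 = simpson(freud_weight, -6, 6)
mu2 = simpson(lambda x: x * x * freud_weight(x), -6, 6)

# For an even weight the monic OPs start with p0 = 1, p1 = x, and the
# first recurrence coefficient is beta_1 = mu2 / mu0.
beta1 = mu2 / mu0
print(round(beta1, 6), round(math.gamma(0.75) / math.gamma(0.25), 6))
```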
2016-09-28T00:04:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 10, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9451207518577576, "perplexity": 7741.534947921337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661289.41/warc/CC-MAIN-20160924173741-00247-ip-10-143-35-109.ec2.internal.warc.gz"}
https://pdglive.lbl.gov/DataBlock.action?node=B148DD&home=BXXX040
# ${\boldsymbol m}_{{{\boldsymbol \Xi}_{{c}}{(2815)}^{+}}} - {\boldsymbol m}_{{{\boldsymbol \Xi}_{{c}}{(2815)}^{0}}}$

| VALUE (MeV) | YEAR | TECN | COMMENT |
| --- | --- | --- | --- |
| $\bf{-3.51 \pm 0.26}$ OUR FIT | | | |

The following data are not used for averages, fits, limits, etc.:

| VALUE (MeV) | YEAR | TECN | COMMENT |
| --- | --- | --- | --- |
| $-3.47 \pm 0.12 \pm 0.48$ | 2016 | BELL | 941 and 1258 events |
| $-3.4 \pm 1.9 \pm 0.9$ | 2008 | BELL | 73 and 48 events |

References:

- YELTON 2016, PR D94 052011, "Study of Excited ${{\mathit \Xi}_{{c}}}$ States Decaying into ${{\mathit \Xi}_{{c}}^{0}}$ and ${{\mathit \Xi}_{{c}}^{+}}$ Baryons"
- LESIAK 2008, PL B665 9, "Measurement of Masses of the ${{\mathit \Xi}_{{c}}{(2645)}}$ and ${{\mathit \Xi}_{{c}}{(2815)}}$ Baryons and Observation of ${{\mathit \Xi}_{{c}}{(2980)}}$ $\rightarrow$ ${{\mathit \Xi}_{{c}}{(2645)}}{{\mathit \pi}}$"
2020-10-22T12:06:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9124019145965576, "perplexity": 8991.258410329841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879537.28/warc/CC-MAIN-20201022111909-20201022141909-00294.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Atakens.floris
## Takens, Floris

Author ID: takens.floris. Published as: Takens, Floris; Takens, F. External Links: MGP · Wikidata · GND · IdRef. Documents Indexed: 111 Publications since 1967, including 6 Books; 7 Contributions as Editor. Biographic References: 4 Publications. Co-Authors: 45 Co-Authors with 50 Joint Publications; 844 Co-Co-Authors.

### Co-Authors

65 single-authored 16 Broer, Henk W. 8 Palis, Jacob jun. 5 Verbitski, Evgeny 3 Newhouse, Sheldon E. 3 Ruelle, David Pierre 2 Braaksma, Boele L. J. 2 Dumortier, Freddy 2 Hasselblatt, Boris 2 Pacifico, Maria José 2 Wagener, Florian 1 Bakker, R. R. 1 Bamón, Rodrigo 1 Banchoff, Thomas F. 1 Booij, Leo H. D. J. 1 Coenen, Anton M. L. 1 Cushman, Richard H. 1 de Korte, R. J. 1 de la Harpe, Pierre 1 DeGoede, Jacob 1 Fassò, Francesco 1 Hoveijn, Igor 1 Huitema, G. B. 1 Klingenberg, Wilhelm P. A. 1 Looijenga, Eduard J. N. 1 Lukina, Olga 1 Maes, Christian 1 Malta, Iaci 1 Posthumus, Rense A. 1 Redig, Frank 1 Schouten, J. C. 1 Siersma, Dirk 1 Stoyanov, Luchezar N. 1 Tavares Camacho, Maria Isabel 1 Van de Craats, Jan 1 van den Bleek, C. M. 1 van den Broek, Philip L. C. 1 van Egmond, Jan 1 van Gils, Stephan A. 1 Van Moffaert, Annelies 1 van Rijn, Clementina M. 1 van Strien, Sebastian J. 1 Vanderbauwhede, Andre L. 1 Verbitskij, E. A. 1 Verbitskiy, Evgeny A. 1 White, James H.

### Serials

5 Ergodic Theory and Dynamical Systems 4 Communications in Mathematical Physics 4 Nonlinearity 4 Inventiones Mathematicae 3 Boletim da Sociedade Brasileira de Matemática 3 Mathematische Annalen 3 Topology 3 Boletim da Sociedade Brasileira de Matemática. Nova Série 3 Regular and Chaotic Dynamics 3 Nieuw Archief voor Wiskunde. Vijfde Serie 2 Compositio Mathematica 2 Publications Mathématiques 2 Physica D 2 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 2 Annals of Mathematics. Second Series 2 Lecture Notes in Mathematics 2 Nederlandse Akademie van Wetenschappen. Proceedings. Series A.
Indagationes Mathematicae 1 Archive for Rational Mechanics and Analysis 1 Israel Journal of Mathematics 1 Jahresbericht der Deutschen Mathematiker-Vereinigung (DMV) 1 Mathematical Notes 1 Periodica Mathematica Hungarica 1 Revue Roumaine de Mathématiques Pures et Appliquées 1 American Journal of Mathematics 1 Annales de l’Institut Fourier 1 Fundamenta Mathematicae 1 Illinois Journal of Mathematics 1 Indiana University Mathematics Journal 1 Journal of Differential Equations 1 Journal of Differential Geometry 1 Manuscripta Mathematica 1 Mathematische Zeitschrift 1 Memoirs of the American Mathematical Society 1 Topology and its Applications 1 Mitteilungen der Mathematischen Gesellschaft in Hamburg 1 Nieuw Archief voor Wiskunde. Derde Serie 1 Bulletin de la Société Mathématique de Belgique. Série A 1 Nieuw Archief voor Wiskunde. Vierde Serie 1 Fractals 1 Bulletin of the Brazilian Mathematical Society. New Series 1 Bulletin of the American Mathematical Society 1 Applied Mathematical Sciences 1 Cambridge Studies in Advanced Mathematics 1 Epsilon Uitgaven 1 Progress in Nonlinear Differential Equations and Their Applications

### Fields

86 Dynamical systems and ergodic theory (37-XX) 23 Global analysis, analysis on manifolds (58-XX) 18 Ordinary differential equations (34-XX) 11 Manifolds and cell complexes (57-XX) 10 General and overarching topics; collections (00-XX) 6 Differential geometry (53-XX) 6 General topology (54-XX) 5 Measure and integration (28-XX) 5 Mechanics of particles and systems (70-XX) 4 History and biography (01-XX) 4 Statistics (62-XX) 4 Fluid mechanics (76-XX) 2 Partial differential equations (35-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Probability theory and stochastic processes (60-XX) 2 Statistical mechanics, structure of matter (82-XX) 2 Biology and other natural sciences (92-XX) 1 Combinatorics (05-XX) 1 Nonassociative rings and algebras (17-XX) 1 Topological groups, Lie groups (22-XX) 1 Operator theory
(47-XX) 1 Mathematics education (97-XX) ### Citations contained in zbMATH Open 90 Publications have been cited 2,954 times in 2,459 Documents Cited by Year On the nature of turbulence. Zbl 0223.76041 Ruelle, David; Takens, Floris 1971 Detecting strange attractors in turbulence. Zbl 0513.58032 Takens, Floris 1981 Hyperbolicity and sensitive chaotic dynamics at homoclinic bifurcations. Fractal dimensions and infinitely many attractors. Zbl 0790.58014 Palis, Jacob; Takens, Floris 1993 Singularities of vector fields. Zbl 0279.58009 Takens, Floris 1973 Bifurcations and stability of families of diffeomorphisms. Zbl 0518.58031 Newhouse, S.; Palis, J.; Takens, F. 1983 Occurrence of strange Axiom A attractors near quasi periodic flows on $$T^m$$, $$m\geq 3$$. Zbl 0396.58029 Newhouse, S.; Ruelle, D.; Takens, F. 1978 Note concerning our paper ”On the nature of turbulence”. Commun. math. Phys. 20, 167-192 (1971). Zbl 0227.76084 Ruelle, D.; Takens, F. 1971 Normal forms for certain singularities of vectorfields. Zbl 0266.34046 Takens, Floris 1973 Unfoldings and bifurcations of quasi-periodic tori. Zbl 0717.58043 Broer, H. W.; Huitema, G. B.; Takens, F.; Braaksma, B. L. J. 1990 On the variational principle for the topological entropy of certain non-compact sets. Zbl 1042.37020 Takens, Floris; Verbitskiy, Evgeny 2003 Unfoldings of certain singularities of vectorfields: Generalized Hopf bifurcations. Zbl 0273.35009 Takens, Floris 1973 A global version of the inverse problem of the calculus of variations. Zbl 0463.58015 Takens, Floris 1979 Partially hyperbolic fixed points. Zbl 0214.22901 Takens, F. 1971 Forced oscillations and bifurcations. Zbl 1156.37315 Takens, Floris 2001 Hyperbolicity and the creation of homoclinic orbits. Zbl 0641.58029 Palis, J.; Takens, F. 1987 Multifractal analysis of local entropies for expansive homeomorphisms with specification. Zbl 0955.37002 Takens, Floris; Verbitski, Evgeny 1999 Orbits with historic behaviour, or nonexistence of averages. 
Zbl 1147.37013 Takens, Floris 2008 Heteroclinic attractors: Time averages and moduli of topological conjugacy. Zbl 0801.58030 Takens, Floris 1994 Topological equivalence of normally hyperbolic dynamical systems. Zbl 0391.58015 Palis, J.; Takens, F. 1977 Generic properties of geodesic flows. Zbl 0225.58006 Klingenberg, W.; Takens, F. 1972 The minimal number of critical points of a function on a compact manifold and the Lusternik-Schnirelman category. Zbl 0198.56603 Takens, F. 1968 Dynamical systems and chaos. Zbl 1218.37001 Broer, Henk; Takens, Floris 2011 Homoclinic points in conservative systems. Zbl 0247.58007 Takens, Floris 1972 Some remarks on the Böhme-Berger bifurcation theorem. Zbl 0237.47032 Takens, Floris 1972 Structures in dynamics: finite dimensional deterministic studies. Zbl 0746.58002 Broer, H. W.; Dumortier, F.; van Strien, S. J.; Takens, F. 1991 Derivations of vector fields. Zbl 0258.58005 Takens, Floris 1973 Constrained equations; a study of implicit differential equations and their discontinuous solutions. Zbl 0386.34003 Takens, Floris 1976 Generalized entropies: Rényi and correlation integral approach. Zbl 0943.37004 Takens, Floris; Verbitski, Evgeny 1998 Stable arcs of diffeomorphisms. Zbl 0339.58008 Newhouse, Sheldon; Palis, Jacob; Takens, Floris 1976 Cycles and measure of bifurcation sets for two-dimensional diffeomorphisms. Zbl 0579.58005 Palis, J.; Takens, F. 1985 Moduli and bifurcations: non-transversal intersections of invariant manifolds of vectorfields. Zbl 0473.58018 Takens, Floris 1980 Limit capacity and Hausdorff dimension of dynamically defined Cantor sets. Zbl 0661.58024 Takens, Floris 1988 Stability of parametrized families of gradient vector fields. Zbl 0533.58018 Palis, J.; Takens, F. 1983 On the numerical determination of the dimension of an attractor. Zbl 0561.58027 Takens, Floris 1985 Hamiltonian systems: Generic properties of closed orbits and local perturbations. 
Zbl 0191.21602 Takens, Floris 1970 Resonances in skew and reducible quasi-periodic Hopf bifurcations. Zbl 0989.37017 Takens, F.; Wagener, F. O. O. 2000 Formally symmetric normal forms and genericity. Zbl 0704.58047 Broer, H. W.; Takens, F. 1989 The Lusternik-Schnirelman categories of a product space. Zbl 0198.28302 Takens, F. 1970 Global phenomena in bifurcations of dynamical systems with simple recurrence. Zbl 0419.58012 Takens, F. 1979 Rotation intervals of endomorphisms of the circle. Zbl 0605.58027 Bamon, R.; Malta, I. P.; Pacifico, M. J.; Takens, F. 1984 On Zeeman’s tolerance stability conjecture. Zbl 0217.48303 Takens, F. 1971 A note on sufficiency of jets. Zbl 0231.58008 Takens, Floris 1971 Characterization of a differentiable structure by its group of diffeomorphisms. Zbl 0447.58002 Takens, Floris 1979 Detecting nonlinearities in stationary time series. Zbl 0871.62078 Takens, Floris 1993 Geometry of KAM tori for nearly integrable Hamiltonian systems. Zbl 1131.37057 Broer, Henk; Cushman, Richard; Fassó, Francesco; Takens, Floris 2007 Dynamical systems and chaos. Zbl 1283.37001 Broer, Henk; Takens, Floris 2009 Transitions from periodic to strange attractors in constrained equations. Zbl 0637.58018 Takens, F. 1987 Distinguishing deterministic and random systems. Zbl 0532.58017 Takens, F. 1983 Symmetries, conservation laws and variational principles. Zbl 0368.49019 Takens, Floris 1977 Intermittency and weak Gibbs states. Zbl 1005.37007 Maes, Christian; Redig, Frank; Takens, Floris; van Moffaert, Annelies; Verbitski, Evgeny 2000 General multifractal analysis of local entropies. Zbl 0964.37012 Takens, Floris; Verbitski, Evgeny 2000 Multiplications in solenoids as hyperbolic attractors. Zbl 1074.37019 Takens, Floris 2005 Abundance of generic homoclinic tangencies in real-analytic families of diffeomorphisms. Zbl 0772.58037 Takens, Floris 1992 Constrained differential equations. 
Zbl 0331.34001 Takens, Floris 1975 Mechanical and gradient systems; local perturbations and generic properties. Zbl 0572.58008 Takens, Floris 1983 Integrable and non-integrable deformations of the skew Hopf bifurcation. Zbl 1012.37031 Broer, H. W.; Takens, F.; Wagener, F. O. O. 1999 A $$C^ i$$ counterexample to Moser’s twist theorem. Zbl 0235.58009 Takens, Floris 1971 Preliminaries of dynamical systems theory. Zbl 1241.37004 Broer, H. W.; Takens, F. 2010 Unicity of KAM tori. Zbl 1130.37376 Broer, Henk; Takens, Floris 2007 Tolerance stability. Zbl 0321.54022 Takens, Floris 1975 Vector fields with no nonwandering points. Zbl 0339.58009 Takens, Floris; White, Warren 1976 The reconstruction theorem for endomorphisms. Zbl 1032.37012 Takens, Floris 2002 Singularities of functions and vectorfields. Zbl 0237.58012 Takens, Floris 1972 Motion under the influence of a strong constraining force. Zbl 0458.58010 Takens, Floris 1980 Reconstruction theory and nonlinear time series analysis. Zbl 1260.37057 Takens, Floris 2010 Global properties of integrable Hamiltonian systems. Zbl 1229.37052 Lukina, O. V.; Takens, F.; Broer, H. W. 2008 Mixed spectra and rotational symmetry. Zbl 0789.58050 Broer, Henk; Takens, Floris 1993 Implicit differential equations; some open problems. Zbl 0354.34017 Takens, Floris 1976 Homoclinic bifurcations and hyperbolic dynamics. 16th Brazilian colloquium on mathematics (16e colóquio Brasileiro de matemática), held in Rio de Janeiro, Brasil 1987. Zbl 0696.58001 Palis, Jacob jun.; Takens, Floris 1987 A non-stabilizable jet of a singularity of a vector field; the analytic case. Zbl 0569.58003 Takens, Floris 1984 Rényi entropies of aperiodic dynamical systems. Zbl 1044.37005 Takens, Floris; Verbitskiy, Evgeny 2002 Multifractal analysis of dimensions and entropies. Zbl 0970.37002 Takens, F.; Verbitski, E. 2000 Height functions on surfaces with three critical points. Zbl 0311.57022 Banchoff, Thomas; Takens, Floris 1975 Geometric aspects of non-linear R.L.C. 
networks. Zbl 0335.34025 Takens, Floris 1975 Characterization of compactness for symplectic manifolds. Zbl 0378.58004 Dumortier, F.; Takens, F. 1973 Handbook of dynamical systems. Volume 3. Zbl 1216.37002 2010 Moduli of singularities of vector fields. Zbl 0526.58037 Takens, Floris 1984 Dynamical systems and bifurcations. Proceedings of a Workshop held in Groningen, The Netherlands, April 16-20, 1984. Zbl 0552.00007 1985 Homoclinic tangencies: Moduli and topology of separatrices. Zbl 0783.58051 Posthumus, Rense A.; Takens, Floris 1993 Morse theory of double normals of immersions. Zbl 0228.58007 Takens, Floris; White, James 1971 A nonstabilizable jet of a singularity of a vector field. Zbl 0283.58009 Takens, Floris 1973 Local invariant manifolds and normal forms. Zbl 1248.37005 Takens, Floris; Vanderbauwhede, André 2010 Symmetries, conservation laws and symplectic structures; elementary systems. Zbl 0417.70007 Takens, F. 1979 Singularities of gradient vector fields and moduli. Zbl 0614.58033 Takens, Floris 1985 Nonlinear dynamical systems and chaos. Proceedings of the dynamical systems conference, held at the University of Groningen, Netherlands in Dec. 1995, in honour of Johann Bernoulli. Zbl 0830.00032 1996 Geometry Symposium, Utrecht 1980. Proceedings of a Symposium Held at the University of Utrecht, The Netherlands, August 27-29, 1980. Zbl 0465.00016 1981 Handbook of dynamical systems. Volume 3. Reprint of the 2010 hardback edition. Zbl 1349.37002 2016 Neural networks for prediction and control of chaotic fluidized bed hydrodynamics: A first step. Zbl 0907.76069 Bakker, R.; de Korte, R. J.; Schouten, J. C.; van den Bleek, C. M.; Takens, F. 1997 Generalized entropies. Zbl 0929.37001 Verbitskij, E. A.; Takens, F. 1998 Reduction entropy. Zbl 0885.58041 Takens, Floris 1995 Handbook of dynamical systems. Volume 3. Reprint of the 2010 hardback edition. Zbl 1349.37002 2016 Dynamical systems and chaos. 
Zbl 1218.37001 Broer, Henk; Takens, Floris 2011 Preliminaries of dynamical systems theory. Zbl 1241.37004 Broer, H. W.; Takens, F. 2010 Reconstruction theory and nonlinear time series analysis. Zbl 1260.37057 Takens, Floris 2010 Handbook of dynamical systems. Volume 3. Zbl 1216.37002 2010 Local invariant manifolds and normal forms. Zbl 1248.37005 Takens, Floris; Vanderbauwhede, André 2010 Dynamical systems and chaos. Zbl 1283.37001 Broer, Henk; Takens, Floris 2009 Orbits with historic behaviour, or nonexistence of averages. Zbl 1147.37013 Takens, Floris 2008 Global properties of integrable Hamiltonian systems. Zbl 1229.37052 Lukina, O. V.; Takens, F.; Broer, H. W. 2008 Geometry of KAM tori for nearly integrable Hamiltonian systems. Zbl 1131.37057 Broer, Henk; Cushman, Richard; Fassó, Francesco; Takens, Floris 2007 Unicity of KAM tori. Zbl 1130.37376 Broer, Henk; Takens, Floris 2007 Multiplications in solenoids as hyperbolic attractors. Zbl 1074.37019 Takens, Floris 2005 On the variational principle for the topological entropy of certain non-compact sets. Zbl 1042.37020 Takens, Floris; Verbitskiy, Evgeny 2003 The reconstruction theorem for endomorphisms. Zbl 1032.37012 Takens, Floris 2002 Rényi entropies of aperiodic dynamical systems. Zbl 1044.37005 Takens, Floris; Verbitskiy, Evgeny 2002 Forced oscillations and bifurcations. Zbl 1156.37315 Takens, Floris 2001 Resonances in skew and reducible quasi-periodic Hopf bifurcations. Zbl 0989.37017 Takens, F.; Wagener, F. O. O. 2000 Intermittency and weak Gibbs states. Zbl 1005.37007 Maes, Christian; Redig, Frank; Takens, Floris; van Moffaert, Annelies; Verbitski, Evgeny 2000 General multifractal analysis of local entropies. Zbl 0964.37012 Takens, Floris; Verbitski, Evgeny 2000 Multifractal analysis of dimensions and entropies. Zbl 0970.37002 Takens, F.; Verbitski, E. 2000 Multifractal analysis of local entropies for expansive homeomorphisms with specification. 
Zbl 0955.37002 Takens, Floris; Verbitski, Evgeny 1999 Integrable and non-integrable deformations of the skew Hopf bifurcation. Zbl 1012.37031 Broer, H. W.; Takens, F.; Wagener, F. O. O. 1999 Generalized entropies: Rényi and correlation integral approach. Zbl 0943.37004 Takens, Floris; Verbitski, Evgeny 1998 Generalized entropies. Zbl 0929.37001 Verbitskij, E. A.; Takens, F. 1998 Neural networks for prediction and control of chaotic fluidized bed hydrodynamics: A first step. Zbl 0907.76069 Bakker, R.; de Korte, R. J.; Schouten, J. C.; van den Bleek, C. M.; Takens, F. 1997 Nonlinear dynamical systems and chaos. Proceedings of the dynamical systems conference, held at the University of Groningen, Netherlands in Dec. 1995, in honour of Johann Bernoulli. Zbl 0830.00032 1996 Reduction entropy. Zbl 0885.58041 Takens, Floris 1995 Heteroclinic attractors: Time averages and moduli of topological conjugacy. Zbl 0801.58030 Takens, Floris 1994 Hyperbolicity and sensitive chaotic dynamics at homoclinic bifurcations. Fractal dimensions and infinitely many attractors. Zbl 0790.58014 Palis, Jacob; Takens, Floris 1993 Detecting nonlinearities in stationary time series. Zbl 0871.62078 Takens, Floris 1993 Mixed spectra and rotational symmetry. Zbl 0789.58050 Broer, Henk; Takens, Floris 1993 Homoclinic tangencies: Moduli and topology of separatrices. Zbl 0783.58051 Posthumus, Rense A.; Takens, Floris 1993 Abundance of generic homoclinic tangencies in real-analytic families of diffeomorphisms. Zbl 0772.58037 Takens, Floris 1992 Structures in dynamics: finite dimensional deterministic studies. Zbl 0746.58002 Broer, H. W.; Dumortier, F.; van Strien, S. J.; Takens, F. 1991 Unfoldings and bifurcations of quasi-periodic tori. Zbl 0717.58043 Broer, H. W.; Huitema, G. B.; Takens, F.; Braaksma, B. L. J. 1990 Formally symmetric normal forms and genericity. Zbl 0704.58047 Broer, H. W.; Takens, F. 1989 Limit capacity and Hausdorff dimension of dynamically defined Cantor sets. 
Zbl 0661.58024 Takens, Floris 1988 Hyperbolicity and the creation of homoclinic orbits. Zbl 0641.58029 Palis, J.; Takens, F. 1987 Transitions from periodic to strange attractors in constrained equations. Zbl 0637.58018 Takens, F. 1987 Homoclinic bifurcations and hyperbolic dynamics. 16th Brazilian colloquium on mathematics (16e colóquio Brasileiro de matemática), held in Rio de Janeiro, Brasil 1987. Zbl 0696.58001 Palis, Jacob jun.; Takens, Floris 1987 Cycles and measure of bifurcation sets for two-dimensional diffeomorphisms. Zbl 0579.58005 Palis, J.; Takens, F. 1985 On the numerical determination of the dimension of an attractor. Zbl 0561.58027 Takens, Floris 1985 Dynamical systems and bifurcations. Proceedings of a Workshop held in Groningen, The Netherlands, April 16-20, 1984. Zbl 0552.00007 1985 Singularities of gradient vector fields and moduli. Zbl 0614.58033 Takens, Floris 1985 Rotation intervals of endomorphisms of the circle. Zbl 0605.58027 Bamon, R.; Malta, I. P.; Pacifico, M. J.; Takens, F. 1984 A non-stabilizable jet of a singularity of a vector field; the analytic case. Zbl 0569.58003 Takens, Floris 1984 Moduli of singularities of vector fields. Zbl 0526.58037 Takens, Floris 1984 Bifurcations and stability of families of diffeomorphisms. Zbl 0518.58031 Newhouse, S.; Palis, J.; Takens, F. 1983 Stability of parametrized families of gradient vector fields. Zbl 0533.58018 Palis, J.; Takens, F. 1983 Distinguishing deterministic and random systems. Zbl 0532.58017 Takens, F. 1983 Mechanical and gradient systems; local perturbations and generic properties. Zbl 0572.58008 Takens, Floris 1983 Detecting strange attractors in turbulence. Zbl 0513.58032 Takens, Floris 1981 Geometry Symposium, Utrecht 1980. Proceedings of a Symposium Held at the University of Utrecht, The Netherlands, August 27-29, 1980. Zbl 0465.00016 1981 Moduli and bifurcations: non-transversal intersections of invariant manifolds of vectorfields. 
Zbl 0473.58018 Takens, Floris 1980 Motion under the influence of a strong constraining force. Zbl 0458.58010 Takens, Floris 1980 A global version of the inverse problem of the calculus of variations. Zbl 0463.58015 Takens, Floris 1979 Global phenomena in bifurcations of dynamical systems with simple recurrence. Zbl 0419.58012 Takens, F. 1979 Characterization of a differentiable structure by its group of diffeomorphisms. Zbl 0447.58002 Takens, Floris 1979 Symmetries, conservation laws and symplectic structures; elementary systems. Zbl 0417.70007 Takens, F. 1979 Occurrence of strange Axiom A attractors near quasi periodic flows on $$T^m$$, $$m\geq 3$$. Zbl 0396.58029 Newhouse, S.; Ruelle, D.; Takens, F. 1978 Topological equivalence of normally hyperbolic dynamical systems. Zbl 0391.58015 Palis, J.; Takens, F. 1977 Symmetries, conservation laws and variational principles. Zbl 0368.49019 Takens, Floris 1977 Constrained equations; a study of implicit differential equations and their discontinuous solutions. Zbl 0386.34003 Takens, Floris 1976 Stable arcs of diffeomorphisms. Zbl 0339.58008 Newhouse, Sheldon; Palis, Jacob; Takens, Floris 1976 Vector fields with no nonwandering points. Zbl 0339.58009 Takens, Floris; White, Warren 1976 Implicit differential equations; some open problems. Zbl 0354.34017 Takens, Floris 1976 Constrained differential equations. Zbl 0331.34001 Takens, Floris 1975 Tolerance stability. Zbl 0321.54022 Takens, Floris 1975 Height functions on surfaces with three critical points. Zbl 0311.57022 Banchoff, Thomas; Takens, Floris 1975 Geometric aspects of non-linear R.L.C. networks. Zbl 0335.34025 Takens, Floris 1975 Singularities of vector fields. Zbl 0279.58009 Takens, Floris 1973 Normal forms for certain singularities of vectorfields. Zbl 0266.34046 Takens, Floris 1973 Unfoldings of certain singularities of vectorfields: Generalized Hopf bifurcations. Zbl 0273.35009 Takens, Floris 1973 Derivations of vector fields. 
Zbl 0258.58005 Takens, Floris 1973 Characterization of compactness for symplectic manifolds. Zbl 0378.58004 Dumortier, F.; Takens, F. 1973 A nonstabilizable jet of a singularity of a vector field. Zbl 0283.58009 Takens, Floris 1973 Generic properties of geodesic flows. Zbl 0225.58006 Klingenberg, W.; Takens, F. 1972 Homoclinic points in conservative systems. Zbl 0247.58007 Takens, Floris 1972 Some remarks on the Böhme-Berger bifurcation theorem. Zbl 0237.47032 Takens, Floris 1972 Singularities of functions and vectorfields. Zbl 0237.58012 Takens, Floris 1972 On the nature of turbulence. Zbl 0223.76041 Ruelle, David; Takens, Floris 1971 Note concerning our paper ”On the nature of turbulence”. Commun. math. Phys. 20, 167-192 (1971). Zbl 0227.76084 Ruelle, D.; Takens, F. 1971 Partially hyperbolic fixed points. Zbl 0214.22901 Takens, F. 1971 On Zeeman’s tolerance stability conjecture. Zbl 0217.48303 Takens, F. 1971 A note on sufficiency of jets. Zbl 0231.58008 Takens, Floris 1971 A $$C^ i$$ counterexample to Moser’s twist theorem. Zbl 0235.58009 Takens, Floris 1971 Morse theory of double normals of immersions. Zbl 0228.58007 Takens, Floris; White, James 1971 Hamiltonian systems: Generic properties of closed orbits and local perturbations. Zbl 0191.21602 Takens, Floris 1970 The Lusternik-Schnirelman categories of a product space. Zbl 0198.28302 Takens, F. 1970 The minimal number of critical points of a function on a compact manifold and the Lusternik-Schnirelman category. Zbl 0198.56603 Takens, F. 1968 all top 5 ### Cited by 3,243 Authors 35 Broer, Henk W. 21 Yu, Pei 18 Mesón, Alejandro M. 18 Takens, Floris 18 Vericat, Fernando 15 Díaz, Lorenzo Justiniano 15 Hanßmann, Heinz 14 Morales, Carlos Arnoldo 14 Yorke, James Alan 13 Dumortier, Freddy 13 Moreira, Carlos Gustavo Tamm de Araujo 13 Rybicki, Sławomir Maciej 12 Barreira, Luis Manuel 12 Turaev, Dmitry V. 11 Bonatti, Christian 11 Tian, Xueting 10 Gonchenko, Sergey V. 
10 Gorodetskii, Anton Semenovich 10 Iooss, Gérard 10 Kiriki, Shin 10 Li, Feng 10 Osinga, Hinke Maria 10 Pacifico, Maria José 10 Palis, Jacob jun. 10 Roussarie, Robert 10 Tresser, Charles 10 Wu, Yusen 9 Araújo, Vítor 9 Bonckaert, Patrick 9 Brunton, Steven L. 9 Chen, Ercai 9 de la Llave, Rafael 9 Grebogi, Celso 9 Holmes, Philip J. 9 Krauskopf, Bernd 9 Lai, Yingcheng 9 Letellier, Christophe 9 Li, Weigu 9 Llibre, Jaume 9 Sevryuk, Mikhail Borisovich 8 Chen, Zhimin 8 Crovisier, Sylvain 8 Damanik, David 8 Guckenheimer, John M. 8 Guo, Shangjiang 8 Kuznetsov, Sergey P. 8 Naudot, Vincent 8 Palese, Marcella 8 Pochinka, Olga Vital’evna 8 Ruelle, David Pierre 8 Simó, Carles 8 Small, Michael 7 Aihara, Kazuyuki 7 Chen, Guanrong 7 El Naschie, Mohamed Saladin 7 Haro, Àlex 7 Homburg, Ale Jan 7 Jiang, Kan 7 Krupka, Demeter 7 Li, Larry Kin Bong 7 Liu, Yirong 7 Mello, Luis Fernando O. 7 Misiurewicz, Michał 7 Nozdrinova, Elena V. 7 Ott, Edward 7 Price, W. Geraint 7 Rams, Michał 7 Shilnikov, Leonid Pavlovich 7 Simpson, David John Warwick 7 Teixeira, Marco Antonio 7 Viana, Marcelo 6 Algaba, Antonio 6 Berger, Pierre 6 Campbell, Sue Ann 6 Cao, Yongluo 6 de Carvalho Braga, Denis 6 Efstathiou, Konstantinos 6 Farmer, James Doyne 6 Giannakis, Dimitrios 6 Giné, Jaume 6 Hommes, Cars H. 6 Ilyashenko, Yulij Sergeevich 6 Kutz, J. Nathan 6 Nakano, Yushi 6 Remizov, Alexey O. 6 Rodrigues, Alexandre A. P. 6 Rossi, Olga 6 Sataev, Igor’ Rustamovich 6 Soma, Teruhiko 6 Sotomayor, Jorge 6 Sterk, Alef E. 6 van Strien, Sebastian J. 6 Varandas, Paulo 6 Vegter, Gert 6 Wang, Qiudong 6 Zhuzhoma, Evgeniĭ Viktorovich 5 Ashwin, Peter 5 Bessa, Mário 5 Blokh, Alexander M. 5 Casdagli, Martin C. 
...and 3,143 more Authors all top 5 ### Cited in 386 Serials 203 Physica D 114 Journal of Differential Equations 113 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 95 Chaos, Solitons and Fractals 79 Ergodic Theory and Dynamical Systems 69 Chaos 64 Communications in Mathematical Physics 56 Journal of Fluid Mechanics 46 Transactions of the American Mathematical Society 42 Journal of Mathematical Analysis and Applications 33 Nonlinear Dynamics 32 Journal of Dynamics and Differential Equations 32 Discrete and Continuous Dynamical Systems 28 Applied Mathematics and Computation 27 Communications in Nonlinear Science and Numerical Simulation 26 Journal of Statistical Physics 26 Proceedings of the American Mathematical Society 24 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 24 Regular and Chaotic Dynamics 23 Inventiones Mathematicae 22 Dynamical Systems 21 Nonlinearity 20 Functional Analysis and its Applications 20 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 20 Journal of Nonlinear Science 20 Qualitative Theory of Dynamical Systems 19 Archive for Rational Mechanics and Analysis 18 Advances in Mathematics 18 Mathematische Zeitschrift 18 SIAM Journal on Applied Dynamical Systems 18 Nonlinear Analysis. Theory, Methods & Applications 17 Journal of Difference Equations and Applications 16 Bulletin of Mathematical Biology 16 Indagationes Mathematicae. New Series 16 Journal of Mathematical Sciences (New York) 15 Journal of Mathematical Physics 15 Annales de l’Institut Fourier 15 Discrete and Continuous Dynamical Systems. Series B 15 Journal of Dynamical Systems and Geometric Theories 14 Physics Letters. A 13 Topology and its Applications 12 Publications Mathématiques 12 Journal of Economic Dynamics & Control 12 Dynamics and Stability of Systems 11 Journal of Mathematical Biology 11 Annales Scientifiques de l’École Normale Supérieure. 
Quatrième Série 11 Mathematical and Computer Modelling 11 Differential Geometry and its Applications 11 Boletim da Sociedade Brasileira de Matemática. Nova Série 11 Journal of Dynamical and Control Systems 10 Biological Cybernetics 10 Journal of Computational Physics 10 Mathematical Biosciences 10 Physica A 10 ZAMP. Zeitschrift für angewandte Mathematik und Physik 10 Discrete and Continuous Dynamical Systems. Series S 9 Israel Journal of Mathematics 9 Mathematical Proceedings of the Cambridge Philosophical Society 9 Mathematische Annalen 9 Meccanica 8 Computers & Mathematics with Applications 8 Journal of the Franklin Institute 8 Boletim da Sociedade Brasileira de Matemática 8 Monatshefte für Mathematik 8 Acta Applicandae Mathematicae 8 Mathematical Problems in Engineering 7 Physics Reports 7 Journal of Geometry and Physics 7 Bulletin of the American Mathematical Society. New Series 6 Computer Methods in Applied Mechanics and Engineering 6 International Journal of Systems Science 6 Theoretical and Mathematical Physics 6 Ukrainian Mathematical Journal 6 Physics of Fluids, A 6 Journal of Computational and Applied Mathematics 6 Manuscripta Mathematica 6 Mathematics and Computers in Simulation 6 Systems & Control Letters 6 Neural Computation 6 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 6 Physics of Fluids 6 Differential Equations 6 Advances in Difference Equations 5 Journal of Applied Mathematics and Mechanics 5 Mathematical Notes 5 Annali di Matematica Pura ed Applicata. Serie Quarta 5 Compositio Mathematica 5 Czechoslovak Mathematical Journal 5 Journal of Economic Theory 5 Journal of Soviet Mathematics 5 Japan Journal of Applied Mathematics 5 The Journal of Geometric Analysis 5 Applied Mathematical Modelling 5 Journal de Mathématiques Pures et Appliquées. Neuvième Série 5 Vestnik St. Petersburg University. Mathematics 5 Fractals 5 Bulletin des Sciences Mathématiques 5 Abstract and Applied Analysis 5 Acta Mathematica Sinica. 
English Series 5 Annales Henri Poincaré ...and 286 more Serials all top 5 ### Cited in 54 Fields 1,481 Dynamical systems and ergodic theory (37-XX) 542 Ordinary differential equations (34-XX) 254 Fluid mechanics (76-XX) 192 Biology and other natural sciences (92-XX) 182 Global analysis, analysis on manifolds (58-XX) 154 Mechanics of particles and systems (70-XX) 131 Partial differential equations (35-XX) 127 Measure and integration (28-XX) 115 Statistics (62-XX) 86 Manifolds and cell complexes (57-XX) 79 Systems theory; control (93-XX) 75 Numerical analysis (65-XX) 69 Differential geometry (53-XX) 62 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 61 Statistical mechanics, structure of matter (82-XX) 57 Probability theory and stochastic processes (60-XX) 53 General topology (54-XX) 51 Computer science (68-XX) 51 Information and communication theory, circuits (94-XX) 39 Operator theory (47-XX) 39 Quantum theory (81-XX) 35 Algebraic topology (55-XX) 28 Several complex variables and analytic spaces (32-XX) 27 Number theory (11-XX) 24 Difference and functional equations (39-XX) 24 Classical thermodynamics, heat transfer (80-XX) 23 Nonassociative rings and algebras (17-XX) 23 Mechanics of deformable solids (74-XX) 21 Real functions (26-XX) 21 Calculus of variations and optimal control; optimization (49-XX) 21 Geophysics (86-XX) 15 Operations research, mathematical programming (90-XX) 14 Functional analysis (46-XX) 14 Optics, electromagnetic theory (78-XX) 13 Algebraic geometry (14-XX) 13 Relativity and gravitational theory (83-XX) 12 General and overarching topics; collections (00-XX) 12 History and biography (01-XX) 12 Topological groups, Lie groups (22-XX) 10 Group theory and generalizations (20-XX) 10 Functions of a complex variable (30-XX) 6 Mathematical logic and foundations (03-XX) 5 Astronomy and astrophysics (85-XX) 4 Combinatorics (05-XX) 4 Commutative algebra (13-XX) 4 Linear and multilinear algebra; matrix theory (15-XX) 4 
Associative rings and algebras (16-XX) 4 Category theory; homological algebra (18-XX) 4 Harmonic analysis on Euclidean spaces (42-XX) 3 Special functions (33-XX) 3 Convex and discrete geometry (52-XX) 2 Geometry (51-XX) 1 Abstract harmonic analysis (43-XX) 1 Mathematics education (97-XX)
https://e-magnetica.pl/file/electromagnetic_field_magnetica_png
# Encyclopedia Magnetica

## electromagnetic_field_magnetica.png

Schematic representation of the field lines of the electric field around a negative electric charge. A static electric charge (grey small circle at the centre) generates an electrostatic field (blue area). A sudden acceleration of the charge (dark blue small circle) creates an electromagnetic pulse (red ring), which radiates away into space at the speed of light. A charge moving at a constant velocity v generates electric and magnetic fields attached to the charge (green area).

You are permitted and indeed encouraged to use this image freely, for any legal purpose including commercial, and with any modifications (the permission is hereby given, so there is no need to ask for it explicitly again), but you MUST always give the following credits: S. Zurek, Encyclopedia Magnetica, CC-BY-4.0. We would appreciate it if you let us know of any use: [email protected]
https://oertx.highered.texas.gov/courseware/lesson/1062/overview
Author: Kris Seago
Subject: Government/Political Science
Material Type: Full Course
Level:
Provider: Austin Community College
Tags: ACC Liberal Arts, ACC OER
Language: English
Media Formats: Text/HTML

# Constitution of 1866

## Overview

# Learning Objectives

By the end of this section, you will be able to:

• Understand the Constitution of 1866’s role in Texas history

# Introduction

This section discusses the Constitution of 1866’s role in Texas history.

# Constitution of 1866

The Constitutional Convention of 1866, in addition to other actions in compliance with presidential Reconstruction, proposed a series of amendments to the fundamental law, which came to be known as the Constitution of 1866. The governor’s term was increased to four years and his salary from $3,000 to $4,000 a year. He was prohibited from serving more than eight years in any twelve-year period. For the first time the governor was given the line-item veto on appropriations. He was empowered to convene the legislature at some place other than the state capital should the capital become dangerous “by reason of disease or the public enemy.” The comptroller and treasurer were elected by the voters to hold office for four years. The Senate was set to number from nineteen to thirty-three members and the House from forty-five to ninety; legislators were required to be white men with a prior residence of five years in Texas. Terms of office were to remain the same as before, but salaries of legislators were raised from three dollars a day to eight dollars, and mileage was increased to eight dollars for each twenty-five miles. A census and reapportionment, based on the number of white citizens, was to be held every ten years. The Supreme Court was increased from three judges to five, with a term of office of ten years and a salary of $4,500 a year. The chief justice was to be selected by the five justices on the court from their own number.
District judges were elected for eight years at salaries of $3,500 a year. The attorney general was elected for four years with a salary of $3,000. Jurisdiction of all courts was specified in detail. A change was made in the method of constitutional revision in that a three-fourths majority of each house of the legislature was required to call a convention to propose changes in the constitution, and the approval of the governor was required. Elaborate plans were made for a system of internal improvements and for a system of public education to be directed by a superintendent of public instruction. Separate schools were ordered organized for black children. Lands were set aside for the support of public schools, for the establishment and endowment of a university, and for the support of eleemosynary institutions. The legislature was empowered to levy a school tax. An election in June ratified the proposed amendments by a vote of 28,119 to 23,400; the small majority was attributed to the dissatisfaction of many citizens with the increase in officials’ salaries.

# Sources

Hans Peter Nielsen Gammel, comp., Laws of Texas, 1822–1897 (10 vols., Austin: Gammel, 1898).
Charles W. Ramsdell, Reconstruction in Texas (New York: Columbia University Press, 1910; rpt., Austin: Texas State Historical Association, 1970).
John Sayles, The Constitutions of the State of Texas (1872; 4th ed., St. Paul, Minnesota: West, 1893).
Vernon's Annotated Constitution of the State of Texas (Kansas City: Vernon Law Book Company, 1955).
Handbook of Texas Online, S. S. McKay, "CONSTITUTION OF 1866," accessed August 23, 2019.
https://pos.sissa.it/360/020/
Volume 360 - ALPS 2019 An Alpine LHC Physics Summit (ALPS2019) - Flavour physics

Displaced Heavy Neutrinos at the LHC and Beyond

W. Liu

Full text: pdf
Published on: February 08, 2021

Abstract
We investigate the pair-production of right-handed neutrinos via the Standard Model (SM) Higgs boson in a gauged $B-L$ model. The right-handed neutrinos, with masses of a few tens of GeV generating viable light neutrino masses via the seesaw mechanism, naturally exhibit displaced vertices and distinctive signatures at the LHC. We focus on the displaced leptonic final states arising from decays of the SM Higgs boson, and analyse the sensitivity reach of the LHC and beyond in probing the active-sterile neutrino mixing. We also analyse pair production of right-handed neutrinos from the decay of the additional neutral gauge boson $Z^\prime$. This is especially interesting when the $Z^\prime$ is relatively light, which can lead to displaced vertices in the forward direction. We perform a similar simulation focused on displaced final states at the FASER 2, MAPP*, CODEX-b, LHCb, MATHUSLA, and CMS detectors for such a process.

DOI: https://doi.org/10.22323/1.360.0020

How to cite
Metadata are provided both in "article" format (very similar to INSPIRE), as this helps create very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format, which is more detailed and complete.

Open Access
Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
https://gssc.esa.int/navipedia/index.php/Instrumental_Delay
# Instrumental Delay

Fundamentals
Title: Instrumental Delay
Author(s): J. Sanz Subirana, J.M. Juan Zornoza and M. Hernández-Pajares, Technical University of Catalonia, Spain.
Level: Intermediate
Year of Publication: 2011

Possible sources of these delays are antennas and cables, as well as the different filters used in receivers and satellites. These instrumental delays affect both code and carrier measurements. The receiver instrumental delay is assimilated into the receiver clock. That is, being common to all satellites, it is assumed to be zero and is included in the receiver clock estimate. Satellite clocks (broadcast or precise) are referred to the ionosphere-free combination of codes ($R_{_{PC}}$) and, hence, the instrumental delays cancel in such a combination of two-frequency signals (see Combination of GNSS Measurements).

For single-frequency users, the satellites broadcast in their navigation messages the Timing Group Delay or Total Group Delay (TGD), which is proportional to the Differential Code Bias (DCB), or inter-frequency bias, between the two codes involved in such a PC combination, $K_{P21}\equiv K_{P2}-K_{P1}$ (see equations (6, 11) in the article Combining pairs of signals and clock definition for further details).

$TGD_{P1}=\frac{-1}{\gamma_{_{12}}-1} (K_{P2}^{sat}-K_{P1}^{sat})=-\hat{\alpha}_1\,K_{P21}^{sat} \qquad \mbox{(1)}$

As the instrumental delays cancel in the PC combination, the TGDs for the two associated codes are related by the squared ratio of their signal frequencies:

$TGD_{P2}=\gamma_{_{12}}\, TGD_{P1} \qquad \mbox{(2)}$

It must be pointed out that the instrumental delay depends not only on the signal frequency but also on the code. For example, there is a DCB between the C1 and P1 GPS codes, and hence the DCB between the P2 and P1 codes is different from the DCB between the P2 and C1 codes.
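As a numerical illustration of equations (1) and (2) — a sketch, not part of the original article — the factors $\gamma_{12}$ and $\hat{\alpha}_1$ can be evaluated for the GPS L1/L2 pair; the DCB value used below is hypothetical:

```python
# Illustrative sketch of equations (1) and (2) for GPS L1/L2.
# The inter-frequency bias K_P21 below is a made-up example value.

F1 = 1575.42e6   # GPS L1 carrier frequency, Hz (154 x 10.23 MHz)
F2 = 1227.60e6   # GPS L2 carrier frequency, Hz (120 x 10.23 MHz)

gamma_12 = (F1 / F2) ** 2            # = (77/60)^2 ~= 1.6469
alpha_hat_1 = 1.0 / (gamma_12 - 1.0)  # ~= 1.5457

def tgd_p1_from_dcb(k_p21):
    """Equation (1): TGD_P1 = -alpha_hat_1 * K_P21 (all quantities in seconds)."""
    return -alpha_hat_1 * k_p21

def tgd_p2_from_p1(tgd_p1):
    """Equation (2): TGD_P2 = gamma_12 * TGD_P1."""
    return gamma_12 * tgd_p1

# Example with a hypothetical satellite DCB K_P21 of -3 ns:
k_p21 = -3.0e-9                      # seconds (illustrative value only)
tgd_p1 = tgd_p1_from_dcb(k_p21)      # ~ +4.64 ns
tgd_p2 = tgd_p2_from_p1(tgd_p1)      # ~ +7.64 ns
```

Because both GPS carriers are multiples of the fundamental 10.23 MHz clock, the ratio reduces exactly to 77/60, so the two TGDs always keep the fixed proportion of equation (2).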
The biases between the C1 and P1 codes for the different GPS satellites are provided, for instance, by IGS centres. The Galileo navigation messages F/NAV and I/NAV broadcast TGDs for the code on frequency E1, associated with the ionosphere-free combinations of codes at the frequencies E5a,E1 and E5b,E1, respectively. No TGDs (nor any ionospheric model) are broadcast in the GLONASS navigation message.

Besides the navigation message, DCBs are also provided by IGS centres, together with Global Ionospheric Maps (GIM) files. In this regard, it must be pointed out that there is a correlation between the DCBs and the ionospheric estimates (see Combining pairs of signals and clock definition). Indeed, the DCBs are associated with the ionospheric model used to compute such values. Hence, the DCBs broadcast in the GPS navigation message must be used with the Klobuchar ionospheric model, and the DCBs of the IGS GIMs with the IONEX files.

Figure 1: First row shows the horizontal (left) and vertical (right) positioning error using (blue) or not using (red) the broadcast Total Group Delays (TGDs) (equation 1). The variation in range is shown in the second row at left.

Figure 1 illustrates the values of the GPS P1 code TGDs. Moreover, the effect of neglecting such delays in Single Point Positioning on the horizontal and vertical error components is depicted by comparing the navigation solution using (blue) and not using (red) such TGDs.

Finally, the carrier phase instrumental delays are assimilated into the unknown ambiguities, which are estimated as real numbers (floating ambiguities) when positioning in PPP. Figure 2 shows the fractional part of the wide-lane and L1 ambiguities [footnote 1] for a GPS satellite and for a receiver. These carrier phase instrumental delays cancel in the double differences between satellites and receivers and, hence, do not need to be known to fix ambiguities in differential mode.
Nevertheless, the satellite delays are needed for undifferenced ambiguity fixing, e.g., real-time ambiguity fixing for PPP (see Carrier Phase Ambiguity Fixing).

For more information see GPS C1, P1 and P2 Codes and Receiver Types.

## Notes

1. ^ Actually, the residuals of the carrier phase instrumental delays modulo the wavelength.

## References

1. ^ [Juan et al., 2010] Juan, J., Hernandez-Pajares, M. and Sanz, J., 2010. Precise Real Time Orbit Determination and Time Synchronization (PRTODTS). TN2 & TN3: GPS PRTODTS Design and Validation Documents, v0.0. ESA report (ESA ITT AO/1-5823/08/NL/AT), ESOC/ESA, The Netherlands.
http://gea.esac.esa.int/archive/documentation/GDR2/Data_processing/chap_cu5pho/sec_cu5pho_calibr/ssec_cu5pho_SaturationCorr.html
# 5.3.8 Saturation correction

Author(s): Josep M. Carrasco

The current calibration does not perfectly account for saturation effects (see Figure 5.22, comparing residuals in the Hipparcos and Tycho-2 photometric transformations derived in Section 5.3.7 as a function of magnitude). In order to provide a tool to empirically correct this trend at bright magnitudes, we fit here a relationship to the trend in Figure 5.22, combining both Hipparcos and Tycho-2 residuals. The resulting fitted relationships are the following:

\begin{align}
G^{\rm corr}-G &= -0.047344+0.16405G-0.046799G^{2}+0.0035015G^{3}\\
G_{\rm BP}^{\rm corr}-G_{\rm BP} &= -2.0384+0.95282G-0.11018G^{2}\\
G_{\rm RP}^{\rm corr}-G_{\rm RP} &= -13.946+13.239G_{\rm RP}-4.23G_{\rm RP}^{2}+0.4532G_{\rm RP}^{3}
\end{align}

The ranges of applicability for these relationships are the following:

• $2.0$ mag for the $G$ relationship,
• $2.0$ mag for the $G_{\rm BP}$ relationship, and
• $2.0$ mag for the $G_{\rm RP}$ relationship.
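The three fitted relationships can be evaluated with a small helper. This is a sketch: the coefficients are taken verbatim from the expressions above, the function names are ours, and the corrections are only valid in the bright-magnitude ranges quoted above.

```python
# Empirical saturation corrections (magnitude offsets G_corr - G, etc.)
# fitted from Hipparcos and Tycho-2 residuals; coefficients from the text.

def delta_g(g):
    """G_corr - G, a cubic polynomial in G."""
    return -0.047344 + 0.16405 * g - 0.046799 * g**2 + 0.0035015 * g**3

def delta_gbp(g):
    """G_BP_corr - G_BP, a quadratic in G (note: parameterized in G, not G_BP)."""
    return -2.0384 + 0.95282 * g - 0.11018 * g**2

def delta_grp(grp):
    """G_RP_corr - G_RP, a cubic in G_RP."""
    return -13.946 + 13.239 * grp - 4.23 * grp**2 + 0.4532 * grp**3
```

A corrected magnitude would then be, e.g., `G + delta_g(G)`, applied only to stars inside the stated ranges of applicability.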
https://kelvins.esa.int/star-trackers-first-contact/challenge/
#### Timeline The competition is over. ## Main Goal The goal of this competition is to propose new and fast star identification algorithms for star trackers that are robust to measurement uncertainties and artifacts. Given only one image as information, this scenario models a spacecraft being "lost in space", which means that there is no prior information on the spacecraft's orientation. It is your job to recover a sense of orientation, allowing the spacecraft to keep pursuing the goal of its mission. ## In a nutshell Star trackers are devices commonly present in most spacecraft to determine orientation based on camera images of stars. A crucial element of the star tracker functioning is the star identification algorithm. This algorithm is presented with an image containing spikes (bright points) that can represent either true stars or artifacts. Using a catalog of stars defined by an identifier $$h$$, a magnitude $$m$$ and by two angles $$\omega, \psi$$ determining their position on the galactic sphere, the star identification algorithm maps each of the spikes to an appropriate star in this catalog or classifies it as an artifact. The input to the algorithm (a scene) is thus a set of $$N$$ image coordinates and magnitudes $$C = \{[x_1\ y_1\ m_1], [x_2\ y_2\ m_2], ..., [x_N\ y_N\ m_N]\}$$ and the output is a set of identifiers $$H = \{h_1, h_2, ..., h_N\}$$ from the catalog (false stars will be identified by a default identifier $$h_F$$). The star identification algorithm can therefore be seen as a function $$H = f(C)$$. In this competition, a star catalog is given (see the data page), together with a large number of possible scenes $$C_i, i = 1 .. N$$, and the participants are asked to submit a file containing the identifications made for all stars in each scene, and optionally a separate C file containing their code so as to allow us to profile it. Figure 1: The galactic coordinate system. 
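To make the input/output shape of $$H = f(C)$$ concrete, here is a trivial baseline identifier that labels every spike as a false star. It illustrates only the data layout; the value `-1` for $$h_F$$ and the tuple layout are our assumptions, not the competition's actual conventions:

```python
# Baseline star identification: map every spike in a scene to the
# false-star identifier. Shows only the shape of H = f(C).
# h_F = -1 is an assumed placeholder, not the competition's convention.

H_FALSE = -1

def identify(scene):
    """scene: list of (x, y, m) spikes -> list of catalog identifiers H."""
    return [H_FALSE for _spike in scene]
```

Any real solution replaces the body of `identify` with pattern matching against the catalog, but must keep this scene-in, identifiers-out contract.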
## Scene generation Understanding how a scene is generated helps to identify stars in a scene. In other words, how does a given star end up as a spike at some image position? This process is, essentially, the inverse of star identification. The implementation of our scene generation can be found on the scripts page and you can use it to create your own scenes. The code is provided only as a courtesy and is not necessary for the purpose of submitting a solution to the problem, but it certainly helps non-experts in understanding it. The parts of the code (simulator.py) that implement the described formulas are explicitly referenced. ## Understanding the camera model We are given the star's identifier $$h$$ and want to know its coordinates (if any) $$x$$ and $$y$$ in the image of our star tracker camera. The first step is to determine the position of the star on the celestial sphere. While the stars have different distances from our solar system, one can consider all stars to be infinitely far away. Therefore, two angles are enough to describe the position of the star on the celestial sphere. The star catalog contains the right ascension $$\omega$$ and declination $$\psi$$ of the star in the International Celestial Reference System (ICRS J1991.25). Figure 1 shows a hemisphere of the celestial sphere with the cardinal points (N, E, S, W) and the zenith (Z). The right ascension is the angle between the north point and the star's position projected onto the horizontal plane. This angle can be between 0 and 360 degrees and corresponds to a rotation around the z-axis. The declination then is the angle between the horizontal plane and the actual star. To get to the zenith (top) of the sphere, this angle has to be 90 degrees, or -90 to reach the nadir (bottom), so the angle can be between -90 and 90 degrees. 
The right ascension and declination are stored next to the HIP number (star identifier) in the star catalog, implemented in the StarCatalog class, which loads the catalog data. The next step is to transform this position of the star in the galactic coordinate system to the image coordinate system. To do so, a model of the camera is used, consisting of an extrinsic and an intrinsic part. The extrinsic part describes the outer transformation of the star position as seen in the celestial reference frame to the camera/observer reference frame, typically the location and orientation, in our case reduced to just the orientation. The intrinsic part describes the transformation done by the optical system of the camera and, for example, contains the distortion caused by the lens. It transforms the observer reference frame coordinates to the image plane coordinates. Figure 2: Rotation of the sky as seen from the camera. ### Extrinsic model The camera can have any arbitrary orientation with respect to the celestial reference frame. The camera/observer reference system has its own cardinal points (N, E, S, W), which are more commonly called up, right, down and left, as well as zenith (front) and nadir (back). Figure 2 illustrates this point of view: the celestial reference frame is shown in red and the camera reference frame is illustrated in blue. There are several options to transform the angles in the celestial reference frame $$\omega$$ and $$\psi$$ into the frame of the camera. One way is to transform the angular representation into a unit vector and use a rotation matrix or quaternions to rotate it, before transforming it back into an angular representation. We call the angular representation obtained this way azimuth and altitude, which correspond to the right ascension and declination in the original galactic coordinate system. To not introduce too many variables, we will keep the notation $$\omega$$ and $$\psi$$ for azimuth and altitude, respectively, from now on. 
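The angle-to-vector, rotate, vector-to-angle pipeline can be sketched as follows. This is a sketch under one common spherical-coordinate convention; the competition simulator's actual axis conventions may differ, and the function names mirror the simulator's but the code here is ours:

```python
import math

def angles_to_vector(omega, psi):
    """(omega, psi) in radians -> unit vector, using the convention
    x = cos(psi)cos(omega), y = cos(psi)sin(omega), z = sin(psi)."""
    return (math.cos(psi) * math.cos(omega),
            math.cos(psi) * math.sin(omega),
            math.sin(psi))

def vector_to_angles(v):
    """Inverse of angles_to_vector: unit vector -> (omega, psi) in radians,
    with omega wrapped into [0, 2*pi)."""
    x, y, z = v
    return (math.atan2(y, x) % (2.0 * math.pi), math.asin(z))

def rotate(matrix, v):
    """Apply a 3x3 rotation matrix (tuple of row tuples) to a vector,
    i.e. a plain-Python stand-in for numpy.dot(matrix, v)."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in matrix)
```

A celestial-frame star is then identified in the camera frame by `vector_to_angles(rotate(R, angles_to_vector(omega, psi)))` for some camera rotation matrix `R`.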
The implementation contains the functions angles_to_vector and vector_to_angles to transform from angle to vector representation and vice versa. The transformation itself is done with a rotation matrix that can simply be multiplied with numpy's dot function. Two functions help to generate such a rotation matrix: lookat creates a rotation matrix that makes the camera look at a specific direction in the celestial reference frame, and random_matrix creates a random rotation matrix. Figure 3: Side view of the intrinsic camera model. ### Intrinsic model Figure 3 shows a side view of the intrinsic camera model. The camera projects the light onto the image plane. The incoming light ray passes through the optical center. The optical axis is the axis that is normal to the image plane and passes through the optical center. The incoming ray comes at an angle $$\theta = \frac{\pi}{2} - \psi$$ to the optical axis and passes through the optical center, where it is distorted. We assume this distortion to be radially symmetric, so the azimuth of the incoming ray is not distorted. The principal point $$p$$ is the point on the image where the optical axis and image plane intersect. Ideally this would be the center of the image. The radial distance $$r$$ from the principal point is a function of the angle $$\theta$$. For an undistorted light ray that passes the optical center without a change of direction, this function is $$r(\theta) = f \tan(\theta),$$ where $$f$$ is the focal length. This model is known as the rectilinear or perspective camera model. Other commonly used projection functions are the (1) stereographic, (2) equidistant or equi-angular, (3) equisolid angle or equal area, and (4) orthographic projection: Figure 4: Intrinsic camera model. 
\begin{align} r_1(\theta) &= 2f \tan\left(\frac{\theta}{2}\right) \\ r_2(\theta) &= f \theta \\ r_3(\theta) &= 2f \sin\left(\frac{\theta}{2}\right) \\ r_4(\theta) &= f \sin(\theta) \end{align} Figure 4 shows the intrinsic camera model from a different perspective. We can see the width $$w$$ and height $$h$$ of the image. Using the equation for the rectilinear projection, we can also determine the field of view $$fov_x$$ in the x-direction as $$fov_x = 2 \tan^{-1}\left(\frac{w}{2 f}\right),$$ where the focal length $$f$$ is given in pixels. If the focal length is given in meters, another factor comes into play: the pixel size $$s$$ on the sensor, given in meters. Note that in the case of non-square pixels, the focal length in pixels differs for the x- and y-axis. This is typically accounted for by using the focal length for the x-axis and multiplying $$r$$ by the pixel size aspect ratio for the y-axis. Figure 5: Camera versus image coordinate system. A final step we have to consider is the coordinate system of our image. Previously we defined the camera to look along the positive z-axis with the positive y-axis pointing up. In our right-handed coordinate system this results in the x-axis pointing left instead of the usual right, as we want to have it in our image. This is illustrated in Figure 5, where the hemisphere has been turned around, so that you can imagine the image lying on the base of the hemisphere, as if it were lying on a table. The original three-dimensional axes are colored blue, while the target two-dimensional image axes are colored yellow. One way to solve this problem is to simply flip the x-axis. Alternatively, we can transform the azimuth $$\omega$$ to the angle $$\alpha = \pi - \omega$$. 
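The five projection functions and the field-of-view formula above translate directly into code. A minimal sketch (function names are ours; `f` is the focal length in pixels, `theta` in radians):

```python
import math

def r_rectilinear(theta, f):
    """Undistorted perspective model: r = f * tan(theta)."""
    return f * math.tan(theta)

def r_stereographic(theta, f):
    """r_1: r = 2f * tan(theta / 2)."""
    return 2.0 * f * math.tan(theta / 2.0)

def r_equidistant(theta, f):
    """r_2 (equi-angular): r = f * theta."""
    return f * theta

def r_equisolid(theta, f):
    """r_3 (equal area): r = 2f * sin(theta / 2)."""
    return 2.0 * f * math.sin(theta / 2.0)

def r_orthographic(theta, f):
    """r_4: r = f * sin(theta)."""
    return f * math.sin(theta)

def fov_x(width_px, f_px):
    """Horizontal field of view of the rectilinear model, in radians."""
    return 2.0 * math.atan(width_px / (2.0 * f_px))
```

All five models agree for small angles (where tan and sin are close to theta) and differ toward the edge of the field of view, which is why the choice of projection matters most for wide-angle lenses.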
Now we can combine all these efforts into our final equations: \begin{align} x &= r(\theta) \cos(\alpha) + p_x\\ y &= \frac{s_x}{s_y} r(\theta) \sin(\alpha) + p_y \end{align} The intrinsic camera model is implemented in the abstract class Camera, with derived classes implementing the different projection functions (RectilinearCamera, EquidistantCamera, OrthographicCamera, etc.). The Camera class offers two methods: from_angles transforms from angles to pixel coordinates and to_angles transforms in the opposite direction. ## Star Magnitude The magnitude of a star has historically been measured on a relative logarithmic scale. The brightest stars were given a magnitude of 1 and the faintest stars still visible to the naked eye a magnitude of 6. Magnitude was assigned based on comparison with other stars. Due to the eye's logarithmic sensitivity, the magnitude is logarithmically related to the luminous intensity. Today the magnitude is calibrated based on a set of reference stars. By definition, 5 magnitude steps correspond to a factor 100 difference in luminous intensity. This definition leads to the formula $$m - m_{ref} = -2.5 \log\left(\frac{I}{I_{ref}}\right),$$ where $$m$$ is the magnitude and $$I$$ is the luminous intensity. For example, a star with a magnitude $$m = 5$$ compared to a reference star with $$m_{ref} = 0$$ leads to $$\frac{I}{I_{ref}} = 10^{\frac{5 - 0}{-2.5}} = 10^{-2} = \frac{1}{100},$$ as expected from the definition. A star tracker camera can then be calibrated by measuring reference stars with known magnitude. The number of photoelectrons measured by the sensor is assumed to be proportional to the luminous intensity. The problem in practice is of course noise, which will be discussed in the next section. ## Noise Real star tracking systems deal with different sources of noise. In general, the noise on the position $$x$$ and $$y$$ is lower than on the magnitude $$m$$. 
For this reason some star tracker algorithms completely ignore the magnitude information. The centroiding of the stars is in practice more accurate than the quantization noise of a single pixel. The centroiding noise is modeled as Gaussian noise. The magnitude has different types of noise contributions, such as dark current, shot noise and readout noise. Overall, the noise components on the magnitude are modeled with different contributions from Poisson, Gaussian and other types of noise. These noise components on the magnitude are modeled in the class StarDetector. Finally, the noise might cause the centroiding algorithm to miss stars or to detect artifacts. Reflections or radiation can cause the camera to detect spikes that are not actual stars. The star identification algorithm has to deal with these artifacts and mark them as such. The overall scene generation is implemented in the Scene class. The static method random generates a random scene by calling the methods compute (which computes a random orientation, transforms the celestial coordinates to the image plane and adds the Gaussian centroiding noise to this position), add_false_stars (which adds a random number of artifacts), scramble (which scrambles the order of the stars and artifacts in the scene) and finally add_magnitude_noise (which adds the magnitude noise modeled with the StarDetector class). The method render can be used to visualize the image that the star tracker camera would see.
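Putting the pieces together, the final equations above can be sketched as a single noise-free projection from camera-frame angles to pixel coordinates, using the rectilinear $$r(\theta)$$. This is a sketch only: the function name is ours and the competition simulator's exact conventions (axis flip, principal point, pixel aspect) may differ in detail:

```python
import math

def project(omega, psi, f, p_x, p_y, sx_over_sy=1.0):
    """Camera-frame azimuth omega and altitude psi (radians) -> pixel (x, y),
    per the final equations with the rectilinear model r(theta) = f*tan(theta).
    f is the focal length in pixels; (p_x, p_y) is the principal point."""
    theta = math.pi / 2.0 - psi        # angle between ray and optical axis
    alpha = math.pi - omega            # flip x-axis into the image convention
    r = f * math.tan(theta)            # rectilinear radial distance
    x = r * math.cos(alpha) + p_x
    y = sx_over_sy * r * math.sin(alpha) + p_y
    return x, y
```

As a sanity check, a star on the optical axis (psi = pi/2, so theta = 0) lands exactly on the principal point, regardless of omega.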
https://www.usgs.gov/media/images/kw-webcam-image-taken-december-20-2020-just-6-pm-hst
# KW webcam image taken on December 20, 2020, just before 6 p.m. HST. ### Detailed Description Kīlauea summit KW webcam image taken on December 20, 2020, just before 6 p.m. HST. Three and a half hours later, at 9:30 p.m., an eruption began in the walls of Halemaʻumaʻu crater, vaporizing the lake. You can view live KW webcam images here. USGS photo. Public Domain.
http://www.csm.ornl.gov/newsite/events.html
Events: Workshops and Conferences

Advances in Scientific Computing and Applied Mathematics (October 9-12, 2015)
Clayton Webster

The conference on "Advances in scientific computing and applied mathematics" will take place October 9-12 in the Stratosphere Hotel in Las Vegas, Nevada. The conference is co-sponsored by the Oak Ridge National Laboratory, Sandia National Laboratories and the Office of Science, Advanced Scientific Computing Research (ASCR), at the Department of Energy. We also invite the participants to submit an original research paper to a special issue of the journal "Computers and Mathematics with Applications" to honor Prof. Max Gunzburger's 70th birthday. The issue will be edited by Drs. Pavel Bochev, Qiang Du, Steven L. Hou and Clayton Webster. The submissions are due by March 31st, 2015. We look forward to your contribution. Visit site [here].

OpenSHMEM 2015: Second workshop on OpenSHMEM and Related Technologies (August 4-6, 2015)
Pavel Shamis

The OpenSHMEM workshop is an annual event dedicated to the promotion and advancement of the OpenSHMEM programming interface and to helping shape its future direction. It is the premier venue to discuss and present the latest developments, implementation technologies, tools, trends, recent research ideas and results related to OpenSHMEM. This year's workshop will explore the ongoing evolution of OpenSHMEM as a next-generation PGAS programming model to address the needs of exascale applications. The focus will be on future extensions to improve OpenSHMEM on current and upcoming architectures. Although this is an OpenSHMEM-specific workshop, we welcome ideas used for other PGAS languages/APIs that may be applicable to OpenSHMEM. Visit site [here].

Beyond Lithium Ion VIII (June 2-4, 2015)
Sreekanth Pannala and Jason Zhang

Significant advances in electrical energy storage could revolutionize the energy landscape. 
For example, widespread adoption of electric vehicles could greatly reduce dependence on finite petroleum resources, reduce carbon dioxide emissions and provide new scenarios for grid operation. Although electric vehicles with advanced lithium ion batteries have been introduced, further breakthroughs in scalable energy storage, beyond current state-of-the-art lithium ion batteries, are necessary before the full benefits of vehicle electrification can be realized. Motivated by these societal needs and by the tremendous potential for materials science and engineering to provide necessary advances, a consortium comprising IBM Research and five U.S. Department of Energy National Laboratories (National Renewable, Argonne, Lawrence Berkeley, Pacific Northwest, and Oak Ridge) will host a symposium June 2-4, 2015, at Oak Ridge National Laboratory. This is the eighth in a series of conferences that began in 2009. Visit site [here].

Numerical and Computational Developments to Advance Multiscale Earth System Models (MSESM) (June 1-3, 2015)
Kate Evans

Substantial recent development of Earth system models has enabled simulations that capture climate change and variability at ever finer spatial and temporal scales. In this workshop we seek to showcase recent progress on the computational development needed to address the new complexities of climate models and their multi-scale behavior to maximize efficiency and accuracy. This workshop brings together computational and domain Earth scientists to focus on Earth system models at the largest scales for deployment on the largest computing and data centers. 
Topics include, but are not limited to:

• multi-scale time integration
• advection schemes
• regionally- and dynamically-refined meshes
• many-core acceleration techniques
• examination of multi-scale atmospheric events
• coupled interactions between model components (such as atmosphere, ocean, and land), and techniques to make these couplings computationally tractable

Visit the site [here].

ASCR Workshop on Quantum Computing for Science (February 17-18, 2015)
Travis Humble

At the request of the Department of Energy's (DOE) Office of Advanced Scientific Computing Research (ASCR), this program committee has been tasked with organizing a workshop to assess the viability of quantum computing technologies to meet the computational requirements in support of DOE's science and energy mission and to identify the potential impact of these technologies. As part of the process, the program committee is soliciting community input in the form of position papers. The program committee will review these position papers and, based on the fit of their area of expertise and interest, selected contributors will have the opportunity to participate in the workshop currently planned for February 17-18th, 2015 in Bethesda, MD. Visit the site [here].

OpenSHMEM User Group - OUG2014 (October 7)

The OpenSHMEM User Group (OUG 2014) is a user meeting dedicated to the promotion and advancement of all aspects of the OpenSHMEM API and its tools eco-system. The goal of the meeting is to discuss and present the ongoing user experiences, research, implementations, and tools that use the OpenSHMEM API. A particular emphasis will be given to the ongoing research that can enhance the OpenSHMEM specification to leverage the emerging hardware while addressing the application needs. Visit the site [here]. 
DOE Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting (June 16-20)
John Turner and Sreekanth Pannala

The DOE Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting was held on June 16-20, 2014, in Washington, D.C. John Turner, Group Leader of Computational Engineering and Energy Sciences (CEES), and Sreekanth Pannala, Distinguished Staff Member in CEES, attended. Dr. Pannala presented a summary of the Open Architecture Software (OAS) developed for the Computer Aided Engineering for Batteries (CAEBAT) project and described related activities such as the development of a common input format based on XML for battery simulation tools and a common "battery state" file format to facilitate transfer of information between simulation tools.

ORNL Software Expo (May 7)
Jay Billings and Dasha Gorin

There are programmers, researchers, and engineers all over the ORNL campus developing software or modifying existing programs to better meet their own goals and/or help others reach theirs. Because we are so spread out, many scientists are unaware of others' projects, possibly missing out on important opportunities to collaborate or otherwise ease their burden. The Computer Science Research Group would like to provide a chance for everyone to come together and remedy this. We will be hosting a poster session in the JICS atrium on Wednesday, May 7th, from 9:00am-12:00pm. Presenters will be showcasing posters and/or live demos of their projects. Anyone working at ORNL, from interns to senior staff members, may register to present a (non-classified) project at www.csm.ornl.gov/expo; the deadline to register is April 16th. Non-presenting attendees do not need to register. Please join us; this is not only a great networking opportunity, but also a celebration of ORNL's diverse programming community! Visit the site [here]. 
Spring Durmstrang Review (March 25-26)

The spring review for the Durmstrang project managed by the ESSC was held on March 25-26 in Maryland. Durmstrang is a DoD/ORNL collaboration in extreme scale high performance computing. The long term goal of the project is to support the achievement of sustained exascale processing on applications and architectures of interest to both partners. Steve Poole, Chief Scientist of CSMD, presented the overview and general status update at the spring review. Benchmarks R&D discussion was facilitated by Josh Lothian, Matthew Baker (left in photo), Jonathan Schrock, and Sarah Powers of ORNL; Languages and Compilers R&D discussion was facilitated by Matthew Baker, Oscar Hernandez, Pavel Shamis (right in photo), and Manju Venkata of ORNL; I/O and File Systems R&D discussion was facilitated by Brad Settlemyer of ORNL; Networking R&D discussion was facilitated by Nagi Rao, Susan Hicks, and Paul Newman of ORNL; Power Aware Computing R&D discussion was facilitated by Chung-Hsing Hsu of ORNL; System Schedulers R&D discussion was facilitated by Greg Koenig, Tiffany Mintz, and Sarah Powers of ORNL. A special panel on Networking R&D was also convened to discuss best practices and the path forward. Panelists included both DoD and ORNL members. The topics of discussion during the executive session of the review included continued funding/growth of the program, task progression, and development of performance metrics for the project.

SOS 18 (March 17-20)

On March 17-20, 2014, Jeff Nichols, Al Geist, Buddy Bland, Barney Maccabe, Jack Wells, and John Turner attended the 18th Sandia-Oak Ridge-Switzerland workshop (SOS18) [1]. The SOS workshops are co-organized by James Ang at Sandia National Laboratory, John Turner at ORNL, and Thomas Schulthess at the Swiss National Computing Center. The theme this year was "Supercomputers as scientific instruments", and a number of presentations examined this analogy extensively. 
John Turner, Computational Engineering and Energy Sciences Group Leader, conveys the following impressions: (1) Python is ubiquitous, (2) Domain Specific Languages (DSLs) are no longer considered exotic, (3) proxy apps / mini-apps continue to gain popularity as a mechanism for domain developers to interact with computer scientists, and (4) there is increased willingness on the part of code teams to consider a full re-write of some codes, but funding for such activities remains unclear. Presentations can be obtained from the SOS18 web site [2].

2014 Gordon Research Conference on Batteries (March 9-14)
John Turner

On March 9-14, Computational Engineering and Energy Sciences Group Leader John Turner attended the 2014 Gordon Research Conference on Batteries [1] in Ventura, CA. Dr. Turner presented a poster titled "3D Predictive Simulation of Battery Systems" on behalf of the team working on battery simulation: Sreekanth Pannala, Srikanth Allu, Srdjan Simunovic, Sergiy Kalnaus, Wael Elwasif, and Jay Jay Billings. The work was funded through the Vehicle Technologies (VT) program office within the EERE [2] as part of the CAEBAT program [3]. This program, led by NREL and including industry and university partners, is developing computational tools for the design and analysis of batteries. CSMD staff are leading development of the shared computational infrastructure used across the program.

[1] http://www.grc.org/programs.aspx?year=2014&program=batteries
[2] DOE Office of Energy Efficiency and Renewable Energy (EERE)
[3] The Computer-Aided Engineering for Batteries (CAEBAT) program (http://www.nrel.gov/vehiclesandfuels/energystorage/caebat.html)

Advisory Council Review (March 6-7)

The third annual meeting of the Oak Ridge National Laboratory Computing and Computational Sciences Directorate (CCSD) advisory committee was convened March 6-7 to focus on two key areas of Directorate activities. 
The primary activities were to review CCSD's recent developments in computational and applied mathematics and CCSD's geospatial data science program and its impact on problems of national and global significance. In the course of the review, CSMD researchers presented five posters covering their work in computational and applied mathematics.

Developing U.S. Phenoregions from Remote Sensing
Jitendra Kumar, Forrest Hoffman, and William Hargrove

Variations in vegetation phenology can be a strong indicator of ecological change or disturbance. Phenology is also strongly influenced by seasonal, interannual, and long-term trends in climate, making identification of changes in forest ecosystems a challenge. Normalized difference vegetation index (NDVI), a remotely sensed measure of greenness, provides a proxy for phenology. NDVI for the conterminous United States (CONUS), derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m resolution, was used in this study to develop phenological signatures of ecological regimes called phenoregions. By applying an unsupervised, quantitative data mining technique to NDVI measurements for every eight days over the entire MODIS record, annual maps of phenoregions were developed. This technique produces a prescribed number of prototypical phenological states to which every location belongs in any year. Since the data mining technique is unsupervised, individual phenoregions are not identified with an ecologically understandable label. Therefore, we applied the method of MAPCURVES to associate individual phenoregions with maps of biomes, land cover, and expert-derived ecoregions. By applying spatial overlays with various maps, this "label-stealing" method exploits the knowledge contained in other maps to identify properties of our statistically derived phenoregions. 
3D Virtual Vehicle
Sreekanth Pannala and John Turner
Advances in transportation technologies, sensors, and onboard computers continue to push the efficiency of vehicles, resulting in an exponential increase in design parameter space. This expansion spans the entire vehicle and includes individual components such as combustion engines, electric motors, power electronics, energy storage, and waste heat recovery, as well as weight and aerodynamics. The parameter space has become so large that manufacturers do not have the computational resources or software tools to optimize vehicle design and calibration. This expanded flexibility in vehicle design and control, in addition to stringent CAFE standards, is driving a need for the development of new high-fidelity vehicle simulations, optimization methods, and self-learning control methods. The biggest opportunity for improvements in vehicle fuel economy is improved integration and optimization of vehicle subsystems. The current industry approach is to optimize individual subsystems using detailed computational tools and to optimize the vehicle system with a combination of low-order map-based simulations and physical prototype vehicles. Industry is very interested in reducing its dependence on prototype vehicles due to the significant investment of cost and time. With increasingly aggressive fuel economy standards, emissions regulations, and unprecedented growth in vehicle technologies, the current approach is simply not sufficient to meet these challenges. The increase in technologies has led to an exponential growth in parameter and calibration space. Advanced modeling and simulation through a virtual vehicle framework can facilitate accelerated development of vehicles through the rapid exploration and optimization of parameter space, while providing guidance for more focused experimental studies. Each of the component areas requires HPC resources, and an integrated system approach will likely approach exascale.
Networking and Communications Research and Development
Pavel Shamis, Brad Settlemyer, Nagi Rao, Thomas Naughton, and Manju Gorentla
Universal Common Communication Substrate (UCCS) is a communication middleware that aims to provide a high-performance, low-level communication substrate for implementing parallel programming models. UCCS delivers a broad range of communication semantics such as active messages, collective operations, puts, gets, and atomic operations. This enables implementation of one-sided and two-sided communication semantics to efficiently support both PGAS (OpenSHMEM, UPC, Co-Array Fortran, etc.) and MPI-style programming models. The interface is designed to minimize software overheads and provide direct access to network hardware capabilities without sacrificing productivity. This was accomplished by forming and adhering to the following goals:
• Provide a universal network abstraction with an API that addresses the needs of parallel programming languages and libraries.
• Provide a high-performance communication middleware by minimizing software overheads and taking full advantage of modern network technologies with communication-offloading capabilities.
• Enable network infrastructure for upcoming parallel programming models and network technologies.

Compute and Data Environment for Science (CADES)
Galen Shipman
The Compute and Data Environment for Science (CADES) provides R&D with a flexible and elastic compute and data infrastructure. The initial deployment consists of over 5 petabytes of high-performance storage, nearly half a petabyte of scalable NFS storage, and over 1000 compute cores integrated into a high-performance Ethernet and InfiniBand network. This infrastructure, based on OpenStack, provides a customizable compute and data environment for a variety of use cases including large-scale omics databases, data integration and analysis tools, data portals, and modeling/simulation frameworks.
These services can be composed to provide end-to-end solutions for specific science domains.

Co-designing Exascale
Scott Klasky and Jeffrey Vetter
Co-design refers to a computer system design process in which scientific problem requirements influence architecture design and technology, and constraints inform the formulation and design of algorithms and software. To ensure that future architectures are well-suited for DOE target applications and that major DOE scientific problems can take advantage of the emerging computer architectures, major ongoing research and development centers of computational science need to be formally engaged in the hardware, software, numerical methods, algorithms, and applications co-design process. Co-design methodology requires the combined expertise of vendors, hardware architects, system software developers, domain scientists, computer scientists, and applied mathematicians working together to make informed decisions about features and tradeoffs in the design of the hardware, software, and underlying algorithms. CSMD is a Co-PI organization on all three ASCR Co-design Centers: the Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), the Center for Exascale Simulation of Advanced Reactors (CESAR), and the Center for Exascale Simulation of Combustion in Turbulence (ExaCT). Read more about the ASCR Co-design centers [here].

OpenSHMEM Workshop (March 4-6)
Oscar Hernandez, Pavel Shamis, and Jennifer Goodpasture
The OpenSHMEM workshop for the Extreme Scale Systems Center (ESSC) was held in Annapolis, Maryland on March 4-6. The OpenSHMEM workshop is an annual event dedicated to the promotion and advancement of parallel programming with the OpenSHMEM programming interface and to helping shape its future direction.
It is the premier venue to discuss and present the latest developments, implementation technology, tools, trends, recent research ideas, and results related to OpenSHMEM and its use in applications. This year's workshop also emphasized the future direction of OpenSHMEM and related technologies, tools, and frameworks. It also focused on future extensions for OpenSHMEM and hybrid programming on platforms with accelerators. Topics of interest for the workshop included (but were not limited to):
• Experiences in OpenSHMEM applications in any domain
• Extensions to and shortcomings of the current OpenSHMEM specification
• Hybrid heterogeneous or many-core programming with OpenSHMEM and other languages or APIs (e.g. OpenCL, OpenACC, CUDA, OpenMP)
• Experiences in implementing OpenSHMEM on new architectures
• Low-level communication layers to support OpenSHMEM or other PGAS languages/APIs
• Performance evaluation of OpenSHMEM or OpenSHMEM-based applications
• Power/energy studies of OpenSHMEM
• Static analysis and verification tools for OpenSHMEM
• Modeling and performance analysis tools for OpenSHMEM and/or other PGAS languages/APIs
• Auto-tuning or optimization strategies for OpenSHMEM programs
• Runtime environments and schedulers for OpenSHMEM
• Benchmarks and validation suites for OpenSHMEM
The workshop had participation from the DoD, the DoE Office of Science, and other national labs such as Argonne National Lab and Sandia National Lab. Visit the site [here].

2014 Oil & Gas High-Performance Computing Workshop (March 6)
David Bernholdt, Scott Klasky, David Pugmire, Suzy Tichenor
On 6 March, Rice University in Houston hosted the seventh annual meeting on the computing and information technology challenges and needs of the oil and gas industry. CSMD researchers figured prominently on the day's program, which included 31 presentations and an additional 31 research posters, and attracted over 500 registered participants.
Computer Science Research Group Leader David Bernholdt gave a plenary talk titled "Some Assembly Required? Thoughts on Programming at Extreme Scale", which covered a broad range of issues in programming future systems, including programming models and languages, resilience, and the engineering of scientific software. Scientific Data Group Leader Scott Klasky gave a talk titled "Extreme Scale I/O using the Adaptable I/O System (ADIOS)" in the "Programming Models, Libraries, and Tools" track, which introduced ADIOS to the oil and gas industry audience by describing the problems of data-driven science and the approach taken by ADIOS to extreme-scale data processing. Finally, David Pugmire, a member of the Scientific Data Group, gave a talk titled "Visualization of Very Large Scientific Data" in the "Systems Infrastructure, Facilities, and Visualization" track, which addressed the challenges of current and future HPC systems for large-scale visualization and analysis. Suzy Tichenor, Director of Industrial Partnerships for the Computing and Computational Sciences Directorate, also participated in the workshop.

Prevent Three-Eyed Fish: Analyze Your Nuclear Reactor with Eclipse (March 19)
Jordan Deyton and Jay Jay Billings
Jordan Deyton co-presented with Jay Jay Billings at EclipseCon North America 2014 on Wednesday, March 19. The talk demonstrated the NEAMS Integrated Computational Environment (NiCE) and its post-simulation reactor analysis plugin called the Reactor Analyzer, which supports Light Water and Sodium-cooled Fast Reactor analysis.

Software Productivity for Extreme-Scale Science Workshop (January 13-14)
The ASCR Workshop on Software Productivity for Extreme-Scale Science (SWP4XS) was held 13-14 January 2014 in Rockville, MD.
The meeting was organized by researchers from ANL, LBNL, LANL, LLNL, ORNL (David Bernholdt, CSR/CSMD), and the Universities of Alabama and Southern California at the behest of the US Department of Energy Office of Advanced Scientific Computing Research, to bring together computational scientists from academia, industry, and national laboratories to identify the major challenges of large-scale application software productivity on extreme-scale computing platforms. The focus of the workshop was on assessing the needs of computational science software in the age of extreme-scale multicore and hybrid architectures, examining the scientific software lifecycle and infrastructure requirements for large-scale code development efforts, and exploring potential contributions and lessons learned that software engineering can bring to HPC software at scale. Participants were asked to identify short- and long-term challenges of scientific software that must be addressed in order to significantly improve the productivity of emerging HPC computing systems through effective scientific software development processes and methodologies. The workshop included more than 70 participants, including ORNL researchers Ross Bartlett (CEES/CSMD), Al Geist (CTO/CSMD), Judy Hill (SciComp/NCCS), and Jeff Vetter (FT/CSMD), in addition to organizer Bernholdt. Participants contributed 35 position papers in advance of the workshop, and the workshop itself included 19 presentations, a panel discussion, and three sets of breakout sessions, most of which are archived on the workshop's web site (http://www.orau.gov/swproductivity2014/). An outcome of the workshop will be a report that articulates and prioritizes productivity challenges and recommends both short- and long-term research directions for software productivity for extreme-scale science.

Society for Industrial and Applied Mathematics Annual Meeting
The SIAM Annual Meeting is the largest applied math conference held every year.
Guannan Zhang and Miroslav Stoyanov organized a mini-symposium on "Recent Advances in Numerical Methods for Partial Differential Equations with Random Inputs", a rapidly growing field that is of great importance to science. There were 12 invited speakers from both national labs and academia, including ORNL, Argonne National Lab, Florida State University, the University of California, the University of Pittsburgh, Virginia Tech, the University of Minnesota, and Auburn University. The mini-symposium was well attended by an even wider variety of researchers and gave participants an opportunity to discuss their current work as well as future development of the field.

Durmstrang-2 Review
The Fall review for the Durmstrang-2 project was held on September 10-11 in Maryland. Durmstrang-2 is a DoD/ORNL collaboration in extreme-scale high-performance computing. The long-term goal of the project is to support the achievement of sustained exascale processing on applications and architectures of interest to both partners. The Durmstrang-2 project is managed from the Extreme Scale Systems Center (ESSC) of CCSD. Steve Poole, Chief Scientist of CSMD, presented the overview and general status update at the Fall review. Benchmarks R&D discussion was facilitated by Josh Lothian, Matthew Baker, Jonathan Schrock, and Sarah Powers of ORNL; Languages and Compilers R&D discussion was facilitated by Matthew Baker, Oscar Hernandez, Pavel Shamis, and Manju Venkata of ORNL; I/O and FileSystems R&D discussion was facilitated by Brad Settlemyer of ORNL; Networking R&D discussion was facilitated by Nagi Rao, Susan Hicks, Paul Newman, Neena Imam, and Yehuda Braiman of ORNL; Power Aware Computing R&D discussion was facilitated by Chung-Hsing Hsu of ORNL; System Schedulers R&D discussion was facilitated by Greg Koenig and Sarah Powers of ORNL. A special panel on Lustre was also convened to discuss best practices and the path forward. Panelists included both DoD and ORNL members.
The topics of discussion during the executive session of the review included continued funding/growth of the program, task progression, and development of performance metrics for the project. Upcoming ESSC events include an OpenSHMEM Birds-of-a-Feather session at Supercomputing 2013, an OpenSHMEM booth at Supercomputing 2013, and an OpenSHMEM Workshop (date to be announced). Dr. Tiffany Mintz (Computer Science Research Group) recently joined the ESSC team as an ORNL R&D staff member.

CAEBAT Annual Review
The Computer-Aided Engineering for Batteries (CAEBAT) program [1] is funded through the Vehicle Technologies (VT) program office within the DOE Office of Energy Efficiency and Renewable Energy (EERE). This program, led by NREL and including industry and university partners, is developing computational tools for the design and analysis of batteries with improved performance and lower cost. CSMD staff in the Computational Engineering and Energy Science (CEES) and Computer Science (CS) groups are leading development of the shared computational infrastructure used across the program, known as the Open Architecture Software (OAS), as well as defining standards for input and battery "state" representations [2]. On Aug. 27, 2013, the ORNL team (Sreekanth Pannala, Srdjan Simunovic, Wael Elwasif, Sergiy Kalnaus, Jay Jay Billings, Taylor Patterson, and CEES Group Leader John Turner) hosted the CAEBAT Program Manager, Brian Cunningham, at ORNL. This visit served as an annual review for the ORNL CAEBAT effort and provided a venue for the team to demonstrate progress in simulation capabilities, including an initial demonstration of the use of the NEAMS Integrated Computational Environment (NiCE) with OAS [3].

First OpenSHMEM Workshop: Experiences, Implementations and Tools (October 23-25)
The OpenSHMEM workshop is an annual event dedicated to the promotion and advancement of parallel programming with the OpenSHMEM programming interface and to helping shape its future direction.
It is the premier venue to discuss and present the latest developments, implementation technology, tools, trends, recent research ideas, and results related to OpenSHMEM and its use in applications. This year's workshop will also emphasize the future direction of OpenSHMEM and related technologies, tools, and frameworks. We will also focus on future extensions for OpenSHMEM and hybrid programming on platforms with accelerators. Although this is an OpenSHMEM-specific workshop, we welcome ideas used for other PGAS languages/APIs that may be applicable to OpenSHMEM. Topics of interest for the workshop include (but are not limited to):
• Experiences in OpenSHMEM applications in any domain
• Extensions to and shortcomings of the current OpenSHMEM specification
• Hybrid heterogeneous or many-core programming with OpenSHMEM and other languages or APIs (e.g. OpenCL, OpenACC, CUDA, OpenMP)
• Experiences in implementing OpenSHMEM on new architectures
• Low-level communication layers to support OpenSHMEM or other PGAS languages/APIs
• Performance evaluation of OpenSHMEM or OpenSHMEM-based applications
• Power/energy studies of OpenSHMEM
• Static analysis and verification tools for OpenSHMEM
• Modeling and performance analysis tools for OpenSHMEM and/or other PGAS languages/APIs
• Auto-tuning or optimization strategies for OpenSHMEM programs
• Runtime environments and schedulers for OpenSHMEM
• Benchmarks and validation suites for OpenSHMEM
http://www.csm.ornl.gov/workshops/openshmem2013/

Fourth Workshop on Data Mining in Earth System Science (June 5-7)
CSMD researcher Forrest Hoffman organized the Fourth Workshop on Data Mining in Earth System Science (DMESS 2013; http://www.climatemodeling.org/workshops/dmess2013/) with co-conveners Jitendra Kumar (ORNL), J. Walter Larson (Australian National University, AUSTRALIA), and Miguel D. Mahecha (Max Planck Institute for Biogeochemistry, GERMANY).
This workshop was held in conjunction with the 2013 International Conference on Computational Sciences (ICCS 2013; http://www.iccs-meeting.org/iccs2013/) in Barcelona, Spain, on June 5-7, 2013, and was chaired by J. Walter Larson. Richard T. Mills and Brian Smith of ORNL both presented papers in the DMESS 2013 session. These papers were published in volume 18 of Procedia Computer Science and are available at http://dx.doi.org/10.1016/j.procs.2013.05.411 and http://dx.doi.org/10.1016/j.procs.2013.05.408

Special Symposium on Phenology (April 14-18)
CSMD researcher Forrest Hoffman co-organized a Special Symposium on Phenology with Bill Hargrove and Steve Norman (USDA Forest Service) and Joe Spruce (NASA Stennis Space Center) at the 2013 U.S.-International Association for Landscape Ecology Annual Symposium (US-IALE 2013; http://www.usiale.org/austin2013/), which was held April 14-18, 2013, in Austin, Texas. Hoffman also gave an oral presentation in this symposium. Titled "Developing Phenoregion Maps Using Remotely Sensed Imagery", this presentation described the application of a data mining algorithm to the entire record of MODIS satellite NDVI for the conterminous U.S. at 250 m resolution to delineate annual maps of phenological regions. In addition, Hoffman was a co-author on four other oral presentations at the US-IALE Symposium, including one by Jitendra Kumar (ORNL) that described an imputation technique for estimating tree suitability from sparse measurements.

SOS 17 Conference (March 25-28)
Successful workshop on Big Data and High Performance Computing hosted by ORNL in Jekyll Island, Georgia. SOS is an invitation-only 2 1/2 day meeting held each year by Sandia National Laboratories, Oak Ridge National Laboratory, and the Swiss Technical Institute. This year it was hosted by ORNL in Jekyll Island, Georgia on March 25-28, 2013. The theme this year was "The intersection of High Performance Computing and Big Data."
There were 40 speakers and panelists from around the world representing views from industry, academia, and national laboratories. The first day focused on the gaps between big computing and big data and the challenges of turning science data into knowledge. On the second day the talks and panels focused on where HPC and big data intersect and the state of big-data analysis software. The morning of the third day focused on the politics of big data, including the issues of data ownership. Findings of the meeting include the fact that large experimental facilities such as CERN's Large Hadron Collider and the new telescopes coming online already generate prodigious amounts of scientific data. The volume and speed at which data are generated require that the data be analyzed on the fly and that only a tiny fraction be kept. The amount kept still amounts to many petabytes. The attendees stressed how important provenance is to the use of the archived data by other researchers around the world. The majority of today's scientific data is only of value to the original researcher, because the data lacks the meta-data required for others to use it. The talks and panels clearly showed the intersection of high performance computing and big data. They also showed that the converse is not necessarily true, i.e., big data (as defined by Google and Amazon) does not require high performance computing. These vendors and their customers are able to get their work done on large, distributed networks of independent PCs. The meeting was filled with lively discussion and provocative questions. For those wanting to know more, the agenda and talks are posted on the SOS17 website: http://www.csm.ornl.gov/workshops/SOS17/

SIAM SEAS 2013 Annual Meeting (March 22-24)
On March 22-24, Oak Ridge National Laboratory and the University of Tennessee hosted the 37th annual meeting of the SIAM Southeastern Atlantic Section.
The meeting included approximately 160 registered participants, of which roughly 60 were students and 20 were from ORNL. There were 4 plenary talks, 24 mini-symposium sessions, seven contributed sessions, and a poster session. Awards were given to students for Best Paper and Best Poster presentations. Attendees were also given guided tours of the Graphite Reactor, the Spallation Neutron Source, and the National Center for Computational Science. The meeting was organized by Chris Baker (ORNL), Cory Hauck (ORNL), Jillian Trask (UT), Lora Wolfe (ORNL), and Yulong Xing (ORNL/UT).

Durmstrang-2 (March 18-19)
The semi-annual review for the Durmstrang-2 project was held on March 18-19 in Maryland. Durmstrang-2 is a DoD/ORNL collaboration in extreme-scale high-performance computing. The long-term goal of the project is to support the achievement of sustained exascale processing on applications and architectures of interest to both partners. The Durmstrang-2 project is managed from the Extreme Scale Systems Center (ESSC) of CCSD. Steve Poole, Chief Scientist of CSMD, presented the overview and general status update at the March review. Benchmarks R&D discussion was facilitated by Josh Lothian, Matthew Baker, Jonathan Schrock, and Sarah Powers of ORNL; Languages and Compilers R&D discussion was facilitated by Matthew Baker, Oscar Hernandez, Pavel Shamis, and Manju Venkata of ORNL; I/O and FileSystems R&D discussion was facilitated by Brad Settlemyer of ORNL; Networking R&D discussion was facilitated by Nagi Rao, Susan Hicks, Paul Newman, and Steve Poole of ORNL; Power Aware Computing R&D discussion was facilitated by Chung-Hsing Hsu of ORNL; System Schedulers R&D discussion was facilitated by Greg Koenig and Sarah Powers of ORNL. The topics of discussion during the executive session of the review included continued funding/growth of the program and development of performance metrics for the project.
APS 2013 March Meeting (March 18-22)
The American Physical Society (APS) March Meeting is the largest physics meeting in the world, focusing on research from industry, universities, and major labs. Participation in this year's meeting, held in Baltimore, MD (March 18-22, 2013), by staff members of the Computational Chemical and Materials Sciences (CCMS) Group included 24 different talks (bold names are from CCMS). Monojoy Goswami, Bobby G. Sumpter, "Morphology and Dynamics of Ion Containing Polymers using Coarse Grain Molecular Dynamics Simulation", Talk in Session T32: Charged and Ion Containing Polymers (March 21, 2013) APS National Meeting, Baltimore. Debapriya Banerjee, Kenneth S. Schweizer, Bobby G. Sumpter, Mark D. Dadmun, "Dispersion of small nanoparticles in random copolymer melts", Talk in Session F32: Polymer Nanocomposites II (March 19, 2013) APS National Meeting, Baltimore. Rajeev Kumar, Bobby G. Sumpter, S. Michael Kilbey II, "Charge regulation and local dielectric function in planar polyelectrolyte brushes", Talk in Session U32: Charged Polymers and Ionic Liquids (March 21, 2013) APS National Meeting, Baltimore. Alamgir Karim, David Bucknall, Dharmaraj Raghavan, Bobby Sumpter, Scott Sides, "In-situ Neutron Scattering Determination of 3D Phase-Morphology Correlations in Fullerene-Polymer Organic Photovoltaic Thin Films", Talk in Session Y33: Organic Electronics and Photonics-Morphology and Structure I (March 22, 2013) APS National Meeting, Baltimore. Geoffrey Rojas, P. Ganesh, Simon Kelly, Bobby G. Sumpter, John Schlueter, Petro Maksymovych, "Molecule/Surface Interactions and the Control of Electronic Structure In Epitaxial Charge Transfer Salts", Talk in Session U35: Search for New Superconductors III (March 21, 2013) APS National Meeting, Baltimore. Geoffrey A. Rojas, P. Ganesh, Simon Kelly, Bobby G. Sumpter, John A.
Schlueter, Petro Maksymovych, "Density Functional Theory studies of Epitaxial Charge Transfer Salts", Talk in Session N35: Search for New Superconductors III (March 20, 2013) APS National Meeting, Baltimore. Arthur P. Baddorf, Qing Li, Chengbo Han, J. Bernholc, Humberto Terrones, Bobby G. Sumpter, Miguel Fuentes-Cabrera, Jieyu Yi, Zheng Gai, Peter Maksymovych, Minghu Pan, "Electron Injection to Control Self-Assembly and Disassembly of Phenylacetylene on Gold", Talk in Session C33: Organic Electronics and Photonics - Interfaces and Contacts (March 18, 2013) APS National Meeting, Baltimore. Mina Yoon, Kai Xiao, Kendal W. Clark, An-Ping Li, David Geohegan, Bobby G. Sumpter, Sean Smith, "Understanding the growth of nanoscale organic semiconductors: the role of substrates", Talk in Session Z33: Organic Electronics and Photonics - Morphology and Structure II (March 22, 2013) APS National Meeting, Baltimore. Chengbo Han, Wenchang Lu, Jerry Bernholc, Miguel Fuentes-Cabrera, Humberto Terrones, Bobby G. Sumpter, Jieyu Yi, Zheng Gai, Arthur P. Baddorf, Qing Li, Peter Maksymovych, Minghu Pan, "Computational Study of Phenylacetylene Self-Assembly on Au(111) Surface", Talk in Session C33: Organic Electronics and Photonics - Interfaces and Contacts (March 18, 2013) APS National Meeting, Baltimore. Jaron Krogel, Jeongnim Kim, David Ceperley, "Prospects for efficient QMC defect calculations: the energy density applied to Ge self-interstitials", Talk in Session J24: Quantum Many-Body Systems and Methods I (March 19, 2013) APS National Meeting, Baltimore. Kendal Clark, Xiaoguang Zhang, Ivan Vlassiouk, Guowei He, Gong Gu, Randall Feenstra, An-Ping Li, "Mapping the Electron Transport of Graphene Boundaries Using Scanning Tunneling Potentiometry", Talk in Session G6: CVD Graphene - Doping and Defects (March 19, 2013) APS National Meeting, Baltimore. Gregory Brown, Donald M. Nicholson, Markus Eisenbach, Kh.
Odbadrakh, "Wang-Landau or Statistical Mechanics", Talk in Session G6: Equilibrium Statistical Mechanics, Followed by GSNP Student Speaker Award (March 18, 2013) APS National Meeting, Baltimore. Don Nicholson, Kh. Odbadrakh, German Samolyuk, G. Malcolm Stocks, "Calculated magnetic structure of mobile defects in Fe", Session Y16: Magnetic Theory II (March 22, 2013) APS National Meeting, Baltimore. Khorgolkhuu Odbadrakh, Don Nicholson, Aurelian Rusanu, German Samolyuk, Yang Wang, Roger Stoller, Xiaoguang Zhang, George Stocks, "Coarse graining approach to First principles modeling of structural materials", Session A43: Multiscale modeling--Coarse-graining in Space and Time I (March 18, 2013) APS National Meeting, Baltimore. M. G. Reuter & P. D. Williams, "The Information Content of Conductance Histogram Peaks: Transport Mechanisms, Level Alignments, and Coupling Strengths", Talk in Session R43: Electron Transfer, Charge Transfer and Transport Session (March 20, 2013) APS National Meeting, Baltimore. Paul R. C. Kent, Panchapakesan Ganesh, Jeongnim Kim, Mina Yoon, Fernando Reboredo, "Binding and Diffusion of Li in Graphite: Quantum Monte Carlo Benchmarks and validation of Van der Waals DFT", Talk in Session A5: Van der Waals Bonding in Advanced Materials – Materials Behavior (March 18, 2013) APS National Meeting, Baltimore. Peter Staar, Thomas Maier, Thomas Schulthess, "DCA+: Incorporating self-consistently a continuous momentum self-energy in the Dynamical Cluster Approximation", Talk in Session N24, APS National Meeting, Baltimore. Thomas Maier, Peter Hirschfeld, Douglas Scalapino, Yan Wang, Andreas Kreisel, "Pairing strength and gap functions in multiband superconductors: 3D effects", Talk in Session G37: Electronic Structure Methods II (March 20, 2013) APS National Meeting, Baltimore.
Thomas Maier, Yan Wang, Andreas Kreisel, Peter Hirschfeld, Douglas Scalapino, "Spin fluctuation theory of pairing in AFe2As2", Talk in Session G37: Electronic Structure Methods II (March 20, 2013), APS National Meeting, Baltimore. Peter Hirschfeld, Andreas Kreisel, Yan Wang, Milan Tomic, Harald Jeschke, Anthony Jacko, Roser Valenti, Thomas Maier, Douglas Scalapino, "Pressure dependence of critical temperature of bulk FeSe from spin fluctuation theory", Talk in Session G37: Electronic Structure Methods II (March 20, 2013), APS National Meeting, Baltimore. Markus Eisenbach, Junqi Yin, Don M. Nicholson, Ying Wai Li, "First principles calculation of finite temperature magnetism in Ni", Talk in Session C17: Magnetic Theory I (March 18, 2013), APS National Meeting, Baltimore. Madhusudan Ojha, Don M. Nicholson, Takeshi Egami, "Ab-initio atomic level stresses in Cu-Zr crystal, liquid and glass phases", Talk in Session G42: Focus Session: Physics of Glasses and Viscous Liquids I (March 19, 2013), APS National Meeting, Baltimore. Junqi Yin, Markus Eisenbach, Don Nicholson, "Spin-lattice coupling in BCC iron", Talk in Session T39: Metals Alloys and Metallic Structures (March 21, 2013), APS National Meeting, Baltimore. German Samolyuk, Yuri Osetsky, Roger Stoller, Don Nicholson, George Malcolm Stocks, "The modification of core structure and Peierls barrier of 1/2<111> screw dislocation in bcc Fe in presence of Cr solute atoms", Talk in Session T39: Metals Alloys and Metallic Structures (March 21, 2013), APS National Meeting, Baltimore.

SIAM-CSE13 (February 25 - March 1)
The CSMD had a strong showing at SIAM-CSE13, with over 25 presentations from division staff members. This conference is a leading conference in computer science and mathematics, drawing thousands of researchers from across the globe and supported jointly by NSF and DOE.
Division scientists organized eight different mini-symposia with close to a hundred invited speakers in the areas of modern libraries (Christopher Baker), climate (Kate Evans), nuclear simulations (Bobby Philip), kinetic theory (Cory Hauck), hybrid architecture linear algebra (Ed D'Azevedo), UQ and stochastic inverse problems (Clayton Webster), and structural graph theory, sparse linear algebra, and graphical models (Blair Sullivan).

Seminars
July 1, 2015 - Greg Watson: Software Engineering for Science: Beyond the Eclipse Parallel Tools Platform
ABSTRACT: The Eclipse Parallel Tools Platform (PTP) project was started over 10 years ago with the goal of bringing best practices in software engineering to scientific computing. The results of the project have been mixed; we have seen adoption of Eclipse in many labs and academic institutions, and the PTP development environment has been downloaded over 1M times since records started being kept in 2012. However, we are still not seeing general use across the scientific computing community, and many negative perceptions of Eclipse still persist. In spite of the fact that a number of groups have their own Eclipse-based tools, we also haven't seen the high level of integration that was one of the original objectives of the project. Although software engineering practices have improved to some degree, there is still much room for improvement, particularly as the next generation of highly complex computing systems becomes available. This talk will discuss some key observations on the uptake of advanced development environments by the scientific computing community, and consider the factors that have influenced the adoption of PTP in particular. The presentation will then examine some areas that we believe would be beneficial for improving software engineering practices, as well as looking at some exciting possibilities for future research.

June 30, 2015 - Torsten Hoefler: How fast will your application run at <next>-scale?
Static and dynamic techniques for application performance modeling ABSTRACT: Many parallel applications suffer from latent performance limitations that may prevent them from utilizing resources efficiently when scaling to larger parallelism. Often, such scalability bugs manifest themselves only when an attempt to scale the code is actually being made, a point where remediation can be difficult. However, creating analytical performance models that would allow such issues to be pinpointed earlier is so laborious that application developers attempt it at most for a few selected kernels, running the risk of missing harmful bottlenecks. We discuss dynamic techniques to generate performance models of program scalability that identify scaling bugs early and automatically. This automation enables a new set of parallel software development techniques. We demonstrate the practicality of this method with various real-world applications but also point out limitations of the dynamic approach. We then discuss a static analysis that establishes close provable bounds on the number of loop iterations and the scalability of parallel programs. While this analysis captures more loops than existing techniques based on the polyhedral model, no analysis can count all loops statically. We conclude by briefly discussing how to combine these two approaches into an integrated framework for scalability and performance analysis. BIOGRAPHY: Torsten is an Assistant Professor of Computer Science at ETH Zürich, Switzerland. Before joining ETH, he led the performance modeling and simulation efforts of parallel petascale applications for the NSF-funded Blue Waters project at NCSA/UIUC. He is also a key member of the Message Passing Interface (MPI) Forum, where he chairs the "Collective Operations and Topologies" working group. Torsten won best paper awards at the ACM/IEEE Supercomputing Conference SC10, SC13, SC14, EuroMPI 2013, IPDPS 2015, and other conferences.
He has published numerous peer-reviewed scientific conference and journal articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. His research interests revolve around the central topic of "Performance-centric Software Development" and include scalable networks, parallel programming techniques, and performance modeling. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch. June 26, 2015 - Edwin Garcia: Progress Towards a Microstructurally Resolved Porous Electrode Theory for Rechargeable Batteries ABSTRACT: In high energy density, low porosity, lithium-ion battery electrodes, the underlying microstructural characteristics control the macroscopic charge capacity, average lithium-ion transport, and macroscopic resistivity of the cell, particularly at high electronic current densities and power densities. In this presentation, we report on progress towards the development of a combined numerical and analytical framework to describe the effect of particle morphologies and their processing-induced spatial distribution on the macroscopic and position-dependent performance. Here, by spatially resolving the electrochemical fields, the effect of particle size polydispersity on the galvanostatic behavior is analyzed. We detail such effects in structures of controlled electrode compaction and polydispersity on the macroscopic effective transport properties and discuss their impact on the macroscopic galvanostatic response for existing and emerging energy storage devices. The framework presented herein makes it possible to establish relations that combine the tortuosity and reactivity constitutive properties of the individual components. Macroscopic tortuosity-porosity relations for mixtures of porous particle systems of widely different length scales and well-known individual tortuosity constitutive equations are combined into self-consistent macroscopic expressions, in agreement with recently reported empirical measures.
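The tortuosity-porosity relations mentioned in Garcia's abstract can be illustrated with the classical Bruggeman correlation, a widely used empirical relation in porous electrode theory. This is a generic sketch, not the talk's own framework; the function name and numerical values are illustrative:

```python
# Hedged illustration (not from the talk): the Bruggeman correlation
# relates effective transport in a porous electrode to porosity via
# tortuosity: tau = eps**(1 - alpha), so D_eff = D0 * eps / tau,
# which for alpha = 1.5 reduces to D_eff = D0 * eps**1.5.

def bruggeman_effective_diffusivity(d0, porosity, alpha=1.5):
    """Effective diffusivity of a porous medium (Bruggeman relation)."""
    tortuosity = porosity ** (1.0 - alpha)   # tau = eps^(1 - alpha)
    return d0 * porosity / tortuosity        # D_eff = D0 * eps^alpha

if __name__ == "__main__":
    d0 = 1.0e-10  # bulk diffusivity in m^2/s (illustrative value)
    for eps in (0.2, 0.3, 0.4):
        print(eps, bruggeman_effective_diffusivity(d0, eps))
```

The steep eps**1.5 dependence is one reason low-porosity, high-energy-density electrodes pay a transport penalty, which is the regime the abstract targets.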
June 22, 2015 - Mohamed Wahib: Scalable and Automated GPU Kernel Transformations in Production Stencil Applications ABSTRACT: We present a scalable method for exposing and exploiting hidden localities in production GPU stencil applications. Exploiting inter-kernel localities essentially amounts to finding the best permutation of kernel fusions that minimizes redundant memory accesses. To achieve this, we first expose the hidden localities by analyzing inter-kernel data dependencies and order of execution. Next, we use a scalable search heuristic that relies on a lightweight performance model to identify the best candidate kernel fusions. Experiments with two real-world applications demonstrate the effectiveness of manual kernel fusion. To make kernel fusion a practical choice, we further introduce an end-to-end method for automated transformation. A CUDA-to-CUDA transformation collectively replaces the user-written kernels with auto-generated kernels optimized for data reuse. Moreover, the automated method allows us to improve the search process by enabling kernel fission and thread block tuning. We demonstrate the practicality and effectiveness of the proposed end-to-end automated method. With minimum intervention from the user, we improved the performance of six applications with speedups ranging from 1.12x to 1.76x. BIOGRAPHY: Mohamed Wahib is currently a postdoctoral researcher in the "HPC Programming Framework Research Team" at the RIKEN Advanced Institute for Computational Science (RIKEN AICS). He joined RIKEN AICS in 2012 after receiving a Ph.D. in Computer Science from Hokkaido University, Japan. Prior to his graduate studies, he worked as a researcher at Texas Instruments (TI) R&D for four years.
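The core intuition behind the kernel fusion described in Wahib's abstract, eliminating the memory round trip of an intermediate array between back-to-back kernels, can be sketched in plain Python. This is a toy stand-in for the CUDA kernels in the work; all function names here are hypothetical:

```python
# Toy sketch of kernel fusion (illustrative only, not the paper's CUDA
# code). Unfused: two passes over the data with a materialized
# intermediate. Fused: one pass, eliminating the redundant reads and
# writes of `tmp`.

def scale_kernel(x, a):
    return [a * v for v in x]       # "kernel" 1: writes an intermediate

def shift_kernel(x, b):
    return [v + b for v in x]       # "kernel" 2: rereads the intermediate

def unfused(x, a, b):
    tmp = scale_kernel(x, a)        # extra round trip through "memory"
    return shift_kernel(tmp, b)

def fused(x, a, b):
    return [a * v + b for v in x]   # single pass, same result
```

In real stencil codes the saving comes from keeping values in registers or cache across the fused operations instead of writing the intermediate array to device memory; the search problem the abstract describes is choosing which kernels to fuse this way.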
June 12, 2015 - Saurabh Hukerikar: Introspective Resilience for Exascale High Performance Computing Systems ABSTRACT: Future exascale High Performance Computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. Furthermore, the Mean Time to Failure (MTTF) of a system scales inversely with the number of components it contains, and therefore faults and the resultant system-level failures will increase as systems scale in terms of the number of processor cores and memory modules used. This will pose significant problems for system designers and programmers, who for half a century have enjoyed an execution model that assumed correct behavior by the underlying computing system. However, not every detected error needs to result in catastrophic failure. Many HPC applications are inherently fault resilient but lack convenient mechanisms to express their resilience features to execution environments that are designed to be fault oblivious. Dr. Hukerikar will present research conducted as part of his PhD dissertation, which proposes an execution model based on the notion of introspection. A set of resilience-oriented language extensions was developed, which facilitate the incorporation of fault resilience as an intrinsic property of scientific application codes. These are supported by a compiler infrastructure and a runtime system that reasons about the context and significance of faults to the outcome of the application execution. The compiler infrastructure was extended to demonstrate an application-level methodology for fault detection and correction that is based on redundant multithreading (RMT). An introspective runtime framework was also developed that continuously observes and reflects upon platform-level fault indicators to assess the vulnerability of the system's resources.
The introspective runtime system provides a unified execution environment that reasons about the implications of resource management actions for the resilience and performance of the application processes. Results, which cover several high performance computing applications and different fault types and distributions, demonstrate that a resilience-aware execution environment is important for solving the most demanding computational challenges on future extreme-scale HPC systems. *Saurabh Hukerikar is a candidate for a postdoctoral position with the Computer Science Research Group. He recently completed his PhD in the Ming Hsieh Department of Electrical Engineering at the University of Southern California. He works with the Computational Systems and Technology Division at USC's Information Sciences Institute. His graduate work seeks to address the challenge of resilience for extreme-scale high-performance computing (HPC) systems. He received an MS in Electrical Engineering in 2010 and an MS in Computer Science (with emphasis on High Performance Computing and Simulations) in 2012, both from the University of Southern California. June 5, 2015 - Vivek Sarkar: Runtime System Challenges for Extreme Scale Systems ABSTRACT: It is widely recognized that radical changes are to be expected in future HPC systems to address the challenges of extreme-scale computing. Specifically, they will be built using homogeneous and heterogeneous many-core processors with 100's to 1000's of cores per chip, their performance will be driven by parallelism (billion-way parallelism for an exascale system), and constrained by energy and data movement. They will also be subject to frequent faults and failures. Unlike previous generations of hardware evolution, these Extreme Scale HPC systems will have a profound impact on future applications and their underlying software stack.
The software challenges are further compounded by the addition of new application requirements that include, most notably, data-intensive computing and analytics. The challenges across the entire software stack for Extreme Scale systems are driven by programmability, portability and performance requirements, and impose new requirements on programming models, languages, compilers, runtime systems, and system software. The focus of this talk is on the critical role played by runtime systems in enabling programmability in the upper layers of the software stack that interface with the programmer, and in enabling performance in the lower levels of the software stack that interface with the operating system and hardware. Examples of key runtime primitives being developed to address these challenges will be drawn from experiences in the Habanero Extreme Scale Software Research project, which targets a wide range of homogeneous and heterogeneous manycore processors, as well as from the Open Community Runtime (OCR) system being developed in the DOE X-Stack program. Background material for this talk will also be drawn from the DARPA Exascale Software Study report and from the DOE ASCAC study on Synergistic Challenges in Data-Intensive Science and Exascale Computing. We would like to acknowledge the contributions of all participants in the Habanero project, the OCR project, and the DARPA and DOE studies. BIOGRAPHY: Vivek Sarkar is Professor and Chair of Computer Science at Rice University. He conducts research in multiple aspects of parallel software including programming languages, program analysis, compiler optimizations and runtimes for parallel and high performance computer systems. He currently leads the Habanero Extreme Scale Software Research Laboratory at Rice University, and serves as Associate Director of the NSF Expeditions Center for Domain-Specific Computing. Prior to joining Rice in July 2007, Vivek was Senior Manager of Programming Technologies at IBM Research.
His responsibilities at IBM included leading IBM's research efforts in programming model, tools, and productivity in the PERCS project during 2002-2007 as part of the DARPA High Productivity Computing System program. His prior research projects include the X10 programming language, the Jikes Research Virtual Machine for the Java language, the ASTI optimizer used in IBM's XL Fortran product compilers, the PTRAN automatic parallelization system, and profile-directed partitioning and scheduling of Sisal programs. In 1997, he was on sabbatical as a visiting associate professor at MIT, where he was a founding member of the MIT Raw multicore project. Vivek became a member of the IBM Academy of Technology in 1995, received the E.D. Butcher Chair in Engineering at Rice University in 2007, and was inducted as an ACM Fellow in 2008. He holds a B.Tech. degree from the Indian Institute of Technology, Kanpur, an M.S. degree from the University of Wisconsin-Madison, and a Ph.D. from Stanford University. Vivek has been serving as a member of the US Department of Energy's Advanced Scientific Computing Advisory Committee (ASCAC) since 2009. May 27, 2015 - Jeffrey K. Hollingsworth: Active Harmony: Making Autotuning Easy ABSTRACT: Active Harmony is an auto-tuning framework for parallel programs. In this talk, I will describe how the system makes it easy (sometimes even automatic) to create programs that can be auto-tuned. I will present examples from a few applications and programming languages. I will also discuss recent work we have been doing to provide support for auto-tuning programs with multiple (potentially conflicting) objectives such as performance and power. BIOGRAPHY: Jeffrey K. Hollingsworth is a Professor in the Computer Science Department at the University of Maryland, College Park. He also has an appointment in the University of Maryland Institute for Advanced Computer Studies and the Electrical and Computer Engineering Department.
He received his PhD and MS degrees in computer sciences from the University of Wisconsin. His research is in the area of performance measurement, auto-tuning, and binary instrumentation. He is Editor-in-Chief of the journal Parallel Computing, was general chair of the SC12 conference, and is Vice Chair of ACM SIGHPC. May 19, 2015 - Mikolai Fajer: Effects of the SH2/SH3 Regulatory Domains on the Activation Transition of c-Src Kinases ABSTRACT: The c-Src kinase is an important component in cellular signalling, and its activity is closely regulated by the SH2/SH3 domains. Using the swarms-of-trajectories string method, the transition from inactive to active conformations of the kinase domain is studied in the presence of the SH2/SH3 domains. The assembled, down-regulated SH2/SH3 conformation closely resembles the activation transition of the kinase-only domain. The re-assembled and up-regulated SH2/SH3 conformation pre-orients several side chains for their active-state interactions, thus promoting the active state of the kinase. BIOGRAPHY: Mikolai Fajer received his bachelor's degree in Physics and Chemistry from the University of Florida. He then went on to earn his PhD under Andy McCammon at the University of California, San Diego, working on enhanced sampling methods. Most recently he has been working as a postdoc for Benoit Roux at the University of Chicago, studying conformational transitions in biomolecular systems. May 14, 2015 - Brent Gorda: Lustre Keeping Pace with Compute and Intel's Continued Commitment ABSTRACT: Brent will discuss the topic of "Lustre Keeping Pace with Compute and Intel's Continued Commitment." What is Intel's role in making sure data can safely move in and out of high performance computing at extreme scale and at the speed of your network interface? Why do both scientific simulation environments and, increasingly, big data applications need advanced parallel file systems such as Intel's hardened Lustre?
How are partners now driving Lustre innovation, alongside the Lustre community? What improvements are coming in Lustre for small file performance, HSM, fault tolerance, snapshots and security? To get to exascale computing, what needs to change in I/O? BIOGRAPHY: Brent Gorda is the General Manager of the High Performance Data Division at Intel. Brent co-founded and led Whamcloud, a startup focused on the Lustre technology that was subsequently acquired by Intel. A longtime member of the HPC community, Brent was at the Lawrence Livermore National Laboratory, where he was responsible for the BlueGene P/Q architectures as well as many of the large IB-based cluster architectures in use among the NNSA DOE laboratories. Brent is the founder of the Student Cluster Competition, a worldwide event that showcases the power of parallel/cluster computing in the hands of students. April 30, 2015 - Dimitri Mavriplis: High Performance Computational Aerodynamics for Multidisciplinary Wind Energy and Aerospace Vehicle Analysis and Optimization ABSTRACT: This talk will describe the development of a multi-solver, overlapping adaptive mesh CFD capability that scales well on current high performance computing hardware, with applications in aerospace vehicle analysis and design and complete wind farm simulations. The multi-solver paradigm makes use of a near-body unstructured mesh solver coupled with an adaptive Cartesian higher-order accurate off-body solver implemented within the SAMRAI framework. An overview of the multi-solver software structure will be given, after which a description of the solution techniques used for the unstructured mesh multigrid solver component will be presented in more detail. Subsequently, the incorporation of a discrete adjoint capability will be described for multidisciplinary time-dependent aero-structural problems, and results demonstrating the optimization of time-dependent helicopter rotors will be shown.
The talk will conclude with prospects for advanced discretizations and solvers as we move towards the exascale era. BIOGRAPHICAL INFORMATION: Dimitri Mavriplis is currently the Max Castagne Professor in Mechanical Engineering at the University of Wyoming. He obtained his Bachelor's and Master's degrees in Mechanical Engineering from McGill University and his PhD in Mechanical and Aerospace Engineering from Princeton University. After graduation, he spent over 15 years at ICASE/NASA Langley, where he worked on the development of unstructured mesh discretizations and solvers. In 2003 he joined the University of Wyoming, where he leads a research group that focuses on HPC solver technology, adjoint methods for optimization and error control, and high-order discretizations with applications in multidisciplinary wind energy and aerospace vehicle analysis and design optimization. April 16, 2015 - David Lecomber: Software Engineering for HPC - Experiences in Developing Software Tools for Rapidly Moving Targets ABSTRACT: Code modernization is one of the hotter topics in HPC today - but modernization is about more than modern processors. I will consider how the modernization of software practices is making an impact in HPC - and some of the best practices we see out in the field amongst HPC developers. I will examine the challenges of software engineering to production and beyond from the perspective of engineering at Allinea, how we develop and test in a world of constant change, and the lessons learned along the way. April 8, 2015 - Kirk W. Cameron: Why high-performance systems need a little bit of LUC ABSTRACT: In 1936, Harvard University sociologist Robert K. Merton wrote a paper entitled "The unanticipated consequences of purposive social action", in which he described how government policies often result in both positive and negative unintended consequences.
The lesson from Merton's work was that unexpected consequences in complex social systems, at the time relegated to theology or chance, should be evaluated scientifically. Independent groups typically design the components of HPC systems. Hard disks, processors, memories, and boards are eventually combined with BIOSes, file systems, operating systems, communication libraries, and applications. Today's components also adapt automatically to local conditions to improve efficiency. For example, processors and memories can vary their frequencies in response to demand. Disks can vary their rotation speeds. BIOSes and OSes can adapt their scheduling policies for different use cases. Since the performance effects of local hardware and software management are largely unknown, these potentially valuable features are often disabled in high-performance environments. And unfortunately, while we assume that disabling these features will have positive consequences, Merton teaches us that relegating performance behavior to chance is just as likely to result in negative consequences. For example, there is mounting evidence that when processors are fixed at the highest frequency (i.e., dynamic frequency scaling is disabled), performance can worsen. In this presentation, I will revisit the conventional wisdom that "faster is always better" for processor speeds in high-performance environments. In essence, through exhaustive experimentation, we can demonstrate quantitatively that slowing down CPU frequency can speed up performance by as much as 50% for some I/O-intensive applications. For the first time, we have identified the root cause of slowdowns at higher frequencies. I will describe how the LUC runtime system Limits the Unintended Consequences of processor speed in high-performance I/O applications. Our work also motivates the need to reject chance as an explanation of performance and revisit first principles so we can design systems that truly offer the highest performance.
BIO: Kirk W. Cameron is Professor and Associate Department Head of Computer Science in the College of Engineering at Virginia Tech. The central theme of his research is to improve power and performance efficiency in high performance computing (HPC) systems and applications. More than half a million people in more than 160 countries have used his power management software. In addition to his research, his NSF-funded, 256-node SeeMore kinetic sculpture of Raspberry Pis was featured at SIGGRAPH 2014 in Vancouver, B.C. and is scheduled for multiple exhibitions in Washington D.C. and New York in 2015. March 31, 2015 - Keita Teranishi: Local Failure Local Recovery for large scale SPMD applications ABSTRACT: As leadership-class computing systems increase in complexity and component feature sizes continue to decrease, the ability of an application code to treat the system as a reliable digital machine diminishes. In fact, there is a growing concern in the high performance computing community that applications will have to explicitly manage resilience issues beyond the current practice of checkpoint/restart (C/R). In particular, the current system reaction to the loss of a single MPI process is to terminate all remaining processes and restart the application from the most recent checkpoint. This is suboptimal at scale because the recovery cost is not proportional to the size of the failure. We address this scaling issue using an emerging resilient computing model called Local Failure, Local Recovery (LFLR) that attempts to provide application developers with the ability to recover locally and continue application execution when a process is lost. In this talk, I will present our two ongoing efforts to enable scalable on-line application recovery, including general-purpose recovery that heavily leverages MPI-ULFM (a fault-tolerant MPI prototype), and recovery of stencil-based code using Cray's uGNI.
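The LFLR recovery model can be mimicked in miniature: each process keeps an in-memory copy of a neighbor's state, so the loss of one process is repaired locally rather than by a global restart. The following is a serial Python toy, not the MPI-ULFM or uGNI implementations from the talk:

```python
# Serial toy of Local Failure Local Recovery (LFLR); illustrative only,
# not the MPI-ULFM / uGNI implementations described in the talk.
# Each "rank" mirrors its state onto a buddy rank; when a rank fails,
# only that rank restores from the buddy copy (no global restart).

NRANKS = 4
state = {r: {"step": 10, "data": [float(r)] * 3} for r in range(NRANKS)}

# Buddy checkpoint: rank r's state is mirrored on rank (r + 1) % NRANKS.
buddy_copy = {(r + 1) % NRANKS: {"of": r, "ckpt": dict(state[r])}
              for r in range(NRANKS)}

def fail_and_recover(failed):
    """Simulate losing one rank and recovering it from its buddy."""
    state[failed] = None                    # process loss
    holder = (failed + 1) % NRANKS          # buddy holding the copy
    assert buddy_copy[holder]["of"] == failed
    state[failed] = dict(buddy_copy[holder]["ckpt"])  # local recovery
```

The point of contrast with conventional C/R is that the surviving ranks' state is never discarded, so the recovery cost scales with the failure, not with the whole machine.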
BIOGRAPHICAL INFORMATION: Keita Teranishi is a principal staff member of Scalable Modeling and Analysis Systems at Sandia National Laboratories in California. Before joining Sandia, he was involved in several dense and sparse matrix library development projects at Cray Inc. His broad research interests in HPC include application resilience, programming models, automatic performance tuning and numerical linear algebra. He holds an MS degree from the University of Tennessee, Knoxville and a Ph.D. degree from Pennsylvania State University. March 30, 2015 - Sarah Osborn: Solution Strategies for Stochastic Galerkin Discretizations of PDEs with Random Data ABSTRACT: When using partial differential equations (PDEs) to model physical problems, the exact values of coefficients are often unknown. To obtain more realistic models, the coefficients are typically treated as random variables in an attempt to quantify uncertainty in the underlying problem. Stochastic Galerkin methods are used to obtain numerical solutions for these types of problems. These methods couple the stochastic and deterministic degrees of freedom and yield a large system of equations that must be solved. A challenge in this method is solving the large system accurately and efficiently. Typically the system is solved iteratively, and preconditioning strategies dictate the performance of the iterative method. The goal of this work is to improve solver efficiency by investigating preconditioning techniques and solver implementation details. The model problem considered is the diffusion problem with uncertainties in the diffusion coefficient. An algebraic multigrid preconditioner based on smoothed aggregation is presented, with emphasis on the formulation of the model problem where the uncertain component has a nonlinear structure. Special consideration is given to the solution and proposed preconditioning strategy for improving performance on emerging architectures.
Numerical results will be presented that illustrate the performance of the proposed preconditioner and implementation changes. March 30, 2015 - Emil Alexov: Revealing the molecular mechanism of Snyder-Robinson Syndrome and rescuing it with small molecule binding ABSTRACT: The Snyder-Robinson Syndrome (SRS) (OMIM 300105) is a rare mental retardation disorder which is caused by missense mutations in the spermine synthase gene (SpmSyn). The SpmSyn encodes a protein, the spermine synthase (SMS) of 529 amino acids, which becomes dysfunctional in SRS patients due to specific missense mutations. Here we investigate, in silico and in vitro, the molecular effect of the amino acid substitutions causing SRS and demonstrate that the mutations almost never directly affect the functional properties of SMS, but rather indirectly alter its wild-type characteristics. A particular feature of SMS, which is shown to affect SMS functionality, is the formation of an SMS homo-dimer. If the homo-dimer does not form, the activity of SMS is practically abolished. In this regard we identify several disease-causing mutations that affect homo-dimerization of SMS and carry out in silico screening to identify small molecules whose binding to the destabilized homo-dimer can restore wild-type homo-dimer affinity. The investigation resulted in an extensive list of plausible stabilizers, from which we selected and tested 51 compounds experimentally for their capability to increase SMS mutant enzymatic activity. In silico analysis of the experimentally identified stabilizers suggested five distinctive chemical scaffolds. The identified chemical scaffolds are drug-like and can serve as original starting points for the development of lead molecules to further rescue the disease-causing effects of the Snyder-Robinson syndrome, for which no efficient treatment exists to date. Lab page URL: http://compbio.clemson.edu/ BIOGRAPHICAL INFORMATION: Dr.
Emil Alexov is a Professor in the Department of Physics and Astronomy at Clemson University. He received his Ph.D. in Radiophysics and Electronics and his M.S. in Plasma Physics from Sofia University. He is currently a member of the American Physical Society, the Biophysical Society and the Protein Society. Dr. Alexov has been active in the National Institutes of Health and the National Science Foundation, among many other professional scientific activities. March 9, 2015 - Mark Kim: GPU-enabled Particle Systems for Visualization ABSTRACT: Particle systems have a rich history in scientific visualization because of their practicality and versatility. And although particles are a useful tool for visualization, one difficulty is particle advection on an arbitrary surface. One solution is to parameterize the surface, which can be difficult to construct and utilize. Another method is to use a distance field and reproject particles onto the surface, which is an iterative search. Unfortunately, this iterative search is not optimal on the GPU. In this talk, I will discuss our research on particle advection on surfaces on the GPU. As GPUs have become more powerful and accessible for general purposes, new techniques are required to fully utilize that performance. I will begin my talk with a discussion of some of the problems with particle systems on the GPU. In particular, I will discuss issues adapting multimaterial mesh extraction to the GPU. To address these issues, a new surface representation was chosen: the closest point embedding. The closest point embedding is a simple grid-based representation for arbitrary surfaces. To demonstrate the effectiveness of the closest point embedding, I will present two visualization techniques sped up on the GPU with the closest point embedding. First, the closest point embedding is used to speed up particle advection for multimaterial mesh extraction on the GPU.
Second, unsteady flow visualization on arbitrary surfaces is simplified and sped up with the closest point embedding. March 6, 2015 - Sungahn Ko: Aided decision-making through visual analytics systems for big data ABSTRACT: As technologies have advanced, various types of data are produced in science and industry, and extracting actionable information for making effective decisions becomes increasingly difficult for analysts and decision makers. The main reasons for this difficulty are twofold: 1) the overwhelming amount of data prevents users from understanding the data during exploration, and 2) the complexity of the multiple data characteristics (multivariate, spatial, temporal and/or networked) calls for an integrated data presentation for finding any pattern, trend, or anomaly for decision-making. To overcome the analysts' information overload and enable effective visual presentation for efficient analysis and decision making, an interactive visual exploration and analysis environment is needed, since traditional machine learning and big data analytics alone are insufficient. In this talk, I present visual analytics approaches for solving the big data problem, with examples including spatiotemporal network data analysis, business intelligence, and steering of simulation pipelines. February 3, 2015 - Sergiy Kalnaus: Predictive modeling for electrochemical energy storage ABSTRACT: Electrochemical energy storage devices have gained popularity and market penetration as means for providing an energy/power source for consumer electronics, hybrid and fully electric vehicles (EV), and grid storage. Lithium-ion secondary batteries represent the most promising and commercially viable segment, although lithium, lithium-air as well as intercalation systems based on other metals (sodium, aluminum) are being studied.
Despite being adopted in many electrified powertrains (BMW ActiveE, Nissan Leaf, Ford C-Max Energi, etc.), Li-ion batteries still suffer from high manufacturing cost, low cycle life and safety issues. Modeling and simulation is a powerful tool for quantifying responses that otherwise cannot be assessed experimentally and for designing strategies for better management of such systems. This talk will discuss the modeling approaches and results of computational studies of the performance and safety of Li-ion batteries. The newly released Virtual Integrated Battery Environment (VIBE) is an integral part of the Open Architecture Software Framework designed within the CAEBAT (Computer Aided Engineering for Batteries) project. Coupled simulations and physics models within VIBE will be discussed. January 29, 2015 - Deepak Majeti: Portable Programming Models for Heterogeneous Platforms ABSTRACT: Heterogeneous architectures have become mainstream today and are found in a range of systems from mobile devices to supercomputers. However, these architectures with their diverse architectural features pose several programmability challenges, including handling data coherence, managing computation and data communication, and mapping tasks and data distributions. Consequently, application programmers have to deal with new low-level programming languages that involve non-trivial learning and training. In my talk, I will present two programming models that tackle some of the aforementioned challenges. The first is the "Concord" programming model, which provides a widely used Intel Thread Building Blocks-like interface and targets integrated CPU+GPU architectures with semi-coherent caches. This model also supports a wide set of C++ language features. The second is "Heterogeneous Habanero C (H2C)", an implementation of the Habanero execution model for modern heterogeneous architectures.
The novel features of H2C include high-level language constructs that support automatic data layout, task mapping and data distributions. I will conclude the talk with performance evaluations of Concord and H2C, and propose future extensions to these models. BIO: Deepak is a 5th year graduate student at Rice University working with Prof. Vivek Sarkar. As part of his ongoing doctoral thesis, he is developing Heterogeneous Habanero-C (H2C). Deepak's areas of interest include programming models, compiler and runtime support for modern heterogeneous architectures. He was a major contributor to the Concord project as an intern at Intel Programming Systems Lab. He also worked on porting the Chapel programming language onto the HSA + XTQ architecture as an intern at AMD Research. Apart from research, Deepak loves to play sports which include soccer, badminton, squash and of course cricket. January 6, 2015 - David M. Weiss: Industrial Strength Software Measurement Abstract: In an industrial environment where software development is a necessary part of product development, measuring the state of software development and the attributes of the software becomes a crucial issue. For a company to survive and to make progress against its competition, it must have answers to questions such as "What is my customers' perception of the quality of the software in my products?", "How long will it take me to complete a new product or a new release of an existing one?" "What are the major bottlenecks in software production?" "How effective is a new technique or tool when introduced into the software development process?" The fate of the company, and of individuals within the company, may depend on accurate answers to these questions, so one must not only know how to obtain and analyze data to answer them, but also estimate how good one's answers are. 
In a large-scale industrial software development environment, software measurement must be meaningful, automatable, nonintrusive, and feasible. Sources of data are diffuse, nonuniform, and nonstandard. The data themselves are difficult to collect and interpret, and hard to compare across projects and organizations. Nonetheless, other industries perform such measurements as a matter of course, and software development organizations should as well. In this talk I will discuss the challenges of deciding what questions to ask, how to answer them, and what the impact of answering them is. I will illustrate with examples drawn from real projects, and from an existing and ongoing project that details the state of software production in a large company, focusing on change data and how to use it to answer some of the questions posed above. December 19, 2014 - Soumi Manna: Evaluating the Performance of the Community Atmosphere Model at High Resolutions Abstract: The Community Atmosphere Model (CAM5) is one of the multiple component models in the Community Earth System Model (CESM). Recently, efforts have focused on increasing the resolution of CAM5 to produce more accurate predictions. Additionally, new developments have enabled the use of mesh refinement in CAM5 through the High-Order Method Modeling Environment (HOMME) dynamical core. These meshes allow for regions with extremely high resolution and pose a challenge to the current parallel domain decomposition algorithm. In this project, we focused on analyzing the performance of HOMME at high and variable resolutions. We investigated the quality of domain decompositions produced by space-filling curve algorithms for refined and unrefined meshes. Additionally, we evaluated performance metrics of realistic simulations on these meshes using the automatic trace analysis tool Scalasca.
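The idea behind a space-filling-curve domain decomposition can be illustrated with a small sketch. The code below is a toy, not HOMME's actual algorithm: it orders the elements of a 2-D grid along a Morton (Z-order) curve and cuts that ordering into one contiguous chunk per process, which tends to keep each process's elements spatially clustered.

```python
# Toy space-filling-curve domain decomposition (illustrative only; not
# the HOMME implementation). Elements are sorted by their Morton
# (Z-order) index, then the ordering is cut into contiguous chunks.

def morton_index(x, y, bits=16):
    """Interleave the bits of (x, y) to get the Z-order curve position."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits -> even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits -> odd positions
    return z

def decompose(nx, ny, nprocs):
    """Assign each element (i, j) of an nx-by-ny grid to a process rank."""
    order = sorted(((i, j) for i in range(nx) for j in range(ny)),
                   key=lambda e: morton_index(*e))
    chunk = len(order) // nprocs
    owner = {}
    for rank in range(nprocs):
        lo = rank * chunk
        hi = len(order) if rank == nprocs - 1 else lo + chunk
        for elem in order[lo:hi]:
            owner[elem] = rank
    return owner

owner = decompose(4, 4, 4)
# Rank 0 receives a compact 2x2 block in the grid corner:
print(sorted(e for e, r in owner.items() if r == 0))
# → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The locality of the curve is what makes the contiguous cut a reasonable decomposition; on refined meshes with uneven element costs, weighted cuts of the same ordering are the natural extension.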
By correlating performance bottlenecks with geometric mesh information, we identified sub-optimal properties of the domain decompositions and worked to address this behavior. Improving the quality of these decompositions will increase the scalability of simulations at these resolutions, enhancing their scientific impact. December 12, 2014 - Jay Jay Billings: Eclipse ICE: ORNL's Modeling and Simulation User Environment Abstract: In the past several years ORNL modeling and simulation projects have experienced an increased need for interactive, graphical user tools. The projects in question span advanced materials, batteries, nuclear fuels and reactors, nuclear fusion, quantum computing and many others. They all require four tasks that are fundamental to modeling and simulation: creating input files, launching and monitoring jobs locally and remotely, visualizing and analyzing results, and managing data. This talk will present the Eclipse Integrated Computational Environment (ICE), a general-purpose open source platform that provides integrated tools and utilities for creating rich user environments. It will cover both the robust, new infrastructure developed for modeling and simulation projects, such as new mesh editors and visualization tools, as well as the plugins for codes that are already supported by the platform and taking advantage of these features. The design philosophy of the project will also be presented, as well as how the "n-body code problem" is solved by the platform. In addition to covering the services provided by the platform, this talk will also discuss ICE's place in the larger Eclipse ecosystem and how it became an Eclipse project. Finally, we will show how you can leverage it to accelerate your code deployment, use it to simplify your modeling and simulation project, or get involved in the development. Bio: Jay Jay Billings is a member of the research staff in the Computer Science Research group and leader of the ICE team.
December 12, 2014 - Andrew Ross: Large scale Foundation Nurtured Collaboration Abstract: Software and data are crucial to almost all organizations. Open Source Software and Open Data are a vital part of this. This presentation provides a glimpse of why an open approach to software and data results in far more than just free software and data, as measured in terms of freedoms and acquisition price. Collaboration across groups within large organizations and between organizations is hard. The Eclipse Foundation is the NFL of open collaborations. It provides governance structure, technology infrastructure, and many services to facilitate collaboration. This presentation will briefly examine this and how working groups hosted by the Eclipse Foundation are enabling collaboration for domains such as Scientific R&D, Internet of Things (IoT), Location aware technologies, and more. The results are important, and include:
• communication protocols like Paho for messaging between IoT devices
• large scale distributed computing platforms such as GeoMesa and GeoTrellis
• data analysis and visualization found in ICE and DAWNSci
• advanced workflow and version control of data with tools such as GeoGig
From this presentation, audience members will get a brief taste of some of the collaboration opportunities, how to learn more, and how to get involved. Bio: Andrew Ross is Director of Ecosystem Development at the Eclipse Foundation, a vendor-neutral not-for-profit. He is responsible for Eclipse's collaborative working groups, including the LocationTech and Science groups, which collaboratively develop software for location-aware systems and scientific research respectively. Prior to the Eclipse Foundation, Andrew was Director of Engineering at Ingres, where his team developed advanced spatial support features for the relational database and many applications. Before Ingres, Andrew developed highly available telecom solutions based on open source technologies for Nortel.
December 10, 2014 - Beth Plale: The Research Data Alliance: Progress and Promise in Global Data Sharing Abstract: The Research Data Alliance is coming up on 1.5 years old on the road to realizing its vision of "researchers and innovators openly sharing data across technologies, disciplines, and countries to address the grand challenges of society." RDA has grown tremendously in the last 1.5 years, from a handful of committed individuals to an organization with 1600 members in 70 countries. As one who was part of the small group that got RDA off the ground and remains deeply engaged, I will introduce the Research Data Alliance, take stock of its impressive accomplishments to date, and highlight what I see as the opportunities it faces in realizing the grand goal RDA states so succinctly in its vision. November 14, 2014 - Taisuke Boku: Tightly Coupled Accelerators: A very low latency communication system on GPU cluster and parallel programming Accelerating devices such as GPUs, MIC, or FPGAs are among the most powerful computing resources for providing high performance/energy and high performance/space ratios for a wide range of large-scale computational science. On the other hand, the complexity of programming that combines various frameworks such as CUDA, OpenCL, OpenACC, OpenMP and MPI is growing and seriously degrades programmability and productivity. We have been developing the XcalableMP (XMP) parallel programming language for distributed-memory architectures, from PC clusters to MPPs, and enhancing its capability to include accelerating devices for heterogeneous parallel processing systems. XMP is a PGAS-style language, and XMP-dev and XMP-ACC are its extensions for accelerating devices. We are also developing a new technology for direct inter-node GPU communication named TCA (Tightly Coupled Accelerators), an architecture spanning from special hardware to the applications covered by this concept.
In this talk, I will introduce our on-going project, which vertically integrates all these components toward a new generation of parallel accelerated computing. BIO: Prof. Taisuke Boku received his Master's and PhD degrees from the Department of Electrical Engineering at Keio University. After his career as an assistant professor in the Department of Physics at Keio University, he joined the Center for Computational Sciences (formerly the Center for Computational Physics) at the University of Tsukuba, where he is currently the deputy director, the HPC division leader and the system manager of supercomputing resources. He has worked there for more than 20 years on HPC system architecture, system software, and performance evaluation of various scientific applications. Over these years, he has played the central role in system development of CP-PACS (ranked number one in the TOP500 in 1996), FIRST (a hybrid cluster with a gravity accelerator), PACS-CS (a bandwidth-aware cluster) and HA-PACS (a high-density GPU cluster), representative supercomputers in Japan. He also contributed to the system design of the K Computer as a member of the architecture design working group in RIKEN and is currently a member of the operation advisory board of AICS, RIKEN. He received the ACM Gordon Bell Prize in 2011. His recent research interests include accelerated HPC systems and direct communication hardware/software for accelerators in HPC systems based on FPGA technology. November 13, 2014 - Eric Lingerfelt: Accelerating Scientific Discovery with the Bellerophon Software System Abstract: We present an overview of a software system, Bellerophon, built to support a production-level HPC application called CHIMERA, which simulates the temporal evolution of core-collapse supernovae.
Developed over the last 5 years at ORNL, Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its n-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a web-deliverable, cross-platform desktop application. Bellerophon has quickly evolved into the CHIMERA team's de facto work environment for analysis, artifact management, regression testing, and other workflow tasks. We will also present plans to expand utilization and encourage adoption by generalizing the system for new HPC applications and domains. Bio: Eric Lingerfelt is a technical staff member and software engineer in the ORNL Computer Science and Mathematics Division's Computer Science Research Group. Mr. Lingerfelt specializes in developing n-tier software systems with web-deliverable, highly-interactive client-side applications that allow users to generate, access, visualize, manipulate, and share complex sets of data from anywhere in the world. For over a decade, he has designed, developed, and successfully delivered multiple software systems to the US Department of Energy and other customers in the fields of nuclear astrophysics, Big Bang cosmology, core-collapse supernovae, isotope sales and distribution, environmental science, nuclear energy, theoretical nuclear science, and the oil and gas industry. He is a 2011 ORNL Computing and Computational Sciences Directorate Distinguished Contributor and the recipient of the 2013 CSMD Most Significant Technical Contribution Award. Mr. Lingerfelt received his B.S. in Mathematics and Physics from East Tennessee State University in 1998 and his M.S. in Physics from the University of Tennessee in 2002. 
November 12, 2014 - John Springer: Discovery Advancements Through Data Analytics Abstract: The Purdue Discovery Advancements Through Analytics (D.A.T.A.) Laboratory seeks to address the computational challenges surrounding data analytics in the life, physical, and social sciences by focusing on the development and optimization of parallel codes that perform analytics. They complement these efforts by also examining the aspects of data analytics related to user adoption as well as the best practices pertaining to the management of associated metadata. In this seminar, the lead investigator in the D.A.T.A. Lab, Dr. John Springer, will discuss the lab's past and current efforts and will introduce the lab's planned activities. Bio: John Springer is an Associate Professor in Computer and Information Technology at Purdue University and the Lead Scientist for High Performance Data Management Systems at the Bindley Bioscience Center at Discovery Park. Dr. Springer's discovery efforts focus on distributed and parallel computational approaches to data integration and analytics, and he serves as the leader of the Purdue Discovery Advancements Through Analytics (D.A.T.A.) Laboratory. November 10, 2014 - Christopher Rodrigues: High-Level Accelerator-Style Programming of Clusters with Triolet Container libraries are popular for parallel programming due to their simplicity. Programs invoke library operations on entire containers, relying on the library implementation to turn groups of operations into efficient parallel loops and communication. However, their suitability for parallel programming on clusters has been limited, because implementations have had a limited repertoire of parallel algorithms under the hood. In this talk, I will present Triolet, a high-level functional language for using a cluster as a computational accelerator.
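The container-library style described above can be illustrated with a short Python analogy (this is not Triolet, which is its own compiled functional language): the program expresses one whole-container operation, and the library alone decides how to split the work among workers.

```python
# Python analogy of container-library parallelism: the caller writes a
# single whole-container operation; partitioning and worker management
# are hidden inside the library function. Illustrative sketch only.
from concurrent.futures import ThreadPoolExecutor

def par_map(f, xs, workers=4):
    """Library-managed parallel map: the caller never sees partitions."""
    xs = list(xs)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, xs, chunksize=max(1, len(xs) // workers)))

print(par_map(lambda x: x * x, range(8)))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

A compiled system like Triolet goes much further, inlining and specializing such library calls into fused parallel loops with cluster-wide communication, but the programmer-facing contract is the same: operations on containers, not explicit tasks.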
Triolet improves upon the generality of prior distributed container library interfaces by separating concerns of parallelism, loop nesting, and data partitioning. I will discuss how this separation is used to efficiently decompose and communicate multidimensional array blocks, as well as to generate irregular loop nests from computations with variable-size temporary data. These loop-building algorithms are implemented as library code. Triolet's compiler inlines and specializes library calls to produce efficient parallel loops. The resulting code often performs comparably to handwritten C. For several compute-intensive loops running on a 128-core cluster (with 8 nodes and 16 cores per node), Triolet performs significantly faster than sequential C code, with performance ranging from slightly faster to 4.3× slower than manually parallelized C code. Thus, Triolet demonstrates that a library of container traversal functions can deliver cluster-parallel performance comparable to manually parallelized C code without requiring programmers to manage parallelism. Triolet carries lessons for the design of runtimes, compilers, and libraries for parallel programming using container APIs. BIO: Christopher Rodrigues got his Ph.D. in Electrical Engineering at the University of Illinois. He is one of the developers of the Parboil GPU benchmark suite. A computer architect by training, he has chased parallelism up the software stack, having worked on alias and dependence analysis, parallel programming for GPUs, statically typed functional language compilation, and the design of parallel libraries. He is interested in reducing the pain of writing and maintaining high-performance parallel code. November 3, 2014 - Benjamin Lee: Statistical Methods for Hardware-Software Co-Design Abstract: To pursue energy-efficiency, computer architects specialize and coordinate design across the hardware/software interface. 
However, coordination is expensive, with high non-recurring engineering costs that arise from an intractable number of degrees of freedom. I present the case for statistical methods to infer regression models, which provide tractability for complex design questions. These models estimate performance and power as a function of hardware parameters and software characteristics to permit coordinated design. For example, I show how to coordinate the tuning of sparse linear algebra with the design of the cache and memory hierarchy. Finally, I describe on-going work in using logistic regression to understand the root causes of performance tails and outliers in warehouse-scale datacenters. BIO: Benjamin Lee is an assistant professor of Electrical and Computer Engineering at Duke University. His research focuses on scalable technologies, power-efficient architectures, and high-performance applications. He is also interested in the economics and public policy of computation. He has held visiting research positions at Microsoft Research, Intel Labs, and Lawrence Livermore National Lab. Dr. Lee received his B.S. in electrical engineering and computer science at the University of California, Berkeley and his Ph.D. in computer science at Harvard University. He did postdoctoral work in electrical engineering at Stanford University. He received an NSF Computing Innovation Fellowship and an NSF CAREER Award. His research has been honored as a Top Pick by IEEE Micro Magazine and has been honored twice as Research Highlights by Communications of the ACM. October 28, 2014 - Robinson Pino: New Program Directions for Advanced Scientific Computing Research (ASCR) October 24, 2014 - Qingang Xiong: Computational Fluid Dynamics Simulation of Biomass Fast Pyrolysis - From Particle Scale to Reactor Scale Abstract: Fast pyrolysis, a prominent thermochemical conversion approach to produce bio-oil from biomass, has attracted increased interest. 
However, the fundamental mechanisms of biomass fast pyrolysis are still poorly understood, and the design, operation and optimization of pyrolyzers are far from satisfactory because complicated multiphase flows are coupled with complex devolatilization processes. Computational fluid dynamics (CFD) is a powerful tool to investigate the underlying mechanisms of biomass fast pyrolysis and help optimize efficient pyrolyzers. In this presentation, I will describe my postdoctoral work on CFD of biomass fast pyrolysis at both the particle scale and the reactor scale. For the particle-scale CFD, the lattice Boltzmann method is used to describe the flow and heat transfer processes. The intra-particle gas flow is modeled by Darcy's law. A lumped multi-step reaction kinetics is employed to model the biomass decomposition. Through the particle-scale CFD, detailed information on the evolution of a biomass particle is obtained. The velocity, temperature, and species mass fractions inside and surrounding the particles are presented. The evolutions of particle shape and density are monitored. For the reactor-scale CFD, we use the so-called multi-fluid model to simulate the multiphase hydrodynamics, in which all phases are treated as interpenetrating continua. Volume-fraction-based mass, momentum, energy, and species conservation equations are employed to describe the density, velocity, temperature and mass fraction fields. Various submodels are used to close the conservation equations. Using this model, fluidized-bed and auger reactors are modeled. Parametric and sensitivity studies on the effects of operating conditions, devolatilization schemes, and submodel selections are conducted. It is expected that these multi-scale CFD simulations will contribute significantly to improving the accuracy of industrial reactor modeling for biomass fast pyrolysis.
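The flavor of a lumped multi-step kinetics model can be conveyed with a deliberately simplified sketch. The competing-reaction scheme (biomass → gas, tar, char) and the rate constants below are invented placeholders, not the model from the talk:

```python
# Toy lumped devolatilization kinetics: three competing first-order
# reactions consume biomass, integrated with explicit Euler steps.
# Scheme and rate constants are illustrative placeholders only.

def pyrolysis_step(state, k_gas, k_tar, k_char, dt):
    """One explicit-Euler step of the competing first-order reactions."""
    b, g, t, c = state
    db = -(k_gas + k_tar + k_char) * b * dt
    return (b + db,
            g + k_gas * b * dt,
            t + k_tar * b * dt,
            c + k_char * b * dt)

def simulate(k_gas=1.0, k_tar=2.0, k_char=0.5, dt=1e-3, steps=2000):
    state = (1.0, 0.0, 0.0, 0.0)  # all mass starts as biomass
    for _ in range(steps):
        state = pyrolysis_step(state, k_gas, k_tar, k_char, dt)
    return state

b, g, t, c = simulate()
assert abs(b + g + t + c - 1.0) < 1e-9  # mass is conserved by construction
assert t > g > c                        # yields ordered by rate constants
```

In the actual particle-scale model such source terms are coupled, cell by cell, to the lattice Boltzmann flow and heat transfer solution, with temperature-dependent (Arrhenius) rate constants.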
Finally, I will discuss some of my ideas about future directions in the multi-scale CFD simulation of biomass thermochemical conversion. Biography: Dr. Qingang Xiong is a postdoctoral research associate in the Department of Mechanical Engineering, Iowa State University. Dr. Xiong obtained his Ph.D. in Chemical Engineering from the Institute of Process Engineering, Chinese Academy of Sciences in 2011. After his graduation, Dr. Xiong went to the University of Heidelberg, Germany, as a software engineer for half a year to conduct GPU-based high-performance computing for astrophysics. Dr. Xiong's research areas are computational fluid dynamics, CPU- and GPU-based parallel computing, heat and mass transfer, and biomass thermochemical conversion. Dr. Xiong has published more than 20 scientific papers and given more than 15 conference presentations. Dr. Xiong serves as an editorial board member for several journals and as a chair at international conferences. October 23, 2014 - Liang Zhou: Multivariate Transfer Function Design Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, volumetric datasets used by domain users have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this talk, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) or two-dimensional (2D) transfer function spaces have been proposed; however, these methods work on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces.
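As background, a 1D transfer function in this sense is simply a mapping from a scalar data value to optical properties such as opacity and color. A minimal piecewise-linear sketch, with control points invented for illustration:

```python
# Minimal 1-D transfer function for volume rendering: piecewise-linear
# interpolation of (value, opacity) control points. The control points
# below are made up for illustration.
import bisect

def transfer_function(points):
    """points: sorted (value, opacity) pairs; returns a value -> opacity map."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    def opacity(v):
        if v <= xs[0]:
            return ys[0]
        if v >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, v)
        frac = (v - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + frac * (ys[i] - ys[i - 1])
    return opacity

# Make low densities transparent and a narrow band opaque (e.g. a boundary).
tf = transfer_function([(0.0, 0.0), (0.4, 0.0), (0.5, 0.9), (0.6, 0.0)])
print(tf(0.45))  # ≈ 0.45, halfway up the ramp toward 0.9
```

Multivariate transfer functions generalize this idea to multiple data values per voxel, which is why the design space quickly becomes too large to tune by hand, motivating the guided and semiautomatic approaches described in the talk.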
Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. However, this method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user with an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then automatically generated by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all. Finally, real-world multivariate volume datasets are usually large, often exceeding the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations. October 20, 2014 - John Schmisseur: New HORIzONS: A Vision for Future Aerospace Capabilities within the University of Tennessee Recently, issues of national interest, including the planned DoD Pivot to the Pacific and assured large-payload access to space, have renewed commitment to the development of high-speed aerospace systems. As a result, many agencies, including the Air Force, are exploring new technology systems to facilitate operation in the hypersonic flight regime.
One facet of the Air Force strategy in this area has been a reemphasis of hypersonic testing capabilities at the Arnold Engineering Development Complex (AEDC) and the establishment of an Air Force Research Laboratory scientific research group co-located at the complex. These recent events provide an opportunity for the University of Tennessee to support the Air Force and other agencies in the realization of planned high-speed capabilities while simultaneously establishing a precedent for the integration of contributions across the UT system. The HORIzON center (High-speed Original Research & InnovatiON) at the University of Tennessee Space Institute (UTSI) has been established to address the current, intermediate and strategic challenges faced by national agencies in the development of high-speed/hypersonic capabilities. Specifically, the center will foster the development of world-class basic research capabilities in the region surrounding AEDC, create a culture of discovery and innovation integrating elements from academia, government and small business, and take the lead in the development of a rational methodology for the integration of large-scale empirical and numerical data sets within a digital environment. Dr. Schmisseur's presentation will provide the background and motivation that have driven the establishment of the HORIzON center and highlight a few of the center's major research vectors. He will be visiting ORNL to explore how contributions from the DOE can be integrated within the HORIzON enterprise to support the achievement of our national goals in high-speed technology development. October 14, 2014 - Krishna Chaitanya Gurijala: Shape-Based Analysis Shape analysis plays a critical role in many fields, especially in medical analysis. There has been substantial research on shape analysis for manifolds. In contrast, shape-based analysis has not received much attention for volumetric data.
It is not feasible to directly extend successful manifold shape analysis methods, such as heat diffusion, to volumes due to the huge computational cost. The work presented herein seeks to address this problem by presenting two approaches for shape analysis in volumes that not only capture the shape information efficiently but also reduce the computational time drastically. The first approach is a cumulative approach, called Cumulative Heat Diffusion, where the heat diffusion is carried out by simultaneously considering all the voxels as sources. The cumulative heat diffusion is monitored by a novel operator called the Volume Gradient Operator, which is a combination of the well-known Laplace-Beltrami operator and a data-driven operator. The cumulative heat diffusion is computed by considering all the voxels and hence is inherently dependent on the resolution of the data. Therefore, we propose a second, stochastic approach for shape analysis. In this approach the diffusion process is carried out using tiny massless particles termed shapetons. The shapetons are diffused in a Monte Carlo fashion across the voxels for a pre-defined distance (which serves as a single time step) to obtain the shape information. The direction of propagation of the shapetons is monitored by the volume gradient operator. The shapeton diffusion is a novel diffusion approach and is independent of the resolution of the data. These approaches robustly extract features and objects based on shape. Both shape analysis approaches are used in several medical applications such as segmentation, feature extraction, registration, transfer function design and tumor detection. This work primarily focuses on the diagnosis of colon cancer. Colorectal cancer is the second leading cause of cancer-related mortality in the United States.
Virtual colonoscopy is a viable non-invasive screening method, whereby a radiologist can explore a colon surface to locate and remove the precancerous polyps (protrusions/bumps on the colon wall). To facilitate efficient colon exploration, a robust and shape-preserving colon flattening algorithm is presented using the heat diffusion metric, which is insensitive to topological noise. The flattened colon surface provides effective colon exploration, navigation, polyp visualization, detection, and verification. In addition, the flattened colon surface is used to consistently register the supine and prone colon surfaces. Anatomical landmarks such as the taeniae coli, flexures and surface feature points are used in the colon registration pipeline, and this work presents techniques using heat diffusion to automatically identify them. September 30, 2014 - Stanley Osher: What Sparsity and l1 Optimization Can Do For You Sparsity and compressive sensing have had a tremendous impact in science, technology, medicine, imaging, machine learning and now, in solving multiscale problems in applied partial differential equations, developing sparse bases for elliptic eigenspaces. l1 and related optimization solvers are a key tool in this area. The special nature of this functional allows for very fast solvers: l1 actually forgives and forgets errors in Bregman iterative methods. I will describe simple, fast algorithms and new applications ranging from sparse dynamics for PDEs, new regularization paths for logistic regression and support vector machines, to optimal data collection and hyperspectral image processing. (Credits: Stanley Osher, jointly with many others) MORE ABOUT THE SPEAKER Dr. Osher's awards and accomplishments are voluminous and exceptionally remarkable, just a few highlights of which include:
• Recently awarded the prestigious Gauss Prize, the highest honor in applied mathematics from the International Congress of Mathematicians.
• Named among the top 1 percent of the most frequently cited scholars in both mathematics and computer science between 2002 and 2012.
• Elected in 2009 to the American Academy of Arts and Sciences.
• Honored with the 2007 United States Association for Computational Mechanics (USACM) Computational and Applied Sciences Award.
• Elected in 2005 to the National Academy of Sciences.
• Received the 2005 Society for Industrial and Applied Mathematics (SIAM) Kleinman Prize for "outstanding research or other contributions that bridge the gap between mathematics and applications".
• Awarded the 2003 International Council for Industrial and Applied Mathematics (ICIAM) Pioneer Prize "for pioneering work introducing applied mathematical methods and scientific computing techniques to an industrial problem area or a new scientific field of applications".
• Appointed as an Alfred P. Sloan Fellow and a Fulbright Fellow.
The Gauss prize citation summarized Dr. Osher's many achievements by stating that, "Stanley Osher has made influential contributions in a broad variety of fields in applied mathematics. These include high resolution shock capturing methods for hyperbolic equations, level set methods, PDE based methods in computer vision and image processing, and optimization. His numerical analysis contributions, including the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes and numerical schemes for Hamilton-Jacobi type equations have revolutionized the field. His level set contributions include new level set calculus, novel numerical techniques, fluids and materials modeling, variational approaches, high co-dimension motion analysis, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations; level set methods have been extremely influential in computer vision, image processing, and computer graphics.
In addition, such new methods have motivated some of the most fundamental studies in the theory of PDEs in recent years, completing the picture of applied mathematics inspiring pure mathematics."

September 11, 2014 - Jeffrey Willert: Increased Efficiency and Functionality inside the Moment-Based Accelerated Thermal Radiation Transport Algorithm

Recent algorithm design efforts for thermal radiation transport (TRT) have included the application of "Moment-Based Acceleration" (MBA). These MBA algorithms achieve accurate solutions in a highly efficient manner by moving a large portion of the computational effort to a nonlinearly consistent low-order (reduced phase space) domain. In this talk I will discuss recent improvements and advancements of the MBA-TRT algorithm. We explore the use of Anderson Acceleration to solve the nonlinear low-order system as a replacement for a more traditional Jacobian-Free Newton-Krylov solver. Additionally, the MBA-TRT algorithm has struggled when error from Monte Carlo calculations builds up over several time steps. This error often corrupts the low-order system and may prevent convergence of the nonlinear solver. We attempt to remedy this by implementing a "Residual" Monte Carlo algorithm in which the stochastic error is greatly reduced for the same or lower computational cost. We conclude with a discussion of areas of future work.

September 9, 2014 - Swen Boehm: STCI - A scalable approach for tools and runtimes

The ever-increasing complexity and scale of high-performance computing (HPC) systems and parallel scientific applications require the systems community to provide scalable and resilient communication substrates and run-time infrastructures. Two system research efforts will be presented, focusing on adaptation and customization of HPC runtimes as well as the usability of such systems.
The Scalable runTime Component Infrastructure (STCI) will be introduced, a modular library that enables the implementation of new scalable and resilient HPC run-time systems. Its unique modular architecture eases adaptation to a particular HPC system. Additionally, STCI is based on the concept of "agents", which allows run-time services to be further customized. For instance, STCI's customizability was recently utilized to implement an MPMD-style execution model on top of STCI. Finally, "librte" will be presented: a unified runtime abstraction API that aims at improving the usability of HPC systems by providing an abstraction over various run-time systems such as Cray ALPS, PMI, ORTE and STCI. "librte" is used by the Universal Common Communication Substrate (UCCS) and provides a simple and well-defined interface to tool developers.

September 9, 2014 - Ewa Deelman: Science Automation with the Pegasus Workflow Management System

Abstract sent on behalf of the speaker: Scientific workflows allow scientists to declaratively describe potentially complex applications that are composed of individual computational components. Workflows also include a description of the data and control dependencies between the components. This talk will describe example workflows in various science domains including astronomy, bioinformatics, earthquake science, gravitational-wave physics, and others. It will examine the challenges faced by workflow management systems when executing workflows in distributed and high-performance computing environments. In particular, the talk will describe the Pegasus Workflow Management System developed at USC/ISI. Pegasus bridges the scientific domain and the execution environment by automatically mapping high-level workflow descriptions onto distributed resources. It locates the input data and computational resources necessary for workflow execution. It also restructures the workflow for performance and reliability reasons.
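The core idea of mapping a declaratively described workflow onto an execution order can be illustrated with a small sketch. This is not Pegasus's actual API; the task names and dependency structure below are invented for illustration, using only Python's standard-library topological sorter.

```python
# Illustrative sketch (not Pegasus's API): a workflow is a DAG of tasks,
# and a workflow manager must run each task only after its prerequisites.
from graphlib import TopologicalSorter

# Hypothetical image-mosaic workflow: task -> set of prerequisite tasks
workflow = {
    "project_1": set(),
    "project_2": set(),
    "diff": {"project_1", "project_2"},
    "background": {"diff"},
    "mosaic": {"background", "project_1", "project_2"},
}

def schedule(dag):
    """Return one valid execution order for the workflow DAG."""
    return list(TopologicalSorter(dag).static_order())

order = schedule(workflow)
```

A real workflow manager would additionally stage input data, pick execution sites, and cluster or retry tasks; the topological ordering above is only the dependency-respecting skeleton of that mapping step.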
Pegasus can execute workflows on a laptop, a campus cluster, grids, and clouds. It can handle workflows with a single task or millions of tasks and has been used to manage workflows accessing and generating terabytes of data. The talk will describe the capabilities of Pegasus and how it manages heterogeneous computing environments.

BIO FROM SPEAKER: Ewa Deelman is a Research Associate Professor at the USC Computer Science Department and the Assistant Director of Science Automation Technologies at the USC Information Sciences Institute. Dr. Deelman's research interests include the design and exploration of collaborative, distributed scientific environments, with particular emphasis on workflow management as well as the management of large amounts of data and metadata. In 2007, Dr. Deelman edited a book: "Workflows in e-Science: Scientific Workflows for Grids", published by Springer. She is also the founder of the annual Workshop on Workflows in Support of Large-Scale Science, which is held in conjunction with the Supercomputing conference. In 1997 Dr. Deelman received her PhD in Computer Science from Rensselaer Polytechnic Institute.

August 29, 2014 - C. David Levermore: Coarsening of Particle Systems

ABSTRACT FROM SPEAKER: Each particle in a simulation of a system of particles usually represents a huge number of real particles. We present a framework for constructing the dynamics for a so-called coarsened system of simulated particles. We build an approximate solution to the Liouville equation for the original system from the solution of an equation for the phase-space density of a smaller system. We do this with a Markov approximation within a Mori-Zwanzig formalism based upon a reference density. We then identify the evolution equation for the reduced phase-space density as the forward Kolmogorov equation of a Markov process. The original system governed by deterministic dynamics is then simulated with the coarsened system governed by this Markov process.
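The notion of replacing deterministic dynamics with a Markov process on a reduced space can be made concrete with a toy example. The two coarse states and the transition rates below are invented for illustration and have nothing to do with the speaker's specific construction; the sketch just evolves a forward Kolmogorov (master) equation for a two-state chain.

```python
# Toy illustration: the coarsened description evolves a probability density
# via the forward Kolmogorov equation of a two-state Markov process,
# dp1/dt = -k12*p1 + k21*(1 - p1), instead of tracking every particle.

# Assumed transition rates between the two coarse states
k12, k21 = 0.3, 0.1

def evolve(p1, t, dt=1e-3):
    """Integrate the master equation with forward Euler up to time t."""
    for _ in range(int(t / dt)):
        p1 += dt * (-k12 * p1 + k21 * (1.0 - p1))
    return p1

# The density relaxes toward the stationary distribution k21 / (k12 + k21)
p_inf = k21 / (k12 + k21)
p_long = evolve(1.0, t=50.0)
```

Starting from certainty in state 1 (p1 = 1), the density relaxes to the stationary value 0.25, illustrating how the reduced stochastic description carries the long-time statistics of the full system.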
Both Monte Carlo (MC) and molecular dynamics (MD) simulations can be viewed within this framework. More generally, the reduced dynamics can have elements of both MC and MD.

Abstract sent on behalf of the speaker: The Laplace method is widely used to approximate integrals in statistics. We analyze this method in the context of optimal Bayesian experimental design and extend it from the classical scenario, where parameters can be completely determined by the experiment, to scenarios where an unidentifiable parametric manifold exists. We show that by carrying out this approximation the estimation of the expected Kullback-Leibler divergence can be significantly accelerated. The developed methodology has been applied to the optimal experimental design of impedance tomography and seismic source inversion.

August 18, 2014 - Pierre Gremaud: Impedance boundary conditions for flows on networks

Abstract sent on behalf of the speaker: From hemodynamics to engineering applications, many flow problems are solved on networks. For feasibility reasons, computational domains are often truncated and outflow conditions have to be prescribed at the end of the domain under consideration. We will show how to efficiently compute the impedance of specific networks and how to use this information as an outflow boundary condition. The method is based on linearization arguments and Laplace transforms. The talk will focus on hemodynamics applications but we will indicate how to generalize the approach.

July 23, 2014 - Frédérique Laurent-Negre: High order moment methods for the description of spray: mathematical modeling and adapted numerical methods

Abstract sent on behalf of the speaker: We consider a two-phase flow consisting of a dispersed phase of liquid droplets (a spray) in a gas flow. This type of flow occurs in many applications, such as two-phase combustion or solid propulsion.
The spray is then characterized by its distribution in size and velocity, which satisfies a Boltzmann-type equation. As an alternative to the Lagrangian methods that are commonly used for numerical simulations, we have developed Eulerian models that can account for the polydisperse character of sprays. They use moments in size and velocity of the distribution on fixed intervals of droplet size. These moments represent the number, the mass or the amount of surface area, the momentum ... of all droplets of a given size range. However, the space in which the moment vectors live becomes complex when high order moments are considered. A key point of the numerical methods is then to ensure that the moment vector stays in this space. We study here some mathematical models derived from the kinetic model as well as high-order numerical methods specifically developed to preserve the moment space.

July 22, 2014 - Christos Kavouklis: Numerical Solution of the 3D Poisson Equation with the Method of Local Corrections

Abstract sent on behalf of the speaker: We present a new version of the Method of Local Corrections, a low-communication algorithm for the numerical solution of the free space Poisson's equation on 3D structured grids. We assume a decomposition of the fine computational domain (which contains the global right hand side - charge) into a set of small disjoint cubic patches (e.g. of size 33^3).
The Method of Local Corrections comprises three steps in which Mehrstellen discretizations of the Laplace operator are employed: (i) a loop over the fine disjoint patches and the computation of local potentials on sufficiently large extensions of them (downward pass); (ii) an inexpensive global Poisson solve on the associated coarse domain, with right hand side computed by applying the coarse mesh Laplacian to the local potentials of step (i); and (iii) a correction of the local solutions computed in step (i) on the boundaries of the fine disjoint patches, based on interpolating the global coarse solution, and a propagation of the corrections into the patch interiors via local Dirichlet solves (upward pass). Local solves in the downward pass and the global coarse solve are performed utilizing the domain doubling algorithm of Hockney. For the local solves in the upward pass we employ a standard DFT Dirichlet Poisson solver. In this new version of the Method of Local Corrections we take into consideration the local potentials induced by truncated Legendre expansions of degree P of the local charges (the original version corresponded to P=0). The result is an h-p scheme that is (P+1)-order accurate and involves only local communication. Specifically, we only have to compute and communicate the coefficients of local Legendre expansions (that is, for instance, 20 scalars per patch for expansions of degree P=3). Several numerical simulations are presented to illustrate the new method and demonstrate its convergence properties.
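The role of the truncated Legendre expansions above can be illustrated in one dimension: approximating a smooth local "charge" by its degree-P Legendre projection, with error falling rapidly as P grows. This sketch is purely illustrative (a 1D stand-in with a Gaussian test charge and a simple midpoint quadrature, none of which comes from the talk):

```python
# Sketch: project a smooth function on [-1, 1] onto Legendre polynomials of
# degree <= P and observe the truncation error shrink as P increases.
import math

def legendre(n, x):
    """Evaluate P_n(x) with the three-term recurrence
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def project(f, P, m=2000):
    """Coefficients c_n = (2n+1)/2 * integral of f*P_n (midpoint rule)."""
    h = 2.0 / m
    xs = [-1.0 + (i + 0.5) * h for i in range(m)]
    return [(2 * n + 1) / 2.0 * h * sum(f(x) * legendre(n, x) for x in xs)
            for n in range(P + 1)]

def truncation_error(f, P, m=2000):
    """Max pointwise error of the degree-P Legendre reconstruction."""
    c = project(f, P, m)
    h = 2.0 / m
    xs = [-1.0 + (i + 0.5) * h for i in range(m)]
    return max(abs(f(x) - sum(cn * legendre(n, x) for n, cn in enumerate(c)))
               for x in xs)

def charge(x):
    return math.exp(-x * x)   # smooth, localized "charge density"

errs = [truncation_error(charge, P) for P in (0, 2, 4, 6)]
```

For a smooth charge the error decays spectrally in P, which is the mechanism behind communicating only a handful of expansion coefficients per patch (20 scalars for 3D expansions of total degree 3).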
July 17, 2014 - Kody John Hoffman Law: Dimension-independent, likelihood-informed (DILI) MCMC (Markov chain Monte Carlo) sampling algorithms for Bayesian inverse problems

July 1, 2014 - Xubin (Ben) He: High Performance and Reliable Storage Support for Big Data

Abstract sent on behalf of the speaker: Big data applications have imposed unprecedented challenges in data analysis, storage, organization and understanding due to their heterogeneity, volume, complexity, and high velocity. These challenges are for both computer systems researchers who investigate new storage and computational solutions to support fast and reliable access to large datasets and application scientists in various disciplines who exploit these datasets of vital scientific interest for knowledge discovery. In this talk, I will talk about my research in data storage and I/O systems, particularly in solid-state devices (SSDs) and erasure codes, to provide cost-effective solutions for big data management for high performance and reliability.

Bio: Dr. Xubin He is a Professor and the Graduate Program Director of Electrical and Computer Engineering at Virginia Commonwealth University. He is also the Director of the Storage Technology and Architecture Research (STAR) lab. Dr. He received his PhD in Electrical and Computer Engineering from the University of Rhode Island, USA in 2002 and both his MS and BS degrees in Computer Science from Huazhong University of Science and Technology, China, in 1997 and 1995, respectively. His research interests include computer architecture, reliable and high-availability storage systems and distributed computing.
He has published more than 80 refereed articles in prestigious journals such as IEEE Transactions on Parallel and Distributed Systems (TPDS), Journal of Parallel and Distributed Computing (JPDC), ACM Transactions on Storage, and IEEE Transactions on Dependable and Secure Computing (TDSC), and at various international conferences, including USENIX FAST, USENIX ATC, Eurosys, IEEE/IFIP DSN, IEEE IPDPS, MSST, ICPP, MASCOTS, LCN, etc. He is the general co-chair for IEEE NAS'2009 and program co-chair for MSST'2010, IEEE NAS'2008 and SNAPI'2007. Dr. He has served as a proposal review panelist for NSF and in various chair and committee roles for many professional conferences in the field. Dr. He was a recipient of the ORAU Ralph E. Powe Junior Faculty Enhancement Award in 2004, the TTU Chapter Sigma Xi Research Award in 2010 and 2005, and the TTU ECE Most Outstanding Teaching Faculty Award in 2010. He holds one U.S. patent. He is a senior member of the IEEE and a member of the IEEE Computer Society and USENIX.

June 18, 2014 - Hari Krishnan: Enabling Collaborative Domain-Centric Visualization and Analysis in High Performance Computing Environments

Abstract and Bio sent on behalf of the speaker: Multi-institutional interdisciplinary domain science teams are increasingly commonplace in modern high performance computing (HPC) environments. Visualization tools, such as VisIt and ParaView, have traditionally focused more on improving the scalability, performance, and efficiency of algorithms than on enabling ease of use and collaborative functionality that complements the power of the HPC resources. In addition, visualization tools provide an algorithm-based infrastructure focusing on a diverse set of readers, plots, and operations rather than a higher-level, domain-specific set of capabilities when providing solutions to the scientific community. This strategy yields a higher return on investment, but increases complexity for the user community.
As larger, more diverse teams of scientists become more commonplace, they require applications tuned to get the most out of a heavily utilized and resource-constrained distributed HPC environment. Standard methods of visualization and data sharing pose significant challenges, detracting from users' focus on scientific inquiry. In this presentation I will highlight three new capabilities under development within VisIt to address these needs, which enable domain scientists to refocus their efforts on more productive endeavors. These features include tailored visualization using a new PySide/PyQt infrastructure, a new parallel analysis framework supporting Python & R scripting, and a collaboration suite that allows sharing and communicating among a variety of display media, from mobile devices to visualization clusters. The goal is to enhance the experience of domain scientists by streamlining their work environment, providing easy access to a complex set of resources, and enabling collaboration, sharing, and communication among a diverse team.

Bio: Hari Krishnan graduated with his Ph.D. in computer science and works for the visualization and graphics group as a computer systems engineer at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization on HPC platforms and many-core architectures. He leads the development effort on several HPC-related projects, which include research on new visualization methods, optimizing scaling and performance on Cray machines, working on data-model-optimized I/O libraries, and enabling remote workflow services. He is also an active developer on several major open source projects, which include VisIt, NiCE, and H5hut, and has developed plugins for Fiji/ImageJ.

May 20, 2014 - Weiran Sun: A Spectral Method for Linear Half-Space Kinetic Equations

Abstract sent on behalf of the speaker: Half-space equations naturally arise in boundary layer analysis of kinetic equations.
In this talk we will present a unified proof for the well-posedness of a class of linear half-space equations with general incoming data. We will also show a spectral method to numerically resolve these types of equations in a systematic way. Our main strategy in both analysis and numerics includes three steps: adding damping terms to the original half-space equation, using an inf-sup argument and even-odd decomposition to establish the well-posedness of the damped equation, and then recovering solutions to the original half-space equation. The accuracy of the damped equation is shown to be quasi-optimal, and the numerical error of approximations to the original equation is controlled by that of the damped equation. Numerical examples are shown for the isotropic neutron transport equation and the linearized BGK equation. This is joint work with Qin Li and Jianfeng Lu.

May 14, 2014 - Michael Bauer: Programming Distributed Heterogeneous Architectures with Logical Regions

Abstract and Bio sent on behalf of the speaker: Modern supercomputers now encompass both heterogeneous processors and deep, complex memory hierarchies. Programming these machines currently requires expertise in an eclectic collection of tools (MPI, OpenMP, CUDA, etc.) that primarily focus on describing parallelism while placing the burden of data movement on the programmer. Legion is an alternative approach that provides extensive support for describing the structure of program data through logical regions. Logical regions can be dynamically partitioned into sub-regions, giving applications an explicit mechanism for directly conveying information about locality and independence to the Legion runtime. Using this information, Legion automatically extracts task parallelism and orchestrates data movement through the memory hierarchy.
Time permitting, we will discuss results from several applications, including a port of S3D, a production combustion simulation running on Titan, the Department of Energy's current flagship supercomputer.

Bio: Michael Bauer is a sixth-year PhD student in computer science at Stanford University. His interests include the design and implementation of programming systems for supercomputers and distributed systems.

Abstract sent on behalf of the speaker: Recent work in uncertainty quantification (UQ) has made it feasible to compute the statistical uncertainties for mathematical models in physics, biology, and engineering applications, offering added insight into how the model relates to the measurement data it represents. This talk focuses on two issues related to the reliability of UQ methods for model calibration in practice. The first issue concerns calibration of models having discrepancies with respect to the phenomena they model when these discrepancies violate commonly employed statistical assumptions used for simplifying computation. Using data from a vibrating beam as a case study, I will illustrate how these discrepancies can limit the accuracy of predictive simulation and discuss some approaches for reducing the impact of these limitations. The second issue concerns verifying the accurate implementation of computational algorithms for solving inverse problems in UQ. In this context, verification is particularly important, as the nature of the computational results makes detection of subtle implementation errors unlikely. I will present a collaboratively developed computational framework for verification of statistical inverse problem solvers and present examples of its use to verify the Markov chain Monte Carlo (MCMC) based routines in the QUESO C++ library.
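The spirit of such verification can be shown with a toy example: sample a target density whose moments are known analytically, then check the sampler against them. This is an illustrative random-walk Metropolis sketch in Python, not QUESO's C++ implementation; the target, step size, and chain length are all invented for the demonstration.

```python
# Toy MCMC verification: draw samples from a standard normal with a
# random-walk Metropolis sampler, then compare the chain's moments
# against the known analytic values (mean 0, variance 1).
import math
import random

def metropolis(logpdf, x0, n, step=1.0, seed=0):
    """Random-walk Metropolis: propose x + N(0, step), accept with
    probability min(1, pi(y)/pi(x))."""
    rng = random.Random(seed)
    x, lp = x0, logpdf(x0)
    out = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = logpdf(y)
        if rng.random() < math.exp(min(0.0, lq - lp)):
            x, lp = y, lq
        out.append(x)
    return out

def log_std_normal(x):
    return -0.5 * x * x          # unnormalized log-density of N(0, 1)

chain = metropolis(log_std_normal, x0=3.0, n=50000)
burned = chain[1000:]            # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
```

A subtle bug (e.g., a sign error in the acceptance ratio) would push the empirical mean or variance far from the analytic values, which is exactly the kind of discrepancy a statistical verification framework is designed to flag.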
May 5, 2014 - Abhishek Kumar: Multiscale modeling of polycrystalline material for optimized property

May 1, 2014 - Eric Chung: Staggered Discontinuous Galerkin Methods

ABSTRACT FROM SPEAKER: In this talk, we will present the staggered discontinuous Galerkin methods. These methods are based on piecewise polynomial approximation on staggered grids. The basis functions have to be carefully designed so that some compatibility conditions are satisfied. Moreover, the use of staggered grids brings some advantages, such as optimal convergence and conservation. We will discuss the basic methodologies and applications to wave propagation and fluid flows.

April 7, 2014 - Tom Scogland: Runtime Adaptation for Autonomic Heterogeneous Computing

Heterogeneity is increasing at all levels of computing, certainly with the rise of general-purpose computing with GPUs in everything from phones to supercomputers. More quietly, it is increasing with the rise of NUMA systems, hierarchical caching, OS noise, and a myriad of other factors. As heterogeneity becomes a fact of life at every level of computing, efficiently managing heterogeneous compute resources is becoming a critical task. In order to make the problem tractable, we must develop methods and systems that allow software to adapt to the hardware it finds within a given node at runtime. The goal is to make the complex functions of heterogeneous computing autonomic, handling load balancing, memory coherence, and other performance-critical factors in the runtime. This talk will discuss my research into this area, including the design of a work-sharing construct for CPU and GPU resources in OpenMP and automated memory reshaping/re-mapping for locality. Dr.
Scogland is a candidate for a postdoctoral position with the Computer Science Research Group.

Recent experiments have shown that junctions consisting of individual single-molecule magnets (SMMs) bridged between two electrodes can be fabricated in three-terminal devices, and that the characteristic magnetic anisotropy of the SMMs can be affected by electrons tunneling through the molecule. Vibrational modes of the SMM can couple to electronic charge and spin degrees of freedom, and this coupling also influences the magnetic and transport properties of the SMM. The effect of electron-phonon coupling on transport has been extensively studied in small molecules, but not yet for junctions of SMMs. The goals of this talk will be two-fold: to present a novel approach for studying the effects of this electron-phonon coupling in transport through SMMs that utilizes both density functional theory calculations and model Hamiltonian construction and analysis, and to present a software framework based on this hybrid approach for the simulation of transport across user-defined SMMs. The results of these simulations will indicate a characteristic suppression of the current at low energies that is strongly dependent on the overall electron-phonon coupling strength and the number of molecular vibrational modes considered. Mr. McCaskey is a candidate for a graduate position in the Computer Science Research Group.

March 26, 2014 - Steven Wise: Convergence of a Mixed FEM for a Cahn-Hilliard-Stokes System

Abstract and Bio sent on behalf of the speaker: Co-Authors: Amanda Diegel and Xiaobing Feng

Abstract: In this talk I will describe a mixed finite element method for a modified Cahn-Hilliard equation coupled with a non-steady Darcy-Stokes flow that models phase separation and coupled fluid flow in immiscible binary fluids and di-block copolymer melts. I will focus on both numerical implementation issues for the scheme and the convergence analysis.
The time discretization is based on a convex splitting of the energy of the equation. I will show that our scheme is unconditionally energy stable with respect to a spatially discrete analogue of the continuous free energy of the system and unconditionally uniquely solvable. We can show, in addition, that the phase variable is bounded in L^\infty(0,T;L^\infty) and the chemical potential is bounded in L^\infty(0,T;L^2), unconditionally in both two and three dimensions, for any finite final time T. In fact, the bounds in such estimates grow only (at most) linearly in T. I will prove that these variables converge with optimal rates in the appropriate energy norms in both two and three dimensions. Finally, I will discuss some extensions of the scheme to approximate solutions for diffuse interface flow models with large differences in density.

Bio: Steven Wise is an associate professor of mathematics at the University of Tennessee. He specializes in fast adaptive nonlinear algebraic solvers for numerical PDE, numerical analysis, and scientific computing more broadly. Before coming to the University of Tennessee, he was a postdoc and visiting assistant professor of mathematics and biomedical engineering at the University of California, Irvine. He earned a PhD in engineering physics from the University of Virginia in 2003.

March 18, 2014 - Zhiwen Zhang: A Dynamically Bi-Orthogonal Method for Time-Dependent Stochastic Partial Differential Equations

We propose a dynamically bi-orthogonal method (DyBO) to study time-dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. It is well known that the Karhunen-Loeve (KL) expansion minimizes the total mean squared error and gives the sparsest representation of stochastic solutions.
However, the computation of the KL expansion could be quite expensive since we need to form a covariance matrix and solve a large-scale eigenvalue problem. In this talk, we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Unlike other reduced model methods, our method constructs the reduced basis on the fly without the need to form the covariance matrix or to compute its eigen-decomposition. We further present an adaptive strategy to dynamically remove or add modes, perform a detailed complexity analysis, and discuss various generalizations of this approach. Several numerical experiments will be provided to demonstrate the effectiveness of the DyBO method.

Bio: Zhiwen Zhang is a postdoctoral scholar in the Department of Computing and Mathematical Sciences, California Institute of Technology. He graduated from the Department of Mathematical Sciences, Tsinghua University in 2011, where he was awarded the degree of Ph.D. in Applied Mathematics. From 2008 to 2009, he studied at the University of Wisconsin-Madison as a visiting student. His research interests lie in the applied analysis and numerical computation of problems arising from quantum chemistry, wave propagation, porous media, cell evolution, Bayesian updating, stochastic fluid dynamics and random heterogeneous media.

March 4, 2014 - David Seal: Beyond the Method of Lines Formulation: Building Spatial Derivatives into the Temporal Integrator

Abstract: High-order solvers for hyperbolic conservation laws often fall into two disparate categories. On one hand, the method of lines formulation starts by discretizing the spatial variables, and then a system of ODEs is solved using an appropriate time integrator. On the other hand, Lax-Wendroff discretizations immediately convert Taylor series in time to discrete spatial derivatives.
In this talk, we present generalizations of these methods, including high-order discontinuous Galerkin (DG) methods based on multiderivative time integrators, as well as high-order finite difference weighted essentially non-oscillatory (WENO) methods based on the Picard Integral Formulation (PIF) of the conservation law. Multiderivative time integrators are extensions of Runge-Kutta and Taylor methods. They reduce the overall storage required for a Runge-Kutta method, and they introduce flexibility to the Taylor series in time methods by allowing for new coefficients to be used at various stages. In the multiderivative DG method, "modified fluxes" are used to define high-order Riemann problems, which are similar to those defined in the generalized Riemann problem solvers incorporated in the Arbitrary DERivative (ADER) methods. The finite difference WENO method is based on a Picard Integral Formulation of the PDE, where we first integrate in time, and then work on discretizing the temporal integral. The present formulation is automatically mass conservative, and therefore it introduces the possibility of modifying finite difference fluxes for the purpose of accomplishing tasks such as positivity preservation, or reducing the number of expensive non-linear WENO reconstructions. For now, we present results for a single-step version of the PIF-WENO method, which lends itself to incorporating adaptive mesh refinement technology. Results for one- and two-dimensional conservation laws are presented, and they indicate that the new methods compete well with current state-of-the-art technology.

Microbial communities populate and shape diverse ecological niches within natural environments. The physiology of organisms in natural consortia has been studied with community proteomics. However, little is known about how free-living microorganisms regulate protein activities through post-translational modifications (PTMs).
Here, we harnessed high-performance mass spectrometry and supercomputing for identification and quantification of a broad range of PTMs (including hydroxylation, methylation, citrullination, acetylation, phosphorylation, methylthiolation, S-nitrosylation, and nitration) in microorganisms. Using an E. coli proteome as a benchmark, we identified more than 5,000 PTM events of diverse types and a large number of modified proteins that carried multiple types of PTMs. We applied this demonstrated approach to profiling PTMs in two growth stages of a natural microbial community growing in the acid mine drainage environment. We found that multi-type, multi-site protein modifications are highly prevalent in free-living microorganisms. A large number of proteins involved in various biological processes were dynamically modified during the community succession, indicating that dynamic protein modification might play an important role in organismal response to changing environmental conditions. Furthermore, we found that closely related but ecologically differentiated bacteria harbored remarkably divergent PTM patterns between their orthologous proteins, implying that PTM divergence could be a molecular mechanism underlying their phenotypic diversities. We also quantified fractional occupancy for thousands of PTM events. The findings of this study should help unravel the role of PTMs in microbial adaptation, evolution and ecology.

February 14, 2014 - Celia E. Shiau: Probing fish-microbe interface for environmental assessment of clean energy

To preserve wildlife and natural resources for future generations, we face the grand challenge of effectively assessing and predicting the impact of current and future energy use. My overall goal is to probe the microbiome and host-microbe interface of fish populations, in order to evaluate environmental stress on aquatic life and resources.
Current understanding of aquatic microbes in fresh and salt water is centered on free-living bacteria (independent of a host). I will discuss my work on the experimentally tractable fish model (Danio rerio) that can be applied to investigate the interaction between microbiota, host health, and environmental toxicants (such as mercury and other metalloids), and the aims of my Liane Russell fellowship research program. The findings will provide a framework for studies of other fish species, leveraging advanced imaging, metagenomics, bioinformatics, and neutron scattering. The proposed study promises to inform the potential use of fish microbes to solve energy and environmental challenges, thereby providing means for critical assessment of global energy impact.

February 6, 2014 - Susan Janiszewski: 3-connected, claw-free, generalized net-free graphs are hamiltonian

Given a family $\mathcal{F} = \{H_1, H_2, \dots, H_k\}$ of graphs, we say that a graph $G$ is $\mathcal{F}$-free if $G$ contains no subgraph isomorphic to any $H_i$, $i = 1,2,\dots, k$. The graphs in the set $\mathcal{F}$ are known as {\it forbidden subgraphs}. The main goal of this dissertation is to further classify pairs of forbidden subgraphs that imply a 3-connected graph is hamiltonian. First, the number of possible forbidden pairs is reduced by presenting families of graphs that are 3-connected and not hamiltonian. Of particular interest is the graph $K_{1,3}$, also known as the {\it claw}, as we show that it must be included in any forbidden pair. Secondly, we show that 3-connected, $\{K_{1,3}, N_{i,j,0}\}$-free graphs are hamiltonian for $i,j \ne 0$, $i+j \le 9$, and that 3-connected, $\{K_{1,3}, N_{3,3,3}\}$-free graphs are hamiltonian, where $N_{i,j,k}$, known as the {\it generalized net}, is the graph obtained by rooting vertex-disjoint paths of length $i$, $j$, and $k$ at the vertices of a triangle.
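The claw-free condition is easy to check computationally: a graph contains an induced claw $K_{1,3}$ exactly when some vertex has three pairwise non-adjacent neighbors. The following brute-force sketch (illustrative only, not part of the dissertation) tests this on small adjacency-set graphs:

```python
# Brute-force induced-claw test: K_{1,3} appears as an induced subgraph
# iff some vertex has three pairwise non-adjacent neighbors.
from itertools import combinations

def has_claw(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    for v, nbrs in adj.items():
        for a, b, c in combinations(sorted(nbrs), 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return True
    return False

# K_{1,3} itself contains a claw; the complete graph K_4 is claw-free.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
```

This naive check costs O(n * d^3) for maximum degree d, which is ample for verifying small examples of the forbidden-subgraph hypotheses.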
These results combined with previously known results give a complete classification of generalized nets such that claw-free, net-free implies a 3-connected graph is hamiltonian. Abstract and Bio sent on behalf of the speaker: The semi-Lagrangian (SL) scheme for transport problems has gained increasing popularity in the computational science community due to its attractive properties. For example, the SL scheme, compared with the Eulerian approach, allows extra-large time steps by incorporating a characteristics-tracing mechanism, hence achieving great computational efficiency. In this talk, we will introduce a family of dimensional-splitting high-order SL methods coupled with high-order finite difference weighted essentially non-oscillatory (WENO) procedures and finite element discontinuous Galerkin (DG) methods. By performing dimensional splitting, the multi-dimensional problem is decoupled into a sequence of 1-D problems, which are much easier to solve numerically in the SL setting. The proposed SL schemes are applied to the Vlasov model arising from plasma physics and to global transport problems based on the cubed-sphere geometry from the operational climate model. We further introduce the integral deferred correction (IDC) framework to reduce the dimensional-splitting errors. The proposed algorithms have been extensively tested and benchmarked with classical problems in plasma physics, such as Landau damping, the two-stream instability, and the Kelvin-Helmholtz instability, and with global transport problems on the cubed-sphere. This is joint work with Andrew Christlieb, Maureen Morton, Ram Nair and Jing-Mei Qiu. January 28, 2014 - Jeff Haack: Applications of computational kinetic theory Abstract and Bio sent on behalf of the speaker: Kinetic theory describes the evolution of a complex system of a large number of interacting particles.
These models are used to describe systems where the characteristic scales for interaction between particles and the characteristic length scales are similar. In this talk, I will discuss numerical computation of several applications of kinetic theory, including rarefied gas dynamics with applications towards re-entry, kinetic models for plasmas, and a biological model for swarm behavior. As kinetic models often involve a high-dimensional phase space as well as an integral operator modeling particle interactions, simulations have been impractical in many settings. However, recent advances in massively parallel computing are very well suited to solving kinetic models, and I will discuss how these resources are used in computing kinetic models and the new difficulties that arise when computing on these architectures. January 24, 2014 - Roman Lysecky: Data-driven Design Methods and Optimization for Adaptable High-Performance Systems Abstract and Bio sent on behalf of the speaker: Research has demonstrated that runtime optimization and adaptation methods can achieve performance improvements over design-time optimized system implementations. Furthermore, modern computing applications require a large degree of configurability and adaptability to operate on a variety of data inputs whose characteristics may change over time. In this talk, we highlight two runtime optimization methods for adaptable computing systems. We first highlight the use of runtime profiling and system-level performance and power estimation methods for estimating the speedup and power consumption of dynamically reconfigurable systems. We evaluate the accuracy and fidelity of the online estimation framework for dynamic configuration of computational kernels with the goals of both maximizing performance and minimizing system power consumption. We further present an overview of the design framework and runtime reconfiguration methods supporting data-adaptable reconfigurable systems.
Data-adaptable reconfigurable systems enable a flexible runtime implementation in which a system can transition the execution of tasks between different execution modalities, e.g., hardware and software implementations, while simultaneously continuing to process data during the transition. Bio: Roman Lysecky is an Associate Professor of Electrical and Computer Engineering at the University of Arizona. He received his B.S., M.S., and Ph.D. in Computer Science from the University of California, Riverside in 1999, 2000, and 2005, respectively. His research interests focus on embedded systems, with emphasis on embedded system security, non-intrusive system observation methods for in-situ analysis of complex hardware and software behavior, runtime optimization methods, and design methods for precisely timed systems with applications in safety-critical and mobile health systems. He was awarded the Outstanding Ph.D. Dissertation Award from the European Design and Automation Association (EDAA) in 2006 for New Directions in Embedded Systems. He received a CAREER award from the National Science Foundation in 2009 and four Best Paper Awards from the ACM/IEEE International Conference on Hardware-Software Codesign and System Synthesis (CODES+ISSS), the ACM/IEEE Design Automation and Test in Europe Conference (DATE), the IEEE International Conference on Engineering of Computer-Based Systems (ECBS), and the International Conference on Mobile Ubiquitous Computing, Systems, Services (UBICOMM). He has coauthored five textbooks on VHDL, Verilog, C, C++, and Java programming. He is an inventor on one US patent. In 2008 and 2013, he received an award for Excellence at the Student Interface from the College of Engineering at the University of Arizona.
January 21, 2014 - Tuoc Van Phan: Some Aspects in Nonlinear Partial Differential Equations and Nonlinear Dynamics This talk contains two parts: Part I: We discuss the Shigesada-Kawasaki-Teramoto system of cross-diffusion equations of two competing species in population dynamics. We show that if there is self-diffusion in one species and no cross-diffusion in the other, then the system has a unique smooth solution for all time in bounded domains of any dimension. We obtain this result by deriving global W^{1,p}-estimates of Calderón-Zygmund type for a class of nonlinear reaction-diffusion equations with self-diffusion. These estimates are achieved by employing the Caffarelli-Peral perturbation technique together with a new two-parameter scaling argument. Part II: We study a class of nonlinear Schrödinger equations in one spatial dimension with a symmetric double-well potential. We derive and justify a normal form reduction of the nonlinear Schrödinger equation for a general pitchfork bifurcation of the symmetric bound state. We prove persistence of the normal form dynamics for both supercritical and subcritical pitchfork bifurcations in the time-dependent solutions of the nonlinear Schrödinger equation over long but finite time intervals. The talk is based on my joint work with Luan Hoang (Texas Tech University), Truyen Nguyen (University of Akron), and Dmitry Pelinovsky (McMaster University). January 17, 2014 - John Dolbow: Recent advances in embedded finite element methods This seminar will present recent advances in an emerging class of embedded finite element methods for evolving interface problems in mechanics. By embedded, we refer to methods that allow for the interface geometry to be arbitrarily located with respect to the finite element mesh. This relaxation between mesh and geometry obviates the need for remeshing strategies in many cases and greatly facilitates adaptivity in others.
The approach shares features with finite-difference methods for embedded boundaries, but within a variational setting that facilitates error and stability analysis. We focus attention on a weighted form of Nitsche's method that allows interfacial conditions to be robustly enforced. Classically, Nitsche's method provides a means to weakly impose boundary conditions for Galerkin-based formulations. With regard to embedded interface problems, some care is needed to ensure that the method remains well behaved in varied settings, ranging from interfacial configurations resulting in arbitrarily small elements to problems exhibiting large contrast. We illustrate how the weighting of the interfacial terms can be selected to both guarantee stability and guard against ill-conditioning. Various benchmark problems for the method are then presented. January 16, 2014 - Aziz Takhirov: Numerical analysis of the flows in Pebble Bed Geometries Flows in complex geometries intermediate between free flows and porous media flows occur in pebble bed reactors and other industrial processes. Studies of Brinkman models have consistently shown that, even for simplified settings, accurate prediction of essential flow features depends on the impossible problem of meshing the pores. We discuss a new model to understand the flow and its properties in these geometries. January 13, 2014 - Pablo Seleson: Bridging Scales in Materials with Mesoscopic Models Complex systems are often characterized by processes occurring at different spatial and temporal scales. Accurate predictions of quantities of interest in such systems are many times only feasible through multiscale modeling. In this talk, I will discuss the use of mesoscopic models as a means to bridge disparate scales in materials. Examples of mesoscopic models include nonlocal continuum models, based on integro-differential equations, that generalize classical continuum models based on partial differential equations.
Nonlocal models possess length scales, which can be controlled for multiscale modeling. I will present two nonlocal models: peridynamics and nonlocal diffusion, and demonstrate how the inherent length scales in these models make it possible to bridge scales in materials. January 9, 2014 - Gung-Min Gie: Motion of fluids in the presence of a boundary In most practical applications of fluid mechanics, it is the interaction of the fluid with the boundary that is most critical to understanding the behavior of the fluid. Physically important parameters, such as the lift and drag of a wing, are determined by the sharp transition the air makes from being at rest on the wing to flowing freely around the airplane near the wing. Mathematically, the behavior of such flows at small viscosity is modeled by the Navier-Stokes equations. In this talk, we discuss some recent results on the boundary layers of the Navier-Stokes equations under various boundary conditions. January 6, 2014 - Christine Klymko: Centrality and Communicability Measures in Complex Networks: Analysis and Algorithms Complex systems are ubiquitous throughout the world, both in nature and within man-made structures. Over the past decade, large amounts of network data have become available and, correspondingly, the analysis of complex networks has become increasingly important. One of the fundamental questions in this analysis is to determine the most important elements in a given network. Measures of node importance are usually referred to as node centrality, and measures of how well two nodes are able to communicate with each other are referred to as the communicability between pairs of nodes. Many measures of node centrality and communicability have been proposed over the years. Here, we focus on the analysis and computation of centrality and communicability measures based on matrix functions.
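For a small graph, the matrix-function measures discussed here can be computed directly. The sketch below (my own toy example on a 4-node path graph, not from the talk) computes total communicability as the row sums of $e^A$ and Katz centrality as the row sums of $(I-\alpha A)^{-1}$, using SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of the path graph 0-1-2-3 (toy example)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
ones = np.ones(A.shape[0])

# Total communicability of node i: i-th row sum of exp(A)
tc = expm(A) @ ones

# Total network communicability: sum of all node communicabilities
tnc = tc.sum()

# Katz centrality: row sums of (I - alpha*A)^{-1}, with alpha < 1/lambda_max
alpha = 0.5 / np.max(np.abs(np.linalg.eigvalsh(A)))
katz = np.linalg.solve(np.eye(A.shape[0]) - alpha * A, ones)
```

As expected, both measures rank the interior nodes of the path above the endpoints, and the two endpoints tie by symmetry.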
First, we examine a node centrality measure based on the notion of total communicability, defined in terms of the row sums of the exponential of the adjacency matrix of the network. We argue that this is a natural metric for ranking nodes in a network, and we point out that it can be computed very rapidly even in the case of large networks. Furthermore, we propose a measure of the total network communicability, based on the total sum of node communicabilities, as a useful measure of the connectivity of the network as a whole. Next, we compare various parameterized centrality rankings based on the matrix exponential and matrix resolvent with degree and eigenvector centrality. The centrality measures we consider are exponential and resolvent subgraph centrality (defined in terms of the diagonal entries of the matrix exponential and matrix resolvent, respectively), total communicability, and Katz centrality (defined in terms of the row sums of the matrix resolvent). We demonstrate an analytical relationship between these rankings and the degree and subgraph centrality rankings, which helps to explain the observed robustness of these rankings on many real-world networks, even though the scores produced by the centrality measures are not stable. December 19, 2013 - Adam Larios: New Techniques for Large-Scale Parallel Turbulence Simulations at High Reynolds Numbers Abstract sent on behalf of the speaker: Two techniques have recently been developed to handle large-scale simulations of turbulent flows. The first is a nonlinear, LES-type viscosity, which is based on the numerical violation of the local energy balance of the Navier-Stokes equations. This technique enjoys a numerical dissipation which remains vanishingly small in regions where the solution is smooth, only damping the flow in regions of numerical shock, allowing for increased accuracy at reduced computational cost.
The second is a direction-splitting technique for projection methods, which unlocks new parallelism previously unexploited in fluid flows, and enables very fast, large-scale turbulence simulations. December 16, 2013 - Tuoc Van Phan: Some Aspects in Nonlinear Partial Differential Equations and Nonlinear Dynamics Abstract sent on behalf of the speaker: This talk contains two parts: Part I: We discuss the Shigesada-Kawasaki-Teramoto system of cross-diffusion equations of two competing species in population dynamics. We show that if there is self-diffusion in one species and no cross-diffusion in the other, then the system has a unique smooth solution for all time in bounded domains of any dimension. We obtain this result by deriving global W^{1,p}-estimates of Calderón-Zygmund type for a class of nonlinear reaction-diffusion equations with self-diffusion. These estimates are achieved by employing the Caffarelli-Peral perturbation technique together with a new two-parameter scaling argument. Part II: We study a class of nonlinear Schrödinger equations in one spatial dimension with a symmetric double-well potential. We derive and justify a normal form reduction of the nonlinear Schrödinger equation for a general pitchfork bifurcation of the symmetric bound state. We prove persistence of the normal form dynamics for both supercritical and subcritical pitchfork bifurcations in the time-dependent solutions of the nonlinear Schrödinger equation over long but finite time intervals. The talk is based on my joint work with Luan Hoang (Texas Tech University), Truyen Nguyen (University of Akron), and Dmitry Pelinovsky (McMaster University). December 13, 2013 - Rich Lehoucq: A Computational Spectral Graph Theory Tutorial My presentation considers the research question of whether existing algorithms and software for the large-scale sparse eigenvalue problem can be applied to problems in spectral graph theory.
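As a concrete instance of pointing standard sparse eigensolvers at a spectral graph theory problem, the sketch below (my own toy example, not from the tutorial) computes the two smallest Laplacian eigenpairs of a path graph with SciPy's Lanczos-based `eigsh` in shift-invert mode; the eigenvector for the second-smallest eigenvalue (the Fiedler vector) changes sign across the natural bisection of the graph:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 10
# Sparse adjacency matrix of a path graph on n nodes
A = sp.diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1], format="csr")
# Combinatorial graph Laplacian L = D - A
deg = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(deg) - A

# Two eigenvalues of L closest to zero, via shift-invert Lanczos
# (sigma slightly negative so L - sigma*I is nonsingular)
vals, vecs = eigsh(L, k=2, sigma=-1e-3, which="LM")
order = np.argsort(vals)
vals, vecs = vals[order], vecs[:, order]

fiedler = vecs[:, 1]  # eigenvector of the second-smallest eigenvalue
```

The smallest eigenvalue is zero (constant eigenvector for a connected graph), and the Fiedler vector's sign pattern splits the path into its two halves.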
I first provide an introduction to several problems involving spectral graph theory. I then provide a review of several different algorithms for the large-scale eigenvalue problem and briefly introduce the Anasazi package of eigensolvers. December 10, 2013 - Jingwei Hu: Fast algorithms for quantum Boltzmann collision operators The quantum Boltzmann equation describes the non-equilibrium dynamics of a quantum system consisting of bosons or fermions. The most prominent feature of the equation is a high-dimensional integral operator modeling particle collisions, whose nonlinear and nonlocal structure poses a great challenge for numerical simulation. I will introduce two fast algorithms for the quantum Boltzmann collision operator. The first one is a quadrature-based solver specifically designed for the collision operator in reduced energy space. Compared to the cubic complexity of direct evaluation, our algorithm runs in linear complexity (optimal up to a logarithmic factor). The second one accelerates the computation of the full phase-space collision operator. It is a spectral algorithm based on a special low-rank decomposition of the collision kernel. Numerical examples, including an application to semiconductor device modeling, are presented to illustrate the efficiency and accuracy of the proposed algorithms. December 6, 2013 - Jeongnim Kim: Analysis of QMC Applications on Petascale Computers Continuum Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. The multiple forms of parallelism afforded by QMC algorithms and their high compute-to-communication ratio make them ideal candidates for acceleration in the multi/many-core paradigm, as demonstrated by the performance of QMCPACK on various high-performance computing (HPC) platforms including Titan (Cray XK7) and Mira (IBM BlueGene Q).
The changes expected on future architectures - orders of magnitude higher parallelism, hierarchical memory and communication, and heterogeneous nodes - pose great challenges to application developers but also present opportunities to transform applications to tackle new classes of problems. This talk presents core QMC algorithms and their implementations in QMCPACK on the HPC systems of today. The speaker will discuss the performance of typical QMC workloads to elucidate the critical issues to be resolved for QMC to fully exploit the increasing computing power of forthcoming HPC systems. December 3, 2013 - Terry Haut: Advances on an asymptotic parallel-in-time method for highly oscillatory PDEs In this talk, I will first review a recent time-stepping algorithm for nonlinear PDEs that exhibit fast (highly oscillatory) time scales. PDEs of this form arise in many applications of interest, and in particular describe the dynamics of the ocean and atmosphere. The scheme combines asymptotic techniques (which are inexpensive but can have insufficient accuracy) with parallel-in-time methods (which, alone, can yield minimal speedup for equations that exhibit rapid temporal oscillations). Examples are presented on the (1D) rotating shallow water equations in a periodic domain, which demonstrate that significant parallel speedup is achievable. In order to implement this time-stepping method for general spatial domains (in 2D and 3D), a key component involves applying the exponential of skew-Hermitian operators. To this end, I will next present a new algorithm for doing so. This method can also be used for solving wave propagation problems, which is of independent interest.
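The appeal of exponentiating a skew-Hermitian operator $S$ can be seen in a small sketch (my own illustration, using a dense random matrix and SciPy's `expm` rather than the talk's algorithm): $e^{\Delta t\,S}$ is unitary, so a time step of $u' = Su$ preserves the solution norm up to roundoff, with no stability constraint tied to the step size:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 20

# Build a skew-Hermitian matrix S (S^H = -S) as i*H for a Hermitian H
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2        # Hermitian
S = 1j * H                      # skew-Hermitian

u0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# One "time step" of u' = S u with a large dt: the propagator exp(dt*S)
# is unitary, so ||u1|| = ||u0|| up to roundoff
dt = 10.0
u1 = expm(dt * S) @ u0
```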
This scheme has several advantages over standard methods, including the absence of any stability constraints in relation to the spatial discretization, and the ability to parallelize the computation in the time variable over as many characteristic wavelengths as resources permit (in addition to any spatial parallelization). I will also present examples on the linear 2D shallow water equations, as well as the 2D (variable-coefficient) wave equation. In these examples, this method (in serial) is 1-2 orders of magnitude faster than both RK4 and the use of Chebyshev polynomials. December 3, 2013 - Galen Shipman: The Compute and Data Environment for Science (CADES) In this talk I will discuss ORNL's Compute and Data Environment for Science. The Compute and Data Environment for Science (CADES) provides R&D with a flexible and elastic compute and data infrastructure. The initial deployment consists of over 5 petabytes of high-performance storage, nearly half a petabyte of scalable NFS storage, and over 1000 compute cores integrated into a high-performance Ethernet and InfiniBand network. This infrastructure, based on OpenStack, provides a customizable compute and data environment for a variety of use cases including large-scale omics databases, data integration and analysis tools, data portals, and modeling/simulation frameworks. These services can be composed to provide end-to-end solutions for specific science domains. Galen Shipman is the Data Systems Architect for the Computing and Computational Sciences Directorate and Director of the Compute and Data Environment for Science at Oak Ridge National Laboratory (ORNL). He is responsible for defining and maintaining an overarching strategy and infrastructure for data storage, data management, and data analysis spanning from research and development to integration, deployment and operations for high-performance and data-intensive computing initiatives at ORNL.
His current work includes addressing many of the data challenges of major facilities such as those of the Spallation Neutron Source (Basic Energy Sciences) and major data centers focusing on Climate Science (Biological and Environmental Research). December 2, 2013 - Wei Ding: Klonos: A Similarity Analysis-Based Tool for Software Porting in High-Performance Computing Porting applications to a new system is a nontrivial job in the HPC field. It is a very time-consuming, labor-intensive process, and the quality of the results depends critically on the experience of the experts involved. In order to ease the porting process, a methodology is proposed to address an important aspect of software porting that receives little attention, namely, planning support. When a scientific application consisting of many subroutines is to be ported, the selection of key subroutines greatly impacts the productivity and overall porting strategy, because these subroutines may represent a significant feature of the code in terms of functionality, code structure, or performance. They may also serve as indicators of the difficulty and amount of effort involved in porting a code to a new platform. The proposed methodology is based on the idea that a set of similar subroutines can be ported with similar strategies and result in a similar-quality porting. By viewing subroutines as data and operator sequences, analogous to DNA sequences, various bioinformatics techniques may be used to conduct the similarity analysis of subroutines while avoiding the NP-complete complexities of other approaches. Other code metrics and cost-model metrics have been adapted for similarity analysis to capture internal code characteristics. Based on these similarity analyses, "Klonos," a tool for software porting, has been created.
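The DNA-sequence analogy can be illustrated with a generic alignment-style similarity measure. The sketch below is not Klonos itself, just Python's `difflib` applied to made-up operator sequences for hypothetical subroutines: similar loops score high, unrelated routines score low, suggesting which subroutines could share a porting strategy:

```python
from difflib import SequenceMatcher

def similarity(ops_a, ops_b):
    """Similarity ratio in [0, 1] between two operator sequences,
    analogous to aligning two DNA sequences."""
    return SequenceMatcher(None, ops_a, ops_b).ratio()

# Hypothetical operator sequences extracted from three subroutines
loop_a = ["load", "load", "mul", "add", "store"]         # y[i] += a[i]*b[i]
loop_b = ["load", "load", "mul", "add", "add", "store"]  # a similar kernel
io_sub = ["open", "read", "parse", "close"]              # unrelated routine

# High similarity between the two loops; near zero against the I/O routine
score_loops = similarity(loop_a, loop_b)
score_io = similarity(loop_a, io_sub)
```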
Experiments show that Klonos is very effective at providing a systematic porting plan that guides users to reuse similar porting strategies for similar code regions during the porting process. November 20, 2013 - Chao Yang: Numerical Algorithms for Solving Nonlinear Eigenvalue Problems in Electronic Structure Calculation The Kohn-Sham density functional theory (KSDFT) is the most widely used theory for studying electronic properties of molecules and solids. The main computational problem in KSDFT is a nonlinear eigenvalue problem in which the matrix Hamiltonian is a function of a number of eigenvectors associated with the smallest eigenvalues. The problem can also be formulated as a constrained energy minimization problem or a nonlinear equation in which the unknown ground-state electron density satisfies a fixed-point map. Significant progress has been made in the last few years on understanding the mathematical properties of this class of problems. Efficient and reliable numerical algorithms have been developed to accelerate the convergence of nonlinear solvers. New methods have also been developed to reduce the computational cost in each step of the iterative solver. We will review some of these developments and discuss additional challenges in large-scale electronic structure calculations. November 15, 2013 - Christian Straube: Simulation of HPDC Infrastructure Attributes High Performance Distributed Computing (HPDC) infrastructures use several data centers, High Performance Computing (HPC) and distributed systems, each built from manifold (often heterogeneous) compute, storage, interconnect, and other specialized subcomponents to provide their capabilities, i.e., well-defined functionality that is exposed to a user or application. Capabilities' quality can be described by attributes, e.g., performance, energy efficiency, or reliability.
Hardware-related modifications, such as clock rate adaptation or interconnect throughput improvement, often induce two groups of effects on these attributes: the (by definition) positive intended effects and the mostly negative but unavoidable side effects. For instance, increasing a typical HPDC infrastructure's redundancy to address short-time breakdown and to improve reliability (a positive intended effect) simultaneously increases energy consumption and degrades performance due to redundancy overhead (negative side effects). In this talk, I present Predictive Modification Effect Analysis (PMEA), which aims at avoiding harmful execution and costly but spare modification exploration by investigating in advance whether the (negative) side effects on attributes will outweigh the (positive) intended effects. The talk covers the fundamental concepts and basic ideas of PMEA and presents its underlying model. The model is straightforward and fosters fast development, even for complex HPDC infrastructures; it handles individual and open sets of attributes and their calculations, and it addresses effect cascading through the entire HPDC infrastructure. Additionally, I will present a prototype of a simulation tool and describe some selected features in detail. Bio: Christian Straube has been a Computer Science Ph.D. student at the Ludwig-Maximilians-University (LMU) in Munich, Germany since January 2012. His research interests include HPDC infrastructure and data center analysis, in particular planning, modification justification, as well as effect outweighing and cascading. During his time as a Ph.D. student, he worked several months at the Leibniz Supercomputing Center, which operates the SuperMUC, a three Petaflop/s system that applies warm-water cooling. Prior to joining LMU as a Ph.D. student, Christian worked for several years in industry and academia as a software engineer and project manager.
He ran his own software engineering company for 10 years, and was (co-)founder of several IT-related start-ups. He received a best paper award for a conference contribution to INFOCOMP 2012 and was subsequently invited as a technical program member of INFOCOMP 2013. Christian holds a Diploma with Distinction in Computer Science from Ludwig-Maximilians-University in Munich with a minor in Medicine. November 12, 2013 - Surya R. Kalidindi: Data Science and Cyberinfrastructure Enabled Development of Advanced Materials Materials with enhanced performance characteristics have served as critical enablers for the successful development of advanced technologies throughout human history, and have contributed immensely to the prosperity and well-being of various nations. Although the core connections between a material's internal structure (i.e., microstructure), its evolution through various manufacturing processes, and its macroscale properties (or performance characteristics) in service are widely acknowledged to exist, establishing this fundamental knowledge base has proven effort-intensive, slow, and very expensive for a number of candidate material systems being explored for advanced technology applications. It is anticipated that the multi-functional performance characteristics of a material are likely to be controlled by a relatively small number of salient features in its microstructure. However, cost-effective validated protocols do not yet exist for fast identification of these salient features and establishment of the desired core knowledge needed for the accelerated design, manufacture and deployment of new materials in advanced technologies. The main impediment arises from the lack of a broadly accepted framework for a rigorous quantification of the material's microstructure, and objective (automated) identification of the salient features in the microstructure that control the properties of interest.
Microstructure Informatics focuses on the development of data science algorithms and computationally efficient protocols capable of mining the essential linkages in large microstructure datasets (both experimental and modeling), and building robust knowledge systems that can be readily accessed, searched, and shared by the broader community. Given the nature of the challenges faced in the design and manufacture of new advanced materials, this emerging interdisciplinary field is ideally positioned to produce a major transformation in the current practices used by materials scientists and engineers. The novel data science tools produced by this emerging field promise to significantly accelerate the design and development of new advanced materials through their increased efficacy in gleaning and blending the disparate knowledge and insights hidden in "big data" gathered from multiple sources (including both experiments and simulations). This presentation outlines specific strategies for data science enabled development of advanced materials, and illustrates key components of the proposed overall strategy with examples. November 11, 2013 - Hermann Härtig: A fast and fault tolerant microkernel-based system for exa-scale computing (FFMK) FFMK is a recently started project funded by DFG's Exascale-Software program. It addresses three key scalability obstacles expected in future exa-scale systems: the vulnerability to system failures due to transient or permanent faults, the performance losses due to imbalances, and the noise due to unpredictable interactions between HPC applications and the operating system.
To this end, we adapt and integrate well-proven technologies, including:
• Microkernel-based operating systems (L4) to eliminate the operating-system noise of feature-heavy all-in-one operating systems and to make kernel influences more deterministic and predictable,
• Erasure-code-protected on-node checkpointing to provide a fast checkpoint and restart mechanism capable of keeping up with a worsening mean time between failures (MTBF), and
• Mathematically sound system management and load-balancing algorithms (MosiX) to adjust the system to the highly dynamic and widely varying requirements of today's and future HPC applications.
FFMK will combine Linux running in a light-weight virtual machine with a special-purpose component for MPI, both running side by side on L4. The objective is to build a fluid, self-organizing platform for applications that require scaling up to exa-scale performance. The talk will explain the assumptions and overall architecture of FFMK and continue by presenting a number of design decisions the team is currently facing. FFMK is a cooperation between Hebrew University's MosiX team, the HPC centers of Berlin and Dresden (ZIB, ZIH), and TU Dresden's operating systems group. Bio: After receiving his PhD from Karlsruhe University on an SMP-related topic, Hermann Härtig led a team at the German National Research Center (GMD) that built BirliX, a Unix lookalike designed to address high security requirements. He then moved to TU Dresden to lead the operating systems chair. His team was among the pioneers in building microkernels of the L4 family (Fiasco, Nova) and systems based on L4 (L4Re, DROPS, NIZZA). L4Re and Fiasco form the OS basis of the SIMKO 3 smart phone. Hermann Härtig is now PI for FFMK.
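The erasure-code-protected checkpointing idea can be illustrated with the simplest possible code: single XOR parity across node-local checkpoint blocks (a toy sketch of the principle, not FFMK's actual scheme). Losing any one block is survivable, because the lost block equals the XOR of the surviving blocks and the parity:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Checkpoint fragments held on four nodes (toy data, equal length)
checkpoints = [b"node0ckp", b"node1ckp", b"node2ckp", b"node3ckp"]
parity = xor_blocks(checkpoints)  # stored on a fifth node

# Node 2 fails: rebuild its fragment from the survivors plus the parity
survivors = [c for i, c in enumerate(checkpoints) if i != 2]
recovered = xor_blocks(survivors + [parity])
```

Real schemes use stronger codes (e.g. Reed-Solomon) to tolerate multiple simultaneous failures, but the recovery algebra is the same idea.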
October 17, 2013 - Marta D'Elia: Fractional differential operators on bounded domains as special cases of nonlocal diffusion operators We analyze a nonlocal diffusion operator having as special cases the fractional Laplacian and fractional differential operators that arise in several applications, e.g., jump processes. In our analysis, a nonlocal vector calculus is exploited to define a weak formulation of the nonlocal problem. We demonstrate that the solution of the nonlocal equation converges to the solution of the fractional Laplacian equation on bounded domains as the nonlocal interactions become infinite. We also introduce Galerkin finite element discretizations of the nonlocal weak formulation and we derive a priori error estimates. Through several numerical examples we illustrate the theoretical results and we show that by solving the nonlocal problem it is possible to obtain accurate approximations of the solutions of fractional differential equations, circumventing the problem of treating infinite-volume constraints. October 15, 2013 - Tommy Janjusic: Framework for Evaluating Dynamic Memory Allocators including a new Equivalence Class based Cache-Conscious Dynamic Memory Allocator Software applications' performance is hindered by a variety of factors, but most notably by the well-known CPU-memory speed gap (often known as the memory wall). This results in the CPU sitting idle, waiting for data to be brought from memory to the processor caches. The addressing used by caches causes non-uniform accesses to the various cache sets. The non-uniformity is due to several reasons, including how different objects are accessed by the code and how the data objects are located in memory. Memory allocators determine where dynamically created objects are placed, thus defining addresses and their mapping to cache locations. It is important to evaluate how different allocators behave with respect to the localities of the created objects.
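The address-to-cache-set mapping that drives these placement effects is simple to compute. A toy sketch (the cache parameters are illustrative, not from the talk) for a set-indexed cache with 64-byte lines and 512 sets:

```python
LINE_SIZE = 64   # bytes per cache line (illustrative)
NUM_SETS = 512   # number of sets in the cache (illustrative)

def cache_set(addr):
    """Set index that an address maps to in a set-indexed cache."""
    return (addr // LINE_SIZE) % NUM_SETS

# Two heap objects whose addresses differ by a multiple of
# LINE_SIZE * NUM_SETS land in the same set and can evict each other,
# even though they are far apart in memory; adjacent lines map to
# adjacent sets.
a = 0x10000
b = a + LINE_SIZE * NUM_SETS * 3  # same set as a
c = a + LINE_SIZE                 # next line: next set
```

An equivalence-class-style allocator can exploit exactly this: by choosing an allocation address modulo LINE_SIZE * NUM_SETS, it steers an object into (or away from) a particular cache set.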
Most allocators use a single attribute of an object, its size, in making allocation decisions. Additional attributes, such as placement with respect to other objects or to a specific cache area, may lead to better use of cache memories. This talk discusses a framework that allows for the development and evaluation of new memory allocation techniques. At the root of the framework is a memory tracing tool called Gleipnir, which provides very detailed information about every memory access and relates it back to source-level objects. Using the traces from Gleipnir, we extended a commonly used cache simulator to generate detailed cache statistics: per function, per data object, and per cache line, and to identify specific data objects that conflict with each other. The utility of the framework is demonstrated with a new memory allocator known as an equivalence class allocator, which allows users to specify, in addition to object size, the cache sets where objects should be placed. We compare this new allocator with two well-known allocators, viz., the Doug Lea and Pool allocators. October 8, 2013 - Sophie Blondel: NAT++: An analysis software for the NEMO experiment The NEMO 3 detector aims to prove that the neutrino is a Majorana particle (i.e. identical to the antineutrino). It is mainly composed of a calorimeter and a wire chamber, the former measuring the time and energy of a particle and the latter reconstructing its track. NEMO 3 has taken data for 5 effective years with an event trigger rate of ~5 Hz, resulting in a total of ~10^9 events to analyze. A C++-based software suite, called NAT++, was created to calibrate and analyze these events. The analysis is mainly based on a time-of-flight calculation, which will be the focus of this presentation. Supplementing this classic analysis, a new tool named gamma-tracking has been developed to improve the reconstruction of gamma energy deposits in the detector. 
The addition of this tool to the analysis pipeline leads to a 30% increase in statistics in certain channels of interest. September 30, 2013 - Eric Barton: Fast Forward Storage and Input/Output (I/O) Conflicting pressures drive the requirements for I/O and storage at exascale. On the one hand, an explosion is anticipated, not only in the size of scientific data models but also in their complexity and in the volume of their attendant metadata. These models require workflows that integrate analysis and visualization, and new object-oriented I/O Application Programming Interfaces (APIs) to make application development tractable and allow compute to be moved to the data or data to the compute as appropriate. On the other hand, economic realities driving the architecture and reliability of the underlying hardware will push the limits on horizontal scale, introduce unavoidable jitter and make failure the norm. The I/O system will have to handle these as transparently as possible while providing efficient, sustained and predictable performance. This talk will describe the research underway in the Department of Energy (DOE) Fast Forward Project to prototype a complete exascale I/O stack including, at the top level, an object-oriented I/O API based on HDF5; in the middle, a Burst Buffer and data layout optimizer based on PLFS (A Checkpoint Filesystem for Parallel Applications); and at the bottom, DAOs (Data Access Objects), transactional object storage based on Lustre. September 25, 2013 - James Beyer: OPENMP vs OPENACC A brief introduction to two accelerator programming directive sets with a common heritage: OpenACC 2.0 and OpenMP 4.0. After introducing the two directive sets, a side-by-side comparison of available features along with code examples will be presented to help developers understand their options as they begin programming for both Nvidia and Intel accelerated machines. 
September 25, 2013 - Michael Wolfe: OPENACC 2.X AND BEYOND The OpenACC API is designed to support high-level, performance portable, programming across a range of host+accelerator target systems. This presentation will start with a short discussion of that range, which provides a context for the features and limitations of the specification. Some important additions that were included in OpenACC 2.0 will be highlighted. New features currently under discussion for future versions of the OpenACC API and a summary of the expected timeline will be presented. September 23, 2013 - Jun Jia: Accelerating time integration using spectral deferred correction In this talk, we illustrate how to use the spectral deferred correction (SDC) to improve the time integration for scientific simulations. The SDC method combines a Picard integral formulation of the error equation, spectral integration and a user chosen low-order time marching method to form stable methods with arbitrarily high formal order of accuracy in time. The method could be either explicit or implicit, and it also provides the ability to adopt operator splitting while maintaining high formal order. At the end of the talk, we will show some applications using this technique. September 19, 2013 - Kenny Gross: Energy Aware Data Center (EADC) Innovations: Save Energy, Boost Performance The global electricity consumption for enterprise and high-performance computing data centers continues to grow much faster than Moore's Law as data centers push into emerging markets, and as developed countries see explosive growth in computing demand as well as supraexponential growth in demand for exabyte (and now zettabyte) storage systems. The USDOE reported that data centers now consume 38 gigawatts of electricity worldwide, a number that is growing exponentially even during times of global economic slowdowns. 
Oracle has developed a suite of novel algorithmic innovations that can be applied nonintrusively to any IT servers and substantially reduce the energy usage and thermal dissipation of the IT assets (saving additional energy for the data center HVAC systems), while significantly boosting performance (and hence Return on Assets) for the IT assets, thereby avoiding additional server purchases (which would consume more energy). The key enabler for this suite of algorithmic innovations is Oracle's Intelligent Power Monitoring (IPM) telemetry harness (implemented in software; no hardware mods anywhere in the data center). IPM, when coupled with advanced pattern recognition, identifies and quantifies three significant nonlinear (heretofore 'invisible') energy-wastage mechanisms that are present in all enterprise and HPC computing assets today, including in low-PUE high-efficiency data centers: 1) leakage power in the CPUs (grows exponentially with CPU temperature), 2) aggregate fan-motor power inside the servers (grows with the cube of fan RPM), and 3) substantial degradation of server energy efficiency by low-level ambient vibrations in the data center racks. This presentation shows how continuous system-internal telemetry, coupled with advanced pattern recognition technology that was developed for nuclear reactor applications by the presenter and his team back at Argonne National Lab in the 1990s, is significantly cutting energy utilization while boosting performance for enterprise and HPC computing assets. Speaker Bio Info: ------------------ Kenny Gross is a Distinguished Engineer for Oracle and team leader for the System Dynamics Characterization and Control team in Oracle's Physical Sciences Research Center in San Diego. 
Kenny specializes in advanced pattern recognition, continuous system telemetry, and dynamic system characterization for improving the reliability, availability, and energy efficiency of enterprise computing systems and of the datacenters in which the systems are deployed. Kenny has 220 US patents issued and others pending, 180 scientific publications, and was awarded a 1998 R&D 100 Award, for one of the top 100 technological innovations of that year, for an advanced statistical pattern recognition technique that was originally developed for nuclear plant applications and is now being used in a variety of applications to improve the quality of service, availability, and energy efficiency of enterprise and HPC computer servers. Kenny earned his Ph.D. in nuclear engineering from the U. of Cincinnati in 1977. September 17, 2013 - Damien Lebrun-Grandie: Simulation of thermo-mechanical contact between fuel pellets and cladding in UO2 nuclear fuel rods As the fission process heats up the fuel rods, the UO2 pellets stacked on top of each other swell both radially and axially, while the surrounding Zircaloy cladding creeps down, so that cladding and pellets eventually come into contact. This exacerbates chemical degradation of the protective cladding, and stresses may enable rapid propagation of cracks and thus threaten the integrity of the clad. Along these lines, pellet-cladding interaction establishes itself as a major concern in fuel rod design and reactor core operation in light water reactors. Accurately modeling fuel behavior is challenging because the mechanical contact problem strongly depends on the temperature distribution, and the coupled pellet-cladding heat transfer problem, in turn, is affected by changes in geometry induced by the bodies' deformations and by the stresses generated at the contact interface. Our work focuses on active set strategies to determine the actual contact area in high-fidelity coupled-physics fuel performance codes. 
The approach consists of two steps: In the first, we determine the boundary region on conventional finite element meshes where the contact conditions shall be enforced to prevent objects from occupying the same space. For this purpose, we developed and implemented an efficient parallel search algorithm for detecting mesh inter-penetration and vertex/mesh overlap. The second step deals with solving the mechanical equilibrium problem while enforcing the contact conditions computed in the first step. To do so, we developed a modified version of the multi-point constraint (MPC) strategy. While the original algorithm was restricted to the Jacobi-preconditioned conjugate gradient method, our MPC algorithm works with any other Krylov solver (and thus liberates us from symmetry requirements). Furthermore, it does not place any restriction on the preconditioner used. The multibody thermo-mechanical contact problem is tackled using modern numerics, with higher-order finite elements and a Newton-based monolithic strategy to handle both nonlinearities (coming from the non-linearity of the contact condition as well as from, for instance, the temperature dependence of the fuel thermal conductivity) and coupling between the various physics components (gap conductance sensitive to the clad-pellet distance, thermal expansion coefficient or Young's modulus affected by temperature changes, etc.). We will provide numerical examples for single- and multi-body contact problems to demonstrate how the method performs. September 5, 2013 - Jared Saia: How to Build a Reliable System Out of Unreliable Components The first part of this talk will survey several decades of work on designing distributed algorithms that boost reliability. These algorithms boost reliability in the sense that they enable the creation of a reliable system from unreliable components. We will discuss practical successes of these algorithms, along with drawbacks. 
A key drawback is scalability: significant redundancy of resources is required in order to tolerate even one node fault. The second part of the talk will introduce a new class of distributed algorithms for boosting reliability. These algorithms are self-healing in the sense that they dynamically adapt to failures, requiring additional resources only when faults occur. We will discuss two such self-healing algorithms. The first enables self-healing in an overlay network, even when an omniscient adversary repeatedly removes carefully chosen nodes. Specifically, the algorithm ensures that the shortest path between any pair of nodes never increases by more than a logarithmic factor, and that the degree of any node never increases by more than a factor of 3. The second algorithm enables self-healing with Byzantine faults, where an adversary can control t < n/8 of the n total nodes in the network. This algorithm enables point-to-point communication with an expected number of message corruptions that is O(t(log* n)^2). Empirical results show that this algorithm reduces bandwidth and computation costs by up to a factor of 70 when compared to previous work. August 21, 2013 - Hank Childs: Hybrid Parallelism for Visualization and Analysis Many of today's parallel visualization and analysis programs are designed for distributed-memory parallelism, but not for the shared-memory parallelism available on GPUs or multi-core CPUs. However, supercomputer architectures increasingly provide more and more cores per node, whether through the presence of GPUs or through more cores per CPU. To make the best use of such hardware, we must evaluate the benefits of hybrid parallelism - parallelism that blends distributed- and shared-memory approaches - for the data-intensive workloads of visualization and analysis. 
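As a structural sketch of what "hybrid" means here: an outer, distributed-memory level partitions the data across ranks, and an inner, shared-memory level reduces each rank's shard with a thread pool. The sequential rank loop below merely stands in for MPI ranks; this is an illustrative toy, not VisIt code.

```python
from concurrent.futures import ThreadPoolExecutor

def rank_local_sum(shard, num_threads=4):
    """Shared-memory level: threads reduce interleaved chunks of this rank's shard."""
    chunks = [shard[i::num_threads] for i in range(num_threads)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return sum(pool.map(sum, chunks))

def hybrid_sum(data, num_ranks=2):
    """Distributed-memory level: partition across ranks, then combine partial sums."""
    shards = [data[r::num_ranks] for r in range(num_ranks)]
    return sum(rank_local_sum(s) for s in shards)

if __name__ == "__main__":
    data = list(range(1000))
    assert hybrid_sum(data) == sum(data)
```

In a real hybrid code the outer level would be MPI processes (one per node) and the inner level OpenMP threads or GPU kernels, reducing per-node memory overhead compared to one MPI rank per core.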
With this talk, Hank explores the fundamental challenges and opportunities for hybrid parallelism with visualization and analysis, and discusses recent results that measure its benefit. Speaker Bio: Hank Childs is an assistant professor at the University of Oregon and a computer systems engineer at Lawrence Berkeley National Laboratory. His research focuses on scientific visualization, high-performance computing, and the intersection of the two. He received the Department of Energy Career award in 2012 to research explorative visualization use cases on exascale machines. Additionally, Hank is one of the founding members of the team that developed the VisIt visualization and analysis software. He received his Ph.D. from UC Davis in 2006. August 13, 2013 - Rodney O. Fox: Quadrature-Based Moment Methods for Kinetics-Based Flow Models Kinetic theory is a useful theoretical framework for developing multiphase flow models that account for complex physics (e.g., particle trajectory crossings, particle size distributions, etc.) (1). For most applications, direct solution of the kinetic equation is intractable due to the high-dimensionality of the phase space. Thus a key challenge is to reduce the dimensionality of the problem without losing the underlying physics. At the same time, the reduced description must be numerically tractable and possess the favorable attributes of the original kinetic equation (e.g. hyperbolic, conservation of mass/momentum, etc.) Starting from the seminal work of McGraw (2) on the quadrature method of moments (QMOM), we have developed a general closure approximation referred to as quadrature-based moment methods (3; 4; 5). The basic idea behind these methods is to use the local (in space and time) values of the moments to reconstruct a well-defined local distribution function (i.e. non-negative, compact support, etc.). The reconstructed distribution function is then used to close the moment transport equations (e.g. 
spatial fluxes, nonlinear source terms, etc.). In this seminar, I will present the underlying theoretical and numerical issues associated with quadrature-based reconstructions. The transport of moments in real space, and its numerical representation in terms of fluxes, plays a critical role in determining whether a moment set is realizable. Using selected examples, I will introduce recent work on realizable high-order flux reconstructions developed specifically for finite-volume schemes (6). References [1] MARCHISIO, D. L. & FOX, R. O. 2013 Computational Models for Polydisperse Particulate and Multiphase Systems, Cambridge University Press. [2] MCGRAW, R. 1997 Description of aerosol dynamics by the quadrature method of moments. Aerosol Science and Technology 27, 255–265. [3] DESJARDINS, O., FOX, R. O. & VILLEDIEU, P. 2008 A quadrature-based moment method for dilute fluid-particle flows. Journal of Computational Physics 227, 2514–2539. [4] YUAN, C. & FOX, R. O. 2011 Conditional quadrature method of moments for kinetic equations. Journal of Computational Physics 230, 8216–8246. [5] YUAN, C., LAURENT, F. & FOX, R. O. 2012 An extended quadrature method of moments for population balance equations. Journal of Aerosol Science 51, 1–23. [6] VIKAS, V., WANG, Z. J., PASSALACQUA, A. & FOX, R. O. 2011 Realizable high-order finite-volume schemes for quadrature-based moment methods. Journal of Computational Physics 230, 5328–5352. August 12, 2013 - Lucy Nowell: ASCR: Funding/ Data/ Computer Science Dr. Lucy Nowell is a Computer Scientist and Program Manager for the Advanced Scientific Computing Research (ASCR) program office in the Department of Energy's (DOE) Office of Science. While her primary focus is on scientific data management, analysis and visualization, her portfolio spans the spectrum of ASCR computer science interests, including supercomputer architecture, programming models, operating and runtime systems, and file systems and input/output research. 
Before moving to DOE in 2009, Dr. Nowell was a Chief Scientist in the Information Analytics Group at Pacific Northwest National Laboratory (PNNL). On detail from PNNL, she held a two-year assignment as a Program Director for the National Science Foundation's Office of Cyberinfrastructure, where her program responsibilities included Sustainable Digital Data Preservation and Access Network Partners (DataNet), Community-based Data Interoperability Networks (INTEROP), Software Development for Cyberinfrastructure (SDCI) and Strategic Technologies for Cyberinfrastructure (STCI). At PNNL, her research centered on applying her knowledge of visual design, perceptual psychology, human-computer interaction, and information storage and retrieval to problems of understanding and navigating in very large information spaces, including digital libraries. She holds several patents in information visualization technologies. Dr. Nowell joined PNNL in August 1998 after a career as a professor at Lynchburg College in Virginia, where she taught a wide variety of courses in Computer Science and Theatre. She also headed the Theatre program and later chaired the Computer Science Department. While pursuing her Master of Science and Doctor of Philosophy degrees in Computer Science at Virginia, she worked as a Research Scientist in the Digital Libraries Research Laboratory and also interned with the Information Access team at IBM's T. J. Watson Research Laboratories in Hawthorne, NY. 
She also has a Master of Fine Arts degree in Drama from the University of New Orleans and the Master of Arts and Bachelor of Arts degrees in Theatre from the University of Alabama. August 8, 2013 - Carlos Maltzahn: Programmable Storage Systems With the advent of open source parallel file systems, a new usage pattern emerges: users isolate subsystems of parallel file systems and put them in contexts not foreseen by the original designers, e.g., an object-based storage back end gets a new REST-ful front end to become Amazon Web Services' S3-compliant key-value store, or a data placement function becomes a placement function for customer accounts. This trend shows a desire for the ability to use existing file system services and compose them to implement new services. We call this ability "programmable storage systems". In this talk I will argue that designing programmability into storage systems has the following benefits: (1) we are achieving greater separation of storage performance engineering from storage reliability engineering, making it possible to optimize storage systems in a wide variety of ways without risking years of investments into code hardening; (2) we are creating an environment that encourages people to create a new stack of storage system abstractions, both domain-specific and across domains, including sophisticated optimizers that rely on machine learning techniques; (3) we inform commercial parallel file system vendors on the design of low-level APIs for their products so that they match the versatility of open source storage systems without having to release their entire code into open source; and (4) we use this historical opportunity to leverage the tension between the versatility of open source storage systems and the reliability of proprietary systems to lead the community of storage system designers. 
I will illustrate programmable storage with an overview of programming abstractions that we have found useful so far, and if time permits, talk about "scriptable storage systems" and the interesting new possibilities of truly data-centered software engineering it enables. Bio: Carlos Maltzahn is an Associate Adjunct Professor at the Computer Science Department of the Jack Baskin School of Engineering, Director of the UCSC Systems Research Lab and Director of the UCSC/Los Alamos Institute for Scalable Scientific Data Management at the University of California at Santa Cruz. Carlos Maltzahn's current research interests include scalable file system data and metadata management, storage QoS, data management games, network intermediaries, information retrieval, and cooperation dynamics. Carlos Maltzahn joined UC Santa Cruz in December 2004 after five years at Network Appliance. He received his Ph.D. in Computer Science from the University of Colorado at Boulder in 1999, his M.S. in Computer Science in 1997, and his Univ. Diplom Informatik from the University of Passau, Germany in 1991. August 7, 2013 - Tiffany M. Mintz: Toward Abstracting the Communication Intent in Applications to Improve Portability and Productivity Programming with communication libraries such as the Message Passing Interface (MPI) obscures the high-level intent of the communication in an application and makes static communication analysis difficult to do. Compilers are unaware of communication libraries' specifics, leading to the exclusion of communication patterns from any automated analysis and optimizations. To overcome this, communication patterns can be expressed at higher-levels of abstraction and incrementally added to existing MPI applications. In this paper, we propose the use of directives to clearly express the communication intent of an application in a way that is not specific to a given communication library. 
Our communication directives allow programmers to express communication among processes in a portable way, giving hints to the compiler on regions of computation that can be overlapped with communication and relaxing the constraints on ordering, completion and synchronization of the communication imposed by specific libraries such as MPI. The directives can then be translated by the compiler into message passing calls that efficiently implement the intended pattern and be targeted to multiple communication libraries. Thus far, we have used the directives to express point-to-point communication patterns in C, C++ and Fortran applications, and have translated them to MPI and SHMEM. August 2, 2013 - Alberto Salvadori: Multi-scale and multi-physics modeling of Li-ion batteries: a computational homogenization approach There is great interest in developing the next generation of lithium-ion batteries, with higher capacity and longer cycle life, to meet the increasingly demanding energy storage requirements of humanity's existing and future inventories of power-generation and energy-management systems. Industry and academia are looking for alternative materials, and Si is one of the most promising candidates for the active material, because it has the highest theoretical specific energy capacity. It has emerged that the very large mechanical stresses associated with huge volume changes during Li intercalation/deintercalation are responsible for poor cycling behavior and quick fading of electrical performance. The present contribution aims at providing scientific contributions in this vibrant context. The computational homogenization scheme is here tailored to model the coupling between the electrochemical and mechanical phenomena that coexist during battery charging and discharging cycles. 
At the macro-scale, diffusion-advection equations model the electro-chemistry of the whole cell, whereas the micro-scale models the multi-component porous electrode, the diffusion and intercalation of lithium in the active particles, and the swelling and fracturing of the latter. The scale transitions are formulated by tailoring the well-established first-order computational homogenization scheme for mechanical and thermal problems. August 2, 2013 - Michela Taufer: The effectiveness of application-aware self-management for scientific discovery in volunteer computing systems July 24, 2013 - Catalin Trenchea: Improving time-stepping numerics for weakly dissipative systems In this talk I will address the stability and accuracy of the CNLF time-stepping scheme, and propose a modification of Robert-Asselin time filters for numerical models of weakly diffusive evolution systems. This is motivated by the vast number of applications, e.g., the meteorological equations, and coupled systems with dominating skew-symmetric coupling (ground-water/surface-water). 
In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. After a brief review, I will suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in many atmospheric models: ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models (it successfully suppresses the spurious computational mode associated with the leapfrog time-stepping scheme), it also weakly suppresses the physical mode, introduces non-physical damping, and reduces the accuracy. This presentation proposes a simple modification to the RA filter (mRA) [Y. Li, CT 2013]. The modification is analyzed and compared with the RAW filter (Williams 2009, 2011). The mRA increases the numerical accuracy to O(Δt^4) amplitude error and at least O(Δt^2) phase-speed error for the physical mode. The mRA filter requires the same storage factors as RAW, and one more than the RA filter does. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy, with the phase accuracy remaining second-order. The mRA and RAW filters can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. 
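The leapfrog/Robert-Asselin combination discussed above can be sketched on the standard oscillation test problem du/dt = iωu, where the exact amplitude stays at 1: after each leapfrog step, the filter nudges the middle time level by ν(u_{n+1} - 2u_n + u_{n-1}), suppressing the computational mode at the cost of slightly damping the physical one. The parameters below are illustrative; this is a toy sketch, not code from the talk.

```python
def leapfrog_ra(omega=1.0, dt=0.01, steps=1000, nu=0.1):
    """Integrate du/dt = i*omega*u with leapfrog plus a Robert-Asselin filter."""
    u_prev = 1.0 + 0.0j                         # u(0)
    u_curr = u_prev + 1j * omega * dt * u_prev  # one forward Euler step to start
    for _ in range(steps - 1):
        u_next = u_prev + 2.0 * dt * (1j * omega * u_curr)  # leapfrog step
        # Robert-Asselin filter applied to the middle time level:
        u_curr = u_curr + nu * (u_next - 2.0 * u_curr + u_prev)
        u_prev, u_curr = u_curr, u_next
    return u_curr

if __name__ == "__main__":
    # With the filter on, |u| drifts slightly below the exact value of 1,
    # illustrating the non-physical damping the mRA/RAW variants aim to remove.
    assert abs(leapfrog_ra()) < 1.0 + 1e-9
```

Turning the filter off (nu=0) keeps the physical mode neutrally stable but leaves the spurious computational mode undamped.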
June 28, 2013 - Yuri Melnikov: A surprising connection between Green's functions and the infinite product representation of elementary functions Some standard as well as innovative approaches will be reviewed for the construction of Green's functions for elliptic PDEs. Based on these, a surprising technique is proposed for obtaining infinite product representations of some trigonometric, hyperbolic, and special functions. The technique compares different alternative expressions of Green's functions constructed by different methods. This allows us not only to obtain the classical Euler formulas but also to come up with a number of new representations.
Power is becoming more important in high performance computing than ever before as we move toward exascale computing. A transition is needed from the old approach, which considers only performance and accuracy, to a new one that accounts for performance, accuracy and power. In high performance computing, a workflow is composed of a large number of tasks, such as simulation, analysis and visualization. However, there is little guidance to help users know which task allocations and placements onto nodes and clusters are good for performance or power under accuracy requirements. In this presentation, I will talk about power optimization for reconfigurable embedded systems, which dynamically choose kernels to run on hardware co-processors in response to dynamic application behavior at runtime. Since much of this carries over to HPC, we are going to explore the method in high performance computing for a dynamic workflow of task placement, etc., in terms of performance, power and accuracy constraints.
June 26, 2013 - Matthew Causley: A fast implicit Maxwell field solver for plasma simulations
June 25, 2013 - Jeff Haack: Conservative Spectral Method for Solving the Boltzmann Equation We present a conservative spectral scheme for Boltzmann collision operators. This formulation is derived from the weak form of the Boltzmann equation, which can represent the collisional term as a weighted convolution in Fourier space. The weights contain all of the information of the collision mechanics and can be precomputed. I will present some results for isotropic (in angle) interactions, such as hard spheres and Maxwell molecules. 
We have recently extended the method to take into account anisotropic scattering mechanisms arising from potential interactions between particles, and we use this method to compute the Boltzmann equation with screened Coulomb potentials. In particular, we study the rate of convergence of the Fourier transform for the Boltzmann collision operator in the grazing collisions limit to the Fourier transform for the limiting Landau collision operator. We show that the decay rate to equilibrium depends on the parameters associated with the collision cross section, and specifically study the differences between the classical Rutherford scattering angular cross section, which has logarithmic error, and an artificial one with a linear error. I will also present recent work extending this method to multispecies gases and gases with internal degrees of freedom, which introduces new challenges for conservation and brings inelastic collisions into the system. June 17, 2013 - Megan Cason: Analytic Utility Of Novel Threading Models In Distributed Graph Algorithms Current analytic methods for judging distributed algorithms rely on communication abstractions that characterize performance assuming purely passive data movement and access. This assumption complicates the analysis of certain algorithms, such as graph analytics, whose behavior depends heavily on data movement and on modifying shared variables. This presentation will discuss an alternative model for analyzing the theoretical scalability of distributed algorithms written with the possibility of active data movement and access. The mobile subjective model presented here confines all communication to 1) shared memory access and 2) executing thread state that can be relocated between processes, i.e., thread migration. Doing so enables a new type of scalability analysis, which calculates the number of thread relocations required, and whether that communication is balanced across all processes in the system. 
This analysis also includes a model for contended shared data accesses, which is used to identify serialization points in an algorithm. This presentation will show the analysis for a common distributed graph algorithm, and illustrate how this model could be applied to a real-world distributed runtime software stack. June 14, 2013 - Jeff Carver: Applying Software Engineering Principles to Computational Science The increase in the importance of Computational Science software motivates the need to identify and understand which software engineering (SE) practices are appropriate. Because of the uniqueness of the computational science domain, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the computational science development environment. To identify these solutions, members of the SE community must interact with members of the computational science community. This presentation will discuss the findings from a series of case studies of CSE projects and the results of an ongoing workshop series. First, a series of case studies of computational science projects was conducted as part of the DARPA High Productivity Computing Systems (HPCS) project. The main goal of these studies was to understand how SE principles were and were not being applied in computational science along with some of the reasons why. The studies resulted in nine lessons learned about computational science software that are important to consider moving forward. Second, the Software Engineering for Computational Science and Engineering workshop brings together software engineers and computational scientists. The outcomes of this workshop series provide interesting insight into potential future trends. 
June 12, 2013 - Hans-Werner van Wyk: Multilevel Quadrature Methods Stochastic sampling methods are arguably the most direct and least intrusive means of incorporating parametric uncertainty into numerical simulations of partial differential equations with random inputs. However, to achieve an overall error that is within a desired tolerance, a large number of sample simulations may be required (to control the sampling error), each of which may need to be run at high levels of spatial fidelity (to control the spatial error). Multilevel methods aim to achieve the same accuracy as traditional sampling methods, but at a reduced computational cost, through the use of a hierarchy of spatial discretization models. Multilevel algorithms coordinate the number of samples needed at each discretization level by minimizing the computational cost, subject to a given error tolerance. They can be applied to a variety of sampling schemes, exploit nesting when available, can be implemented in parallel, and can be used to inform adaptive spatial refinement strategies. We present an introduction to multilevel quadrature in the context of stochastic collocation methods, and demonstrate its effectiveness theoretically and by means of numerical examples. June 7, 2013 - Xuechen Zhang: Scibox: Cloud Facility for Sharing On-Line Data Collaborative science demands global sharing of scientific data, but it cannot leverage universally accessible cloud-based infrastructures, like DropBox, as those offer limited interfaces and inadequate levels of access bandwidth. In this talk, I will present the Scibox cloud facility for sharing scientific data online. It uses standard cloud storage solutions, but offers a usage model in which high-end codes can write/read data to/from the cloud via the same ADIOS APIs they already use for their I/O actions, thereby naturally coupling data generation with subsequent data analytics. 
Extending current ADIOS I/O methods, Scibox controls data upload/download volumes via Data Reduction (DR) functions specified by end users and applied at the data source, before data is moved; further efficiency gains are obtained by combining DR functions to move exactly what is needed by current data consumers. June 6, 2013 - Yuan Tian: Taming Scientific Big Data with Flexible Organizations for Exascale Computing Fast-growing High Performance Computing systems enable scientists to simulate scientific processes of great complexity, and consequently often produce complex data that are also exponentially increasing in size. However, the growth within the computing infrastructure is significantly imbalanced. The dramatically increasing computing power is accompanied by a slowly improving storage system. Such discordant progress among computing power, storage, and data has led to a severe Input/Output (I/O) bottleneck that requires novel techniques to address big data challenges in the scientific domain. This talk will identify the prevalent characteristics of scientific data and the storage system as a whole, and explore opportunities to drive I/O performance for petascale computing and prepare it for the exascale. To this end, a set of flexible data organization and management techniques are introduced and evaluated to address the aforementioned concerns. Four key techniques are designed to exploit the capability of the back-end storage system for processing and storing scientific big data with fast and scalable I/O performance: visualization space-filling-curve-based data reorganization, system-aware chunking, spatial and temporal aggregation, and in-node staging with compression. The experimental results demonstrated more than 60x speedup for a mission-critical climate application during data post-processing. 
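One widely used space-filling curve for the kind of data reorganization named in the abstract above is the Z-order (Morton) curve; this minimal 2-D sketch (my own illustration, not the speaker's implementation) reorders grid points so that spatially nearby points land near each other in the linearized file:

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of (x, y) to get the Z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# Reorder a 4x4 grid of cell coordinates into Z-order before writing.
cells = [(x, y) for y in range(4) for x in range(4)]
zordered = sorted(cells, key=lambda c: morton2d(*c))
print(zordered[:4])  # the first Z: (0, 0), (1, 0), (0, 1), (1, 1)
```

Writing data in this order means a reader requesting a small spatial subregion touches a few contiguous file ranges instead of many scattered ones.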
May 31, 2013 - Pablo Seleson: Multiscale Material Modeling with Peridynamics Multiscale modeling has been recognized in recent years as an important research field to achieve feasible and accurate predictions of complex systems. Peridynamics, a nonlocal reformulation of continuum mechanics based on integral equations, is able to resolve microscale phenomena at the continuum level. As a nonlocal model, peridynamics possesses a length scale which can be controlled for multiscale modeling. For instance, classical elasticity has been presented as a limiting case of a peridynamic model. In this talk, I will introduce the peridynamics theory and show analytical and numerical connections of peridynamics to molecular dynamics and classical elasticity. I will also present multiscale methods to concurrently couple peridynamics and classical elasticity, demonstrating the capabilities of peridynamics towards multiscale material modeling. Dr. Seleson is a Postdoctoral Fellow in the Institute for Computational Engineering and Sciences at The University of Texas at Austin. He obtained his Ph.D. in Computational Science from Florida State University in 2010. He holds a M.S. degree in Physics from the Hebrew University of Jerusalem (2006), and a double B.S. degree in Physics and Philosophy also from the Hebrew University of Jerusalem (2002). May 29, 2013 - Ryan McMahan: The Effects of System Fidelity for Virtual Reality Applications Virtual reality (VR) has developed from Ivan Sutherland's inception of an "ultimate display" to a realized field of advanced technologies. Despite evidence supporting the use of VR for various benefits, the level of system fidelity required for such benefits is often unknown. Modern VR systems range from high-fidelity simulators that incorporate many technologies to lower-fidelity, desktop-based virtual environments. 
In order to identify the level of system fidelity required for certain beneficial uses, research has been conducted to better understand the effects of system fidelity on the user. In this talk, a series of experiments evaluating the effects of interaction fidelity and display fidelity will be presented. Future directions of system fidelity research will also be discussed. Dr. Ryan P. McMahan is an Assistant Professor of Computer Science at the University of Texas at Dallas, where his research focuses on the effects of system fidelity for virtual reality (VR) applications. Using an immersive VR system comprising a wireless head-mounted display (HMD), a real-time motion tracking system, and Wii Remotes as 3D input devices, his research determines the effects of system fidelity by varying components such as stereoscopy, field of view, and degrees of freedom for interactions. Currently, he is using this methodology to investigate the effects of fidelity on learning for VR training applications. Dr. McMahan received his Ph.D. in Computer Science in 2011 from Virginia Tech, where he also received his B.S. and M.S. in Computer Science in 2004 and 2007. May 28, 2013 - Adrian Sandu: Data Assimilation and the Adaptive Solution of Inverse Problems The task of providing an optimal analysis of the state of the atmosphere requires the development of novel computational tools that facilitate an efficient integration of observational data into models. In this talk, we will introduce variational and statistical estimation approaches to data assimilation. We will discuss important computational aspects including the construction of efficient models for background errors, the construction and analysis of discrete adjoint models, new approaches to estimate the information content of observations, and hybrid variational-ensemble approaches to assimilation. 
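For reference, the canonical variational (3D-Var) formulation of the analysis problem minimizes a cost function of the standard form, with \(x_b\) the background state, \(y\) the observations, \(H\) the observation operator, and \(B\), \(R\) the background- and observation-error covariances:

```latex
J(x) \;=\; \tfrac{1}{2}\,(x - x_b)^{T} B^{-1} (x - x_b)
      \;+\; \tfrac{1}{2}\,\bigl(y - H(x)\bigr)^{T} R^{-1} \bigl(y - H(x)\bigr) .
```

The efficient background-error models and discrete adjoints mentioned in the abstract are precisely what make \(B^{-1}\) applications and gradient evaluations of \(J\) tractable at atmospheric scales.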
We will also present some recent results on the solution of inverse problems using space and time adaptivity, and a priori and a posteriori error estimates for the optimal solution. May 24, 2013 - Satoshi Matsuoka: The Futures of Tsubame Supercomputer and the Japanese HPCI Towards Exascale HPCI is the Japanese High Performance Computer Infrastructure, which encompasses the national operations of major supercomputers, such as the K supercomputer and Tsubame2.0, much like XSEDE in the United States and PRACE in Europe. Recently it was announced that the Japanese Ministry of Education, Culture, Sports, Science and Technology is intending to initiate a project towards an exascale supercomputer to be deployed around 2020. However, the workshop report that recommends the project also calls for a comprehensive infrastructure where a flagship machine will be supplemented with leadership machines to complement the abilities of the flagship. Although it is still early, I will attempt to discuss the current status of Tsubame2.0's evolution to 2.5 and 3.0 in this context, as well as the activities in Japan to initiate an exascale effort, with collaborative elements with the US Department of Energy partners in system software development. May 17, 2013 - Jon Mietling and Tony McCrary: Bling3D: a new game development toolset from l33t Labs Bling3D is a forthcoming game development toolset from l33t labs. A fusion of Eclipse 4 with game development technologies, Bling allows both programmers and designers to create compelling interactive experiences from within one powerful tool. In this talk, you will be introduced to some of Bling's exciting features, including: • GPU Powered UI - A revolutionary new user interface for Eclipse, which uses shader programs to render widgets directly on the GPU. • BYOE (Bring Your Own Engine) - Bling is designed as a universal tools platform for game technologies. You can use our game engine or integrate your own! 
• Ultimate Toolset - Use the power of Bling's interface and Eclipse's extensibility to create mind-blowing tools and plugins. • Designers Love It - Intuitive visual tools that allow you to create new worlds and artificial realities with ease. • Transform Your Assets - Easily create new ways to process raw assets (geometry, images, etc.) into materials suitable for runtime use. Jon Mietling and Tony McCrary are representatives of l33t labs LLC, a technology startup from the Detroit, Michigan region. May 10, 2013 - Xiao Chen: A Modular Uncertainty Quantification Framework for Multi-physics Systems This talk presents a modular uncertainty quantification (UQ) methodology for multi-physics applications in which each physics module can be independently embedded with its internal UQ method (intrusive or non-intrusive). This methodology offers the advantage of "plug-and-play" flexibility (i.e., UQ enhancements to one module do not require updates to the other modules) without losing the "global" uncertainty propagation property. (This means that, by performing UQ in this modular manner, all inter-module uncertainty and sensitivity information is preserved.) In addition, using this methodology one can also track the evolution of global uncertainties and sensitivities at the grid point level, which may be useful for model improvement. We demonstrate the utility of such a framework for error management and Bayesian inference on a practical application involving a multi-species flow and reactive transport in randomly heterogeneous porous media. May 2, 2013 - Kenley Pelzer: Quantum Biology: Elucidating Design Principles from Photosynthesis Recent experiments suggest that quantum mechanical effects may play a role in the efficiency of photosynthetic light harvesting. However, much controversy exists about the interpretation of these experiments, in which light harvesting complexes are excited by a femtosecond laser pulse. 
The coherence in such laser pulses raises the important question of whether these quantum mechanical effects are significant in biological systems excited by incoherent light from the sun. In our work, we apply frequency-domain Green's function analysis to model a light-harvesting complex excited by incoherent light. By modeling incoherent excitation, we demonstrate that the evidence of long-lived quantum mechanical effects is not purely an artifact of peculiarities of the spectroscopy. These data provide a new perspective on the role of noisy biological environments in promoting or destroying quantum transport in photosynthesis. April 23, 2013 - Kirk W. Cameron: Power-Performance Modeling, Analyses and Challenges The power consumption of supercomputers ultimately limits their performance. The current challenge is not whether we can build an exaflop system by 2018, but whether we can do it in less than 20 megawatts. The SCAPE Laboratory at Virginia Tech has been studying the tradeoffs between performance and power for over a decade. We've developed an extensive tool chain for monitoring and managing power and performance in supercomputers. We will discuss our power-performance modeling efforts and the implications of our findings for exascale systems as well as some research directions ripe for innovation. April 23, 2013 - Jordan Deyton: Tor Bridge Distribution Powered by Threshold RSA Since its inception, Tor has offered anonymity for internet users around the world. Tor now offers bridges to help users evade internet censorship, but the primary distribution schemes that provide bridges to users in need have come under attack. This talk explores how threshold RSA can help strengthen Tor's infrastructure while also enabling more powerful bridge distribution schemes. We implement a basic threshold RSA signature system for the bridge authority and a reputation-based social network design for bridge distribution. 
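To give a flavor of why threshold signatures work at all, here is a deliberately insecure toy: a 2-of-2 additive split of the RSA private exponent (this is neither Tor's scheme nor production threshold RSA such as Shoup's; the key sizes and sharing are purely illustrative). Each share holder signs with its exponent share, and the partial signatures multiply into an ordinary RSA signature:

```python
import random

# Toy RSA key (tiny primes, illustration only -- never use such sizes).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (requires Python 3.8+)

# 2-of-2 additive split of the exponent: d = d1 + d2 (mod phi).
d1 = random.randrange(phi)
d2 = (d - d1) % phi

def partial_sign(m, di):
    """Each share holder raises the message to its own exponent share."""
    return pow(m, di, n)

def combine(s1, s2):
    """Partial signatures multiply: m^d1 * m^d2 = m^(d1+d2) = m^d (mod n)."""
    return (s1 * s2) % n

m = 42
sig = combine(partial_sign(m, d1), partial_sign(m, d2))
assert pow(sig, e, n) == m  # verifies like an ordinary RSA signature
```

The point for bridge distribution is that no single share holder ever sees the full private key, yet the combined signature verifies against the one public key.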
Experimental results are obtained showing the possibility of quick responses to requests from honest users while maintaining both the secrecy and the anonymity of registered clients and bridges. April 19, 2013 - Maria Avramova and Kostadin Ivanov: OECD LWR UAM and PSBT/BFBT benchmarks and their relation to Advanced LWR Simulations From 1987 to 1995, the Nuclear Power Engineering Corporation (NUPEC) in Japan performed a series of void measurement tests using full-size mock-ups for both BWRs and PWRs. Void fraction measurements and departure from nucleate boiling (DNB) tests were performed at NUPEC under steady-state and transient conditions. The workshop will provide an overview of the OECD/NEA/NRC PWR Subchannel and Bundle Tests (PSBT) and OECD/NEA/NRC BWR Full-size Fine-mesh Bundle Tests (BFBT) benchmarks based on the NUPEC data. The benchmarks were designed to provide a data set for evaluation of the abilities of existing subchannel, system, and computational fluid dynamics (CFD) thermal-hydraulics codes to predict void distribution and departure from nucleate boiling (DNB) in LWRs under steady-state and transient conditions. The first part of the seminar summarizes the PSBT and BFBT benchmark databases, specifications, and definitions of the benchmark exercises, compares the obtained results, and makes the case for how these benchmarks can be used for verification, validation, and uncertainty quantification of thermal-hydraulic tools developed for advanced LWR simulations. The second part of the seminar will provide an overview of the OECD/NEA benchmark for LWR Uncertainty Analysis in Modeling (UAM), with emphasis on the exercises of Phases I and II of the benchmark and a discussion of Phase III, which is directly related to coupled multi-physics advanced LWR simulations. 
A series of well-defined problems with complete sets of input specifications and reference experimental data will be introduced, with the objective of determining the uncertainty in LWR calculations at all stages of the coupled reactor physics/thermal-hydraulics calculation. The full chain of uncertainty propagation will be discussed, starting from basic data and engineering uncertainties and moving across different scales (multi-scale) and physics phenomena (multi-physics), as well as how this propagation is tested on a number of benchmark exercises. Input, output, and assumptions for each exercise will be given, and the procedures to calculate the output and propagated uncertainties in each step will be described, supplemented by results from benchmark participants. Bio of Dr. Maria Avramova Dr. Maria Avramova is an Assistant Professor in the Mechanical and Nuclear Engineering Department at the Pennsylvania State University. She is currently the Director of the Reactor Dynamics and Fuel Management Group (RDFMG). Her expertise and experience are in the area of developing methods and computer codes for multi-dimensional reactor core analysis. Her background includes development, verification, and validation of thermal-hydraulics sub-channel, porous media, and CFD models and codes for reactor core design, transient, and safety computational analysis. She has led and coordinated the OECD/NRC BFBT and PSBT benchmarks and currently is coordinating Phase II of the OECD LWR UAM benchmark. Her latest research efforts have been focused on high-fidelity multi-physics simulations (involving coupling of reactor physics, thermal-hydraulics and fuel performance models) as well as on uncertainty and sensitivity analysis of reactor design and safety calculations. Dr. Avramova has published over 15 refereed journal papers and over 40 refereed conference proceedings articles. Bio of Dr. Kostadin Ivanov Dr. 
Kostadin Ivanov is a Distinguished Professor in the Mechanical and Nuclear Engineering Department at the Pennsylvania State University. He is currently the Graduate Coordinator of the Nuclear Engineering Program. His research developments include computational methods, numerical algorithms and iterative techniques, nuclear fuel management and reloading optimization techniques, reactor kinetics and core dynamics methods, cross-section generation and modeling algorithms for multi-dimensional steady-state and transient reactor calculations, and coupling three-dimensional (3-D) kinetics models with thermal-hydraulic codes. He has also led the development of multi-dimensional neutronics, in-core fuel management and coupled 3-D kinetics/thermal-hydraulic computer code benchmarks, multi-dimensional reactor transient and safety analysis methodologies as well as integrated analysis of safety-related parameters, system transient modeling of power plants, and in-core fuel management analyses. Examples of such benchmarks are the OECD/NRC PWR MSLB benchmark, the OECD/NRC BWR TT benchmark, and the OECD/DOE/CEA VVER-1000 CT benchmark. He is currently chair and coordinator of the Scientific Board and Technical Program Committee of the OECD LWR UAM benchmark. April 18, 2013 - Sparsh Mittal: MASTER: A Technique for Improving Energy Efficiency of Caches in Multicore Processors Large power consumption of modern processors has been identified as the most severe constraint in scaling their performance. Further, in recent CMOS technology generations, leakage energy has been dramatically increasing and hence, the leakage energy consumption of large last-level caches (LLCs) has become a significant source of the processor power consumption. This talk first highlights the need for power management in LLCs in modern multi-core processors and then presents MASTER, a micro-architectural cache leakage energy saving technique using dynamic cache reconfiguration. 
MASTER uses dynamic profiling of LLCs to predict energy consumption of running programs at multiple LLC sizes. Using these estimates, suitable cache quotas are allocated to different programs using a cache-coloring scheme, and the unused LLC space is turned off to save energy. The implementation overhead of MASTER is small; even for 4-core systems, its overhead is only 0.8% of the L2 cache size. Simulations have been performed using an out-of-order x86-64 simulator and 2-core and 4-core multi-programmed workloads from the SPEC2006 suite. Further, MASTER has been compared with two energy-saving techniques, namely decay cache and way-adaptable cache. The results show that MASTER gives the highest savings in energy and does not harm performance or cause unfairness. Finally, this talk briefly shows an extension of MASTER for multicore QoS systems. Simulation results confirm that a large amount of energy is saved while meeting the QoS requirement of most of the workloads. April 17, 2013 - Okwan Kwon: Automatic Scaling of OpenMP Applications Beyond Shared Memory We present the first fully automated compiler-runtime system that successfully translates and executes OpenMP shared-address-space programs on laboratory-size clusters, for the complete set of regular, repetitive applications in the NAS Parallel Benchmarks. We introduce a hybrid compiler-runtime translation scheme. This scheme features a novel runtime data flow analysis and compiler techniques for improving data affinity and reducing communication costs. We present and discuss the performance of our translated programs, and compare them with the performance of the MPI, HPF and UPC versions of the benchmarks. The results show that our translated programs achieve 75% of the performance of the hand-coded MPI programs, on average. April 17, 2013 - Michael S. Murillo: Molecular Dynamics Simulations of Charged Particle Transport in High Energy-Density Matter High energy-density matter is now routinely produced at large laser facilities. 
Producing fusion energy at such facilities challenges our ability to model collisional plasma processes that transport energy among the plasma species and across spatial scales. While the most accurate computational method for describing collisional processes is molecular dynamics, there are numerous challenges associated with using molecular dynamics to model very hot plasmas. However, recent advances in high performance computing have allowed us to develop methods for simulating a wide variety of processes in hot, dense plasmas. I will review these developments and describe our recent results that involve simulating fast particle stopping in dense plasmas. Using the simulation results, implications for theoretical modeling of charged-particle stopping will be given. April 12, 2013 - Vivek K. Pallipuram: Exploring Multiple Levels Of Performance Modeling For Heterogeneous Systems One of the major challenges faced by the High-Performance Computing (HPC) community today is user-friendly and accurate heterogeneous performance modeling. Although performance prediction models exist to fine-tune applications, they are seldom easy to use and do not address multiple levels of design space abstraction. Our research aims to bridge the gap between reliable performance model selection and user-friendly analysis. We propose a straightforward and accurate multi-level performance modeling suite for multi-GPGPU systems that addresses multiple levels of design space abstraction. The multi-level performance modeling suite primarily targets synchronous iterative algorithms (SIAs) using our synchronous iterative GPGPU execution (SIGE) model and addresses two levels of design space abstraction: 1) low-level, where partial details of the implementation are present along with system specifications, and 2) high-level, where implementation details are minimal and only high-level system specifications are known. 
The low-level abstraction of the modeling suite employs statistical techniques for runtime prediction, whereas the high-level abstraction utilizes existing analytical and quantitative modeling tools to predict the application runtime. Our initial validation efforts for the low-level abstraction yield high runtime prediction accuracy, with an error rate of less than 10% for several tested GPGPU cluster configurations and case studies. The development of high-level abstraction models is underway. The end goal of our research is to offer the scientific community a reliable and user-friendly performance prediction framework that allows them to optimally select a performance prediction strategy for the given design goals and system architecture characteristics. Current Top 500 systems like Titan, Stampede, and Tianhe-1A have started to embrace the use of off-chip accelerators, such as GPUs and x86 coprocessors, to dramatically improve their overall performance and efficiency numbers. At the same time, these systems also make very specific assumptions about the availability of highly optimized interconnects and software stacks that are used to mitigate the effects of running large applications across multiple nodes and their accelerators. This talk focuses on the gap in networking between high-performance computing clusters and data centers and proposes that future clusters should be built around commodity-based networks and managed global address spaces to improve the performance of data movement between host memory and accelerator memory. This thesis is supported by previous research into converged commodity interconnects and ongoing research on the Oncilla managed GAS runtime to support aggregated memory for data warehousing applications. In addition, we will speculate on how commodity-based networks and memory management for clusters of accelerators might be affected by the advent of 3D stacking and fused CPU/GPU architectures. 
April 9, 2013 - Cong Liu: Towards Efficient Real-Time Multicore Computing Systems Current trends in multicore computing are towards building more powerful, intelligent, yet space- and power-efficient systems. A key requirement in correctly building such intelligent systems is to ensure real-time performance, i.e., "make the right move at the right time in a predictable manner." Current research on real-time multicore computing has been limited to simple systems for which complex application runtime behaviors are ignored; this limits the practical applicability of such research. In practice, complex but realistic application runtime behaviors often exist, such as I/O operations, data communications, parallel execution segments, critical sections, etc. Such runtime behaviors are currently dealt with by over-provisioning systems, which is an economically wasteful practice. I will present predictable real-time multicore computing system design, analysis, and implementation methods that can efficiently support common types of application runtime behaviors. I will show that the proposed methods are able to avoid over-provisioning systems and to reduce the number of needed hardware components to the extent possible while providing timing correctness guarantees. In the second part of the talk, I will present energy-efficient workload mapping techniques for heterogeneous multicore CPU/GPU systems. Through both algorithmic analysis and prototype system implementation, I will show that the proposed techniques are able to achieve better energy efficiency while guaranteeing response time performance. April 9, 2013 - Frank Mueller: On Determining a Viable Path to Resilience at Exascale Exascale computing is projected to feature billion-core parallelism. At such large processor counts, faults will become more commonplace. Current techniques to tolerate faults focus on reactive schemes for recovery and generally rely on a simple checkpoint/restart mechanism. 
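A back-of-the-envelope view of why shrinking mean-time-between-failures (MTBF) is fatal for simple checkpoint/restart comes from Young's classic first-order approximation (a standard model, not taken from the talk): the optimal checkpoint interval is roughly sqrt(2·δ·M) for checkpoint cost δ and MTBF M, and the wasted fraction of machine time grows like sqrt(2δ/M).

```python
import math

def young_interval(delta, mtbf):
    """Young's first-order optimal checkpoint interval: sqrt(2 * delta * MTBF)."""
    return math.sqrt(2 * delta * mtbf)

def overhead_fraction(delta, mtbf):
    """Approximate fraction of time lost: checkpoint cost per interval
    plus expected recomputation (half an interval) per failure."""
    tau = young_interval(delta, mtbf)
    return delta / tau + tau / (2 * mtbf)

# 10-minute checkpoints: tolerable at a 24 h MTBF, hopeless at 30 min.
for mtbf_min in (24 * 60, 30):
    frac = overhead_fraction(10.0, float(mtbf_min))
    print(f"MTBF {mtbf_min:5d} min -> overhead {frac:.0%}")
```

When the MTBF falls to the same order as the checkpoint cost, the overhead fraction approaches one, which is exactly the projection the abstract warns about.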
Yet, they have a number of shortcomings. (1) They do not scale and require complete job restarts. (2) Projections indicate that the mean-time-between-failures is approaching the overhead required for checkpointing. (3) Existing approaches are application-centric, which increases the burden on application programmers and reduces portability. We discuss a number of techniques to address these problems, and their level of maturity (or lack thereof). These include (a) scalable network overlays, (b) on-the-fly process recovery, (c) proactive process-level fault tolerance, (d) redundant execution, (e) the effect of silent data corruptions (SDCs) on IEEE floating-point arithmetic, and (f) resilience modeling. In combination, these methods aim to pave the path to exascale computing. April 5, 2013 - Sarat Sreepathi: Optimus: A Parallel Metaheuristic Optimization Framework With Environmental Engineering Applications Optimus (Optimization Methods for Universal Simulators) is a parallel optimization framework for coupling computational intelligence methods with a target scientific application. Optimus includes a parallel middleware component, PRIME (Parallel Reconfigurable Iterative Middleware Engine), for scalable deployment on emergent supercomputing architectures. PRIME provides a lightweight communication layer to facilitate periodic inter-optimizer data exchanges. A parallel search method, COMSO (Cooperative Multi-Swarm Optimization), was designed and tested on various high-dimensional mathematical benchmark problems. Additionally, this work presents a novel technique, TAPSO (Topology Aware Particle Swarm Optimization), for network-based optimization problems. Empirical studies demonstrate that TAPSO achieves better convergence than standard PSO for Water Distribution Systems (WDS) applications. Scalability analysis of Optimus was performed on the Cray XK6 supercomputer (Jaguar) at Oak Ridge Leadership Computing Facility for the leak detection problem in WDS. 
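For reference, the standard global-best particle swarm update that methods like COMSO and TAPSO build on fits in a few lines; this sketch (my own minimal version, not the Optimus code) minimizes the sphere function:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO: each velocity blends inertia, a pull toward
    the particle's own best position, and a pull toward the swarm's best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(xi * xi for xi in x))
print(best_val)  # converges toward 0 for the sphere function
```

Topology-aware variants like TAPSO change which neighbors contribute the "best" term, biasing the swarm along the structure of the network being optimized.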
For a weak scaling scenario, we achieved 84.82% of baseline at 200,000 cores relative to performance at 1000 cores. March 20, 2013 - J.W. Banks: Stable Partitioned Solvers for Compressible Fluid-Structure Interaction Problems In this talk, we discuss recent work concerning the development and analysis of stable, partitioned solvers for fluid-structure interaction problems. In a partitioned approach, the solvers for each fluid or solid domain are isolated from each other and coupled only through the interface. This is in contrast to fully-coupled monolithic schemes where the entire system is advanced by a single unified solver, typically by an implicit method. Added-mass instabilities, common to partitioned schemes, are addressed through the use of a newly developed interface projection technique. The overall approach is based on imposing the exact solution to local fluid-solid Riemann problems directly in the numerical method. Stability of the FSI coupling is discussed using normal-mode stability theory, and the new scheme is shown to be stable for a wide range of material parameters. For the rigid body case, the approach is shown to be stable even for bodies of no mass or rotational inertia. This difficult limiting case exposes interesting subtleties concerning the notion of added mass in fluid-structure problems at the continuous level. March 13, 2013 - Travis Thompson: Navier-Stokes equations to Describe the Motion of Fluid Substances The Navier-Stokes equations describe the motion of fluid substances; the equations are widely utilized to model many physical phenomena such as weather patterns, ocean currents, turbulent fluid flow and magneto-hydrodynamics. Despite their wide utilization, a comprehensive theoretical understanding remains an open question; the equations offer a venue for challenges at the forefront of both theoretical and computational knowledge. 
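For reference, the incompressible form of the equations discussed here reads, for velocity \(u\), pressure \(p\), kinematic viscosity \(\nu\), and body force \(f\):

```latex
\partial_t u + (u \cdot \nabla)\,u - \nu \Delta u + \nabla p = f,
\qquad \nabla \cdot u = 0,
```

where the divergence-free constraint expresses conservation of mass, the property at issue in the interface-tracking work described below.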
My work at Texas A&M has focused primarily on two topics: aspects of hyperbolic conservation laws, specifically mass conservation for incompressible Navier-Stokes, and computational investigation of an LES model based on a new eddy-viscosity; both appeal to highly parallel scientific computing, albeit in differing ways. With respect to hyperbolic conservation laws: on the computational side, I have implemented a one-step artificial compression term in a numerical code which counteracts an entropy-viscosity regularization term. This is an innovative approach; canonical methods for interface tracking are two-step or adaptive procedures. In addition, the implementation utilizes a splitting approach, originally designed for use in a highly parallel momentum equation variant, as an approximation operator in the time-stepping scheme; this approach imbues the algorithm with additional parallelism. On the theoretical side, a distinct approach towards the analysis of dispersion error, utilizing a commutator expression, has been investigated for particular finite element spaces; the approach offers a computational segue into investigating consistency error and moves away from the canonical, tedious, expansion-based methodology of analysis. With respect to large eddy simulations (LES): computational investigations of an eddy-viscosity model based on the entropy-viscosity of Guermond & Popov have been underway for the last six months; in collaboration with Dr. Larios, a post-doc here at Texas A&M, an analysis of the qualitative and statistical attributes of high-Reynolds-number turbulent flow is being conducted. We will compare our results to the Smagorinsky-Lilly turbulence model and attempt to verify basic tenets of isotropic turbulence theory; namely, the Kolmogorov -5/3 law and predictions regarding the uncorrelated nature of velocity structure functions.
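The Kolmogorov -5/3 law mentioned above predicts that the inertial-range energy spectrum scales as E(k) ∝ k^(-5/3). As a minimal illustration (not from the talk), the scaling exponent can be checked by a log-log fit on an idealized synthetic spectrum:

```python
import numpy as np

# Illustrative check of the Kolmogorov -5/3 scaling on a synthetic
# inertial-range spectrum E(k) = C * k^(-5/3). This does not reproduce
# the LES computations described in the abstract.
def fit_spectral_slope(k, E):
    """Return the log-log slope of an energy spectrum via a linear fit."""
    slope, _intercept = np.polyfit(np.log(k), np.log(E), 1)
    return slope

k = np.linspace(1.0, 100.0, 200)   # wavenumbers in the inertial range
E = 3.0 * k ** (-5.0 / 3.0)        # idealized Kolmogorov spectrum
print(fit_spectral_slope(k, E))    # recovers a slope of -5/3
```

On real LES output the fit would be restricted to the wavenumber band where inertial-range scaling is expected to hold.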
March 1, 2013 - Bob Salko: Development, Improvement, and Validation of Reactor Thermal-Hydraulic Analysis Tools

As a result of the need for continual development, qualification, and application of computational tools relating to the modeling of nuclear systems, the Reactor Dynamics and Fuel Management Group (RDFMG) at the Pennsylvania State University has maintained an active involvement in this area. This presentation will highlight recent RDFMG work relating to thermal-hydraulic modeling tools. One such tool is the COolant Boiling in Rod Arrays - Two Fluids (COBRA-TF) computer code, capable of modeling the independent behavior of continuous liquid, vapor, and droplets using the sub-channel methodology. Work has been done to expand the modeling capabilities from the in-vessel region only, which COBRA-TF was developed for, to the coolant-line region by developing a dedicated coolant-line-analysis package that serves as an add-on to COBRA-TF. Additional COBRA-TF work includes development of a pre-processing tool for faster, more user-friendly creation of COBRA-TF input decks; implementation of post-processing capabilities for visualization of simulation results; and optimization of the source code for significant improvements in simulation speed and memory management. Of equal importance to these development activities is the validation of the resulting tools for their intended applications. The code's capability to capture rod-bundle thermal-hydraulic behavior during prototypical PWR operating conditions will be demonstrated through comparison of predicted and experimental results for the New Experimental Studies of Thermal-Hydraulics of Rod Bundles (NESTOR) tests. Due to the growing usage of Computational Fluid Dynamics (CFD) tools in this area, modeling results predicted by the STAR-CCM+ CFD tool will also be presented for these tests.
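The sub-channel methodology mentioned above ultimately rests on per-channel conservation equations. A toy single-phase, steady-state axial energy balance for one coolant channel gives the flavor (all numbers are hypothetical; COBRA-TF itself solves a far richer two-fluid, three-field model):

```python
import numpy as np

# Toy steady-state, single-phase energy balance for one coolant channel:
#     m_dot * cp * dT/dz = q'(z)
# Only a sketch of the sub-channel energy equation; every parameter value
# below is illustrative, not taken from COBRA-TF or the NESTOR tests.
def axial_temperature(q_lin, m_dot, cp, T_in, dz):
    """March coolant temperature up the channel given nodal linear heat rates."""
    T = [T_in]
    for q in q_lin:
        T.append(T[-1] + q * dz / (m_dot * cp))  # energy balance over one node
    return np.array(T)

q_lin = np.full(10, 20e3)  # W/m, uniform linear heat rate (hypothetical)
T = axial_temperature(q_lin, m_dot=0.3, cp=5500.0, T_in=290.0, dz=0.36)
```

A real sub-channel code additionally couples neighboring channels through lateral crossflow and turbulent mixing terms, which this single-channel sketch omits.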
In this talk I will discuss a convergence framework for directly approximating the viscosity solutions of fully nonlinear second-order PDE problems. The main focus will be the introduction of a set of sufficient conditions for constructing convergent finite difference (FD) methods. The conditions given are meant to be easier to realize and implement than those found in the current literature. The given FD methodology will then be shown to generalize to a class of discontinuous Galerkin (DG) methods. The proposed DG methods are high order and allow for increased flexibility when choosing a computational mesh. Numerical experiments will be presented to gauge the performance of the proposed DG methods. An overview of the PDE theory of viscosity solutions will also be given. The presented ideas are part of a larger project concerned with efficiently and accurately approximating the Hamilton-Jacobi-Bellman equation from stochastic optimal control.

February 22, 2013 - Charles K. Garrett: Numerical Integration of Matrix Riccati Differential Equations with Solution Singularities

A matrix Riccati differential equation (MRDE) is a quadratic ODE of the form X' = A21 + A22 X - X A11 - X A12 X. It is well known that MRDEs may have singularities in their solution. In this presentation, both the theory and practice of numerically integrating MRDEs past solution singularities will be analyzed. In particular, it will be shown how to create a black-box numerical MRDE solver, which accurately solves an MRDE with or without singularities.

February 21, 2013 - Giacomo Dimarco: Asymptotic Preserving Implicit-Explicit Runge-Kutta Methods For Non-Linear Kinetic Equations

In this talk, we will discuss Implicit-Explicit (IMEX) Runge-Kutta methods which are particularly adapted to stiff kinetic equations of Boltzmann type. We will consider both the case of easily invertible collision operators and the challenging case of Boltzmann collision operators.
We give sufficient conditions for such methods to be asymptotic preserving and asymptotically accurate. Their monotonicity properties are also studied. In the case of the Boltzmann operator, the methods are based on the introduction of a penalization technique for the collision integral. This reformulation of the collision operator permits the construction of penalized IMEX schemes which work uniformly for a wide range of relaxation times, avoiding the expensive implicit resolution of the collision operator. Finally, we show some numerical results which confirm the theoretical analysis.

February 20, 2013 - Tom Berlijn: Effects of Disorder on the Electronic Structure of Functional Materials

Doping is one of the most powerful ways to tune the properties of functional materials such as thermoelectrics, photovoltaics, and superconductors. Besides carriers and chemical pressure, the dopants insert disorder into the materials. In this talk I will present two case studies of doped Fe-based superconductors: Fe vacancies in KxFeySe2 [1] and Ru substitutions in Ba(Fe1-xRux)2As2 [2]. With the use of a recently developed first-principles method [3], non-trivial disorder effects are found that are not only interesting scientifically, but also have potential implications for materials technology. Open questions for further research will be discussed. [1] TB, P. J. Hirschfeld, W. Ku, PRL 109 (2012). [2] L. Wang, TB, C.-H. Lin, Y. Wang, P. J. Hirschfeld, W. Ku, PRL 110 (2013). [3] TB, D. Volja, W. Ku, PRL 106 (2011).

February 19, 2013 - Joshua D. Carmichael: Seismic Monitoring of the Western Greenland Ice Sheet: Response to Early Lake Drainage

In 2006, the drainage of a supraglacial lake through hydrofracture on the Greenland Ice Sheet was directly observed for the first time. This event demonstrated that surface-to-bed hydrological connections can be established through 1 km of cold ice and thereby allow surficial forcing of a developed subglacial drainage system by surface meltwater.
In a changing climate, supraglacial lakes on the Western Greenland Ice Sheet are expected to drain earlier each summer and form new lakes at higher elevations. The ice sheet's response to these earlier drainages in the near future is of glaciological concern. We address the response of the Western Greenland Ice Sheet to an observed early lake drainage using a synthesis of seismic and GPS monitoring near an actively draining lake. This experiment demonstrates that (1) seismic activity precedes the drainage event by several days and is likely coincident with crack coalescence, (2) seismic multiplet locations are coincident with the uplift of the ice during drainage, and (3) a diurnal seismic response of the ice sheet follows after the ice surface settles to its pre-drainage elevation a week later. These observations are consistent with a model in which the subglacial drainage system is likely distributed, highly pressurized, and of low hydraulic conductivity at drainage initiation. It also demonstrates that an early lake drainage likely reduces basal normal stress on order-week time scales by storing water subglacially. We conclude with recommendations for future long-range lake drainage detection.

February 18, 2013 - Mili Shah: Calculating a Symmetry Preserving Singular Value Decomposition

The symmetry preserving singular value decomposition (SPSVD) produces the best symmetric (low rank) approximation to a set of data. These symmetric approximations are characterized via an invariance under the action of a symmetry group on the set of data. The symmetry groups of interest consist of all the non-spherical symmetry groups in three dimensions. This set includes the rotational, reflectional, dihedral, and inversion symmetry groups. In order to calculate the best symmetric (low rank) approximation, the symmetry of the data set must be determined. Therefore, matrix representations for each of the non-spherical symmetry groups have been formulated.
These new matrix representations lead directly to a novel reweighting iterative method to determine the symmetry of a given data set by solving a series of minimization problems. Once the symmetry of the data set is found, the best symmetric (low rank) approximation can be established by using the SPSVD. Applications of the SPSVD to protein dynamics problems as well as facial recognition will be presented.

February 14, 2013 - Zheng (Cynthia) Gu: Efficient and Robust Message Passing Schemes for Remote Direct Memory Access (RDMA)-Enabled Clusters

While significant effort has been made in improving Message Passing Interface (MPI) performance, existing work has mainly focused on eliminating software overhead in the library and delivering raw network performance to applications. Current MPI implementations such as MPICH2, MVAPICH2, and Open MPI still suffer from performance issues such as unnecessary synchronizations, communication progress problems, and lack of communication-computation overlap. The root cause of these problems is the mismatch between the communication protocols/algorithms and the communication scenarios. In my PhD research, I will develop efficient and robust message passing schemes for both point-to-point and collective communications for RDMA-enabled clusters. Unlike existing approaches for optimizing MPI performance, our approach will allow different communication protocols/algorithms for different communication scenarios. The idea is to use the most appropriate communication scheme for each communication so as to remove the mismatches, which will eliminate unnecessary synchronizations, improve communication progress, and maximize communication-computation overlap during a communication operation. This prospectus will describe the background of this research, present our preliminary research, and summarize the proposed future work.
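The idea of matching protocols to communication scenarios can be illustrated by the classic size-based choice between eager and rendezvous point-to-point protocols. The threshold below is purely illustrative, not a value from any of the MPI implementations named above:

```python
# Minimal sketch of size-based MPI point-to-point protocol selection.
# Real MPI libraries use tunable thresholds and far richer criteria
# (receiver readiness, buffer availability, network type); the 64 KiB
# cutoff here is a hypothetical value for illustration only.
EAGER_LIMIT = 64 * 1024  # bytes (hypothetical threshold)

def choose_protocol(msg_bytes):
    """Eager: send immediately with receiver-side buffering (low latency,
    good for small messages). Rendezvous: handshake first so the receiver
    can post an RDMA buffer, avoiding extra copies for large payloads."""
    return "eager" if msg_bytes <= EAGER_LIMIT else "rendezvous"

print(choose_protocol(1024))             # small message -> eager
print(choose_protocol(4 * 1024 * 1024))  # large message -> rendezvous
```

The abstract's point is precisely that a single static rule like this is too coarse: the protocol should adapt to the full communication scenario, not just the message size.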
February 8, 2013 - Taylor Patterson: Simulation of Complex Nonlinear Elastic Bodies Using Lattice Deformers

Lattice deformers are a popular option in computer graphics for modeling the behavior of elastic bodies, as they avoid the need for conforming mesh generation, and their regular structure offers significant opportunities for performance optimizations. This talk will present work that expands the scope of current grid-based elastic deformers, adding support for a number of important simulation features. The approach to be described accommodates complex nonlinear, optionally anisotropic materials while using an economical one-point quadrature scheme. The formulation fully accommodates near-incompressibility by enforcing accurate nonlinear constraints, supports implicit integration for large time steps, and is not susceptible to locking or poor conditioning of the discrete equations. Additionally, this technique increases the solver accuracy by employing a novel high-order quadrature scheme on lattice cells overlapping with the embedded model boundary, which are treated at sub-cell precision. This accurate boundary treatment can be implemented at a minimal computational premium over the cost of a voxel-accurate discretization. Finally, this talk will present part of the expanding feature set of this approach that is currently under development.

February 6, 2013 - Makhan Virdi: Modeling High-Resolution Soil Moisture to Estimate Recharge Timing and Experiences with Geospatial Analyses

Estimating the time of groundwater recharge after a rainfall event is poorly understood because of its dependence on non-linear soil characteristics and variability in antecedent soil conditions. Movement of water in variably saturated soil can be described by Richards' equation - a non-linear partial differential equation without a closed-form analytical solution, which is difficult to approximate.
To develop a simple recharge model using a minimum number of soil parameters, high-resolution soil moisture data from a soil column under controlled laboratory conditions were analysed to understand wetting front propagation at a finer temporal scale. Findings from a series of simulations with an existing finite element model, obtained by varying soil properties and depth to water table, were used to propose a simple model that uses only the most significant representative soil properties and the antecedent soil matrix state. In other, separate geospatial analyses: satellite imagery was used to determine landslide risk cost and to develop an algorithm for safest and shortest route planning in hilly areas susceptible to landslides; the effects of decadal climate extremes on lake-groundwater exchanges were studied; and the effects of phosphate mining at a regional scale were studied using hydrological models and geospatial analysis of LiDAR-derived DEMs and watersheds.

February 5, 2013 - Roshan J. Vengazhiyil and C. F. Jeff Wu: Experimental Design, Model Calibration, and Uncertainty Quantification

We will start the talk with a newly developed space-filling design, called minimum energy design (MED). The key ideas involved in constructing the MED are the visualization of each design point as a charged particle inside a box, and minimization of the total potential energy of these particles. It is shown, through theoretical arguments and simulations, that under regularity conditions and proper choice of the charge function, the MED can asymptotically generate any arbitrary probability density function. This new design technique has important applications in Bayesian computation and uncertainty quantification. The second part of the talk will focus on model calibration. The commonly used Kennedy and O'Hagan (KO) approach treats the computer model as a black box and, therefore, the statistically calibrated models lack physical interpretability.
We propose a new framework that opens up the black box and introduces statistical models inside the computer model. This approach leads to simpler models that are physically more interpretable. Then, we will present some theoretical results concerning the convergence properties of calibration parameter estimation in the KO formulation of the model calibration problem. The KO calibration is shown to be asymptotically inconsistent. A new approach, called L2 distance calibration, is shown to be consistent and asymptotically efficient in estimating the calibration parameters.

February 4, 2013 - Li-Shi Luo: Kinetic Methods for CFD

Computational fluid dynamics (CFD) is based on direct discretizations of the Navier-Stokes equations. The traditional approach of CFD is now being challenged as new multi-scale and multi-physics problems have begun to emerge in many fields -- in nanoscale systems, the scale separation assumption does not hold; macroscopic theory is therefore inadequate, yet microscopic theory may be impractical because it requires computational capabilities far beyond our present reach. Methods based on mesoscopic theories, which connect the microscopic and macroscopic descriptions of the dynamics, provide a promising approach. Besides their connection to microscopic physics, kinetic methods also have certain numerical advantages due to the linearity of the advection term in the Boltzmann equation. Dr. Luo will discuss two mesoscopic methods: the lattice Boltzmann equation and the gas-kinetic scheme, their mathematical theory, and their applications to simulating various complex flows. Examples include incompressible homogeneous isotropic turbulence, hypersonic flows, and micro-flows.

January 23, 2013 - Tarek Ali El Moselhy: New Tools for Uncertainty Quantification and Data Assimilation in Complex Systems

In this talk, Dr. Tarek Ali El Moselhy will present new tools for forward and inverse uncertainty quantification (UQ) and data assimilation.
In the context of forward UQ, Dr. Moselhy will briefly summarize a new scalable algorithm particularly suited for very high-dimensional stochastic elliptic and parabolic PDEs. The algorithm relies on computing a compact separated representation of the stochastic field of interest. The separated representation is computed iteratively and adaptively via a greedy optimization algorithm. The algorithm has been successfully applied to problems of flow and transport in stochastic porous media, handling “real world” levels of spatial complexity and providing orders-of-magnitude reduction in computational time compared to state-of-the-art methods. In the context of inverse UQ, Dr. Moselhy will present a new algorithm for the Bayesian solution of inverse problems. The algorithm explores the posterior distribution by finding a transport map from a reference measure to the posterior measure, and therefore does not require any Markov chain Monte Carlo sampling. The map from the reference to the posterior is approximated using a polynomial chaos expansion and is computed via stochastic optimization. Existence and uniqueness of the map are guaranteed by results from the optimal transport literature. The map approach is demonstrated on a variety of problems, ranging from inference of permeability fields in elliptic PDEs to benchmark high-dimensional spatial statistics problems such as inference in log-Gaussian Cox point processes.
In addition to its computational efficiency and parallelizability, advantages of the map approach include: providing clear convergence criteria and error measures, providing analytical expressions for posterior moments, evaluating at no additional computational cost the marginal likelihood/evidence (thus enabling model selection), the ability to generate independent uniformly-weighted posterior samples without additional model evaluations, and the ability to efficiently propagate posterior information to subsequent computational modules (thus enabling stochastic control). In the context of data assimilation, Dr. Moselhy will present an optimal map algorithm for filtering of nonlinear chaotic dynamical systems. Such an algorithm is suited for a wide variety of applications, including prediction of weather and climate. The main advantage of the algorithm is that it inherently avoids the issues of sample impoverishment common to particle filters, since it explicitly represents the posterior as the push-forward of a reference measure rather than with a set of samples.

December 13, 2012 - Russell Carden: Automating and Stabilizing the Discrete Empirical Interpolation Method for Nonlinear Model Reduction

The Discrete Empirical Interpolation Method (DEIM) is a technique for model reduction of nonlinear dynamical systems. It is based upon a modification to proper orthogonal decomposition, which is designed to reduce the computational complexity of evaluating the reduced-order nonlinear term. The DEIM approach is based upon an interpolatory projection and only requires evaluation of a few selected components of the original nonlinear term. Thus, implementation of the reduced-order nonlinear term requires a new code to be derived from the original code for evaluating the nonlinearity. Dr. Carden will describe a methodology for automatically deriving a code for the reduced-order nonlinearity directly from the original nonlinear code.
Although DEIM has been effective on some very difficult problems, it can under certain conditions introduce instabilities in the reduced model. Dr. Carden will present a problem that has proved helpful in developing a method for stabilizing DEIM reduced models.

December 12, 2012 - Charlotte Kotas: Bringing Real-Time Array Signal Processing to the NVIDIA Tesla

Underwater acoustic detection of hostile targets at range requires increasingly computationally advanced algorithms as adversaries become quieter. This seminar will discuss the mathematics behind one such algorithm and some of the challenges associated with modifying it to work in a real-time networked environment. The algorithm was modified from a sequential MATLAB formulation to a parallel CUDA Fortran formulation designed to run on an NVIDIA Tesla C2050 processor. Speedups of greater than 50× were observed over comparable computational sections.

December 6, 2012 - Shuaiwen "Leon" Song: Power, Performance and Energy Models and Systems for Emergent Architectures

Massive parallelism combined with complex memory hierarchies and heterogeneity in high-performance computing (HPC) systems forms a barrier to efficient application and architecture design. The performance achievements of the past must continue over the next decade to address the needs of scientific simulations. However, building an exascale system by 2022 that uses less than 20 megawatts will require significant innovations in power and performance efficiency. Prior to this work, the fundamental relationships between power and performance were not well understood. Our analytical modeling approach allows users to quantify the relationship between power and performance at scale by enabling study of the effects of machine- and application-dependent characteristics on system energy efficiency. Our model helps users isolate the root causes of energy or performance inefficiencies and develop strategies for scaling systems to maintain or improve efficiency.
I will also show how this methodology can be extended and applied to model power and performance in heterogeneous GPU-based architectures. Shuaiwen "Leon" Song is a PhD candidate in the Computer Science department of Virginia Tech. His primary research interests fall broadly within the area of High Performance Computing (HPC), with a focus on power and performance analysis and modeling for large-scale homogeneous and heterogeneous parallel architectures and runtime systems. He is a recipient of the 2011 Paul E. Torgersen Award for Graduate Student Research Excellence and in 2011 was an Institute for Scientific Computing Research (ISCR) Scholar at Lawrence Livermore National Laboratory. His work has been published in conferences and journals including IPDPS, IEEE Cluster, PACT, MASCOTS, IEEE TPDS, and IJHPCA.

December 6, 2012 - Miroslav Stoyanov: Gradient Based Dimension Reduction Approach for Stochastic Partial Differential Equations

A dimension reduction approach is considered for uncertainty quantification, where we use gradient information to partition the uncertainty domain into “active” and “passive” subspaces, where the “passive” subspace is characterized by near-zero variance of the quantity of interest. We present a way to project the model onto the low-dimensional “active” subspace and solve the resulting problem using conventional techniques. We derive rigorous error bounds for the projection algorithm and show convergence in the L1 norm.

December 5, 2012 - Barbara Chapman: Enabling Exascale Programming: The Intranode Challenge

As we continue to debate the best way to program emerging generations of leadership-class hardware, it is imperative that we do not ignore the more traditional paths. Dr. Chapman's presentation considers some of the ways in which today's intranode programming models may help us migrate legacy application code.
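The gradient-based partition into “active” and “passive” subspaces described in the Stoyanov abstract above can be illustrated with the standard gradient-covariance construction: for a ridge function, every sampled gradient is parallel to the active direction, so the dominant eigenvector of the empirical gradient covariance recovers it. This sketch follows the general active-subspace idea only; the talk's specific projection algorithm and error bounds are not reproduced here.

```python
import numpy as np

# Sketch of gradient-based detection of an "active" direction. For a
# ridge function f(x) = g(w.x), grad f = g'(w.x) * w is always parallel
# to w, so the top eigenvector of C = E[grad f grad f^T] recovers w
# and the orthogonal ("passive") direction carries near-zero variance.
rng = np.random.default_rng(0)
w = np.array([3.0, 4.0]) / 5.0  # true active direction (unit norm, illustrative)

def grad_f(x):
    # f(x) = sin(w.x)  =>  grad f(x) = cos(w.x) * w
    return np.cos(w @ x) * w

X = rng.normal(size=(500, 2))             # samples of the uncertain inputs
G = np.array([grad_f(x) for x in X])      # sampled gradients
C = G.T @ G / len(X)                      # empirical gradient covariance
eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
active = eigvecs[:, -1]                   # dominant eigenvector ~ +/- w
```

For this rank-one example the recovery is exact up to sign; in practice a spectral gap in `eigvals` is used to decide how many active directions to keep.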
December 5, 2012 - Andrew Christlieb: An Implicit Maxwell Solver Based on Method of Lines Transpose

Fast summation methods have been successfully used in a range of plasma applications. However, in the case of moving point charges, direct application of fast summation methods in the time domain requires the use of retarded potentials. In practice, this means that every time a point charge moves in a simulation, it leaves behind an image charge that becomes a source term for all time. Hence, at each time step the number of points in the simulation grows with the number of particles being simulated. In this talk, Dr. Christlieb will present a new approach to Maxwell's equations based on the method of lines transpose. The method starts by expressing Maxwell’s equations in second-order form, and then the time operator is discretized. The resulting implicit system is then solved using integral methods. This process is known as the method of lines transpose. This approach pushes the time history into a volume integral, which does not grow in complexity with time. To efficiently solve the boundary integral, Dr. Christlieb will explain a newly developed ADI method combined with an O(N) solver for the 1D boundary integrals that is competitive with explicit time-stepping methods. Because the new method is implicit, this approach is not subject to a CFL constraint. Further, because the approach is based on an integral formulation, the new method easily encompasses complex geometry with no special modification. Dr. Christlieb will present preliminary results of this method applied to wave propagation and some basic Maxwell examples.

November 27, 2012 - Charles Jackson: Metrics for Climate Model Validation

A “valid” model is a model that has been tested for its intended purpose. In the Bayesian formulation, the “log-likelihood” is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data.
This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues in formulating the log-likelihood is how one should account for biases, because not all biases affect predictions of quantities of interest. Dr. Jackson makes use of a 165-member ensemble of CAM3.1/slab ocean climate models with different parameter settings to think through the issues involved in predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular, Dr. Jackson uses multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover which fields and regions matter to the model's sensitivity. What is found is that the differences that matter can be a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. Dr. Jackson will discuss the implications of this result.

November 15, 2012 - Erich Foster: Finite Elements for the Quasi-Geostrophic Equations of the Ocean

Erich Foster will present a conforming finite element (FE) discretization of the pure stream function form of the quasi-geostrophic equations (QGE), which are a commonly used model for the large-scale wind-driven ocean circulation. The pure stream function form of the QGE is a fourth-order PDE and therefore requires a C^1 FE discretization to be conforming. Thus, the Argyris finite element, a C^1 FE with 21 degrees of freedom, was chosen for the FE discretization of the QGE. Optimal error estimates for the pure stream function form of the QGE will be presented. The QGE are a simplified model of the ocean; however, it can be computationally expensive to resolve all scales, so numerical methods such as the two-level method are indispensable for time-sensitive projects.
A two-level method, and an optimal error estimate for a two-level method applied to the conforming FE discretization of the pure stream function form of the QGE, will be presented, and computational efficiency will be demonstrated.

October 25, 2012 - Shi Jin: Asymptotic-Preserving Schemes for Boltzmann Equation and Related Problems with Stiff Sources

Dr. Shi Jin will propose a general framework to design asymptotic-preserving schemes for the Boltzmann kinetic equation and related equations. Numerically solving these equations is challenging due to the nonlinear stiff collision (source) terms induced by small mean free path or relaxation time. Dr. Jin will propose to penalize the nonlinear collision term by a BGK-type relaxation term, which can be solved explicitly even if discretized implicitly in time. Moreover, the BGK-type relaxation operator helps to drive the density distribution toward the local Maxwellian, and thus naturally yields an asymptotic-preserving scheme in the Euler limit. The scheme so designed does not need any nonlinear iterative solver or the use of Wild sums. It is uniformly stable in terms of the (possibly small) Knudsen number, and can capture the macroscopic fluid dynamic (Euler) limit even if the small scale determined by the Knudsen number is not numerically resolved. Dr. Jin will show how this idea can be applied to other collision operators, such as the Landau-Fokker-Planck operator, the Uehling-Uhlenbeck model, and the kinetic-fluid model of disperse multiphase flows.

October 24, 2012 - Shi Jin: Semiclassical Computation of High Frequency Waves in Heterogeneous Media

Dr. Shi Jin will introduce semiclassical Eulerian methods that are efficient in computing high frequency waves through heterogeneous media. The method is based on the classical Liouville equation in phase space, with discontinuous Hamiltonians due to the barriers or material interfaces. Dr.
Jin will provide physically relevant interface conditions consistent with the correct transmissions and reflections, and then build the interface conditions into the numerical fluxes. This method allows the resolution of high frequency waves without numerically resolving the small wavelengths, and captures the correct transmissions and reflections at the interface. The method can also be extended to deal with diffraction and quantum barriers. Dr. Jin will also discuss an Eulerian Gaussian beam formulation which can compute caustics more accurately.

October 09, 2012 - Christian Ringhofer: Charged Particle Transport in Narrow Geometries under Strong Confinement

Kinetic transport in narrow tubes and thin plates, involving scattering of particles with a background, is modeled by classical and quantum mechanical sub-band type macroscopic equations for the density of particles (ions). The result, on large time scales, is a diffusion equation with the projection of the (asymptotically conserved) energy tensor onto the confined directions as an additional free variable. Classical transport of ions through protein channels and quantum transport in thin films are discussed as examples of the application of this methodology.

October 05, 2012 - Amilcare Porporato: Stochastic Soil Moisture Dynamics: From Soil-Plant Biogeochemistry and Land-Atmosphere Interactions to Sustainable Use of Soil and Water

The soil-plant-atmosphere system is characterized by a large number of interacting processes with a high degree of unpredictability and nonlinearity. These elements of complexity, while making a full modeling effort extremely daunting, are also responsible for the emergence of characteristic behaviors. Researchers at Duke University model these processes by means of minimalist models which describe the main deterministic components of the system and surrogate the high-dimensional ones (i.e., hydroclimatic variability, and rainfall in particular) with suitable stochastic terms.
The solution of the stochastic soil water balance allows us to describe probabilistically several ecohydrological processes, including ecosystem response, plant productivity, and soil organic matter and nutrient cycling dynamics. Dr. Porporato will also discuss how such an approach can be extended to include land-atmosphere feedbacks and their related impact on convective precipitation. Dr. Porporato will conclude with a brief discussion of how these methods can be employed to address quantitatively the sustainable management of water and soil resources, including optimal irrigation and fertilization, phytoremediation, and soil salinization risk.
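The BGK-type penalization at the heart of the October 25 abstract can be illustrated on a toy relaxation equation. The sketch below is not from the talk; the function name and parameter values are illustrative. It uses the fact that an implicit Euler step for df/dt = (M - f)/eps can be solved in closed form, so the update remains explicit while staying stable and driving f toward the equilibrium M even when the time step is far larger than eps.

```python
import numpy as np

def bgk_step(f, M, eps, dt):
    """One implicit-Euler step for df/dt = (M - f)/eps.

    The implicit update f_new = f + (dt/eps)*(M - f_new) can be
    solved in closed form, which is the key trick behind BGK-type
    penalization of stiff collision terms: no nonlinear solver needed.
    """
    return (f + (dt / eps) * M) / (1.0 + dt / eps)

# Toy check of the asymptotic-preserving property: with a time step
# far larger than the relaxation time, the update lands on the local
# equilibrium instead of becoming unstable.
M = np.array([1.0, 2.0, 3.0])   # stand-in for the local Maxwellian
f = np.zeros(3)                 # initial distribution
eps, dt = 1e-8, 1e-2            # stiff regime: dt >> eps
f = bgk_step(f, M, eps, dt)
print(np.allclose(f, M))        # -> True
```

In a full scheme, the stiff collision term Q(f) would be split as (Q(f) - P(f)) + P(f), with only the linear penalty P(f) treated implicitly as above.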
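The minimalist stochastic soil water balance described in Dr. Porporato's abstract can be sketched as a Monte Carlo simulation: rainfall arrives as a marked Poisson process with exponentially distributed depths, and soil moisture is depleted by evapotranspiration. All specifics below (the rate constants, the linear loss term, the daily time step) are illustrative assumptions, not the model presented in the talk.

```python
import numpy as np

def simulate_soil_moisture(days=365, lam=0.2, alpha=0.05,
                           eta=0.04, s0=0.3, seed=0):
    """Toy daily soil water balance on relative moisture s in [0, 1].

    Rainfall: Poisson arrivals (probability lam per day) with
    exponentially distributed depths of mean alpha, expressed as a
    fraction of storage capacity. Losses: evapotranspiration linear
    in s with rate eta. Input above saturation is lost as runoff.
    """
    rng = np.random.default_rng(seed)
    s = np.empty(days)
    s_prev = s0
    for t in range(days):
        rain = rng.exponential(alpha) if rng.random() < lam else 0.0
        s_prev = min(1.0, s_prev + rain)          # runoff above saturation
        s_prev = s_prev - eta * s_prev            # linear ET loss
        s[t] = s_prev
    return s

trace = simulate_soil_moisture()
print(trace.min() >= 0.0 and trace.max() <= 1.0)  # -> True
```

From many such realizations one can estimate the probability distribution of soil moisture, which is the quantity the abstract's analytical solution of the stochastic water balance provides directly.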
https://www.usgs.gov/center-news/surface-deformation-k-laueas-summit-a-moving-target
# Surface Deformation at Kīlauea's Summit is a Moving Target

Ground deformation is one of the primary ways that scientists at the Hawaiian Volcano Observatory (HVO)—and other volcanologists all over the world—monitor volcanic activity. As magma moves beneath the Earth's surface, the ground deforms to accommodate the changing volume below. For example, when magma accumulates in a subsurface chamber, the ground inflates like a giant balloon. But when magma drains from a chamber and erupts, for example, on Kīlauea's east rift zone, the surface deflates. The extent of this deformation is usually small (perhaps tens of centimeters, or several inches, each year), but in extreme cases at some volcanoes, it can be more than several meters (over 10 feet)!

HVO monitors these changes using a variety of methods. (1) To map motion of the surface over a broad region, we utilize radar data collected by orbiting satellites. (2) To track surface motion continuously in three dimensions at several places around Kīlauea, we employ GPS stations. (3) To measure tilt of the Earth's surface, we use tiltmeters installed in shallow boreholes. (4) To measure changes in elevation that are accurate to a fraction of a millimeter, we conduct leveling surveys around Kīlauea once a year. While the first three techniques listed are relatively new, leveling has been done at Kīlauea, using the same basic technology, since 1912.

These measurements reveal how ground motion is related to magma movement beneath the surface at Kīlauea. During much of the Puu Oo-Kupaianaha eruption, Kīlauea's summit deflated, with the maximum deformation located about 2 km (1 mile) south of Halemaumau Crater. Compared to its 1983 elevation (at the start of the Puu Oo eruption), the southern part of the caldera had subsided by about 1.5 m (over 4 feet) by 2003! Since 2003, however, an influx of new magma to Kīlauea has caused a truly astonishing migration of the deformation source over time.
During 2003-2004, inflation was centered along the east wall of Halemaumau Crater, not far from the currently active summit vent. One year later, inflation was centered closer to Keanakakoi Crater, in the southeast part of the caldera. During 2005-2006, the focal point of inflation was the south caldera, the same source that had been deflating from 1983 to 2003.

Alert volcano watchers will know that activity at Kīlauea changed abruptly in June 2007, when an intrusion and eruption occurred between the summit and the east rift eruptive vent at Puu Oo, followed one month later by the opening of a new eruptive vent 2 km (1 mile) east of the crater. This new vent has been the main source of erupting lava to this day, and is the source of the lava flows that can be seen entering the ocean near Kalapana. Since then, Kīlauea's summit has been deflating, probably because more lava is erupting from the new vent than is being supplied to the volcano.

Like the 2003-2007 pattern of inflation, the recent deflation has been moving around the summit area like a caged tiger. At first, the deflation was focused on the east rim of Halemaumau Crater, but by late 2007 it had moved to the south caldera. In mid-2008 (after the start of the summit eruption), the center of deflation had moved back to Halemaumau's east rim, but by late summer it had returned to the south caldera.

What does this strange dance mean? Why would the center of deformation at Kīlauea move in apparently erratic ways? This is not the first time such behavior has been observed at Kīlauea. Before the 1967-1968 summit eruption (which was the subject of last week's column), leveling measurements by HVO showed a highly variable center of inflation. During 1966-1967, the source of uplift moved from east of Halemaumau to the south caldera, then to the southwest caldera, and finally back to the south caldera, all prior to the eruption.

All these changes at Kīlauea's summit reveal the complexity of the magma storage areas beneath us.
Although we often think of a magma chamber as a giant balloon, the actual geometry of the magma reservoir is probably much more complicated, with several interconnected storage areas. The moving deformation source suggests that different parts of Kīlauea's magma system are active at different times. Unusual eruptive activity characterized the migrations of the deformation source during both the 1966-1967 and 2003-present episodes. Perhaps this deformation behavior indicates that Kīlauea is especially active, and that large volumes of magma are moving underground. Certainly the first summit eruption in more than 25 years is an indication of Kīlauea's restless state. With the latest in deformation-monitoring technology, HVO is well-positioned to track Kīlauea's ever-changing shape and learn even more about how magma is stored within the house of Pele.

### Volcano Activity Update

Kīlauea Volcano continues to be active. A vent in Halemaumau Crater is emitting elevated amounts of sulfur dioxide gas and very small amounts of ash. Resulting high concentrations of sulfur dioxide in downwind air have closed the south part of Kīlauea caldera and produced occasional air quality alerts in more distant areas, such as Pahala and communities adjacent to Hawaii Volcanoes National Park, during kona wind periods. There have been several small ash-emission events from the vent, each lasting only minutes, in the last week.

Puu Ōō continues to produce sulfur dioxide at even higher rates than the vent in Halemaumau Crater. Trade winds tend to pool these emissions along the West Hawaii coast, while Kona winds blow these emissions into communities to the north, such as Mountain View, Volcano, and Hilo. Lava continues to erupt from the Thanksgiving Eve Breakout (TEB) vent and flows toward the ocean through a well-established lava tube.
Lava breakouts in the Royal Gardens subdivision have been active throughout the past week, sending small flows several hundred yards southward onto the coastal plain. Activity at the Waikupanaha ocean entry has fluctuated over the past week. A deflation-inflation (DI) event at the summit led to a brief reduction in activity at the ocean entry on Wednesday, November 12.

Be aware that active lava deltas can collapse at any time, potentially generating large explosions. This may be especially true during times of rapidly changing lava supply conditions. Do not venture onto the lava deltas. Even the intervening beaches are susceptible to large waves generated during delta collapse; avoid these beaches. In addition, steam plumes rising from ocean entries are highly acidic and laced with glass particles. Check the Civil Defense Web site or call 961-8093 for viewing hours.

Mauna Loa is not erupting. One earthquake was located beneath the summit this past week. Continuing extension between locations spanning the summit indicates slow inflation of the volcano.

One earthquake beneath Hawaii Island was reported felt within the past week. A magnitude-2.7 earthquake occurred at 10:52 a.m., H.s.t., on Friday, November 7, 2008, and was located 12 km (7 miles) southwest of Kīlauea summit at a depth of 30 km (19 miles).

Visit our Web site for daily Kīlauea eruption updates, a summary of volcanic events over the past year, and nearly real-time Hawaii earthquake information. Kīlauea daily update summaries are also available by phone at (808) 967-8862. Questions can be emailed to [email protected].