url: string (lengths 16 to 775)
text: string (lengths 100 to 1.02M)
date: timestamp[s]
metadata: string (lengths 1.07k to 1.1k)
https://tjyj.stats.gov.cn/CN/10.19343/j.cnki.11-1302/c.2021.09.009
### The Effect of Population Migration and Wuhan Lockdown on the Control of COVID-19 Based on vSEIdRm Model Gu Jia, Chen Songxi, Dong Qian, Qiu Yumou • Online: 2021-09-25 | Published: 2021-09-27 Abstract: Unlike the traditional SEIR (Susceptible-Exposed-Infected-Removed) epidemic model, the vSEIdRm model proposed in this paper extends the varying-coefficient vSEIdR (Susceptible-Exposed-Infected-Diagnosed-Removed) model from our previous study with a population-migration compartment, so that the effect of cross-regional migration on the epidemic is taken into account and the parameters are allowed to vary with time. We first conduct a statistical analysis of the population migration data and connect the migration with the progression of the COVID-19 epidemic. Further, based on the new model, we estimate the imported cases that the provinces would have received from Wuhan in the absence of the Wuhan lockdown, which quantifies the effect of the lockdown. Our results show that the Wuhan lockdown effectively reduced the scale of the epidemic in other provinces.
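The abstract describes the model only in words. Purely as an illustration of the general idea of adding a migration inflow to a compartmental epidemic model with time-varying parameters (this is not the authors' vSEIdRm specification; every function name and number below is hypothetical), a discrete-time sketch might look like:

```python
import numpy as np

def seir_with_migration(T, N, beta, sigma=1/5.2, gamma=1/10, m_in=None, frac_exposed=0.01):
    """Toy discrete-time SEIR model with a time-varying transmission rate beta(t)
    and an external migration inflow m_in(t) (people per day), a small fraction of
    whom are assumed latently infected. Illustrative only; NOT the vSEIdRm model."""
    S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
    history = []
    for t in range(T):
        inflow = m_in(t) if m_in is not None else 0.0
        new_E = beta(t) * S * I / N      # new exposures from contact
        E_to_I = sigma * E               # latent individuals becoming infectious
        I_to_R = gamma * I               # recovery / removal
        S += inflow * (1 - frac_exposed) - new_E
        E += inflow * frac_exposed + new_E - E_to_I
        I += E_to_I - I_to_R
        R += I_to_R
        N += inflow                      # migration changes the population size
        history.append((S, E, I, R))
    return np.array(history)

# Hypothetical scenario: transmission and inflow both drop when a lockdown starts on day 30.
traj = seir_with_migration(
    T=120, N=1.1e7,
    beta=lambda t: 0.5 if t < 30 else 0.15,
    m_in=lambda t: 50_000 if t < 30 else 0.0,
)
print(traj[-1])  # final (S, E, I, R)
```

Comparing runs with and without the inflow term gives the flavor of the counterfactual "would-be imported cases" analysis described in the abstract, though the paper's estimation of time-varying coefficients is far more involved.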
2022-07-02T08:07:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2932618260383606, "perplexity": 3612.6139684712434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00137.warc.gz"}
https://par.nsf.gov/biblio/10347005-systematic-study-nuclear-effects-p+al-p+au-d+au-he3+au-collisions-snn-gev-using-production
This content will become publicly available on June 1, 2023. Systematic study of nuclear effects in $p+\mathrm{Al}$, $p+\mathrm{Au}$, $d+\mathrm{Au}$, and ${}^{3}\mathrm{He}+\mathrm{Au}$ collisions at $\sqrt{s_{NN}}=200$ GeV using ${\pi}^{0}$ production. NSF-PAR ID: 10347005 | Journal: Physical Review C | Volume: 105 | Issue: 6 | ISSN: 2469-9985
2022-09-26T13:11:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8718985915184021, "perplexity": 12137.489988944037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00783.warc.gz"}
https://www.usgs.gov/programs/climate-adaptation-science-centers/science/science-topics/extreme-weather
# Extreme Weather

Changes in the frequency and severity of extreme weather events such as hurricanes, blizzards, and floods can devastate human and ecological communities and fundamentally shift dynamics in the region. The CASC network produces knowledge, data, and tools to understand and predict extreme weather events and to help develop strategies for protecting communities and ecosystems.

#### The Combined Effects of Seasonal Climate and Extreme Precipitation on Flood Hazard in the Midwest
The Midwest has experienced some of the costliest flooding events in U.S. history, including many billions of dollars in damages during the past decade alone. The Midwest's susceptibility to flooding has been exacerbated by a long-term increase in total precipitation and extreme rainfalls, with the 2010s being the region's wettest decade on record. Climate models strongly indicate that these recent trends w...

#### Understanding Extreme Wildfire Events to Manage for Fire-Resistant and Resilient Landscapes
Increasing wildfire activity in the western US poses profound risks for human communities and ecological systems. Recent fire years are characterized not only by expanding area burned but also by explosive fire growth. In 2020, several fires grew by >100,000 acres within a 24-hour period. Extreme single-day fire spread events such as these are poorly understood but disproportionately responsible for...

#### Building a Coastal Flood Hazard Assessment and Adaptation Strategy with At-Risk Communities of Alaska
Coastal flooding and erosion are increasingly threatening infrastructure and public safety in Alaska Native communities. While many scientists and projects are attentive to the problem, there are still a limited number of tools that assess vulnerability to coastal flood hazards. Few of the available tools use modeling approaches that can be customized to specific community information needs in a m...

#### Evaluating how snow avalanches impact mountain goat populations in southeast Alaska
Snow avalanches have a wide variety of effects on mountain environments, with both beneficial and harmful outcomes for wildlife. Avalanches can benefit wildlife by creating open chutes in which to graze but can also be a direct source of mortality when animals are buried by avalanche debris. Mountain goats, which inhabit rugged and steep terrain, are at an increased risk of exposure to avalanches.

#### Characterizing Climate-Driven Changes to Flood Events and Floodplain Forests in the Upper Mississippi River to Inform Management
Floodplain forests along the Upper Mississippi River are heavily managed but understudied systems that provide critical ecosystem services, including habitat for endangered species. Impacts of a changing climate, such as warmer winters and wetter summers with extreme precipitation events, are already influencing hydrologic patterns in these ecosystems, including altering the duration, frequency, a...

#### Assessing the Risk to National Park Service Lands in Alaska Imposed by Rapidly Warming Temperatures
The observed rate of warming in many National Park Service (NPS) lands in Alaska has accelerated soil subsidence and increased landslide frequency, thereby threatening public access, subsistence activities, and infrastructure in NPS regions. Areas most affected by this change are along the Denali Park Road, the proposed Ambler Road through Gates of the Arctic National Park and Preserve, and the McC...

#### Assessing the Climate Vulnerability of Wild Turkeys Across the Southeastern U.S.
The wild turkey is a culturally and economically important game species that has shown dramatic population declines throughout much of the southeastern U.S. A possible explanation for these declines is that the timing of nesting has shifted to earlier in the year while hunting seasons have remained the same. Wild turkeys are the only gamebird in the contiguous United States that are hunted during the...

#### Translating Existing Model Results to Aid in Resource Management Planning for Future Precipitation Extremes in Hawai‘i and Southeast Alaska
Changing climate in the "Ridge-to-Reef" (R2R) and "Icefield-to-Ocean" (I2O) ecosystems of Hawai‘i and Southeast Alaska is expected to influence freshwater resources, extreme precipitation events, intensity of storms, and drought. Changes in these regions will not be uniform; rather, they will depend on elevation and watershed location due to their steep-gradient terrains. A better understanding of...

#### A Climate-Informed Conservation Strategy for Southern California's Montane Forests
California is a world biodiversity hotspot, and also home to hundreds of sensitive, threatened, and endangered species. One of the most vulnerable ecosystems in California is the "sky island" montane forests of southern California, forests of conifers and hardwoods located only in high-elevation mountain regions. Montane forests serve many important ecosystem functions, including protecting the up...

#### A Climate-Informed Adaptation and Post-Fire Strategy for the Southwestern Region
The Southwest is projected to face significant climate challenges in coming decades, and many of these stresses have already begun. In recent years, multiple climate assessments have been developed for the Southwest that corroborate forecasts of remarkable change to vegetation pattern and the vulnerability of regional ecosystems, and suggest that measurable change is already ongoing. Disturbance ev...
2023-04-02T02:43:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18865033984184265, "perplexity": 9744.679347718069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00070.warc.gz"}
https://par.nsf.gov/biblio/10271881-search-diboson-resonances-hadronic-final-states-fb1-pp-collisions-sqrt-tev-atlas-detector
Search for diboson resonances in hadronic final states in 139 fb$^{-1}$ of pp collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector
2022-10-02T19:13:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994649350643158, "perplexity": 2263.2127526680265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00557.warc.gz"}
https://pos.sissa.it/265/022/
Volume 265 - XXIV International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS2016) - WG1: Structure functions and parton densities. Nuclear Effects in Deuterium and Global PDF Fits. S. Alekhin, S. Kulagin, R. Petti* (*corresponding author). Full text: pdf | Published on: 2016 November 09. Abstract: We present a detailed study of nuclear corrections in the deuteron (D) from an analysis of data from charged-lepton deep-inelastic scattering (DIS) off the proton and the deuteron, as well as from dimuon pair production in pp and pD collisions and from $W^\pm$ and $Z$ boson production at pp (p$\rm \bar p$) colliders. In particular, we discuss the determination of the off-shell function describing the modification of parton distribution functions (PDFs) in bound nucleons in the context of global PDF fits. Our results are consistent with those obtained earlier from the study of the ratios of DIS structure functions $F_2^A/F_2^D$ in nuclei with $A\geq4$, confirming the universality of the off-shell function. We also discuss the sensitivity to various models of the deuteron wave function and the impact of nuclear corrections on the determination of the $d$-quark distribution. DOI: https://doi.org/10.22323/1.265.0022. Open Access. Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
2020-06-03T09:49:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5497731566429138, "perplexity": 3668.422769043742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432521.57/warc/CC-MAIN-20200603081823-20200603111823-00516.warc.gz"}
https://par.nsf.gov/biblio/10298740-biological-nitrous-oxide-consumption-oxygenated-waters-high-latitude-atlantic-ocean
Biological nitrous oxide consumption in oxygenated waters of the high latitude Atlantic Ocean. Abstract: Nitrous oxide (N$_2$O) is important to the global radiative budget of the atmosphere and contributes to the depletion of stratospheric ozone. Globally the ocean represents a large net flux of N$_2$O to the atmosphere, but the direction of this flux varies regionally. Our understanding of N$_2$O production and consumption processes in the ocean remains incomplete. Traditional understanding tells us that anaerobic denitrification, the reduction of NO$_3^-$ to N$_2$ with N$_2$O as an intermediate step, is the sole biological means of reducing N$_2$O, a process known to occur in anoxic environments only. Here we present experimental evidence of N$_2$O removal under fully oxygenated conditions, coupled with observations of bacterial communities carrying novel, atypical gene sequences for N$_2$O reduction. The focus of this work was the high latitude Atlantic Ocean, where we show bacterial consumption sufficient to account for oceanic N$_2$O depletion and the occurrence of regional sinks for atmospheric N$_2$O. NSF-PAR ID: 10298740 | Journal: Communications Earth & Environment | Volume: 2 | Issue: 1 | ISSN: 2662-4435

2. Assessment of the global budget of the greenhouse gas nitrous oxide (N$_2$O) is limited by poor knowledge of the oceanic N$_2$O flux to the atmosphere, of which the magnitude, spatial distribution, and temporal variability remain highly uncertain. Here, we reconstruct climatological N$_2$O emissions from the ocean by training a supervised learning algorithm with over 158,000 N$_2$O measurements from the surface ocean, the largest synthesis to date. The reconstruction captures observed latitudinal gradients and coastal hot spots of N$_2$O flux and reveals a vigorous global seasonal cycle. We estimate an annual mean N$_2$O flux of 4.2 ± 1.0 Tg N y$^{-1}$, 64% of which occurs in the tropics, and 20% in coastal upwelling systems that occupy less than 3% of the ocean area. This N$_2$O flux ranges from a low of 3.3 ± 1.3 Tg N y$^{-1}$ in the boreal spring to a high of 5.5 ± 2.0 Tg N y$^{-1}$ in the boreal summer. Much of the seasonal variation in global N$_2$O emissions can be traced to seasonal upwelling in the tropical ocean and winter mixing in the Southern Ocean. The dominant contribution to seasonality by productive, low-oxygen tropical upwelling systems...

4. Climate-driven depletion of ocean oxygen strongly impacts the global cycles of carbon and nutrients as well as the survival of many animal species. One of the main uncertainties in predicting changes to marine oxygen levels is the regulation of the biological respiration demand associated with the biological pump. Derived from the Redfield ratio, the molar ratio of oxygen to organic carbon consumed during respiration (i.e., the respiration quotient, $r_{-\mathrm{O_2}:\mathrm{C}}$) is consistently assumed constant but rarely, if ever, measured. Using a prognostic Earth system model, we show that a 0.1 increase in the respiration quotient from 1.0 leads to a 2.3% decline in global oxygen, a large expansion of low-oxygen zones, additional water column denitrification of 38 Tg N/y, and the loss of fixed nitrogen and carbon production in the ocean. We then present direct chemical measurements of $r_{-\mathrm{O_2}:\mathrm{C}}$ using a Pacific Ocean meridional transect crossing all major surface biome types. The observed $r_{-\mathrm{O_2}:\mathrm{C}}$ has a positive correlation with temperature, and regional mean values differ significantly from Redfield proportions. Finally, an independent global inverse model analysis constrained with nutrients, oxygen, and carbon concentrations supports a positive temperature dependence of $r_{-\mathrm{O_2}:\mathrm{C}}$ in exported organic matter. We provide evidence against the common assumption of a static...
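The second record above describes reconstructing a climatological N$_2$O flux by training a supervised-learning algorithm on roughly 158,000 surface-ocean measurements. Purely as an illustration of that kind of workflow (the paper's actual algorithm, predictor variables, file names, and column names are not given here, so everything below is an assumption), a sketch might look like:

```python
# Illustrative sketch only: algorithm choice, file name, and predictor columns are assumptions.
from io import StringIO
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

obs = pd.read_csv("surface_n2o_observations.csv")          # hypothetical compilation of measurements
predictors = ["sst", "salinity", "oxygen", "chlorophyll", "mld"]  # assumed environmental drivers

X_train, X_test, y_train, y_test = train_test_split(
    obs[predictors], obs["n2o_flux"], test_size=0.2, random_state=0
)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# A gridded monthly climatology of the same predictors could then be passed to
# model.predict(...) to map the reconstructed flux and its seasonal cycle.
```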
2022-12-04T02:07:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.500988781452179, "perplexity": 3537.68757303783}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710953.78/warc/CC-MAIN-20221204004054-20221204034054-00553.warc.gz"}
https://mooseframework.inl.gov/modules/porous_flow/tests/infiltration_and_drainage/infiltration_and_drainage_tests.html
# Infiltration and drainage test descriptions

## The 2-phase analytic infiltration solution

The physical setup studied in this section is a 1D column that is initially unsaturated, and which is subject to a constant injection of fluid from its top. This is of physical importance because it is a model of constant rainfall recharge to an initially dry groundwater system. The top surface becomes saturated, and this saturated zone moves downwards into the column, diffusing as it goes. The problem is of computational interest because under certain conditions an analytic solution is available for the saturation profile as a function of depth and time.

The Richards' equation for an incompressible fluid in one spatial dimension (the depth $z$, measured downwards) reads

$$\frac{\partial S}{\partial t} = \frac{\partial}{\partial z}\left( D(S)\,\frac{\partial S}{\partial z} \right) - \frac{\partial K(S)}{\partial z} , \qquad (1)$$

where

$$D(S) = -\frac{\kappa\, k_{\mathrm{rel}}(S)}{\mu\,\phi}\,\frac{\mathrm{d}P_c}{\mathrm{d}S} , \qquad (2)$$

and

$$K(S) = \frac{\rho g\, \kappa\, k_{\mathrm{rel}}(S)}{\mu\,\phi} . \qquad (3)$$

Here $P_c = -P$, which is the capillary pressure (the gas phase is held at zero pressure), and recall that the relative permeability $k_{\mathrm{rel}}$ and $P_c$ are functions of $S$. The analytic solution of this nonlinear diffusion-advection equation relevant to constant infiltration to groundwater has been derived by Broadbridge and White (Broadbridge and White, 1988) for certain functional forms of $D$ and $K$. Broadbridge and White assume the hydraulic conductivity is

$$K = K_n + (K_s - K_n)\,\frac{\Theta^2 (C - 1)}{C - \Theta} , \qquad (4)$$

where

$$\Theta = \frac{S - S_n}{S_s - S_n} , \qquad (5)$$

and the parameters obey $K_s > K_n \geq 0$, $1 \geq S_s > S_n \geq 0$, and $C > 1$. The diffusivity is of the form $D \propto (C - \Theta)^{-2}$. This leads to very complicated relationships between the capillary pressure, $P_c$, and the saturation, except in the case where $K_n$ is small, when they are related through a closed-form capillary function whose single extra parameter, $\lambda_s$, is the final parameter introduced by Broadbridge and White (the las input of PorousFlowCapillaryPressureBW in the input file below). Broadbridge and White derive time-dependent solutions for constant recharge to one end of a semi-infinite line. Their solutions are quite lengthy, so they will not be written here.

To compare with MOOSE, the following parameters are used; the hydraulic parameters are those used in Figure 3 of Broadbridge and White:

Table 1: Parameter values used in the 2-phase tests

| Parameter | Value |
| --- | --- |
| Bar length | 20 m |
| Bar porosity | 0.25 |
| Bar permeability | 1 m$^2$ |
| Gravity | 0.1 m.s$^{-2}$ |
| Fluid density | 10 kg.m$^{-3}$ |
| Fluid viscosity | 4 Pa.s |
| $K_n$ | 0 m.s$^{-1}$ |
| $K_s$ | 1 m.s$^{-1}$ |
| $S_n$ | 0 |
| $S_s$ | 1 |
| $C$ | 1.5 |
| $\lambda_s$ | 2 Pa |
| Recharge rate $R^*$ | 0.5 |

Broadbridge and White consider the case where the initial condition is $S = S_n$, but this yields $P = -\infty$, which is impossible to use in a MOOSE model. Therefore the initial condition $P = -900$ Pa is used, which avoids any underflow problems. The recharge rate of $R^* = 0.5$ corresponds in the MOOSE model to a recharge flux of 1.25 kg.m$^{-2}$.s$^{-1}$ (the recharge rate multiplied by the fluid density and the porosity). Note that $K_s = 1$ m.s$^{-1}$ for these parameters, so that $K_n$ and $K_s$ may be encoded as Kn = 0 and Ks = 1 in the relative permeability function Eq. (4) in a straightforward way.

Figure 1 shows good agreement between the analytic solution of Broadbridge and White and the MOOSE implementation. There are minor discrepancies for small values of saturation: these get smaller as the temporal and spatial resolution is increased, but never totally disappear due to the initial condition of $P = -900$ Pa.

Figure 1: Comparison of the Broadbridge and White analytical solution with the MOOSE solution for 3 times. This figure is shown in the standard format used in the Broadbridge-White paper: the constant recharge is applied to the top (where depth is zero) and gravity acts downwards in this figure.

Two tests are part of the automatic test suite (one is marked "heavy" because it is a high-resolution version).
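As a quick, purely illustrative numerical check of Eq. (4) and of the recharge-rate conversion just described (this is not part of the MOOSE test suite), the following short Python sketch evaluates the Broadbridge-White conductivity for the Table 1 parameters and reproduces the 1.25 kg.m$^{-2}$.s$^{-1}$ flux used by the PorousFlowSink boundary condition in the input file below:

```python
# Broadbridge-White hydraulic conductivity, Eqs. (4)-(5), with the Table 1
# parameters (Kn = 0, Ks = 1 m/s, Sn = 0, Ss = 1, C = 1.5).
def bw_conductivity(S, Kn=0.0, Ks=1.0, Sn=0.0, Ss=1.0, C=1.5):
    theta = (S - Sn) / (Ss - Sn)                     # effective saturation, Eq. (5)
    return Kn + (Ks - Kn) * theta**2 * (C - 1.0) / (C - theta)

for S in (0.2, 0.5, 0.8, 1.0):
    print(f"S = {S:0.1f}  K = {bw_conductivity(S):0.4f} m/s")

# Recharge conversion: R* = 0.5 multiplied by fluid density (10 kg/m^3) and
# porosity (0.25) gives the 1.25 kg.m^-2.s^-1 flux in the bw01.i PorousFlowSink.
R_star, density, porosity = 0.5, 10.0, 0.25
print("PorousFlowSink flux =", R_star * density * porosity, "kg/m^2/s")
```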
The input file:

```
[Mesh]
  type = GeneratedMesh
  dim = 2
  nx = 400
  ny = 1
  xmin = -10
  xmax = 10
  ymin = 0
  ymax = 0.05
[]

[GlobalParams]
  PorousFlowDictator = dictator
[]

[Functions]
  [./dts]
    type = PiecewiseLinear
    y = '1E-5 1E-2 1E-2 1E-1'
    x = '0 1E-5 1 10'
  [../]
[]

[UserObjects]
  [./dictator]
    type = PorousFlowDictator
    porous_flow_vars = pressure
    number_fluid_phases = 1
    number_fluid_components = 1
  [../]
  [./pc]
    type = PorousFlowCapillaryPressureBW
    Sn = 0.0
    Ss = 1.0
    C = 1.5
    las = 2
  [../]
[]

[Modules]
  [./FluidProperties]
    [./simple_fluid]
      type = SimpleFluidProperties
      bulk_modulus = 2e9
      viscosity = 4
      density0 = 10
      thermal_expansion = 0
    [../]
  [../]
[]

[Materials]
  [./massfrac]
    type = PorousFlowMassFraction
  [../]
  [./temperature]
    type = PorousFlowTemperature
  [../]
  [./simple_fluid]
    type = PorousFlowSingleComponentFluid
    fp = simple_fluid
    phase = 0
  [../]
  [./ppss]
    type = PorousFlow1PhaseP
    porepressure = pressure
    capillary_pressure = pc
  [../]
  [./relperm]
    type = PorousFlowRelativePermeabilityBW
    Sn = 0.0
    Ss = 1.0
    Kn = 0
    Ks = 1
    C = 1.5
    phase = 0
  [../]
  [./porosity]
    type = PorousFlowPorosityConst
    porosity = 0.25
  [../]
  [./permeability]
    type = PorousFlowPermeabilityConst
    permeability = '1 0 0 0 1 0 0 0 1'
  [../]
[]

[Variables]
  [./pressure]
    initial_condition = -9E2
  [../]
[]

[Kernels]
  [./mass0]
    type = PorousFlowMassTimeDerivative
    fluid_component = 0
    variable = pressure
  [../]
  [./flux0]
    fluid_component = 0
    variable = pressure
    gravity = '-0.1 0 0'
  [../]
[]

[AuxVariables]
  [./SWater]
    family = MONOMIAL
    order = CONSTANT
  [../]
[]

[AuxKernels]
  [./SWater]
    type = MaterialStdVectorAux
    property = PorousFlow_saturation_qp
    index = 0
    variable = SWater
  [../]
[]

[BCs]
  [./recharge]
    type = PorousFlowSink
    variable = pressure
    boundary = right
    flux_function = -1.25 # corresponds to Rstar being 0.5 because i have to multiply by density*porosity
  [../]
[]

[Preconditioning]
  [./andy]
    type = SMP
    full = true
    petsc_options = '-snes_converged_reason -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_gmres_modifiedgramschmidt -snes_linesearch_monitor'
    petsc_options_iname = '-ksp_type -pc_type -sub_pc_type -sub_pc_factor_shift_type -pc_asm_overlap -snes_atol -snes_rtol -snes_max_it'
    petsc_options_value = 'gmres asm lu NONZERO 2 1E-10 1E-10 10000'
  [../]
[]

[VectorPostprocessors]
  [./swater]
    type = LineValueSampler
    variable = SWater
    start_point = '-10 0 0'
    end_point = '10 0 0'
    sort_by = x
    num_points = 101
    execute_on = timestep_end
  [../]
[]

[Executioner]
  type = Transient
  solve_type = Newton
  petsc_options = '-snes_converged_reason'
  end_time = 8
  [./TimeStepper]
    type = FunctionDT
    function = dts
  [../]
[]

[Outputs]
  file_base = bw01
  sync_times = '0.5 2 8'
  [./exodus]
    type = Exodus
    sync_only = true
  [../]
  [./along_line]
    type = CSV
    sync_only = true
  [../]
[]
```
(modules/porous_flow/test/tests/infiltration_and_drainage/bw01.i)
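For readers who want to inspect the computed profiles directly, the LineValueSampler above writes the sampled saturation to CSV at the sync times. A minimal, illustrative post-processing snippet is shown below; the CSV file name and column names are assumptions based on MOOSE's usual `<file_base>_<vectorpostprocessor>_<timestep>.csv` convention, so adjust them to whatever the run actually produces.

```python
# Illustrative plot of a MOOSE LineValueSampler CSV from the bw01 test.
# File name and column names ("x", "SWater") are assumed; check the output directory.
import pandas as pd
import matplotlib.pyplot as plt

profile = pd.read_csv("bw01_swater_0003.csv")   # hypothetical file for the last sync time (t = 8)
plt.plot(profile["x"], profile["SWater"], label="MOOSE, t = 8")
plt.xlabel("position along the bar (m)")
plt.ylabel("water saturation")
plt.legend()
plt.show()
```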
## The two-phase analytic drainage solution

Warrick, Lomen and Islas (Warrick et al., 1990) extended the analysis of Broadbridge and White to include the case of drainage from a medium. The setup is an initially-saturated semi-infinite column of material that drains freely from its lower end. This is simulated by placing a boundary condition of $P \approx 0$ (a fixed porepressure of $-10^{-4}$ Pa) at the lower end. To obtain their analytical solutions, Warrick, Lomen and Islas make the same assumptions as Broadbridge and White concerning the diffusivity and conductivity of the medium. Their solutions are quite lengthy, so are not written here.

A MOOSE model with parameters almost identical to those listed in Table 1 is compared with the analytical solutions. The only differences are that the "bar" length is 10000 m (to avoid any interference from the lower Dirichlet boundary condition), and the recharge rate is zero since there is no recharge. The initial condition is $P = -10^{-4}$ Pa: the choice $P = 0$ leads to poor convergence, since by construction the Broadbridge-White capillary function is only designed to simulate the unsaturated zone, and a sensible extension to $P \geq 0$ is discontinuous at $P = 0$.

Figure 2 shows good agreement between the analytic solution and the MOOSE implementation. Any minor discrepancies get smaller as the temporal and spatial resolution increase.

Figure 2: Comparison of the Warrick, Lomen and Islas analytical solution with the MOOSE solution for 3 times. This figure is shown in the standard format used in the literature: the top of the model is at the top of the figure, and gravity acts downwards in this figure, with fluid draining from the infinitely deep point.

Two tests are part of the automatic test suite (one is marked "heavy" because it is a high-resolution version).

```
[Mesh]
  type = GeneratedMesh
  dim = 2
  nx = 1000
  ny = 1
  xmin = -10000
  xmax = 0
  ymin = 0
  ymax = 0.05
[]

[GlobalParams]
  PorousFlowDictator = dictator
[]

[UserObjects]
  [./dictator]
    type = PorousFlowDictator
    porous_flow_vars = pressure
    number_fluid_phases = 1
    number_fluid_components = 1
  [../]
  [./pc]
    type = PorousFlowCapillaryPressureBW
    Sn = 0.0
    Ss = 1.0
    C = 1.5
    las = 2
  [../]
[]

[Modules]
  [./FluidProperties]
    [./simple_fluid]
      type = SimpleFluidProperties
      bulk_modulus = 2e9
      viscosity = 4
      density0 = 10
      thermal_expansion = 0
    [../]
  [../]
[]

[Materials]
  [./massfrac]
    type = PorousFlowMassFraction
  [../]
  [./temperature]
    type = PorousFlowTemperature
  [../]
  [./simple_fluid]
    type = PorousFlowSingleComponentFluid
    fp = simple_fluid
    phase = 0
  [../]
  [./ppss]
    type = PorousFlow1PhaseP
    porepressure = pressure
    capillary_pressure = pc
  [../]
  [./relperm]
    type = PorousFlowRelativePermeabilityBW
    Sn = 0.0
    Ss = 1.0
    Kn = 0
    Ks = 1
    C = 1.5
    phase = 0
  [../]
  [./porosity]
    type = PorousFlowPorosityConst
    porosity = 0.25
  [../]
  [./permeability]
    type = PorousFlowPermeabilityConst
    permeability = '1 0 0 0 1 0 0 0 1'
  [../]
[]

[Variables]
  [./pressure]
    initial_condition = -1E-4
  [../]
[]

[Kernels]
  [./mass0]
    type = PorousFlowMassTimeDerivative
    fluid_component = 0
    variable = pressure
  [../]
  [./flux0]
    fluid_component = 0
    variable = pressure
    gravity = '-0.1 0 0'
  [../]
[]

[AuxVariables]
  [./SWater]
    family = MONOMIAL
    order = CONSTANT
  [../]
[]

[AuxKernels]
  [./SWater]
    type = MaterialStdVectorAux
    property = PorousFlow_saturation_qp
    index = 0
    variable = SWater
  [../]
[]

[BCs]
  [./base]
    type = DirichletBC
    boundary = 'left'
    value = -1E-4
    variable = pressure
  [../]
[]

[Preconditioning]
  [./andy]
    type = SMP
    full = true
    petsc_options = '-snes_converged_reason -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_gmres_modifiedgramschmidt -snes_linesearch_monitor'
    petsc_options_iname = '-ksp_type -pc_type -sub_pc_type -sub_pc_factor_shift_type -pc_asm_overlap -snes_atol -snes_rtol -snes_max_it'
    petsc_options_value = 'gmres asm lu NONZERO 2 1E-10 1E-10 10000'
  [../]
[]

[VectorPostprocessors]
  [./swater]
    type = LineValueSampler
    variable = SWater
    start_point = '-5000 0 0'
    end_point = '0 0 0'
    sort_by = x
    num_points = 71
    execute_on = timestep_end
  [../]
[]

[Executioner]
  type = Transient
  solve_type = Newton
  petsc_options = '-snes_converged_reason'
  end_time = 1000
  dt = 1
[]

[Outputs]
  file_base = wli01
  sync_times = '100 500 1000'
  [./exodus]
    type = Exodus
    sync_only = true
  [../]
  [./along_line]
    type = CSV
    sync_only = true
  [../]
[]
```
(modules/porous_flow/test/tests/infiltration_and_drainage/wli01.i)

## Single-phase infiltration and drainage

Forsyth, Wu and Pruess (Forsyth et al., 1995) describe a HYDRUS simulation of an experiment involving infiltration (experiment 1) and subsequent drainage (experiment 2) in a large caisson. The simulation is effectively one dimensional, and is shown in Figure 3.

Figure 3: Two experimental setups from Forsyth, Wu and Pruess. Experiment 1 involves infiltration of water into an initially unsaturated caisson. Experiment 2 involves drainage of water from an initially saturated caisson.

The properties common to each experiment are listed in Table 2.

Table 2: Parameter values used in the single-phase infiltration and drainage tests

| Parameter | Value |
| --- | --- |
| Caisson porosity | 0.33 |
| Caisson permeability | 0.295$\times 10^{-12}$ m$^2$ |
| Gravity | 10 m.s$^{-2}$ |
| Water density at STP | 1000 kg.m$^{-3}$ |
| Water viscosity | 0.00101 Pa.s |
| Water bulk modulus | 20 MPa |
| Water residual saturation | 0.0 |
| Air residual saturation | 0.0 |
| Air pressure | 0.0 |
| van Genuchten $\alpha$ | 1.43$\times 10^{-4}$ Pa$^{-1}$ |
| van Genuchten $m$ | 0.336 |
| van Genuchten turnover | 0.99 |

In each experiment 120 finite elements are used along the length of the caisson. The modified van Genuchten relative permeability curve with a "turnover" (set at effective saturation 0.99) is employed in order to improve convergence significantly. HYDRUS also uses a modified van Genuchten curve, although I couldn't find any details on the modification.

In experiment 1, the caisson is initially at saturation 0.303 ($P = -72620.4$ Pa), and water is pumped into the top at a rate of 0.002315 kg.m$^{-2}$.s$^{-1}$. This causes a front of water to advance down the caisson. Figure 4 shows the agreement between MOOSE and the published result (this result was obtained by extracting data by hand from online graphics). In experiment 2, the caisson is initially fully saturated at $P = 0$, and the bottom is held at $P = 0$ to cause water to drain via the action of gravity. Figure 4 and Figure 5 show the agreement between MOOSE and the published result.

Figure 4: Saturation profile in the caisson after 4.16 days of infiltration. Note that the HYDRUS results are only approximate: they were extracted by hand from online graphics.

Figure 5: Saturation profiles in the caisson after drainage from an initially-saturated simulation (4 days and 100 days profiles). Note that the HYDRUS results are only approximate: they were extracted by hand from online graphics.

Experiment 1 and the first 4 simulation days of experiment 2 are marked as "heavy" in the PorousFlow test suite since the simulations take around 3 seconds to complete.
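The initial condition in the infiltration input file below can be cross-checked against the van Genuchten parameters of Table 2. Using the standard van Genuchten saturation relation (assumed here; the "turnover" modification only alters the curve very close to full saturation), a porepressure of $-72620.4$ Pa should reproduce the quoted initial saturation of 0.303. A short, illustrative Python check:

```python
# Van Genuchten effective saturation S(Pc) = (1 + (alpha*Pc)^(1/(1-m)))^(-m),
# evaluated with the Table 2 parameters. Illustrative cross-check only, not a
# MOOSE script; the "turnover" modification near full saturation is ignored.
alpha = 1.43e-4   # Pa^-1
m = 0.336
P = -72620.4      # initial porepressure in rd01.i, Pa

Pc = -P                                   # capillary pressure (suction)
n = 1.0 / (1.0 - m)
S = (1.0 + (alpha * Pc) ** n) ** (-m)
print(f"initial saturation = {S:.3f}")    # ~0.303, matching the text above
```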
The input file for Experiment 1:

```
[Mesh]
  type = GeneratedMesh
  dim = 2
  nx = 120
  ny = 1
  xmin = 0
  xmax = 6
  ymin = 0
  ymax = 0.05
[]

[GlobalParams]
  PorousFlowDictator = dictator
[]

[Functions]
  [./dts]
    type = PiecewiseLinear
    y = '1E-2 1 10 500 5000 5000'
    x = '0 10 100 1000 10000 100000'
  [../]
[]

[UserObjects]
  [./dictator]
    type = PorousFlowDictator
    porous_flow_vars = pressure
    number_fluid_phases = 1
    number_fluid_components = 1
  [../]
  [./pc]
    type = PorousFlowCapillaryPressureVG
    m = 0.336
    alpha = 1.43e-4
  [../]
[]

[Modules]
  [./FluidProperties]
    [./simple_fluid]
      type = SimpleFluidProperties
      bulk_modulus = 2e7
      viscosity = 1.01e-3
      density0 = 1000
      thermal_expansion = 0
    [../]
  [../]
[]

[Materials]
  [./massfrac]
    type = PorousFlowMassFraction
  [../]
  [./temperature]
    type = PorousFlowTemperature
  [../]
  [./simple_fluid]
    type = PorousFlowSingleComponentFluid
    fp = simple_fluid
    phase = 0
  [../]
  [./ppss]
    type = PorousFlow1PhaseP
    porepressure = pressure
    capillary_pressure = pc
  [../]
  [./relperm]
    type = PorousFlowRelativePermeabilityVG
    m = 0.336
    seff_turnover = 0.99
    phase = 0
  [../]
  [./porosity]
    type = PorousFlowPorosityConst
    porosity = 0.33
  [../]
  [./permeability]
    type = PorousFlowPermeabilityConst
    permeability = '0.295E-12 0 0 0 0.295E-12 0 0 0 0.295E-12'
  [../]
[]

[Variables]
  [./pressure]
    initial_condition = -72620.4
  [../]
[]

[Kernels]
  [./mass0]
    type = PorousFlowMassTimeDerivative
    fluid_component = 0
    variable = pressure
  [../]
  [./flux0]
    fluid_component = 0
    variable = pressure
    gravity = '-10 0 0'
  [../]
[]

[AuxVariables]
  [./SWater]
    family = MONOMIAL
    order = CONSTANT
  [../]
[]

[AuxKernels]
  [./SWater]
    type = MaterialStdVectorAux
    property = PorousFlow_saturation_qp
    index = 0
    variable = SWater
  [../]
[]

[BCs]
  [./base]
    type = PorousFlowSink
    boundary = right
    flux_function = -2.315E-3
    variable = pressure
  [../]
[]

[Preconditioning]
  [./andy]
    type = SMP
    full = true
    petsc_options = '-snes_converged_reason -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_gmres_modifiedgramschmidt -snes_linesearch_monitor'
    petsc_options_iname = '-ksp_type -pc_type -sub_pc_type -sub_pc_factor_shift_type -pc_asm_overlap -snes_atol -snes_rtol -snes_max_it'
    petsc_options_value = 'gmres asm lu NONZERO 2 1E-10 1E-10 10'
  [../]
[]

[VectorPostprocessors]
  [./swater]
    type = LineValueSampler
    variable = SWater
    start_point = '0 0 0'
    end_point = '6 0 0'
    sort_by = x
    num_points = 121
    execute_on = timestep_end
  [../]
[]

[Executioner]
  type = Transient
  solve_type = Newton
  petsc_options = '-snes_converged_reason'
  end_time = 359424
  [./TimeStepper]
    type = FunctionDT
    function = dts
  [../]
[]

[Outputs]
  file_base = rd01
  [./exodus]
    type = Exodus
    execute_on = final
  [../]
  [./along_line]
    type = CSV
    execute_on = final
  [../]
[]
```
(modules/porous_flow/test/tests/infiltration_and_drainage/rd01.i)

The input file for the first 4 simulation days of Experiment 2:

```
[Mesh]
  type = GeneratedMesh
  dim = 2
  nx = 120
  ny = 1
  xmin = 0
  xmax = 6
  ymin = 0
  ymax = 0.05
[]

[GlobalParams]
  PorousFlowDictator = dictator
[]

[Functions]
  [./dts]
    type = PiecewiseLinear
    y = '1E-2 1 10 500 5000 50000'
    x = '0 10 100 1000 10000 500000'
  [../]
[]

[UserObjects]
  [./dictator]
    type = PorousFlowDictator
    porous_flow_vars = pressure
    number_fluid_phases = 1
    number_fluid_components = 1
  [../]
  [./pc]
    type = PorousFlowCapillaryPressureVG
    m = 0.336
    alpha = 1.43e-4
  [../]
[]

[Modules]
  [./FluidProperties]
    [./simple_fluid]
      type = SimpleFluidProperties
      bulk_modulus = 2e7
      viscosity = 1.01e-3
      density0 = 1000
      thermal_expansion = 0
    [../]
  [../]
[]

[Materials]
  [./massfrac]
    type = PorousFlowMassFraction
  [../]
  [./temperature]
    type = PorousFlowTemperature
  [../]
  [./simple_fluid]
    type = PorousFlowSingleComponentFluid
    fp = simple_fluid
    phase = 0
  [../]
  [./ppss]
    type = PorousFlow1PhaseP
    porepressure = pressure
    capillary_pressure = pc
  [../]
  [./relperm]
    type = PorousFlowRelativePermeabilityVG
    m = 0.336
    seff_turnover = 0.99
    phase = 0
  [../]
  [./porosity]
    type = PorousFlowPorosityConst
    porosity = 0.33
  [../]
  [./permeability]
    type = PorousFlowPermeabilityConst
    permeability = '0.295E-12 0 0 0 0.295E-12 0 0 0 0.295E-12'
  [../]
[]

[Variables]
  [./pressure]
    initial_condition = 0.0
  [../]
[]

[Kernels]
  [./mass0]
    type = PorousFlowMassTimeDerivative
    fluid_component = 0
    variable = pressure
  [../]
  [./flux0]
    fluid_component = 0
    variable = pressure
    gravity = '-10 0 0'
  [../]
[]

[AuxVariables]
  [./SWater]
    family = MONOMIAL
    order = CONSTANT
  [../]
[]

[AuxKernels]
  [./SWater]
    type = MaterialStdVectorAux
    property = PorousFlow_saturation_qp
    index = 0
    variable = SWater
  [../]
[]

[BCs]
  [./base]
    type = DirichletBC
    boundary = left
    value = 0.0
    variable = pressure
  [../]
[]

[Preconditioning]
  [./andy]
    type = SMP
    full = true
    petsc_options = '-snes_converged_reason -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_gmres_modifiedgramschmidt -snes_linesearch_monitor'
    petsc_options_iname = '-ksp_type -pc_type -sub_pc_type -sub_pc_factor_shift_type -pc_asm_overlap -snes_atol -snes_rtol -snes_max_it'
    petsc_options_value = 'gmres asm lu NONZERO 2 1E-10 1E-10 10'
  [../]
[]

[VectorPostprocessors]
  [./swater]
    type = LineValueSampler
    variable = SWater
    start_point = '0 0 0'
    end_point = '6 0 0'
    sort_by = x
    num_points = 121
    execute_on = timestep_end
  [../]
[]

[Executioner]
  type = Transient
  solve_type = Newton
  petsc_options = '-snes_converged_reason'
  end_time = 345600
  [./TimeStepper]
    type = FunctionDT
    function = dts
  [../]
[]

[Outputs]
  file_base = rd02
  [./exodus]
    type = Exodus
    execute_on = final
  [../]
  [./along_line]
    type = CSV
    execute_on = final
  [../]
[]
```
(modules/porous_flow/test/tests/infiltration_and_drainage/rd02.i)

The input file for the latter 96 days of Experiment 2:

```
[Mesh]
  file = gold/rd02.e
[]

[GlobalParams]
  PorousFlowDictator = dictator
[]

[Functions]
  [./dts]
    type = PiecewiseLinear
    y = '2E4 1E6'
    x = '0 1E6'
  [../]
[]

[UserObjects]
  [./dictator]
    type = PorousFlowDictator
    porous_flow_vars = pressure
    number_fluid_phases = 1
    number_fluid_components = 1
  [../]
  [./pc]
    type = PorousFlowCapillaryPressureVG
    m = 0.336
    alpha = 1.43e-4
  [../]
[]

[Modules]
  [./FluidProperties]
    [./simple_fluid]
      type = SimpleFluidProperties
      bulk_modulus = 2e7
      viscosity = 1.01e-3
      density0 = 1000
      thermal_expansion = 0
    [../]
  [../]
[]

[Materials]
  [./massfrac]
    type = PorousFlowMassFraction
  [../]
  [./temperature]
    type = PorousFlowTemperature
  [../]
  [./simple_fluid]
    type = PorousFlowSingleComponentFluid
    fp = simple_fluid
    phase = 0
  [../]
  [./ppss]
    type = PorousFlow1PhaseP
    porepressure = pressure
    capillary_pressure = pc
  [../]
  [./relperm]
    type = PorousFlowRelativePermeabilityVG
    m = 0.336
    seff_turnover = 0.99
    phase = 0
  [../]
  [./porosity]
    type = PorousFlowPorosityConst
    porosity = 0.33
  [../]
  [./permeability]
    type = PorousFlowPermeabilityConst
    permeability = '0.295E-12 0 0 0 0.295E-12 0 0 0 0.295E-12'
  [../]
[]

[Variables]
  [./pressure]
    initial_from_file_timestep = 1
    initial_from_file_var = pressure
  [../]
[]

[Kernels]
  [./mass0]
    type = PorousFlowMassTimeDerivative
    fluid_component = 0
    variable = pressure
  [../]
  [./flux0]
    fluid_component = 0
    variable = pressure
    gravity = '-10 0 0'
  [../]
[]

[AuxVariables]
  [./SWater]
    family = MONOMIAL
    order = CONSTANT
  [../]
[]

[AuxKernels]
  [./SWater]
    type = MaterialStdVectorAux
    property = PorousFlow_saturation_qp
    index = 0
    variable = SWater
  [../]
[]

[BCs]
  [./base]
    type = DirichletBC
    boundary = left
    value = 0.0
    variable = pressure
  [../]
[]

[Preconditioning]
  [./andy]
    type = SMP
    full = true
    petsc_options = '-snes_converged_reason -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_gmres_modifiedgramschmidt -snes_linesearch_monitor'
    petsc_options_iname = '-ksp_type -pc_type -sub_pc_type -sub_pc_factor_shift_type -pc_asm_overlap -snes_atol -snes_rtol -snes_max_it'
    petsc_options_value = 'gmres asm lu NONZERO 2 1E-10 1E-10 10'
  [../]
[]

[VectorPostprocessors]
  [./swater]
    type = LineValueSampler
    variable = SWater
    start_point = '0 0 0'
    end_point = '6 0 0'
    sort_by = x
    num_points = 121
    execute_on = timestep_end
  [../]
[]

[Executioner]
  type = Transient
  solve_type = Newton
  petsc_options = '-snes_converged_reason'
  end_time = 8.2944E6
  [./TimeStepper]
    type = FunctionDT
    function = dts
  [../]
[]

[Outputs]
  file_base = rd03
  [./exodus]
    type = Exodus
    execute_on = 'initial final'
  [../]
  [./along_line]
    type = CSV
    execute_on = final
  [../]
[]
```
(modules/porous_flow/test/tests/infiltration_and_drainage/rd03.i)

## Water infiltration into a two-phase (oil-water) system

An analytic solution of the two-phase Richards' equations with gravity on a semi-infinite line, with a constant water infiltration flux at one end, has been derived by Rogers, Stallybrass and Clements (Rogers et al., 1983). (Unfortunately there must be a typo in the RSC paper, as for nonzero gravity their results are clearly incorrect.) The authors assume incompressible fluids; linear relative permeability relationships; an "oil" (or "gas") viscosity larger than the water viscosity; and a certain functional form for the capillary pressure. When the oil viscosity is exactly twice the water viscosity, their effective water saturation reads

$$S_{\mathrm{eff}} = \frac{1}{\sqrt{1 + \exp\!\left((P_c - P_s)/P_d\right)}} , \qquad (7)$$

where $P_c$ is the capillary pressure, and $P_s$ (a shift) and $P_d$ (a scale) are arbitrary parameters to be defined by the user in the PorousFlow implementation. For other oil/water viscosity ratios the relationship is more complicated, but only the particular form Eq. (7) need be used to validate the MOOSE implementation. RSC's solutions are quite lengthy, so I will not write them here.
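Assuming the form of Eq. (7) above, the initial saturations quoted in Table 3 below follow directly from the initial porepressures. A short, purely illustrative check (not a MOOSE script) using the capillary shift (10 Pa) and scale (1 Pa) listed there:

```python
import math

# RSC effective water saturation, Eq. (7): S = 1/sqrt(1 + exp((Pc - shift)/scale)).
def rsc_water_saturation(Pc, shift=10.0, scale=1.0):
    return 1.0 / math.sqrt(1.0 + math.exp((Pc - shift) / scale))

Pc0 = 15.0 - 0.0          # initial oil pressure minus initial water pressure, Pa
Sw = rsc_water_saturation(Pc0)
print(f"initial water saturation = {Sw:.5f}")    # ~0.08181
print(f"initial oil saturation   = {1 - Sw:.5f}")  # ~0.91819
```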
To compare with MOOSE, the following parameters are used:

Table 3: Parameter values used in the Rogers-Stallybrass-Clements infiltration tests

| Parameter | Value |
| --- | --- |
| Bar length | 10 m |
| Bar porosity | 0.25 |
| Bar permeability | 10$^{-5}$ m$^2$ |
| Gravity | 0 m.s$^{-2}$ |
| Water density | 10 kg.m$^{-3}$ |
| Water viscosity | 10$^{-3}$ Pa.s |
| Oil density | 20 kg.m$^{-3}$ |
| Oil viscosity | 2$\times 10^{-3}$ Pa.s |
| Capillary shift $P_s$ | 10 Pa |
| Capillary scale $P_d$ | 1 Pa |
| Initial water pressure | 0 Pa |
| Initial oil pressure | 15 Pa |
| Initial water saturation | 0.08181 |
| Initial oil saturation | 0.91819 |
| Water injection rate | 1 kg.s$^{-1}$.m$^{-2}$ |

The input file:

```
# RSC test with high-res time and spatial resolution
[Mesh]
  type = GeneratedMesh
  dim = 2
  nx = 600
  ny = 1
  xmin = 0
  xmax = 10 # x is the depth variable, called zeta in RSC
  ymin = 0
  ymax = 0.05
[]

[GlobalParams]
  PorousFlowDictator = dictator
  gravity = '0 0 0'
[]

[Functions]
  [./dts]
    type = PiecewiseLinear
    y = '3E-3 3E-2 0.05'
    x = '0 1 5'
  [../]
[]

[UserObjects]
  [./dictator]
    type = PorousFlowDictator
    porous_flow_vars = 'pwater poil'
    number_fluid_phases = 2
    number_fluid_components = 2
  [../]
  [./pc]
    type = PorousFlowCapillaryPressureRSC
    oil_viscosity = 2E-3
    scale_ratio = 2E3
    shift = 10
  [../]
[]

[Modules]
  [./FluidProperties]
    [./water]
      type = SimpleFluidProperties
      bulk_modulus = 2e9
      density0 = 10
      thermal_expansion = 0
      viscosity = 1e-3
    [../]
    [./oil]
      type = SimpleFluidProperties
      bulk_modulus = 2e9
      density0 = 20
      thermal_expansion = 0
      viscosity = 2e-3
    [../]
  [../]
[]

[Materials]
  [./temperature]
    type = PorousFlowTemperature
  [../]
  [./ppss]
    type = PorousFlow2PhasePP
    phase0_porepressure = pwater
    phase1_porepressure = poil
    capillary_pressure = pc
  [../]
  [./massfrac]
    type = PorousFlowMassFraction
    mass_fraction_vars = 'massfrac_ph0_sp0 massfrac_ph1_sp0'
  [../]
  [./water]
    type = PorousFlowSingleComponentFluid
    fp = water
    phase = 0
    compute_enthalpy = false
    compute_internal_energy = false
  [../]
  [./oil]
    type = PorousFlowSingleComponentFluid
    fp = oil
    phase = 1
    compute_enthalpy = false
    compute_internal_energy = false
  [../]
  [./relperm_water]
    type = PorousFlowRelativePermeabilityCorey
    n = 1
    phase = 0
  [../]
  [./relperm_oil]
    type = PorousFlowRelativePermeabilityCorey
    n = 1
    phase = 1
  [../]
  [./porosity]
    type = PorousFlowPorosityConst
    porosity = 0.25
  [../]
  [./permeability]
    type = PorousFlowPermeabilityConst
    permeability = '1E-5 0 0 0 1E-5 0 0 0 1E-5'
  [../]
[]

[Variables]
  [./pwater]
  [../]
  [./poil]
  [../]
[]

[ICs]
  [./water_init]
    type = ConstantIC
    variable = pwater
    value = 0
  [../]
  [./oil_init]
    type = ConstantIC
    variable = poil
    value = 15
  [../]
[]

[Kernels]
  [./mass0]
    type = PorousFlowMassTimeDerivative
    fluid_component = 0
    variable = pwater
  [../]
  [./flux0]
    fluid_component = 0
    variable = pwater
  [../]
  [./mass1]
    type = PorousFlowMassTimeDerivative
    fluid_component = 1
    variable = poil
  [../]
  [./flux1]
    fluid_component = 1
    variable = poil
  [../]
[]

[AuxVariables]
  [./SWater]
    family = MONOMIAL
    order = CONSTANT
  [../]
  [./SOil]
    family = MONOMIAL
    order = CONSTANT
  [../]
  [./massfrac_ph0_sp0]
    initial_condition = 1
  [../]
  [./massfrac_ph1_sp0]
    initial_condition = 0
  [../]
[]

[AuxKernels]
  [./SWater]
    type = MaterialStdVectorAux
    property = PorousFlow_saturation_qp
    index = 0
    variable = SWater
  [../]
  [./SOil]
    type = MaterialStdVectorAux
    property = PorousFlow_saturation_qp
    index = 1
    variable = SOil
  [../]
[]

[BCs]
  # we are pumping water into a system that has virtually incompressible fluids, hence the
  # pressures rise enormously. this adversely affects convergence because of almost-overflows
  # and precision-loss problems. The fixed things help keep pressures low and so prevent these
  # awful behaviours. the movement of the saturation front is the same regardless of the fixed things.
  active = 'recharge fixedoil fixedwater'
  [./recharge]
    type = PorousFlowSink
    variable = pwater
    boundary = 'left'
    flux_function = -1.0
  [../]
  [./fixedwater]
    type = PresetBC
    variable = pwater
    boundary = 'right'
    value = 0
  [../]
  [./fixedoil]
    type = PresetBC
    variable = poil
    boundary = 'right'
    value = 15
  [../]
[]

[Preconditioning]
  [./andy]
    type = SMP
    full = true
    petsc_options = '-snes_converged_reason -ksp_diagonal_scale -ksp_diagonal_scale_fix -ksp_gmres_modifiedgramschmidt -snes_linesearch_monitor'
    petsc_options_iname = '-ksp_type -pc_type -sub_pc_type -sub_pc_factor_shift_type -pc_asm_overlap -snes_atol -snes_rtol -snes_max_it'
    petsc_options_value = 'gmres asm lu NONZERO 2 1E-10 1E-10 10000'
  [../]
[]

[VectorPostprocessors]
  [./swater]
    type = LineValueSampler
    variable = SWater
    start_point = '0 0 0'
    end_point = '7 0 0'
    sort_by = x
    num_points = 21
    execute_on = timestep_end
  [../]
[]

[Executioner]
  type = Transient
  solve_type = Newton
  petsc_options = '-snes_converged_reason'
  end_time = 5
  [./TimeStepper]
    type = FunctionDT
    function = dts
  [../]
[]

[Outputs]
  file_base = rsc01
  [./along_line]
    type = CSV
    execute_vector_postprocessors_on = final
  [../]
  [./exodus]
    type = Exodus
    execute_on = 'initial final'
  [../]
[]
```
(modules/porous_flow/test/tests/infiltration_and_drainage/rsc01.i)

In the RSC theory water is injected into a semi-infinite domain, whereas of course the MOOSE implementation has finite extent (a length of 10 m is chosen). Because of the near incompressibility of the fluids (I choose the bulk modulus to be 2 GPa) this causes the porepressures to rise enormously, and the problem can suffer from precision-loss problems. Therefore, the porepressures are fixed at their initial values on the right-hand boundary. This does not affect the progress of the water saturation front.

Figure 6 shows good agreement between the analytic solution and the MOOSE implementation. Any minor discrepancies get smaller as the temporal and spatial resolution increase, as is suggested by the two comparisons in that figure. The "low-resolution" test has 200 elements along the bar and uses 15 time steps, and is part of the automatic test suite that is run every time the code is updated. The "high-resolution" test has 600 elements and uses 190 time steps, and is marked as "heavy".

Figure 6: Water saturation profile after 5 seconds of injection in the Rogers-Stallybrass-Clements test. The initial water saturation is 0.08181, and water is injected at the top of this figure at a constant rate. This forms a water front which displaces the oil. Black line: RSC's analytic solution. Red squares: high-resolution MOOSE simulation. Green triangles: lower resolution MOOSE simulation.

## References

1. P. Broadbridge and I. White. Constant rate rainfall infiltration: a versatile nonlinear model, 1. analytical solution. Water Resources Research, 24:145-154, 1988.
2. P. A. Forsyth, Y. S. Wu, and K. Pruess. Robust numerical methods for saturated-unsaturated flow with dry initial conditions in heterogeneous media. Water Resources Research, 18:25-38, 1995.
3. C. Rogers, M. P. Stallybrass, and D. L. Clements. On two phase filtration under gravity and with boundary infiltration: application of a Backlund transformation. Nonlinear Analysis, Theory, Methods and Applications, 7:785-799, 1983.
4. A. W. Warrick, D. O. Lomen, and A. Islas. An analytical solution to Richards' equation for a draining soil profile. Water Resources Research, 26:253-258, 1990.
2019-05-27T06:08:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5245968699455261, "perplexity": 10250.678054938202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232261326.78/warc/CC-MAIN-20190527045622-20190527071622-00105.warc.gz"}
http://leg.colorado.gov/bills/SB18-064
SB18-064 # Require 100% Renewable Energy By 2035 Concerning an update to the renewable energy standard to require that all electric utilities derive their energy from one hundred percent renewable sources by 2035. Session: 2018 Regular Session Subject: Energy Bill Summary The bill updates the renewable energy standard to require that all electric utilities, including cooperative electric associations and municipally owned utilities, derive their energy from 100% renewable sources by 2035. The bill also: • Removes recycled energy from the types of energy sources eligible for meeting the renewable energy standard; • Allows a utility to obtain energy efficiency credits equal in value to renewable energy credits based on any energy efficiency upgrades made for a low-income residential customer; • Removes multipliers used for counting certain renewable energy generated; and • Phases out the system of tradable renewable energy credits so that renewable energy generated after 2035 is not eligible for renewable energy credits. (Note: This summary applies to this bill as introduced.)
2018-09-22T21:16:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9442338347434998, "perplexity": 5054.393228112215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158691.56/warc/CC-MAIN-20180922201637-20180922222037-00559.warc.gz"}
https://par.nsf.gov/biblio/10346880-decam-local-volume-exploration-survey-data-release
This content will become publicly available on August 1, 2023. The DECam Local Volume Exploration Survey Data Release 2. Abstract: We present the second public data release (DR2) from the DECam Local Volume Exploration survey (DELVE). DELVE DR2 combines new DECam observations with archival DECam data from the Dark Energy Survey, the DECam Legacy Survey, and other DECam community programs. DELVE DR2 consists of ∼160,000 exposures that cover >21,000 deg$^2$ of the high-Galactic-latitude ($|b| > 10°$) sky in four broadband optical/near-infrared filters ($g$, $r$, $i$, $z$). DELVE DR2 provides point-source and automatic aperture photometry for ∼2.5 billion astronomical sources with a median 5$\sigma$ point-source depth of $g$ = 24.3, $r$ = 23.9, $i$ = 23.5, and $z$ = 22.8 mag. A region of ∼17,000 deg$^2$ has been imaged in all four filters, providing four-band photometric measurements for ∼618 million astronomical sources. DELVE DR2 covers more than 4 times the area of the previous DELVE data release and contains roughly 5 times as many astronomical objects. DELVE DR2 is publicly available via the NOIRLab Astro Data Lab science platform. NSF-PAR ID: 10346880 | Journal: The Astrophysical Journal Supplement Series | Volume: 261 | Issue: 2 | Page Range or eLocation-ID: 38 | ISSN: 0067-0049

4. ABSTRACT The SuperCLuster Assisted Shear Survey (SuperCLASS) is a legacy programme using the e-MERLIN interferometric array. The aim is to observe the sky at L-band (1.4 GHz) to an r.m.s. of $7\,\mu$Jy beam$^{-1}$ over an area of $\sim 1\,{\rm deg}^2$ centred on the Abell 981 supercluster. The main scientific objectives of the project are: (i) to detect the effects of weak lensing in the radio in preparation for similar measurements with the Square Kilometre Array (SKA); (ii) an extinction-free census of star formation and AGN activity out to z ∼ 1. In this paper we give an...
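Since the abstract notes that DELVE DR2 is served through the NOIRLab Astro Data Lab, a small query sketch is shown below purely for illustration. The table and column names used here (delve_dr2.objects, ra, dec, mag_auto_g, mag_auto_r) are assumptions about the DR2 schema, not documented facts from the abstract, so they should be checked against the Data Lab schema browser before use.

```python
# Illustrative sketch of pulling a small DELVE DR2 cutout through the NOIRLab
# Astro Data Lab query client (pip install astro-datalab). Table and column
# names are assumptions; adjust to the actual DR2 schema.
from io import StringIO
import pandas as pd
from dl import queryClient as qc

sql = """
SELECT ra, dec, mag_auto_g, mag_auto_r
FROM delve_dr2.objects
WHERE ra BETWEEN 150.0 AND 150.1
  AND dec BETWEEN -30.1 AND -30.0
LIMIT 1000
"""
result = qc.query(sql=sql, fmt="csv")   # public tables allow anonymous queries
catalog = pd.read_csv(StringIO(result))
print(len(catalog), "sources returned for the test region")
```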
2022-09-26T13:25:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4808293282985687, "perplexity": 8731.098728986888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00705.warc.gz"}
https://apo.ansto.gov.au/dspace/handle/10238/3926?mode=full
Please use this identifier to cite or link to this item: https://apo.ansto.gov.au/dspace/handle/10238/3926

Title: Critical magnetic transition in TbNi2Mn - magnetization and Mössbauer spectroscopy
Authors: Wang, JL; Campbell, SJ; Kennedy, SJ; Zeng, R; Dou, SX; Wu, GH
Issued: 2011-06-01 (record accessioned and made available 2011-12-07)
Citation: Wang, J.L., Campbell, S.J., Kennedy, S.J., Zeng, R., Dou, S.X., Wu, G.H. (2011). Critical magnetic transition in TbNi2Mn - magnetization and Mössbauer spectroscopy. Journal of Physics: Condensed Matter, 23(21), Art. No. 216002. doi:10.1088/0953-8984/23/21/216002
Gov doc: 3827
ISSN: 0953-8984
URI: http://dx.doi.org/10.1088/0953-8984/23/21/216002 ; http://apo.ansto.gov.au/dspace/handle/10238/3926
Publisher: IOP Publishing LTD
Language: en
Type: Journal Article
Subjects: Spectroscopy; Magnetization; Rare earths; X-ray diffraction; Inorganic compounds; Magnetic susceptibility

Abstract: The structural and magnetic properties of the TbNi2Mnx series (0.9 ≤ x ≤ 1.10) have been investigated using x-ray diffraction, field- and temperature-dependent AC magnetic susceptibility, DC magnetization (5–340 K; 0–5 T) and 57Fe Mössbauer spectroscopy (5–300 K). TbNi2Mnx crystallizes in the MgCu2-type structure (space group $Fd\bar{3}m$). The additional contributions to the magnetic energy terms from transition-metal–transition-metal interactions (T–T) and rare-earth–transition-metal interactions (R–T) in RNi2Mn compounds contribute to their increased magnetic ordering temperatures compared with RNi2 and RMn2. Both the lattice constant a and the Curie temperature TC exhibit maximal values at the x = 1 composition, indicating strong magnetostructural coupling. Analyses of the AC magnetic susceptibility and DC magnetization data of TbNi2Mn around the Curie temperature TC = 147 K confirm that the magnetic transition is second order, with critical exponents β = 0.77 ± 0.12, γ = 1.09 ± 0.07 and δ = 2.51 ± 0.06. These exponents establish that the magnetic interactions in TbNi2Mn are long range despite mixed occupancies of Tb and Mn atoms at the 8a site and vacancies. The magnetic entropy − ΔSM around TC is proportional to (μ0H/TC)^(2/3), in agreement with the critical magnetic analyses. The Mössbauer spectra above TC are fitted by two sub-spectra in agreement with refinement of the x-ray data, while below TC three sub-spectra are required to represent the three inequivalent local magnetic environments. (c) 2011 IOP Publishing LTD

Appears in Collections: Journal Articles
Files in This Item: There are no files associated with this item.
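The critical exponents quoted in the abstract can be checked for internal consistency against the Widom scaling relation δ = 1 + γ/β. The short sketch below is illustrative only; it propagates the quoted uncertainties to first order, assuming independent, symmetric errors.

```python
# Consistency check of the quoted critical exponents via the Widom relation delta = 1 + gamma/beta.
# Uncertainties are propagated to first order, assuming independent, symmetric errors.
beta, dbeta = 0.77, 0.12
gamma, dgamma = 1.09, 0.07
delta_meas, ddelta = 2.51, 0.06

delta_pred = 1.0 + gamma / beta
# first-order error propagation for the ratio gamma/beta
ddelta_pred = (gamma / beta) * ((dgamma / gamma) ** 2 + (dbeta / beta) ** 2) ** 0.5

print(f"Widom prediction: delta = {delta_pred:.2f} +/- {ddelta_pred:.2f}")  # ~2.42 +/- 0.24
print(f"Measured:         delta = {delta_meas:.2f} +/- {ddelta:.2f}")
# The two values agree within the combined uncertainties.
```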
2021-01-23T08:04:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5222380757331848, "perplexity": 11443.64112363377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703536556.58/warc/CC-MAIN-20210123063713-20210123093713-00725.warc.gz"}
https://gea.esac.esa.int/archive/documentation/GEDR3/Catalogue_consolidation/chap_cu9val/sec_cu9val_introduction/ssec_cu9val_intro_completeness.html
# 8.1.1 Completeness

• Completeness has improved from Gaia DR2, as shown by several comparisons:
• Comparison with OGLE (Section 8.4.1)
• More stars in the center of Andromeda and in M32 versus Gaia DR2 (Section 8.4)
• Checks in crowded globular clusters (Section 8.7) show that the completeness is generally higher than in Gaia DR2, but depends strongly on the density. In dense areas of globular clusters, up to 20-30% of stars with astrometry do not have $G_{\rm BP}$ and $G_{\rm RP}$ magnitudes. Open clusters are more favourable cases, and in general the percentage of stars missing $G_{\rm BP}$, $G_{\rm RP}$ is of the order of 1%-3%. Some completeness artifacts showing the effect of the scanning law are visible, but their number is reduced in comparison to Gaia DR2.
• Tiny gain in resolution due to the new criterion for duplicated sources (Section 8.2, Figure 8.1).
• The environment of bright stars is clean: no completeness problem is found around bright stars (Section 8.2.1).
• Based on star counts, the Gaia EDR3 catalogue seems to be essentially complete between $G=12$ and $G=17$ (Section 8.3). Thus, the source list for the release will be incomplete at the bright end and has an ill-defined faint magnitude limit. Fainter than $G=17$ the completeness is complex, being affected by crowding and strongly depending on celestial position (Section 8.2). In any case, comparison with the GOG simulation shows that Gaia EDR3 completeness has improved with respect to Gaia DR2 at $G=19$, although it is still not as high as expected (Section 8.3, Figure 8.10).
• The combination of the Gaia scan law coverage and the filtering on data quality done prior to the publication of Gaia EDR3 can lead to some regions of the sky with source density fluctuations that reflect the scan law pattern. In addition, gaps may exist in the source distribution. This becomes more marked if one uses subsets of different astrometric solutions (2p, 5p, 6p).
• In any case, no significant 'holes' are found in the sky (Section 8.2).
• Comparisons to WDS show a high completeness for separations above 1${}^{\prime\prime}$, but a rapid decrease at smaller separations (Section 8.4).
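A simple way to look at the completeness behaviour summarised above is to examine Gaia EDR3 star counts as a function of magnitude in a small field. The sketch below is illustrative only: it assumes the astroquery package and the gaiaedr3.gaia_source TAP table, and the field centre and 0.5 deg cone radius are arbitrary choices, not values from the text.

```python
# Sketch: star counts per G-magnitude bin in a small Gaia EDR3 cone, using astroquery.
# Assumptions: astroquery is installed; table gaiaedr3.gaia_source; field and radius are arbitrary.
from astroquery.gaia import Gaia

adql = """
SELECT FLOOR(phot_g_mean_mag) AS g_bin, COUNT(*) AS n
FROM gaiaedr3.gaia_source
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', 56.75, 24.12, 0.5))
GROUP BY FLOOR(phot_g_mean_mag)
ORDER BY g_bin
"""

job = Gaia.launch_job(adql)
table = job.get_results()
print(table)  # counts should rise smoothly toward the faint end before incompleteness sets in
```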
2022-01-19T04:34:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 9, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7046293020248413, "perplexity": 1501.0364961431462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301263.50/warc/CC-MAIN-20220119033421-20220119063421-00697.warc.gz"}
https://www.ctcms.nist.gov/potentials/atomman/tutorial/03.3._Region_selectors.html
# Introduction to atomman: Region selectors Lucas M. Hale, [email protected], Materials Science and Engineering Division, NIST. Disclaimers ## 1. Introduction It can be useful during both system construction and analysis to identify and select atoms within specific regions of space. The atomman.region submodule allows for geometric regions to be defined and have methods for identifying if the points are above/below or inside/outside the shapes. Library imports [1]: # Standard Python libraries import datetime # http://www.numpy.org/ import numpy as np # https://github.com/usnistgov/atomman import atomman as am # https://matplotlib.org/ import matplotlib.pyplot as plt %matplotlib inline # Show atomman version print('atomman version =', am.__version__) # Show date of Notebook execution print('Notebook executed on', datetime.date.today()) atomman version = 1.3.2 Notebook executed on 2020-04-15 Define function for plotting projection plots of atoms, with positions given by pos and colors by atype. [2]: def projectionplots(atoms): f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14,4)) ax1.scatter(atoms.pos[:,0], atoms.pos[:,1], marker='o', c=atoms.atype) ax1.set_xlabel('x') ax1.set_ylabel('y') ax2.scatter(atoms.pos[:,0], atoms.pos[:,2], marker='o', c=atoms.atype) ax2.set_xlabel('x') ax2.set_ylabel('z') ax3.scatter(atoms.pos[:,1], atoms.pos[:,2], marker='o', c=atoms.atype) ax3.set_xlabel('y') ax3.set_ylabel('z') [3]: def projectionplots(atoms): f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14,4)) # Separate by atype for atype in atoms.atypes: pos = atoms.pos[atoms.atype == atype] ax1.plot(pos[:,0], pos[:,1], 'o') ax2.plot(pos[:,0], pos[:,2], 'o') ax3.plot(pos[:,1], pos[:,2], 'o') ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_xlim(-40, 40) ax1.set_ylim(-40, 40) ax2.set_xlabel('x') ax2.set_ylabel('z') ax2.set_xlim(-40, 40) ax2.set_ylim(-40, 40) ax3.set_xlabel('y') ax3.set_ylabel('z') ax3.set_xlim(-40, 40) ax3.set_ylim(-40, 40) Construct a demonstration fcc system [4]: box = am.Box.cubic(a=3.6) atoms = am.Atoms(pos=[[0.0, 0.0, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]) ucell = am.System(box=box, atoms=atoms) system = ucell.supersize((-10, 10), (-10, 10), (-10, 10)) projectionplots(system.atoms) ## 2. Plane slicing The atomman.region.Plane class allows for a plane to be defined, which can then be used to slice a system. Making a planar slice consists of two steps: defining a Plane, then using it to identify all atoms above/below it. ### 2.1 Initializing a Plane A plane can be uniquely defined in space using a normal vector and a single point located anywhere in that plane. Parameters • normal (array-like object) 3D normal vector of the plane. • point (array-like object) 3D vector coordinate of any point in the plane. ### 2.2 Plane.above() and Plane.below() Slices can then be made using the plane’s above() and below() methods. These are defined to be opposite functions: any atoms not “below” are “above”. Parameters • pos (array-like object) Nx3 array of coordinates. • inclusive (bool, optional) Indicates if points in the plane are to be included. Default value is True for below() and False for above(). 
Returns • (numpy.NDArray) N array of bool values ### 2.3 Examples Use Plane and System.atoms_ix to remove all atoms with positive y values [5]: # Define a simple plane normal to the y-axis at the origin plane = am.region.Plane(normal=[0,1,0], point=[0,0,0]) # Identify all atoms below the plane isbelow = plane.below(system.atoms.pos) # Use atoms_ix to build new system with only the below atoms newsystem = system.atoms_ix[isbelow] # Make projectionplots projectionplots(newsystem.atoms) Due to atomman’s design, region selections can also be made on per-atom properties themselves allowing for easily modifying or analyzing the properties of atoms in specific regions. [6]: # Define a plane with more complicated normal and position plane = am.region.Plane(normal=[1.5, -1.4, 5.2], point=[-5.7, 1.2, 7.1]) # Change atypes of all atoms above the plane system.atoms.atype[plane.above(system.atoms.pos)] = 2 # Make projectionplots projectionplots(system.atoms) # Reset all atypes back to 1 system.atoms.atype = 1 ## 3. Volume selection Points inside/outside of volumes can also be selected based on a number of simple geometric shapes. Each shape is defined as a separate subclass of the template Shape class. ### 3.1 Shape.inside() and Shape.outside() Indicates if position(s) are inside/outside the shape. These are defined to be opposite functions: any atoms not “inside” are “outside”. Parameters • pos (array-like object) Nx3 array of coordinates. • inclusive (bool, optional) Indicates if points on the shape’s boundaries are to be included. Default value is True for inside, False for outside. Returns • (numpy.NDArray) N array of bool values. ### 3.2 Box The atomman.Box class used in defining the regions of atomic systems already provides a comprehensive representation of a generic parallelepiped. As such, the class has been extended to be a child of Shape, complete with inside/outside functions. Show that all atoms in the system are inside the box, but only a few atoms are inside the original unit cell [7]: print(f'system has {system.natoms} atoms') # count number of system's atoms inside system's box numinside = np.sum(system.box.inside(system.atoms.pos)) print(f"{numinside} atoms are inside system's box") # count number of system's atoms inside ucell's box numinside = np.sum(ucell.box.inside(system.atoms.pos)) print(f"{numinside} atoms are inside ucell's box") system has 32000 atoms 32000 atoms are inside system's box 4 atoms are inside ucell's box ### 3.3 Sphere Spherical selections can be made using the atomman.region.Sphere class. Spheres are easily defined with just a center point and a radius. Parameters • center (array-like object) The position of the sphere’s center. [8]: # Define a sphere at the origin with a radius of 30 sphere = am.region.Sphere([0,0,0], 30) # Slice to create spherical particle system newsystem = system.atoms_ix[sphere.inside(system.atoms.pos)] # Make projectionplots projectionplots(newsystem.atoms) ### 3.4 Cylinder Cylindrical selections can be made using the atomman.region.Cylinder class. Parameters • center1 (array-like object) A point on the cylinder’s axis. If endcaps is True, the point is taken as the center of one of the cylinder’s endcap planes. • center2 (array-like object) A point on the cylinder’s axis. If endcaps is True, the point is taken as the center of one of the cylinder’s endcap planes. • endcaps (bool, optional) Indicates if the cylindrical volume is taken as capped at the two ends. If False, only the radial distances from the axis will be considered. 
If True, positions are also checked to see if they are above/below the planes defined by the axis and the center points. Default value is True.

These parameters were selected to provide the most concise representation of a generic cylinder. Note that the cylinder's axis vector is related to the two center points as $axis = \frac{center2 - center1}{|center2 - center1|}$

Create a cylindrical particle using Cylinder (the radius used below is an assumed example value):

[9]:
# Define a cylinder with axis along the z axis
center1 = [0, 0, -20]
center2 = [0, 0, 20]
# Construct the Cylinder object (radius value assumed for illustration)
cylinder = am.region.Cylinder(center1, center2, radius=15)

# Slice to create cylindrical particle system
newsystem = system.atoms_ix[cylinder.inside(system.atoms.pos)]

# Make projectionplots
projectionplots(newsystem.atoms)

Change endcaps to False to ignore the plane boundaries and create a nanowire system instead

[10]:
cylinder.endcaps = False

# Slice to create cylindrical nanowire system
newsystem = system.atoms_ix[cylinder.inside(system.atoms.pos)]

# Make projectionplots
projectionplots(newsystem.atoms)
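The Plane and Shape selectors can also be combined with ordinary NumPy boolean logic, since both return boolean masks of the same length. As a small illustration (not part of the original tutorial), the sketch below cuts the spherical particle from Section 3.3 in half with a plane to make a hemisphere; it reuses the system and projectionplots objects defined earlier in this notebook, and the radius and plane are arbitrary choices.

```python
# Combine a Sphere and a Plane selection with a boolean AND to build a hemispherical particle.
# Uses only the API shown above; the sphere radius and the cutting plane are illustrative choices.
import atomman as am

sphere = am.region.Sphere([0, 0, 0], 30)             # same sphere as in Section 3.3
plane = am.region.Plane(normal=[0, 0, 1], point=[0, 0, 0])

# Keep atoms that are inside the sphere AND below the z = 0 plane
mask = sphere.inside(system.atoms.pos) & plane.below(system.atoms.pos)
hemisphere = system.atoms_ix[mask]

projectionplots(hemisphere.atoms)
```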
2020-08-07T01:41:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4436842203140259, "perplexity": 6256.813795321525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737050.56/warc/CC-MAIN-20200807000315-20200807030315-00072.warc.gz"}
https://dlmf.nist.gov/14.3
# §14.3 Definitions and Hypergeometric Representations

## §14.3(i) Interval $-1<x<1$

The following are real-valued solutions of (14.2.2) when $\mu$, $\nu\in\mathbb{R}$ and $x\in(-1,1)$.

### Ferrers Function of the First Kind

14.3.1
$\mathsf{P}^{\mu}_{\nu}\left(x\right)=\left(\frac{1+x}{1-x}\right)^{\mu/2}\mathbf{F}\left(\nu+1,-\nu;1-\mu;\tfrac{1}{2}-\tfrac{1}{2}x\right).$

### Ferrers Function of the Second Kind

14.3.2
$\mathsf{Q}^{\mu}_{\nu}\left(x\right)=\frac{\pi}{2\sin\left(\mu\pi\right)}\left(\cos\left(\mu\pi\right)\left(\frac{1+x}{1-x}\right)^{\mu/2}\mathbf{F}\left(\nu+1,-\nu;1-\mu;\tfrac{1}{2}-\tfrac{1}{2}x\right)-\frac{\Gamma\left(\nu+\mu+1\right)}{\Gamma\left(\nu-\mu+1\right)}\left(\frac{1-x}{1+x}\right)^{\mu/2}\mathbf{F}\left(\nu+1,-\nu;1+\mu;\tfrac{1}{2}-\tfrac{1}{2}x\right)\right).$

Here and elsewhere in this chapter

14.3.3
$\mathbf{F}\left(a,b;c;x\right)=\frac{1}{\Gamma\left(c\right)}F\left(a,b;c;x\right)$

is Olver’s hypergeometric function (§15.1).

$\mathsf{P}^{\mu}_{\nu}\left(x\right)$ exists for all values of $\mu$ and $\nu$. $\mathsf{Q}^{\mu}_{\nu}\left(x\right)$ is undefined when $\mu+\nu=-1,-2,-3,\dots$.

When $\mu=m=0,1,2,\dotsc$, (14.3.1) reduces to

14.3.4
$\mathsf{P}^{m}_{\nu}\left(x\right)=(-1)^{m}\frac{\Gamma\left(\nu+m+1\right)}{2^{m}\Gamma\left(\nu-m+1\right)}\left(1-x^{2}\right)^{m/2}\mathbf{F}\left(\nu+m+1,m-\nu;m+1;\tfrac{1}{2}-\tfrac{1}{2}x\right);$

equivalently,

14.3.5
$\mathsf{P}^{m}_{\nu}\left(x\right)=(-1)^{m}\frac{\Gamma\left(\nu+m+1\right)}{\Gamma\left(\nu-m+1\right)}\left(\frac{1-x}{1+x}\right)^{m/2}\mathbf{F}\left(\nu+1,-\nu;m+1;\tfrac{1}{2}-\tfrac{1}{2}x\right).$

When $\mu=m$ ($\in\mathbb{Z}$) (14.3.2) is replaced by its limiting value; see Hobson (1931, §132) for details. See also (14.3.12)–(14.3.14) for this case.

## §14.3(ii) Interval $1<x<\infty$

The following are solutions of (14.2.2) when $\mu$, $\nu\in\mathbb{R}$ and $x>1$.

### Associated Legendre Function of the First Kind

14.3.6
$P^{\mu}_{\nu}\left(x\right)=\left(\frac{x+1}{x-1}\right)^{\mu/2}\mathbf{F}\left(\nu+1,-\nu;1-\mu;\tfrac{1}{2}-\tfrac{1}{2}x\right).$

### Associated Legendre Function of the Second Kind

14.3.7
$Q^{\mu}_{\nu}\left(x\right)=e^{\mu\pi i}\frac{\pi^{1/2}\Gamma\left(\nu+\mu+1\right)\left(x^{2}-1\right)^{\mu/2}}{2^{\nu+1}x^{\nu+\mu+1}}\mathbf{F}\left(\tfrac{1}{2}\nu+\tfrac{1}{2}\mu+1,\tfrac{1}{2}\nu+\tfrac{1}{2}\mu+\tfrac{1}{2};\nu+\tfrac{3}{2};\frac{1}{x^{2}}\right),$
$\mu+\nu\neq-1,-2,-3,\dots$.

When $\mu=m=1,2,3,\dots$, (14.3.6) reduces to

14.3.8
$P^{m}_{\nu}\left(x\right)=\frac{\Gamma\left(\nu+m+1\right)}{2^{m}\Gamma\left(\nu-m+1\right)}\left(x^{2}-1\right)^{m/2}\mathbf{F}\left(\nu+m+1,m-\nu;m+1;\tfrac{1}{2}-\tfrac{1}{2}x\right).$

As standard solutions of (14.2.2) we take the pair $P^{-\mu}_{\nu}\left(x\right)$ and $\boldsymbol{Q}^{\mu}_{\nu}\left(x\right)$, where

14.3.9
$P^{-\mu}_{\nu}\left(x\right)=\left(\frac{x-1}{x+1}\right)^{\mu/2}\mathbf{F}\left(\nu+1,-\nu;\mu+1;\tfrac{1}{2}-\tfrac{1}{2}x\right),$

and

14.3.10
$\boldsymbol{Q}^{\mu}_{\nu}\left(x\right)=e^{-\mu\pi i}\frac{Q^{\mu}_{\nu}\left(x\right)}{\Gamma\left(\nu+\mu+1\right)}.$

Like $P^{\mu}_{\nu}\left(x\right)$, but unlike $Q^{\mu}_{\nu}\left(x\right)$, $\boldsymbol{Q}^{\mu}_{\nu}\left(x\right)$ is real-valued when $\nu$, $\mu\in\mathbb{R}$ and $x\in(1,\infty)$, and is defined for all values of $\nu$ and $\mu$. The notation $\boldsymbol{Q}^{\mu}_{\nu}\left(x\right)$ is due to Olver (1997b, pp. 170 and 178).

## §14.3(iii) Alternative Hypergeometric Representations

14.3.11
$\mathsf{P}^{\mu}_{\nu}\left(x\right)=\cos\left(\tfrac{1}{2}(\nu+\mu)\pi\right)w_{1}(\nu,\mu,x)+\sin\left(\tfrac{1}{2}(\nu+\mu)\pi\right)w_{2}(\nu,\mu,x),$

14.3.12
$\mathsf{Q}^{\mu}_{\nu}\left(x\right)=-\tfrac{1}{2}\pi\sin\left(\tfrac{1}{2}(\nu+\mu)\pi\right)w_{1}(\nu,\mu,x)+\tfrac{1}{2}\pi\cos\left(\tfrac{1}{2}(\nu+\mu)\pi\right)w_{2}(\nu,\mu,x),$

where

14.3.13
$w_{1}(\nu,\mu,x)=\frac{2^{\mu}\Gamma\left(\frac{1}{2}\nu+\frac{1}{2}\mu+\frac{1}{2}\right)}{\Gamma\left(\frac{1}{2}\nu-\frac{1}{2}\mu+1\right)}\left(1-x^{2}\right)^{-\mu/2}\mathbf{F}\left(-\tfrac{1}{2}\nu-\tfrac{1}{2}\mu,\tfrac{1}{2}\nu-\tfrac{1}{2}\mu+\tfrac{1}{2};\tfrac{1}{2};x^{2}\right),$

14.3.14
$w_{2}(\nu,\mu,x)=\frac{2^{\mu}\Gamma\left(\frac{1}{2}\nu+\frac{1}{2}\mu+1\right)}{\Gamma\left(\frac{1}{2}\nu-\frac{1}{2}\mu+\frac{1}{2}\right)}x\left(1-x^{2}\right)^{-\mu/2}\mathbf{F}\left(\tfrac{1}{2}-\tfrac{1}{2}\nu-\tfrac{1}{2}\mu,\tfrac{1}{2}\nu-\tfrac{1}{2}\mu+1;\tfrac{3}{2};x^{2}\right).$

14.3.15
$P^{-\mu}_{\nu}\left(x\right)=2^{-\mu}\left(x^{2}-1\right)^{\mu/2}\mathbf{F}\left(\mu-\nu,\nu+\mu+1;\mu+1;\tfrac{1}{2}-\tfrac{1}{2}x\right),$

14.3.16
$\cos\left(\nu\pi\right)P^{-\mu}_{\nu}\left(x\right)=\frac{2^{\nu}\pi^{1/2}x^{\nu-\mu}\left(x^{2}-1\right)^{\mu/2}}{\Gamma\left(\nu+\mu+1\right)}\mathbf{F}\left(\tfrac{1}{2}\mu-\tfrac{1}{2}\nu,\tfrac{1}{2}\mu-\tfrac{1}{2}\nu+\tfrac{1}{2};\tfrac{1}{2}-\nu;\frac{1}{x^{2}}\right)-\frac{\pi^{1/2}\left(x^{2}-1\right)^{\mu/2}}{2^{\nu+1}\Gamma\left(\mu-\nu\right)x^{\nu+\mu+1}}\mathbf{F}\left(\tfrac{1}{2}\nu+\tfrac{1}{2}\mu+1,\tfrac{1}{2}\nu+\tfrac{1}{2}\mu+\tfrac{1}{2};\nu+\tfrac{3}{2};\frac{1}{x^{2}}\right),$

14.3.17
$P^{-\mu}_{\nu}\left(x\right)=\frac{\pi\left(x^{2}-1\right)^{\mu/2}}{2^{\mu}}\left(\frac{\mathbf{F}\left(\frac{1}{2}\mu-\frac{1}{2}\nu,\frac{1}{2}\nu+\frac{1}{2}\mu+\frac{1}{2};\frac{1}{2};x^{2}\right)}{\Gamma\left(\frac{1}{2}\mu-\frac{1}{2}\nu+\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\nu+\frac{1}{2}\mu+1\right)}-\frac{x\mathbf{F}\left(\frac{1}{2}\mu-\frac{1}{2}\nu+\frac{1}{2},\frac{1}{2}\nu+\frac{1}{2}\mu+1;\frac{3}{2};x^{2}\right)}{\Gamma\left(\frac{1}{2}\mu-\frac{1}{2}\nu\right)\Gamma\left(\frac{1}{2}\nu+\frac{1}{2}\mu+\frac{1}{2}\right)}\right),$

14.3.18
$P^{-\mu}_{\nu}\left(x\right)=2^{-\mu}x^{\nu-\mu}\left(x^{2}-1\right)^{\mu/2}\mathbf{F}\left(\tfrac{1}{2}\mu-\tfrac{1}{2}\nu,\tfrac{1}{2}\mu-\tfrac{1}{2}\nu+\tfrac{1}{2};\mu+1;1-\frac{1}{x^{2}}\right),$

14.3.19
$\boldsymbol{Q}^{\mu}_{\nu}\left(x\right)=\frac{2^{\nu}\Gamma\left(\nu+1\right)(x+1)^{\mu/2}}{(x-1)^{(\mu/2)+\nu+1}}\mathbf{F}\left(\nu+1,\nu+\mu+1;2\nu+2;\frac{2}{1-x}\right),$

14.3.20
$\frac{2\sin\left(\mu\pi\right)}{\pi}\boldsymbol{Q}^{\mu}_{\nu}\left(x\right)=\frac{(x+1)^{\mu/2}}{\Gamma\left(\nu+\mu+1\right)(x-1)^{\mu/2}}\mathbf{F}\left(\nu+1,-\nu;1-\mu;\tfrac{1}{2}-\tfrac{1}{2}x\right)-\frac{(x-1)^{\mu/2}}{\Gamma\left(\nu-\mu+1\right)(x+1)^{\mu/2}}\mathbf{F}\left(\nu+1,-\nu;\mu+1;\tfrac{1}{2}-\tfrac{1}{2}x\right).$

For further hypergeometric representations of $P^{\mu}_{\nu}\left(x\right)$ and $Q^{\mu}_{\nu}\left(x\right)$ see Erdélyi et al. (1953a, pp. 123–139), Andrews et al. (1999, §3.1), Magnus et al. (1966, pp. 153–163), and §15.8(iii). For further hypergeometric representations of $\mathsf{Q}^{\mu}_{\nu}\left(x\right)$ see Cohl et al. (2021).

## §14.3(iv) Relations to Other Functions

In terms of the Gegenbauer function $C^{(\beta)}_{\alpha}\left(x\right)$ and the Jacobi function $\phi^{(\alpha,\beta)}_{\lambda}\left(t\right)$ (§§15.9(iii), 15.9(ii)):

14.3.21
$\mathsf{P}^{\mu}_{\nu}\left(x\right)=\frac{2^{\mu}\Gamma\left(1-2\mu\right)\Gamma\left(\nu+\mu+1\right)}{\Gamma\left(\nu-\mu+1\right)\Gamma\left(1-\mu\right)\left(1-x^{2}\right)^{\mu/2}}C^{(\frac{1}{2}-\mu)}_{\nu+\mu}\left(x\right).$

14.3.22
$P^{\mu}_{\nu}\left(x\right)=\frac{2^{\mu}\Gamma\left(1-2\mu\right)\Gamma\left(\nu+\mu+1\right)}{\Gamma\left(\nu-\mu+1\right)\Gamma\left(1-\mu\right)\left(x^{2}-1\right)^{\mu/2}}C^{(\frac{1}{2}-\mu)}_{\nu+\mu}\left(x\right).$

14.3.23
$P^{\mu}_{\nu}\left(x\right)=\frac{1}{\Gamma\left(1-\mu\right)}\left(\frac{x+1}{x-1}\right)^{\mu/2}\phi^{(-\mu,\mu)}_{-\mathrm{i}(2\nu+1)}\left(\operatorname{arcsinh}\left((\tfrac{1}{2}x-\tfrac{1}{2})^{1/2}\right)\right).$

Compare also (18.11.1). From (15.9.15) it follows that $1-2\mu=0,-1,-2,\dots$ and $\nu+\mu+1=0,-1,-2,\dots$ are removable singularities of the right-hand sides of (14.3.21) and (14.3.22).
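Because the equations above were reflowed from extracted text, a quick numerical sanity check is useful. The sketch below is illustrative only: it implements Olver's regularized hypergeometric function with mpmath and verifies that the defining representation (14.3.1) and the alternative representation (14.3.11) with (14.3.13)–(14.3.14) give the same value of the Ferrers function at one arbitrary test point (the parameter values are not from the DLMF text).

```python
# Numerical cross-check of (14.3.1) against (14.3.11)-(14.3.14) using mpmath.
# The parameter values below are arbitrary test values, not taken from the DLMF text.
from mpmath import mp, mpf, gamma, hyp2f1, cos, sin, pi

mp.dps = 30  # working precision

def olver_F(a, b, c, z):
    """Olver's regularized hypergeometric function F(a,b;c;z) = 2F1(a,b;c;z)/Gamma(c)."""
    return hyp2f1(a, b, c, z) / gamma(c)

def ferrers_P_1431(nu, mu, x):
    """Ferrers P via the defining representation (14.3.1)."""
    return ((1 + x) / (1 - x))**(mu / 2) * olver_F(nu + 1, -nu, 1 - mu, (1 - x) / 2)

def ferrers_P_14311(nu, mu, x):
    """Ferrers P via (14.3.11) with w1, w2 from (14.3.13)-(14.3.14)."""
    w1 = (2**mu * gamma(nu/2 + mu/2 + mpf(1)/2) / gamma(nu/2 - mu/2 + 1)
          * (1 - x**2)**(-mu/2) * olver_F(-nu/2 - mu/2, nu/2 - mu/2 + mpf(1)/2, mpf(1)/2, x**2))
    w2 = (2**mu * gamma(nu/2 + mu/2 + 1) / gamma(nu/2 - mu/2 + mpf(1)/2)
          * x * (1 - x**2)**(-mu/2) * olver_F(mpf(1)/2 - nu/2 - mu/2, nu/2 - mu/2 + 1, mpf(3)/2, x**2))
    return cos((nu + mu) * pi / 2) * w1 + sin((nu + mu) * pi / 2) * w2

nu, mu, x = mpf('0.7'), mpf('0.3'), mpf('0.4')
print(ferrers_P_1431(nu, mu, x))
print(ferrers_P_14311(nu, mu, x))  # should agree with the line above to working precision
```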
2022-01-29T09:41:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 285, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9856159090995789, "perplexity": 3924.8464492283597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304883.8/warc/CC-MAIN-20220129092458-20220129122458-00366.warc.gz"}
https://lar.bnl.gov/properties/spacecharge.html
## Space Charge ### Space Charge Effect in LAr When the ionized electrons drift toward the anode plane, the positive argon ions drift towards the cathode plane. The drift speed of the positive ions is much slower, about five orders of magnitude smaller than that of the ionized electrons. For a LArTPC operating on the surface, the continuous flow of cosmic muons leads to a large amount of positive ions accumulating inside the drift volume of the LArTPC. Such charge accumulation is commonly referred to as the space charge, and it leads to a distortion of the electric field lines inside the LArTPC. Space charge is also common for gaseous TPCs. Since the diffusion of electrons inside LAr is small, the ionized electrons essentially travel along the electric field lines. Therefore, space charge leads to distortion in the reconstructed image from a LArTPC. Since the space charge also changes the magnitude of the electric field, it has an impact on the energy reconstruction as well, through altering the electron-ion recombination. The impact of the space charge scales as $L^3$, with L being the drift distance, and as $E^{-1.7}$, with E being the electric field. The exponent -1.7 (instead of -2) is due to more electron-ion recombination at low electric field. A summary of the impact of space charge can be found in Ref [1]. The space charge effect has been clearly observed in the MicroBooNE experiment (Ref [2]). The calibration of the space charge effect is still an active area of research. ## References 1. M. Mooney, "The MicroBooNE Experiment and the Impact of Space Charge Effects", arXiv:1511.01563. 2. MicroBooNE Collaboration, "Study of Space Charge Effects in MicroBooNE", MICROBOONE-NOTE-1018-PUB
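The quoted $L^3$ and $E^{-1.7}$ dependence lends itself to a quick back-of-the-envelope comparison. The sketch below is illustrative only; the reference drift length and drift field are assumed example values, not numbers taken from the text above.

```python
# Relative space-charge impact ~ L^3 * E^-1.7 (scaling quoted in the text above).
# The reference drift length and field are assumed example values for illustration.
def relative_space_charge_impact(drift_length_m, field_v_per_cm,
                                 ref_length_m=2.56, ref_field_v_per_cm=273.0):
    """Return the space-charge impact relative to a reference TPC configuration."""
    return (drift_length_m / ref_length_m) ** 3 * (field_v_per_cm / ref_field_v_per_cm) ** -1.7

# Doubling the drift length at the same field -> ~8x larger impact
print(relative_space_charge_impact(2 * 2.56, 273.0))
# Raising the field by 50% at fixed drift length -> ~0.5x the impact
print(relative_space_charge_impact(2.56, 1.5 * 273.0))
```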
2021-09-23T01:52:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245527744293213, "perplexity": 1099.3838521333405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00443.warc.gz"}
http://alexeigor.wikidot.com/height
Computation of the scheduling priority

Cyclic List Scheduling

(1)
\begin{align}
T = \{T_{1}, T_{2}, \ldots, T_{n}\}, \qquad \langle i, k \rangle
\end{align}
2018-05-24T00:21:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3036021292209625, "perplexity": 3120.15757937691}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865863.76/warc/CC-MAIN-20180523235059-20180524015059-00513.warc.gz"}
https://par.nsf.gov/biblio/10142573-widely-tunable-cavity-enhanced-frequency-combs
Widely tunable cavity-enhanced frequency combs

We describe the cavity enhancement of frequency combs over a wide tuning range of 450–700 nm ($>7900\ \mathrm{cm}^{-1}$), covering nearly the entire visible spectrum. Tunable visible frequency combs from a synchronously pumped optical parametric oscillator are coupled into a four-mirror, dispersion-managed cavity with a finesse of 600–1400. An intracavity absorption path length enhancement greater than 190 is obtained over the entire tuning range, while preserving intracavity spectral bandwidths capable of supporting sub-200 fs pulse durations. These tunable cavity-enhanced frequency combs can find many applications in nonlinear optics and spectroscopy.

Authors: ; ; ; ; ; ; ;
Award ID(s):
Publication Date:
NSF-PAR ID: 10142573
Journal Name: Optics Letters
Volume: 45
Issue: 7
Page Range or eLocation-ID: Article No. 2123
ISSN: 0146-9592; OPLEDP
Publisher: Optical Society of America
Sponsoring Org: National Science Foundation

More Like this

1. Electro-optic quantum coherent interfaces map the amplitude and phase of a quantum signal directly to the phase or intensity of a probe beam. At terahertz frequencies, a fundamental challenge is not only to sense such weak signals (due to a weak coupling with a probe in the near-infrared) but also to resolve them in the time domain. Cavity confinement of both light fields can increase the interaction and achieve strong coupling. Using this approach, current realizations are limited to low microwave frequencies. Alternatively, in bulk crystals, electro-optic sampling was shown to reach quantum-level sensitivity of terahertz waves. Yet, the coupling strength was extremely weak. Here, we propose an on-chip architecture that concomitantly provides subcycle temporal resolution and an extreme sensitivity to sense terahertz intracavity fields below 20 V/m. We use guided femtosecond pulses in the near-infrared and a confinement of the terahertz wave to a volume of $V_{\mathrm{THz}}\sim 10^{-9}(\lambda_{\mathrm{THz}}/2)^{3}$ in combination with ultraperformant organic molecules ($r_{33}=170\,\mathrm{pm/V}$) and accomplish a record-high single-photon electro-optic coupling rate, 10,000 times higher than in recent reports of sensing vacuum field fluctuations in bulk media. Via homodyne detection implemented directly on chip, the interaction results in an intensity modulation of the femtosecond pulses. The single-photon cooperativity is $C_{0}=1.6\times 10^{-8}$, and the multiphoton cooperativity is $C=0.002$ at room temperature. We show $>70\,\mathrm{dB}$ dynamic range in intensity at 500 ms integration under irradiation with a weak coherent terahertz field. Similar devices could be employed in future measurements of quantum states in the terahertz at the standard quantum limit, or for entanglement of subsystems on subcycle temporal scales, such as terahertz and near-infrared quantum bits.

2. Thin-film lithium-niobate-on-insulator (LNOI) has emerged as a superior integrated-photonics platform for linear, nonlinear, and electro-optics. Here we combine quasi-phase-matching, dispersion engineering, and tight mode confinement to realize nonlinear parametric processes with both high efficiency and wide wavelength tunability. On a millimeter-long, Z-cut LNOI waveguide, we demonstrate efficient ($1900\pm 500\,\%\,\mathrm{W}^{-1}\,\mathrm{cm}^{-2}$) and highly tunable ($-1.71\,\mathrm{nm/K}$) second-harmonic generation from 1530 to 1583 nm by type-0 quasi-phase-matching. Our technique is applicable to optical harmonic generation, quantum light sources, frequency conversion, and many other photonic information processes across visible to mid-IR spectral bands.

3. The use of multispectral geostationary satellites to study aquatic ecosystems improves the temporal frequency of observations and mitigates cloud obstruction, but no operational capability presently exists for the coastal and inland waters of the United States. The Advanced Baseline Imager (ABI) on the current iteration of the Geostationary Operational Environmental Satellites, termed the $R$ Series (GOES-R), however, provides sub-hourly imagery and the opportunity to overcome this deficit and to leverage a large repository of existing GOES-R aquatic observations. The fulfillment of this opportunity is assessed herein using a spectrally simplified, two-channel aquatic algorithm consistent with ABI wave bands to estimate the diffuse attenuation coefficient for photosynthetically available radiation, $K_{d}(\mathrm{PAR})$. First, an in situ ABI dataset was synthesized using a globally representative dataset of above- and in-water radiometric data products. Values of $K_{d}(\mathrm{PAR})$ were estimated by fitting the ratio of the shortest and longest visible wave bands from the in situ ABI dataset to coincident, in situ $K_{d}(\mathrm{PAR})$ data products. The algorithm was evaluated based on an iterative cross-validation analysis in which 80% of the dataset was randomly partitioned for fitting and the remaining 20% more »

4. Materials with strong second-order ($\chi^{(2)}$) optical nonlinearity, especially lithium niobate, play a critical role in building optical parametric oscillators (OPOs). However, chip-scale integration of low-loss $\chi^{(2)}$ materials remains challenging and limits the threshold power of on-chip $\chi^{(2)}$ OPOs. Here we report an on-chip lithium niobate optical parametric oscillator at the telecom wavelengths using a quasi-phase-matched, high-quality microring resonator, whose threshold power ($\sim 30\,\mu\mathrm{W}$) is 400 times lower than that in previous $\chi^{(2)}$ integrated photonics platforms. An on-chip power conversion efficiency of 11% is obtained from pump to signal and idler fields at a pump power of 93 µW. The OPO wavelength tuning is achieved by varying the pump frequency and chip temperature. With the lowest power threshold among all on-chip OPOs demonstrated so far, as well as advantages including high conversion efficiency, flexibility in quasi-phase-matching, and device scalability, the thin-film lithium niobate OPO opens new opportunities for chip-based tunable classical and quantum light sources and provides a potential platform for realizing photonic neural networks.

5. We have studied spectra and angular distribution of emission in Fabry–Perot cavities formed by two silver mirrors separated by a layer of poly (methyl methacrylate) polymer doped with rhodamine 6G (R6G) dye in low ($20\,\mathrm{g/l}$) and high ($200\,\mathrm{g/l}$) concentrations. The frequency of emission radiated to a cavity mode was larger at large outcoupling angles, the "rainbow" effect. At the same time, the angle of the strongest emission was also determined by the cavity size: the larger the cavity, the larger the angle. The angular distribution of emission is commonly dominated by two symmetrical lobes (located at the intersection of the three-dimensional emission cone with a horizontal plane) pointing to the left and to the right of the normal to the sample. Despite the strong Stokes shift in R6G dye, the branch of the cavity dispersion curve obtained in the emission experiment is positioned above the one obtained in the reflection (extinction) experiment. Some dye molecules are poorly coupled to cavity modes. Their emission has a very broad angular distribution with the maximum at $\theta=0^{\circ}$. The signatures of strong cavity–exciton coupling were observed at high dye concentration more »
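As a quick consistency check on the main abstract above, the 450–700 nm tuning range can be converted to a wavenumber span; the short sketch below (illustrative only) reproduces the quoted >7900 cm⁻¹ figure.

```python
# Convert the quoted 450-700 nm tuning range into a wavenumber span in cm^-1.
lambda_short_nm = 450.0
lambda_long_nm = 700.0

nm_per_cm = 1e7  # 1 cm = 1e7 nm
span_cm1 = nm_per_cm / lambda_short_nm - nm_per_cm / lambda_long_nm
print(f"{span_cm1:.0f} cm^-1")  # ~7937 cm^-1, consistent with the >7900 cm^-1 quoted in the abstract
```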
2023-03-30T05:12:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 26, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4629647731781006, "perplexity": 4218.53121681583}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00429.warc.gz"}
https://pdglive.lbl.gov/ParticleGroup.action?init=0&node=MXXX020
#### STRANGE MESONS ($\mathit S$ = $\pm1$, $\mathit C$ = $\mathit B$ = $\mathit{0}$) ${{\mathit K}^{+}}$ = ${\mathit {\mathit u}}$ ${\mathit {\overline{\mathit s}}}$, ${{\mathit K}^{0}}$ = ${\mathit {\mathit d}}$ ${\mathit {\overline{\mathit s}}}$, ${{\overline{\mathit K}}^{0}}$ = ${\mathit {\overline{\mathit d}}}$ ${\mathit {\mathit s}}$, ${{\mathit K}^{-}}$ = ${\mathit {\overline{\mathit u}}}$ ${\mathit {\mathit s}}$, similarly for ${{\mathit K}^{*}}$'s Charged Kaon Mass Rare Kaon Decays Dalitz Plot Parameters for ${{\mathit K}}$ $\rightarrow$ 3 ${{\mathit \pi}}$ Decays $\mathit CPT$ Invariance Tests in Neutral Kaon Decay $\mathit CP$ Violation in ${{\mathit K}_S^0}$ $\rightarrow$ 3 ${{\mathit \pi}}$ $\mathit V_{{\mathit {\mathit u}}{\mathit {\mathit d}}}$, $\mathit V_{{\mathit {\mathit u}}{\mathit {\mathit s}}}$ the Cabibbo Angle, and CKM Unitarity $\mathit CP$ Violation in ${{\mathit K}_L^0}$ Decays $\Delta \mathit S$ = $\Delta \mathit Q$ in ${{\mathit K}^{0}}$ Decays Scalar Mesons below 1 GeV ${{\mathit K}^{*}{(892)}}$ Masses and Mass Differences
2022-08-16T15:34:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976227283477783, "perplexity": 1139.0306011452399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572408.31/warc/CC-MAIN-20220816151008-20220816181008-00303.warc.gz"}
https://bison.inl.gov/Documentation/source/actions/AddTransferAction.aspx
The AddTransferAction is the general action that creates Transfer objects when listed within the [Transfers] block of an input file.

## Input Parameters

• active (C++ type: std::vector, default: __all__): If specified, only the blocks named will be visited and made active.
• isObjectAction (C++ type: bool, default: True): Indicates that this is a MooseObjectAction.
• inactive (C++ type: std::vector): If specified, blocks matching these identifiers will be skipped.
2020-11-27T09:23:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19017089903354645, "perplexity": 11638.509340740733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191511.46/warc/CC-MAIN-20201127073750-20201127103750-00412.warc.gz"}
https://pos.sissa.it/390/175/
Volume 390 - 40th International Conference on High Energy physics (ICHEP2020) - Parallel: Neutrino Physics

The T2K ND280 Upgrade

D. Sgalaberna* On behalf of the T2K collaboration
*corresponding author
Full text: pdf
Pre-published on: January 29, 2021

Abstract In view of the J-PARC program of upgrades of the beam intensity, the T2K collaboration is preparing towards an increase of the exposure aimed at reaching sensitivity for leptonic CP violation at the 3$\sigma$ level for a significant fraction of the possible $\delta_{CP}$ values. To reach this goal, an upgrade of the T2K near detector ND280 will be installed at J-PARC in 2022, with the aim of reducing the combined statistical and systematic uncertainties to better than 4\%. We have developed an innovative concept for this neutrino detection system, comprising the Super-Fine-Grained-Detector (SuperFGD), two High Angle TPCs (HA-TPC) and six TOF planes. The SuperFGD, a highly segmented scintillator detector acting as a fully active target for the neutrino interactions, is a novel device with dimensions of approximately $1.9\times1.9\times0.6~\text{m}^3$ and a total mass of about 2 ton. It consists of about 2 million small scintillator cubes, each $1~\text{cm}^3$. Each cube is optically isolated. The signal readout from each cube is provided by wavelength shifting fibers inserted through the cubes and connected to micro-pixel avalanche photodiodes (MPPCs). The total number of channels will be $\sim$60,000. We have demonstrated that, by providing three 2D projections, this detector delivers excellent PID, timing, and tracking performance, including a $4\pi$ angular acceptance, especially important for short proton and pion tracks. The HA-TPCs will be used for 3D track reconstruction, momentum measurement and particle identification. These TPCs, with overall dimensions of $2\times2\times0.8~\text{m}^3$, will be equipped with 32 resistive Micromegas. The thin field cage (3 cm thickness, 4\% rad. length) will be realized with laminated panels of Aramid and honeycomb covered with a kapton foil with copper strips. The $34\times42~\text{cm}^2$ resistive bulk Micromegas will use a 500 kOhm/square DLC foil to spread the charge over the pad plane, each pad being approximately $1~\text{cm}^2$. The front-end cards, based on the AFTER chip, will be mounted on the back of the Micromegas and parallel to its plane. The time-of-flight (TOF) detector will allow rejection of events generated in the passive areas of the detector and will improve particle identification. The TOF will consist of 6 planes with about $5~\text{m}^2$ surface area surrounding the SuperFGD and the TPCs. Each plane will be assembled from 2.2 m long cast plastic scintillator bars, with light collected by arrays of large-area MPPCs from two ends. The time resolution at the bar centre is 150 ps. A report on the design of these detectors, their performance, the results of the test beam and the plan for the construction is provided.
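As a rough plausibility check of the SuperFGD numbers quoted above, the sketch below (illustrative only; the plastic-scintillator density of about 1.0 g/cm³ is an assumption, not stated in the abstract) recovers the "about 2 million cubes" and "about 2 ton" figures from the quoted dimensions.

```python
# Rough check of the SuperFGD cube count and mass from the dimensions quoted in the abstract.
# Density is an assumed typical plastic-scintillator value (~1.0 g/cm^3), not from the text.
dims_cm = (190, 190, 60)          # ~1.9 m x 1.9 m x 0.6 m expressed in cm
cube_volume_cm3 = 1.0             # each cube is 1 cm^3
density_g_cm3 = 1.0

n_cubes = dims_cm[0] * dims_cm[1] * dims_cm[2] / cube_volume_cm3
mass_tons = n_cubes * cube_volume_cm3 * density_g_cm3 / 1e6

print(f"{n_cubes:.2e} cubes")   # ~2.2e6, i.e. "about 2 million"
print(f"{mass_tons:.1f} t")     # ~2.2 t, i.e. "about 2 ton"
```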
2021-03-03T03:21:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5772372484207153, "perplexity": 3328.387495784086}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365186.46/warc/CC-MAIN-20210303012222-20210303042222-00412.warc.gz"}
https://desi.lbl.gov/trac/wiki/PublicPages/MayallZbandLegacy/NotesforObservers
# OBSERVING INSTRUCTIONS FOR MzLS (PROPID 2016A-0453)

Still need to incorporate the following instructions:
• Running the guiding loop -- although we typically do not guide on the Mayall for these 1-2 min exposures

These are instructions for observing for MzLS.

## VERY VERY IMPORTANT

### 1. DO NOT OVEREXPOSE MOSAIC CCDs to LIGHT! Prolonged exposure can damage and even destroy the devices. The following rules are in effect:
1. Operation on sky is restricted to between 10 deg twilights.
2. Operation is restricted to when OAs or other NOAO technical/scientific staff are present
3. The cameras and MOSAIC software should be shut down when observing / NOAO personnel are not present
4. Sky values should be kept < 40,000 adu (<30,000 adu is preferable).

### 2. PLEASE TALK TO THE OAs ABOUT SAFETY AT THE TELESCOPE
2. Do not leave the control room at night without telling the OA
3. Do not go into the dome without the OA's permission
4. Do not go exploring by yourself
5. Carry a flashlight and radio at night

## Some Preliminaries

Who is your OA for tonight? See the OA Schedule

Observing can only be done from the Mayall control room or from within the KPNO/NOAO network using a VPN. The telescope and low-level instrument control runs on [email protected]. Log on to the mayall-7 computer: To run MOSAIC3, use the MOSAIC3 Menu GUI. Or you can also log into the mosaic3 computer from mayall-7: ssh -XY observer mosaic3 and use command-line nocs commands. PLEASE BE CAREFUL ON THIS COMPUTER - IT RUNS THE CONTROLLERS AND THE INSTRUMENT!!

Our observing scripts and data reduction run on [email protected]:

These environment variables define the locations of data and code:
• HOME=/home/mzls -- This should be done for you by the operating system.
• MOS3_DATA= -- This is where our raw data files are written. Should be something like /mosaic3/data2/observer/20160202. This is a network mounted disk on the machine mosaic3.
• $MOS3_OBS=$HOME/products/mosaic3 -- This contains the observing product with code and versioned log files.
• $PS1CAT_DIR=/data1/mzls/ps1/chunks-qz-star-v2 -- Pan-STARRS1 catalogs used by the IDL MOSSTAT routine for computing astrometric offsets and photometry

These path names are set in the ~/.bashrc file, or can be set for example with export MOS3_DATA=/mosaic3/data2/observer. The $MOS3_DATA directory must be updated to point to the current night's data.

Other directories in the home directory are:
• astrometry/, legacypipe/, obsbot/, tractor/, wcslib/ -- Python code for running copilot & mosbot
• data/ -- Link to the top-level data directory, where each night is stored in subdirectories like "20151213"
• exec/ -- Cross-mount to mosaic.kpno.noao.edu:/home/observer/exec
• products/ -- Code checked out from the SDSS and the DESI svn repositories

Documentation for the IDL scripts can be printed from the IDL prompt with the DOC_LIBRARY command, for example: idl IDL> doc_library,'mosstat'

## Date convention

All dates in log files are set to the local date of the beginning of the night. For example, any data taken during the night of December 13/14, 2015 will be written as 2015-12-13. This is consistent with how the NOAO Science Archive timestamps and saves the raw data files.

## Are we there yet?

The tiles we are observing with MOSAIC3 in z-band are tracked in the file ~/products/mosaic3/obstatus/mosaic-tiles_obstatus.fits with the following cuts:
IN_DESI = 1
DEC >= 30
88 < RA < 301
PASS <= 3
That's 44,422 tiles. This will likely be limited further to the 41,188 tiles at DEC >= 34.
About 5% of the tiles have been removed from the list where there's a star brighter than V=6 within 0.35 deg of the tile centers. The tiles are broken into three passes. Each pass, pass1, pass2, and pass3, covers the basic footprint and each is offset optimally from the others. The tile are thus now fixed or defined on the sky for the duration of the survey. By definition: pass1 is the high quality, photometric coverage. To the greatest extent possible, we want to ensure pass one tiles are the best. As such, pass1 should be executed on photometric nights with good seeing (< 1.3"). The observer should not start pass1 if it is not a true photometric night or if 3-4 hour contiguous blocks are not stable and clear. The robot observing program will choose pass1 for seeing < 1.25", transparency > 90% and sky brightness not worse than 0.25 mag brighter than the fiducial. But the observer should really be setting forcepass1 based on whether its a very stable photometric night from beginning to end. Forcing the pass is described here: Use The Force pass2 is the next best pass. Seeing should be < 1.3" or the weather is photometric (but seeing worse than 1.3"). Pass2 and 3 may alternate throughout a given night and the mosbot observing robot will do this automatically based on seeing, transparency, and sky brightness. Pass2 could also be done if no pass1 tiles are available. The robot will stick with pass2 when conditions are a little worse than this. Specifically seeing < 2" and trans > 70%. Observers can force pass3 if they think the conditions are poor, even though the robot might say pass2. This is especially encouraged if conditions are variable on short timescales and the robot is moving frequently between pass2 and pass3. pass3 is the filler or worst pass executed when conditions are bad. Seeing > 1.3 and its not photometric (trans < 90%). Pass3 may be done in good conditions when no pass1 or pass2 tiles are available. Or, in more concise language: Pass 1 requires transparency > 0.9 AND seeing < 1.25 AND brightness < 0.25 brighter than nominal Pass 2 requires (transparency > 0.9 AND seeing < 2.0) OR (transparency > 0.7 AND seeing < 1.25) ## REMOTE OBSERVING • The fundamental rule of remote observing is: If there is a local observer, do not alter anything without notifying the local observer first! Even moving the mouse at the wrong time could interfere with an essential task that the local observer is performing. • Remote Observing notes ## EXAMPLE RUN THROUGH FOR A GIVEN NIGHT ### 1 Update everything Update the code, log files and most importantly the tile file: (if people have been working on the code and tile file during the day, this step may not be necessary) ssh mzls@mayall-idl cd ~/products/mosaic3 svn up cd ~/products/observing svn up cd ~/obsbot git pull ### 2 Set the paths The data path must be changed to point to the current night's data. Edit the entry for MOS3_DATA ~/.bashrc file, for example if the start of the night is 13 Dec 2015 change this to % emacs ~/.bashrc export MOS3_DATA=/mosaic3/data2/observer/20151213 % source ~/.bashrc Don't forget to source the file if you plan to continue to use this (or any open) terminal. ### 3 Create nightly plan There are 3 nightly plan files, one with a list of tiles to observe for each of the 3 passes. These files are in JSON format, and specify the pass 1, pass 2 or pass 3 tiles to be observed at each timestamp through the night. 
The selection of pass number while observing will depend upon the weather conditions as described at MayallZbandLegacy/ObservingStrategy, and will either be selected automatically or can be forced by the user. The IDL MSTRATEGY code is used to create the above plan files, as described at MayallZbandLegacy/ObservingStrategy Arjun or David may have already provided a set of plan files for the night and posted a note to the mayall-obs e-mail list saying so. These files would be checked into the "mosaic3" product here: ~/products/mosaic3/json/YYYY-MM-DD-p1.json ~/products/mosaic3/json/YYYY-MM-DD-p2.json ~/products/mosaic3/json/YYYY-MM-DD-p3.json and copied into the observing directory where they are read by mosbot here: ~/obsbot/pass1.json ~/obsbot/pass2.json ~/obsbot/pass3.json If the plan files have not been generated for the night, please do so as follows: idl IDL> mstrategy, copydir='/home/mzls/obsbot' IDL> exit Then check in the plan files to the "mosaic3" product as follows: cd ~/products/mosaic3/json svn commit -m "plan files for tonight" There are plot files in that directory too. We don't check those in, but it's informative to post the planned coverage map ~/products/mosaic3/json/<DATE>.pdf to [email protected] . ### 3b If you are Observing in the 2017B semester: The json files that are generated by the strategy code (mstrategy) will require a very large slew near the end of the night (~09:40ut = 02:40am) which will take you from the west end of the DESI North Galactic Cap footprint to its eastern end. The problem with this is that the system times out during the long slew, and this causes it to crash. Note: This has also happened on occasion in 2018. The procedure until further notice is to break each new json file that is created in the ~mzls/obsbot/ directory into two parts at the point of this large slew. (This will not be necessary if MzLS is scheduled for a first-half night, since you will be stopping observing at ~00:30am, but it will be an issue for all full nights.) This means that the observing script with which you start the night will terminate at around 2:40am. At this point, ask the telescope operator to slew to the first position on the eastern side of the NGC. Once the telescope has reached there, then restart mosbot by executing python mosbot.py pass1b.json pass2b.json pass3b.json —adjust —pass=3 —exptime=100 (where all the — are really two dashes). This will allow this system to continue without timing out during the slew. Deprecated instructions using a modified version of "nightlystrategy" is at MayallZbandLegacy/NightlyStrategyOld ### 4 Start up mosaic control software On mayall-7, double click on the MOSAIC3 icon, which brings up the MOSAIC3 Menu on the left edge of the screen. 1. Start the camera control program by pressing the yellow "Start Cameras" button. Wait for this to finish, then dismiss the screen by typing any key as instructed 2. Start the MOSAIC3 NOCS software by pressing the blue "Start MOSAIC" button. This launches a blue xterm. Move it out of the way and watch all the windows come up 3. Rearrange the desktop as needed. If, for some reason, the buttons do not work, you can start up the software on a command line as follows. 
On mayall-7, open a terminal window and:
ssh -XY observer@mosaic3
nocs start ccp (this is equivalent to the "Start Cameras" button on the MOSAIC3 Menu)
nocs start all (this is equivalent to the "Start MOSAIC" button on the MOSAIC3 Menu)
Once nocs is up and running, rearrange windows as desired, and check the status of the system by typing the following in a nocs terminal window:
nocs status all
nocs fullstatus ccp
If you want to know what these commands actually do, see here: MayallZbandLegacy/NotesforObservers/MOSAICGUI_Notes
4. If you are on mayall-7 in the new U-floor control room, please wait for the system to come up completely, and then close the DHS VNC viewer window using the small red button on the top bar of the window. Then relaunch a new VNC viewer by first clicking on the blue VNC icon in the dock, and then double clicking on the "mosaic3:1" icon in the window that comes up. This should bring up a new DHS VNC window that will respond more quickly to cursor commands.
5. Launch the CCD temperature monitor from the "MOSAIC Temps" icon (CCD and Dewar temps should be around 173 K and 90 K, respectively).
6. Launch the TCS acorn monitor from the "VDU" icon.
7. Launch the 4MAPS monitor from the "4MAPS" icon.
8. Launch the Truss temperature monitor from the "Truss C" icon.
All these icons are on the right side of the left-hand screen of the mayall-7 computer display. Many or all of these may be running already. They can stay up. Rearrange the busy desktop as needed ...
### 5 Set the PROP-ID and Project Info
On the NGUI window, press the "Set Project" button and fill out the relevant information. For example:
Principal Investigator: Arjun Dey
Actual Observers: Tristram Shandy, Bertram Wooster
Observing Assistant: Karen Butler
Proposal Identifier: 2016A-0453
Telescope System: KPNO Mayall 4m
Science Instrument: Mosaic 3
Please ensure the Proposal Identifier number is correct. Then, in one of the NOCS xterm windows, type: "nocs set project"
### 6 Take a test Zero image to ensure system is working
Check that all is well by taking a test zero exposure. To do this, in a nocs window (an xterm connected to observer@mosaic3) do:
cd ~/exec/MzLS
./ZERO1_MzLS.sh
After the image is done, in the IRAF window, check the image statistics by cd-ing to the correct directory (e.g., /data/observer/20180105 for data from Jan. 5, 2018) and typing "mscstat <filename>". All rms values should be about 4-6 ADU/pix; the exceptions are amplifiers [6] and [14], which can have an rms of ~8-10 ADU/pix. If any one amplifier shows very high noise, then execute the following commands in a nocs window:
nocs reset ccp
nocs init ccp
Then take two more zeros; the first one will be junk, but the second one should be OK.
### 7 Take dome flats and zeros
During the afternoon (after 4pm), take dome flats with the telescope pointed at the white spot. Instructions for Domeflats are here: MayallZbandLegacy/NotesforObservers/Domeflats
Go eat dinner.
### 8 Just before observing
It is best to start things up after sunset and to be ready to start observing before the 10 deg twilight mark; see the 10 deg twilight times for KPNO page. Before 10 deg twilight, you can do all the steps in 8 (i.e., this one).
1. Start the mayall-idl:1 VNC session.
• In a terminal window from mayall-7, ssh to mzls@mayall-idl and log in using the mzls password
• type: ~/bin/vnc
• Now double click the VNC gui in the mayall-7 dock (i.e., the thing with all the icons at the bottom of the bottom left terminal). This launches a dialog box asking if you want to connect to mayall-idl.kpno.noao.edu:1 .
Say "connect" and type in the mzls password. The VNC starts up. The VNC session should automatically start MUPTILES, MOSSTAT and COPILOT. Rearrange as desired. If it doesn't, open terminals in the VNC session and manually run the scripts for monitoring the observations and updating the tile file:
1. The FITS file listing which tiles have been completed should be updated throughout the night. This file is $MOS3_OBS/obstatus/mosaic-tiles_obstatus.fits. The following IDL command will monitor exposures as they are taken throughout the night, automatically updating this file. From a terminal on the mayall-idl VNC, in the mzls account, start this running:
> idl
IDL> muptiles
At the end of the night, you should check the updated tile file into the svn repository.
2. In a second terminal on the VNC, start the automated script for monitoring the data quality:
> idl
IDL> mosstat_continuous
This will run mosstat on each frame as it shows up and display the results on the screen.
3. In a third terminal on the VNC, start running copilot:
cd ~/obsbot
python copilot.py
2. Take a zero image to ensure everything is working. Once the image has been written, use the IRAF window and run mscstat <filename> to make sure the system came up OK. See step [6] above.
3. Set the focus to some approximate value based on the Truss temperature and the formula:
Focus(zd) = -8400 + (1.4 - Ttruss) x 110
where Ttruss is in deg C. If the temperature is > 18 C, the following might work better:
Focus(zd) = -10940 + (22.9 - Ttruss) x 110
For convenience, you can create these two Python functions in a python terminal:
>>> focus = lambda x: -8400 + (1.4 - x)*110
>>> focuswarm = lambda x: -10940 + (22.9 - x)*110
>>> focus(x)  # where x is the current Truss temp in deg C
(A small helper that combines these two formulas into one function is sketched at the end of this page.)
4. Ask the telescope operator to point to an MzLS coordinate to do the pointing check. Use the first tile in the json file in pass 3 (for the normal case, or the json of the forced pass if starting up non-standard). The coordinates of the first tile can be found by:
mayall-idl > less ~/obsbot/pass3.json
5. Wait patiently for 10 deg twilight.
### 9 Get Ready to Observe - start of night
No on-sky observations are permitted before 10 deg twilight.
1. At the start of the night, check the telescope pointing (see item 8.4 above; you should already be there)! For B-semester observing, use the pointing coordinate (280.0, +50.0) by doing the following:
1. Create an OBJECT script with NGUI for 5 sec with the zd filter (do not create a TEST script, because MOSSTAT will ignore it).
2. Execute it, wait for it to read out completely, and make sure that it appears in your data directory; if it does not, use the "Update Status" button on the DHS GUI.
3. Once the image is processed by copilot, the bottom panel on the copilot plot will print the pointing offsets. Give these offsets to the OA with the opposite sign. That is, if copilot shows numbers (-15.3,+18.3), or if mosstat reads "RA,Dec offsets = -15.34, 18.32", then you need to provide the OA with the pointing offsets of +15,-18 to zero the telescope coordinates.
4. If this process does not work for some reason, then you can also zero the telescope coordinates by using a bright star placed on the telescope boresight (defined as the center of the mosaic3 focal plane, i.e. in the chip gaps). As before, when taking the image of the bright star, make sure to create an OBJECT script with NGUI (not a TEST, because MOSSTAT will ignore it), zd filter, 1 - 5 sec exposure.
After centering the bright star, return to the first target field and use the above procedure to determine the pointing offsets - do not use the copilot or mosstat results based on the bright star image! Make sure you move back to the first MzLS tile position after you complete this test.
5. If mosstat or copilot fails, it could be because (a) the telescope is mis-pointed, or (b) the telescope is out of focus (set the focus approximately using the formula above and try again).
2. Focus the Telescope
1. Create a focus script using the NGUI: exposure time 5 or 10 seconds, zd filter, -75 micron focus steps, click Midpoint to *on*, 9 exposures. NOTE: If observing in the 2017B semester, please make sure you are focusing at the sky location 270,+60.
2. Run the focus script from the /home/observer/exec directory (./FOCUS.sh).
3. copilot will automatically analyze and report the best focus in the mayall-idl window.
4. Or, alternatively, analyze the focus image using the IRAF script mscstarfocus:
• edit the /data/observer/mscfoc.cl script to correct the name of the image that needs to be analyzed
• run 'cl < /data/observer/mscfoc.cl' in the iraf window in the data directory.
• mark about 10 stars around the image; to get a quick idea of whether you have covered the right focus range, mark "g" on the top star in a sequence, which will pop up a graph that you will need to type "q" to get out of; mark the top star in each remaining sequence using "m"; "q" to quit; "d" to delete bad points; "q" to quit, and then you will get the best focus value.
5. Log the Truss temperature.
6. Set the telescope focus in the Configuration Monitor: enter the value in Pedestal focus, hit return, and then hit Apply. If you forget the return, it will do nothing.
7. Focus the guiders; this way you can use the guider images to monitor focus drifts.
8. Example of a focus sequence
### 10 Observe - all night long!
#### Routine Observations
1. From the mzls@mayall-idl window, generate the top-level observing script (tonight.sh), assuming that we have three plan files named pass1.json, etc.:
cd ~/obsbot
python mosbot.py pass1.json pass2.json pass3.json --adjust --pass=3 --exptime=100
This will start the Mosbot script, which will watch the $MOS3_DATA directory for new images, analyze them, and update FUTURE exposure scripts, choosing the pass number and setting the exposure time. (An illustrative sketch of this watch-and-react pattern is given at the end of this page.) The exposure script behavior can be modified by creating or removing various files in the ~/exec/mosbot/ directory on mosaic3 (nocs xterm).
• If the conditions look marginal, and you only want to run with pass3, say, create an empty forcepass3 file in the ~/exec/mosbot/ directory:
rm ~/exec/mosbot/forcepass?
touch ~/exec/mosbot/forcepass3
• If you want to allow only pass 2, then use:
rm ~/exec/mosbot/forcepass?
touch ~/exec/mosbot/forcepass2
• If you want to allow only pass 1, then use:
rm ~/exec/mosbot/forcepass?
touch ~/exec/mosbot/forcepass1
• If you want to allow only pass 2 and pass 3, then use:
rm ~/exec/mosbot/forcepass?
touch ~/exec/mosbot/nopass1
Mosbot checks for these files in the order: forcepass 1, 2, 3, then nopass1.
• Force files are removed by mosbot.py on startup, so you will need to re-create those files if you restart the Mosbot.
• For more options with mosbot, see MayallZbandLegacy/NotesforObservers/MosBot
• Note that if you need to stop the script at some point during the night (see below), you will need to CTRL-C to stop mosbot.py and restart it when you are ready to start up again. If you forget this, you will be observing tiles that you already observed earlier in the night.
2. From the observer@mosaic3 xterm window, start taking exposures using the top-level observing script (tonight.sh):
cd ~/exec/mosbot
./tonight.sh
3. Copilot should already be running. If not, start running Copilot in the mayall-idl VNC window. Copilot keeps a beautiful running plot of observing conditions. From any mzls@mayall-idl window:
cd ~/obsbot
python copilot.py
Whenever a new image is detected in the data directory (as defined by $MOS3_DATA), an updated plot is written to ~/obsbot/recent.png.
4. Monitor focus by checking the image quality on each frame or keeping an eye on the mosstat PSF display. Keep track of the truss temperature variation and use the information to modify the focus as needed. Note that the focus may not respond quickly to changes in temperature, so monitor the images carefully before adjusting focus. The Mayall has astigmatism, so one can sometimes tell from the shape of the images which way to move the focus. A focus correction cheat sheet is provided on the wiki version of this page.
To stop and do a focus sequence:
• Create a file to tell tonight.sh to quit. On mosaic3: touch ~/exec/mosbot/quit
• Wait for the current exposure to complete (at which point the above file is automatically removed).
• CTRL-C the mosbot.py session on mayall-idl.
• Run a focus sequence (see link below for instructions).
• Re-start the observing as described above. IMPORTANT: make sure to re-run mosbot.py so that you don't repeat exposures from the beginning of the night! And ALWAYS check the creation time of the tonight.sh script before you start running it.
• Make a note in the logs of the exposure number of the image frame after or before which you made the focus change.
1. If one wants to know where on the footprint the images being taken are, in real time:
mayall-idl> cd ~/obsbot
mayall-idl> eog radec.png &
which auto-updates the display when radec.png is overwritten.
2. Keep an eye on the CCD and dewar temps (they should be around 173 K and 90 K, respectively).
3. OK - you are off and running! Congratulations!!!
If you need to stop an exposure, do this: Using the "Abort" button does not work. The best thing to do if you need to stop an exposure quickly is the following:
1. Close the shutter by pressing the "Dark" button on the MCCD GUI.
2. cd ~/exec/mosbot and type 'touch quit'
3. CTRL-C out of the python mosbot.py run.
This will count down the exposure to completion, but at least it won't crash nocs or compromise the system.
#### If you have problems …
#### Checking the Sky Brightness, Seeing and Transparency
The Copilot and mosstat programs should already be running in a VNC. If they are not, you can launch xterms, log in to the mayall-idl computer as mzls, and run these individually. From an IDL prompt, use the MOSSTAT routine to analyze the latest image on disk:
IDL> mosstat
There are keyword options that allow you to choose different exposure numbers or CCDs within that exposure.
For example, to analyze chip 'im16' of exposure number 12345, type:
IDL> mosstat, 12345, ext='im16'
The full documentation can be seen with:
IDL> doc_library,'mosstat'
To just have mosstat run continuously whenever each image appears, use:
IDL> mosstat_continuous
#### Please write useful human logs
Keep a log about weather conditions, which pass you observed, and telescope problems. Follow the example on the pages at MayallZbandLegacy/ObservingLogs . Please make a note of when you make any changes to the focus (note the time and the image exposure number before/after the focus change), any changes to weather conditions (increase in wind speed, change in direction, changes in humidity, sky brightness, cloud cover, etc.), and any pointing corrections (please note the time and magnitude, and if possible the frame number).
Please stay aware of the observing conditions. Go out and look at the sky yourself every few hours - do not rely solely on the computer monitors and telemetry. Don't worry - you will not be eaten; just carry a flashlight. Sing loudly, if that helps :)
Please record any catastrophically bad frames (such as saturated frames, or frames where the telescope moved) in the bad_expid.txt file, ~/products/mosaic3/obstatus/bad_expid.txt . This file is svn-checked-in at the end of the night.
### 11 End of night
No on-sky observations are permitted with MOSAIC3 before 10 deg twilight or after 10 deg dawn. No twilight flats are allowed. Once you exit the observing script (using "touch quit" or waiting for it to end), take a zero image. This ensures that the dark slide is put in place and the instrument is ready for shut down.
#### 1 Shut down the software
1. Ensure that the "Shutter" and "ready" are both in the "Dark" position on the MCCD gui.
2. Press the red "Stop MOSAIC" button on the MOSAIC Menu GUI. Wait for this to finish.
3. Then press the yellow "Stop Cameras" button on the MOSAIC Menu GUI.
If the buttons do not work, then go to one of the NOCS xterm windows and type:
nocs stop all (equivalent to pressing the "Stop MOSAIC" button)
Once nocs is shut down, type
nocs stop ccp (equivalent to pressing the "Stop Cameras" button)
Once this is done, type
nocs status all
nocs fullstatus ccp
and make sure everything is shut down. Only when this is done is it safe for the OAs to put on the lights in the dome.
Also, exit the window that was running mosbot by typing CTRL-C. Exit the xterm and close it.
#### 2 Close down mayall-idl VNC session
We update the .bashrc for the next night, so it is best if you close the VNC windows and exit any xterms on mosaic3 and mayall-idl.
From here to item 5, one may now execute the bash shell script bin/end_of_night to do all the bookkeeping and plot making. Jump to item 6 if you use this option. On an xterm in the mayall-idl VNC window, from any directory:
end_of_night YYYY-MM-DD &> YYYY-MM-DD-eon.log
where YYYY-MM-DD is the date at the beginning of the night. Using the error redirect to a log file allows one to check for errors. One can watch as the script is running with:
tail -f YYYY-MM-DD-eon.log
in a separate xterm.
To continue instead in manual mode, keep going: On a mayall-idl xterm, type:
~/bin/stopvnc
Exit from the window running mosbot by typing CTRL-C, and exit from that xterm.
#### 3 Check in the updated tile file and updated bad exposure file
At the end of the night, you should check the updated tile file into the svn repository.
Open a shell terminal on the mayall-7 desktop and log on to the mayall-idl computer:
ssh mzls@mayall-idl
Once you have logged on to the mayall-idl computer:
cd $MOS3_OBS/obstatus
svn commit -m "observing update of tile file and bad exposure list"
If this does not work for some reason, try:
cd $MOS3_OBS/obstatus
You can also check in the files individually:
cd ~/products/mosaic3/obstatus
svn commit -m "obstatus update" mosaic-tiles_obstatus.fits
#### 4 Create the Almanac and Almanac plot files
At the end of the night, create the Almanac files and check them into svn. Do this using a terminal window on the mayall-idl VNC (if it is still running), or by opening a terminal window on mayall-7 and logging on to mayall-idl by executing an "ssh mzls@mayall-idl" command:
- cd products/mosaic3/logs
- idl
- almanac
- plotalmanac,'Almanac_date.fits',ps='plot_Almanac_date'
- psfmovie
- exit
- svn commit *Almanac*
For example, for the night of March 26/27, 2015, this is done with:
cd products/mosaic3/logs
idl
almanac
plotalmanac,'Almanac_2015-03-26.fits',ps='plot_Almanac_2015-03-26'
exit
svn commit *Almanac_2015-03-26.*
There are actually two versions of this file, one that is an ASCII file (with .txt extension) and one that is a FITS file (with .fits extension).
#### 5 Create the Coverage and Summary files for the Night
In a mayall-idl window on the VNC (if it is still running), or by opening a terminal window on mayall-7 and logging on to mayall-idl by executing an "ssh mzls@mayall-idl" command:
- idl
- IDL> moscoverage_plot
This will produce a pdf plot of the current cumulative coverage. The plot is placed in the directory ~/products/mosaic3/obstatus/plots/ by the script. The file name has the form coverage.till_2016-MM-DD.pdf.
- python copilot.py --night
- mv night.png <date>.png (e.g., 2016-02-11.png)
- mv <date>.png ~/products/mosaic3/logs/
- cd ~/products/mosaic3/logs/
- svn commit -m ""
You can now mail these plots out to [email protected] (YYYY-MM-DD.png will suffice, as mayall-desi@ does not like large attachments) to let your collaborators know how the night went.
#### 6 Summary of night log page
Fill out the summary log page entry for your night (i.e., add a row to the table seen at https://desi.lbl.gov/trac/wiki/MayallZbandLegacy/ObservingLogs ). You can read off the first and last image numbers from ~/products/mosaic3/logs/Almanac-2016-02-11.txt, and for the total number of images taken, please subtract the bad images that you recorded.
#### 7 Send out email to the collaboration
Please send out an e-mail to [email protected] with a short report on the night. Please attach the YYYY-MM-DD.png nightly file.
#### Summary: End of night check list
• Dark slides in
• MOSAIC3 NOCS shut down
• MOSAIC3 cameras shut down
• Almanac file created
• mosaic-tiles_obstatus.fits, almanac, bad_expid.txt files committed to SVN
• Log out of all mosaic3 windows
• Log out of all mayall-idl windows
• Mayall-idl VNC shut down
• Observing logs and summary observing log table on wiki up-to-date and closed
• Summary e-mail describing night sent out to [email protected]
• Chat with the OA about the mirror cooling set temperature for the next night
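The following two code sketches are referenced from steps 8 and 10 above. Both are purely illustrative: they are not part of the installed MzLS software, and the function names, defaults and file patterns are invented for the examples.

First, a small convenience wrapper around the two truss-temperature focus formulas quoted in step 8.3, assuming the switch-over happens at the ~18 C threshold mentioned there:

```python
def approx_focus(t_truss_c):
    """Rough starting telescope focus from the truss temperature (deg C).

    Wraps the two empirical formulas quoted in step 8.3, using the 'warm'
    relation above ~18 C. Only a starting guess; always refine with a real
    focus sequence. (Helper name and threshold handling are assumptions.)
    """
    if t_truss_c > 18.0:
        return -10940 + (22.9 - t_truss_c) * 110
    return -8400 + (1.4 - t_truss_c) * 110

print(approx_focus(10.0))   # -> -9346.0
print(approx_focus(20.0))   # -> -10621.0
```

Second, the general watch-and-react pattern that mosbot and copilot implement (step 10): poll the night's data directory for newly written exposures and act on each one as it appears. This sketch uses only the Python standard library; the real image analysis and exposure-script updates live in ~/obsbot.

```python
import glob
import os
import time

def watch_for_new_exposures(data_dir=None, poll_seconds=10):
    """Illustrative watch-and-react loop (NOT the obsbot code).

    Polls the directory pointed to by MOS3_DATA for new FITS files and
    'reacts' to each one; mosbot and copilot follow this same basic
    pattern, with real analysis in place of the print().
    """
    data_dir = data_dir or os.environ.get("MOS3_DATA", ".")
    seen = set(glob.glob(os.path.join(data_dir, "*.fits*")))
    while True:
        current = set(glob.glob(os.path.join(data_dir, "*.fits*")))
        for path in sorted(current - seen):
            print("new exposure:", path)  # analyze / update plots here
        seen = current
        time.sleep(poll_seconds)
```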
2019-03-22T22:30:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3249122202396393, "perplexity": 6037.665210665317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202698.22/warc/CC-MAIN-20190322220357-20190323002357-00382.warc.gz"}
https://zbmath.org/authors/?q=ai%3Achen.bang-yen
## Chen, Bang-Yen Compute Distance To: Author ID: chen.bang-yen Published as: Chen, Bang-Yen; Chen, Bang-yen; Chen, B.-Y.; Chen, B. Y.; Chen, B.-y.; Chen, Bang Yen; Chen, Bang-Yeng External Links: MGP · Wikidata · ResearchGate · GND · IdRef Documents Indexed: 540 Publications since 1967, including 14 Books 1 Contribution as Editor Reviewing Activity: 326 Reviews Biographic References: 6 Publications Co-Authors: 76 Co-Authors with 249 Joint Publications 1,043 Co-Co-Authors all top 5 ### Co-Authors 289 single-authored 31 Dillen, Franki 29 Verstraelen, Leopold C. A. 25 Vrancken, Luc 20 Yano, Kentaro 16 Ogiue, Koichi 14 Deshmukh, Sharief 10 Morvan, Jean-Marie 10 Van der Veken, Joeri 10 Vanhecke, Lieven 9 Garay, Oscar Jesus 9 Lue, Huei-Shyong 8 Barros, Manuel 8 Houh, Chorng-Shi 8 Verheyen, Paul 7 Wei, Shihshu Walter 6 Nagano, Tadashi 6 Siraj-Uddin 5 Al-Solamy, Falleh Rijaullah 5 Deprez, Johan 5 Ludden, Gerald D. 5 Mihai, Ion 4 Blair, David E. 4 Li, Shijie 3 Alghanemi, Azeb 3 Ishikawa, Susumu 3 Kuan, Wei-Eihn 3 Maeda, Sadahiro 3 Mihai, Adela 3 Munteanu, Marian-Ioan 3 Piccinni, Paolo 3 Song, Hongzao 3 Tazawa, Yoshihiko 3 Vîlcu, Gabriel Eduard 2 Al-Jedani, Awatif 2 Alodan, Haila 2 Baikoussis, Christos 2 Fastenakels, Johan 2 Kim, Youngho 2 Nore, Thérèse 2 Petrovic, Mira 2 Turki, Nasser Bin 2 Yamaguchi, Seiichi 1 Alegre, Pablo 1 Alghamdi, Fatimah 1 Alshammari, Sana Hamoud 1 Arslan, Kadri 1 Blaga, Adara-Monica 1 Borrelli, Vincent 1 Camci, Çetin 1 Carriazo, Alfonso 1 Castro, Ildefonso 1 Chen, Pi-Mei 1 Choi, Miekyung 1 Decu, Simona 1 Defever, Filip 1 Fu, Yu 1 İlarslan, Kazim 1 Jiang, Sheng 1 Kim, Dongsoo S. 1 Kocaman, E. S. 1 Martín-Molina, Verónica 1 Montiel, Sebastián 1 Murathan, Cengizhan 1 Oh, Yun Myung 1 Okumura, Masafumi 1 Pérez Jiménez, Juan de Dios 1 Prieto-Martín, Alicia 1 Sarabia, Alfonso Sarabia 1 Shahid, Mohammed Hasan 1 Suceavă, Bogdan Dragos 1 Teng, Th. 1 Teng, Tsing-Houa 1 Uçum, Ali 1 Wang, Xianfeng 1 Wu, Baoqiang 1 Yang, Dan 1 Yang, Jie 1 Yildirim, Handan 1 Zhou, Zhengfang all top 5 ### Serials 19 Proceedings of the American Mathematical Society 17 Tamkang Journal of Mathematics 17 Soochow Journal of Mathematics 16 International Electronic Journal of Geometry 12 Bulletin of the Institute of Mathematics. Academia Sinica 12 Taiwanese Journal of Mathematics 11 Results in Mathematics 11 Kodai Mathematical Seminar Reports 10 Archiv der Mathematik 10 Comptes Rendus de l’Académie des Sciences. Série I 9 Bulletin of the Australian Mathematical Society 9 Geometriae Dedicata 9 Journal of Differential Geometry 9 Kragujevac Journal of Mathematics 8 Houston Journal of Mathematics 8 Journal of Mathematical Analysis and Applications 8 Journal of Geometry and Physics 8 Journal of Geometry 8 Kodai Mathematical Journal 8 Tohoku Mathematical Journal. Second Series 7 Journal of the Mathematical Society of Japan 7 Monatshefte für Mathematik 7 Bulletin of the American Mathematical Society 6 Beiträge zur Algebra und Geometrie 6 Glasgow Mathematical Journal 6 Journal of the London Mathematical Society. Second Series 6 Differential Geometry and its Applications 6 Atti della Accademia Nazionale dei Lincei. Serie Ottava. Rendiconti. Classe di Scienze Fisiche, Matematiche e Naturali 5 Journal of Mathematical Physics 5 Mathematical Proceedings of the Cambridge Philosophical Society 5 Indiana University Mathematics Journal 5 Nagoya Mathematical Journal 5 Proceedings of the Japan Academy. Series A 5 Publicationes Mathematicae 5 Tensor. 
New Series 5 Annals of Global Analysis and Geometry 4 Israel Journal of Mathematics 4 Annali di Matematica Pura ed Applicata. Serie Quarta 4 Chinese Journal of Mathematics 4 Japanese Journal of Mathematics. New Series 4 Michigan Mathematical Journal 4 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 4 Proceedings of the Edinburgh Mathematical Society. Series II 4 Tokyo Journal of Mathematics 4 Annales de la Faculté des Sciences de Toulouse. Série V. Mathématiques 4 Bulletin de la Société Matheḿatique de Belgique. Série B 4 Kyushu Journal of Mathematics 4 Turkish Journal of Mathematics 4 Arab Journal of Mathematical Sciences 4 Balkan Journal of Geometry and its Applications (BJGA) 3 American Journal of Mathematics 3 Duke Mathematical Journal 3 Hokkaido Mathematical Journal 3 Journal of the Korean Mathematical Society 3 Mathematische Annalen 3 Mathematical Journal of Okayama University 3 Nanta Mathematica 3 Transactions of the American Mathematical Society 3 Tsukuba Journal of Mathematics 3 Comptes Rendus Mathématiques de l’Académie des Sciences 3 Bulletin of the Korean Mathematical Society 3 Acta Mathematica Hungarica 3 International Journal of Mathematics 3 Journal of the Australian Mathematical Society. Series A 3 Algebras, Groups and Geometries 3 Bollettino della Unione Matematica Italiana. Series V. A 3 Soochow Journal of Mathematics and Natural Sciences 2 Rocky Mountain Journal of Mathematics 2 Annales Polonici Mathematici 2 Colloquium Mathematicum 2 International Journal of Mathematics and Mathematical Sciences 2 Osaka Journal of Mathematics 2 The Quarterly Journal of Mathematics. Oxford Second Series 2 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 2 Bulletin of the Belgian Mathematical Society - Simon Stevin 2 Rendiconti del Seminario Matematico di Messina. Serie II 2 Serdica Mathematical Journal 2 Acta Mathematica Sinica. English Series 2 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 2 Central European Journal of Mathematics 2 International Journal of Geometric Methods in Modern Physics 2 Mediterranean Journal of Mathematics 2 Journal of Geometry and Symmetry in Physics 2 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Série A 2 Series in Pure Mathematics 2 Journal of Advanced Mathematical Studies 2 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 2 Bulletin of the Transilvania University of Brașov. Series III. Mathematics, Informatics, Physics 1 American Mathematical Monthly 1 Classical and Quantum Gravity 1 Computer Methods in Applied Mechanics and Engineering 1 General Relativity and Gravitation 1 Applied Mathematics and Computation 1 Archivum Mathematicum 1 Atti della Accademia Peloritana dei Pericolanti. 
Classe di Scienze Fisiche, Matemàtiche e Naturali 1 Canadian Journal of Mathematics 1 The Formosan Science 1 Journal für die Reine und Angewandte Mathematik 1 Kyungpook Mathematical Journal 1 Mathematische Nachrichten ...and 19 more Serials all top 5 ### Fields 519 Differential geometry (53-XX) 36 Global analysis, analysis on manifolds (58-XX) 20 Several complex variables and analytic spaces (32-XX) 15 Algebraic geometry (14-XX) 13 Relativity and gravitational theory (83-XX) 11 Manifolds and cell complexes (57-XX) 11 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 9 Partial differential equations (35-XX) 6 Calculus of variations and optimal control; optimization (49-XX) 5 Convex and discrete geometry (52-XX) 4 Quantum theory (81-XX) 3 History and biography (01-XX) 3 Algebraic topology (55-XX) 3 Operations research, mathematical programming (90-XX) 2 Topological groups, Lie groups (22-XX) 2 Potential theory (31-XX) 2 Dynamical systems and ergodic theory (37-XX) 1 General and overarching topics; collections (00-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Real functions (26-XX) 1 Ordinary differential equations (34-XX) 1 Difference and functional equations (39-XX) 1 Integral transforms, operational calculus (44-XX) 1 Numerical analysis (65-XX) 1 Mechanics of particles and systems (70-XX) 1 Mechanics of deformable solids (74-XX) 1 Fluid mechanics (76-XX) 1 Astronomy and astrophysics (85-XX) ### Citations contained in zbMATH Open 432 Publications have been cited 4,928 times in 1,988 Documents Cited by Year Geometry of submanifolds. Zbl 0262.53036 Chen, Bang-yen 1973 Total mean curvature and submanifolds of finite type. Zbl 0537.53049 Chen, Bang-Yen 1984 Some pinching and classification theorems for minimal submanifolds. Zbl 0811.53060 Chen, Bang-Yen 1993 Some open problems and conjectures on submanifolds of finite type. Zbl 0749.53037 Chen, Bang-Yen 1991 Geometry of slant submanifolds. Zbl 0716.53006 Chen, Bang-Yen 1990 On totally real submanifolds. Zbl 0286.53019 Chen, Bang-Yen; Ogiue, Koichi 1974 Geometry of warped product CR-submanifolds in Kaehler manifolds. Zbl 0996.53044 Chen, Bang-Yen 2001 Slant immersions. Zbl 0677.53060 Chen, Bang-Yen 1990 Pseudo-Riemannian geometry, $$\delta$$-invariants and applications. Zbl 1245.53001 Chen, Bang-Yen 2011 When does the position vector of a space curve always lie in its rectifying plane? Zbl 1035.53003 Chen, Bang-Yen 2003 A report on submanifolds of finite type. Zbl 0867.53001 Chen, Bang-Yen 1996 Geometry of warped product CR-submanifolds in Kaehler manifolds. II. Zbl 0996.53045 Chen, Bang-Yen 2001 Totally geodesic submanifolds of symmetric spaces. II. Zbl 0384.53024 1978 Biharmonic pseudo-Riemannian submanifolds in pseudo-Euclidean spaces. Zbl 0892.53012 Chen, Bang-Yeng; Ishikawa, Susumu 1998 Geometry of submanifolds and its applications. Zbl 0474.53050 Chen, Bang-yen 1981 Biharmonic surfaces in pseudo-Euclidean spaces. Zbl 0757.53009 Chen, Bang-Yen; Ishikawa, Susumu 1991 Relations between Ricci curvature and shape operator for submanifolds with arbitrary codimensions. Zbl 0962.53015 Chen, Bang-Yen 1999 Mean curvature and shape operator of isometric immersions in real-space-forms. Zbl 0866.53038 Chen, Bang-Yen 1996 Some conformal invariants of submanifolds and their applications. Zbl 0321.53042 Chen, Bang-yen 1974 Differential geometry of geodesic spheres. Zbl 0503.53013 Chen, Bang-Yen; Vanhecke, Lieven 1981 Geometry of warped product submanifolds: a survey. 
Zbl 1301.53049 Chen, Bang-Yen 2013 Some new obstructions to minimal and Lagrangian isometric immersions. Zbl 1026.53009 Chen, Bang-Yen 2000 Hypersurfaces of a conformally flat space. Zbl 0257.53027 Chen, Bang-Yen; Yano, Kentaro 1972 Warped product bi-slant immersions in Kaehler manifolds. Zbl 1369.53039 Uddin, Siraj; Chen, Bang-Yen; Al-Solamy, Falleh R. 2017 A simple characterization of generalized Robertson-Walker spacetimes. Zbl 1308.83159 Chen, Bang-Yen 2014 Differential geometry of warped product manifolds and submanifolds. Zbl 1390.53001 Chen, Bang-Yen 2017 Submanifolds with finite type Gauss map. Zbl 0672.53044 Chen, Bang-Yen; Piccinni, Paolo 1987 Rectifying curves as centrodes and extremal curves. Zbl 1082.53012 Chen, Bang-Yen; Dillen, Franki 2005 CR-submanifolds of a Kähler manifold. I. Zbl 0431.53048 Chen, Bang-yen 1981 On CR-submanifolds of Hermitian manifolds. Zbl 0453.53018 Blair, David E.; Chen, Bang-Yen 1979 Biharmonic ideal hypersurfaces in Euclidean spaces. Zbl 1260.53017 Chen, Bang-Yen; Munteanu, Marian Ioan 2013 Pointwise slant submanifolds in almost Hermitian manifolds. Zbl 1269.53059 Chen, Bang-Yen; Garay, Oscar J. 2012 Riemannian geometry of Lagrangian submanifolds. Zbl 1002.53053 Chen, Bang-Yen 2001 On isometric minimal immersions from warped products into real space forms. Zbl 1022.53022 Chen, Bang-Yen 2002 Totally geodesic submanifolds of symmetric spaces. I. Zbl 0368.53038 1977 Total mean curvature and submanifolds of finite type. 2nd ed. Zbl 1326.53004 Chen, Bang-Yen 2015 An exotic totally real minimal immersion of $$S^ 3$$ in $$\mathbb{C} P^ 3$$ and its characterisation. Zbl 0855.53011 Chen, B.-Y.; Dillen, F.; Verstraelen, L.; Vrancken, L. 1996 A Riemannian geometric invariant and its applications to a problem of Borel and Serre. Zbl 0656.53049 1988 Some results on concircular vector fields and their applications to Ricci solitons. Zbl 1343.53038 Chen, Bang-Yen 2015 Complex extensors and Lagrangian submanifolds in complex Euclidean spaces. Zbl 0877.53041 Chen, Bang-Yen 1997 Interaction of Legendre curves and Lagrangian submanifolds. Zbl 0884.53014 Chen, Bang-Yen 1997 Totally real submanifolds of $$\mathbb{C} P^ n$$ satisfying a basic equality. Zbl 0816.53034 Chen, B.-Y.; Dillen, F.; Verstraelen, L.; Vrancken, L. 1994 Warped product pointwise bi-slant submanifolds of Kaehler manifolds. Zbl 1413.53101 Chen, Bang-Yen; Uddin, Siraj 2018 Null 2-type surfaces in $$E^ 3$$ are circular cylinders. Zbl 0657.53002 Chen, Bang-Yen 1988 Two theorems on Kähler manifolds. Zbl 0295.53028 Chen, Bang-yen; Ogiue, Koichi 1974 Minimal submanifolds of a higher dimensional sphere. Zbl 0218.53073 Yano, Kentaro; Chen, Bang-Yen 1971 Classification of marginally trapped Lorentzian flat surfaces in $$\mathbb E^4_2$$ and its application to biharmonic surfaces. Zbl 1160.53007 Chen, Bang-Yen 2008 On the concurrent vector fields of immersed manifolds. Zbl 0221.53049 Yano, Kentaro; Chen, Bang-Yen 1971 Surfaces of revolution with pointwise 1-type Gauss map. Zbl 1082.53004 Chen, Bang-Yen; Choi, Miekyung; Kim, Young Ho 2005 Marginally trapped surfaces in Lorentzian space forms with positive relative nullity. Zbl 1141.53065 Chen, Bang-Yen; Van der Veken, Joeri 2007 Complete classification of parallel surfaces in 4-dimensional Lorentzian space forms. Zbl 1182.53018 Chen, Bang-Yen; Van der Veken, Joeri 2009 On the total curvature of immersed manifolds. I: An inequality of Fenchel-Borsuk-Willmore. Zbl 0209.52803 Chen, B.-y. 
1971 A general inequality for submanifolds in complex-space-forms and its applications. Zbl 0871.53043 Chen, Bang-yen 1996 Curvature inequalities for Lagrangian submanifolds: the final solution. Zbl 1288.53014 Chen, Bang-Yen; Dillen, Franki; Van der Veken, Joeri; Vrancken, Luc 2013 Riemannian submanifolds. Zbl 0968.53002 Chen, Bang-Yen 2000 Ruled surfaces and tubes with finite type Gauss map. Zbl 0798.53055 Baikoussis, Christos; Chen, Bang-yen; Verstraelen, Leopold 1993 Quaternion CR-submanifolds of quaternion manifolds. Zbl 0481.53046 Barros, M.; Chen, B. Y.; Urbano, F. 1981 Geometry of warped products as Riemannian submanifolds and related problems. Zbl 1012.53051 Chen, Bang-Yen 2002 On rectifying curves in Euclidean 3-space. Zbl 1424.53021 Deshmukh, Sharief; Chen, Bang-Yen; Alshammari, Sana Hamoud 2018 Differential geometry of real submanifolds in a Kaehler manifold. Zbl 0451.53041 Chen, Bang-Yen 1981 Some characterizations of complex space forms in terms of Chern classes. Zbl 0315.53034 Chen, Bang-Yen; Ogiue, Koichi 1975 Ricci solitons and concurrent vector fields. Zbl 1334.53032 Chen, Bang-Yen; Deshmukh, Sharief 2015 Some open problems and conjectures on submanifolds of finite type: recent development. Zbl 1287.53044 Chen, Bang-Yen 2014 Integral formulas for submanifolds and their applications. Zbl 0203.53802 Chen, B.-y.; Yano, K. 1971 Surfaces with parallel normalized mean curvature vector. Zbl 0435.53040 Chen, Bang-Yen 1980 Lagrangian isometric immersions of a real-space-form $$M^n(c)$$ into a complex-space-form $$\widetilde M^n(4c)$$. Zbl 0929.53008 Chen, B.-Y.; Dillen, F.; Verstraelen, L.; Vrancken, L. 1998 Isometric, holomorphic and symplectic reflections. Zbl 0673.53035 Chen, B. Y.; Vanhecke, L. 1989 Differential geometry of submanifolds with planar normal sections. Zbl 0486.53004 Chen, Bang-Yen 1982 CR-warped products in complex projective spaces with compact holomorphic factor. Zbl 1063.53056 Chen, Bang-Yen 2004 Another general inequality for CR-warped products in complex space forms. Zbl 1049.53039 Chen, Bang-Yen 2003 Special conformally flat spaces and canal hypersurfaces. Zbl 0266.53043 Chen, Bang-yen; Yano, Kentaro 1973 Yamabe and quasi-Yamabe solitons on Euclidean submanifolds. Zbl 1425.53050 Chen, Bang-Yen; Deshmukh, Sharief 2018 Surfaces of finite type in Euclidean 3-space. Zbl 0628.53011 Chen, Bang-Yen 1987 Some topological obstructions to Bochner-Kähler metric and their applications. Zbl 0354.53049 Chen, Bang-yen 1978 A Riemannian invariant and its applications to submanifold theory. Zbl 0834.53045 Chen, Bang-Yen 1995 Ruled surface of finite type. Zbl 0704.53003 Chen, Bang-Yen; Dillen, Franki; Verstraelen, Leopold; Vrancken, Luc 1990 Riemannian submersions, $$\delta$$-invariants, and optimal inequality. Zbl 1253.53057 Alegre, Pablo; Chen, Bang-Yen; Munteanu, Marian Ioan 2012 Optimal general inequalities for Lagrangian submanifolds in complex space forms. Zbl 1217.53082 Chen, Bang-Yen; Dillen, Franki 2011 Finite type submanifolds in pseudo-Euclidean spaces and applications. Zbl 0586.53022 Chen, Bang-Yen 1985 Stationary 2-type surfaces in a hypersphere. Zbl 0613.53023 Barros, Manuel; Chen, Bang-Yen 1987 CR-submanifolds of a Kähler manifold. II. Zbl 0485.53051 Chen, Bang-Yen 1981 On spectral decomposition of immersions of finite type. Zbl 0771.53033 Chen, Bang-Yen; Petrovic, Mira 1991 Ideal Lagrangian immersions in complex space forms. Zbl 0980.53096 Chen, Bang-Yen 2000 Submanifolds with geodesic normal sections. 
Zbl 0536.53053 Chen, Bang-Yen; Verheyen, Paul 1984 Strings of Riemannian invariants, inequalities, ideal immersions and their applications. Zbl 1009.53041 Chen, Bang-Yen 1998 On the surface with parallel mean curvature vector. Zbl 0252.53021 Chen, Bang-Yen 1973 A note on Yamabe solitons. Zbl 1408.53051 Deshmukh, S.; Chen, B. Y. 2018 Scalar curvature, inequality and submanifold. Zbl 0256.53041 Chen, Bang-Yen; Okumura, Masafumi 1973 Classification of Wintgen ideal surfaces in Euclidean 4-space with equal Gauss and normal curvatures. Zbl 1203.53005 Chen, Bang-Yen 2010 Totally real submanifolds of a quaternion projective space. Zbl 0413.53031 Chen, Bang-yen; Houh, Chorng-shi 1979 Slant submanifolds in complex Euclidean spaces. Zbl 0735.53040 Chen, Bang-Yen; Tazawa, Yoshihiko 1991 Finite type submanifolds and generalizations. Zbl 0586.53023 Chen, Bang-yen 1985 The canonical foliations of a locally conformal Kähler manifold. Zbl 0587.53059 Chen, Bang-Yen; Piccinni, Paolo 1985 Submanifolds with planar normal sections. Zbl 0485.53004 Chen, Bang-Yen 1981 Characterizing a class of totally real submanifolds of $$S^ 6$$ by their sectional curvatures. Zbl 0829.53049 Chen, Bang-Yen; Dillen, Franki; Verstraelen, Leopold; Vrancken, Luc 1995 Differential geometry of rectifying submanifolds. Zbl 1375.53008 Chen, Bang-Yen 2016 Spatial and Lorentzian surfaces in Robertson-Walker space times. Zbl 1144.81324 Chen, Bang-Yen; Van der Veken, Joeri 2007 Classification of minimal Lorentz surfaces in indefinite space forms with arbitrary codimension and arbitrary index. Zbl 1274.53078 Chen, B.-Y. 2011 Geometry of compact shrinking Ricci solitons. Zbl 1316.53052 Chen, Bang-Yen; Deshmukh, Sharief 2014 Slant submanifolds of complex projective and complex hyperbolic spaces. Zbl 0979.53058 Chen, Bang-Yen; Tazawa, Yoshihiko 2000 Lagrangian submanifolds in complex space forms satisfying equality in the optimal inequality involving $$\delta (2, \ldots, 2)$$. Zbl 1475.53023 Chen, Bang-Yen; Vrancken, Luc; Wang, Xianfeng 2021 Biharmonic submanifolds and biharmonic maps in Riemannian geometry. Zbl 1455.53002 Ou, Ye-Lin; Chen, Bang-Yen 2020 A generalized Wintgen inequality for quaternionic CR-submanifolds. Zbl 1439.53054 Alodan, Haila; Chen, Bang-Yen; Deshmukh, Sharief; Vîlcu, Gabriel-Eduard 2020 Bi-warped product submanifolds of nearly Kaehler manifolds. Zbl 1434.53020 Uddin, Siraj; Chen, Bang-Yen; Al-Jedani, Awatif; Alghanemi, Azeb 2020 Geometry of pointwise CR-slant warped products in Kaehler manifolds. Zbl 1468.53011 Chen, Bang-Yen; Uddin, Siraj; Al-Solamy, Falleh R. 2020 A Chen first inequality for statistical submanifolds in Hessian manifolds of constant Hessian curvature. Zbl 1430.53061 Chen, Bang-Yen; Mihai, Adela; Mihai, Ion 2019 Sharp growth estimates for warping functions in multiply warped product manifolds. Zbl 1427.53046 Chen, Bang-Yen; Wei, Shihshu Walter 2019 Riemannian submanifolds with concircular canonical field. Zbl 1433.53084 Chen, Bang-Yen; Wei, Shihshu Walter 2019 On some geometric properties of quasi-product production models. Zbl 07056516 Alodan, Haila; Chen, Bang-Yen; Deshmukh, Sharief; Vîlcu, Gabriel-Eduard 2019 A polymorphic element formulation towards multiscale modelling of composite structures. Zbl 1440.74411 Kocaman, E. S.; Chen, B. Y.; Pinho, S. T. 2019 Warped product pointwise bi-slant submanifolds of Kaehler manifolds. Zbl 1413.53101 Chen, Bang-Yen; Uddin, Siraj 2018 On rectifying curves in Euclidean 3-space. 
Zbl 1424.53021 Deshmukh, Sharief; Chen, Bang-Yen; Alshammari, Sana Hamoud 2018 Yamabe and quasi-Yamabe solitons on Euclidean submanifolds. Zbl 1425.53050 Chen, Bang-Yen; Deshmukh, Sharief 2018 A note on Yamabe solitons. Zbl 1408.53051 Deshmukh, S.; Chen, B. Y. 2018 Two-numbers and their applications – a survey. Zbl 1411.53033 Chen, Bang-Yen 2018 Euclidean submanifolds with conformal canonical vector field. Zbl 1420.53012 Chen, Bang-Yen; Deshmukh, Sharief 2018 Classification of $$\delta(2,n-2)$$-ideal Lagrangian submanifolds in $$n$$-dimensional complex space forms. Zbl 1381.53154 Chen, Bang-Yen; Dillen, Franki; Van der Veken, Joeri; Vrancken, Luc 2018 Natural mates of Frenet curves in Euclidean 3-space. Zbl 1424.53020 Deshmukh, Sharief; Chen, Bang-Yen; Alghanemi, Azeb 2018 A differential equation for Frenet curves in Euclidean 3-space and its applications. Zbl 1424.53022 Deshmukh, Sharief; Chen, Bang-Yen; Turki, Nasser Bin 2018 A link between harmonicity of 2-distance functions and incompressibility of canonical vector fields. Zbl 1410.53059 Chen, Bang-Yen 2018 Erratum to: “Two optimal inequalities for anti-holomorphic submanifolds and their applications”. Zbl 1407.53055 Al-Solamy, Falleh R.; Chen, Bang-Yen; Deshmukh, Sharief 2018 Warped product bi-slant immersions in Kaehler manifolds. Zbl 1369.53039 Uddin, Siraj; Chen, Bang-Yen; Al-Solamy, Falleh R. 2017 Differential geometry of warped product manifolds and submanifolds. Zbl 1390.53001 Chen, Bang-Yen 2017 Rectifying curves and geodesics on a cone in the Euclidean 3-space. Zbl 1371.53002 Chen, Bang-Yen 2017 Topics in differential geometry associated with position vector fields on Euclidean submanifolds. Zbl 1362.53005 Chen, Bang-Yen 2017 Addendum to: “Differential geometry of rectifying submanifolds”. Zbl 1473.53010 Chen, Bang-Yen 2017 A link between torse-forming vector fields and rotational hypersurfaces. Zbl 1380.53012 Chen, Bang-Yen; Verstraelen, Leopold 2017 Classification of torqued vector fields and its applications to Ricci solitons. Zbl 07384044 Chen, Bang-Yen 2017 Euclidean submanifolds via tangential components of their position vector fields. Zbl 1391.37044 Chen, Bang-Yen 2017 Rectifying submanifolds of Riemannian manifolds and torqued vector fields. Zbl 07390792 Chen, Bang-Yen 2017 Classification of rectifying space-like submanifolds in pseudo-Euclidean spaces. Zbl 1442.53008 Chen, Bang-Yen; Oh, Yun Myung 2017 Euclidean submanifolds with incompressible canonical vector field. Zbl 07407414 Chen, Bang-Yen 2017 Covering maps and ideal embeddings of compact homogeneous spaces. Zbl 1391.53063 Chen, Bang-Yen 2017 Differential geometry of concircular submanifolds of Euclidean spaces. Zbl 07407399 Chen, Bang-Yen; Wei, Shihshu Walter 2017 Differential geometry of rectifying submanifolds. Zbl 1375.53008 Chen, Bang-Yen 2016 A survey on Ricci solitons on Riemannian submanifolds. Zbl 1360.53048 Chen, Bang-Yen 2016 Concircular vector fields and pseudo-Kähler manifolds. (Concircular vector fields and pseudo-Kaehler manifolds.) Zbl 1474.53280 Chen, Bang-Yen 2016 CR-warped submanifolds in Kaehler manifolds. Zbl 1351.32064 Chen, Bang-Yen 2016 CR-submanifolds and $$\delta$$-invariants. Zbl 1351.32065 Chen, Bang-Yen 2016 Total mean curvature and submanifolds of finite type. 2nd ed. Zbl 1326.53004 Chen, Bang-Yen 2015 Some results on concircular vector fields and their applications to Ricci solitons. Zbl 1343.53038 Chen, Bang-Yen 2015 Ricci solitons and concurrent vector fields. 
Zbl 1334.53032 Chen, Bang-Yen; Deshmukh, Sharief 2015 $$\delta(3)$$-ideal null 2-type hypersurfaces in Euclidean spaces. Zbl 1327.53008 Chen, Bang-Yen; Fu, Yu 2015 Classification of ideal submanifolds of real space forms with type number $$\leq 2$$. Zbl 1326.53078 Chen, Bang-Yen; Yıldırım, Handan 2015 Geometric and topological obstructions to various immersions in submanifold theory and some related open problems. Zbl 1474.53265 Chen, Bang-Yen 2015 Einstein manifolds as affine hypersurfaces. Zbl 1315.53042 Chen, Bang-Yen 2015 A simple characterization of generalized Robertson-Walker spacetimes. Zbl 1308.83159 Chen, Bang-Yen 2014 Some open problems and conjectures on submanifolds of finite type: recent development. Zbl 1287.53044 Chen, Bang-Yen 2014 Geometry of compact shrinking Ricci solitons. Zbl 1316.53052 Chen, Bang-Yen; Deshmukh, Sharief 2014 Solutions to homogeneous Monge-Ampère equations of homothetic functions and their applications to production models in economics. Zbl 1442.35464 Chen, Bang-Yen 2014 Classification of Ricci solitons on Euclidean hypersurfaces. Zbl 1310.53041 Chen, Bang-Yen; Deshmukh, Sharief 2014 Two optimal inequalities for anti-holomorphic submanifolds and their applications. Zbl 1357.53055 Al-Solamy, Falleh R.; Chen, Bang-Yen; Deshmukh, Sharief 2014 Notes on isotropic geometry of production models. Zbl 1461.91165 Chen, Bang-Yen; Decu, Simona; Verstraelen, Leopold 2014 Ricci solitons on Riemannian submanifolds. Zbl 1333.53054 Chen, Bang-Yen 2014 Geometry of warped product submanifolds: a survey. Zbl 1301.53049 Chen, Bang-Yen 2013 Biharmonic ideal hypersurfaces in Euclidean spaces. Zbl 1260.53017 Chen, Bang-Yen; Munteanu, Marian Ioan 2013 Curvature inequalities for Lagrangian submanifolds: the final solution. Zbl 1288.53014 Chen, Bang-Yen; Dillen, Franki; Van der Veken, Joeri; Vrancken, Luc 2013 Recent developments of biharmonic conjecture and modified biharmonic conjectures. Zbl 1300.53013 Chen, Bang-Yen 2013 A tour through $$\delta$$-invariants: from Nash’s embedding theorem to ideal immersions, best ways of living and beyond. Zbl 1340.53001 Chen, Bang-Yen 2013 Geometric classifications of homogeneous production functions. Zbl 1334.91043 Chen, Bang-Yen; Vîlcu, Gabriel Eduard 2013 Optimal inequalities, contact $$\delta$$-invariants and their applications. Zbl 1279.53048 Chen, Bang-Yen; Martin-Molina, Veronica 2013 On ideal hypersurfaces of Euclidean 4-space. Zbl 1280.53052 Chen, Bang-Yen 2013 The 2-ranks of connected compact Lie groups. Zbl 1294.22006 Chen, Bang-Yen 2013 Geometry of position function of totally real submanifolds in complex Euclidean spaces. Zbl 1473.53078 Chen, Bang-Yen 2013 Lagrangian submanifolds with prescribed second fundamental form. Zbl 1303.53096 Chen, Bang-Yen; Van der Veken, Joeri; Vrancken, Luc 2013 Classification of spherical Lagrangian submanifolds in complex Euclidean spaces. Zbl 1308.53120 Chen, Bang-Yen 2013 Pointwise slant submanifolds in almost Hermitian manifolds. Zbl 1269.53059 Chen, Bang-Yen; Garay, Oscar J. 2012 Riemannian submersions, $$\delta$$-invariants, and optimal inequality. Zbl 1253.53057 Alegre, Pablo; Chen, Bang-Yen; Munteanu, Marian Ioan 2012 On some geometric properties of quasi-sum production models. Zbl 1243.39019 Chen, Bang-Yen 2012 An optimal inequality for CR-warped products in complex space forms involving CR $$\delta$$-invariant. Zbl 1244.53059 Chen, Bang-Yen 2012 $$\delta (2)$$-ideal null 2-type hypersurfaces of Euclidean space are spherical cylinders. Zbl 1247.53067 Chen, Bang-Yen; Garay, Oscar J. 
2012 Classification of homothetic functions with constant elasticity of substitution and its geometric applications. Zbl 1308.90060 Chen, Bang-Yen 2012 Classification of $$h$$-homogeneous production functions with constant elasticity of substitution. Zbl 1260.91161 Chen, Bang-Yen 2012 Geometry of quasi-sum production functions with constant elasticity of substitution property. Zbl 1273.90072 Chen, Bang-Yen 2012 A note on homogeneous production models. Zbl 1299.91088 Chen, Bang-Yen 2012 Classification of Lagrangian submanifolds in complex space forms satisfying a basic equality involving $$\delta(2,2)$$. Zbl 1237.53045 Chen, Bang-Yen; Prieto-Martín, Alicia 2012 Lagrangian submanifolds in complex space forms attaining equality in a basic inequality. Zbl 1246.53108 Chen, Bang-Yen; Dillen, Franki; Vrancken, Luc 2012 Geometry of $$\mathcal P R$$-warped products in para-Kähler manifolds. Zbl 1260.53039 Chen, Bang-Yen; Munteanu, Marian Ioan 2012 An explicit formula of Hessian determinants of composite functions and its applications. Zbl 1289.15015 Chen, Bang-Yen 2012 Wintgen ideal surfaces in four-dimensional neutral indefinite space form $${R^4_2(c)}$$. Zbl 1267.53054 Chen, Bang-Yen 2012 Pseudo-Riemannian geometry, $$\delta$$-invariants and applications. Zbl 1245.53001 Chen, Bang-Yen 2011 Optimal general inequalities for Lagrangian submanifolds in complex space forms. Zbl 1217.53082 Chen, Bang-Yen; Dillen, Franki 2011 Classification of minimal Lorentz surfaces in indefinite space forms with arbitrary codimension and arbitrary index. Zbl 1274.53078 Chen, B.-Y. 2011 On some geometric properties of $$h$$-homogeneous production functions in microeconomics. Zbl 1289.91107 Chen, Bang-Yen 2011 On Wintgen ideal surfaces. Zbl 1266.53007 Chen, Bang-Yen 2011 $$\delta$$-invariants for Lagrangian submanifolds of complex space forms. Zbl 1264.53057 Chen, Bang-Yen; Dillen, Franki 2011 Classification theorems for space-like surfaces in 4-dimensional indefinite space forms with index 2. Zbl 1230.53050 Chen, Bang-Yen; Suceavǎ, Bogdan D. 2011 Lagrangian $$H$$-umbilical submanifolds of para-Kähler manifolds. Zbl 1247.53066 Chen, Bang-Yen 2011 Classification of Wintgen ideal surfaces in Euclidean 4-space with equal Gauss and normal curvatures. Zbl 1203.53005 Chen, Bang-Yen 2010 Classification of marginally trapped surfaces with parallel mean curvature vector in Lorentzian space forms. Zbl 1213.53026 Chen, Bang-Yen; Van der Veken, Joeri 2010 On slant submanifolds of neutral Kähler manifolds. Zbl 1202.53022 Arslan, K.; Carriazo, A.; Chen, B.-Y.; Murathan, C. 2010 Complete classification of parallel spatial surfaces in pseudo-Riemannian space forms with arbitrary index and dimension. Zbl 1205.53061 Chen, Bang-Yen 2010 Submanifolds with parallel mean curvature vector in Riemannian and indefinite space forms. Zbl 1214.53014 Chen, Bang-Yen 2010 Addendum to “classification of marginally trapped Lorentzian flat surfaces in $$\mathbb E_2^4$$ and its application to biharmonic surfaces”. Zbl 1179.53021 Chen, Bang-Yen; Yang, Dan 2010 Complete classification of parallel Lorentzian surfaces in Lorentzian complex space forms. Zbl 1192.53016 Chen, Bang-Yen; Dillen, Franki; Van der Veken, Joeri 2010 Complete classification of Lorentz surfaces with parallel mean curvature vector in arbitrary pseudo-Euclidean space. Zbl 1207.53067 Chen, Bang-Yen 2010 A minimal immersion of the hyperbolic plane into the neutral pseudo-hyperbolic 4-space and its characterization. 
Zbl 1188.53009 Chen, Bang-Yen 2010 Explicit classification of parallel Lorentz surfaces in 4D indefinite space forms with index 3. Zbl 1217.53061 Chen, Bang-Yen 2010 Lagrangian submanifolds in para-Kähler manifolds. Zbl 1204.53043 Chen, Bang-Yen 2010 Complete classification of parallel Lorentz surfaces in neutral pseudo hyperbolic 4-space. Zbl 1210.53058 Chen, Bang-Yen 2010 ...and 332 more Documents all top 5 ### Cited by 1,345 Authors 173 Chen, Bang-Yen 47 Siraj-Uddin 45 Vrancken, Luc 38 Vanhecke, Lieven 36 Kim, Youngho 33 De, Uday Chand 30 Dillen, Franki 29 Verstraelen, Leopold C. A. 24 Ali, Akram 24 Fu, Yu 24 Vîlcu, Gabriel Eduard 23 Al-Solamy, Falleh Rijaullah 23 Oniciuc, Cezar 20 Garay, Oscar Jesus 20 Mihai, Ion 19 Deshmukh, Sharief 19 Lee, Jaewon 18 Barros, Manuel 18 Mantica, Carlo Alberto 17 Khan, Meraj Ali 17 Mihai, Adela 17 Shahid, Mohammed Hasan 16 Alkhaldi, Ali H. 16 Hui, Shyamal Kumar 16 Montaldo, Stefano 16 Ṣahin, Bayram 16 Urakawa, Hajime 16 Van der Veken, Joeri 15 Arslan, Kadri 15 Rosca, Radu M. 15 Suh, Young Jin 14 Hu, Zejun 14 Kumar, Rakesh 14 Lucas, Pascual 14 Molinari, Luca Guido 14 Ozgur, Cihan 14 Yoon, Dae Won 13 Fetcu, Dorel 13 Sasahara, Toru 13 Turgay, Nurettin Cenk 13 Yano, Kentaro 12 Carriazo, Alfonso 12 Chaubey, Sudhakar Kumar 12 de Lima, Henrique F. 12 Gupta, Ram Shankar 12 Kılıç, Erol 12 Kim, Dongsoo S. 12 Li, Haizhong 12 Öztürk, Günay 12 Shaikh, Absos Ali 11 Adachi, Toshiaki 11 Blair, David E. 11 Dragomir, Sorin 11 Houh, Chorng-Shi 11 Maeta, Shun 11 Milousheva, Velichka 11 Nagaich, Rakesh Kumar 11 Shenawy, Sameh 11 Tasaki, Hiroyuki 10 Atçeken, Mehmet 10 Bulca, Betül 10 Castro, Ildefonso 10 İlarslan, Kazim 10 Mohammadpouri, Akram 10 Murathan, Cengizhan 10 Park, Kwang-Soon 10 Pashaie, Firooz 10 Tanaka, Makiko Sumi 9 Deszcz, Ryszard 9 Dursun, Uǧur 9 Hasanis, Thomas 9 Kumar, Sangeet 9 Lone, Mehraj Ahmad 9 Lone, Mohamd Saleem 9 Maeda, Sadahiro 9 Sharma, Ramesh 9 Suceavă, Bogdan Dragos 9 Tripathi, Mukut Mani 9 Vlachos, Theodoros 9 Wang, Xianfeng 8 Antic, Miroslava 8 Berndt, Jürgen 8 Blaga, Adara-Monica 8 Choudhary, Majid Ali 8 Crâşmăreanu, Mircea 8 Djorić, Mirjana 8 Dos Santos, Fábio Reis 8 Guo, Zhen 8 Khan, Viqar Azam 8 Lee, Jieun 8 Liu, Jiancheng 8 Mofarreh, Fatemah Y. Y. 8 Munteanu, Marian-Ioan 8 Othman, Wan Ainun Mior 8 Ou, Ye-Lin 8 Pişcoran, Laurian-Ioan 8 Siddiqui, Aliya Naaz 8 Yang, Dan 7 Akyol, Mehmet Akif 7 Anciaux, Henri ...and 1,245 more Authors all top 5 ### Cited in 264 Serials 103 Journal of Geometry and Physics 70 Journal of Geometry 58 Kodai Mathematical Journal 58 Differential Geometry and its Applications 50 Mediterranean Journal of Mathematics 48 Journal of Mathematical Analysis and Applications 48 Results in Mathematics 43 Proceedings of the American Mathematical Society 40 Tohoku Mathematical Journal. Second Series 38 Annals of Global Analysis and Geometry 36 Geometriae Dedicata 36 International Journal of Geometric Methods in Modern Physics 35 Annali di Matematica Pura ed Applicata. 
Serie Quarta 31 The Journal of Geometric Analysis 31 International Electronic Journal of Geometry 29 Filomat 28 Bulletin of the Australian Mathematical Society 28 Archiv der Mathematik 25 Israel Journal of Mathematics 24 Transactions of the American Mathematical Society 23 Mathematische Zeitschrift 23 Kodai Mathematical Seminar Reports 21 Czechoslovak Mathematical Journal 19 Mathematische Annalen 19 International Journal of Mathematics 18 Rocky Mountain Journal of Mathematics 18 Glasgow Mathematical Journal 17 Communications of the Korean Mathematical Society 16 Monatshefte für Mathematik 16 Journal of Inequalities and Applications 15 Journal of Mathematical Physics 14 Rendiconti del Circolo Matemàtico di Palermo. Serie II 14 Acta Mathematica Sinica. English Series 14 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 13 Manuscripta Mathematica 13 Hacettepe Journal of Mathematics and Statistics 12 Advances in Mathematics 12 Tokyo Journal of Mathematics 12 Facta Universitatis. Series Mathematics and Informatics 12 Turkish Journal of Mathematics 12 Taiwanese Journal of Mathematics 12 DGDS. Differential Geometry – Dynamical Systems 12 Afrika Matematika 11 General Relativity and Gravitation 11 Acta Mathematica Hungarica 11 Abstract and Applied Analysis 11 AIMS Mathematics 10 Journal of Soviet Mathematics 10 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 10 Calculus of Variations and Partial Differential Equations 10 Journal of Mathematical Sciences (New York) 10 Balkan Journal of Geometry and its Applications (BJGA) 10 Advances in Mathematical Physics 10 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 9 Mathematical Notes 9 Kyungpook Mathematical Journal 9 Advances in Geometry 9 Asian-European Journal of Mathematics 8 Indian Journal of Pure & Applied Mathematics 8 Ukrainian Mathematical Journal 8 Journal of the Mathematical Society of Japan 8 Osaka Journal of Mathematics 8 Proceedings of the Japan Academy. Series A 8 Arab Journal of Mathematical Sciences 8 Honam Mathematical Journal 8 Journal of Dynamical Systems and Geometric Theories 8 Bulletin of the American Mathematical Society 8 Science China. Mathematics 7 Applied Mathematics and Computation 7 International Journal of Mathematics and Mathematical Sciences 7 Mathematische Nachrichten 7 Nagoya Mathematical Journal 7 Annales de la Faculté des Sciences de Toulouse. Série V. Mathématiques 7 Kragujevac Journal of Mathematics 7 Boletim da Sociedade Paranaense de Matemática. Terceira Série 7 ISRN Geometry 6 Beiträge zur Algebra und Geometrie 6 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 6 Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica 6 Annales de l’Institut Henri Poincaré. Nouvelle Série. Section A. Physique Théorique 6 Note di Matematica 6 Proceedings of the National Academy of Sciences, India. Section A. Physical Sciences 6 Central European Journal of Mathematics 6 Cubo 6 Frontiers of Mathematics in China 6 Acta Universitatis Sapientiae. Mathematica 6 Mathematics 5 Journal of the Korean Mathematical Society 5 Quaestiones Mathematicae 5 Rendiconti del Seminario Matematico della Università di Padova 5 Tamkang Journal of Mathematics 5 Bulletin of the Iranian Mathematical Society 5 Linear Algebra and its Applications 5 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 5 Analele Științifice ale Universității Al. I. Cuza din Iași. Serie Nouă. 
Matematică 5 Mathematical Physics, Analysis and Geometry 5 Communications in Contemporary Mathematics 5 Lobachevskii Journal of Mathematics 5 Arabian Journal of Mathematics 5 Journal of Mathematics ...and 164 more Serials all top 5 ### Cited in 45 Fields 1,916 Differential geometry (53-XX) 184 Global analysis, analysis on manifolds (58-XX) 73 Several complex variables and analytic spaces (32-XX) 59 Relativity and gravitational theory (83-XX) 49 Partial differential equations (35-XX) 37 Manifolds and cell complexes (57-XX) 36 Calculus of variations and optimal control; optimization (49-XX) 19 Algebraic geometry (14-XX) 13 Nonassociative rings and algebras (17-XX) 13 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 12 Topological groups, Lie groups (22-XX) 12 Dynamical systems and ergodic theory (37-XX) 11 Convex and discrete geometry (52-XX) 10 Potential theory (31-XX) 10 Quantum theory (81-XX) 9 Functions of a complex variable (30-XX) 8 Linear and multilinear algebra; matrix theory (15-XX) 8 Geometry (51-XX) 8 Algebraic topology (55-XX) 7 Fluid mechanics (76-XX) 6 Operations research, mathematical programming (90-XX) 5 Real functions (26-XX) 3 General and overarching topics; collections (00-XX) 3 Number theory (11-XX) 3 Ordinary differential equations (34-XX) 3 Functional analysis (46-XX) 3 Probability theory and stochastic processes (60-XX) 3 Statistics (62-XX) 3 Numerical analysis (65-XX) 3 Computer science (68-XX) 3 Mechanics of particles and systems (70-XX) 2 History and biography (01-XX) 2 Mathematical logic and foundations (03-XX) 2 Combinatorics (05-XX) 2 Measure and integration (28-XX) 2 Mechanics of deformable solids (74-XX) 2 Statistical mechanics, structure of matter (82-XX) 2 Astronomy and astrophysics (85-XX) 2 Information and communication theory, circuits (94-XX) 1 Associative rings and algebras (16-XX) 1 Group theory and generalizations (20-XX) 1 Difference and functional equations (39-XX) 1 Operator theory (47-XX) 1 General topology (54-XX) 1 Classical thermodynamics, heat transfer (80-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
2022-05-20T14:26:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44957196712493896, "perplexity": 6729.326145747143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00621.warc.gz"}
https://www.nist.gov/property-fieldsection/enable-laboratory-market-strategies-accelerate-commercialization-federal
# Enable laboratory-to-market strategies to accelerate commercialization of federal technologies and collaboration (+$6 million)

Federally funded research dramatically affects everyday life. The Internet, global positioning system (GPS), lifesaving vaccines and many other advances started as federally funded research. Maximizing returns from our national investment in federal research and development is essential to economic growth and to U.S. leadership in global innovation, business development and job creation in cutting-edge industries.

The America COMPETES Reauthorization Act of 2010 gave NIST federal government-wide responsibilities for ensuring as many taxpayer-funded technologies as possible make the transition from lab to market. This includes analysis, planning, coordination, reporting and general oversight of technology transfer. A 2011 Presidential Memorandum gave DOC and NIST a leadership role in helping agencies to establish goals, measure performance, streamline administrative processes and facilitate partnerships that encourage commercialization of federally funded research and development.

NIST chairs interagency working groups that coordinate federal technology transfer policy development activities, including reporting and analysis of performance and outcomes related to technology transfer. NIST also serves as the host agency for the Federal Laboratory Consortium for Technology Transfer. This leadership position makes NIST the ideal place to develop and implement new programs, further strengthening NIST's role in coordination and cooperation across the federal enterprise.

## Proposed NIST Program

NIST requests $6 million to develop and deploy laboratory-to-market strategies that accelerate collaboration and commercialization of federal technologies. Initiative-funded efforts will include the following:

• Enhancing technology commercialization through policies and programs that allow industry entrepreneurs to gain experience at federal agencies; government researchers to work outside of the government for limited time periods; and by funding entrepreneurship educational opportunities;
• Leading cross-agency efforts to streamline, prioritize and promote collaborations and the sharing of best practices between federal agencies and with the private sector at both the federal and regional levels;
• Providing on-line search tools to help industry identify agency facilities and equipment;
• Working with agencies to reduce the time, cost and complexity of executing intellectual property (IP) licenses and streamline IP transfer; and
• Evaluating the impact of federal efforts by developing metrics to assess technology transfer programs and conducting economic studies that estimate the importance of technologies and supporting technical infrastructure within and across different industrial sectors.

NIST's Technology Partnerships Office builds and sustains technology partnering activities between NIST laboratories and U.S. industries; local, state and federal agencies; and the general public. Created March 13, 2014, Updated April 22, 2014
2016-09-27T00:41:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2621871531009674, "perplexity": 11708.066033505871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660916.37/warc/CC-MAIN-20160924173740-00268-ip-10-143-35-109.ec2.internal.warc.gz"}
http://math.lanl.gov/Research/Publications/bettencourt-1998-time.shtml
### Cite Details

Luis M. A. Bettencourt and C. Wetterich, "Time evolution of correlation functions for classical and quantum anharmonic oscillators", hep-ph/9805360, 1998

### Abstract

The time evolution of the correlation functions of an ensemble of anharmonic N-component oscillators with $O(N)$ symmetry is described by a flow equation, exact up to corrections of order $1/N^2$. We find effective irreversibility. Nevertheless, analytical and numerical investigation reveals that the system does not reach thermal equilibrium for large times, even when $N\rightarrow \infty$. Depending on the initial distribution, the dynamics is asymptotically stable or it exhibits growing modes which break the conditions for the validity of the 1/N expansion for large time. We investigate both classical and quantum systems, the latter being the limit of an O(N) symmetric scalar quantum field theory in zero spatial dimensions.

### BibTeX Entry

@article{bettencourt-1998-time,
  author  = {Luis M. A. Bettencourt and C. Wetterich},
  title   = {Time evolution of correlation functions for classical and quantum anharmonic oscillators},
  year    = {1998},
  journal = {hep-ph/9805360}
}
2017-03-30T08:40:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3110848665237427, "perplexity": 1142.285850632517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193288.61/warc/CC-MAIN-20170322212953-00237-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.ecb.europa.eu/pub/financial-stability/fsr/special/html/ecb.fsrart202205_01~9d4ae00a92.en.html
# Climate-related risks to financial stability

Prepared by Tina Emambakhsh, Margherita Giuzio, Luca Mingarelli, Dilyara Salakhova and Martina Spaggiari[1]

Published as part of the Financial Stability Review, May 2022.

The ECB is continuing its work on incorporating climate-related risks into assessments of financial stability. This includes a new analysis of disclosure, pricing and greenwashing risks in financial markets, as well as continued monitoring of financial institutions' exposure to transition and physical risks. There is some encouraging evidence of better disclosure by non-financial corporations and increasing awareness of climate-related risks in financial markets. Progress made by banks, however, has been more limited. Established and newer metrics show no clear evidence of a reduction in climate-related risks, revealing instead a potential for amplification mechanisms stemming from exposure concentration, cross-hazard correlation and financial institutions' overlapping portfolios. These findings can inform evidence-based international and European policy debates around climate-related corporate disclosure, standards for sustainable financial instruments and climate-related prudential policies. More generally, amid high uncertainty around governments' transition policies in an environment of volatile energy prices, further investments in the transition to a net-zero economy would also have a positive impact on medium-term growth and energy security.

## 1 Introduction

Climate change has, for a number of years, been identified as a source of systemic risk, with potentially severe consequences for financial institutions and financial markets alike.[2] As our awareness of this risk has grown, the ECB has enhanced its approaches to understanding, monitoring and assessing the nature of climate risks and how such risks are evolving over time. Furthermore, the recent price increases and volatility seen in energy markets have underlined the wider value of supporting the transition to a net-zero economy. This special feature presents the latest developments, starting with a focus on green financing, which is needed to support the transition to a net-zero economy. The subsequent sections then provide updated assessments of bank and non-bank exposures to climate risks, by introducing aspects such as the link between climate risk and financial risk in exposures, concentration of exposures and correlations between hazards.

## 2 Increasing role of green finance in supporting the transition to a low-carbon economy

Sustainable markets continued to grow globally in 2021, mostly thanks to an increased volume of euro area ESG funds and green bonds (Chart A.1, panel a). Their growth has accelerated over the last two years, with euro area sustainable assets doubling since 2019, although sustainable markets still only account for 10% of the euro area investment fund sector and 3% of outstanding bonds.
These developments reflect the expected green investment through the EU recovery fund (NextGenerationEU), and the sharp increase in the number of financial institutions that have made net-zero commitments.[3] However, maintaining such momentum requires that decisive regulatory action be taken to strengthen capital markets beyond the sustainable finance segment and help channel investments towards green projects.[4] Empirical evidence suggests that (green) finance supports green investment and the reduction of emissions, with some differences across financing instruments and firm types.[5] While research has suggested that a higher share of equity financing is associated with greater reductions in countries’ carbon footprints, debt is the primary source of external financing for NFCs in the EU and is also used to support the development and adoption of new (greener) technologies. An analysis of changes in emissions at over 4,000 European carbon-intensive firms between 2013 and 2019 provides evidence that, up to a certain point, debt has a positive impact on environmental performance in subsequent years: firms reduce their emissions by investing in green technologies, without reducing economic activity. However, when a firm is too indebted, higher leverage is associated with higher emissions as firms then tend to invest less in energy efficiency.[6] In recent years, more firms have been disclosing both their exposure to transition risk and their emission reduction targets, but gaps in disclosure practices remain significant, signalling the need for international standards. More NFCs have been disclosing data on GHG emissions and setting emission-reduction targets over time, with high-emitting firms disclosing the most data, likely reflecting their greater exposure to public scrutiny (Chart A.1, panel b). Although a large part of this disclosure is verified by a third party, the risk of greenwashing remains high in the absence of global mandatory reporting requirements. In addition, although there has been an improvement in the climate-related disclosures of European banks since 2020, banks are not fully meeting supervisory expectations and gaps remain, especially regarding banks’ emission-reduction targets and interim milestones.[7] The prompt adoption of international disclosure standards across jurisdictions would allow investors to price and measure transition risk more effectively, while also supporting the transition to a low-carbon economy.[8] In particular, although there is evidence that firms which set an emission-reduction target have a lower credit risk and tend to reduce emissions more than other firms in subsequent years, the credibility of firms’ targets and their alignment with the Paris Agreement goals are difficult to assess.[9] Against this background, capital markets remain susceptible to greenwashing, and only the most credible green bonds seem to benefit from cheaper funding. The growth of green bond markets could help stimulate the integration of European capital markets.[10] But the credibility of green bonds and/or their issuers appears to determine whether green bonds trade at a greenium – with lower spreads than for conventional bonds – in secondary markets (Chart A.2, panel a). Only green bonds with an external review, issued by firms in green sectors (e.g. alternative energy) or by banks which are members of the United Nations Environment Programme Finance Initiative (UNEP FI) exhibit a greenium. 
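In spread terms, the greenium is simply the difference between the yield spread of a green bond and that of a comparable conventional bond from the same issuer, with a negative difference meaning the green bond funds more cheaply. A minimal sketch of that comparison is given below; the issuer labels and spread figures are invented for illustration and are not drawn from Chart A.2.

```python
# Illustrative greenium calculation: spread (in basis points) of a green bond
# minus the spread of a matched conventional bond from the same issuer.
# All issuer names and spread levels are invented for this sketch.
matched_pairs = {
    "utility_A": {"green": 82.0, "conventional": 86.5},
    "bank_B":    {"green": 61.0, "conventional": 60.5},
    "energy_C":  {"green": 95.0, "conventional": 101.0},
}

for issuer, spreads in matched_pairs.items():
    greenium = spreads["green"] - spreads["conventional"]
    label = "greenium" if greenium < 0 else "no greenium"
    print(f"{issuer}: {greenium:+.1f} bps ({label})")
```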
As ESG and green funds keep attracting new investors, the demand for green bonds and the greenium has also increased over time (Chart A.2, panel b).[11] New instruments, such as sustainability-linked bonds, which link borrowing costs to specific company-level sustainability targets, partly address investor concerns about greenwashing in the green bond market. Greenwashing also poses a risk to financial stability because it could lead to an undervaluation of transition risk and to potential fire-sales of green bonds. A common regulatory standard that requires regular standardised reporting, impact assessment and review by approved external reviewers, as proposed under the EU Green Bond Standard, would provide assurance that green bonds effectively finance the transition and alleviate risks to financial stability. Implementing this standard and making it mandatory within a reasonable period of time could enhance investor confidence in this asset class, reinforce flows of funding to the transition and reduce risks to financial stability.[12] ESG – and particularly environmental – funds seem to have reduced their carbon footprint over time, but divergent ESG fund classification across data providers points towards greenwashing risks in the sector. In the absence of an ESG label and a common definition of ESG and environmental funds, investors rely on self-disclosure by asset managers and classifications from commercial data providers. The level of disagreement between these classifications is high (Chart A.3, panel a): the three main data providers agree in less than 20% of cases that a fund is ESG (317 funds out of more than 1,800 funds which are defined as ESG by at least one data provider). In this context, well-designed labels could materially reduce the risk of greenwashing. At the same time, environmental and other ESG funds do appear to have reduced the emission intensity[13] of their portfolios by more than non-ESG funds over the last four years (Chart A.3, panel b). But the extent to which this is driven by simply reshuffling portfolios towards already low-carbon sectors or by firms decarbonising – possibly due to supportive financing and activist pressure from impact investors – remains unclear, despite being important for the ultimate goal of transitioning to a net-zero economy. ## 3 Limited change in financial system exposures to transition risk While firms’ emissions have been decreasing, exposures of euro area banks to currently high-emitting firms have remained broadly stable. Around two-thirds of the corporate credit exposures held by euro area banks are still directed towards high-emitting firms, which are mainly concentrated in the manufacturing, real estate and retail sectors (Chart A.4, panel a).[14] Also, around 30% of both bank and non-bank holdings of securities issued by NFCs with known emission levels are currently issued by high-emitting firms, a share which has only decreased slightly over the last five years. At the same time, the recent increases and volatility in energy markets have underlined the urgency of supporting the transition to a net-zero economy. Metrics commonly used to assess corporate sector climate risks point to a small increase in carbon intensity in bank portfolios. Only a few (mainly large and highly exposed) banks have significantly decarbonised their credit portfolios since 2018, as measured by the loan-weighted emissions of the respective borrowers (Chart A.4, panel b). 
By contrast, two-thirds of banks have increased their loan-weighted emissions. The measures may still be missing the interaction between climate risk and financial risk of loans. Information on carbon emissions can be combined with the existing probability of default (PD) of a corporate borrower to provide a credit risk-adjusted metric of transition risk. The resulting score can be computed at bank level by aggregating loan-weighted borrowers' emissions multiplied by their PDs over the bank's entire corporate portfolio.[15] The PDs are included as a measure of credit risk and the GHG emissions are included as a measure of vulnerability to transition risk. Overall, the higher a firm's contribution to the transition risk score, the higher its contribution to the bank's financial risk induced by the combination of credit and transition risk, as long as PDs have not already accounted for the latter.[16] The credit risk-adjusted measure supports the signal obtained from emissions-to-loans ratio measures, indicating that risk has increased over time.[17] Once adjusted for financial risk using borrowers' PDs, estimated transition risk has increased since 2012, with significant increases in sectors that face more underlying transition risk. This has some correlation with the signals from unadjusted measures of transition risk (Chart A.5, panel a). Exposures to the mining, manufacturing and electricity sectors together account for around 70% of the euro area aggregate (Chart A.5, panel b). Some of these sectors make an almost negligible contribution to the emissions-to-loans ratio but they play an important role when the financial risk component is considered.

Since climate-related risks simultaneously affect multiple seemingly unrelated exposures, their concentration in individual institutions plays a significant role. Climate-related concentration risks can arise from exposures that share similar sensitivities to physical risks (e.g. due to their location or activity) or transition risks (e.g. due to their sector allocation or level of emissions). Focusing on transition risk and assuming a disorderly transition scenario,[18] it appears that higher concentrations of exposures to firms with high emission intensity coincide with higher expected losses at bank level over a 30-year period (Chart A.6, panel a). Around 35% of system-wide expected losses are incurred by the 10% of banks with the highest sensitivity to carbon price increases. In addition, carbon price shocks trigger a significant increase in firms' default correlations.[19] For a transition risk intensity of €200/tCO2, capturing the increase in the cost of carbon borne by firms, estimated average (median) correlations double (Chart A.6, panel b). Transition risk not only introduces a novel source of correlation between previously uncorrelated or weakly correlated firms in general, but also increases correlations for high emitters[20] by ten times more than it does for low emitters.

## 4 Systemic amplifications could result from interconnected physical risks arising from climate change

Financial stability risks arising from physical hazards are exacerbated by the fact that some investors hold assets which are vulnerable to multiple hazards. The occurrence of natural hazards is characterised by interactions between hazards in the form of either correlations or causal links (Chart A.7, panel a) which can generate self-reinforcing or feedback mechanisms.
For example, the joint combination of thunderstorms and droughts (both captured by the “Heat stress” category in Chart A.7, panel a) can cause wildfires which, in turn, both increase the likelihood of more wildfires and exacerbate heat stress.[21] Future intensification of climate risk, especially when clustered hazards occur, may create hard-to-price tipping points and impair options for diversification, potentially posing financial stability risks, especially for securities with wider protection gaps. In addition to the direct exposure to physical risk, the impact of physical hazards could be amplified by fire-sale dynamics. In the event of a sudden reassessment of risks affecting portfolios, the liquidation of securities exposed to potential hazards may affect market prices. This could result in contagion losses spreading by way of the common holdings of different market participants and, in worst-case scenarios, spiralling deleveraging pressures.[22] Constructing estimates of the common asset holdings (overlapping portfolios) exposed to the different physical risks[23] of different market participants (Chart A.7, panel b) reveals a range of estimates running from 2% of overlapping portfolios for the hurricanes and typhoons category to an average of 45% for portfolios weighted for wildfires.[24] In addition, the concentration of overlapping portfolios in specific sectors may further exacerbate such risks, as in the case of financial corporates, which are much more exposed to wildfires than other sectors. Climate-related tipping points may translate into a financial tipping point in the form of a sudden risk repricing which would strain investors with overlapping portfolios. In the event of a sudden reassessment of risk following clustered hazard events, common holdings may cause several different investor segments to face large mark-to-market losses at once, which could be amplified by fire-sales and other portfolio rebalancing actions. This system-wide risk highlights the relevance of a macroprudential approach to prudential responses aimed at mitigating the impact of climate change on financial stability. This risk runs in parallel with the insurance protection gap relating to climate-related catastrophes.[25] ## 5 Conclusions and policy implications This special feature contributes to the ECB’s monitoring of climate risks by examining the role of green finance in supporting the transition to a low-carbon economy, the currently limited financial adaptation to transition risk and the financial system amplifiers of physical risk. While further progress on consistent climate data is required, especially for forward-looking metrics, granular physical risk exposures and insurance coverage, there is encouraging evidence of greater disclosure by NFCs and an increasing awareness of climate-related risks in financial markets. Yet the risk of greenwashing remains a concern and may be rising fast – in both the green bond market and the investment fund sector – given the absence of well-designed, consistent standards for sustainable financial instruments. The dynamic exposures of financial institutions to transition and physical risks, together with their risk metrics, show no clear evidence of financial institutions experiencing a significant reduction in risk. In addition, exposure concentration, cross-hazard correlation and institutions’ overlapping portfolios are shown to act as amplifiers of such risks. 
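One way to make the overlapping-portfolio channel concrete is a stylised calculation of pairwise, hazard-weighted portfolio overlap. The sketch below is illustrative only: the holdings matrix, the investor segments, the wildfire scores and the overlap definition (jointly held amounts, weighted by a 0-1 hazard score) are assumptions made for the example, not the data or methodology behind Chart A.7.

```python
import numpy as np

# Toy holdings matrix: rows = investor segments, columns = securities (EUR bn).
# All names and numbers below are illustrative assumptions.
holdings = np.array([
    [10.0, 0.0, 5.0, 2.0],   # banks
    [ 4.0, 6.0, 0.0, 3.0],   # investment funds
    [ 0.0, 8.0, 2.0, 1.0],   # insurers
])
segments = ["banks", "funds", "insurers"]
wildfire = np.array([0.9, 0.1, 0.6, 0.3])   # assumed 0-1 wildfire exposure per security

def overlap_share(h_i, h_j, hazard):
    """Hazard-weighted share of investor i's portfolio that is also held by j."""
    common = np.minimum(h_i, h_j) * hazard   # jointly held, hazard-weighted amounts
    return common.sum() / (h_i * hazard).sum()

for i, si in enumerate(segments):
    for j, sj in enumerate(segments):
        if i != j:
            print(f"{si} vs {sj}: {overlap_share(holdings[i], holdings[j], wildfire):.0%}")
```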
This analysis can contribute to the policy debate around disclosures, standards for sustainable financial instruments and climate-related prudential policies. The development of consistent sustainability disclosures via the Corporate Sustainability Reporting Directive and the IFRS Foundation, as well as the convergence of these requirements in common minimum international standards, are important factors allowing firms, investors and financial institutions to effectively measure and manage transition risk. Regulatory standards on sustainable financial instruments, such as the EU GBS and ESG/environmental fund labels, are key to reducing the risk of greenwashing and thus helping to scale up sustainable financing. Finally, based on the systemic aspect and possible amplification mechanisms originating from climate-related physical and transition risks, there should be further reflection on how to close any material gaps in the prudential framework.[26] Future work will focus on the extent to which existing macroprudential tools, including the systemic risk buffer, could be readily deployed to capture climate risks. New tools, such as concentration risk measures, may also be needed to address climate-related risks from a systemic perspective.[27] 1. This special feature has benefited from input received from Olimpia Carradori, Alberto Grassi, Giulio Mazzolini and Allegra Pietsch. 2. This special feature builds on the analysis presented in previous editions of the Financial Stability Review published since 2019 (see the special feature entitled “Climate change and financial stability”, Financial Stability Review, ECB, May 2019, and the special feature entitled “Climate-related risk to financial stability”, Financial Stability Review, ECB, May 2021). It complements recent ECB initiatives, including the decision to disclose climate-related information relating to Eurosystem central banks’ investments in non-monetary policy portfolios by the first quarter of 2023 (see the press release of 4 February 2021), the consideration of climate-related factors in the monetary policy strategy review (see the press release of 8 July 2021), the need for a macroprudential response (see Macroprudential Bulletin, Issue 15, ECB, October 2021) and the supervisory assessment of the progress made by European banks in considering climate and environmental risks (see “The state of climate and environmental risk management in the banking sector”, ECB, November 2021, and “Supervisory assessment of institutions’ climate-related and environmental risks disclosures”, ECB, March 2022). 3. See the Glasgow Financial Alliance for Net Zero (GFANZ), which encompasses the UN-convened Net-Zero Banking Alliance, Net-Zero Asset Owner Alliance, and Net-Zero Insurance Alliance, and the Net Zero Asset Managers initiative. The GFANZ aims at mobilising the necessary capital to build a global net-zero economy and deliver on the goals of the Paris Agreement. In addition, see the “Supervisory assessment of institutions’ climate-related and environmental risks disclosures”, ECB, March 2022. 4. See “Towards a green capital markets union: developing sustainable, integrated and resilient European capital markets”, Macroprudential Bulletin, Issue 15, ECB, October 2021. 5. See De Haas, R. and Popov, A., “Finance and carbon emissions,” Working Paper Series, No 2318, ECB, September 2019; Fatica, S. 
and Panzica, R., "Green bonds as a tool against climate change?", Business Strategy and the Environment, March 2021; and Flammer, C., "Corporate green bonds", Journal of Financial Economics, Vol. 142, Issue 2, November 2021, pp. 499-516. 6. The ECB analysis covers the sample of 4,000 European carbon-intensive NFCs that are included in the European Union Transaction Log database and are subject to the EU Emissions Trading System. The database includes information on verified GHG emissions. Firms' revenues, profitability, and the age and number of plants with carbon-intensive activities, alongside country-specific factors such as fossil fuel subsidies, are also found to influence their ability to reduce emissions by investing in new green technologies. 7. See the Supervisory assessment of institutions' climate-related and environmental risks disclosures, ECB, March 2022. 8. The climate change-related disclosure standards under the proposed European Union's Corporate Sustainability Reporting Directive are expected to be used by companies for the first time in 2024, for the 2023 financial year. 9. See Carbone, S., Giuzio, M., Kapadia, S., Krämer, J., Nyholm, K. and Vozian, K., "The low-carbon transition, climate commitments and firm credit risk", Working Paper Series, No 2631, ECB, December 2021. 10. See the box entitled "Home bias in green bond markets", Financial Integration and Structure in the Euro Area Report, ECB, April 2022. 11. From Pietsch, A. and Salakhova, D., "Pricing of green bonds – drivers and dynamics of the greenium", Working Paper Series, ECB, forthcoming. 12. The emission intensity of a portfolio is measured as the exposure-weighted emission intensity of the respective firms, with a firm's emission intensity being absolute emissions scaled by revenues. 13. High-emitting firms are defined here as firms with reported emission intensity in the top 33% of the distribution as of end-2020, i.e. firms with 2020 emission intensity in excess of 556 tCO2e/USD million. 14. The credit-risk-weighted metric of transition risk for a bank $j$ is defined as $\mathrm{score}_j = \sum_i w_{i,j}\,\mathrm{GHG}_i\,PD_i$, where $i$ is (one of) the borrower(s), $w_{i,j}$ is the loan weight of borrower $i$ in bank $j$'s corporate portfolio, $\mathrm{GHG}_i$ is the level of (relative or absolute) GHG emissions produced by the borrower and $PD_i$ is the probability of default assigned to the borrower by the bank concerned. An alternative for the credit risk component would be to use loan loss provisions as a proportion of loans instead of PD. In the present case, PDs are used because they capture credit risk from a more forward-looking perspective. An alternative for the climate risk component would be to use emission targets alongside or instead of current emission levels. This choice would also improve the forward-looking power of the metric. 15. Transition risk can materialise in the form of higher operating expenditures and investment requirements for firms, the purpose being to reduce their emissions. These higher monetary costs can manifest themselves in transition risk metrics (e.g. credit risk parameters such as PDs), although it is assumed that banks do not currently explicitly account for the contribution of transition risk to firms' credit risk. 16. The bank-level emissions-to-loans ratio is computed by aggregating borrowers' emissions and dividing this figure by the total value of the bank's corporate loan portfolio. 17.
This exercise measures a bank's sensitivity to carbon price increases under the NGFS Phase I disorderly transition scenario over a 30-year period, leveraging on model parameters developed in the ECB economy-wide climate stress test (see "ECB economy-wide climate stress test", Occasional Paper Series, No 281, ECB, September 2021). The increase in a bank's expected losses stemming from carbon price increases is calculated for each of its credit exposures from the shock-induced change in the borrower's PD, where $\beta_P$ and $\beta_L$ are coefficients determining the extent to which borrower PDs react to changes in profitability and leverage. 18. Firms' default correlations are estimated using a multi-firm Merton model calibrated on historical data for a large sample of euro area firms. Via 500,000 Monte Carlo iterations, the model simulates the default events of thousands of firms for which the asset value process is modelled as correlated geometric Brownian motions. The transition risk intensity $\alpha = (1-\beta)\,T$, capturing the fraction of the transition cost borne by firms for each tonne of CO2 emitted, incorporates both the transition risk shock $T$ (€/tCO2) and a pass-through factor $\beta$ capturing the degree to which firms can pass the cost of a transition risk shock on to consumers, and impacts the value of assets (see Belloni, M., Kuik, F. and Mingarelli, L., "Euro area banks' sensitivity to changes in carbon price", Working Paper Series, No 2654, ECB, March 2022). Under the simplifying assumption that firms would bear the full cost of an increase in carbon prices ($\beta=0$), the transition risk intensity would be equivalent to this increase in the cost of carbon, i.e. $\alpha=T$. 19. Firms with emission intensities above (below) the sample's 75th percentile are referred to as high (low) emitters. 20. Another example is typhoons and rainfall, which can trigger ground subsidence. This has the potential to start landslides which can, in turn, cause flooding. 21. Cont, R. and Schaanning, E., "Monitoring indirect contagion", Journal of Banking and Finance, Vol. 104, Issue C, July 2019, pp. 85-102. 22. Firm-level risk scores for over four million firms worldwide, from Moody's Four Twenty Seven, are used. 23. The degree to which the share of portfolios exposed to natural hazards will concretely be at risk is unclear, as firms can implement physical risk-mitigation measures to reduce impacts. 24. See "Climate change, catastrophes and the macroeconomic benefits of insurance", Financial Stability Report, EIOPA, July 2021, pp. 105-123. 25. See Baranović et al., "The challenge of capturing climate risks in the banking regulatory framework: is there a need for a macroprudential response?", Macroprudential Bulletin, ECB, October 2021.
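As a rough, self-contained illustration of the loan-weighted, PD-adjusted score described in notes 14-16, the snippet below computes it for a made-up loan book. The column names, figures and normalisation by total exposure are assumptions made for the sketch; this is not the ECB's data or implementation.

```python
import pandas as pd

# Hypothetical loan-level data (all values invented for illustration).
loans = pd.DataFrame({
    "bank":     ["A", "A", "A", "B", "B"],
    "exposure": [100.0, 50.0, 25.0, 80.0, 20.0],    # loan amounts
    "ghg":      [550.0, 120.0, 40.0, 300.0, 60.0],  # borrower GHG emissions (e.g. ktCO2e)
    "pd":       [0.020, 0.010, 0.005, 0.030, 0.008] # borrower probability of default
})

def transition_risk_score(book: pd.DataFrame) -> float:
    """Loan-weighted borrower emissions multiplied by borrower PDs (as in note 14)."""
    w = book["exposure"] / book["exposure"].sum()
    return float((w * book["ghg"] * book["pd"]).sum())

def emissions_to_loans(book: pd.DataFrame) -> float:
    """Unadjusted benchmark (as in note 16): total borrower emissions per unit of loans."""
    return float(book["ghg"].sum() / book["exposure"].sum())

for bank, book in loans.groupby("bank"):
    print(bank, round(transition_risk_score(book), 3), round(emissions_to_loans(book), 2))
```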
2023-02-05T13:10:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.331170916557312, "perplexity": 5678.757515093919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500255.78/warc/CC-MAIN-20230205130241-20230205160241-00512.warc.gz"}
https://par.nsf.gov/biblio/10364018-spectroscopic-confirmation-gravitationally-lensed-lyman-break-galaxy-ii-using-noema
skip to main content Spectroscopic confirmation of a gravitationally lensed Lyman-break galaxy at z [C ii ] = 6.827 using NOEMA ABSTRACT We present the spectroscopic confirmation of the brightest known gravitationally lensed Lyman-break galaxy in the Epoch of Reionization (EoR), A1703-zD1, through the detection of [C ii] 158 $\mu$m at a redshift of z = 6.8269 ± 0.0004. This source was selected behind the strong lensing cluster Abell 1703, with an intrinsic luminosity and a very blue Spitzer/Infrared Array Camera (IRAC) [3.6]–[4.5] colour, implying high equivalent width line emission of [O iii] + Hβ. [C ii] is reliably detected at 6.1σ cospatial with the rest-frame ultraviolet (UV) counterpart, showing similar spatial extent. Correcting for the lensing magnification, the [C ii] luminosity in A1703-zD1 is broadly consistent with the local $L_{\rm [C\, {\small II}]}$–star formation rate (SFR) relation. We find a clear velocity gradient of 103 ± 22 km $\rm s^{-1}$ across the source that possibly indicates rotation or an ongoing merger. We furthermore present spectral scans with no detected [C ii] above 4.6σ in two unlensed Lyman-break galaxies in the Extended Groth Strip (EGS)-Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) field at z ∼ 6.6–6.9. This is the first time that the Northern Extended Millimeter Array (NOEMA) has been successfully used to observe [C ii] in a ‘normal’ star-forming galaxy at z > 6, and our results demonstrate its capability to complement the Atacama Large Millimeter/submillimeter Array (ALMA) in more » Authors: ;  ;  ;  ;  ;  ;  ;  ;  ;  ; Publication Date: NSF-PAR ID: 10364018 Journal Name: Monthly Notices of the Royal Astronomical Society Volume: 512 Issue: 1 Page Range or eLocation-ID: p. 535-543 ISSN: 0035-8711 Publisher: Oxford University Press Sponsoring Org: National Science Foundation ##### More Like this 1. ABSTRACT We present new [${\rm O\, {\small III}}$] 88-$\mu \mathrm{{m}}$ observations of five bright z ∼ 7 Lyman-break galaxies spectroscopically confirmed by ALMA through [${\rm C\, {\small II}}$] 158 $\mu \mathrm{{m}}$, unlike recent [${\rm O\, {\small III}}$] detections where Lyman α was used. This nearly doubles the sample of Epoch of Reionization galaxies with robust (5σ) [${\rm C\, {\small II}}$] and [${\rm O\, {\small III}}$] detections. We perform a multiwavelength comparison with new deep HST images of the rest-frame UV, whose compact morphology aligns well with [${\rm O\, {\small III}}$] tracing ionized gas. In contrast, we find more spatially extended [${\rm C\, {\small II}}$] emission likely produced in neutral gas, as indicated by an [${\rm N\, {\small II}}$] 205-$\mu \mathrm{{m}}$ non-detection in one source. We find a correlation between the optical ${[{\rm O\, {\small III}}]}+ {\mathrm{H\,\beta }}$ equivalent width and [${\rm O\, {\small III}}$]/[${\rm C\, {\small II}}$], as seen in local metal-poor dwarf galaxies. cloudy models of a nebula of typical density harbouring a young stellar population with a high-ionization parameter adequately reproduce the observed lines. Surprisingly, however, our models fail to reproduce the strength of [${\rm O\, {\small III}}$] 88-$\mu \mathrm{{m}}$, unless we assume an α/Fe enhancement and near-solar nebular oxygenmore » 2. ABSTRACT We report the serendipitous discovery of a dust-obscured galaxy observed as part of the Atacama Large Millimeter Array (ALMA) Large Program to INvestigate [C ii] at Early times (ALPINE). 
While this galaxy is detected both in line and continuum emissions in ALMA Band 7, it is completely dark in the observed optical/near-infrared bands and only shows a significant detection in the UltraVISTA Ks band. We discuss the nature of the observed ALMA line, that is [C ii] at $z$ ∼ 4.6 or high-J CO transitions at $z$ ∼ 2.2. In the first case, we find a [C ii]/FIR luminosity ratio of $\mathrm{log}{(L_{[\mathrm{ C}\, \rm {\small {II}}]}/L_{\mathrm{ FIR}})} \sim -2.5$, consistent with the average value for local star-forming galaxies (SFGs). In the second case instead, the source would lie at larger CO luminosities than those expected for local SFGs and high-z submillimetre galaxies. At both redshifts, we derive the star formation rate (SFR) from the ALMA continuum and the physical parameters of the galaxy, such as the stellar mass (M*), by fitting its spectral energy distribution. Exploiting the results of this work, we believe that our source is a ‘main-sequence’, dusty SFG at $z$ = 4.6 (i.e. [C ii] emitter) with $\mathrm{log(SFR/M_{\odot }\, yr^{-1})}\sim 1.4$more » 3. Abstract We present new ALMA observations and physical properties of a Lyman break galaxy at z = 7.15. Our target, B14-65666, has a bright ultra-violet (UV) absolute magnitude, MUV ≈ −22.4, and has been spectroscopically identified in Lyα with a small rest-frame equivalent width of ≈4 Å. A previous Hubble Space TElescope (HST) image has shown that the target is composed of two spatially separated clumps in the rest-frame UV. With ALMA, we have newly detected spatially resolved [O iii] 88 μm, [C ii] 158 μm, and their underlying dust continuum emission. In the whole system of B14-65666, the [O iii] and [C ii] lines have consistent redshifts of 7.1520 ± 0.0003, and the [O iii] luminosity, (34.4 ± 4.1) × 108 L⊙, is about three times higher than the [C ii] luminosity, (11.0 ± 1.4) × 108 L⊙. With our two continuum flux densities, the dust temperature is constrained to be Td ≈ 50–60 K under the assumption of a dust emissivity index of βd = 2.0–1.5, leading to a large total infrared luminosity of LTIR ≈ 1 × 1012 L⊙. Owing to our high spatial resolution data, we show that the [O iii] and [C ii] emission can be spatially decomposed into two clumps associated with the two rest-frame UV clumps whose spectra aremore » 4. ABSTRACT We present 10 main-sequence ALPINE galaxies (log (M/M⊙) = 9.2−11.1 and ${\rm SFR}=23-190\, {\rm M_{\odot }\, yr^{-1}}$) at z ∼ 4.5 with optical [O ii] measurements from Keck/MOSFIRE spectroscopy and Subaru/MOIRCS narrow-band imaging. This is the largest such multiwavelength sample at these redshifts, combining various measurements in the ultraviolet, optical, and far-infrared including [C ii]158 $\mu$m line emission and dust continuum from ALMA and H α emission from Spitzer photometry. For the first time, this unique sample allows us to analyse the relation between [O ii] and total star-formation rate (SFR) and the interstellar medium (ISM) properties via [O ii]/[C ii] and [O ii]/H α luminosity ratios at z ∼ 4.5. The [O ii]−SFR relation at z ∼ 4.5 cannot be described using standard local descriptions, but is consistent with a metal-dependent relation assuming metallicities around $50{{\ \rm per\ cent}}$ solar. 
To explain the measured dust-corrected luminosity ratios of $\log (L_{\rm [OII]}/L_{\rm [CII]}) \sim 0.98^{+0.21}_{-0.22}$ and $\log (L_{\rm [OII]}/L_{\rm H\alpha }) \sim -0.22^{+0.13}_{-0.15}$ for our sample, ionization parameters log (U) < −2 and electron densities $\log (\rm n_e / {\rm [cm^{-3}]}) \sim 2.5-3$ are required. The former is consistent with galaxies at z ∼ 2−3, however lower than at z > 6. The latter may be slightly higher than expected given the galaxies’ specific SFR. Themore » 5. Exploiting the sensitivity of the IRAM NOrthern Extended Millimeter Array (NOEMA) and its ability to process large instantaneous bandwidths, we have studied the morphology and other properties of the molecular gas and dust in the star forming galaxy, H-ATLAS J131611.5+281219 (HerBS-89a), at z = 2.95. High angular resolution (0 . ″3) images reveal a partial 1 . ″0 diameter Einstein ring in the dust continuum emission and the molecular emission lines of 12 CO(9−8) and H 2 O(2 02  − 1 11 ). Together with lower angular resolution (0 . ″6) images, we report the detection of a series of molecular lines including the three fundamental transitions of the molecular ion OH + , namely (1 1  − 0 1 ), (1 2  − 0 1 ), and (1 0  − 0 1 ), seen in absorption; the molecular ion CH + (1 − 0) seen in absorption, and tentatively in emission; two transitions of amidogen (NH 2 ), namely (2 02  − 1 11 ) and (2 20  − 2 11 ) seen in emission; and HCN(11 − 10) and/or NH(1 2  − 0 1 ) seen in absorption. The NOEMA data are complemented with Very Large Array data tracing the 12 CO(1 − 0) emission line, which provides a measurement ofmore »
2023-02-03T20:33:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.730889618396759, "perplexity": 4016.2401312708043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00589.warc.gz"}
https://math.wikia.org/wiki/Related_rates
In differential calculus, Related Rates problems are an application of derivatives, where one uses given lengths and rates to find missing ones. The rate of change is usually taken with respect to time.

## Steps

Generally, the steps are as follows.

1. Find given and missing values
2. Relate them in an equation
3. Implicitly differentiate both sides with respect to time
4. Substitute known quantities and solve

### Common equations used

• Spheres: $V=\frac{4}{3}\pi r^3, SA=4\pi r^2$
• Cones: $V= \frac{1}{3}\pi r^2h$

Note: With cones, usually one must do some reasoning with $r$ and $h$ by setting them in a proportion; i.e. since $r=x$ when $h=y$, $r/h = x/y$, etc. Solve for whichever variable is not given in terms of the other one.

## Example

Oil is spilling from a ruptured tanker. The area of the spill is changing at a rate of 6 mi²/h. To find the rate at which the radius is changing when the area is 9 mi², begin by listing which values are known and which are not.

$A=9$
$\frac{dA}{dt}=6$
$R=\frac{3}{\sqrt{\pi}}$ (obtained from $A=\pi R^2$)
$\frac{dR}{dt}=?$

Now take the derivative of the formula for the area of a circle.

$A=\pi r^2$
$\frac{dA}{dt}=2\pi r \frac{dR}{dt}$

From here, it is only a simple matter of solving for $\frac{dR}{dt}$ and substituting the known values.

$\frac{\frac{dA}{dt}}{2\pi r}=\frac{dR}{dt}$
$\frac{6}{6\sqrt{\pi}}=\frac{dR}{dt}$
$\frac{dR}{dt}=\frac{1}{\sqrt\pi}\approx0.5642 \text{ mi/h}$
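The arithmetic can also be checked symbolically; the short SymPy sketch below reproduces the worked example (the variable names are chosen only for this check).

```python
import sympy as sp

r, dA_dt, dR_dt = sp.symbols('r dA_dt dR_dt', positive=True)

# A = pi*r**2  =>  dA/dt = 2*pi*r*dR/dt after implicit differentiation in time
relation = sp.Eq(dA_dt, 2 * sp.pi * r * dR_dt)

# Given dA/dt = 6 mi^2/h and A = 9 mi^2, the radius is r = 3/sqrt(pi)
known = {dA_dt: 6, r: 3 / sp.sqrt(sp.pi)}
rate = sp.solve(relation.subs(known), dR_dt)[0]

print(rate, float(rate))   # 1/sqrt(pi) ≈ 0.5642 mi/h
```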
2019-12-10T08:26:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9033724069595337, "perplexity": 1057.9796995103663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527010.70/warc/CC-MAIN-20191210070602-20191210094602-00030.warc.gz"}
https://www.usgs.gov/center-news/volcano-watch-youngest-flows-haleakal-crater-about-800-1000-years-old
# Volcano Watch — Youngest flows in Haleakalā Crater about 800-1,000 years old

The story is told of how Maui snared the sun, holding it hostage atop Haleakalā until he slowed its passage across the sky. One result of this slow burn is a barren, rocky landscape devoid of soil or vegetation. Geologically speaking, the devastation resulted as numerous cinder cones and fissures erupted lava that flowed across the crater floor. How young are these flows?

This past year, scientists from the U.S. Geological Survey's Hawaiian Volcano Observatory collected charcoal from beneath several of the lava flows. The charcoal was created when lava ignited and buried vegetation in its path. The resulting ages, determined by the carbon-14 method of isotopic dating, provide this answer: the crater's floor is mantled by flows chiefly younger than 4,070 years. The 4,070-year age is from the lava erupted at Puu Maile, a cinder cone on the central crater floor east of Kapalaoa Cabin. Only about a dozen of the crater-mantling cones and flows are thought to be older, on the basis of their more highly vegetated surfaces compared to the Puu Maile lava. Some of the older features include Puu Mamane, Mauna Hina, Namana o ke Akua, and Honokahua. Finding charcoal beneath the older cones and flows is almost impossible because their margins have been buried by younger lava, thereby hiding the tell-tale charcoal beneath tons of rock.

The youngest age so far is about 870 years, from a fissure system that traverses the central crater floor and north crater wall near the peak known as Hanakauhi. This fissure is probably the youngest eruptive feature in the eastern part of the crater. In contrast to the eastern crater, the youngest lava in the western crater remains undated. This lava issued from a vent on the north side of Ka Luu o ka Oo, a prominent cinder cone accessible by a two-mile hike from the summit visitor center. The Ka Luu lava oozed downslope toward Holua Cabin, where it buried a slightly older lava emplaced about 970 years ago. Thus the Ka Luu lava is only known to be younger than about 970 years in age. Could the Ka Luu flow be younger than the 870-year-old Hanakauhi fissure? The two flows are nowhere in contact with each other, so their ages relative to each other are unknown. The Ka Luu flow lies in a high, dry part of the crater, an area with little vegetation now and probably comparably sparse vegetation at the time the flow was active. It's unlikely that charcoal for dating will be found there, so additional techniques will be required to gain an answer.

The other ages obtained from the crater this past year are mostly in the range of 1,000-3,000 years. For example, a lava from Puu Nole poured eastward into Kaupo Gap about 1,160 years ago. It banks against the 4,070-year-old Puu Maile flow and, in turn, is overlain by the 870-year-old Hanakauhi lava. Younger eruptions have occurred on East Maui, with the most recent about 200 years ago near La Perouse Bay. Several other flows higher on the southwest rift zone were active about 500 years ago, as were flows near Hana on the east side of the island.
But in Haleakalā Crater, our best information so far indicates activity only as recently as 800-1,000 years ago.

### Volcano Activity Update

Eruptive activity was visible within Puu Oo during the past week with lava rising and falling in three separate vents on the crater floor. Through a network of tubes, the lava flows from the vents to the seacoast and enters the ocean at two locations: Wahaula and Kamokuna. The public is reminded that these two ocean-entry areas are extremely dangerous, and the National Park Service has restricted access to them because of frequent explosions accompanying collapses of the growing lava delta. The highly acidic steam plume is laced with glass particles. An earthquake was felt by a resident of Keauhou, Kona at noon on Monday, May 11. The epicenter of the magnitude 4.0 earthquake was 49 km (29.4 mi) west of Keauhou at a depth of 38 km (22.8 mi).
2019-11-17T02:58:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3373715579509735, "perplexity": 7108.608970258819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00139.warc.gz"}
https://pos.sissa.it/247/131/
Volume 247 - XXIII International Workshop on Deep-Inelastic Scattering (DIS2015) - WG4: QCD and Hadronic Final States

Forward-backward asymmetries of $(B^-, B^+)$, $(\Lambda_b, \bar\Lambda_b)$ and $(\Lambda,\bar\Lambda)$ in $p\bar p$ collisions at D0

B. Abbott

Full text: pdf
Published on: January 29, 2016
DOI: https://doi.org/10.22323/1.247.0131
2022-05-27T04:03:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34135857224464417, "perplexity": 4471.610881742762}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00625.warc.gz"}
https://pdglive.lbl.gov/DataBlock.action?node=S015D
# Heavy Particle Production Differential Cross Section

• • • We do not use the following data for averages, fits, limits, etc. • • •

| VALUE (cm${}^{2}$sr${}^{-1}$GeV${}^{-1}$) | CL% | DOCUMENT ID | TECN | CHG | COMMENT |
|---|---|---|---|---|---|
| $<2.6 \times 10^{-36}$ | 90 | 1 BALDIN 1976 | CNTR | - | $\mathit Q$= 1, $\mathit m=2.1-9.4$ GeV |
| $<2.2 \times 10^{-33}$ | 90 | 2 ALBROW 1975 | SPEC | $\pm{}$ | $\mathit Q$= $\pm1$, $\mathit m=4-$15 GeV |
| $<1.1 \times 10^{-33}$ | 90 | 2 ALBROW 1975 | SPEC | $\pm{}$ | $\mathit Q$= $\pm2$, $\mathit m=6-$27 GeV |
| $<8. \times 10^{-35}$ | 90 | 3 JOVANOVICH 1975 | CNTR | $\pm{}$ | $\mathit m=15-$26 GeV |
| $<1.5 \times 10^{-34}$ | 90 | 3 JOVANOVICH 1975 | CNTR | $\pm{}$ | $\mathit Q$= $\pm2$, $\mathit m=3-$10 GeV |
| $<6. \times 10^{-35}$ | 90 | 3 JOVANOVICH 1975 | CNTR | $\pm{}$ | $\mathit Q$= $\pm2$, $\mathit m=10-$26 GeV |
| $<1. \times 10^{-31}$ | 90 | 4 APPEL 1974 | CNTR | $\pm{}$ | $\mathit m=3.2-7.2$ GeV |
| $<5.8 \times 10^{-34}$ | 90 | 5 ALPER 1973 | SPEC | $\pm{}$ | $\mathit m=1.5-$24 GeV |
| $<1.2 \times 10^{-35}$ | 90 | 6 ANTIPOV 1971B | CNTR | - | $\mathit Q=–$, $\mathit m=2.2-2.8$ |
| $<2.4 \times 10^{-35}$ | 90 | 7 ANTIPOV 1971C | CNTR | - | $\mathit Q=–$, $\mathit m=1.2-1.7$, $2.1-$4 |
| $<2.4 \times 10^{-35}$ | 90 | BINON 1969 | CNTR | - | $\mathit Q=–$, $\mathit m=1-$1.8 GeV |
| $<1.5 \times 10^{-36}$ |  | 8 DORFAN 1965 | CNTR |  | ${}^{}\mathrm {Be}$ target, $\mathit m=3-$7 GeV |
| $<3.0 \times 10^{-36}$ |  | 8 DORFAN 1965 | CNTR |  | ${}^{}\mathrm {Fe}$ target, $\mathit m=3-$7 GeV |

1  BALDIN 1976 is a 70 GeV Serpukhov experiment. Value is per ${}^{}\mathrm {Al}$ nucleus at $\theta$ = 0. For other charges in range $-0.5$ to $-3.0$, CL = 90$\%$ limit is ($2.6 \times 10^{-36})/\vert$(charge)$\vert$ for mass range (2.1$-$9.4 GeV)${\times }\vert$(charge)$\vert$. Assumes stable particle interacting with matter as do antiprotons.
2  ALBROW 1975 is a CERN ISR experiment with $\mathit E_{{\mathrm {cm}}}$ = 53 GeV. $\theta$ = 40 mr. See figure 5 for mass ranges up to 35 GeV.
3  JOVANOVICH 1975 is a CERN ISR 26$+26$ and 15$+15$ GeV ${{\mathit p}}{{\mathit p}}$ experiment. Figure 4 covers ranges $\mathit Q$ = 1/3 to 2 and $\mathit m$ = 3 to 26 GeV. Value is per GeV momentum.
4  APPEL 1974 is NAL 300 GeV ${{\mathit p}}{}^{}\mathrm {W}$ experiment. Studies forward production of heavy (up to 24 GeV) charged particles with momenta 24$-$200 GeV ($–$charge) and 40$-$150 GeV ($+$charge). Above typical value is for 75 GeV and is per GeV momentum per nucleon.
5  ALPER 1973 is CERN ISR 26$+26$ GeV ${{\mathit p}}{{\mathit p}}$ experiment. $\mathit p$ $>$0.9 GeV, 0.2 $<$ $\beta$ $<$0.65.
6  ANTIPOV 1971B is from same 70 GeV ${{\mathit p}}$ experiment as ANTIPOV 1971C and BINON 1969.
7  ANTIPOV 1971C limit inferred from flux ratio. 70 GeV ${{\mathit p}}$ experiment.
8  DORFAN 1965 is a 30 ${\mathrm {GeV/}}\mathit c$ ${{\mathit p}}$ experiment at BNL. Units are per GeV momentum per nucleus.
References:
- BALDIN 1976, SJNP 22 264, "Search for New Heavy Particles in Proton Collisions with Nuclei at 70 GeV"
- ALBROW 1975, NP B97 189, "Search for Stable Particles of Charge ${}\geq{}$ 1 and Mass ${}\geq{}$ Deuteron Mass"
- JOVANOVICH 1975, PL 56B 105, "A Search for Slow Massive Particles with ${{\mathit Z}}{}\leq{}$ 2 at the CERN Intersecting Storage Rings"
- APPEL 1974, PRL 32 428, "Heavy Particle Production in 300 ${\mathrm {GeV/}}\mathit c$ Proton Tungsten Collisions"
- ALPER 1973, PL 46B 265, "Large Angle Production of Stable Particles Heavier than the Proton and a Search for Quarks at the CERN Intersecting Storage Rings"
- ANTIPOV 1971B, NP B31 235, "Observation of Antihelium-3"
- ANTIPOV 1971C, PL 34B 164, "Production of Low Momentum Negative Particles by 70 GeV Protons"
- BINON 1969, PL 30B 510, "Production of Antideuterons by 43, 52, and 70 GeV Protons"
- DORFAN 1965, PRL 14 999, "Search for Massive Particle"
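Footnote 1 above spells out a simple scaling rule for other charges; the short sketch below just works through that arithmetic (the sample charges are arbitrary illustrations within the quoted range, not additional PDG entries).

```python
# Charge scaling from footnote 1 (BALDIN 1976): the 90% CL limit scales as
# (2.6e-36 cm^2 sr^-1 GeV^-1) / |Q| and the mass range as (2.1-9.4 GeV) * |Q|.
# The charges looped over below are illustrative examples only.

BASE_LIMIT = 2.6e-36           # cm^2 sr^-1 GeV^-1 at |Q| = 1
BASE_MASS_RANGE = (2.1, 9.4)   # GeV at |Q| = 1

def scaled_limit(charge: float):
    """Return (limit, (m_lo, m_hi)) for a given charge magnitude."""
    q = abs(charge)
    return BASE_LIMIT / q, (BASE_MASS_RANGE[0] * q, BASE_MASS_RANGE[1] * q)

for q in (-0.5, -1.0, -2.0, -3.0):
    limit, (m_lo, m_hi) = scaled_limit(q)
    print(f"Q = {q}: limit < {limit:.1e} cm^2 sr^-1 GeV^-1 for m = {m_lo:.1f}-{m_hi:.1f} GeV")
```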
2021-03-02T21:21:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8420922160148621, "perplexity": 11999.106758668306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00206.warc.gz"}
https://par.nsf.gov/biblio/10350419-physical-model-delayed-rebrightenings-shock-interacting-supernovae-without-narrow-line-emission
This content will become publicly available on March 1, 2023 A Physical Model of Delayed Rebrightenings in Shock-interacting Supernovae without Narrow-line Emission Abstract Core-collapse supernovae can display evidence of interaction with preexisting, circumstellar shells of material by rebrightening and forming spectral lines, and can even change types as hydrogen appears in previously hydrogen-poor spectra. However, a recently observed core-collapse supernova—SN 2019tsf—was found to brighten roughly 100 days after it was first observed, suggesting that the supernova ejecta was interacting with surrounding material, but it lacked any observable emission lines and thereby challenged the standard supernova-interaction picture. We show through linear perturbation theory that delayed rebrightenings without the formation of spectral lines are generated as a consequence of the finite sound-crossing time of the postshock gas left in the wake of a supernova explosion. In particular, we demonstrate that sound waves—generated in the postshock flow as a consequence of the interaction between a shock and a density enhancement—traverse the shocked ejecta and impinge upon the shock from behind in a finite time, generating sudden changes in the shock properties in the absence of ambient density enhancements. We also show that a blast wave dominated by gas pressure and propagating in a wind-fed medium is unstable from the standpoint that small perturbations lead to the formation of reverse shocks within the postshock flow, more » Authors: ; Award ID(s): Publication Date: NSF-PAR ID: 10350419 Journal Name: The Astrophysical Journal Volume: 927 Issue: 2 Page Range or eLocation-ID: 148 ISSN: 0004-637X National Science Foundation ##### More Like this 1. Abstract We present photometric and spectroscopic observations of the nearby (D ≈ 28 Mpc) interacting supernova (SN) 2019esa, discovered within hours of explosion and serendipitously observed by the Transiting Exoplanet Survey Satellite (TESS). Early, high-cadence light curves from both TESS and the DLT40 survey tightly constrain the time of explosion, and show a 30 day rise to maximum light followed by a near-constant linear decline in luminosity. Optical spectroscopy over the first 40 days revealed a reddened object with narrow Balmer emission lines seen in Type IIn SNe. The slow rise to maximum in the optical light curve combined with the lack of broad Hα emission suggests the presence of very optically thick and close circumstellar material (CSM) that quickly decelerated the SN ejecta. This CSM was likely created from a massive star progenitor with an $Ṁ$ ∼ 0.2 M⊙ yr−1 lost in a previous eruptive episode 3–4 yr before eruption, similar to giant eruptions of luminous blue variable stars. At late times, strong intermediate-width Ca ii, Fe i, and Fe ii lines are seen in the optical spectra, identical to those seen in the superluminous interacting SN 2006gy. The strong CSM interaction masks the underlying explosion mechanism in SN 2019esa, but the combination of the luminosity, more » 2. ABSTRACT Recent studies have shown that live (not decayed) radioactive 60Fe is present in deep-ocean samples, Antarctic snow, lunar regolith, and cosmic rays. 60Fe represents supernova (SN) ejecta deposited in the Solar system around $3 \, \rm Myr$ ago, and recently an earlier pulse ${\approx}7 \ \rm Myr$ ago has been found. These data point to one or multiple near-Earth SN explosions that presumably participated in the formation of the Local Bubble.
We explore this theory using 3D high-resolution smooth-particle hydrodynamical simulations of isolated SNe with ejecta tracers in a uniform interstellar medium (ISM). The simulation allows us to trace the SN ejecta in gas form and those ejecta in dust grains that are entrained with the gas. We consider two cases of diffused ejecta: when the ejecta are well-mixed in the shock and when they are not. In the latter case, we find that these ejecta remain far behind the forward shock, limiting the distance to which entrained ejecta can be delivered to ≈100 pc in an ISM with $n_\mathrm{H}=0.1\,\, \rm cm^{-3}$ mean hydrogen density. We show that the intensity and the duration of 60Fe accretion depend on the ISM density and the trajectory of the Solar system. Furthermore, we more » 3. ABSTRACT A core-collapse supernova is generated by the passage of a shock wave through the envelope of a massive star, where the shock wave is initially launched from the ‘bounce’ of the neutron star formed during the collapse of the stellar core. Instead of successfully exploding the star, however, numerical investigations of core-collapse supernovae find that this shock tends to ‘stall’ at small radii (≲10 neutron star radii), with stellar material accreting on to the central object through the standing shock. Here, we present time-steady, adiabatic solutions for the density, pressure, and velocity of the shocked fluid that accretes on to the compact object through the stalled shock, and we include the effects of general relativity in the Schwarzschild metric. Similar to previous works that were carried out in the Newtonian limit, we find that the gas ‘settles’ interior to the stalled shock; in the relativistic regime analysed here, the velocity asymptotically approaches zero near the Schwarzschild radius. These solutions can represent accretion on to a material surface if the radius of the compact object is outside of its event horizon, such as a neutron star; we also discuss the possibility that these solutions can approximately represent the accretion of more » 4. Abstract We present extensive multifrequency Karl G. Jansky Very Large Array (VLA) and Very Long Baseline Array (VLBA) observations of the radio-bright supernova (SN) IIb SN 2004C that span ∼40–2793 days post-explosion. We interpret the temporal evolution of the radio spectral energy distribution in the context of synchrotron self-absorbed emission from the explosion’s forward shock as it expands in the circumstellar medium (CSM) previously sculpted by the mass-loss history of the stellar progenitor. VLBA observations and modeling of the VLA data point to a blastwave with average velocity ∼0.06c that carries an energy of ≈10^49 erg. Our modeling further reveals a flat CSM density profile ρ_CSM ∝ R^(−0.03±0.22) up to a break radius R_br ≈ (1.96 ± 0.10) × 10^16 cm, with a steep density gradient following ρ_CSM ∝ R^(−2.3±0.5) at larger radii. We infer that the flat part of the density profile corresponds to a CSM shell with mass ∼0.021 M⊙, and that the progenitor’s effective mass-loss rate varied with time over the range (50–500) × 10^−5 M⊙ yr^−1 for an adopted wind velocity v_w = 1000 km s^−1 and shock microphysical parameters ϵ_e = 0.1, ϵ_B = 0.01. These results add to the mounting observational evidence for departures from the traditional single-wind mass-loss scenarios in evolved, massive stars in the centuries leading up to core collapse. Potentially viable scenarios include mass loss more » (the steady-wind relation behind such mass-loss estimates is sketched after this list) 5.
Abstract Despite recent progress, the astrophysical channels responsible for rapid neutron capture (r-process) nucleosynthesis remain an unsettled question. Observations of the kilonova following the gravitational-wave-detected neutron star merger GW170817 established mergers as one site of the r-process, but additional sources may be needed to fully explain r-process enrichment in the universe. One intriguing possibility is that rapidly rotating massive stars undergoing core collapse launch r-process-rich outflows off the accretion disks formed from their infalling matter. In this scenario, r-process winds are one component of the supernova (SN) ejecta produced by “collapsar” explosions. We present the first systematic study of the effects of r-process enrichment on the emission from collapsar-generated SNe. We semianalytically model r-process SN emission from explosion out to late times and determine its distinguishing features. The ease with which r-process SNe can be identified depends on how effectively wind material mixes into the initially r-process-free outer layers of the ejecta. In many cases, enrichment produces a near-infrared (NIR) excess that can be detected within ∼75 days of explosion. We also discuss optimal targets and observing strategies for testing the r-process collapsar theory, and find that frequent monitoring of optical and NIR emission from high-velocity SNe in the first few months after explosion offers a reasonable chance of more »
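As flagged in item 4 above, the quoted mass-loss rates follow from the standard steady-wind relation Ṁ = 4πr²ρ(r)v_w, which ties a ρ ∝ r⁻² circumstellar density profile to an effective mass-loss rate. The sketch below only illustrates that relation: the break radius and wind speed reuse numbers quoted in the SN 2004C abstract, the density value is a placeholder, and none of this is the paper's actual modeling code.

```python
import math

M_SUN_G = 1.989e33        # g
YEAR_S = 3.156e7          # s
R_BR_CM = 1.96e16         # break radius quoted in the SN 2004C abstract, cm
V_WIND_CM_S = 1000e5      # 1000 km/s, the wind speed adopted in the abstract

def mass_loss_rate(rho_g_cm3: float, r_cm: float, v_wind_cm_s: float) -> float:
    """Mdot (Msun/yr) for a steady wind of density rho at radius r."""
    mdot_g_s = 4.0 * math.pi * r_cm**2 * rho_g_cm3 * v_wind_cm_s
    return mdot_g_s * YEAR_S / M_SUN_G

# Placeholder CSM density at the break radius (an assumption for illustration).
rho_example = 1.0e-19  # g/cm^3
print(f"Mdot ~ {mass_loss_rate(rho_example, R_BR_CM, V_WIND_CM_S):.1e} Msun/yr")

# Wind travel time to the break radius: roughly how long before collapse the
# material now at R_br left the star, for this wind speed.
print(f"R_br / v_wind ~ {R_BR_CM / V_WIND_CM_S / YEAR_S:.1f} yr")
```

With this placeholder density the rate lands near the lower end of the (50–500) × 10^−5 M⊙ yr^−1 range quoted above, which is the kind of consistency check the relation is useful for.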
2022-11-27T19:31:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5923705101013184, "perplexity": 3185.0758022747577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00306.warc.gz"}
https://zbmath.org/authors/choi.yemon
## Choi, Yemon Compute Distance To: Author ID: choi.yemon Published as: Choi, Yemon; Choi, Y. Homepage: https://www.maths.lancs.ac.uk/~choiy1/pubmath/papers.html External Links: MGP · Wikidata · arXiv · Google Scholar · MathOverflow Documents Indexed: 35 Publications since 2006 1 Contribution as Editor Co-Authors: 15 Co-Authors with 17 Joint Publications 238 Co-Co-Authors all top 5 ### Co-Authors 19 single-authored 5 Ghandehari, Mahya 4 Samei, Ebrahim 2 Alaghmandan, Mahmood 2 Ghahramani, Fereidoun 2 Heath, Matthew J. 2 Pham, Hung Le 1 Farah, Ilijas 1 Gourdeau, Frédéric 1 Horváth, Bence 1 Laustsen, Niels Jakob 1 Ozawa, Narutaka 1 Stokke, Ross 1 White, Michael Christopher 1 Young, Nicholas John 1 Zhang, Yong all top 5 ### Serials 3 Journal of Functional Analysis 2 Journal of Mathematical Analysis and Applications 2 Canadian Mathematical Bulletin 2 Glasgow Mathematical Journal 2 Integral Equations and Operator Theory 2 Proceedings of the American Mathematical Society 2 Semigroup Forum 2 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 2 The Quarterly Journal of Mathematics 1 Bulletin of the Australian Mathematical Society 1 Houston Journal of Mathematics 1 Advances in Mathematics 1 Bulletin of the London Mathematical Society 1 Journal für die Reine und Angewandte Mathematik 1 Mathematica Scandinavica 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 Transactions of the American Mathematical Society 1 European Journal of Combinatorics 1 Journal of the Australian Mathematical Society 1 London Mathematical Society Lecture Note Series 1 Complex Analysis and Operator Theory 1 Annals of Functional Analysis 1 International Journal of Group Theory 1 Forum of Mathematics, Sigma all top 5 ### Fields 27 Functional analysis (46-XX) 21 Abstract harmonic analysis (43-XX) 11 Operator theory (47-XX) 5 Group theory and generalizations (20-XX) 3 Associative rings and algebras (16-XX) 3 Topological groups, Lie groups (22-XX) 2 Mathematical logic and foundations (03-XX) 2 Order, lattices, ordered algebraic structures (06-XX) 1 General and overarching topics; collections (00-XX) 1 Commutative algebra (13-XX) 1 Category theory; homological algebra (18-XX) 1 Difference and functional equations (39-XX) ### Citations contained in zbMATH Open 24 Publications have been cited 109 times in 92 Documents Cited by Year Approximate and pseudo-amenability of various classes of Banach algebras. Zbl 1179.46040 Choi, Y.; Ghahramani, F.; Zhang, Y. 2009 A nonseparable amenable operator algebra which is not isomorphic to a $$C^{\ast}$$-algebra. Zbl 1287.47057 Choi, Yemon; Farah, Ilijas; Ozawa, Narutaka 2014 Biflatness of $$\ell^1$$-semilattice algebras. Zbl 1132.46033 Choi, Yemon 2007 Approximate amenability of Schatten classes, Lipschitz algebras and second duals of Fourier algebras. Zbl 1228.46043 Choi, Y.; Ghahramani, F. 2011 Weak and cyclic amenability for Fourier algebras of connected Lie groups. Zbl 1298.43004 Choi, Yemon; Ghandehari, Mahya 2014 Extension of derivations, and Connes-amenability of the enveloping dual Banach algebra. Zbl 1328.47041 Choi, Yemon; Samei, Ebrahim; Stokke, Ross 2015 Surveys in contemporary mathematics. Zbl 1128.00009 2008 On commutative, operator amenable subalgebras of finite von Neumann algebras. Zbl 1282.46054 Choi, Yemon 2013 Weak amenability for Fourier algebras of 1-connected nilpotent Lie groups. Zbl 1328.43005 Choi, Yemon; Ghandehari, Mahya 2015 Triviality of the generalised Lau product associated to a Banach algebra homomorphism. 
Zbl 1377.46029 Choi, Yemon 2016 Simplicial cohomology of band semigroup algebras. Zbl 1263.46060 Choi, Yemon; Gourdeau, Frédéric; White, Michael C. 2012 Translation-finite sets and weakly compact derivations from $$\ell^{1}(\mathbb Z_{+})$$ to its dual. Zbl 1204.43002 Choi, Y.; Heath, M. J. 2010 Group representations with empty residual spectrum. Zbl 1220.47005 Choi, Y. 2010 Characterizing derivations from the disk algebra to its dual. Zbl 1251.46026 Choi, Y.; Heath, M. J. 2011 Approximately multiplicative maps from weighted semilattice algebras. Zbl 1317.46033 Choi, Yemon 2013 Directly finite algebras of pseudofunctions on locally compact groups. Zbl 1405.22005 Choi, Yemon 2015 ZL-amenability constants of finite groups with two character degrees. Zbl 1300.43004 Alaghmandan, Mahmood; Choi, Yemon; Samei, Ebrahim 2014 Quotients of Fourier algebras, and representations which are not completely bounded. Zbl 1275.43005 Choi, Yemon; Samei, Ebrahim 2013 Injective convolution operators on $$\ell^\infty(\Gamma)$$ are surjective. Zbl 1211.43001 Choi, Yemon 2010 ZL-amenability and characters for the restricted direct products of finite groups. Zbl 1308.43003 Alaghmandan, Mahmood; Choi, Yemon; Samei, Ebrahim 2014 Simplicial homology and Hochschild cohomology of Banach semilattice algebras. Zbl 1112.46056 Choi, Yemon 2006 Hochschild homology and cohomology of $$\ell ^1 (\mathbb Z_+^k)$$. Zbl 1194.46110 Choi, Yemon 2010 Splitting maps and norm bounds for the cyclic cohomology of biflat Banach algebras. Zbl 1208.46070 Choi, Yemon 2010 Stability of characters and filters for weighted semilattices. Zbl 1471.46050 Choi, Yemon; Ghandehari, Mahya; Pham, Hung Le 2021 Stability of characters and filters for weighted semilattices. Zbl 1471.46050 Choi, Yemon; Ghandehari, Mahya; Pham, Hung Le 2021 Triviality of the generalised Lau product associated to a Banach algebra homomorphism. Zbl 1377.46029 Choi, Yemon 2016 Extension of derivations, and Connes-amenability of the enveloping dual Banach algebra. Zbl 1328.47041 Choi, Yemon; Samei, Ebrahim; Stokke, Ross 2015 Weak amenability for Fourier algebras of 1-connected nilpotent Lie groups. Zbl 1328.43005 Choi, Yemon; Ghandehari, Mahya 2015 Directly finite algebras of pseudofunctions on locally compact groups. Zbl 1405.22005 Choi, Yemon 2015 A nonseparable amenable operator algebra which is not isomorphic to a $$C^{\ast}$$-algebra. Zbl 1287.47057 Choi, Yemon; Farah, Ilijas; Ozawa, Narutaka 2014 Weak and cyclic amenability for Fourier algebras of connected Lie groups. Zbl 1298.43004 Choi, Yemon; Ghandehari, Mahya 2014 ZL-amenability constants of finite groups with two character degrees. Zbl 1300.43004 Alaghmandan, Mahmood; Choi, Yemon; Samei, Ebrahim 2014 ZL-amenability and characters for the restricted direct products of finite groups. Zbl 1308.43003 Alaghmandan, Mahmood; Choi, Yemon; Samei, Ebrahim 2014 On commutative, operator amenable subalgebras of finite von Neumann algebras. Zbl 1282.46054 Choi, Yemon 2013 Approximately multiplicative maps from weighted semilattice algebras. Zbl 1317.46033 Choi, Yemon 2013 Quotients of Fourier algebras, and representations which are not completely bounded. Zbl 1275.43005 Choi, Yemon; Samei, Ebrahim 2013 Simplicial cohomology of band semigroup algebras. Zbl 1263.46060 Choi, Yemon; Gourdeau, Frédéric; White, Michael C. 2012 Approximate amenability of Schatten classes, Lipschitz algebras and second duals of Fourier algebras. Zbl 1228.46043 Choi, Y.; Ghahramani, F. 2011 Characterizing derivations from the disk algebra to its dual. 
Zbl 1251.46026 Choi, Y.; Heath, M. J. 2011 Translation-finite sets and weakly compact derivations from $$\ell^{1}(\mathbb Z_{+})$$ to its dual. Zbl 1204.43002 Choi, Y.; Heath, M. J. 2010 Group representations with empty residual spectrum. Zbl 1220.47005 Choi, Y. 2010 Injective convolution operators on $$\ell^\infty(\Gamma)$$ are surjective. Zbl 1211.43001 Choi, Yemon 2010 Hochschild homology and cohomology of $$\ell ^1 (\mathbb Z_+^k)$$. Zbl 1194.46110 Choi, Yemon 2010 Splitting maps and norm bounds for the cyclic cohomology of biflat Banach algebras. Zbl 1208.46070 Choi, Yemon 2010 Approximate and pseudo-amenability of various classes of Banach algebras. Zbl 1179.46040 Choi, Y.; Ghahramani, F.; Zhang, Y. 2009 Surveys in contemporary mathematics. Zbl 1128.00009 2008 Biflatness of $$\ell^1$$-semilattice algebras. Zbl 1132.46033 Choi, Yemon 2007 Simplicial homology and Hochschild cohomology of Banach semilattice algebras. Zbl 1112.46056 Choi, Yemon 2006 all top 5 all top 5 ### Cited in 48 Serials 13 Semigroup Forum 12 Journal of Mathematical Analysis and Applications 5 Journal of Functional Analysis 3 Bulletin of the Australian Mathematical Society 3 Integral Equations and Operator Theory 3 Proceedings of the American Mathematical Society 3 Asian-European Journal of Mathematics 2 Advances in Mathematics 2 Archiv der Mathematik 2 Glasgow Mathematical Journal 2 Pacific Journal of Mathematics 2 Filomat 2 Journal of the Australian Mathematical Society 2 Kragujevac Journal of Mathematics 2 Journal of Noncommutative Geometry 1 Communications in Mathematical Physics 1 Reports on Mathematical Physics 1 Journal of Geometry and Physics 1 Canadian Mathematical Bulletin 1 Commentarii Mathematici Helvetici 1 Quaestiones Mathematicae 1 Theoretical Computer Science 1 Transactions of the American Mathematical Society 1 European Journal of Combinatorics 1 Bulletin of the Korean Mathematical Society 1 Expositiones Mathematicae 1 Boletín de la Sociedad Matemática Mexicana. Third Series 1 Discrete and Continuous Dynamical Systems 1 Analysis (München) 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Central European Journal of Mathematics 1 Journal of the Institute of Mathematics of Jussieu 1 Journal of Algebra and its Applications 1 Mediterranean Journal of Mathematics 1 Complex Analysis and Operator Theory 1 Journal of Mathematical Cryptology 1 Groups, Geometry, and Dynamics 1 International Journal of Nonlinear Analysis and Applications 1 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. 
RACSAM 1 Eurasian Mathematical Journal 1 International Journal of Group Theory 1 Forum of Mathematics, Sigma 1 International Journal of Analysis and Applications 1 Complex Analysis and its Synergies 1 Journal of Linear and Topological Algebra 1 Sahand Communications in Mathematical Analysis 1 Bollettino dell’Unione Matematica Italiana 1 Cogent Mathematics & Statistics all top 5 ### Cited in 21 Fields 73 Functional analysis (46-XX) 44 Abstract harmonic analysis (43-XX) 23 Operator theory (47-XX) 15 Group theory and generalizations (20-XX) 7 Mathematical logic and foundations (03-XX) 7 Topological groups, Lie groups (22-XX) 4 Associative rings and algebras (16-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Difference and functional equations (39-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Category theory; homological algebra (18-XX) 1 Real functions (26-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Integral transforms, operational calculus (44-XX) 1 Differential geometry (53-XX) 1 General topology (54-XX) 1 Manifolds and cell complexes (57-XX) 1 Numerical analysis (65-XX) 1 Quantum theory (81-XX) 1 Relativity and gravitational theory (83-XX) 1 Information and communication theory, circuits (94-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
2023-01-27T18:21:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4189924895763397, "perplexity": 3349.9966778816365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00561.warc.gz"}
https://par.nsf.gov/biblio/10005782-search-nonpointing-delayed-photons-diphoton-missing-transverse-momentum-final-state-pp-collisions-lhc-using-atlas-detector
Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV $pp$ collisions at the LHC using the ATLAS detector
2022-10-02T23:45:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9244073033332825, "perplexity": 2213.586241789839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00098.warc.gz"}
https://par.nsf.gov/biblio/10249775-mosdef-survey-comprehensive-analysis-rest-optical-emission-line-properties-star-forming-galaxies
The MOSDEF survey: a comprehensive analysis of the rest-optical emission-line properties of z ∼ 2.3 star-forming galaxies ABSTRACT We analyse the rest-optical emission-line spectra of z ∼ 2.3 star-forming galaxies in the complete MOSFIRE Deep Evolution Field (MOSDEF) survey. In investigating the origin of the well-known offset between the sequences of high-redshift and local galaxies in the [O iii]λ5008/Hβ versus [N ii]λ6585/Hα (‘[N ii] BPT’) diagram, we define two populations of z ∼ 2.3 MOSDEF galaxies. These include the high population that is offset towards higher [O iii]λ5008/Hβ and/or [N ii]λ6585/Hα with respect to the local SDSS sequence and the low population that overlaps the SDSS sequence. These two groups are also segregated within the [O  iii]λ5008/Hβ versus [S ii]λλ6718,6733/Hα and the [O iii]λλ4960,5008/[O ii ]λλ3727,3730 (O32) versus ([O  iii]λλ4960,5008+[O ii]λλ3727,3730)/Hβ (R23) diagrams, which suggests qualitatively that star-forming regions in the more offset galaxies are characterized by harder ionizing spectra at fixed nebular oxygen abundance. We also investigate many galaxy properties of the split sample and find that the high sample is on average smaller in size and less massive, but has higher specific star formation rate (SFR) and SFR surface density values and is slightly younger compared to the low population. From Cloudy+BPASS photoionization models, we estimate that the high population has a lower stellar metallicity (i.e. harder ionizing spectrum) but slightly higher nebular metallicity and higher ionization more » Authors: ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; Award ID(s): Publication Date: NSF-PAR ID: 10249775 Journal Name: Monthly Notices of the Royal Astronomical Society Volume: 502 Issue: 2 Page Range or eLocation-ID: 2600 to 2614 ISSN: 0035-8711 National Science Foundation ##### More Like this 1. Abstract We present a joint analysis of rest-UV and rest-optical spectra obtained using Keck/LRIS and Keck/MOSFIRE for a sample of 62 star-forming galaxies at z ∼ 2.3. We divide our sample into two bins based on their location in the [OIII]5007/Hβ vs. [NII]6584/Hα BPT diagram, and perform the first differential study of the rest-UV properties of massive ionizing stars as a function of rest-optical emission-line ratios. Fitting BPASS stellar population synthesis models, including nebular continuum emission, to our rest-UV composite spectra, we find that high-redshift galaxies offset towards higher [OIII]λ5007/Hβ and [NII]λ6584/Hα have younger ages ($\log (\textrm {~Age/yr})=7.20^{+0.57}_{-0.20}$) and lower stellar metallicities ($Z_*=0.0010^{+0.0011}_{-0.0003}$) resulting in a harder ionizing spectrum, compared to the galaxies in our sample that lie on the local BPT star-forming sequence ($\log (\textrm {Age/yr})=8.57^{+0.88}_{-0.84}$, $Z_*=0.0019^{+0.0006}_{-0.0006}$). Additionally, we find that the offset galaxies have an ionization parameter of $\log (U)=-3.04^{+0.06}_{-0.11}$ and nebular metallicity of ($12+\log (\textrm {~O/H})=8.40^{+0.06}_{-0.07}$), and the non-offset galaxies have an ionization parameter of $\log (U)=-3.11^{+0.08}_{-0.08}$ and nebular metallicity of $12+\log (\textrm {~O/H})=8.30^{+0.05}_{-0.06}$. The stellar and nebular metallicities derived for our sample imply that the galaxies offset from the local BPT relation are more α-enhanced ($7.28^{+2.52}_{-2.82}\textrm {~O/Fe}_{\odot }$) compared to those consistent with the local sequencemore » 2. 
ABSTRACT The ionizing photon escape fraction [Lyman continuum (LyC) fesc] of star-forming galaxies is the single greatest unknown in the reionization budget. Stochastic sightline effects prohibit the direct separation of LyC leakers from non-leakers at significant redshifts. Here we circumvent this uncertainty by inferring fesc using resolved (R > 4000) Lyman α (Lyα) profiles from the X-SHOOTER Lyα survey at z = 2 (XLS-z2). With empirically motivated criteria, we use Lyα profiles to select leakers ($f_{\mathrm{ esc}} > 20{{\ \rm per\ cent}}$) and non-leakers ($f_{\mathrm{ esc}} < 5{{\ \rm per\ cent}}$) from a representative sample of >0.2L* Lyman α emitters (LAEs). We use median stacked spectra of these subsets over λrest ≈ 1000–8000 Å to investigate the conditions for LyC fesc. Our stacks show similar mass, metallicity, MUV, and βUV. We find the following differences between leakers versus non-leakers: (i) strong nebular C iv and He ii emission versus non-detections; (ii) [O iii]/[O ii] ≈ 8.5 versus ≈3; (iii) Hα/Hβ indicating no dust versus E(B − V) ≈ 0.3; (iv) Mg ii emission close to the systemic velocity versus redshifted, optically thick Mg ii; and (v) Lyα fesc of ${\approx} 50{{\ \rm per\ cent}}$ versus ${\approx} 10{{\ \rm per\ cent}}$. The extreme equivalent widths (EWs) in leakers ([O iii]+$\mathrm{ H}\beta \approx 1100$ Å rest frame)more » 3. ABSTRACT We present constraints on the massive star and ionized gas properties for a sample of 62 star-forming galaxies at z ∼ 2.3. Using BPASS stellar population models, we fit the rest-UV spectra of galaxies in our sample to estimate age and stellar metallicity which, in turn, determine the ionizing spectrum. In addition to the median properties of well-defined subsets of our sample, we derive the ages and stellar metallicities for 30 high-SNR individual galaxies – the largest sample of individual galaxies at high redshift with such measurements. Most galaxies in this high-SNR subsample have stellar metallicities of 0.001 < Z* < 0.004. We then use Cloudy + BPASS photoionization models to match observed rest-optical line ratios and infer nebular properties. Our high-SNR subsample is characterized by a median ionization parameter and oxygen abundance, respectively, of log (U)med = −2.98 ± 0.25 and 12 + log (O/H)med = 8.48 ± 0.11. Accordingly, we find that all galaxies in our sample show evidence for α-enhancement. In addition, based on inferred log (U) and 12 + log (O/H) values, we find that the local relationship between ionization parameter and metallicity applies at z ∼ 2. Finally, we find that the high-redshift galaxies most offset from the local excitation sequence in the BPT diagram aremore » 4. ABSTRACT The combination of the MOSDEF and KBSS-MOSFIRE surveys represents the largest joint investment of Keck/MOSFIRE time to date, with ∼3000 galaxies at 1.4 ≲ z ≲ 3.8, roughly half of which are at z ∼ 2. MOSDEF is photometric- and spectroscopic-redshift selected with a rest-optical magnitude limit, while KBSS-MOSFIRE is primarily selected based on rest-UV colours and a rest-UV magnitude limit. Analysing both surveys in a uniform manner with consistent spectral-energy-distribution (SED) models, we find that the MOSDEF z ∼ 2 targeted sample has higher median M* and redder rest U−V colour than the KBSS-MOSFIRE z ∼ 2 targeted sample, and smaller median SED-based SFR and sSFR (SFR(SED) and sSFR(SED)). 
Specifically, MOSDEF targeted a larger population of red galaxies with U−V and V−J ≥1.25, while KBSS-MOSFIRE contains more young galaxies with intense star formation. Despite these differences in the z ∼ 2 targeted samples, the subsets of the surveys with multiple emission lines detected and analysed in previous work are much more similar. All median host-galaxy properties with the exception of stellar population age – i.e. M*, SFR(SED), sSFR(SED), AV, and UVJ colours – agree within the uncertainties. Additionally, when uniform emission-line fitting and stellar Balmer absorption correction techniquesmore » 5. ABSTRACT We present detections of [O iii] λ4363 and direct-method metallicities for star-forming galaxies at z = 1.7–3.6. We combine new measurements from the MOSFIRE Deep Evolution Field (MOSDEF) survey with literature sources to construct a sample of 18 galaxies with direct-method metallicities at z > 1, spanning 7.5 < 12+log(O/H) < 8.2 and log(M*/M⊙) = 7–10. We find that strong-line calibrations based on local analogues of high-redshift galaxies reliably reproduce the metallicity of the z > 1 sample on average. We construct the first mass–metallicity relation at z > 1 based purely on direct-method O/H, finding a slope that is consistent with strong-line results. Direct-method O/H evolves by ≲0.1 dex at fixed M* and star formation rate from z ∼ 0 to 2.2. We employ photoionization models to constrain the ionization parameter and ionizing spectrum in the high-redshift sample. Stellar models with supersolar O/Fe and binary evolution of massive stars are required to reproduce the observed strong-line ratios. We find that the z > 1 sample falls on the z ∼ 0 relation between ionization parameter and O/H, suggesting no evolution of this relation from z ∼ 0 to z ∼ 2. These results suggest that the offset of the strong-line ratios of this sample from local excitation sequences is driven primarilymore »
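The diagnostics that recur throughout the MOSDEF abstracts above (the [N ii] BPT ratios, O32, and R23) are just ratios of line fluxes. The sketch below evaluates them for a set of hypothetical fluxes in arbitrary units, purely to make the definitions concrete; it is not taken from the survey's analysis code.

```python
import math

# Emission-line diagnostics as defined in the MOSDEF abstract above.
# The fluxes are hypothetical placeholders in arbitrary units.
fluxes = {
    "OIII_5008": 5.0,
    "OIII_4960": 1.7,
    "OII_3727_3730": 3.0,
    "NII_6585": 0.8,
    "Halpha": 6.0,
    "Hbeta": 2.1,
}

def bpt_ratios(f):
    """log([O III]5008/Hbeta), log([N II]6585/Halpha) for the [N II] BPT diagram."""
    return (math.log10(f["OIII_5008"] / f["Hbeta"]),
            math.log10(f["NII_6585"] / f["Halpha"]))

def o32(f):
    """O32 = [O III]4960,5008 / [O II]3727,3730."""
    return (f["OIII_4960"] + f["OIII_5008"]) / f["OII_3727_3730"]

def r23(f):
    """R23 = ([O III]4960,5008 + [O II]3727,3730) / Hbeta."""
    return (f["OIII_4960"] + f["OIII_5008"] + f["OII_3727_3730"]) / f["Hbeta"]

log_o3hb, log_n2ha = bpt_ratios(fluxes)
print(f"log([O III]/Hbeta) = {log_o3hb:.2f}, log([N II]/Halpha) = {log_n2ha:.2f}")
print(f"O32 = {o32(fluxes):.2f}, R23 = {r23(fluxes):.2f}")
```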
2023-02-08T23:45:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7173730731010437, "perplexity": 4969.080488498791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00628.warc.gz"}
https://indico.fnal.gov/event/53004/contributions/244476/
# NuFact 2022: The 23rd International Workshop on Neutrinos from Accelerators July 30, 2022 to August 6, 2022 Cliff Lodge US/Mountain timezone ## Short-Baseline neutrino oscillation searches with the ICARUS detector Aug 2, 2022, 5:18 PM 18m ### Ballroom 2 Talk WG1: Neutrino Oscillation Physics ### Speakers Alessandro Menegolli Biswaranjan Behera (Colorado State University) ### Description The ICARUS collaboration employed the 760-ton T600 detector in a successful three-year physics run at the underground LNGS laboratories, studying neutrino oscillations with the CNGS neutrino beam from CERN and searching for atmospheric neutrino interactions. ICARUS performed a sensitive search for LSND-like anomalous νe appearance in the CNGS beam, which helped constrain the allowed parameters to a narrow region around 1 eV$^2$, where all the experimental results can be coherently accommodated at 90% C.L. After a significant overhaul at CERN, the T600 detector has been installed at Fermilab. In 2020, cryogenic commissioning began with detector cool-down, liquid-argon filling, and recirculation. ICARUS has started operations and is presently in its commissioning phase, collecting the first neutrino events from the Booster Neutrino Beam (BNB) and the off-axis NuMI beam. The main goal of the first year of ICARUS data taking will then be the definitive verification of the recent claim by the NEUTRINO-4 short-baseline reactor experiment, both in the $\nu_\mu$ channel with the BNB and in the $\nu_e$ channel with NuMI. After the first year of operations, ICARUS will commence its search for evidence of a sterile neutrino jointly with the SBND near detector, within the Short Baseline Neutrino (SBN) program. The ICARUS exposure to the NuMI beam will also enable other physics studies such as light dark matter searches and neutrino-argon cross-section measurements. The proposed contribution will address ICARUS achievements, its status and plans for the new run at Fermilab, and the ongoing development of the analysis tools needed to fulfill its physics program. Attendance type In-person presentation
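For context on why the abstract singles out a region around 1 eV$^2$: short-baseline searches like this are usually characterized with the standard two-flavour oscillation probability. The sketch below evaluates that textbook formula; the mixing amplitude, baseline, and energy are illustrative placeholders, not ICARUS/SBN parameters, and nothing here is taken from the talk itself.

```python
import math

def osc_probability(sin2_2theta: float, dm2_ev2: float, L_km: float, E_GeV: float) -> float:
    """Two-flavour oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Placeholder values: dm2 ~ 1 eV^2 (the region mentioned above), a 0.6 km
# baseline, and a 0.8 GeV neutrino energy. These are illustrative only.
p = osc_probability(sin2_2theta=0.01, dm2_ev2=1.0, L_km=0.6, E_GeV=0.8)
print(f"P(oscillation) ~ {p:.2e}")
```

The L/E dependence is what makes a detector a few hundred metres from the source sensitive to mass splittings of order 1 eV$^2$, in contrast to the much longer baselines of conventional accelerator oscillation experiments.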
2023-02-09T12:17:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6191299557685852, "perplexity": 6731.636028332806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00496.warc.gz"}
http://lammps.sandia.gov/doc/compute_vacf.html
# compute vacf command ## Syntax compute ID group-ID vacf • ID, group-ID are documented in compute command • vacf = style name of this compute command ## Examples compute 1 all vacf compute 1 upper vacf ## Description Define a computation that calculates the velocity auto-correlation function (VACF), averaged over a group of atoms. Each atom’s contribution to the VACF is its current velocity vector dotted into its initial velocity vector at the time the compute was specified. A vector of four quantities is calculated by this compute. The first 3 elements of the vector are vx * vx0 (and similarly for the y and z components), summed and averaged over atoms in the group. Vx is the current x-component of velocity for the atom, vx0 is the initial x-component of velocity for the atom. The 4th element of the vector is the total VACF, i.e. (vx*vx0 + vy*vy0 + vz*vz0), summed and averaged over atoms in the group. The integral of the VACF versus time is proportional to the diffusion coefficient of the diffusing atoms. This can be computed in the following manner, using the variable trap() function: compute 2 all vacf fix 5 all vector 1 c_2[4] variable diff equal dt*trap(f_5) thermo_style custom step v_diff Note If you want the quantities calculated by this compute to be continuous when running from a restart file, then you should use the same ID for this compute, as in the original run. This is so that the fix this compute creates to store per-atom quantities will also have the same ID, and thus be initialized correctly with time=0 atom velocities from the restart file. Output info: This compute calculates a global vector of length 4, which can be accessed by indices 1-4 by any command that uses global vector values from a compute as input. See this section for an overview of LAMMPS output options. The vector values are “intensive”. The vector values will be in velocity^2 units. none
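The page above only notes that the integral of the VACF is proportional to the diffusion coefficient. Below is a hedged post-processing sketch of that step in Python: it applies the same trapezoidal integration as the dt*trap(f_5) variable, then the standard 3-D Green-Kubo prefactor of 1/3 (an assumption about dimensionality that the LAMMPS example above does not apply). The synthetic, exponentially decaying VACF stands in for values you would actually accumulate with fix vector, and is assumed to be sampled every timestep.

```python
import numpy as np

# Post-process a total-VACF time series (the 4th element of compute vacf's
# output vector) into a diffusion coefficient estimate.

dt = 0.005                                   # timestep, in the run's time units
t = np.arange(2000) * dt
vacf_total = 3.0 * np.exp(-t / 0.5)          # placeholder VACF samples

# Trapezoidal rule written out explicitly (equivalent to dt * trap(vector)
# when the vector holds one sample per timestep).
integral = dt * (0.5 * vacf_total[0] + vacf_total[1:-1].sum() + 0.5 * vacf_total[-1])

# Standard Green-Kubo relation in three dimensions: D = (1/3) * integral of
# <v(0).v(t)> dt. Units are length^2 / time in the run's unit system.
D = integral / 3.0
print(f"Estimated diffusion coefficient D = {D:.4g}")
```

For real output, replace the synthetic series with the accumulated history of c_2[4], and scale the spacing accordingly if the fix does not store a value every timestep.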
2017-08-22T14:40:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8306694030761719, "perplexity": 1977.4780166951507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110792.29/warc/CC-MAIN-20170822143101-20170822163101-00590.warc.gz"}
https://www.usgs.gov/news/history-innovation-leads-cutting-edge-technique-sampling-water-deep-within-k-lauea-s-volcanic
# History of Innovation Leads to Cutting-Edge Technique for Sampling Water Deep Within Kīlauea’s Volcanic Crater Release Date: This month marks the second anniversary of the largest rift zone eruption and summit collapse at Kīlauea Volcano in 200 years. In 2018, scientists at the U.S. Geological Survey Hawaii Volcano Observatory monitored more than 60 collapse events at the summit that caused the floor of Halema‘uma‘u crater to drop about 1600 feet, or more than five times the height of the Statue of Liberty. Clear weather allowed HVO geologists to make observations and take measurements of the water pond at Kīlauea's summit. No major changes were observed, and the water level continues to slowly rise. Note the former HVO observation tower can be seen above the geologist's helmet. (Credit: Matthew Patrick, USGS. Public domain.) In July 2019, yet another change occurred at the summit—water was seen at the bottom of the crater. Kilauea Crater, in which Halemaumau is located, is a sacred place in Hawaiian culture. Inquiries into oral histories of the volcano, however, found no mention of past water bodies forming for long periods in the crater. The pond, now more properly called a lake, has been present for 9 months, with the water level slowly rising about 3 feet per week. Today, it is larger than five football fields combined, and the total depth is about 100 feet. It has a yellowish color that is not uniform over the surface. Some patches near the edges are a clear green, presumed to be places where fresher groundwater flows into the lake. Other patches are variable shades of rusty brownish-orange, likely due to the presence of iron sulfate minerals in the water. Another common feature is steam rising off the water’s surface, a testament to the fact the lake is scalding hot, roughly 160 degrees Fahrenheit, as measured by a thermal camera. Initially, USGS scientists weren’t sure if the water was ponded rainwater or groundwater, so scientists needed a sample to chemically determine where the water was coming from. They also needed chemical analysis to help determine the total amount of sulfur dioxide (SO2) being released from the magma below the lake. The amount of SO2 emitted by a volcano can indicate how active a volcano is or how active it might become. Normally, such measurements only quantify the sulfur released to the atmosphere. However, SO2 is easily dissolved in water, so the new water pond could have been absorbing, or ‘hiding,’ a significant portion of the volcano’s released sulfur in the form of dissolved sulfur (e.g. sulfuric acid or sulfate). For the first few months after the pond formed, scientists were unable to conduct important chemical measurements and were limited to remote observations of the lake’s size, color and surface temperature. Access to the pond more than 1500 feet below the crater rim was impossible on foot and considered too risky by helicopter. In October 2019, following the tradition of innovative field methods started by HVO founder Dr. Thomas A. Jaggar, researchers brought in Unoccupied Aircraft Systems (UAS) to collect samples deep within Kīlauea's collapsed summit crater. The sampling mechanism (on blue tarp) is prepared and the Unoccupied Aircraft System (UAS) is inspected just before take off to collect water from the Halema‘uma‘u crater lake. Brightly colored flagging tape tied to a cable attached to the UAS indicated depth as the sampling tool was lowered into the water. (Credit: Joe Adams, USGS. Public domain.) 
USGS scientists had utilized the unique capabilities of UAS flights at Kilauea before. In fact, the 2018 eruption marked the first time the federal government used UAS to assist in an eruption response in the United States. UAS flights into hazardous areas allowed USGS scientists to provide 24/7 real-time situational awareness at the volcano’s summit and lower East Rift Zone and to safely view, document and better understand what was happening with Kīlauea's rapidly changing eruption. After months of planning, logistics and obtaining permission from Hawai‘i Volcanoes National Park, the sampling team finally got their chance to collect water. The team, with decades of combined experience, included scientists and pilots from the Hawaiian Volcano Observatory, the Volcano Disaster Assistance Program, Hawai‘i Volcanoes National Park and the Department of the Interior‘s Office of Aviation Services. The UAS pilots flew an initial reconnaissance flight carrying only a camera to get a sense of what things looked like down in the crater and how the winds would affect flying during sampling. HVO scientists then attached a water sampler and temperature probe to the UAS via a 30-foot cord. The UAS was also equipped with a dual thermal and color camera to detect the temperature of the lake and capture video of the mission. The pilot lifted off smoothly, with the sampler rising vertically while a scientist stabilized the sampler so it wouldn’t swing on lift off. Flagging was attached to the cord at 5-foot intervals so the pilot, who was operating through a first-person viewer on a tablet screen, and visual observers using stabilizing binoculars, could tell how deep the sampler was in the water. The sampler, which includes a long, durable plastic sleeve with a funnel-like cone on top, stayed closed as the pilot lowered it into the water. When it was pulled back up, water flowed into the sampler through the funnel, filling the sleeve. The October sample was collected at a depth of about 8 to10 feet below the surface of the pond and was enough fill a wine bottle, or about 750 milliliters. After sampling the water, the drone returned and hovered near the scientists waiting at the collection point. Researchers wore safety goggles, thick rubber gloves and safety smocks to make sure that if any water spilled, no one would be injured or burned by the potentially very acidic water. The sampler was stabilized, and the drone pilot released the sampling cord. Scientists were then able to process the sample into sterile containers on-site in a makeshift field lab. They were also able to determine on-site that the water had a pH of 4.2, which is mildly acidic. After a sample was collected, HVO team members transferred water from the sampling device to plastic bottles. Team members took notes, measured water pH and evaluated water temperature data for each sample collected. (Credit: Miki Warren, USGS. Public domain.) The sample was then shipped to the USGS California Volcano Observatory for advanced chemical analyses, which showed the water originated as rainwater, but hadn’t fallen directly into the lake. Rather, the rainwater had made its way below ground and then flowed as groundwater into the lake. The analysis also indicated that as suspected, there is a significant amount of SO2 being dissolved by the lake water. Most volcanic crater lakes around the world are routinely sampled to track changes that may indicate a change in hazards and it is no different here at the Kīlauea Volcano lake. 
In January 2020, the team of scientists and pilots conducted a second sampling mission, making further innovative improvements to the sampling payload. While the January chemistry was similar to the October sample, HVO scientists will continue to work with the National Park Service and other cooperators to do additional sampling to monitor the chemistry and determine what it might mean for degassing and hazards at the summit. In the meantime, Kīlauea is one of the best-monitored volcanoes on Earth with an extensive network of geophysical instruments and other monitoring tools to keep an eye on any changes in the volcano’s activity. For the latest information, go here https://volcanoes.usgs.gov/volcanoes/kilauea/summit_water_resources.html
2020-09-27T20:45:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2712549567222595, "perplexity": 3808.9133796391216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401578485.67/warc/CC-MAIN-20200927183616-20200927213616-00120.warc.gz"}
https://zbmath.org/authors/?q=ai%3Amather.john-n
Mather, John N. Compute Distance To: Author ID: mather.john-n Published as: Mather, John N.; Mather, J. N.; Mather, J.; Mather, John more...less External Links: MGP · Wikidata · IdRef · theses.fr Documents Indexed: 80 Publications since 1965 2 Further Contributions Co-Authors: 13 Co-Authors with 10 Joint Publications 556 Co-Co-Authors all top 5 Co-Authors 70 single-authored 2 Yau, Stephen Shing-Toung 1 Bott, Raoul Harry 1 Chaperon, Marc 1 Fathi, Albert 1 Fell, Harriet J. 1 Forni, Giovanni 1 Kaloshin, Vadim Yu. 1 Laudenbach, François 1 McGehee, Richard P. 1 McKean, Henry P. jun. 1 Moser, Jürgen K. 1 Nirenberg, Louis 1 Rabinowitz, Paul Henry 1 Smale, Steve 1 Valdinoci, Enrico all top 5 Serials 9 Commentarii Mathematici Helvetici 5 Uspekhi Matematicheskikh Nauk [N. S.] 5 Ergodic Theory and Dynamical Systems 3 Publications Mathématiques 3 Topology 3 Annals of Mathematics. Second Series 3 Bulletin of the American Mathematical Society 2 Communications in Mathematical Physics 2 Advances in Mathematics 2 Annales de l’Institut Fourier 1 American Mathematical Monthly 1 Communications on Pure and Applied Mathematics 1 Bulletin de la Société Mathématique de France 1 Gazette des Mathématiciens 1 Inventiones Mathematicae 1 Mathematische Zeitschrift 1 Proceedings of the American Mathematical Society 1 Journal of the American Mathematical Society 1 Proceedings of the National Academy of Sciences of the United States of America 1 Bulletin of the American Mathematical Society. New Series 1 Notices of the American Mathematical Society 1 Boletim da Sociedade Brasileira de Matemática. Nova Série 1 Journal of Mathematical Sciences (New York) 1 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Série A 1 Nederlandse Akademie van Wetenschappen. Proceedings. Series A. Indagationes Mathematicae all top 5 Fields 32 Dynamical systems and ergodic theory (37-XX) 29 Manifolds and cell complexes (57-XX) 29 Global analysis, analysis on manifolds (58-XX) 7 Mechanics of particles and systems (70-XX) 5 Algebraic geometry (14-XX) 5 Measure and integration (28-XX) 5 Algebraic topology (55-XX) 3 Several complex variables and analytic spaces (32-XX) 3 Differential geometry (53-XX) 3 General topology (54-XX) 2 History and biography (01-XX) 2 Group theory and generalizations (20-XX) 1 General and overarching topics; collections (00-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Partial differential equations (35-XX) 1 Functional analysis (46-XX) 1 Calculus of variations and optimal control; optimization (49-XX) Citations contained in zbMATH Open 69 Publications have been cited 3,195 times in 1,356 Documents Cited by Year Differentiable dynamical systems. With an appendix to the first part of the paper: “Anosov diffeomorphisms” by John Mather. Zbl 0202.55202 Smale, S. 1967 Action minimizing invariant measures for positive definite Lagrangian systems. Zbl 0696.58027 Mather, John N. 1991 Existence of quasi-periodic orbits for twist homeomorphisms of the annulus. Zbl 0506.58032 Mather, John N. 1982 Variational construction of connecting orbits. Zbl 0803.58019 Mather, John N. 1993 Stability of $$C^ \infty$$ mappings. III: Finitely determined map germs. Zbl 0159.25001 Mather, J. N. 1968 Stability of $$C^ \infty$$ mappings. V: Transversality. Zbl 0207.54303 Mather, J. N. 1970 Stability of $$C^\infty$$ mappings. II: Infinitesimal stability implies stability. Zbl 0177.26002 Mather, J. N. 1969 Stability of $$C^ \infty$$ mappings. IV: Classification of stable germs by R-algebras. 
Zbl 0202.55102 Mather, J. N. 1969 Notes on topological stability. Zbl 1260.57049 Mather, John 2012 Classification of isolated hypersurface singularities by their moduli algebras. Zbl 0499.32008 Mather, John N.; Yau, Stephen S.-T. 1982 Characterization of Anosov diffeomorphisms. Zbl 0165.57001 Mather, J. N. 1968 Stability of $$C^ \infty$$ mappings. I: The division theorem. Zbl 0159.24902 Mather, John N. 1968 Stratifications and mappings. Zbl 0286.58003 Mather, John N. 1973 Generic projections. Zbl 0242.58001 Mather, John N. 1973 Action minimizing orbits in Hamiltonian systems. Zbl 0822.70011 Mather, John N.; Forni, Giovanni 1994 A criterion for the non-existence of invariant circles. Zbl 0603.58028 Mather, John 1986 Commutators of diffeomorphisms. Zbl 0289.57014 Mather, John N. 1974 Glancing billiards. Zbl 0525.58021 Mather, John N. 1982 Non-existence of invariant circles. Zbl 0557.58019 Mather, John N. 1984 Variational construction of orbits of twist diffeomorphisms. Zbl 0737.58029 Mather, John N. 1991 More Denjoy minimal sets for area preserving diffeomorphisms. Zbl 0597.58015 Mather, John N. 1985 Stability of $$C^ \infty$$-mappings. VI: The nice dimensions. Zbl 0211.56105 Mather, J. N. 1971 Differentiable invariants. Zbl 0376.58002 Mather, John N. 1977 Differentiability of the minimal average action as a function of the rotation number. Zbl 0766.58033 Mather, John N. 1990 The vanishing of the homology of certain groups of homeomorphisms. Zbl 0207.21903 Mather, John N. 1971 Destruction of invariant circles. Zbl 0688.58024 Mather, John N. 1988 Minimal measures. Zbl 0689.58025 Mather, John N. 1989 Integrability in codimension 1. Zbl 0284.57016 Mather, John N. 1973 Arnold diffusion. I: Announcement of results. Zbl 1069.37044 Mather, J. N. 2003 Commutators of diffeomorphisms. II. Zbl 0299.58008 Mather, John N. 1975 Solutions of the collinear four body problem which become unbounded in finite time. Zbl 0331.70005 Mather, J. N.; McGehee, R. 1975 Failure of convergence of the Lax-Oleinik semi-group in the time-periodic case. Zbl 0989.37035 Fathi, Albert; Mather, John N. 2000 How to stratify mappings and jet spaces. Zbl 0398.58008 Mather, John N. 1976 Examples of Aubry sets. Zbl 1090.37047 Mather, John N. 2004 Invariant subsets for area preserving homeomorphisms of surfaces. Zbl 0505.58027 Mather, John N. 1981 Topological proofs of some purely topological consequences of Caratheodory’s theory of prime ends. Zbl 0506.57005 Mather, John N. 1982 Commutators of diffeomorphisms. III: A group which is not perfect. Zbl 0575.58011 Mather, John N. 1985 On Haefliger’s classifying space. I. Zbl 0224.55022 Mather, John N. 1971 On Thom-Boardman singularities. Zbl 0292.58004 Mather, John N. 1973 Modulus of continuity for Peierls’s barrier. Zbl 0658.58013 Mather, John N. 1987 Total disconnectedness of the quotient Aubry set in low dimensions. Zbl 1046.37039 Mather, John N. 2003 Arnold diffusion by variational methods. Zbl 1350.37067 Mather, John N. 2012 Criterion for biholomorphic equivalence of isolated hypersurface singularities. Zbl 0477.32005 Mather, John N.; Yau, Stephen S.-T. 1981 On Nirenberg’s proof of Malgrange’s preparation theorem. Zbl 0211.56102 Mather, J. N. 1971 Amount of rotation about a point and the Morse index. Zbl 0558.58010 Mather, John N. 1984 Invariance of the homology of a lattice. Zbl 0147.42102 Mather, J. 1966 Distance from a submanifold in euclidean space. Zbl 0519.58015 Mather, John N. 1983 Simplicity of certain groups of diffeomorphisms. Zbl 0275.58007 Mather, John N. 
1974 Minimal action measures for positive-definite Lagrangian systems. Zbl 0850.70195 Mather, John N. 1989 A curious remark concerning the geometric transfer map. Zbl 0535.58006 Mather, John N. 1984 Non-uniqueness of solutions of Percival’s Euler-Lagrange equation. Zbl 0553.58011 Mather, John N. 1982 Instability of resonant totally elliptic points of symplectic maps in dimension 4. Zbl 1156.37313 Kaloshin, Vadim; Mather, John N.; Valdinoci, Enrico 2004 On the homology of Haefliger’s classifying space. Zbl 0469.57021 Mather, John N. 1979 Stable map-germs and algebraic geometry. Zbl 0217.04903 Mather, J. N. 1971 A property of compact, connected, laminated subsets of manifolds. Zbl 1079.37057 Mather, John N. 2002 Stratifications and mappings. Zbl 0253.58005 Mather, John N. 1972 Solutions of generic linear equations. Zbl 0272.26008 Mather, John N. 1973 Concavity of the Lagrangian for quasi-periodic orbits. Zbl 0508.58037 Mather, John N. 1982 Foliations and local homology of groups of diffeomorphisms. Zbl 0333.57015 Mather, John N. 1975 Characterization of stable mappings. Zbl 0167.51803 Mather, J. N. 1968 Structural stability of mappings. Zbl 0216.20801 Mather, J. 1968 Order structure on action minimizing orbits. Zbl 1211.37076 Mather, John N. 2010 Stability of $$C^\infty$$ mappings. VI: The nice dimensions. Zbl 0286.58005 Mather, J. N. 1974 Stability of $$C^\infty$$ mappings. V: Transversality. Zbl 0286.58006 Mather, John N. 1974 Area preserving twist homeomorphism of the annulus. Zbl 0414.57002 Mather, John N. 1979 Topics in topology and differential geometry. Zbl 0177.26001 Bott, Raoul; Mather, J. 1968 Some non-finitely determined map-germs. Zbl 0187.20504 Mather, J. N. 1969 Dynamics of area preserving maps. Zbl 0674.58026 Mather, John N. 1987 Loops and foliations. Zbl 0309.57009 Mather, John N. 1975 Notes on topological stability. Zbl 1260.57049 Mather, John 2012 Arnold diffusion by variational methods. Zbl 1350.37067 Mather, John N. 2012 Order structure on action minimizing orbits. Zbl 1211.37076 Mather, John N. 2010 Examples of Aubry sets. Zbl 1090.37047 Mather, John N. 2004 Instability of resonant totally elliptic points of symplectic maps in dimension 4. Zbl 1156.37313 Kaloshin, Vadim; Mather, John N.; Valdinoci, Enrico 2004 Arnold diffusion. I: Announcement of results. Zbl 1069.37044 Mather, J. N. 2003 Total disconnectedness of the quotient Aubry set in low dimensions. Zbl 1046.37039 Mather, John N. 2003 A property of compact, connected, laminated subsets of manifolds. Zbl 1079.37057 Mather, John N. 2002 Failure of convergence of the Lax-Oleinik semi-group in the time-periodic case. Zbl 0989.37035 Fathi, Albert; Mather, John N. 2000 Action minimizing orbits in Hamiltonian systems. Zbl 0822.70011 Mather, John N.; Forni, Giovanni 1994 Variational construction of connecting orbits. Zbl 0803.58019 Mather, John N. 1993 Action minimizing invariant measures for positive definite Lagrangian systems. Zbl 0696.58027 Mather, John N. 1991 Variational construction of orbits of twist diffeomorphisms. Zbl 0737.58029 Mather, John N. 1991 Differentiability of the minimal average action as a function of the rotation number. Zbl 0766.58033 Mather, John N. 1990 Minimal measures. Zbl 0689.58025 Mather, John N. 1989 Minimal action measures for positive-definite Lagrangian systems. Zbl 0850.70195 Mather, John N. 1989 Destruction of invariant circles. Zbl 0688.58024 Mather, John N. 1988 Modulus of continuity for Peierls’s barrier. Zbl 0658.58013 Mather, John N. 1987 Dynamics of area preserving maps. 
Zbl 0674.58026 Mather, John N. 1987 A criterion for the non-existence of invariant circles. Zbl 0603.58028 Mather, John 1986 More Denjoy minimal sets for area preserving diffeomorphisms. Zbl 0597.58015 Mather, John N. 1985 Commutators of diffeomorphisms. III: A group which is not perfect. Zbl 0575.58011 Mather, John N. 1985 Non-existence of invariant circles. Zbl 0557.58019 Mather, John N. 1984 Amount of rotation about a point and the Morse index. Zbl 0558.58010 Mather, John N. 1984 A curious remark concerning the geometric transfer map. Zbl 0535.58006 Mather, John N. 1984 Distance from a submanifold in euclidean space. Zbl 0519.58015 Mather, John N. 1983 Existence of quasi-periodic orbits for twist homeomorphisms of the annulus. Zbl 0506.58032 Mather, John N. 1982 Classification of isolated hypersurface singularities by their moduli algebras. Zbl 0499.32008 Mather, John N.; Yau, Stephen S.-T. 1982 Glancing billiards. Zbl 0525.58021 Mather, John N. 1982 Topological proofs of some purely topological consequences of Caratheodory’s theory of prime ends. Zbl 0506.57005 Mather, John N. 1982 Non-uniqueness of solutions of Percival’s Euler-Lagrange equation. Zbl 0553.58011 Mather, John N. 1982 Concavity of the Lagrangian for quasi-periodic orbits. Zbl 0508.58037 Mather, John N. 1982 Invariant subsets for area preserving homeomorphisms of surfaces. Zbl 0505.58027 Mather, John N. 1981 Criterion for biholomorphic equivalence of isolated hypersurface singularities. Zbl 0477.32005 Mather, John N.; Yau, Stephen S.-T. 1981 On the homology of Haefliger’s classifying space. Zbl 0469.57021 Mather, John N. 1979 Area preserving twist homeomorphism of the annulus. Zbl 0414.57002 Mather, John N. 1979 Differentiable invariants. Zbl 0376.58002 Mather, John N. 1977 How to stratify mappings and jet spaces. Zbl 0398.58008 Mather, John N. 1976 Commutators of diffeomorphisms. II. Zbl 0299.58008 Mather, John N. 1975 Solutions of the collinear four body problem which become unbounded in finite time. Zbl 0331.70005 Mather, J. N.; McGehee, R. 1975 Foliations and local homology of groups of diffeomorphisms. Zbl 0333.57015 Mather, John N. 1975 Loops and foliations. Zbl 0309.57009 Mather, John N. 1975 Commutators of diffeomorphisms. Zbl 0289.57014 Mather, John N. 1974 Simplicity of certain groups of diffeomorphisms. Zbl 0275.58007 Mather, John N. 1974 Stability of $$C^\infty$$ mappings. VI: The nice dimensions. Zbl 0286.58005 Mather, J. N. 1974 Stability of $$C^\infty$$ mappings. V: Transversality. Zbl 0286.58006 Mather, John N. 1974 Stratifications and mappings. Zbl 0286.58003 Mather, John N. 1973 Generic projections. Zbl 0242.58001 Mather, John N. 1973 Integrability in codimension 1. Zbl 0284.57016 Mather, John N. 1973 On Thom-Boardman singularities. Zbl 0292.58004 Mather, John N. 1973 Solutions of generic linear equations. Zbl 0272.26008 Mather, John N. 1973 Stratifications and mappings. Zbl 0253.58005 Mather, John N. 1972 Stability of $$C^ \infty$$-mappings. VI: The nice dimensions. Zbl 0211.56105 Mather, J. N. 1971 The vanishing of the homology of certain groups of homeomorphisms. Zbl 0207.21903 Mather, John N. 1971 On Haefliger’s classifying space. I. Zbl 0224.55022 Mather, John N. 1971 On Nirenberg’s proof of Malgrange’s preparation theorem. Zbl 0211.56102 Mather, J. N. 1971 Stable map-germs and algebraic geometry. Zbl 0217.04903 Mather, J. N. 1971 Stability of $$C^ \infty$$ mappings. V: Transversality. Zbl 0207.54303 Mather, J. N. 1970 Stability of $$C^\infty$$ mappings. 
II: Infinitesimal stability implies stability. Zbl 0177.26002 Mather, J. N. 1969 Stability of $$C^ \infty$$ mappings. IV: Classification of stable germs by R-algebras. Zbl 0202.55102 Mather, J. N. 1969 Some non-finitely determined map-germs. Zbl 0187.20504 Mather, J. N. 1969 Stability of $$C^ \infty$$ mappings. III: Finitely determined map germs. Zbl 0159.25001 Mather, J. N. 1968 Characterization of Anosov diffeomorphisms. Zbl 0165.57001 Mather, J. N. 1968 Stability of $$C^ \infty$$ mappings. I: The division theorem. Zbl 0159.24902 Mather, John N. 1968 Characterization of stable mappings. Zbl 0167.51803 Mather, J. N. 1968 Structural stability of mappings. Zbl 0216.20801 Mather, J. 1968 Topics in topology and differential geometry. Zbl 0177.26001 Bott, Raoul; Mather, J. 1968 Differentiable dynamical systems. With an appendix to the first part of the paper: “Anosov diffeomorphisms” by John Mather. Zbl 0202.55202 Smale, S. 1967 Invariance of the homology of a lattice. Zbl 0147.42102 Mather, J. 1966 all top 5 Cited by 1,216 Authors 28 de la Llave, Rafael 20 Yau, Stephen Shing-Toung 18 Damon, James Norman 16 Mather, John N. 16 Yan, Jun 14 Izumiya, Shyuichi 14 Sorrentino, Alfonso 13 Cheng, Chong-Qing 13 Cheng, Wei 13 Cui, Xiaojun 13 MacKay, Robert Sinclair 12 Gomes, Diogo Luís Aguiar 12 Rabinowitz, Paul Henry 12 Soares Ruas, Maria Aparecida 11 Bernard, Patrick 11 Kaloshin, Vadim Yu. 11 Wang, Kaizhi 11 Zuo, Huaiqing 10 Arnaud, Marie-Claude 10 Barreira, Luis Manuel 10 Valdinoci, Enrico 10 Valls Anglés, Cláudia 9 Bruce, James William 9 Celletti, Alessandra 9 Meiss, James D. 9 Mitake, Hiroyoshi 9 Rybicki, Tomasz 9 Wall, Charles Terence Clegg 8 Bessi, Ugo 8 Fathi, Albert 8 Isaev, Alexander 8 Ishikawa, Goo 8 Iturriaga, Renato 8 Qin, Wenxin 8 Saeki, Osamu 8 Trotman, David John Angelo 7 Bangert, Victor 7 Greuel, Gert-Martin 7 Hussain, Naveed 7 Le Calvez, Patrice 7 Li, Xia 7 Marò, Stefano 7 Nishimura, Takashi 7 Pei, Donghe 7 Sánchez-Morgado, Héctor 7 Tran Vinh Hung 7 Wang, Yanan 6 Bialy, Misha 6 Chierchia, Luigi 6 Delshams, Amadeu 6 Dragičević, Davor 6 Gaffney, Terence 6 Gidea, Marian 6 Goresky, Robert Mark 6 Greenberg, Peter 6 Haro, Àlex 6 Koropecki, Andres 6 Massart, Daniel 6 Michor, Peter Wolfram 6 Moser, Jürgen K. 6 Nuño-Ballesteros, Juan José 6 M-Seara, Tere 6 Shiota, Masahiro 6 Su, Xifeng 6 Zhang, Jianlu 5 Berger, Pierre 5 Bernardi, Olga 5 Cannarsa, Piermarco 5 Chen, Hao 5 Chen, Qinbo 5 Contreras, Gonzalo 5 Dubois, Jean-Guy 5 Franks, John M. 5 Gutkin, Eugene 5 Handel, Michael 5 Ishii, Hitoshi 5 Mond, David Michael Quentin 5 Paternain, Gabriel Pedro 5 Thieullen, Philippe 5 Zhou, Min 4 Ando, Yoshifumi 4 Bierstone, Edward 4 Bolotin, Sergeĭ Vladimirovich 4 Boyland, Philip L. 4 Cardin, Franco 4 Dias Carneiro, Mario Jorge 4 du Plessis, Andrew Allan 4 Dufour, Jean-Paul 4 Figalli, Alessio 4 Fuchs, Dmitry Borisovich 4 Fukuda, Takuo 4 Fukui, Kazuhiko 4 Galligo, André 4 Golubitsky, Martin A. 4 Guzzo, Massimiliano 4 Hauser, Herwig 4 Jekel, Solomon M. 
4 Knill, Oliver 4 Latushkin, Yuri 4 Lê Dûng Tráng ...and 1,116 more Authors all top 5 Cited in 262 Serials 50 Communications in Mathematical Physics 49 Journal of Differential Equations 41 Inventiones Mathematicae 41 Transactions of the American Mathematical Society 40 Annales de l’Institut Fourier 39 Ergodic Theory and Dynamical Systems 35 Physica D 31 Topology and its Applications 29 Proceedings of the American Mathematical Society 27 Advances in Mathematics 27 Mathematische Annalen 26 Mathematische Zeitschrift 25 Compositio Mathematica 20 Duke Mathematical Journal 18 Calculus of Variations and Partial Differential Equations 17 Journal of Statistical Physics 17 Geometry & Topology 16 Geometriae Dedicata 16 Discrete and Continuous Dynamical Systems 15 Publications Mathématiques 14 Journal of Mathematical Analysis and Applications 14 Mathematical Proceedings of the Cambridge Philosophical Society 14 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 14 Manuscripta Mathematica 13 Israel Journal of Mathematics 13 Bulletin of the American Mathematical Society 12 Journal of Geometry and Physics 12 Journal of Pure and Applied Algebra 12 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 12 Chaos 12 Science China. Mathematics 11 Annali di Matematica Pura ed Applicata. Serie Quarta 11 Regular and Chaotic Dynamics 11 Bulletin of the Brazilian Mathematical Society. New Series 10 Nonlinearity 10 Boletim da Sociedade Brasileira de Matemática 10 Journal of Functional Analysis 9 Communications on Pure and Applied Mathematics 9 Functional Analysis and its Applications 9 Journal of Algebra 9 Science in China. Series A 9 Geometric and Functional Analysis. GAFA 9 Acta Mathematica Sinica. English Series 8 Journal of Mathematical Physics 8 Boletim da Sociedade Brasileira de Matemática. Nova Série 8 Algebraic & Geometric Topology 7 Archive for Rational Mechanics and Analysis 7 Proceedings of the Japan Academy. Series A 7 Annals of Global Analysis and Geometry 7 Differential Geometry and its Applications 7 Communications in Partial Differential Equations 7 Bulletin of the American Mathematical Society. New Series 7 Journal of Mathematical Sciences (New York) 7 Annales Henri Poincaré 7 Comptes Rendus. Mathématique. Académie des Sciences, Paris 6 Communications in Algebra 6 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 6 Journal of Nonlinear Science 5 ZAMP. Zeitschrift für angewandte Mathematik und Physik 5 Journal of the Mathematical Society of Japan 5 Kodai Mathematical Journal 5 Memoirs of the American Mathematical Society 5 Publications of the Research Institute for Mathematical Sciences, Kyoto University 5 Tohoku Mathematical Journal. Second Series 5 Chinese Annals of Mathematics. Series B 5 International Journal of Mathematics 5 Journal of Dynamics and Differential Equations 5 Annals of Mathematics. Second Series 5 Foundations of Computational Mathematics 5 Advanced Nonlinear Studies 5 Milan Journal of Mathematics 5 Journal of Topology and Analysis 4 Journal d’Analyse Mathématique 4 Acta Mathematica 4 Annales de l’Institut Henri Poincaré. Nouvelle Série. Section A. Physique Théorique 4 Journal of Mathematical Economics 4 Journal of Soviet Mathematics 4 Discrete & Computational Geometry 4 Journal of the American Mathematical Society 4 The Journal of Geometric Analysis 4 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 4 NoDEA. 
Nonlinear Differential Equations and Applications 4 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 4 Journal of the European Mathematical Society (JEMS) 4 Qualitative Theory of Dynamical Systems 4 Proceedings of the Steklov Institute of Mathematics 4 Journal of Fixed Point Theory and Applications 4 Journal of Singularities 4 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 3 Letters in Mathematical Physics 3 Mathematical Notes 3 Mathematics of Computation 3 Bulletin de la Société Mathématique de France 3 Cahiers de Topologie et Géométrie Différentielle Catégoriques 3 Journal für die Reine und Angewandte Mathematik 3 Monatshefte für Mathematik 3 Rendiconti del Seminario Matematico della Università di Padova 3 Journal de Mathématiques Pures et Appliquées. Neuvième Série 3 Acta Mathematica Sinica. New Series 3 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering ...and 162 more Serials all top 5 Cited in 59 Fields 572 Dynamical systems and ergodic theory (37-XX) 322 Global analysis, analysis on manifolds (58-XX) 253 Manifolds and cell complexes (57-XX) 187 Several complex variables and analytic spaces (32-XX) 171 Algebraic geometry (14-XX) 146 Differential geometry (53-XX) 145 Mechanics of particles and systems (70-XX) 126 Partial differential equations (35-XX) 80 Ordinary differential equations (34-XX) 74 Calculus of variations and optimal control; optimization (49-XX) 50 Algebraic topology (55-XX) 34 Group theory and generalizations (20-XX) 33 Statistical mechanics, structure of matter (82-XX) 32 Operator theory (47-XX) 30 Commutative algebra (13-XX) 29 Topological groups, Lie groups (22-XX) 27 General topology (54-XX) 26 Quantum theory (81-XX) 20 Real functions (26-XX) 20 Functional analysis (46-XX) 20 Numerical analysis (65-XX) 17 Measure and integration (28-XX) 15 Nonassociative rings and algebras (17-XX) 15 Probability theory and stochastic processes (60-XX) 15 Computer science (68-XX) 14 Category theory; homological algebra (18-XX) 13 Associative rings and algebras (16-XX) 13 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 12 Fluid mechanics (76-XX) 11 Mechanics of deformable solids (74-XX) 9 Convex and discrete geometry (52-XX) 8 History and biography (01-XX) 8 Combinatorics (05-XX) 8 Linear and multilinear algebra; matrix theory (15-XX) 8 Functions of a complex variable (30-XX) 8 Optics, electromagnetic theory (78-XX) 8 Operations research, mathematical programming (90-XX) 7 Difference and functional equations (39-XX) 7 Systems theory; control (93-XX) 6 Mathematical logic and foundations (03-XX) 6 Order, lattices, ordered algebraic structures (06-XX) 6 Number theory (11-XX) 6 Abstract harmonic analysis (43-XX) 6 Relativity and gravitational theory (83-XX) 5 $$K$$-theory (19-XX) 5 Biology and other natural sciences (92-XX) 4 Geometry (51-XX) 3 Approximations and expansions (41-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Classical thermodynamics, heat transfer (80-XX) 2 Astronomy and astrophysics (85-XX) 1 General and overarching topics; collections (00-XX) 1 Field theory and polynomials (12-XX) 1 Potential theory (31-XX) 1 Special functions (33-XX) 1 Integral equations (45-XX) 1 Statistics (62-XX) 1 Geophysics (86-XX) 1 Information and communication theory, circuits (94-XX) Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. 
Updates and corrections should be made in Wikidata.
2022-07-07T01:00:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6017417907714844, "perplexity": 4721.805936747119}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00517.warc.gz"}
https://lammps.sandia.gov/doc/fix_balance.html
# fix balance command ## Syntax fix ID group-ID balance Nfreq thresh style args keyword args ... • ID, group-ID are documented in fix command • balance = style name of this fix command • Nfreq = perform dynamic load balancing every this many steps • thresh = imbalance threshold that must be exceeded to perform a re-balance • style = shift or rcb shift args = dimstr Niter stopthresh dimstr = sequence of letters containing "x" or "y" or "z", each not more than once Niter = # of times to iterate within each dimension of dimstr sequence stopthresh = stop balancing when this imbalance threshold is reached rcb args = none • zero or more keyword/arg pairs may be appended • keyword = weight or out weight style args = use weighted particle counts for the balancing style = group or neigh or time or var or store group args = Ngroup group1 weight1 group2 weight2 ... Ngroup = number of groups with assigned weights group1, group2, ... = group IDs weight1, weight2, ... = corresponding weight factors neigh factor = compute weight based on number of neighbors factor = scaling factor (> 0) time factor = compute weight based on time spend computing factor = scaling factor (> 0) var name = take weight from atom-style variable name = name of the atom-style variable store name = store weight in custom atom property defined by fix property/atom command name = atom property name (without d_ prefix) out arg = filename filename = write each processor's sub-domain to a file, at each re-balancing ## Examples fix 2 all balance 1000 1.05 shift x 10 1.05 fix 2 all balance 100 0.9 shift xy 20 1.1 out tmp.balance fix 2 all balance 100 0.9 shift xy 20 1.1 weight group 3 substrate 3.0 solvent 1.0 solute 0.8 out tmp.balance fix 2 all balance 100 1.0 shift x 10 1.1 weight time 0.8 fix 2 all balance 100 1.0 shift xy 5 1.1 weight var myweight weight neigh 0.6 weight store allweight fix 2 all balance 1000 1.1 rcb ## Description This command adjusts the size and shape of processor sub-domains within the simulation box, to attempt to balance the number of particles and thus the computational cost (load) evenly across processors. The load balancing is “dynamic” in the sense that re-balancing is performed periodically during the simulation. To perform “static” balancing, before or between runs, see the balance command. Load-balancing is typically most useful if the particles in the simulation box have a spatially-varying density distribution or where the computational cost varies significantly between different atoms. E.g. a model of a vapor/liquid interface, or a solid with an irregular-shaped geometry containing void regions, or hybrid pair style simulations which combine pair styles with different computational cost. In these cases, the LAMMPS default of dividing the simulation box volume into a regular-spaced grid of 3d bricks, with one equal-volume sub-domain per processor, may assign numbers of particles per processor in a way that the computational effort varies significantly. This can lead to poor performance when the simulation is run in parallel. The balancing can be performed with or without per-particle weighting. With no weighting, the balancing attempts to assign an equal number of particles to each processor. With weighting, the balancing attempts to assign an equal aggregate computational weight to each processor, which typically induces a different number of atoms assigned to each processor. 
Note: The weighting options listed above are documented with the balance command in this section of the balance command doc page. That section describes the various weighting options and gives a few examples of how they can be used. The weighting options are the same for both the fix balance and balance commands.

Note that the processors command allows some control over how the box volume is split across processors. Specifically, for a Px by Py by Pz grid of processors, it allows choice of Px, Py, and Pz, subject to the constraint that Px * Py * Pz = P, the total number of processors. This is sufficient to achieve good load-balance for some problems on some processor counts. However, all the processor sub-domains will still have the same shape and same volume.

On a particular timestep, a load-balancing operation is only performed if the current "imbalance factor" in particles owned by each processor exceeds the specified thresh parameter. The imbalance factor is defined as the maximum number of particles (or weight) owned by any processor, divided by the average number of particles (or weight) per processor. Thus an imbalance factor of 1.0 is perfect balance. As an example, for 10000 particles running on 10 processors, if the most heavily loaded processor has 1200 particles, then the factor is 1.2, meaning there is a 20% imbalance. Note that re-balances can be forced even if the current balance is perfect (1.0) by specifying a thresh < 1.0.

Note: This command attempts to minimize the imbalance factor, as defined above. But depending on the method a perfect balance (1.0) may not be achieved. For example, "grid" methods (defined below) that create a logical 3d grid cannot achieve perfect balance for many irregular distributions of particles. Likewise, if a portion of the system is a perfect lattice, e.g. the initial system is generated by the create_atoms command, then "grid" methods may be unable to achieve exact balance. This is because entire lattice planes will be owned or not owned by a single processor.

Note: The imbalance factor is also an estimate of the maximum speed-up you can hope to achieve by running a perfectly balanced simulation versus an imbalanced one. In the example above, the 10000 particle simulation could run up to 20% faster if it were perfectly balanced, versus when imbalanced. However, computational cost is not strictly proportional to particle count, and changing the relative size and shape of processor sub-domains may lead to additional computational and communication overheads, e.g. in the PPPM solver used via the kspace_style command. Thus you should benchmark the run times of a simulation before and after balancing.

The method used to perform a load balance is specified by one of the listed styles, which are described in detail below. There are 2 kinds of styles. The shift style is a "grid" method which produces a logical 3d grid of processors. It operates by changing the cutting planes (or lines) between processors in 3d (or 2d), to adjust the volume (area in 2d) assigned to each processor, as in the following 2d diagram where processor sub-domains are shown and atoms are colored by the processor that owns them. The leftmost diagram is the default partitioning of the simulation box across processors (one sub-box for each of 16 processors); the middle diagram is after a "grid" method has been applied. The rcb style is a "tiling" method which does not produce a logical 3d grid of processors. Rather it tiles the simulation domain with rectangular sub-boxes of varying size and shape in an irregular fashion so as to have equal numbers of particles (or weight) in each sub-box, as in the rightmost diagram above.

The "grid" methods can be used with either of the comm_style command options, brick or tiled. The "tiling" methods can only be used with comm_style tiled. When a "grid" method is specified, the current domain partitioning can be either a logical 3d grid or a tiled partitioning. In the former case, the current logical 3d grid is used as a starting point and changes are made to improve the imbalance factor. In the latter case, the tiled partitioning is discarded and a logical 3d grid is created with uniform spacing in all dimensions. This is the starting point for the balancing operation. When a "tiling" method is specified, the current domain partitioning ("grid" or "tiled") is ignored, and a new partitioning is computed from scratch.

The group-ID is ignored. However, the impact of balancing on different groups of atoms can be affected by using the group weight style as described below.

The Nfreq setting determines how often a re-balance is performed. If Nfreq > 0, then re-balancing will occur every Nfreq steps. Each time a re-balance occurs, a reneighboring is triggered, so Nfreq should not be too small. If Nfreq = 0, then re-balancing will be done every time reneighboring normally occurs, as determined by the neighbor and neigh_modify command settings. On re-balance steps, re-balancing will only be attempted if the current imbalance factor, as defined above, exceeds the thresh setting.

The shift style invokes a "grid" method for balancing, as described above. It changes the positions of cutting planes between processors in an iterative fashion, seeking to reduce the imbalance factor. The dimstr argument is a string of characters, each of which must be an "x" or "y" or "z". Each character can appear zero or one time, since there is no advantage to balancing on a dimension more than once. You should normally only list dimensions where you expect there to be a density variation in the particles. Balancing proceeds by adjusting the cutting planes in each of the dimensions listed in dimstr, one dimension at a time. For a single dimension, the balancing operation (described below) is iterated on up to Niter times. After each dimension finishes, the imbalance factor is re-computed, and the balancing operation halts if the stopthresh criterion is met.

A re-balance operation in a single dimension is performed using a density-dependent recursive multisectioning algorithm, where the position of each cutting plane (line in 2d) in the dimension is adjusted independently. This is similar to a recursive bisectioning for a single value, except that the bounds used for each bisectioning take advantage of information from neighboring cuts if possible, as well as counts of particles at the bounds on either side of each cut, which themselves were cuts in previous iterations. The latter is used to infer a density of particles near each of the current cuts. At each iteration, the count of particles on either side of each plane is tallied. If the counts do not match the target value for the plane, the position of the cut is adjusted based on the local density. The low and high bounds are adjusted on each iteration, using new count information, so that they become closer together over time. Thus as the recursion progresses, the count of particles on either side of the plane gets closer to the target value.

The density-dependent part of this algorithm is often an advantage when you re-balance a system that is already nearly balanced. It typically converges more quickly than the geometric bisectioning algorithm used by the balance command. However, it can be a disadvantage if you attempt to re-balance a system that is far from balanced, since it then converges more slowly. In this case you probably want to use the balance command before starting a run, so that you begin the run with a balanced system.

Once the re-balancing is complete and the final processor sub-domains are assigned, particles migrate to their new owning processor as part of the normal reneighboring procedure.

Note: At each re-balance operation, the bisectioning for each cutting plane (line in 2d) typically starts with low and high bounds separated by the extent of a processor's sub-domain in one dimension. The size of this bracketing region shrinks based on the local density, as described above, typically by a factor of 2 or more every iteration. Thus if Niter is specified as 10, the cutting plane will typically be positioned to better than 1 part in 1000 accuracy (relative to the perfect target position). For Niter = 20, it will be accurate to better than 1 part in a million. Thus there is no need to set Niter to a large value. This is especially true if you are re-balancing often enough that only an incremental adjustment in the cutting planes is expected each time. LAMMPS will check whether the threshold accuracy is reached (in a dimension) in fewer iterations than Niter and exit early.

The rcb style invokes a "tiled" method for balancing, as described above. It performs a recursive coordinate bisectioning (RCB) of the simulation domain. The basic idea is as follows. The simulation domain is cut into 2 boxes by an axis-aligned cut in the longest dimension, leaving one new box on either side of the cut. All the processors are also partitioned into 2 groups, half assigned to the box on the lower side of the cut, and half to the box on the upper side. (If the processor count is odd, one side gets an extra processor.) The cut is positioned so that the number of atoms in the lower box is exactly the number that the processors assigned to that box should own for load balance to be perfect. This also makes load balance for the upper box perfect. The positioning is done iteratively, by a bisectioning method. Note that counting atoms on either side of the cut requires communication between all processors at each iteration. That is the procedure for the first cut. Subsequent cuts are made recursively, in exactly the same manner. The subset of processors assigned to each box makes a new cut in the longest dimension of that box, splitting the box, the subset of processors, and the atoms in the box in two. The recursion continues until every processor is assigned a sub-box of the entire simulation domain, and owns the atoms in that sub-box.

The out keyword writes text to the specified filename with the results of each re-balancing operation. The file contains the bounds of the sub-domain for each processor after the balancing operation completes. The format of the file is compatible with the Pizza.py mdump tool which has support for manipulating and visualizing mesh files.
An example is shown here for a balancing by 4 processors for a 2d problem: ITEM: TIMESTEP 0 ITEM: NUMBER OF NODES 16 ITEM: BOX BOUNDS 0 10 0 10 0 10 ITEM: NODES 1 1 0 0 0 2 1 5 0 0 3 1 5 5 0 4 1 0 5 0 5 1 5 0 0 6 1 10 0 0 7 1 10 5 0 8 1 5 5 0 9 1 0 5 0 10 1 5 5 0 11 1 5 10 0 12 1 10 5 0 13 1 5 5 0 14 1 10 5 0 15 1 10 10 0 16 1 5 10 0 ITEM: TIMESTEP 0 ITEM: NUMBER OF SQUARES 4 ITEM: SQUARES 1 1 1 2 3 4 2 1 5 6 7 8 3 1 9 10 11 12 4 1 13 14 15 16 The coordinates of all the vertices are listed in the NODES section, 5 per processor. Note that the 4 sub-domains share vertices, so there will be duplicate nodes in the list. The “SQUARES” section lists the node IDs of the 4 vertices in a rectangle for each processor (1 to 4). For a 3d problem, the syntax is similar with 8 vertices listed for each processor, instead of 4, and “SQUARES” replaced by “CUBES”. Restart, fix_modify, output, run start/stop, minimize info: No information about this fix is written to binary restart files. None of the fix_modify options are relevant to this fix. This fix computes a global scalar which is the imbalance factor after the most recent re-balance and a global vector of length 3 with additional information about the most recent re-balancing. The 3 values in the vector are as follows: • 1 = max # of particles per processor • 2 = total # iterations performed in last re-balance • 3 = imbalance factor right before the last re-balance was performed As explained above, the imbalance factor is the ratio of the maximum number of particles (or total weight) on any processor to the average number of particles (or total weight) per processor. These quantities can be accessed by various output commands. The scalar and vector values calculated by this fix are “intensive”. No parameter of this fix can be used with the start/stop keywords of the run command. This fix is not invoked during energy minimization. ## Restrictions For 2d simulations, the z style cannot be used. Nor can a “z” appear in dimstr for the shift style. Balancing through recursive bisectioning (rcb style) requires comm_style tiled
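To make the balancing metric and the rcb idea concrete, here is a minimal Python sketch. It is not LAMMPS source code, it omits the iterative, communication-based cut positioning described above, and the function and variable names are illustrative only; it simply computes the imbalance factor from per-processor particle counts and performs a toy serial recursive coordinate bisection of 2d points.

```python
# Illustrative sketch only -- NOT LAMMPS source code. It shows (1) the imbalance
# factor defined above and (2) a toy serial recursive coordinate bisection (RCB)
# of 2d points among a set of "processors".
import numpy as np

def imbalance_factor(counts):
    """Max per-processor particle count (or weight) divided by the average."""
    counts = np.asarray(counts, dtype=float)
    return counts.max() / counts.mean()

def rcb(points, nprocs):
    """Recursively cut the longest dimension so each group of processors
    receives a proportional share of the points (perfect-balance target)."""
    if nprocs == 1:
        return [points]
    lo = nprocs // 2                        # processors assigned to the lower box
    spans = points.max(axis=0) - points.min(axis=0)
    dim = int(np.argmax(spans))             # cut the longest dimension of this box
    order = np.argsort(points[:, dim])
    ncut = int(round(len(points) * lo / nprocs))   # target count below the cut
    return rcb(points[order[:ncut]], lo) + rcb(points[order[ncut:]], nprocs - lo)

# Example: 10000 particles with a non-uniform density, split over 16 "processors".
rng = np.random.default_rng(0)
pts = rng.random((10000, 2)) ** 2           # denser near one corner of the box
groups = rcb(pts, 16)
print(imbalance_factor([len(g) for g in groups]))   # close to 1.0 (near-perfect balance)
```

For comparison, binning the same non-uniform point cloud onto a uniform 4 by 4 processor grid (the default equal-volume partitioning) gives an imbalance factor of roughly 4, which is the kind of situation where dynamic re-balancing pays off.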
2020-06-01T17:35:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5136601328849792, "perplexity": 1268.0727787431063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00435.warc.gz"}
https://libraryguides.centennialcollege.ca/c.php?g=717286&p=5119030
# Set Theory

## Venn Diagrams

A Venn diagram is a diagram that shows all possible logical relations between a finite collection of different sets. These diagrams depict elements as points in the plane, and sets as regions inside closed curves. The rectangle represents the universal set (all elements), U, and the portion bounded by the circle represents all elements in set A. The complement of set A, written A', contains all elements that are NOT in set A, but are contained in U.

For example, if set A represents all natural numbers less than 10, or $$\{x \in \mathbb{N} \mid x < 10\}$$, and the universal set U contains all the natural numbers, then the Venn diagram will look like the following. The complement of set A in set-builder notation is represented by $$A' = \{x \mid x\in U \,\text{and}\, x\notin A\}$$

## Subsets of a Set

Set A is a subset of set B if every element of A is also an element of B. In symbols this is written $$A \subset B$$ or $$A \subseteq B$$

For example, if you are representing all the countries in the world, set A represents Finland and Greece, and set B represents all countries in Europe, then $$A \subseteq B$$ and the Venn diagram can be represented as: If set C represents all the countries in Asia, then set A is not a subset of set C, represented by the notation $$A \nsubseteq C$$

Set A is a proper subset of set B if set A is a subset of B, but not equal to B. A proper subset can be expressed $$A \subset B$$ or $$A \subseteq B$$, but a subset that is not a proper subset can only be written $$A \subseteq B$$.

For example, set {a, b, c} is a proper subset of {a, b, c, d}. Thus, it can be expressed as $$\{a, b, c\} \subset \{a, b, c, d\}$$ or $$\{a, b, c\} \subseteq \{a, b, c, d\}$$

Set {4, 7, 10} is not a proper subset of {4, 7, 10}. This can be expressed only with $$\{4, 7, 10\} \subseteq \{4, 7, 10\}$$ $$\therefore$$ the equal set is one of the subsets, but the equal set is not a proper subset.

## Number of Subsets

The number of subsets of a set with n elements is $$2^n$$. The number of proper subsets of a set with n elements is $$2^n - 1$$, where the $$-1$$ represents the one equal set being subtracted because it is not a proper subset.

Something to think about: Is the empty set $$\varnothing$$ a subset?

Example: Find the number of subsets and the number of proper subsets of the set {M, A, T, H}

Solution: There are 4 elements, so the number of subsets is $$2^4 = 16$$ and the number of proper subsets is $$2^4 - 1 = 15$$. Can you write all the subsets out? This will help you answer the question above of whether $$\varnothing$$ is a subset.

Designed by Matthew Cheung. This work is licensed under a Creative Commons Attribution 4.0 International License.
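As a quick check of the counting rule above, here is a short Python sketch (an added illustration, not part of the original guide) that enumerates every subset of {M, A, T, H}. Notice that the empty set appears in the enumeration, which answers the "something to think about" question: the empty set is a subset of every set.

```python
# Short illustration (not part of the original guide): enumerate every subset
# of {M, A, T, H} to confirm the counts 2^n and 2^n - 1 stated above.
from itertools import chain, combinations

def subsets(s):
    """Return all subsets of s, from the empty set () up to s itself."""
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

all_subsets = subsets({"M", "A", "T", "H"})
proper_subsets = [t for t in all_subsets if len(t) < 4]   # exclude the equal set

print(len(all_subsets))      # 16 = 2**4; the empty tuple () is in the list
print(len(proper_subsets))   # 15 = 2**4 - 1
```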
2021-10-20T16:19:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7000068426132202, "perplexity": 581.4801365080962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585322.63/warc/CC-MAIN-20211020152307-20211020182307-00024.warc.gz"}
http://www.riscario.com/variables
Variables
Here are the variables used in the equations on Riscario (shown alphabetically).
(1) \begin{align} i = interest \; rate \end{align}
• effective interest rate in one period
• input as a decimal (e.g., 8% is input as 0.08)
(2) \begin{align} n = number \; of \; periods \end{align}
• a period is usually a year, but could represent months or days
• can even be a decimal
Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License
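The equations themselves are not reproduced on this page, so purely as a generic illustration of how these two variables are typically combined (an assumed textbook compound-growth formula, not one quoted from Riscario), here is a small Python sketch:

```python
# Illustrative only: a generic compound-growth factor built from the variables
# defined above (i = effective interest rate per period, n = number of periods).
# This is an assumed textbook formula, not taken from the Riscario site.
def growth_factor(i: float, n: float) -> float:
    """Future value of 1 unit after n periods at effective rate i per period."""
    return (1 + i) ** n

print(growth_factor(0.08, 10))   # ~2.1589: at 8% per period, money roughly doubles in 10 periods
```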
2017-08-17T21:34:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6926332116127014, "perplexity": 6161.145940992566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104160.96/warc/CC-MAIN-20170817210535-20170817230535-00401.warc.gz"}
https://par.nsf.gov/biblio/10372918-latest-results-from-cuore-experiment
Latest Results from the CUORE Experiment

Abstract The Cryogenic Underground Observatory for Rare Events (CUORE) is the first cryogenic experiment searching for $$0\nu \beta \beta$$ decay that has been able to reach the one-tonne mass scale. The detector, located at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy, consists of an array of 988 $$\mathrm{TeO}_{2}$$ crystals arranged in a compact cylindrical structure of 19 towers. CUORE began its first physics data run in 2017 at a base temperature of about 10 mK and in April 2021 released its third result of the search for $$0\nu \beta \beta$$, corresponding to a tonne-year of $$\mathrm{TeO}_{2}$$ exposure. This is the largest amount of data ever acquired with a solid state detector and the most sensitive measurement of $$0\nu \beta \beta$$ decay in $${}^{130}\mathrm{Te}$$ ever conducted. We present the current status of CUORE's search for $$0\nu \beta \beta$$ with the updated statistics of one tonne-yr. We finally give an update of the CUORE background model and the measurement of the $${}^{130}\mathrm{Te}$$ $$2\nu \beta \beta$$ decay half-life and decay to excited states of $${}^{130}\mathrm{Xe}$$, studies performed using an exposure of 300.7 kg yr.

Authors: ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; more » Publication Date: NSF-PAR ID: 10372918 Journal Name: Journal of Low Temperature Physics Volume: 209 Issue: 5-6 Page Range or eLocation-ID: p. 927-935 ISSN: 0022-2291 Publisher: National Science Foundation

##### More Like this

1. Abstract We present the first unquenched lattice-QCD calculation of the form factors for the decay $$B\rightarrow D^*\ell \nu$$ at nonzero recoil. Our analysis includes 15 MILC ensembles with $$N_f=2+1$$ flavors of asqtad sea quarks, with a strange quark mass close to its physical mass. The lattice spacings range from $$a\approx 0.15$$ fm down to 0.045 fm, while the ratio between the light- and the strange-quark masses ranges from 0.05 to 0.4. The valence b and c quarks are treated using the Wilson-clover action with the Fermilab interpretation, whereas the light sector employs asqtad staggered fermions. We extrapolate our results to the physical point in the continuum limit using rooted staggered heavy-light meson chiral perturbation theory. Then we apply a model-independent parametrization to extend the form factors to the full kinematic range. With this parametrization we perform a joint lattice-QCD/experiment fit using several experimental datasets to determine the CKM matrix element $$|V_{cb}|$$. We obtain $$\left| V_{cb}\right| = (38.40 \pm 0.68_{\text{th}} \pm 0.34_{\text{exp}} \pm 0.18_{\text{EM}})\times 10^{-3}$$. The first error is theoretical, the second comes from experiment and the last one includes electromagnetic and electroweak uncertainties, with an overall $$\chi^{2}/\text{dof} = 126/84$$, which illustrates the tensions between the experimental data sets, and between theory and experiment. This result is in more »

2. Abstract The CUORE experiment is a large bolometric array searching for the lepton number violating neutrino-less double beta decay ($$0\nu \beta \beta$$) in the isotope $$\mathrm{^{130}Te}$$. In this work we present the latest results on two searches for the double beta decay (DBD) of $$\mathrm{^{130}Te}$$ to the first $$0^{+}_{2}$$ excited state of $$\mathrm{^{130}Xe}$$: the $$0\nu \beta \beta$$ decay and the Standard Model-allowed two-neutrino double beta decay ($$2\nu \beta \beta$$). Both searches are based on a 372.5 kg $$\times$$ yr TeO$$_2$$ exposure. The de-excitation gamma rays emitted by the excited Xe nucleus in the final state yield a unique signature, which can be searched for with low background by studying coincident events in two or more bolometers. The closely packed arrangement of the CUORE crystals constitutes a significant advantage in this regard. The median limit setting sensitivities at 90% Credible Interval (C.I.) of the given searches were estimated as $$\mathrm{S^{0\nu}_{1/2} = 5.6 \times 10^{24}\,yr}$$ more »

3. Abstract Hemiwicking is the phenomenon where a liquid wets a textured surface beyond its intrinsic wetting length due to capillary action and imbibition. In this work, we derive a simple analytical model for hemiwicking in micropillar arrays. The model is based on the combined effects of capillary action dictated by interfacial and intermolecular pressure gradients within the curved liquid meniscus and fluid drag from the pillars at ultra-low Reynolds numbers $$(10^{-7} \lesssim \mathrm{Re} \lesssim 10^{-3})$$. Fluid drag is conceptualized via a critical Reynolds number: $$\mathrm{Re} = \frac{v_{0} x_{0}}{\nu}$$, where $$v_{0}$$ corresponds to the maximum wetting speed on a flat, dry surface and $$x_{0}$$ is the extension length of the liquid meniscus that drives the bulk fluid toward the adsorbed thin-film region. The model is validated with wicking experiments on different hemiwicking surfaces in conjunction with $$v_{0}$$ and $$x_{0}$$ measurements using Water $$(v_{0} \approx 2\,\mathrm{m/s},\ 25\,\mu\mathrm{m} \lesssim x_{0} \lesssim 28\,\mu\mathrm{m})$$, viscous FC-70 $$(v_{0} \approx 0.3\,\mathrm{m/s},\ 18.6\,\mu\mathrm{m} \lesssim x_{0} \lesssim 38.6\,\mu\mathrm{m})$$ and lower viscosity Ethanol $$(v_{0} \approx 1.2\,\mathrm{m/s},\ 11.8\,\mu\mathrm{m} \lesssim x_{0} \lesssim 33.3\,\mu\mathrm{m})$$.

4. Abstract It has been recently established in David and Mayboroda (Approximation of green functions and domains with uniformly rectifiable boundaries of all dimensions. arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the "flagship" degenerate operators on sets with lower dimensional boundaries. We consider the elliptic operators $$L_{\beta ,\gamma } = -{\text{div}}\, D^{d+1+\gamma -n} \nabla$$ associated to a domain $$\Omega \subset {\mathbb{R}}^n$$ with a uniformly rectifiable boundary $$\Gamma$$ of dimension $$d < n-1$$, the now usual distance to the boundary $$D = D_\beta$$ given by $$D_\beta (X)^{-\beta } = \int_{\Gamma } |X-y|^{-d-\beta }\, d\sigma (y)$$ for $$X \in \Omega$$, where $$\beta >0$$ and $$\gamma \in (-1,1)$$. In this paper we show that the Green function G for $$L_{\beta ,\gamma }$$, with pole at infinity, is well approximated by multiples of $$D^{1-\gamma }$$, in the sense that the function $$\big | D\nabla \big (\ln \big ( \frac{G}{D^{1-\gamma }} \big )\big )\big |^2$$ satisfies a Carleson measure estimate on $$\Omega$$. We underline that the strong and the weak results are different in nature and, of course, at the level more »

5. Abstract Two-dimensional electron systems subjected to high transverse magnetic fields can exhibit Fractional Quantum Hall Effects (FQHE). In the GaAs/AlGaAs 2D electron system, a double degeneracy of Landau levels due to electron spin is removed by a small Zeeman spin splitting, $$g \mu _B B$$, comparable to the correlation energy. Then, a change of the Zeeman splitting relative to the correlation energy can lead to a re-ordering between spin polarized, partially polarized, and unpolarized many body ground states at a constant filling factor. We show here that tuning the spin energy can produce fractionally quantized Hall effect transitions that include both a change in $$\nu$$ for the $$R_{xx}$$ minimum, e.g., from $$\nu = 11/7$$ to $$\nu = 8/5$$, and a corresponding change in the $$R_{xy}$$, e.g., from $$R_{xy}/R_{K} = (11/7)^{-1}$$ to $$R_{xy}/R_{K} = (8/5)^{-1}$$, with increasing tilt angle. Further, we exhibit a striking size dependence in the tilt angle interval for the vanishing of the $$\nu = 4/3$$ and $$\nu = 7/5$$ resistance minima, including "avoided crossing" type lineshape characteristics, and observable shifts of $$R_{xy}$$ at the $$R_{xx}$$ minima, the latter occurring for $$\nu = 4/3, 7/5$$ and the 10/7. The results demonstrate both size dependence and the possibility, not just of competition between different spin polarized states at the same $$\nu$$ and $$R_{xy}$$, but also the tilt- or Zeeman-energy-dependent crossover between distinct FQHE associated with more »
2023-03-31T09:57:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 51, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6866009831428528, "perplexity": 2304.7418532250927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00399.warc.gz"}
https://pdglive.lbl.gov/Particle.action?init=0&node=B189&home=BXXX020
${{\boldsymbol \Lambda}}$ BARYONS ($\boldsymbol S$ = $-1$, $\boldsymbol I$ = 0) ${{\mathit \Lambda}^{0}}$ = ${\mathit {\mathit u}}$ ${\mathit {\mathit d}}$ ${\mathit {\mathit s}}$
# ${{\boldsymbol \Lambda}{(1380)}}$
See the related review on "Pole Structure of the ${{\mathit \Lambda}{(1405)}}$ Region."
${{\boldsymbol \Lambda}{(1380)}}$ POLE POSITION
REAL PART
$-2{\times}$ IMAGINARY PART
2021-04-11T12:06:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499057531356812, "perplexity": 4036.1140531176216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038062492.5/warc/CC-MAIN-20210411115126-20210411145126-00031.warc.gz"}
https://par.nsf.gov/biblio/10193583-clustering-jwst-constraining-galaxy-host-halo-masses-satellite-quenching-efficiencies-merger-rates
Clustering with JWST: Constraining galaxy host halo masses, satellite quenching efficiencies, and merger rates at z = 4−10 ABSTRACT Galaxy clustering measurements can be used to constrain many aspects of galaxy evolution, including galaxy host halo masses, satellite quenching efficiencies, and merger rates. We simulate JWST galaxy clustering measurements at z ∼ 4–10 by utilizing mock galaxy samples produced by an empirical model, the universemachine. We also adopt the survey footprints and typical depths of the planned joint NIRCam and NIRSpec Guaranteed Time Observation program planned for Cycle 1 to generate realistic JWST survey realizations and to model high-redshift galaxy selection completeness. We find that galaxy clustering will be measured with ≳5σ significance at z ∼ 4–10. Halo mass precisions resulting from Cycle 1 angular clustering measurements will be ∼0.2 dex for faint (−18 ≳ $M_{\mathrm{UV}}$ ≳ −19) galaxies at z ∼ 4–10 as well as ∼0.3 dex for bright ($M_{\mathrm{UV}}$ ∼ −20) galaxies at z ∼ 4–7. Dedicated spectroscopic follow-up over ∼150 arcmin² would improve these precisions by ∼0.1 dex by removing chance projections and low-redshift contaminants. Future JWST observations will therefore provide the first constraints on the stellar–halo mass relation in the epoch of reionization and substantially clarify how this relation evolves at z > 4. We also find that ∼1000 individual satellites will be identifiable at z ∼ 4–8 with JWST, enabling strong tests of satellite quenching … NSF-PAR ID: 10193583 Journal Name: Monthly Notices of the Royal Astronomical Society Volume: 493 Issue: 1 Page Range or eLocation-ID: 1178 to 1196 ISSN: 0035-8711 5. ABSTRACT We study the projected spatial offset between the ultraviolet continuum and Ly α emission for 65 lensed and unlensed galaxies in the Epoch of Reionization (5 ≤ z ≤ 7), the first such study at these redshifts, in order to understand the potential for these offsets to confuse estimates of the Ly α properties of galaxies observed in slit spectroscopy. While we find that ∼40 per cent of galaxies in our sample show significant projected spatial offsets ($|\Delta _{\rm {Ly}\alpha -\rm {UV}}|$), we find a relatively modest average projected offset of $|\widetilde{\Delta }_{\rm {Ly}\alpha -\rm {UV}}|$ = 0.61 ± 0.08 proper kpc for the entire sample. A small fraction of our sample, ∼10 per cent, exhibit offsets in excess of 2 proper kpc, with offsets seen up to ∼4 proper kpc, sizes that are considerably larger than the effective radii of typical galaxies at these redshifts. An internal comparison and a comparison to studies at lower redshift yielded no significant evidence of evolution of $|\Delta _{\rm {Ly}\alpha -\rm {UV}}|$ with redshift. In our sample, ultraviolet (UV)-bright galaxies ($\widetilde{L_{\mathrm{ UV}}}/L^{\ast }_{\mathrm{ UV}}=0.67$) showed offsets a factor of three greater than their fainter counterparts ($\widetilde{L_{\mathrm{ UV}}}/L^{\ast }_{\mathrm{ UV}}=0.10$), 0.89 ± 0.18 versus 0.27 ± 0.05 proper kpc, respectively. The presence of companion galaxies and early stage merging activity …
2023-01-31T03:40:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44770348072052, "perplexity": 3824.5376974443925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00121.warc.gz"}
https://pos.sissa.it/283/071/
Volume 283 - Neutrino Oscillation Workshop (NOW2016) - Session IV: Neutrino masses, states, interactions NEXT G. Martinez* on behalf of the NEXT Collaboration *corresponding author Full text: pdf Pre-published on: February 21, 2017 Published on: June 20, 2017 Abstract The discovery of neutrinoless double beta decay is one of the main targets of particle physics nowadays. It would demonstrate that neutrinos are Majorana particles, which does not fit in the Standard Model as it currently stands. NEXT is an experiment aiming at such a discovery. The detector, a high-pressure xenon TPC, provides two remarkable features for searching for this decay: an excellent energy resolution (below 1% FWHM at $Q_{\beta\beta}$) and a topological signature for background rejection. This detection technique makes it possible to reach a sensitivity to $T_{1/2}$ of $5 \cdot 10^{25}$ y with an exposure of 300 kg $\cdot$ y, competitive with the leading experiments in the field. DOI: https://doi.org/10.22323/1.283.0071 Open Access Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
2020-11-30T10:37:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3708067834377289, "perplexity": 2659.621197487526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141213431.41/warc/CC-MAIN-20201130100208-20201130130208-00542.warc.gz"}
https://www.ctcms.nist.gov/potentials/iprPy/calculation/dislocation_SDVPN/calc.html
# calc_dislocation_SDVPN.py ## Calculation script functions main(*args) Main function called when script is executed directly. peierlsnabarro(ucell, C, burgers, ξ_uvw, slip_hkl, gamma, m=[0, 1, 0], n=[0, 0, 1], cutofflongrange=1000.0, tau=array([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]), alpha=[0.0], beta=array([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]), cdiffelastic=False, cdiffsurface=True, cdiffstress=False, fullstress=True, halfwidth=1.0, normalizedisreg=True, xnum=None, xmax=None, xstep=None, xscale=False, min_method='Powell', min_options={}, min_cycles=10) Solves a Peierls-Nabarro dislocation model. Parameters • ucell (atomman.System) – The unit cell to use as the seed for the dislocation system. Note that only box information is used and not atomic positions. • C (atomman.ElasticConstants) – The elastic constants associated with the bulk crystal structure for ucell. • burgers (array-like object) – The dislocation’s Burgers vector given as a Miller or Miller-Bravais vector relative to ucell. • ξ_uvw (array-like object) – The dislocation’s line direction given as a Miller or Miller-Bravais vector relative to ucell. • slip_hkl (array-like object) – The dislocation’s slip plane given as a Miller or Miller-Bravais plane relative to ucell. • m (array-like object, optional) – The m unit vector for the dislocation solution. m, n, and ξ (dislocation line) should be right-hand orthogonal. Default value is [0,1,0] (y-axis). • n (array-like object, optional) – The n unit vector for the dislocation solution. m, n, and ξ (dislocation line) should be right-hand orthogonal. Default value is [0,0,1] (z-axis). n is normal to the dislocation slip plane. • cutofflongrange (float, optional) – The cutoff distance to use for computing the long-range energy. Default value is 1000 angstroms. • tau (numpy.ndarray, optional) – A (3,3) array giving the stress tensor to apply to the system using the stress energy term. Only the xy, yy, and yz components are used. Default value is all zeros. • alpha (list of float, optional) – The alpha coefficient(s) used by the nonlocal energy term. Default value is [0.0]. • beta (numpy.ndarray, optional) – The (3,3) array of beta coefficient(s) used by the surface energy term. Default value is all zeros. • cdiffelastic (bool, optional) – Flag indicating if the dislocation density for the elastic energy component is computed with central difference (True) or simply neighboring values (False). Default value is False. • cdiffsurface (bool, optional) – Flag indicating if the dislocation density for the surface energy component is computed with central difference (True) or simply neighboring values (False). Default value is True. • cdiffstress (bool, optional) – Flag indicating if the dislocation density for the stress energy component is computed with central difference (True) or simply neighboring values (False). Only matters if fullstress is True. Default value is False. • fullstress (bool, optional) – Flag indicating which stress energy algorithm to use. Default value is True. • halfwidth (float, optional) – A dislocation halfwidth guess to use for generating the initial disregistry guess. Does not have to be accurate, but the better the guess the fewer minimization steps will likely be needed. Default value is 1 Angstrom. • normalizedisreg (bool, optional) – If True, the initial disregistry guess will be scaled such that it will have a value of 0 at the minimum x and a value of burgers at the maximum x. Default value is True. 
Note: the disregistry of end points are fixed, thus True is usually preferential. • xnum (int, optional) – The number of x value points to use for the solution. Two of xnum, xmax, and xstep must be given. • xmax (float, optional) – The maximum value of x to use. Note that the minimum x value will be -xmax, thus the range of x will be twice xmax. Two of xnum, xmax, and xstep must be given. • xstep (float, optional) – The delta x value to use, i.e. the step size between the x values used. Two of xnum, xmax, and xstep must be given. • xscale (bool, optional) – Flag indicating if xmax and/or xstep values are to be taken as absolute or relative to ucell’s a lattice parameter. Default value is False, i.e. the x parameters are absolute and not scaled. • min_method (str, optional) – The scipy.optimize.minimize method to use. Default value is ‘Powell’. • min_options (dict, optional) – Any options to pass on to scipy.optimize.minimize. Default value is {}. • min_cycles (int, optional) – The number of minimization runs to perform on the system. Restarting after obtaining a solution can help further refine to the best pathway. Default value is 10. process_input(input_dict, UUID=None, build=True) Processes str input parameters, assigns default values if needed, and generates new, more complex terms as used by the calculation. Parameters • input_dict (dict) – Dictionary containing the calculation input parameters with string values. The allowed keys depends on the calculation style. • UUID (str, optional) – Unique identifier to use for the calculation instance. If not given and a ‘UUID’ key is not in input_dict, then a random UUID4 hash tag will be assigned. • build (bool, optional) – Indicates if all complex terms are to be built. A value of False allows for default values to be assigned even if some inputs required by the calculation are incomplete. (Default is True.)
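For orientation, here is a minimal usage sketch of the peierlsnabarro() solver based only on the signature documented above. The import path, the crystallographic vectors, and the pre-built ucell (atomman.System), C (atomman.ElasticConstants), and gamma (gamma-surface) objects are illustrative assumptions, not part of the official iprPy workflow:

```python
# Hypothetical sketch: call the documented peierlsnabarro() solver.
# ucell, C and gamma are assumed to be built elsewhere with atomman;
# the vectors below are illustrative values, not recommended inputs.
from calc_dislocation_SDVPN import peierlsnabarro  # assumed import path

def solve_pn(ucell, C, gamma):
    """Solve the PN model for an example fcc-like slip system."""
    return peierlsnabarro(
        ucell, C,
        burgers=[0.5, -0.5, 0.0],   # Burgers vector in Miller indices (illustrative)
        ξ_uvw=[1, 1, -2],           # dislocation line direction (illustrative)
        slip_hkl=[1, 1, 1],         # slip plane (illustrative)
        gamma=gamma,                # gamma-surface object for the slip plane
        xnum=500, xmax=200.0,       # two of xnum / xmax / xstep must be given
        halfwidth=5.0,              # initial disregistry half-width guess (Angstroms)
        min_cycles=10)              # minimization restarts, per the default above
```

The return value is not described in the excerpt above, so it is left unannotated here; see the full iprPy documentation for what the solution object contains.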
2022-01-25T08:38:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4555186927318573, "perplexity": 6317.123538436382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304798.1/warc/CC-MAIN-20220125070039-20220125100039-00551.warc.gz"}
http://pdglive.lbl.gov/ParticleGroup.action;jsessionid=68D2A60239DEA8F84F4391AE36762E48?init=0&node=MXXX005
# LIGHT UNFLAVORED MESONS ($\mathit S$ = $\mathit C$ = $\mathit B$ = 0) For $\mathit I = 1$ (${{\mathit \pi}}$, ${{\mathit b}}$, ${{\mathit \rho}}$, ${{\mathit a}}$): ${\mathit {\mathit u}}$ ${\mathit {\overline{\mathit d}}}$, ( ${\mathit {\mathit u}}$ ${\mathit {\overline{\mathit u}}}−$ ${\mathit {\mathit d}}$ ${\mathit {\overline{\mathit d}}})/\sqrt {2 }$, ${\mathit {\mathit d}}$ ${\mathit {\overline{\mathit u}}}$; for $\mathit I = 0$ (${{\mathit \eta}}$, ${{\mathit \eta}^{\,'}}$, ${{\mathit h}}$, ${{\mathit h}^{\,'}}$, ${{\mathit \omega}}$, ${{\mathit \phi}}$, ${{\mathit f}}$, ${{\mathit f}^{\,'}}$): ${\mathit {\mathit c}}_{{\mathrm {1}}}$( ${{\mathit u}}{{\overline{\mathit u}}}$ $+$ ${{\mathit d}}{{\overline{\mathit d}}}$ ) $+$ ${\mathit {\mathit c}}_{{\mathrm {2}}}$( ${{\mathit s}}{{\overline{\mathit s}}}$ ) Reviews: Form Factors for Radiative Pion and Kaon Decays (rev.) Note on Scalar Mesons below 2 GeV (rev.) The ${{\mathit \rho}{(770)}}$ The Pseudoscalar and Pseudovector Mesons in the 1400 MeV Region (rev.) The ${{\mathit \rho}{(1450)}}$ and ${{\mathit \rho}{(1700)}}$ ${{\mathit \pi}^{\pm}}$ ${{\mathit \pi}^{0}}$ ${{\mathit \eta}}$ ${{\mathit f}_{{0}}{(500)}}~$or ${{\mathit \sigma}}~$was ${{\mathit f}_{{0}}{(600)}}$ ${{\mathit \rho}{(770)}}$ ${{\mathit \omega}{(782)}}$ ${{\mathit \eta}^{\,'}{(958)}}$ ${{\mathit f}_{{0}}{(980)}}$ ${{\mathit a}_{{0}}{(980)}}$ ${{\mathit \phi}{(1020)}}$ ${{\mathit h}_{{1}}{(1170)}}$ ${{\mathit b}_{{1}}{(1235)}}$ ${{\mathit a}_{{1}}{(1260)}}$ ${{\mathit f}_{{2}}{(1270)}}$ ${{\mathit f}_{{1}}{(1285)}}$ ${{\mathit \eta}{(1295)}}$ ${{\mathit \pi}{(1300)}}$ ${{\mathit a}_{{2}}{(1320)}}$ ${{\mathit f}_{{0}}{(1370)}}$ ${{\mathit h}_{{1}}{(1380)}}$ ${{\mathit \pi}_{{1}}{(1400)}}$ ${{\mathit \eta}{(1405)}}$ ${{\mathit f}_{{1}}{(1420)}}$ ${{\mathit \omega}{(1420)}}$ ${{\mathit f}_{{2}}{(1430)}}$ ${{\mathit a}_{{0}}{(1450)}}$ ${{\mathit \rho}{(1450)}}$ ${{\mathit \eta}{(1475)}}$ ${{\mathit f}_{{0}}{(1500)}}$ ${{\mathit f}_{{1}}{(1510)}}$ ${{\mathit f}_{{2}}^{\,'}{(1525)}}$ ${{\mathit f}_{{2}}{(1565)}}$ ${{\mathit \rho}{(1570)}}$ ${{\mathit h}_{{1}}{(1595)}}$ ${{\mathit \pi}_{{1}}{(1600)}}$ ${{\mathit a}_{{1}}{(1640)}}$ ${{\mathit f}_{{2}}{(1640)}}$ ${{\mathit \eta}_{{2}}{(1645)}}$ ${{\mathit \omega}{(1650)}}$ ${{\mathit \omega}_{{3}}{(1670)}}$ ${{\mathit \pi}_{{2}}{(1670)}}$ ${{\mathit \phi}{(1680)}}$ ${{\mathit \rho}_{{3}}{(1690)}}$ ${{\mathit \rho}{(1700)}}$ ${{\mathit a}_{{2}}{(1700)}}$ ${{\mathit f}_{{0}}{(1710)}}$ ${{\mathit \eta}{(1760)}}$ ${{\mathit \pi}{(1800)}}$ ${{\mathit f}_{{2}}{(1810)}}$ ${{\mathit X}{(1835)}}$ ${{\mathit X}{(1840)}}$ ${{\mathit a}_{{1}}{(1420)}}$ ${{\mathit \phi}_{{3}}{(1850)}}$ ${{\mathit \eta}_{{2}}{(1870)}}$ ${{\mathit \pi}_{{2}}{(1880)}}$ ${{\mathit \rho}{(1900)}}$ ${{\mathit f}_{{2}}{(1910)}}$ ${{\mathit a}_{{0}}{(1950)}}$ ${{\mathit f}_{{2}}{(1950)}}$ ${{\mathit \rho}_{{3}}{(1990)}}$ ${{\mathit f}_{{2}}{(2010)}}$ ${{\mathit f}_{{0}}{(2020)}}$ ${{\mathit a}_{{4}}{(2040)}}$ ${{\mathit f}_{{4}}{(2050)}}$ ${{\mathit \pi}_{{2}}{(2100)}}$ ${{\mathit f}_{{0}}{(2100)}}$ ${{\mathit f}_{{2}}{(2150)}}$ ${{\mathit \rho}{(2150)}}$ ${{\mathit \phi}{(2170)}}$ ${{\mathit f}_{{0}}{(2200)}}$ ${{\mathit f}_{{J}}{(2220)}}$ ${{\mathit \eta}{(2225)}}$ ${{\mathit \rho}_{{3}}{(2250)}}$ ${{\mathit f}_{{2}}{(2300)}}$ ${{\mathit f}_{{4}}{(2300)}}$ ${{\mathit f}_{{0}}{(2330)}}$ ${{\mathit f}_{{2}}{(2340)}}$ ${{\mathit \rho}_{{5}}{(2350)}}$ ${{\mathit a}_{{6}}{(2450)}}$ ${{\mathit f}_{{6}}{(2510)}}$
2017-12-17T07:52:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962884783744812, "perplexity": 54.3526493899823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948594665.87/warc/CC-MAIN-20171217074303-20171217100303-00431.warc.gz"}
https://math.libretexts.org/Bookshelves/Pre-Algebra/Book%3A_Prealgebra_(OpenStax)/2%3A_Introduction_to_the_Language_of_Algebra/2.5%3A_Prime_Factorization_and_the_Least_Common_Multiple_(Part_2)
# 2.5: Prime Factorization and the Least Common Multiple (Part 2) ## Find the Least Common Multiple (LCM) of Two Numbers One of the reasons we look at multiples and primes is to use these techniques to find the least common multiple of two numbers. This will be useful when we add and subtract fractions with different denominators. ### Listing Multiples Method A common multiple of two numbers is a number that is a multiple of both numbers. Suppose we want to find common multiples of 10 and 25. We can list the first several multiples of each number. Then we look for multiples that are common to both lists—these are the common multiples. $\begin{split} 10 & \colon \; 10, 20, 30, 40, \textbf{50}, 60, 70, 80, 90, \textbf{100}, 110, \ldots \\ 25 & \colon \; 25, \textbf{50}, 75, \textbf{100}, 125, \ldots \end{split} \nonumber$ We see that $$50$$ and $$100$$ appear in both lists. They are common multiples of $$10$$ and $$25$$. We would find more common multiples if we continued the list of multiples for each. The smallest number that is a multiple of two numbers is called the least common multiple (LCM). So the LCM of $$10$$ and $$25$$ is $$50$$. HOW TO: FIND THE LEAST COMMON MULTIPLE (LCM) OF TWO NUMBERS BY LISTING MULTIPLES Step 1. List the first several multiples of each number. Step 2. Look for multiples common to both lists. If there are no common multiples in the lists, write out additional multiples for each number. Step 3. Look for the smallest number that is common to both lists. Step 4. This number is the LCM. Example $$\PageIndex{5}$$: lcm Find the LCM of $$15$$ and $$20$$ by listing multiples. Solution List the first several multiples of $$15$$ and of $$20$$. Identify the first common multiple. $\begin{split}15 & \colon \; 15, 30, 45, \textbf{60}, 75, 90, 105, 120 \\ 20 & \colon \; 20, 40, \textbf{60}, 80, 100, 120, 140, 160 \end{split} \nonumber$ The smallest number to appear on both lists is $$60$$, so $$60$$ is the least common multiple of $$15$$ and $$20$$. Notice that $$120$$ is on both lists, too. It is a common multiple, but it is not the least common multiple. Exercise $$\PageIndex{9}$$ Find the least common multiple (LCM) of the given numbers: $$9$$ and $$12$$ $$36$$ Exercise $$\PageIndex{10}$$ Find the least common multiple (LCM) of the given numbers: $$18$$ and $$24$$ $$72$$ ### Prime Factors Method Another way to find the least common multiple of two numbers is to use their prime factors. We’ll use this method to find the LCM of $$12$$ and $$18$$. We start by finding the prime factorization of each number. $12 = 2 \cdot 2 \cdot 3 \qquad \qquad 18 = 2 \cdot 3 \cdot 3 \nonumber$ Then we write each number as a product of primes, matching primes vertically when possible. $\begin{split} 12 & = 2 \cdot 2 \cdot 3 \\ 18 & = 2 \cdot \quad \; 3 \cdot 3 \end{split} \nonumber$ Now we bring down the primes in each column. The LCM is the product of these factors. Notice that the prime factors of $$12$$ and the prime factors of $$18$$ are included in the LCM. By matching up the common primes, each common prime factor is used only once. This ensures that $$36$$ is the least common multiple. (A short programmatic sketch of this method is given at the end of this section.) HOW TO: FIND THE LCM USING THE PRIME FACTORS METHOD Step 1. Find the prime factorization of each number. Step 2. Write each number as a product of primes, matching primes vertically when possible. Step 3.
Bring down the primes in each column. Step 4. Multiply the factors to get the LCM. Example $$\PageIndex{6}$$: lcm Find the LCM of $$15$$ and $$18$$ using the prime factors method. Solution Write each number as a product of primes. $$15 = 3 \cdot 5 \qquad \qquad 18 = 2 \cdot 3 \cdot 3$$ Write each number as a product of primes, matching primes vertically when possible. $$\begin{split} 15 & = \quad \; 3 \cdot \qquad 5 \\ 18 & = 2 \cdot 3 \cdot 3 \end{split}$$ Bring down the primes in each column. Multiply the factors to get the LCM. LCM = 2 • 3 • 3 • 5 The LCM of 15 and 18 is 90. Exercise $$\PageIndex{11}$$ Find the LCM using the prime factors method: $$15$$ and $$20$$ $$60$$ Exercise $$\PageIndex{12}$$ Find the LCM using the prime factors method: $$15$$ and $$35$$ $$105$$ Example $$\PageIndex{7}$$: lcm Find the LCM of $$50$$ and $$100$$ using the prime factors method. Solution Write the prime factorization of each number. $$50 = 2 \cdot 5 \cdot 5 \qquad 100 = 2 \cdot 2 \cdot 5 \cdot 5$$ Write each number as a product of primes, matching primes vertically when possible. $$\begin{split} 50 & = \quad \; 2 \cdot 5 \cdot 5 \\ 100 & = 2 \cdot 2 \cdot 5 \cdot 5 \end{split}$$ Bring down the primes in each column. Multiply the factors to get the LCM. LCM = 2 • 2 • 5 • 5 The LCM of 50 and 100 is 100. Exercise $$\PageIndex{13}$$ Find the LCM using the prime factors method: $$55, 88$$ $$440$$ Exercise $$\PageIndex{14}$$ Find the LCM using the prime factors method: $$60, 72$$ $$360$$ ## Key Concepts • Find the prime factorization of a composite number using the tree method. • Find any factor pair of the given number, and use these numbers to create two branches. • If a factor is prime, that branch is complete. Circle the prime. • If a factor is not prime, write it as the product of a factor pair and continue the process. • Write the composite number as the product of all the circled primes. • Find the prime factorization of a composite number using the ladder method. • Divide the number by the smallest prime. • Continue dividing by that prime until it no longer divides evenly. • Divide by the next prime until it no longer divides evenly. • Continue until the quotient is a prime. • Write the composite number as the product of all the primes on the sides and top of the ladder. • Find the LCM by listing multiples. • List the first several multiples of each number. • Look for multiples common to both lists. If there are no common multiples in the lists, write out additional multiples for each number. • Look for the smallest number that is common to both lists. • This number is the LCM. • Find the LCM using the prime factors method. • Find the prime factorization of each number. • Write each number as a product of primes, matching primes vertically when possible. • Bring down the primes in each column. • Multiply the factors to get the LCM. ## Glossary least common multiple The smallest number that is a multiple of two numbers is called the least common multiple (LCM). prime factorization The prime factorization of a number is the product of prime numbers that equals the number. ## Practice Makes Perfect ### Find the Prime Factorization of a Composite Number In the following exercises, find the prime factorization of each number using the factor tree method. 1. 86 2. 78 3. 132 4. 455 5. 693 6. 420 7. 115 8. 225 9. 2475 10. 1560 In the following exercises, find the prime factorization of each number using the ladder method. 1. 56 2. 72 3. 168 4. 252 5. 391 6. 400 7. 432 8. 627 9. 2160 10. 
2520 In the following exercises, find the prime factorization of each number using any method. 1. 150 2. 180 3. 525 4. 444 5. 36 6. 50 7. 350 8. 144 ### Find the Least Common Multiple (LCM) of Two Numbers In the following exercises, find the least common multiple (LCM) by listing multiples. 1. 8, 12 2. 4, 3 3. 6, 15 4. 12, 16 5. 30, 40 6. 20, 30 7. 60, 75 8. 44, 55 In the following exercises, find the least common multiple (LCM) by using the prime factors method. 1. 8, 12 2. 12, 16 3. 24, 30 4. 28, 40 5. 70, 84 6. 84, 90 In the following exercises, find the least common multiple (LCM) using any method. 1. 6, 21 2. 9, 15 3. 24, 30 4. 32, 40 ## Everyday Math 1. Grocery shopping Hot dogs are sold in packages of ten, but hot dog buns come in packs of eight. What is the smallest number of hot dogs and buns that can be purchased if you want to have the same number of hot dogs and buns? (Hint: it is the LCM!) 2. Grocery shopping Paper plates are sold in packages of 12 and party cups come in packs of 8. What is the smallest number of plates and cups you can purchase if you want to have the same number of each? (Hint: it is the LCM!) ## Writing Exercises 1. Do you prefer to find the prime factorization of a composite number by using the factor tree method or the ladder method? Why? 2. Do you prefer to find the LCM by listing multiples or by using the prime factors method? Why? ## Self Check (a) After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. (b) Overall, after looking at the checklist, do you think you are well-prepared for the next Chapter? Why or why not? ## Contributors • Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (formerly of Santa Ana College). This content produced by OpenStax and is licensed under a Creative Commons Attribution License 4.0 license.
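To complement the hand methods above, here is a short Python sketch of the prime-factors LCM method described in this section (the helper names are made up for illustration); it reproduces the worked examples, e.g. LCM(15, 18) = 90 and LCM(50, 100) = 100:

```python
from collections import Counter
from math import prod

def prime_factors(n):
    """Return the prime factorization of n as a Counter, e.g. 12 -> {2: 2, 3: 1}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime completely, as in the ladder method
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors[n] += 1
    return factors

def lcm(a, b):
    """LCM by the prime-factors method: take each prime to its highest power."""
    fa, fb = prime_factors(a), prime_factors(b)
    return prod(p ** max(fa[p], fb[p]) for p in set(fa) | set(fb))

print(lcm(15, 18))   # 90, matching Example 6
print(lcm(50, 100))  # 100, matching Example 7
```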
2019-08-20T10:46:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7069724798202515, "perplexity": 531.6422453439511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315321.52/warc/CC-MAIN-20190820092326-20190820114326-00527.warc.gz"}
https://wwwn.cdc.gov/nchs/nhanes/tutorials/Module6.aspx
Module 6: Descriptive Statistics The NHANES Tutorials are currently being reviewed and revised, and are subject to change. Specialized tutorials (e.g. Dietary, etc.) will be included in the future. NHANES data are often used to provide national estimates on important public health issues. This module introduces how to generate the descriptive statistics for NHANES data that are most often used to obtain these estimates. Topics covered in this module include checking frequency distribution and normality, generating percentiles, generating means, and generating proportions. It is highly recommended that you examine the frequency distribution and normality of the data before starting any analysis. These descriptive statistics are useful in determining whether parametric or non-parametric methods are appropriate to use, and whether you need to recode or transform data to account for extreme values and outliers. Frequency Distribution A frequency distribution shows the number of individuals located in each category of a categorical variable. For continuous variables, frequencies are displayed for values that appear at least one time in the dataset. Frequency distributions provide an organized picture of the data, and allow you to see how individual scores are distributed on a specified scale of measurement. For instance, a frequency distribution shows whether the data values are generally high or low, and whether they are concentrated in one area or spread out across the entire measurement scale. A frequency distribution not only presents an organized picture of how individual scores are distributed on a measurement scale, but also reveals extreme values and outliers. Researchers can make decisions on whether and how to recode or perform data transformation based on the distribution statistics. Frequency distributions can be structured as tables or graphs, but either should show the original measurement scale and the frequencies associated with each category. Because NHANES data have very large sample sizes with a potentially long list of different values for continuous variables, it is recommended that you use a graphic format to check the distribution for continuous variables, and either frequency tables or graphic forms for nominal or interval variables. Statistics of Normality (for Continuous Variables) Statistics of normality reveal whether a data distribution is normal and symmetrically bell-shaped or highly skewed. It is important to use these statistics to check the normality of a distribution because they will determine whether you will use parametric (which assume a normal distribution), non-parametric tests, or the need to use a transformation in your analysis. IMPORTANT NOTE Note: Before you analyze the data, it is important to check the distribution of the variables to identify outliers and determine whether parametric (for a normal distribution) or non-parametric tests are appropriate to use. NHANES 1999-2002 is a large, representative sample of the U.S. population, and most continuous variables from this sample are expected to be normally distributed. If you conduct tests for normality, results on most variables would be significant, i.e. even the slightest deviation from normality could result in rejecting the null hypothesis due to the extremely large sample sizes. Therefore, users are discouraged from solely depending on these tests for normality. Instead you can also request a Q-Q plot to examine normality. 
A Q-Q plot, or a quantile-quantile plot, is a graphical data analysis technique for assessing whether the distribution for data follows a particular distribution. In a Q-Q plot, the distribution of the variable in question is plotted against a normal distribution. The variable of interest is normally distributed, if a straight line intersects the y-axis at a 45 degree angle. Standard Deviation The standard deviation is a measure of the variability of the distribution of a random variable. To estimate the standard deviation 1. calculate the weighted sum of the squares of the differences of the observations in a simple random sample from the sample mean 2. divide the result obtained in 1 by an estimate of the population size minus 1 3. take the square root of the result obtained in 2. Skewness Skewness is a measure of the departure of the distribution of a random variable from symmetry. The skewness of a normally distributed random variable is 0. Kurtosis Kurtosis is a measure of the peakedness of the distribution. The kurtosis of a normally distributed random variable depends on the formula used. One formula subtracts 3, as used by SAS, which makes the value for a normal distribution equal to 0. The other formula does not subtract 3, as used by Stata, which makes the value for a normal distribution equal to 3. A kurtosis exceeding the value for a normal distribution indicates excess values close to the mean and at the tails of the distribution. A kurtosis of less than the value for a normal distribution indicates a distribution with a flatter top. Standard Error of the Mean The standard error of the mean based on data from a simple random sample is estimated by dividing the estimated standard deviation by the square root of the sample size. The value of the standard error obtained from SAS proc univariate using the freq option with the sample weight (i.e. freq appropriate sample weight) is obtained by dividing the estimated standard deviation (see above) by the sum of the sample weights (i.e. an estimate of the population size). In order to obtain the "correct" estimate of the simple random sample standard error of the mean, divide the estimated standard deviation by the square root of the sample size. The SRS estimate of the standard error of the mean thus obtained serves as a bench mark against which to compare the design based estimate of the standard error of mean which can be obtain from SUDAAN proc descript. (See Variance Estimation module for more information). The SAS procedure, proc univariate, generates descriptive and summary statistics that are useful in describing the characteristics of a distribution. These statistics can also be used to determine whether parametric (for a normal distribution) or non-parametric tests are appropriate to use in your analysis. As noted in the Clean & Recode Data module it is advisable to check for extreme weights and outliers before starting any analysis. Step 1: Use the univariate procedure to generate descriptive statistics in SAS Use the SAS procedure, proc univariate, to generate descriptive statistics. The frequency distribution can be presented in table or graphic format. The freq option generates the frequency distribution in tabular form by listing the number of observations for each value of the variable. Due to the large sample size and the possibility of a long list of different values, it is not reasonable to request the freq option for variables that are not nominal or ordinal. 
The plot option generates the frequency distribution in graphic form (histogram, box, and normal probability plots), and the normal option generates statistics to test the normality of the distribution. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. SAS Univariate Procedure for Descriptive Statistics Statements Explanation proc sort data=analysis_data ; by riagendr age; run ; Use the sort procedure to sort data by the same variables used in the by statement of the univariate procedure. In the example, data is sorted by gender (riagendr) and age (age). PROC UNIVARIATE PLOT NORMAL ; Use the univariate procedure to generate descriptive statistics, which include number of missing values, mean, standard errors, percentiles, and extreme values. Use the plot option to generate histogram, box and normal probability plots, and the normal option to generate statistics to test normality. In this example, plots (plot) and normality test statistics (normal) are requested and the results will be sorted and generated separately for each combination of the variables on the by statement. where ridageyr >= 20 ; Use the where statement to select those 20 years and older. by riagendr age; The by statement determines the groups (all combinations of the variables defined by the var statement) that separate descriptive statistics will be produced. This statement should match the by statement in the sort procedure preceding it. VAR lbxtc; Use the var statement to indicate variable(s) for which descriptive measures are requested. In this example, the total cholesterol variable (lbxtc) is used. FREQ wtmec4yr; run ; Use the freq option with the appropriate sample weight yields an estimate of the standard deviation whose denominator is the estimated population size. In this example, the 4-year examination weight (wtmec4yr) is used. WARNING The freq option, with the appropriate sample weight, yields an estimate of the standard deviation whose denominator is an estimate of the population size, i.e., the sum of the the sample weights. Using the weight option instead of the freq option yields an estimate of the standard error whose denominator is the sample size. Step 2: Check output of descriptive statistics The univariate procedure generates extensive descriptive statistics, including moments, percentiles, extremes, missing values, basic statistical measures, and tests for location. Below is a snapshot from the extensive output of the SAS program which shows the result of using the plot and normal options. • The output is arranged by gender and age group so you can see the results for each combination. • The standard deviation is a measure of the deviation of the observations for the mean. • Kurtosis is a measure of the peakedness of the distribution. For SAS, the kurtosis of a normally distributed random variable is 0. A kurtosis greater than 0, as in this example, indicates excess values close to the mean and at the tails of the distribution. • Skewness is a measure of the departure of the distribution of a random variable from symmetry. The skewness of a normally distributed random variable is 0. • The standard error of the mean is not correctly calculated and will not be used in this example. • The output also contains the five lowest and highest values, which are useful for review. 
• The histogram for a normally distributed random variable is symmetric and bell-shaped. For variables based on data collected in a survey, such as NHANES 1999-2002, the distribution will deviate at least slightly from normality. Note the one outlier on the upper tail of the distribution. • The variable of interest is plotted against a normally distributed random variable. The resulting plot is called a Q-Q plot. If the variable of interest is normally distributed a straight line intersecting the y axis at a 45 degree angle would be obtained. For this example note the outliers in the upper tail of this distribution. Step 3: Request selective statistics and output results to SAS dataset In some instances, you may not need all of the statistics generated by proc univariate. You can use proc univariate to select a few descriptive statistics and output the results to a SAS dataset to view. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. SAS univariate procedure for displaying selected statistics Statements Explanation proc sort data=analysis_data; by riagendr age; run ; Use the sort procedure to sort data by the same variables that will be used in the by statement of the univariate procedure. In the example, the data are sorted by gender (riagendr) and age (age). PROC UNIVARIATE NOPRINT; Use the univariate procedure to generate descriptive statistics. Use the noprint option to suppress the detailed default descriptive statistics. where ridageyr >= 20 ; Use the where statement to select those 20 years and older. by riagendr age; The by statement determines the groups (all combinations of the variables defined by the var statement) that separate descriptive statistics will be produced. This statement should match the by statement in the sort procedure preceding it. VAR lbxtc; Use the var statement to indicate variable(s) for which descriptive measures are requested. In this example, the total cholesterol variable (lbxtc) is used. FREQ wtmec4yr; Use the freq option with the appropriate sample weight yields an estimate of the standard deviation whose denominator is the estimated population size. In this example, the 4-year examination weight (wtmec4yr) is used. WARNING The freq option, with the appropriate sample weight, yields an estimate of the standard deviation whose denominator is an estimate of the population size, i.e., the sum of the the sample weights. Using the weight option instead of the freq option yields an estimate of the standard error whose denominator is the sample size. OUTPUT out= SASdataset mean=mean Q1=p_25 median=median Q3=p_75; run ; Use output statement to print the results to the new SAS dataset, SASdataset, which will contain the statistics of interest. The requested statistics are labeled with the names given after the equal sign. In this example, the mean, 25th, 50th, and 75th percentiles are requested. (For a complete list of statistics that can be requested see the proc univariate entry in SAS manual.) proc print DATA=SASdataset; run ; Use proc print to view the results in the new SAS dataset, SASdataset. Step 4: Check output of selective statistics The output is sent to a SAS dataset, which is printed to view. See results below. Note that the new SAS dataset contains only the statistics requested on the output statement. 
• Because this example used the noprint option, there is only one page of output with the requested statistics — mean, 25th percentile, median, and 75th percentile. The frequency distribution can be presented in table or graphic format. In this task, you will learn how to use the standard Stata commands - summarize, histogram, graph box, and tabstat - to generate these representations of data distributions. These statistics can also be used to determine whether parametric (for a normal distribution) or non-parametric tests are appropriate to use in your analysis. As noted in the Clean & Recode Data module it is advisable to check for extreme weights and outliers before starting any analysis. WARNING There are several things you should be aware of while analyzing NHANES data with Stata. Please see the Stata Tips page to review them before continuing. Step 1: Use the summarize command to generate weighted summary statistics for a population subset The Stata command, summarize, generates descriptive and summary statistics that are useful in describing the characteristics of a distribution. Because the SVY series of commands do not include the summarize command, you will need to use the standard summarize command, but tell Stata to incorporate weights. Below are instructions on how to write these commands and interpret the output. This command has the general structure: summarize varname [w=weightvar], detail IMPORTANT NOTE Without the detail option you just get obs, mean, std. dev., minimum and maximum. You can generate summary statistics for various population subsets (e.g. young men, young women, etc). The example below adds the by varname: prefix to the previous example to create this general format. by var1 var2, sort: sum varname [w=weightvar] if (condition), detail Here is the command to generate the summary statistics for six population subsets defined by gender (riagendr) and three age categories (age). The command also includes an if statement, which further restricts to age over 20 years (ridageyr>= 20) and people who have been both interviewed and examined (ridstatr==2). by riagendr age, sort : sum lbxtc [w = wtmec4yr] if (ridageyr >=20 & ridageyr <.) & ridstatr==2, detail IMPORTANT NOTE Stata represents missing numeric values (".") as large positive values. Therefore, a missing numeric value would be the highest value. Please see the Stata Tips page for more information. Reviewing the output, notice that • The output is arranged by gender and age group so you can see the results for each combination. • The standard deviation is a measure of the deviation of the observations from the mean. • Kurtosis is a measure of the peakedness of the distribution. For Stata, the kurtosis of a normally distributed random variable is 3. A kurtosis greater than 3, as in this example, indicates excess values close to the mean and at the tails of the distribution. • Skewness is a measure of the departure of the distribution of a random variable from symmetry. The skewness of a normally distributed random variable is 0. • The output also contains the four lowest and highest values, which are useful for review. Stata Non-Survey Command for Descriptive Statistics Statements Explanation use "C:\Stata\tutorial\analysis_data.dta", clear Use the use command to load the Stata-format dataset. Use the clear option to replace any data in memory. by riagendr age, sort : summarize lbxtc [aweight = wtmec4yr] if (ridageyr >=20 & ridageyr <.)
& ridstatr==2, detail Use the sort command with the by prefix to sort and display the data by gender (riagendr) and age (age). Use the summarize command to generate univariate summary statistics (number of observations, sum of weights, mean, standard deviation) for the total cholesterol variable (lbxtc), for those who are 20 years and older and have been both interviewed and examined (ridstatr=2). Use the [aweight=] option to account for the NHANES sampling weights (obtain survey weighted estimates). In this example, the MEC weight for four years of data [aweight=wtmec4yr] is used. Note in this case the aweights as normally defined by Stata, that is weights inversely proportional to the variance of an observation, are NOT used." histogram lbxtc, by(riagendr age), if (ridageyr >=20 & ridageyr <.) & ridstatr==2, normal Use the histogram command to draw a histogram of the total cholesterol variable (lbxtc) for a select subpopulation (ages 20 and over). Use the normal option to overlay the histogram with normal density. graph box lbxtc [pweight = wtmec4yr], medtype(line) over(riagendr) over(age), if (ridageyr >=20 & ridageyr <.)& ridstatr==2 Use the graph box command to box plot the total cholesterol data, by gender and age for those who are 20 years and older and have been both interviewed and examined (ridstatr=2). Use the [pweight=] option to account for the unequal probability of sampling and non-response. In this example, the MEC weight for four years of data (wtmec4yr) is used. Use the medtype option to indicate how the median is indicated in the box. Step 2: Generate histograms and box plots To generate graphs of the distributions of a continuous variable, use the histogram and graph box commands. In this example, the general structure of the histogram command is: histogram varname, by(var1 var2), if (condition), [ options] In this example, the general structure of the graph box command, including the medtype() option to specify how the median is indicated and the over() option to specify different subgroups, is: graph box varname [w=weightvar], medtype(line) over(var1) over(var2), if (condition) The commands to generate histograms and box plots for six population subsets defined by gender (riagendr) and three age categories (age) are below. The commands also include if statements, which further restricts to age over 20 years (ridageyr >=20 & ridageyr <.) and people who have been both interviewed and examined (ridstatr==2). In addition, the histogram command uses the normal option to add a normal density to the graph. histogram lbxtc, by(riagendr age), if (ridageyr >=20 & ridageyr <.) & ridstatr==2, normal graph box lbxtc [pweight = wtmec4yr], medtype(line) over(riagendr) over(age), if (ridageyr >=20 & ridageyr <.) & ridstatr==2 Reviewing the output of these commands, notice that: • The histogram for a normally distributed random variable is symmetric and bell-shaped. For variables based on data collected in a survey, such as NHANES 1999-2002, the distribution will deviate at least slightly from normality. • The box plot of the weighted total cholesterol data show three outliers with variables above 600 mg/dl. Step 3: Use tabstat to request selective statistics In some instances, you may not need all of the statistics generated by summarize. You can use the tabstat command as a useful alternative to summarize because it allows specification of the statistics to be displayed. 
The general structure for the tabstat command is very similar to the summarize command, but you can specify the statistics you want. Using the tabstat command also arranges the output in a table. tabstat varname [w=weightvar], statistics(statname) Here is the same cholesterol (lbxtc) analysis for six population subsets defined by gender (riagendr) and three age categories (age). The command also includes an if statement, which further restricts to age over 20 years (ridageyr >=20 & ridageyr <.) and people who have been both interviewed and examined (ridstatr==2), which now only reports the mean, 25th percentile (p25), median, and 75th percentile (p75). by riagendr: tabstat lbxtc [w=wtmec4yr], by(age) stat(mean p25 median p75), if (ridageyr >=20 & ridageyr <.) & ridstatr==2 Note that there are two tables - one for each gender with three age categories and that only the statistics requested by the statistics option are displayed. IMPORTANT NOTE Although SAS 9.1 and Stata have commands for calculating estimates of weighted percentiles, they do not have commands to directly produce standard errors for the percentiles. So this tutorial will not provide sample programs in SAS 9.1 and Stata for percentiles and their standard errors. In SAS 9.2 Survey Procedures, variance estimation for percentiles using the Woodruff method is available. See the SAS 9.2 documentation for information on using this method. The rank or percentile rank of a raw score is the percentage of individuals in the distribution with scores at or below that particular score. When a raw score is identified by its percentile rank, the score is called a percentile. Using mathematical terms, the pth percentile is a value, Y(p), such that at most (100p)% of the measurements are less than this value and at most 100(1- p)% are greater. Percentiles are useful because raw scores, or X values, do not provide enough information by themselves. For example, if you are told that a boy is 27 inches tall and weighs 30 pounds, you may not be able to tell how well the boy is doing. You need additional information such as the average score of his age group, or the number of boys who score above or below this boy in his group. To determine the relative position of the boy's measurements in his group, you need to transform the raw scores into percentiles in order to compare. Therefore, it is much more informative if you could transform the height and weight of the boy into percentile rank, such as 75th percentile in height, and 50th percentile in weight in his age group. In summary, percentiles provide additional information about the distribution of values. percentiles represent the relative position of the measured values within a distribution. In this example, you will use SAS-callable SUDAAN to generate percentiles and standard errors for total cholesterol levels of persons 20 years and older by sex and age group. Step 1: Sort data To calculate the percentiles and standard errors, you will use SAS-callable SUDAAN because this software takes into account the complex survey design of NHANES data when determining variance estimates. The data from analysis_Data must be sorted by strata first and then PSU (unless the data have already been sorted by PSU within strata). The SAS proc sort statement must precede the SUDAAN statements. WARNING The design variables, sdmvstra and sdmvpsu, are provided in the demographic data files and are used to calculate variance estimates. 
Before you call SUDAAN into SAS, the data must first be sorted by these variables. Step 2: Use proc descript to generate percentiles in SUDAAN The SUDAAN procedure proc descript is used to generate percentiles and standard errors. These estimates are requested on the print statement along with the sample size (nsum). The general program for obtaining weighted percentiles and standard errors is below. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. Generate Percentiles in SUDAAN Statements Explanation PROC SORT DATA =analysis_data; BY sdmvstra sdmvpsu ; RUN ; Use the proc sort procedure to sort the dataset by strata (sdmvstra) and PSU (sdmvpsu). The data statement refers to the dataset, analysis_Data. proc descript< data=analysis_data design=wr; Use proc descript procedure to generate means and specify the sample design using the design option WR (with replacement). subpopn ridageyr >= 20 ; Use the subpopn statement to select the sample persons 20 years and older (ridageyr >=20) because only those individuals are of interest in this example. Please note that for accurate estimates, it is preferable to use subpopn in SUDAAN to select a subpopulation for analysis, rather than select the study population in the SAS program while preparing the data file. NEST sdmvstra sdmvpsu; Use the nest statement with strata (sdmvstra) and PSU (sdmvpsu) to account for the design effects. weight wtmec4yr; Use the weight statement to account for the unequal probability of sampling and non-response. In this example, the MEC weight for 4 years of data (wtmec4yr) is used. subgroup riagendr age ; Use the subgroup statement to list the categorical variables for which statistics are requested. This example uses gender (riagendr) and age (age). These variables will also appear in the table statement. levels 2 3 ; Use the levels statement to define the number of categories in each of the subgroup variables. The level must be an integer greater than 0. This example uses two genders and three age groups. var lbxtc; Use the var statement to name the variable(s) to be analyzed. In this example, the total cholesterol variables (lbxtc) is used. percentile 5 25 50 75 95 ; Use the percentile statement to request select percentiles. table riagendr * age; Use the table statement to specify cross-tabulations for which estimates are requested. If a table statement is not present, a one—dimensional distribution is generated for each variable in the subgroup statement. In this example, the estimates are for gender (riagendr) by age (age). PRINT nsum= "Sample Size" qtile= "Quantile" style=nchs nsumfmt= F7.0 qtilefmt= F9.2 ; Use the print statement to assign names, format the statistics desired, and view the output. If the statement print is used alone, all of the default statistics are printed with default labels and formats. In this example, the sample size (nsum) and quantile (qtile) are requested. Note: For a complete list of statistics that can be requested on the print statement see SUDAAN Users Manual. Use the style option equal to NCHS to produce output which parallels a table style used at NCHS. rtitle "Percentiles of total cholesterol by sex and age: NHANES 1999-2002" ; Use the rtitle statement to assign a heading for each page of output. Step 3: Review output The output will list the sample sizes, percentiles and their standard errors. 
• Reviewing the output of the program, note that 50% of the sampled population has a total cholesterol measurement less than the 50th percentile and 50% of the sampled population has a total cholesterol measurement of greater than the 50th percentile. Means are measures of a central tendency. In this section, you will learn about three types of means: • arithmetic, • weighted arithmetic, and • geometric. Arithmetic Means The finite population mean of X1 , X2 ,…. XN is defined as the sum of the values Xi divided by the population size N. Typically, in a non-survey setting an arithmetic mean is estimated by taking a simple random sample of the finite population, x1, x2,…,xn, summing the values and dividing by the sample size n. This is often referred to as the arithmetic mean. On average, the result of the arithmetic mean would be expected to equal the result of the population mean. Weighted arithmetic means For NHANES 1999-2002 a sample weight, wi, is associated with each sample person. The sample weight is a measure of the number of people in the population represented by that person. For more information on sample weights, please see the Weighting module. To obtain an unbiased estimate of the population mean, based on data from the NHANES 1999-2002 sample, it is necessary to take a weighted arithmetic mean. Geometric Means In instances where the data are highly skewed, geometric means can be used. A geometric mean, unlike an arithmetic mean, minimizes the effect of very high or low values, which could bias the mean if a straight average (arithmetic mean) were calculated. The geometric mean is a log-transformation of the data and is expressed as the N-th root of the product of N numbers. In this example, you will use SAS-callable SUDAAN to generate tables of means and standard errors for average cholesterol levels of persons 20 years and older by sex and race-ethnicity. Step 1: Sort data To calculate the means and standard errors, you will use SAS-callable SUDAAN because this software takes into account the complex survey design of NHANES data when determining variance estimates. Note that if standard errors are not needed, you can simply use a SAS procedure, i.e., proc means with the weight statement to calculate means. The data from analysis_Data must be sorted by strata first and then PSU (unless the data have already been sorted by PSU within strata). The SAS proc sort statement must precede the SUDAAN statements. WARNING The design variables, sdmvstra and sdmvpsu, are provided in the demographic data files and are used to calculate variance estimates. Before you call SUDAAN into SAS, the data must be sorted by these variables. Step 2: Use proc descript to generate means in SUDAAN The SUDAAN procedure, proc descript, is used to generate means and standard errors. The print statement is used to output those estimates along with the sample size (nsum), i.e., the number of survey participants with known values for the variable of interest. The general program for obtaining weighted means and standard errors is below. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. Generate Means in SUDAAN Statements Explanation PROC SORT DATA =analysis_data; BY sdmvstra sdmvpsu ; RUN ; Use the proc sort procedure to sort the dataset by strata (sdmvstra) and PSU (sdmvpsu). The data statement refers to the dataset, analysis_Data. 
proc descript data=analysis_data design=wr; Use the proc descript procedure to generate means and specify the sample design using the design option WR (with replacement). subpopn ridageyr >= 20 ; Use the subpopn statement to select the sample persons 20 years and older (ridageyr >=20) because only those individuals are of interest in this example. Please note that for accurate estimates, it is preferable to use subpopn in SUDAAN to select a subpopulation for analysis, rather than select the study population in the SAS program while preparing the data file. NEST sdmvstra sdmvpsu; Use the nest statement with strata (sdmvstra) and PSU (sdmvpsu) to account for the design effects. weight wtmec4yr; Use the weight statement to account for the unequal probability of sampling and non-response. In this example, the MEC weight for four years of data (wtmec4yr) is used. subgroup riagendr age ; Use the subgroup statement to list the categorical variables for which statistics are requested. This example uses gender (riagendr) and age (age). These variables also appear in the table statement. levels 2 3 ; Use the levels statement to define the number of categories in each of the subgroup variables. The level must be an integer greater than 0. This example uses two genders and three age groups. var lbxtc; Use the var statement to name the variable(s) to be analyzed. In this example, the total cholesterol variable (lbxtc) is used. table riagendr * age; Use the table statement to specify cross-tabulations for which estimates are requested. If a table statement is not present, a one-dimensional distribution is generated for each variable in the subgroup statement. In this example the estimates are for gender (riagendr) by age (age). PRINT nsum= "Sample Size" mean= "Mean" semean= "Standard Error" style=nchs nsumfmt= F7.0 meanfmt= F9.2 semeanfmt= F9.3 ; Use the print statement to assign names, format the statistics desired, and view the output. If the statement print is used alone, all of the default statistics are printed with default labels and formats. In this example, the sample size (nsum), mean (mean), and standard error of the mean (semean) are requested. Note: For a complete list of statistics that can be requested on the print statement see SUDAAN Users Manual. Use the style option equal to NCHS to produce output that parallels a table style used at NCHS. rtitle "Means of total cholesterol and standard errors by sex and age: NHANES 1999-2002" ; Use the rtitle statement to assign a heading for each page of output. run ; The run statement signifies the end of the program. Step 3: Review output The output will list the sample sizes, means, and their standard errors. • The output shows the sample size, mean, and standard error sorted into total, male and female groups with age subgroups. • Also notice that the mean for each group is very near the median results (50th percentile) from the descriptive program in Task 1. Step 4: Use proc descript to generate geometric means If you need to generate geometric means instead of arithmetic means, you would indicate this using options in the proc descript procedure, as shown below. WARNING The example below is for illustrative purposes only. Geometric means are not recommended for use with normally distributed data, such as the analysis_Data dataset. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. 
Generate Geometric Means in SUDAAN Statements Explanation PROC SORT DATA =analysis_data; BY sdmvstra sdmvpsu ; RUN ; Use the proc sort procedure to sort the dataset by strata (sdmvstra) and PSU (sdmvpsu). The data statement refers to the dataset, analysis_data. proc descript data=analysis_data geometric design=wr ; Use the proc descript procedure to generate means and specify geometric as an option to compute geometric means. Specify the sample design using the design option WR (with replacement). subpopn ridageyr >= 20 ; Use the subpopn statement to select sample persons 20 years and older (ridageyr >=20) because only those individuals are of interest in this example. Please note that for accurate estimates, it is preferable to use subpopn in SUDAAN to select a subpopulation for analysis, rather than select the study population in the SAS program while preparing the data file. NEST sdmvstra sdmvpsu; Use the nest statement with strata (sdmvstra) and PSU (sdmvpsu) to account for the design effects. weight wtmec4yr; Use the weight statement to account for the unequal probability of sampling and non-response. In this example, the MEC weight for 4 years of data (wtmec4yr) is used. subgroup riagendr age ; Use the subgroup statement to list the categorical variables for which statistics are requested. This example uses gender (riagendr) and age (age). These variables will also appear in the table statement. levels 2 3 ; Use the levels statement to define the number of categories in each of the subgroup variables. The level must be an integer greater than 0. This example uses two genders and three age groups. var lbxtc; Use the var statement to name the variable(s) to be analyzed. In this example, the total cholesterol variables (lbxtc) is used. table riagendr * age; Use the table statement to specify cross-tabulations for which estimates are requested. If a table statement is not present, a one—dimensional distribution is generated for each variable on the subgroup statement. This example uses the estimates for gender (riagendr) by age (age). PRINT nsum= "Sample Size" geomean= "Geometric Mean" segeomean= "Standard Error" / style=nchs nsumfmt= F7.0 geomeanfmt= F9.2 segeomeanfmt= F9.3 ; output nsum geomean segeomean; Use the print statement to assign names, format the statistics desired, and view the output. If the statement print is used alone, all of the default statistics are printed with default labels and formats. In this example, the sample size (nsum), geometric mean (geomean), and standard error of the geometric mean (segeomean) were requested. Note: For a complete list of statistics that can be requested on the print statement see SUDAAN Users Manual. Use the style option equal to NCHS to produce output that parallels a table style used at NCHS. rtitle "Geometric means of total cholesterol and standard errors by sex and age: NHANES 1999-2002" ; run ; Use the rtitle statement to assign a title (heading) to each page of output. In this example, you will use SAS Survey Procedures to generate tables of means and standard errors for average cholesterol levels of persons 20 years and older, by gender and race-ethnicity. Step 1: Create Variable to Subset Population In order to subset the data in SAS Survey Procedures, you will need to create a variable for the population of interest. In this example, the sel variable is set to 1 if the sample person is 20 years or older, and 2 if the sample person is younger than 20 years. 
Then this variable is used in the domain statement to specify the population of interest (those 20 years and older). if ridageyr GE 20 then sel = 1; else sel = 2; Step 2: Use proc surveymeans to generate means in SAS Survey Procedures The SAS procedure, proc surveymeans, is used to generate means and standard errors. The general program for obtaining weighted means and standard errors is below. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. Generate Means in SAS Survey Procedures Statements Explanation proc surveymeans data=ANALYSIS_DATA nobs mean stderr; Use the proc surveymeans procedure to obtain number of observations, mean, and standard error. stratum sdmvstra; Use the stratum statement to define the strata variable (sdmvstra). cluster sdmvpsu; Use the cluster statement to define the PSU variable (sdmvpsu). class riagendr age; Use the class statement to specify the discrete variables used to select from the subpopulations of interest. In this example, the subpopulation of interest are gender (riagendr) and age (age). var lbxtc; Use the var statement to name the variable(s) to be analyzed. In this example, the total cholesterol variable (lbxtc) is used. weight wtmec4yr; Use the weight statement to account for the unequal probability of sampling and non-response. In this example, the MEC weight for four years of data (wtmec4yr) is used. domain sel sel*riagendr*age; Use the domain statement to specify the subpopulations of interest. ods output domain(match_all)=domain; run ; Use the ods statement to output the dataset of estimates from the subdomains listed on the domain statement above. This set of commands will output two datasets for each subdomain specified in the domain statement above (domain for sel; domain1 for sel*riagendr*age). data all; set domain domain1; if sel= 'Age ge 20' ; run ; Use the data statement to name the temporary SAS dataset (all) append the two datasets, created in the previous step, if age is greater than or equal to 20 (sel). proc print noobs data =all split = '/'; var riagendr age N mean stderr; format n 5.0 mean 4.2 stderr 4.2 ; label N = 'Sample'/'Size' stderr='Standard'/'error'/'of the' / 'mean' mean='Mean'; title1 'Mean serum total cholesterol of adults 20 years and older, 1999-2002' ; run ; Use the print statement to print the number of observations, the mean, and standard error of the mean in a printer-friendly format. Step 3: Review output The output lists the sample sizes, means and their standard errors. • Reviewing the output, note that the mean for the total sample population for SAS Survey is the same as the mean reported in SUDAAN. • Looking further at the output, you will find the table that breaks down the genders by age group. These means are very similar to the medians reported in the descriptive statistics program in Task 1. • As in the SUDAAN program output, the SAS Survey output shows that the age 40-59 group has the highest mean cholesterol for the males, and the age 60+ female group has the highest mean for all groups. In this example, you will use Stata to generate tables of means and standard errors for average cholesterol levels of persons 20 years and older by sex and race-ethnicity. Following that example, is an example of calculating the geometric means. WARNING There are several things you should be aware of while analyzing NHANES data with Stata. 
Please see the Stata Tips page to review them before continuing. Step 1: Use svyset to define survey design variables Remember that you need to define the SVYSET before using the SVY series of commands. The general format of this command is below: svyset [w=weightvar], psu(psuvar) strata(stratavar) vce(linearized) To define the svyset for your cholesterol analysis, use the weight variable for four years of MEC data (wtmec4yr), the PSU variable (sdmvpsu), and the strata variable (sdmvstra). The vce option specifies the method for calculating the variance and the default is "linearized" which is Taylor linearization. Here is the svyset command for four years of MEC data: svyset [w= wtmec4yr], psu( sdmvpsu) strata(sdmvstra) vce(linearized) Step 2: Use svy:mean to generate means and standard errors in Stata Now that the svyset has been defined, you can use the Stata command, svy: mean, to generate means and standard errors. The general command for obtaining weighted means and standard errors of a subpopulation is below. svy: mean varname, subpop(if condition) Here is the command to generate the mean cholesterol (lbxtc) for the subpopulation of adults over the age of 20 (ridageyr>=20 & ridageyr <.): svy: mean lbxtc, subpop(if ridageyr >=20 & ridageyr <.) Step 3: Use over option of svy:mean command to generate means and standard errors for different subgroups in Stata You can also add the over() option to the svy:mean command to generate the means for different subgroups. When you do this, you can type a second command, estat size, to have the output display the subgroup observation numbers. Here is the general format of these commands for this example: svy: mean varname, subpop(if condition) over(var1 var2) estat size The prefix quietly before any svy command suppresses the appearance of the output of a command on the screen. In the following example, the first command is done "quietly"; the second command is executed to show the mean, standard error, plus the number of observations in each category. Below is the command to generate the mean cholesterol (lbxtc) for the subpopulation of adults over the age of 20 (ridageyr>=20 & ridageyr <.) by gender (riagendr). quietly svy: mean lbxtc, subpop(if ridageyr>=20 & ridageyr <. ) over(riagendr) estat size Additionally, the over option can take multiple variables. To generate means for the six gender-age groups you will need to add the age variable to the over option, as in the example below. quietly svy: mean lbxtc, subpop(if ridageyr>=20 & ridageyr <.) over(riagendr age) estat size The output will list the sample sizes, means, and their standard errors for each of the six gender-age groups. • The output shows the sample size, mean, and standard error sorted into total, male and female groups with age subgroups. • Also notice that the mean for each group is very near the median results (50th percentile) from the descriptive program in Task 1. Step 4: Use svy:mean to generate geometric means If you need to generate geometric means instead of arithmetic means, you would first log transform the variable of interest. Then, use the svy:mean command to obtain the mean of the transformed variable. Finally, display the exponentiated form of the variable. 
The general format of these commands is: generate ln_varname=ln(varname) quietly svy: mean ln_varname, subpop(if condition) over(var1) ereturn display, eform(geo_mean) To generate geometric means of the cholesterol variable for persons aged 20 years and older by gender using the previous dataset, you would need to run the following commands and options. WARNING The example below is for illustrative purposes only. Geometric means are not recommended for use with normally distributed data, such as the cholesterol variables in this dataset. First, create a new variable which is equal to the natural log of the variable of interest. In this example, the variable of interest is the cholesterol variable (lbxtc). generate ln_lbxtc=ln(lbxtc) Then, estimate the mean of the log transformed cholesterol variable (ln_lbxtc) for persons over the age of 20 (ridageyr>=20 & ridageyr <.) by gender (riagendr). The quietly prefix is used to suppress the output. quietly svy: mean ln_lbxtc, subpop(if ridageyr>=20 & ridageyr <. ) over(riagendr) Finally, display the output in original units. Stata lets you do this automatically by using the command eform(geo_mean), which displays the exponentiated coefficients for the mean, standard error, and 95% CI (ie, it calculates e to the (ln_lbxtc) power. ereturn display, eform(geo_mean) Proportions or prevalence estimates are very useful in epidemiological studies. For a national cross-sectional survey such as NHANES, you often need to generate prevalence estimate of a particular disease, condition, or risk factor in U.S. population. It is also used to compare prevalence rates between different subgroups. In this example, to determine the prevalence rate of high blood pressure in the U.S., you will identify persons who have high blood pressure according to the conventional health care definition set out by the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. According to the Committee, a person with hypertension is defined as either having elevated blood pressure (systolic pressure of at least 140 mmHg or diastolic of at least 90 mmHg) or taking antihypertensive medication. In this example, you will look at the proportion of examined persons 20 years and older with measured high blood pressure by sex, age, and race-ethnicity. Step 1: Determine variables of interest According to the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure, a person with hypertension is defined as either having elevated blood pressure (systolic pressure of at least 140 mmHg or diastolic of at least 90 mmHg) or taking antihypertensive medication. You will need to define a categorical variable (hbp) indicating persons with high blood pressure (1= high blood pressure; 2= no high blood pressure). Step 2: Sort data To calculate the proportions and standard errors, use SAS-callable SUDAAN because the software takes into account the complex survey design of NHANES data when determining variance estimates. If the standard errors are not needed, you simply could use a SAS procedure, i.e., proc freq with the weight statement. The data from analysis_Data must be sorted by strata first and then PSU (unless the data have already been sorted by PSU within strata). The SAS proc sort statement must precede the SUDAAN statements. WARNING The design variables sdmvstra and sdmvpsu are provided in the demographic data files and are used to calculate variance estimates. 
Before you call SUDAAN into SAS, the data must be sorted by these variables. Step 3: Use proc descript to generate proportions In this example, you will use proc descript in SUDAAN to generate proportions. Previously, you created a categorical variable, hbp, to indicate whether or not a person had high blood pressure. That categorical variable will be identified in the procedure and the weighted percent (prevalence) of sample persons with the value hbp=1 (high blood pressure) will be estimated along with the standard error. You can code your variables in this example in two possible ways. Using the catlevel option in SUDAAN, persons with high blood pressure, as defined above, are assigned a value of 1. All other sample persons are assigned a value of 2. The weighted percentage of sample persons with a value equal to 1 is an estimate of the prevalence of high blood pressure in the U.S. An alternate method of coding the variables is to assign persons with high blood pressure, as defined above, a value of 100, and persons without high blood pressure a value of 0. The weighted mean of sample persons with a value equal to 100 (which will be expressed as a percent) is an estimate of the prevalence of high blood pressure in the U.S. To see this method in SAS Survey Procedures, but without the catlevel option, see Task 4b: How to Generate Proportions using SAS Survey Procedure. The SUDAAN procedure, proc descript, is used to generate percents and standard errors. You request those estimates on the print statement along with the sample size (nsum). The general program for obtaining weighted percents and standard errors is shown below. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. Generate Proportions in SUDAAN Statements Explanation PROC SORT DATA =analysis_data; BY sdmvstra sdmvpsu ; RUN ; Use the proc sort procedure to sort the dataset by strata (sdmvstra) and PSU (sdmvpsu). The data statement refers to the dataset, analysis_data. PROC descript data= analysis_data design=wr ; Use the proc descript procedure to generate percents and specify the sample design using the design option WR (with replacement). subpopn ridageyr >=20 ; Use the subpopn statement to select sample persons 20 years and older (ridageyr >=20) because only those individuals are of interest in this example. Please note that for accurate estimates, it is preferable to use subpopn in SUDAAN to select a subpopulation for analysis, rather than select the study population in the SAS program while preparing the data file. NEST sdmvstra sdmvpsu; Use the nest statement with strata (sdmvstra) and PSU (sdmvpsu) to account for the design effects. weight wtmec4yr; Use the weight statement to account for the unequal probability of sampling and non-response. In this example, the MEC weight for 4 years of data (wtmec4yr) is used. subgroup riagendr age race; Use the subgroup statement to list the categorical variables for which statistics are requested. This example uses gender (riagendr), age (age), and race/ethnicity (race). These variables will also appear in the table statement. levels 2 3 4 ; Use the levels statement to define the number of categories in each of the subgroup variables. The level must be an integer greater than 0. This example uses two genders, three age groups, and four race/ethnicity categories. var hbp; Use the var statement to name the variable(s) to be analyzed. 
In this example, the high blood pressure variable (hbp) is used. catlevel 1 ; Use the catlevel statement to indicate that the variable(s) on the var statement are categorical and to select the level of each variable to be analyzed. This example indicates the variable hbp is categorical and that hbp=1, i.e., persons who have high blood pressure. IMPORTANT NOTE Note that the catlevel statement may be omitted if you code the variable as 100 equals has HBP and 0 equals does not have HBP. table riagendr * age * race ; Use the table statement to specify cross-tabulations for which estimates are requested. In this example, the estimates are for gender (riagendr) by age (age) and by race/ethnicity (race). print nsum= "Sample Size" percent="Percent" sepercent="SE" style=NCHS nsumfmt=f8.0 percentfmt=f8.4 sepercentfmt=f8.4 ; Use the print statement to assign names, format the statistics desired, and view the output. If the statement print is used alone, all of the default statistics are printed with default labels and formats. In this example, sample size (nsum), percent (percent), and standard error of the percent (sepercent) are requested. The percent represents the proportion of persons with hbp=1 or with high blood pressure. Note: For a complete list of statistics that can be requested on the print statement see SUDAAN Users Manual. Use the style option equal to NCHS to produce output which parallels a table style used at NCHS. rtitle "Prevalence of SPs with measured high blood pressure: NHANES 1999-2002" ; run ; Use the rtitle statement to assign a heading for each page of output. Step 4: Review Output The percents in the output are the proportions of sample persons with high blood pressure. • Reviewing the output, you will see tables for both genders, males only, and females only sorted by age group followed by race/ethnicity. • The "Other" race/ethnicity category is only included to complete the totals. It is not reported. • In the table for females, notice that the proportion of black females with high blood pressure is twice that of other races in the 20-39 years age group, and nearly twice that of other races in the 40-59 years age group. • Given the low proportion of high blood pressure in the 20-39 years age group, you will also want to consider using an arcsine of Clopper-Pearson transformation for standard error estimation. In this example, you will be looking at the proportion of examined persons 20 years and older with measured high blood pressure, by sex, age, and race-ethnicity. Step 1: Determine variables of interest According to the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure, a person with hypertension is defined as either having elevated blood pressure (systolic pressure of at least 140 mmHg or diastolic of at least 90 mmHg) or taking antihypertensive medication. You will need to define a categorical variable (hbpx) indicating persons with high blood pressure (100= high blood pressure; 0= no high blood pressure). Step 2: Create Variable to Subset Population In order to subset the data in SAS Survey Procedures, you will need to create a variable for the population of interest. In this example, the sel variable is set to 1 if the sample person is 20 years or older, and 2 if the sample person is younger than 20 years. Then this variable is used in the domain statement to specify the population of interest (those 20 years and older). 
if ridageyr GE 20 then sel = 1; else sel = 2; Step 3: Use proc surveymeans to generate proportions and their standard errors in SAS Survey Procedures In SAS Survey Procedures, persons with high blood pressure, as defined above, are assigned a value of 100, and persons without high blood pressure are assigned a value of 0. The weighted mean of sample persons with a value equal to 100 (which will be expressed as a percent) is an estimate of the prevalence of high blood pressure in the U.S. IMPORTANT NOTE These programs use variable formats listed in the Tutorial Formats page. You may need to format the variables in your dataset the same way to reproduce results presented in the tutorial. Generate Proportions in SAS Survey Procedures Statements Explanation ods trace on ; Use the ods statement to provide printer-friendly output. proc surveymeans data=analysis_Data nobs mean stderr; Use the proc surveymeans procedure to obtain number of observations, mean, and standard error. stratum sdmvstra; Use the stratum statement to define the strata variable (sdmvstra). cluster sdmvpsu; Use the cluster statement to define the PSU variable (sdmvpsu). class riagendr age race; Use the class statement to specify the discrete variables used to form the subpopulations of interest. In this example, the subpopulations of interest are gender (riagendr), age (age), and race/ethnicity (race). domain sel sel*riagendr*age*race; Use the domain statement to specify the table layout to form the subpopulations of interest. This example uses age greater than or equal to 20 (sel) by gender (riagendr) by age (age) and by race/ethnicity (race). var hbpx; Use the var statement to name the variable(s) to be analyzed. In this example, the high blood pressure variable (hbpx) is used. If the sample person has high blood pressure, then the value equals 100. If the sample person does not have high blood pressure, then the value equals 0. IMPORTANT NOTE The SAS Survey procedure, proc surveymeans, is only able to use the variable coded as 100 and 0. weight wtmec4yr; Use the weight statement to account for the unequal probability of sampling and non-response. In this example, the MEC weight for 4 years of data (wtmec4yr) is used. ods output domain(match_all)=domain; run; Use the ods statement to output the dataset of estimates from the subdomains listed on the domain statement above. This set of commands will output two datasets for each subdomain specified in the domain statement above (domain for sel; domain1 for sel*riagendr*age*race). data all; set domain domain1; if sel='Age ge 20'; run; Use the data statement to name the temporary SAS dataset (all) and append the two datasets, created in the previous step, if age is greater than or equal to 20 (sel). proc print noobs data =all split = '/' ; var riagendr age race N mean stderr ; format n 5.0 mean 4.4 stderr 4.2 ; label N = 'Sample' / 'size' mean='Percent' stderr='Standard' / 'error' / 'of the' / 'percent'; title1 'Percent of adults 20 years and older with high blood pressure, 1999-2002' ; run ; Use the print statement to print the number of observations, the mean, and standard error of the mean in a printer-friendly format. Step 4: Review output The percents in the output are the proportions of sample persons with high blood pressure: • Reviewing the output, you will see the tables for both genders, males only, and females only, sorted by age group and then race/ethnicity. • The "Other" race/ethnicity category is only included to complete the totals. It is not reported.
• In the table for females, notice that the proportion of black females with high blood pressure is twice that of the other races in the 20-39 years age group, and nearly twice that of the other races in the 40-59 years age group. • Given the low proportion of high blood pressure in the 20-39 years age group, you will also want to consider using an arcsine of Clopper-Pearson transformation for standard error estimation. Stata software can be used to calculate proportions and standard errors for NHANES data because the software takes into account the complex survey design of NHANES data when determining variance estimates. If the standard errors are not needed, you could simply use a standard Stata command, i.e., proportion with weights. In this example, you will be looking at the proportion of examined persons 20 years and older with measured high blood pressure, by sex, age, and race-ethnicity. WARNING There are several things you should be aware of while analyzing NHANES data with Stata. Please see the Stata Tips page to review them before continuing. Step 1: Determine variables of interest According to the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure, a person with hypertension is defined as either having elevated blood pressure (systolic pressure of at least 140 mmHg or diastolic of at least 90 mmHg) or taking antihypertensive medication. You can code your variables in this example in two possible ways. Persons with high blood pressure, as defined above, are assigned a value of 1. All other sample persons are assigned a value of 2. The weighted percentage of sample persons with a value equal to 1 is an estimate of the prevalence of high blood pressure in the U.S. IMPORTANT NOTE An alternate method of coding the variables is to assign persons with high blood pressure, as defined above, a value of 100, and persons without high blood pressure a value of 0. The weighted mean of sample persons with a value equal to 100 (which will be expressed as a percent) is an estimate of the prevalence of high blood pressure in the U.S. This method can be used with SAS Survey Procedures. Step 2: Use svyset to define survey design variables Remember that you need to define the SVYSET before using the SVY series of commands. The general format of this command is below: svyset [w=weightvar], psu(psuvar) strata(stratavar) vce(linearized) To define the survey design variables for this analysis, use the weight variable for four years of MEC data (wtmec4yr), the PSU variable (sdmvpsu), and the strata variable (sdmvstra). The vce option specifies the method for calculating the variance and the default is "linearized" which is Taylor linearization. Here is the svyset command for four years of MEC data: svyset [w= wtmec4yr], psu(sdmvpsu) strata(sdmvstra) vce(linearized) Step 3: Use svy:proportion to generate proportions In this example, you will use svy: proportion in Stata to generate proportions. You created a categorical variable, hbp, to indicate whether or not a person had high blood pressure. That categorical variable will be identified in the procedure and the weighted percent (prevalence) of sample persons with the value hbp=1 (high blood pressure) will be estimated along with the standard error. The general format of the svy:proportion command is: svy, subpop(if condition) vce(linearized): proportion varname To generate the proportion of persons aged 20 years and older (ridageyr >=20 & ridageyr <.)
with high blood pressure (hbp), the command would be: svy, subpop(if ridageyr >=20 & ridageyr <. ) vce(linearized): prop hbp Step 4: Use over option of svy:proportion command to generate proportions and standard errors for different subgroups in Stata The general format of the svy:proportion command with the over option is: svy, subpop(if condition) vce(linearized): proportion varname, over(var1) Here is the command to generate the proportion of people aged 20 years and older (ridageyr >=20 & ridageyr <.) by gender (riagendr) with hypertension (hbp): svy, subpop( if ridageyr >=20 & ridageyr <. ) vce(linearized): proportion hbp, over(riagendr) Here is the command to generate the proportion of people aged 20 years and older (ridageyr >=20 & ridageyr <.) by gender (riagendr), race-ethnicity (race), and age (age) with hypertension (hbp): svy, subpop( if ridageyr >=20 & ridageyr <. ) vce(linearized): proportion hbp, over(riagendr race age) Highlights from the output include: • Reviewing the output, you will see proportions for all persons, both genders, the four race categories, and three age groups, and finally the 24 gender-race-age groups. • The percents in the output are the proportions of sample persons with high blood pressure. • The "Other" race/ethnicity category is only included to complete the totals. It is not reported. • In the groups for females, notice that the proportion of black females with high blood pressure is twice that of other races in the 20-39 years age group, and nearly twice that of other races in the 40-59 years age group. • Given the low proportion of high blood pressure in the 20-39 years age group, you will also want to consider using an arcsine of Clopper-Pearson transformation for standard error estimation.
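All three of the high blood pressure examples above assume that the hypertension indicator has already been created on the analysis file. A minimal SAS data-step sketch of one way to derive it is shown below; the systolic, diastolic, and medication variable names (sbp_avg, dbp_avg, bpmeds) are placeholders rather than actual NHANES variable names, so substitute the appropriate variables from the blood pressure examination and questionnaire files. The step creates both codings used in this tutorial: hbp (1 = high blood pressure, 2 = not) for SUDAAN and Stata, and hbpx (100/0) for SAS Survey Procedures.

```
data analysis_data;
   set analysis_data;
   /* Hypertension: systolic >= 140 mmHg, diastolic >= 90 mmHg,
      or currently taking antihypertensive medication.
      In practice, also decide how to handle missing readings.  */
   if sbp_avg >= 140 or dbp_avg >= 90 or bpmeds = 1 then do;
      hbp  = 1;     /* coding used with the SUDAAN catlevel statement and Stata */
      hbpx = 100;   /* coding used with proc surveymeans                        */
   end;
   else do;
      hbp  = 2;
      hbpx = 0;
   end;
run;
```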
2020-10-20T18:08:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5259902477264404, "perplexity": 2106.1010551387585}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874026.22/warc/CC-MAIN-20201020162922-20201020192922-00375.warc.gz"}
https://gea.esac.esa.int/archive/documentation/GDR2/Catalogue_consolidation/chap_cu9val_cu9val/sec_cu9val_947/ssec_cu9val_947_temp947.html
# 10.6.5 Temperature

We compared the Gaia absolute magnitudes $G_{\mathrm{abs}}$ and effective temperatures $T_{\rm eff}$ with PARSEC isochrones (Chen et al. 2014) for about 180 clusters. In the regions of low extinction, we generally find a good agreement between the Gaia $T_{\rm eff}$ and the value expected using an isochrone of corresponding age (taken from Kharchenko et al. 2013), for stars with $T_{\rm eff}\sim 7000-8000$ K. The distribution of temperatures for NGC 2360 ($E(B-V)=0.07$) is shown in Figure 10.55. The underestimated temperatures are consistent with the fact that $T_{\rm eff}$ was derived under the assumption of $A_{\rm G}=0$. Stronger deviations are observed in the presence of moderate and high extinction, for instance for NGC 5316 (Figure 10.56), a cluster for which $E(B-V)=0.29$. The $T_{\rm eff}$ values of Gaia DR2 were derived with a training set of templates in the range 3030 K $<T_{\rm eff}<$ 9990 K (Andrae et al. 2018), restricting the output values to this range. This saturation effect is in fact seen even for temperatures lower than 9990 K, for instance in NGC 2516 (Figure 10.57), where stars expected to be hotter than $T_{\rm eff}\sim 8000$ K already have underestimated temperatures. A certain granularity is also visible in Figure 10.57, where points accumulate near certain values of $T_{\rm eff}$. This artefact is a consequence of the inhomogeneous distribution of templates in the training data.
2018-06-23T04:01:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 21, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.749252438545227, "perplexity": 1014.8714585283182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864940.31/warc/CC-MAIN-20180623035301-20180623055301-00146.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/proc.2015.0562
# The Nehari solutions and asymmetric minimizers

We consider the boundary value problem $x'' = -q(t,h) x^3,$ $x(-1)=x(1)=0,$ which exhibits bifurcation of the Nehari solutions. A Nehari solution of the problem is a solution that minimizes a certain functional. We show that for small $h$ there is exactly one Nehari solution. As $h$ increases, two Nehari solutions appear that give the functional a smaller value than the remaining symmetric solution does. Thus the bifurcation of the Nehari solutions is observed, and the phenomenon of asymmetric Nehari solutions previously studied in the literature is confirmed. Mathematics Subject Classification: Primary: 34B15; Secondary: 34B18. Open Access Under a Creative Commons license
2023-04-02T13:05:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8304629921913147, "perplexity": 1473.8574890845143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00388.warc.gz"}
https://www.federalreserve.gov/econres/notes/feds-notes/wealth-and-income-concentration-in-the-scf-20200928.htm
September 28, 2020 ### Wealth and Income Concentration in the SCF: 1989–2019 Jesse Bricker, Sarena Goodman, Kevin B. Moore, and Alice Henriques Volz with assistance from Dalton Ruh This note was revised on September 30, 2020 to correct the second column of Panel B of Table B, which erroneously reported the mean of inheritance assets that were received or expected. The corrected table reports the mean of expected inheritance assets only. Understanding how economic growth diffuses across the population is a key economic issue. Indeed, this question has inspired renewed debate about how to characterize wealth and income distributions in the United States, particularly with respect to the extent of concentration at the top. Most of this work has described an increase in recent decades in concentration, raising concerns that broader segments of the population are not sharing in economic gains (Bricker, Henriques, Krimmel, and Sabelhaus, 2016; Saez and Zucman, 2016; Piketty, Saez, and Zucman 2018; though see Auten and Splinter, 2020, for contrast). This note augments information from the recently released 2019 Survey of Consumer Finances (SCF) to provide updated estimates of the wealth and income distributions. The SCF is a triennial survey that provides the most comprehensive look at the assets, liabilities, income, and demographic characteristics of U.S. families. By combining a wealthy oversample with a nationally representative sample, the SCF is uniquely positioned to measure the entire wealth and income distributions.1 These 2019 SCF data offer the first look at the evolution of wealth concentration since 2016—the most recent year the prior SCF and other distributional estimates (Saez and Zucman, 2019; Smith, Zidar, and Zwick, 2020) describe—capturing the tail end of the longest economic expansion on record. Note that these data do not reflect the effects of the COVID-19 pandemic on family finances, as almost all of the data in the 2019 survey were collected prior to the onset of the pandemic.2 The 2019 SCF indicates that, except for a small shift in concentration from the top 1 percent to the remainder of the top decile, wealth concentration in 2019 was similar to the level seen in 2016 and was near the historical high over the 1989–2019 period.3 We show that historical estimates of the wealth distribution using the SCF and those relying on other data sources were in agreement until the financial crisis — indicating that from 1989 until the crisis, wealth became steadily more concentrated at the top, largely at the expense of the families in the upper-middle segment of the distribution — but since then, the extent of concentration described by the available measures has varied widely. We also leverage the detailed SCF data to characterize how both demographics and sources of wealth—including inheritances—differ across the distribution. Though our primary focus in this note is on wealth, we also provide estimates of recent trends in the concentration of income—a key flow input into a family's stock of wealth that other methods of characterizing wealth concentration rely on to formulate their estimates—and of the joint distribution between income and wealth. Income concentration at the top declined over 2016–19 but, similar to the concentration of wealth, remained high. Finally, though there is a strong positive correlation between these two important measures of economic well-being, the highest wealth families are often not among the highest income families. 
#### Wealth distribution Wealth concepts The standard SCF measure of wealth that, for example, characterized the economic well-being of families in 2019 in the recently released Federal Reserve Bulletin (Bhutta, et al., 2020) —referred to as "Bulletin" hereafter— is based on "marketable" wealth, which is the difference between the market value of assets owned by a family and the amount owed in debts and is a concept that is salient to families and includes assets with values that can be easily looked up. To be able to examine the full distribution of wealth, this note augments the information found in the Bulletin in two ways. First, we expand the wealth concept to include household claims on defined benefit (DB) pension assets. Wealth as measured here, then, includes all assets over which a family has legal claim that can be used to finance its present and future consumption. Second, we include an estimate of the wealth of Forbes 400 families, who are excluded from the SCF sample. A. Allocating DB wealth As is, the Bulletin wealth concept leads to uneven treatment of household retirement assets, as it includes assets held in defined contribution (DC) accounts but does not include household claims on DB pension assets. DB pension plans are similar to DC plans in that both provide future income for covered families.4 But unlike DC plans, DB plans cannot be sold and have no market value. Still, more than one-quarter of families aged 35 to 64 participate in DB plans, and DB plans represent the only retirement account for about 9 percent of families in this age range. Aggregate DB pension assets are collected in the Financial Accounts of the United States, and we use a method described in Sabelhaus and Volz (2019) to allocate this amount across SCF families. Using the SCF's detailed data on DB coverage (on a current or past job), this method allocates DB pension assets as a function of a family's current plan payouts, wages, ages, and expected future payouts (among other factors).5 Prior to allocating the DB pension assets, median family wealth in the SCF Bulletin is a bit more than $121,000 (table A). But DB pensions are a major component of household wealth, representing about 15 percent of the household balance sheet in the Financial Accounts. After allocating these DB reserves across families, median wealth increases to nearly$172,000.6 ##### Table A: Comparison of SCF Bulletin wealth and augmented wealth, 2019 SCF Median (thousands of dollars) Mean (thousands of dollars) Aggregate (trillions of dollars) Wealth Wealth (augmented) Wealth Wealth (augmented) Wealth Wealth (augmented) 121.7 171.8 748.8 897.2 96.3 115.4 Source: Federal Reserve Board, 2019 Survey of Consumer Finances. Note: Wealth uses “Bulletin” concept—also called “net worth”—and augmented wealth includes DB pension wealth, distributed across 2019 SCF families. Allocating DB pension assets increases wealth across the distribution. Figure A below shows the average balance sheet composition of four wealth groups that we will use throughout this note: families ranked in the Bottom 50 percent of the wealth distribution, those in the "Next 40" percent (the 50th to 90th percentiles), those in the "Next 9" percent (90th to 99th percentiles), and the wealthiest "Top 1" percent. Each group's balance sheet is shown on a different y-axis because mean wealth differs so much across groups. Wealth held in retirement plans is denoted in black, with DB pension wealth represented in the hashed black portion. 
While DB pension wealth increases average wealth in each group relative to the Bulletin wealth concept, wealth increases the most for families in the next 40 and next 9 groups. ##### Figure A. Asset composition—including DB assets—by wealth group Thousands of 2019 dollars B. Wealth coverage Growing up in a wealthy household may also impart other indirect advantages—through social connections, for example, or family loans—that play a role in wealth transmission. Inheritances that are expected, but not yet received, are informative about the wealth of the parents and other close relatives of SCF families. While expected inheritances should not directly influence the SCF family's wealth, as they are not yet received, table B indicates that the wealthiest families often come from families with significant wealth. Specifically, in 2019, wealthiest families expected to receive more than $940,000, on average, in future inheritances: an amount far greater than expected inheritances by the wealth groups lower in the distribution. ##### Table B. Characteristics of SCF families, by wealth group Panel A. Demographics and portfolio of SCF families Shares, except where noted Demographics Portfolio White Black Hisp. Other College graduate Age (years) 0.53 0.20 0.14 0.13 0.22 45 0.05 0.27 0.75 0.09 0.06 0.09 0.45 58 0.17 0.18 0.83 0.04 0.03 0.10 0.72 60 0.38 0.21 0.89 0.01 0.00 0.10 0.87 62 0.74 0.41 0.65 0.14 0.10 0.11 0.36 52 0.13 0.27 Panel B. Inheritances and family background of SCF families. Shares, except where noted Inheritances (thousands of dollars) Parent with College degree Received 9.7 29.4 0.28 45.9 60.1 0.33 174.2 266.6 0.47 719 941.1 0.58 46.2 72.2 0.32 Source: Federal Reserve Board, 2019 Survey of Consumer Finances. The wealthiest families also have the highest share of parents with a college degree—another indicator of high socioeconomic status when growing up. Almost 60 percent of the wealthiest families had at least one parent that went to college, a share that falls to 47 percent for the next 9, 33 percent for the next 40, and 28 percent for the bottom 50. However, a family's own education is a stronger predictor of its wealth than the education of its parents, as nearly 90 percent of the wealthiest 1 percent include a reference person with a college degree. #### Income An important source of wealth for many families is saved income, and the SCF collects detailed information on pre-tax income received in the year prior to the survey, which follows closely from what families report on an income tax form.12 For example, changes between the 2016 and 2019 surveys describe changes in income between 2015 and 2018, respectively.13 Figure E describes the distribution of income across four income groups that mirror the wealth groups found earlier: families ranked in the bottom 50 percent of the income distribution, those in the "Next 40" percent (the 50th to 90th percentiles), those in the "Next 9" percent (90th to 99th percentiles), and the top 1 percent. Comparing figure E to figure A, income is more evenly distributed than wealth. While the wealthiest 1 percent of families held about 33 percent of wealth in the 2019 SCF (figure A), the top 1 percent of SCF families by income received about 20 percent of total income (figure E). But, as with wealth, the income distribution shows a longer-term trend toward more concentration at the top. Although the share of income received by the top 1 percent fell in 2019, it still remained substantially higher than earlier in the decade. 
In most surveys since 2007, the top 1 percent of families by income received 19 percent or more of total income; prior to 2007 the share of income received by the top 1 percent was never larger than 19 percent. And the share of income received by the Next 9 income group—90th–99th percentiles—has continued to increase since 1989. Between the 2016 and 2019 surveys, the share of income received by the Bottom 50 and Next 40 income groups both increased, reversing a decade-long decline in the income share for those groups. Both, though, increased from their historical lows and their 2019 income shares are similar to the 2010–13 period. ##### Figure E. SCF Bulletin income distribution Share earned by income group Income reported on a tax form can be informative about well-being, but the concept of income that best measures a family's well-being most likely includes components that are fully taxed (such as wages) or partially taxed (such as Social Security), along with untaxed components (like employer-paid health insurance premiums or some government transfers). Such components are included in other efforts to create "more complete" measures of income, including the distributional national accounts (DINA) project (Piketty, Saez, and Zucman, 2018). Accordingly, we augment SCF Bulletin income to include some non-taxable sources of income: employer-paid health insurance premiums as well as Medicare and Medicaid receipt and other government assistance. We do so by allocating National Income and Product Account aggregates across SCF families using the detailed information collected in the SCF on health insurance coverage, age, and income to distribute employer health insurance, Medicare, Medicaid, and other government subsidies (see Bricker, Henriques, Krimmel, and Sabelhaus, 2016, for more information). These augmented SCF income estimates have a comparable distribution to the national income concept found in the WID database. Both show a somewhat increasing pattern in income concentration in the top 1 percent during the 1989–2019 period and a declining share of income to families outside of the top 10 percent, similar to but to a lesser extent than household wealth in figure C and D and Bulletin income in figure E. ##### Figure F. Augmented SCF and WID income distributions Share earned by income group #### Joint distribution of income and wealth Both income and wealth can describe a family's economic well-being, and the distributional analysis shown so far shows how that well-being is spread across the U.S. population. Unique to the SCF, we can also see the overlap in the two distributions, and identify how many families are at the top of both. The table below describes the share of families in a given income group by wealth group—the marginal distribution of income with respect to wealth. (The percentages in each column sum to 1, except for rounding errors.) The diagonal elements pertain to families that are in the same segment of the distribution for both measures. Of the wealthiest 1 percent, about half (49 percent) are also among the families with the highest incomes, and 41 percent are in the Next 9 income group. Nearly 10 percent of the wealthiest families, then, have income that puts them in the bottom 90 percent. Of the current methods employed to characterize the top of the wealth distribution, the SCF is the only one that does not rely on income to measure wealth, and instead collects a direct measure of both wealth and income. 
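As a rough illustration of how a weighted cross-tabulation of this kind can be produced from survey microdata, a hedged SAS sketch is given below; it is not the authors' code. It assumes that group indicators for augmented wealth (wealth_grp) and income (inc_grp) have already been constructed from weighted percentile cutoffs, and that wgt is the analysis weight; the dataset and variable names are placeholders.

```
proc surveyfreq data=scf2019;
   weight wgt;                            /* analysis weight (placeholder name) */
   tables inc_grp*wealth_grp / col nostd;
   /* COL requests column percentages: within each wealth group (columns),
      the weighted share of families falling in each income group (rows),
      matching the layout of Table C below. Standard errors would in
      addition require the survey's replicate weights.                     */
run;
```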
The joint distribution in methods that "capitalize" income into wealth relies on models to map income to wealth, and the recent modelling work—from Bricker, Henriques, and Hansen (2018) to Saez and Zucman (2019) to Smith, Zidar, and Zwick (2020)—indicates that there is much work to be done before the income-to-wealth relationship is fully understood.

##### Table C. Share of families in income group, by wealth group, 2019 SCF

| Wealth groups | Income: Bottom 50 | Income: Next 40 | Income: Next 9 | Income: Top 1 |
| --- | --- | --- | --- | --- |
| Bottom 50 | 0.72 | 0.34 | 0.03 | 0.04 |
| Next 40 | 0.27 | 0.56 | 0.46 | 0.05 |
| Next 9 | 0.01 | 0.10 | 0.47 | 0.41 |
| Top 1 | 0 | 0 | 0.05 | 0.49 |
| All | 1.00 | 1.00 | 1.00 | 1.00 |

Source: Federal Reserve Board, 2019 Survey of Consumer Finances.

Note: The table displays the share of families in augmented wealth groups that are also in income groups. For example, 49 percent of the top 1 by wealth are also in the top 1 by augmented income. Columns sum to 1, though they may not exactly because of rounding.

#### Conclusion

This note provides the first look at the wealth distribution since 2016 (figure B). It also lays out the best practices for augmenting the SCF wealth concept to characterize—and be more comparable to other measures of—the full U.S. wealth distribution. The data collected in the 2010–2016 SCF surveys indicated that the sustained economic growth that followed the Great Recession had initially accrued primarily to the wealthiest families. These 2019 data were collected at the end of the longest economic expansion on record, when growth spread more equitably to the larger segments of the distribution.

Overall, the 2019 SCF data mostly show small changes between the 2016 and 2019 SCF distributions. Wealth concentration in the top 1 percent of families is still near the high point in the historical 1989–2019 SCF time series, and concentration in the top 10 percent is unchanged. Although wealth concentration did not increase between 2016 and 2019, the gains that accrued to the rest of the distribution did little to reduce the large existing disparities. Income concentration in the top 1 percent fell but remained at the typical levels seen since the 2007 SCF. However, some of the income share earned by the top 1 percent shifted to the bottom 90 percent of the income distribution, reversing a nearly decade-long decline in the income share for that group.

#### References

Alvaredo, Facundo, Anthony Atkinson, Lucas Chancel, Thomas Piketty, Emmanuel Saez, and Gabriel Zucman (2016). "Distributional National Accounts (DINA) Guidelines: Concepts and Methods used in WID.world," WID.world working paper series, No. 2016/1.

Auten, Gerald, and David Splinter (2020). "Income Inequality in the United States: Using Tax Data to Measure Long-term Trends," mimeo.

Batty, Michael, Jesse Bricker, Joseph Briggs, Elizabeth Holmquist, Susan McIntosh, Kevin Moore, Eric Nielsen, Sarah Reber, Molly Shatto, Kamila Sommer, Tom Sweeney, and Alice Volz (2019). "Introducing the Distributional Financial Accounts of the United States," FEDS Paper No. 2019-017.

Bastani, Spencer, and Daniel Waldenstrom (2019). "Salience of Inherited Wealth and the Support for Inheritance Taxation," CESifo Working Paper No. 7482.

Bhutta, Neil, Jesse Bricker, Andrew C. Chang, Lisa J. Dettling, Sarena Goodman, Joanne W. Hsu, Kevin B. Moore, Sarah Reber, Alice Henriques Volz, and Richard A. Windle (2020). "Changes in U.S. Family Finances from 2016 to 2019: Evidence from the Survey of Consumer Finances," Federal Reserve Bulletin, September, Vol. 106, No. 5.

Bhutta, Neil, Andrew C. Chang, Lisa J. Dettling, and Joanne W.
Hsu (2020). "Disparities in Wealth by Race and Ethnicity in the 2019 Survey of Consumer Finances," FEDS Notes. Washington: Board of Governors of the Federal Reserve System, September 28, 2020, https://doi.org/10.17016/2380-7172.2797

Board of Governors of the Federal Reserve System (2020). Statistical Release Z.1, "Financial Accounts of the United States."

Bricker, Jesse, Alice Henriques, Jacob Krimmel, and John Sabelhaus (2016). "Measuring Income and Wealth at the Top Using Administrative and Survey Data," Brookings Papers on Economic Activity, Spring.

Bricker, Jesse, Alice Henriques, and Kevin B. Moore (2017). "Updates to the Sampling of Wealthy Families in the Survey of Consumer Finances," FEDS Working Paper No. 2017-114.

Bricker, Jesse, Alice Henriques, and Peter Hansen (2018). "How Much Has Wealth Concentration Grown in the United States? A Re-examination of Data from 2001-2013," FEDS Working Paper No. 2018-024.

Bricker, Jesse, Peter Hansen, and Alice Henriques Volz (2019). "Wealth Concentration in the U.S. after Augmenting the Upper Tail of the Survey of Consumer Finances," Economics Letters, Vol. 184.

Feiveson, Laura, and John Sabelhaus (2018). "How Does Intergenerational Wealth Transmission Affect Wealth Concentration?" FEDS Paper No. 2018-06-01.

Kennickell, Arthur (1999). "Using income to predict wealth." Technical report, Board of Governors of the Federal Reserve System (U.S.).

Kopczuk, Wojciech, and Emmanuel Saez (2004). "Top wealth shares in the United States, 1916-2000: Evidence from estate tax returns," National Tax Journal, 47(2):445–487.

Lampman, R. J. (1962). "The Share of Top Wealth-Holders in National Wealth, 1922-56." NBER Books.

Piketty, Thomas, and Emmanuel Saez (2003). "Income Inequality in the United States, 1913-1998," Quarterly Journal of Economics, vol. 118, no. 1: 1-39.

Piketty, Thomas, Emmanuel Saez, and Gabriel Zucman (2018). "Distributional National Accounts: Methods and Estimates for the United States," Quarterly Journal of Economics, vol. 133, no. 2: 553-609.

Robbins, Jacob (2018). "Capital gains and the distribution of income in the United States," mimeo.

Sabelhaus, John, and Alice Henriques Volz (2019). "Are Disappearing Employer Pensions Contributing to Rising Wealth Inequality?" FEDS Notes, February 1, 2019.

Sabelhaus, John, and Alice Henriques Volz (2020). "Social Security Wealth, Inequality, and Lifecycle Saving," NBER Working Paper 27110.

Saez, Emmanuel, and Gabriel Zucman (2016). "Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data," Quarterly Journal of Economics, vol. 131, no. 2, pp. 519-578.

Saez, Emmanuel, and Gabriel Zucman (2019). "Progressive Wealth Taxation," Brookings Papers on Economic Activity, Fall.

Smith, Matthew, Owen Zidar, and Eric Zwick (2020). "Top Wealth in the United States: New Estimates and Implications for Taxing the Rich." Mimeo.

Vermeulen, Philip (2018). "How fat is the top tail of the wealth distribution?" The Review of Income and Wealth, 64(2):357–387.

1. See Bricker, Henriques, and Moore (2017) for more information on the SCF sampling process and Bricker, Henriques, Krimmel, and Sabelhaus (2016) for more information on top-end coverage in the SCF. The SCF cannot sample the Forbes 400, but the results shown here include our best estimate of the wealth of these families.

2. The main results of the 2019 SCF as well as information on the timing of the interviews are found in Bhutta et al. (2020).
Additional details concerning the data and definitions used in this note can be found in this article and its accompanying appendix.

3. For context, we also provide a comparison between the historical evolution of SCF wealth concentration estimates and those from tax records that are currently available only through 2016.

4. DB pensions are often called "traditional pensions." Retirement payments are typically based on years of employment multiplied by a share of pay, an amount that is a function of annual income. These plans are managed by employers—with assets held in reserve to pay future benefits—so employees typically have access to information about future benefits but not account values. In contrast, DC plans are owned by employees; these include 401(k), 403(b), and similar plans, which are managed by the employer, along with IRA accounts, which are managed by the individual.

5. Notably, this adjustment excludes Social Security, another important source of retirement-age consumption funding on which many families depend. But, unlike DB pensions, there are no legal claims to future Social Security payments. See Board of Governors of the Federal Reserve System (2020), Statistical Release Z.1, "Financial Accounts of the United States," for the full Financial Accounts data.

6. More than $19 trillion in asset reserves are allocated to fund DB pension obligations to households in the 2019 Q3 Financial Accounts, representing about 15 percent of all household wealth. Allocating these DB reserves to the 2019 SCF also adds about 15 percent to the SCF aggregate household wealth, as the SCF aggregate net worth typically lines up well with aggregate net worth in table B.101.h of the Financial Accounts (Batty et al., 2019). This stock of DB pension wealth is the accrued pension benefit obligations (ABO) of pension plans (i.e., all benefits accrued to date for workers), which includes both funded and unfunded obligations. Pension accounting principles and laws differ for private and public sector employers, leading to higher funding levels for private sector plans. Including both funded and unfunded obligations here allocates all pension benefits workers have accrued, since they are liabilities on the balance sheet of firms, regardless of whether the assets have been set aside (for further discussion of the DB pension concept used here, see Sabelhaus and Volz, 2019). The optimal choice of pension entitlement measure reflects one's view on the risk of firm (or government) default on pension obligations.

7. Bricker, Hansen, and Volz (2019), Vermeulen (2018), and Kennickell (1999) each show that any under-coverage in the baseline SCF does not arise until the extreme top, as the wealthiest SCF families have wealth comparable to the lower end of the Forbes 400. Thus, adding the Forbes 400 increases total 2019 SCF aggregate household wealth by $2.65 trillion, as about $300 billion of Forbes wealth was already represented by sampled SCF families.

8. Note that first including DB pensions brings the top 1 share down to 31.7 percent, and then including Forbes families brings the share up to 33.1 percent. Aggregate DB wealth is far greater than Forbes wealth ($19 trillion versus $3 trillion).
9. The Distributional Financial Accounts (DFA) distribute the aggregate Financial Accounts based on the SCF distribution of assets and debts (see Batty et al., 2019), and updated DFA wealth concentration results will be in the upcoming release, covering data through 2020 Q2. The only official recording of wealth that exists in the U.S. comes from an estate tax (if applicable) at death. While these data have been widely used in the past to estimate wealth concentration (e.g., Lampman, 1962; Kopczuk and Saez, 2004), the estate tax data from the past 15 years have relatively small sample sizes and are less representative than past data, as estate tax filing thresholds have increased.

10. See www.wid.world and Alvaredo et al. (2016) for more information. Note that the augmented SCF wealth concept used here is more comparable to the WID wealth concept.

11. This analysis is based solely on the 2019 SCF and thus excludes the Forbes families from earlier figures.

12. The Bulletin income concept is the pre-tax sum of wages, interest (taxable and non-taxable), dividends, capital gains, pass-through income from businesses, pension, Social Security, and Supplemental Security Income payments, retirement account withdrawals, and some forms of transfer income.

13. Over this period, there was a change in tax rates brought on by the Tax Cuts and Jobs Act of 2017.
2022-05-17T12:22:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3493615686893463, "perplexity": 4383.668646392885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00705.warc.gz"}
https://googology.wikia.org/wiki/Flan_numbers
Flan numbers (Japanese: フラン数) are numbers coined by the Japanese googologist 巨大数大好きbot. There are several versions of Flan numbers.

## Episode

The numbers are named after Flandre Scarlet (Japanese: フランドール・スカーレット, nickname: フラン)[1], a famous character in the Japanese game series 東方Project. One of the most famous versions, Flan number 4 version 3 (Japanese: フラン数第四形態改三), was coined on 03/22/2018.[2] Its predecessor, Flan number 4 version 2 (Japanese: フラン数第四形態改二), is known to be equal to 6, and is the origin of the tradition in one of the Japanese googology communities that $$6$$ is a large number.[3][4][5] Perhaps this episode is the origin of the number later coined by the Japanese googologist Nayuta Ito.

## Versions

The following is a table of information about the versions at the time they were first created.[6]

| Version name | Approximation |
| --- | --- |
| フラン数第一形態 | $$10^{10^{10^{670767}}}$$ |
| フラン数第二形態 | $$f_{\omega^{\omega^{\omega}}}^6(5)$$ |
| フラン数第三形態 | $$f_{\varepsilon_0}(6)$$ |
| フラン数第四形態 | ill-defined |
| フラン数第四形態改 | ill-defined |
| フラン数第四形態改二 | $$6$$ |
| フラン数第四形態改三 | $$f_{\varepsilon_4^{\varepsilon_4^{\varepsilon_4}}}(5)$$ |
| フラン数第五形態改二 | $$5$$ |
| フラン数第五形態改三 | $$f_{\varepsilon_{\omega^{\omega^{\omega^{\omega+2}}}}}(5)$$ |
| フラン数第六形態改二 | ill-defined |
| フラン数第六形態改二甲 | ill-defined |
| フラン数第七形態改二 | ill-defined |
| フラン数準第八形態改二 | ill-defined |

Here, $$f$$ denotes the fast-growing hierarchy associated with the Veblen hierarchy (a standard definition is recalled at the end of this page).

## Japanese letters

The word "フラン" means "Flan", i.e. the nickname of Flandre Scarlet, and the letter "数" means "number". Therefore the combination "フラン数" means "Flan number". The letter "第" means "-th", and the letters "一", "二", "三", "四", "五", "六", "七", and "八" mean "1", "2", "3", "4", "5", "6", "7", and "8" respectively. Therefore, for example, the combination "第三" means "third". The word "形態" means "form" or "version". The letter "改" means "revision", and hence the combination "改二" means "revision 2", i.e. the second revision. The letter "甲" means "first" in one of the old Japanese counting systems. Therefore the combination "フラン数第六形態改二甲" means "the first version of the second revision of the sixth form of the Flan number".
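For reference, the fast-growing hierarchy used above is usually defined along the following lines (this is the standard form; the exact values depend on the choice of fundamental sequences, and this article relies on the sequences attached to the Veblen hierarchy):

$$f_0(n) = n + 1, \qquad f_{\alpha+1}(n) = f_{\alpha}^{n}(n), \qquad f_{\lambda}(n) = f_{\lambda[n]}(n) \text{ for a limit ordinal } \lambda,$$

where $$f_{\alpha}^{n}$$ denotes the $$n$$-fold iterate of $$f_{\alpha}$$ and $$\lambda[n]$$ is the $$n$$-th term of the fundamental sequence assigned to $$\lambda$$.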
2021-06-19T13:25:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4487045705318451, "perplexity": 13830.06101849836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487648194.49/warc/CC-MAIN-20210619111846-20210619141846-00347.warc.gz"}
https://www.usgs.gov/center-news/volcano-watch-thermal-activity-yellowstone-sparks-increased-monitoring
# Volcano Watch — Thermal Activity in Yellowstone Sparks Increased Monitoring

Norris Geyser Basin of Yellowstone National Park has long been recognized as the hottest and most changeable of Yellowstone's famous hydrothermal wonders. This summer, Norris lived up to its hot, unstable reputation as scientists and visitors alike have seen significant changes in many geysers and increased ground temperatures in the western part of the basin. Porkchop Geyser, which sprang to life from a small hot spring in 1971, erupted in July for the first time since 1989. Water has drained away from several active geysers, resulting in hissing steam vents and ground temperatures as high as 93 degrees Centigrade (200 degrees Fahrenheit). Still other geysers have erupted more frequently and regularly, while some thermal features that usually release hot water and steam now send steam jetting into the air. On July 11, the staff of Yellowstone National Park also noted the formation of a new mud pot, a small cauldron filled with boiling acidic water and mud. Within one week, the mud pot turned into a high-pressure steam vent. Also, pine trees are dying in three areas in response to the increased thermal activity.

Norris is one of the more popular geyser basins in Yellowstone, with as many as 4,000 people visiting the nearby museum each week. On July 23, the park superintendent closed access to the western part of Norris Geyser Basin, known as the Back Basin, for public safety (other parts of Norris remain open to the public). About a mile of trail and boardwalk in the Back Basin remain closed because of the hazard to visitors and park staff from the high temperatures. Another potential hazard is from hydrothermal explosions that could send boiling water and rocks shooting into the air.

The concern for public safety is real. Hydrothermal explosions have occurred recently at Norris and other areas of Yellowstone. For example, Porkchop Geyser exploded on September 5, 1989. Rocks surrounding the old geyser were upended by the force of the explosion, and some rocks were thrown more than 66 m (216 feet) from the spouting geyser. Luckily, people in the area were not injured by the flying debris and scalding water.

The cause of the increased thermal activity is not known, but scientists associated with the Yellowstone Volcano Observatory (YVO) launched a temporary monitoring experiment in August in order to learn from the ongoing activity. YVO is a collaborative partnership between the U.S. Geological Survey, the University of Utah, and Yellowstone National Park. The Norris monitoring experiment is also supported by two research organizations: the Incorporated Research Institutions for Seismology (IRIS) and the University NAVSTAR Consortium (UNAVCO). Scientists installed a network of 7 new seismic stations for recording various types of earthquakes. The instruments, called broadband seismometers, record a wide range of vibrations typical of hydrothermal and volcanic systems. These seismometers are especially sensitive to the long-wavelength ground vibrations that occur as water and gas move through underground cracks. Five high-precision Global Positioning System receivers also were installed at Norris in order to track movement of the ground in response to underground pulses of groundwater and steam and, in case one occurs, a hydrothermal explosion. Data from the broadband and GPS receivers are being stored on site. The instruments and data will be retrieved in the next few weeks before the onset of winter.
Thermometers were also placed in hot springs and downstream from geysers and other thermal features to continuously measure temperature fluctuations that may occur. The Norris experiment is intended to document activity within the shallow hydrothermal system that may be causing changes at the surface of the Back Basin. In the coming months, scientists will be poring over the mounds of data collected by the Norris experiment for possible clues to the renewed heating of Norris. There is no evidence, however, that magma beneath the enormous Yellowstone caldera is directly involved. Scientists have noted similar changes at Norris in the past, but the current activity is perhaps the best opportunity yet to quantitatively document and better understand hydrothermal disturbances and their possible causes at Yellowstone.

### Volcano Activity Update

Eruptive activity at the Puu Oo vent of Kīlauea Volcano continued unabated during the past week. Surface activity is mainly visible in the westernmost section of the pali flow field. The Kohola arm of the Mother's Day flow has some breakouts on the coastal flat. The breakouts, small and sluggish, are scattered through the flow. The east-side lobe of the Mother's Day flow also remains visible as a series of incandescent patches from the top of Pulama pali out onto the gentle slope below. The August 9 breakout, which starts high up the Mother's Day tube system, has been slowly moving southward and just appeared at the top of Pulama pali on September 10-11. No lava is entering the ocean.

Three earthquakes were felt on the island during the past 7 days. One, of magnitude 3.6, was felt from Captain Cook to Kailua at 12:38 a.m. September 5; it was located west of the island, about 45 km (28 miles) west-southwest of South Point at a depth of 54 km (33 miles). Another earthquake, with magnitude 3.4, shook the northern part of the island at 7:14 p.m. September 6. It was felt in the area bounded by Hale Pokaho, Papaaloa, Kohala Estates, and Waikoloa and was located about 5 km (3 miles) west-southwest of Honoka`a at a depth of 12 km (8 miles). Volcano residents awoke to a magnitude 3.4 earthquake at 12:24 a.m. September 10. The earthquake came from the southern part of Kīlauea caldera at a depth of 3.5 km (2 miles).

Mauna Loa is not erupting. The summit region continues to inflate slowly. Seismic activity remains low, with only three earthquakes located in the summit area during the last seven days.
2020-08-04T12:19:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3164660334587097, "perplexity": 5034.816227976423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.93/warc/CC-MAIN-20200804102630-20200804132630-00089.warc.gz"}
https://bison.inl.gov/Documentation/source/materials/VSwellingUPuZr.aspx
# Volumetric Swelling of UPuZr

Computes a volumetric strain to account for solid and gaseous swelling and for open pore compression in the U-Pu-Zr metal fuel system.

Warning (Deprecated Solid Mechanics Material): The functionality of this solid mechanics material is being replaced in the TensorMechanics system by UPuZrVolumetricSwellingEigenstrain.

## Description

The VSwellingUPuZr model computes a volumetric strain to account for solid and gaseous swelling and for open pore compression in U-Pu-Zr metal fuel systems. The solid swelling and gaseous swelling are optionally saved as material properties, named solid_swell and gas_swell, respectively. Also, porosity (as-fabricated + gas swelling porosity) is available as a material property. The compressive strain increment due to open pore compression (hot pressing) is computed in CreepUPuZr and passed to VSwellingUPuZr as the material property open_pore_compression.

The dilatational components of the strain increment tensor can be scaled in this model with a user-defined input parameter called the anisotropic_strain_scaling vector. The default value of this parameter is '1 1 1', where these components are of type Real in C++ parlance. Those components can be changed, but are limited such that the sum of the components divided by 3 must equal 1 and the components can only be positive and less than or equal to 3. For example, taking the total amount of strain and applying it all in the first component of the strain increment tensor can be done by setting anisotropic_strain_scaling to '3 0 0'. This ensures that the total amount of strain is preserved. To include the open pore compression strain in VSwellingUPuZr, the hydrostatic_stress and plenum_pressure must be defined in the CreepUPuZr block.

### Gaseous Swelling

See the discussion of the gaseous swelling equations on the UPuZrVolumetricSwellingEigenstrain page.

### Solid Swelling

See the discussion of the solid fission products swelling equations on the UPuZrVolumetricSwellingEigenstrain page. Note that the dilatational components of the strain increment tensor can be scaled in this model with a user-defined input parameter called the anisotropic_strain_scaling vector.

### Total Isotropic Volumetric Swelling

Following Karahan and Buongiorno (2010), the three isotropic volumetric swelling components, the equations for gaseous swelling and solid swelling, and the model for open pore compression strain from CreepUPuZr, are summed together:

$$\left(\frac{\Delta V}{V}\right)_{\mathrm{total}} = \left(\frac{\Delta V}{V}\right)_{\mathrm{solid}} + \left(\frac{\Delta V}{V}\right)_{\mathrm{gas}} + \left(\frac{\Delta V}{V}\right)_{\mathrm{open\,pore}} \quad (1)$$

## Example Input Syntax

```
[./swelling]
  type = VSwellingUPuZr
  block = '1 2 3 4 5 6 7'
  temp = temp
  fission_rate = fission_rate
  hydrostatic_stress = hydrostatic_stress
  save_solid_swell = true
  save_gas_swell = true
  hot_pressing_strain_increment = open_pore_compression
  fabrication_porosity = 0.239997
  outputs = all
[../]
```
(test/tests/vswelling_upuzr/swelling.i)

To model the open pore compression, the CreepUPuZr model must also be included with the hydrostatic_stress parameter set as shown below:

```
[./creep]
  type = CreepUPuZr
  block = '1 2 3 4 5 6 7'
  fission_rate = fission_rate
  hydrostatic_stress = hydrostatic_stress
  youngs_modulus = 1.0
  poissons_ratio = 0.3
  temp = temp
  thermal_expansion = 1.e-5
  large_strain = true
  outputs = all
[../]
```
(test/tests/vswelling_upuzr/swelling.i)
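To illustrate the bookkeeping implied by Eq. (1) and the anisotropic_strain_scaling constraint described above, the following standalone sketch sums the three swelling contributions and distributes the result over the diagonal strain components. This is not BISON source code; the function name and arguments are hypothetical and only mirror the description above.

```python
# Illustrative sketch (not BISON source code) of Eq. (1) plus the
# anisotropic_strain_scaling redistribution of the dilatational strain.

def swelling_strain_increment(d_gas, d_solid, d_open_pore,
                              scaling=(1.0, 1.0, 1.0)):
    """Return the three diagonal strain increments at one material point.

    d_gas, d_solid : volumetric swelling increments from the gaseous and
                     solid fission-product models (dimensionless)
    d_open_pore    : open-pore compression (hot pressing) strain increment,
                     computed elsewhere (e.g., by CreepUPuZr)
    scaling        : anisotropic_strain_scaling; each component must lie in
                     [0, 3] and the components must average to 1 so the
                     total amount of strain is preserved
    """
    if any(s < 0.0 or s > 3.0 for s in scaling):
        raise ValueError("each scaling component must lie in [0, 3]")
    if abs(sum(scaling) / 3.0 - 1.0) > 1e-12:
        raise ValueError("scaling components must average to 1")

    total = d_gas + d_solid + d_open_pore   # Eq. (1): sum of the three parts
    per_axis = total / 3.0                  # isotropic split before scaling
    return tuple(per_axis * s for s in scaling)

# Example: apply the entire dilatational strain to the first component.
print(swelling_strain_increment(3.0e-4, 1.0e-4, -0.5e-4, scaling=(3.0, 0.0, 0.0)))
```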
## Input Parameters

• save_gas_swell (bool, default: False): Should the gaseous swelling be saved in a material property.
• compute (bool, default: True): When false, MOOSE will not call compute methods on this material. The user must call computeProperties() after retrieving the Material via MaterialPropertyInterface::getMaterial(). Non-computed Materials are not sorted for dependencies.
• fission_rate (std::vector): Coupled Fission Rate.
• fabrication_porosity (double, default: 0): The as-fabricated porosity.
• use_material_fission_rate (bool, default: False): Flag to use the material 'fission_rate_material' instead of variable fission rate.
• anisotropic_strain_scaling (std::vector, default: '1 1 1'): The scale factor applied to each component of the dilatational strain computed in this model. When defining these scale factors, ensure that total strain is preserved.
• hot_pressing_strain_increment (std::vector): The increment of strain due to compression of the open pores.
• use_old_hydrostatic_stress (bool, default: True): Flag to use the hydrostatic stress calculated at the previous timestep instead of the current value.
• hydrostatic_stress (std::vector): Coupled Hydrostatic Stress.
• temp (std::vector): Coupled Temperature.
• save_solid_swell (bool, default: False): Should the solid swelling be saved in a material property.
• plenum_pressure (PostprocessorName): The name of the plenum_pressure postprocessor value.
• boundary (std::vector): The list of boundary IDs from the mesh where this boundary condition applies.
• fission_rate_material (MaterialPropertyName, default: fission_rate_material): Fission rate material property name.
• block (std::vector): The list of block ids (SubdomainID) that this object will be applied to.

### Optional Parameters

• enable (bool, default: True): Set the enabled status of the MooseObject.
• use_displaced_mesh (bool, default: False): Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block, the undisplaced mesh will still be used.
• control_tags (std::vector): Adds user-defined labels for accessing object parameters via control logic.
• seed (unsigned int, default: 0): The seed for the master random number generator.
• implicit (bool, default: True): Determines whether this object is calculated using an implicit or explicit form.
• constant_on (MooseEnum, default: NONE): When ELEMENT, MOOSE will only call computeQpProperties() for the 0th quadrature point, and then copy that value to the other qps. When SUBDOMAIN, MOOSE will only call computeSubdomainProperties() for the 0th quadrature point, and then copy that value to the other qps. Evaluations on element qps will be skipped.
• gas_factor (double, default: 3.59e-24): Conversion factor to align BISON and published model units.
• solid_factor (double, default: 4.16e-29): Conversion factor to align BISON and published model units.

### Advanced: Unit Conversion Factors Parameters

• output_properties (std::vector): List of material properties, from this material, to output (outputs must also be defined to an output type).
• outputs (std::vector, default: none): Vector of output names where you would like to restrict the output of variable(s) associated with this object.

## References

1. A. Karahan and J. Buongiorno. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors. Journal of Nuclear Materials, 396:283–293, 2010.
2020-12-04T05:34:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3446616232395172, "perplexity": 6509.68775516534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733122.72/warc/CC-MAIN-20201204040803-20201204070803-00614.warc.gz"}
https://tjyj.stats.gov.cn/CN/10.19343/j.cnki.11-1302/c.2020.06.003
### Do Fiscal Decentralization and Environmental Regulation Promote Green Technology Innovation in China?

Chen Bin & Li Tuo

Online: 2020-06-25; Published: 2020-06-23

Abstract: This paper uses Chinese provincial panel data for 2003-2017 to measure the degree and efficiency of fiscal decentralization. Combining this with a two-stage network DEA, we evaluate green innovation efficiency, the efficiency of green technology R&D, and the conversion efficiency of green technology achievements. We establish mathematical and empirical models to analyze the effect of fiscal decentralization and environmental regulation on green technology innovation. The results show that, on the whole, the degree of fiscal decentralization, fiscal decentralization efficiency, and environmental regulation are all positive factors that promote the development of green innovation in China, and fiscal decentralization also affects local environmental regulation, having a positive indirect impact on green technology innovation; however, fiscal decentralization also leads to local governments' short-sighted neglect of green technology R&D. Across different periods of time, fiscal decentralization and environmental regulation have different impacts on green innovation. First, after the global financial crisis, fiscal decentralization increased the crowding-out effect of economic stimulus policies on the development of green innovation, but this problem has been alleviated since the start of the new era. Second, after 2013, local environmental regulation has deepened too fast, which is not conducive to green innovation to some extent. From the perspective of regions with different levels of development, fiscal decentralization provides slightly insufficient support for the development of green innovation in high-level and low-level regions, but improving the efficiency of fiscal decentralization can significantly promote the development of high-level regions; the promotion of environmental regulation on green innovation is mainly reflected in high-level regions, yet for regions with low efficiency of green innovation, excessive environmental regulation has no practical significance.
2022-12-03T11:40:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3509426414966583, "perplexity": 7385.230422352999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00401.warc.gz"}
https://designfils.eba.gov.tr/blog/index.php?entryid=84103
## Blog entries by Annette Messier

CourseLab 2.7.rar

Q: I read the formatting rules of this website, and I would like to know: if the output file's name format does not match the input file's format, how can I tell?

A: It appears that you're running into content-length: 8 on your curl request. Given the URL you've used (one that, at that moment, was unreachable), it appears that the Content-Length header is being attached with the value of 8, rather than the actual length of the Content-Disposition: attachment, which is 488. The error in the curl output you've posted is:

Content-Length: 0

So the error occurs when you echo a Content-Length: 0. I would suggest putting a regular expression in place of the non-regex replacement you've put in your code. This could be done with something like:

```php
$data_string = 'Content-Disposition: attachment; filename="Debug_Download.zip"';
$data = preg_replace('/Content-Disposition: attachment;/', '', $data_string);
```

It's also important to note that you will need to change the filename that your curl command generates to match the filename of the file you are trying to download. Below is your exact curl command that is failing. Be sure to put in the proper formatting for your document (including the Content-Type etc.). You also need to check for errors in your curl call, such as a timeout or a curl exception, given the document that you've linked to:

My apologies for the inaccurate prior statement, but it should be obvious from this link that it's not the latest release.
2022-09-29T13:34:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4136102497577667, "perplexity": 1096.2729921996606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00521.warc.gz"}
https://lardbucket.org/blog/archives/2014/11/
# lardbucket: 2014 : november

## 11/30/2014

### 2012 Book Archive: Now With PDFs

Filed under: General — Andy @ 10:58 am

Almost two years ago, I launched (what became) my 2012 book archive. There's a bit of background on that project on that page. While there have been a few minor developments since then, none of them have been noteworthy enough to document here. Recently, however, I decided that I would try to make PDF copies of the books. I wanted these to be good-quality PDFs, as although a number of PDFs of the books had circulated from other groups in the past, they weren't particularly visually appealing: they were essentially what you would get by quickly printing each of the HTML files from the publisher. My guess was that a simple HTML file with all of the book content could be combined with a quick print CSS style and get good-quality output without too much effort. I figured that with a decent number of iterations on the book CSS and test runs, the project might take a week or so. Over two months later, I'm finally set to release the PDFs. I've made a few notes of the things that I did along the way, in case anybody finds them useful in the future.

## Getting a PDF

It's worth noting that the content I have from the books is already in an HTML format, not necessarily the raw input to a book creator. I decided early on to try to work with HTML and CSS, because although the HTML I have is structured and could probably be parsed into a different format, doing that correctly for over a hundred books would be iffy. With that said, the HTML was well-structured, and lent itself to easy use of CSS to style specific things.

If you've been following CSS development, there are a lot of features in CSS that are supposed to allow styling of printed content. Unfortunately, support for some of these in browsers is spotty, and they don't necessarily provide all of the features that you'd like when styling a book. (In particular, footnotes, page-relative positioning, page numbering, and PDF bookmarks are all difficult. There are likely other features I've forgotten about as well.)

At first, I was hoping to be able to use something like wkhtmltopdf, which wraps WebKit and converts an input file to a PDF. This gave me a number of problems, and didn't seem to support the concept of pages as natively as would be desirable. (It's still impressive that they managed to get the project to work as well as it does, but it doesn't produce nice-looking books yet.) After that, I decided that perhaps Firefox's support for printing would work for me: it works pretty well with many of the CSS printing features, and I can probably script Firefox to output PDFs if I want. Unfortunately, I again ran into bugs with the rendering. I don't recall the details at the moment, but I believe that content had a tendency to not wrap between pages in the right spots. Either way, this sent me in search of a high-quality PDF rendering solution.

If you look online for advice about generating PDFs from HTML, you will inevitably run upon many people suggesting PrinceXML (or, as it seems to be rebranding itself, Prince). They're probably right. It is a commercial piece of software in a case where I had hoped to use free software, but it is still the best solution I have found by far, both in ease of use and functionality.

## Princely Things

Prince itself is not cheap. A personal license is $495 at the time of writing, and even that may not cover what I intend to do in terms of converting books automatically.
(To be clear, it might be covered, but only just barely if it is. I haven't asked, for reasons I'll explain shortly.) If you are doing anything serious with Prince, you're probably looking at a $3800 license per server generating PDFs, or a $1900 one if you're only doing academic things. Upgrades are available for an additional annual cost. To be clear, if you're generating revenue from the PDFs (or even just saving yourself loads of time), Prince is almost certainly worth every penny, but it's prohibitive for side projects.

For non-commercial projects, Prince offers a free version with the requirement that you allow it to add a logo and link to the corner of your document's first page, link to their website wherever you have Prince PDFs for download, and link to their website on a sponsors/partners page. This is mostly unintrusive (although a tad confusing at first: I've considered trying to style in a little "Made with" above the logo to explain why it's there), and very nice of Prince to allow. (To get the "Non-commercial" license, just download the software: you don't need a special license key or anything.)

In fact, I had a question about their licensing ("The books are licensed under a Creative Commons license that doesn't allow me to add restrictions to them, so is it required for people who receive the PDFs from me to keep the Prince logo on them? If so, I can't use the noncommercial license."), emailed them, and got an email back quite quickly from Håkon Wium Lie, Prince's Director (not to mention CTO at Opera and founding member of the Pirate Party of Norway). He's definitely on top of things, and was quite happy to help. (The answer is no, other people can do whatever they want to the PDFs. In my case, they're still subject to the Creative Commons license they always were, but that's not because of Prince.)

Later, I had a question about how to get something to render correctly (a somewhat minor, obscure layout bug), and quickly received a comment from Mike Day, the CEO, noting that they were looking into the issue. When I followed up, the bug hadn't yet been fixed (it undoubtedly has tricky interactions with their page layout code), but I quickly received an alternative suggestion complete with example code. Definitely a pleasant experience all the way around.

If you're looking for a cheaper option to start with, you'll probably run into DocRaptor as well. DocRaptor started out as Prince-as-a-service, providing an API to allow people to generate PDFs using Prince. It now appears to support Excel files, although I haven't looked into those features. For many people, the benefits of being able to rely on DocRaptor to scale up as your workloads do (they claim "thousands of documents a second") and the lower initial costs are probably a great benefit. They also provide well-supported libraries for a number of languages, where Prince usage is largely done by command line (although Prince has a PHP API as well). Overall, DocRaptor almost certainly provides benefits for many people. However, their plans aren't super cheap either, and they're targeted at recurring use, not one-shot uses like mine. I generated over 2500 PDFs in my final output (one per book, plus one per chapter), which would probably have cost me $149 in a month, assuming I didn't want to tweak them later. Still far cheaper than the cheapest Prince license, but pricey for a personal side project like mine.
DocRaptor does have a 7-day free trial, which probably would have allowed me to generate whatever I wanted during that time, but that's not exactly ideal, either. (Nor do I mind paying something for the service, but over a hundred dollars a shot is high for my purposes.) I emailed the DocRaptor folks about a pay-as-you-go plan (so I wasn't paying monthly fees when I wasn't using the service), because I had found references to such a plan elsewhere. I got a very nice response from Matt Gordon, the "lead vocalist" for the group running DocRaptor. Unfortunately, they no longer offer that plan, because they found that disproportionately more of their support costs (and they do provide good support) were going to users who didn't spend much on the service anyway. We had a nice conversation about the possibility of plans that might support alternative uses such as mine, but it doesn't sound like there's anything planned in the immediate future. (I can't blame them, as they need to make money and do what makes sense for their business to continue existing.) They did make a very nice offer (I won't disclose the details) that I turned down for unrelated reasons, but they're definitely nice folks too.

My conclusion is that you pretty much can't go wrong with Prince or DocRaptor. Both have very nice and responsive folks behind them, and seem to be quite well done.

## Tables of Contents and Bookmarks

One of the things relatively unique to printed books is cross-references with page numbers. Most of the book content doesn't include these. This is primarily because any existing cross references are links to a specific section, and I didn't think it necessary to include a page number along with the section number. However, the table of contents for the book definitely benefits from page numbers. Pulling a table of contents together in Prince is relatively easy. It could possibly be done automatically with JavaScript, but I chose to create tables of contents in a Ruby preprocessor as I was assembling whole-book files anyway. Prince makes it easy to include page numbers for links to given anchors, so I only needed to pull out the anchor for each section. (Luckily for me, I already had the anchors in a database.)

Secondly, I wanted to make sure that chapters and sections were listed in the PDF list of bookmarks. This list is sometimes useful when navigating a book in a PDF viewer, although some viewers don't show it. Prince again makes this quite easy, simply requiring a CSS annotation for the items you wish to be bookmark headings. (In fact, by default it uses h1-h6 tags, but I disabled that default because it picked up way too many bookmarks.)

## Optimization

In creating the full-book files, I noticed that some books created particularly large files. In general, this appeared to be because they embedded the full source images, rather than resampling them. While an option to resample the images inside Prince would be great, it doesn't exist at this time. Some of the source images were quite large, and clearly intended to be printed at >= 300 dpi, while most users of the PDFs wouldn't benefit from such images. My first attempt at reducing file size was to use Ghostscript to resample the images. Ghostscript has some features that work similarly to the now-unavailable Acrobat Distiller, and seemed likely to do the job.
Unfortunately, after getting Ghostscript working (Ubuntu 14.04's version appears to crash on larger documents, but 14.10's works), I found that it removed page numbering information and bookmarks. The next step was to try to export this metadata using PDFtk before using Ghostscript, and then import it again afterward. Unfortunately, while PDFtk will output page numbering details, it won't import them into a PDF, and there doesn't appear to be any easily-available way to do so. So, I temporarily abandoned the option to resample the images using Ghostscript. (It also may or may not have been worth it in the first place: some Ghostscript-generated files were larger than the Prince originals, so I had to handle both cases.) It may be worth patching Ghostscript in the future to keep the metadata around, but that seems likely to be quite involved. In many cases, you may get some benefit out of using Ghostscript with appropriate options ("gs -q -sDEVICE=pdfwrite -dPDFSETTINGS=/printer -dBATCH -dNOPAUSE -sOutputFile=[outfile] [inputfile]" seems to work well besides the additional metadata), but it was unfortunately unsuitable for my purposes at this time.

## Chapter Files

Following up on the "whole book PDFs can be pretty big" issue, and after trying to open some such books and experiencing slow loading times, I decided that it may be appropriate to create one PDF per chapter as well as the whole-book PDF. My first pass at creating these PDFs was to use PDFtk to pull out just the pages from a given chapter. This posed a few problems: first, I had to figure out which pages belonged to which chapter. Luckily, the bookmarks inserted by Prince, combined with PDFtk's metadata output, gave me the starting page for each chapter (although for a few minor reasons, this link was a bit iffy: the generated bookmark title did not always match the section name I had in my database), and I could assume that a chapter ended just before the next one began.
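In spirit, that first pass looked roughly like the sketch below (this is a cleaned-up reconstruction rather than the script I actually ran, so treat the details as approximate): read the bookmark page numbers out of PDFtk's dump_data output, then cut each chapter's page range with pdftk cat.

```python
# Rough reconstruction of the first-pass approach: split a whole-book PDF into
# chapters using pdftk's bookmark metadata. Approximate, not the real script.
import subprocess

def bookmark_pages(pdf_path, level=1):
    """Return (title, page) pairs for bookmarks at the given level."""
    dump = subprocess.run(["pdftk", pdf_path, "dump_data"],
                          capture_output=True, text=True, check=True).stdout
    marks, title, lvl = [], None, None
    for line in dump.splitlines():
        if line.startswith("BookmarkTitle:"):
            title = line.split(":", 1)[1].strip()
        elif line.startswith("BookmarkLevel:"):
            lvl = int(line.split(":", 1)[1])
        elif line.startswith("BookmarkPageNumber:") and lvl == level:
            marks.append((title, int(line.split(":", 1)[1])))
    return marks

def split_chapters(pdf_path, out_prefix="chapter"):
    """Cut one PDF per top-level bookmark, assuming each chapter ends just
    before the next one begins."""
    marks = bookmark_pages(pdf_path)
    for i, (title, start) in enumerate(marks):
        end = marks[i + 1][1] - 1 if i + 1 < len(marks) else "end"
        subprocess.run(["pdftk", pdf_path, "cat", f"{start}-{end}",
                        "output", f"{out_prefix}-{i + 1:02d}.pdf"], check=True)
```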
Unfortunately, this ran into the same problem I had before: I would lose the page numbering and bookmarks. (Not to mention the fact that I would need to separately render a new first page to describe the licensing and get the Prince logo back on the first page.) Finally, I decided to simply depend on Prince once again. I got Prince to log the page number and ID for each chapter heading by using a ::before pseudo-class with a content property of "prince-script(log, counter(page), attr(id))", and a small "log" function in the JavaScript on each page. This allowed me to use the IDs to match up with my database, and easily identify where each chapter started. Because I already had whole-chapter HTML files, I could then use those HTML files to render the chapter in Prince, and everything would still be in sync, without having to try to render and merge together separate front pages for each chapter. (I still needed to get the page numbers to Prince for rendering purposes, but for this, I simply placed the page number in a CSS block in the HTML file.) This solution appears to have worked surprisingly well, with the page numbers matching up where expected. Because the files were rendered separately, there is the possibility of some unforeseen issue (I certainly didn't inspect the thousands of files by hand), but it seems unlikely.

## Math

Finally, when reviewing one of the math textbooks in the collection, I noticed that Prince's MathML rendering wasn't particularly great. It is definitely better than nothing, but the rendering quality did leave something to be desired. Unfortunately, the most common web-based solution here, MathJax, doesn't work very well with Prince. (This is a noted todo item on Prince's release notes, but it's not available yet.) After stumbling through a number of other options to try, I ended up using PhantomJS together with MathJax to prerender the math to MathJax's "HTML-CSS" output (the SVG output didn't look very good and produced a very large PDF file after the required fixes to make Prince display the SVG output). I forced MathJax to use the STIX fonts (which I installed on my computer), and after the math was rendered, I output the document's HTML form again (after removing the MathJax wrapper divs). This produced files with reasonably good-looking math, the way they were intended to look. The prerendering code hasn't been published yet because I haven't taken the time to clean it up, but if someone is interested, I can definitely post it.

Prerendering with MathJax is a step that seems to have very poor asymptotic time complexity. I haven't formally benchmarked it, but a chapter's sections took about two minutes to prerender in total, while the whole chapter itself took roughly twelve minutes to prerender. The whole book took roughly four days to prerender. It's not clear why this occurred, but the prerendering did eventually succeed. It's also not clear if this is a bug in MathJax, or simply some inefficiency in PhantomJS, so I have yet to report it as a bug to either project (and may never report it – it's unlikely to come up in common use).

## Fin

So, to summarize, getting PDFs of a quality I'm comfortable with took quite a bit of effort. In the end, Prince does most of the work, and I rarely had problems with Prince itself. I think it was worth the effort, at least for a personal learning experience. Hopefully the books will be useful to other people as well. Once again, they're all available at http://2012books.lardbucket.org. Please feel free to copy or redistribute them as you see fit, pursuant to the terms of the associated Creative Commons by-nc-sa license.

Andy Schmitz

P.S. If you're interested in any of the print-specific (or Prince-specific) things I did to make the books look decent when printed, it's all left in the book's CSS file toward the bottom, under the "prince" @media type. Feel free to reuse any of that styling for any purpose you see fit, in any situation. I do not believe it is covered under the Creative Commons license: you may consider it to be public domain.
2017-06-27T07:07:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3568291664123535, "perplexity": 1287.8336305928917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321025.86/warc/CC-MAIN-20170627064714-20170627084714-00218.warc.gz"}
https://par.nsf.gov/biblio/10339006-investigating-charm-production-fragmentation-via-azimuthal-correlations-prompt-mesons-charged-particles-pp-collisions-mathbf-sqrt-tev
Investigating charm production and fragmentation via azimuthal correlations of prompt D mesons with charged particles in pp collisions at $$\mathbf {\sqrt{ s} = 13}$$ TeV Abstract Angular correlations of heavy-flavour and charged particles in high-energy proton–proton collisions are sensitive to the production mechanisms of heavy quarks and to their fragmentation as well as hadronisation processes. The measurement of the azimuthal-correlation function of prompt D mesons with charged particles in proton–proton collisions at a centre-of-mass energy of $$\sqrt{s} = 13$$ TeV with the ALICE detector is reported, considering $$\mathrm D^{0}$$, $$\mathrm D^{+}$$, and $$\mathrm D^{*+}$$ mesons in the transverse-momentum interval $$3< p_{\mathrm{T}} < 36$$ GeV/$$c$$ at midrapidity ($$|y| < 0.5$$), and charged particles with $$p_{\mathrm{T}} > 0.3$$ GeV/$$c$$ and pseudorapidity $$|\eta | < 0.8$$. This measurement has an improved precision and provides an extended transverse-momentum coverage compared to previous ALICE measurements at lower energies. The study is also performed as a function of the charged-particle multiplicity, showing no modifications of the correlation function with multiplicity within uncertainties. The properties and the transverse-momentum evolution of the near- and away-side correlation peaks are studied and compared … Authors: Award ID(s): Publication Date: NSF-PAR ID: 10339006 Journal Name: The European Physical Journal C Volume: 82 Issue: 4 ISSN: 1434-6052 1. Abstract The measurement of the azimuthal-correlation function of prompt D mesons with charged particles in pp collisions at $$\sqrt{s} =5.02\ \hbox {TeV}$$ and p–Pb collisions at $$\sqrt{s_{\mathrm{NN}}} = 5.02\ \hbox {TeV}$$ with the ALICE detector at the LHC is reported. The $$\mathrm{D}^{0}$$, $$\mathrm{D}^{+}$$, and $$\mathrm{D}^{*+}$$ mesons, together with their charge conjugates, were reconstructed at midrapidity in the transverse momentum interval $$3< p_\mathrm{T} < 24\ \hbox {GeV}/c$$ and correlated with charged particles … 2. Abstract This paper presents the measurements of $$\pi ^{\pm }$$, $$\mathrm {K}^{\pm }$$, $$\text {p}$$ and $$\overline{\mathrm{p}}$$ transverse momentum ($$p_{\text {T}}$$) spectra as a function of charged-particle multiplicity density in proton–proton (pp) collisions at $$\sqrt{s}\ =\ 13\ \text {TeV}$$ with the ALICE detector at the LHC. Such study allows us to isolate the center-of-mass energy dependence of light-flavour particle production. The measurements reported here cover a $$p_{\text {T}}$$ range from 0.1 to 20 $$\text {GeV}/c$$ and are … 3. Abstract The production of prompt D0, D+, and D*+ mesons was measured at midrapidity (|y| < 0.5) in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair $$\sqrt{s_{\mathrm{NN}}}$$ = 5.02 TeV with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decay channels and their production yields were measured in central (0–10%) and semicentral (30–50%) collisions. The measurement was performed up to a transverse momentum (p_T) of 36 or 50 GeV/c depending on the D meson species and … 4.
Abstract Measurements of event-by-event fluctuations of charged-particle multiplicities in Pb–Pb collisions at $$\sqrt{s_{\mathrm {NN}}} = 2.76$$ TeV using the ALICE detector at the CERN Large Hadron Collider (LHC) are presented in the pseudorapidity range $$|\eta |<0.8$$ and transverse momentum $$0.2< p_{\mathrm{T}} < 2.0$$ GeV/c. The amplitude of the fluctuations is expressed in terms of the variance normalized by the mean of the multiplicity distribution. The $$\eta$$ and $$p_{\mathrm{T}}$$ dependences of the fluctuations and their evolution with respect to collision centrality … 5. Abstract The production of $$\pi ^{\pm }$$, $$\mathrm{K}^{\pm }$$, $$\mathrm{K}^{0}_{S}$$, $$\mathrm{K}^{*}(892)^{0}$$, $$\mathrm{p}$$, $$\phi (1020)$$, $$\Lambda$$, $$\Xi ^{-}$$, $$\Omega ^{-}$$, and their antiparticles was measured in inelastic proton–proton (pp) collisions at a center-of-mass energy of $$\sqrt{s}$$ = 13 TeV at midrapidity ($$|y|<0.5$$) as a function of transverse momentum ($$p_{\mathrm{T}}$$) using the ALICE detector at the CERN …
2022-09-26T14:08:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9113189578056335, "perplexity": 1538.936972315741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00796.warc.gz"}
http://dlmf.nist.gov/35.8
§35.8(i) Definition

Let $p$ and $q$ be nonnegative integers; $a_{1},\dots,a_{p}\in\mathbb{C}$; $b_{1},\dots,b_{q}\in\mathbb{C}$; $-b_{j}+\tfrac{1}{2}(k+1)\notin\mathbb{N}$, $1\leq j\leq q$, $1\leq k\leq m$. The generalized hypergeometric function ${}_{p}F_{q}$ with matrix argument $\mathbf{T}\in\boldsymbol{\mathcal{S}}$, numerator parameters $a_{1},\dots,a_{p}$, and denominator parameters $b_{1},\dots,b_{q}$ is

35.8.1 ${}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};\mathbf{T}\right)=\sum_{k=0}^{\infty}\frac{1}{k!}\sum_{|\kappa|=k}\frac{\left[a_{1}\right]_{\kappa}\cdots\left[a_{p}\right]_{\kappa}}{\left[b_{1}\right]_{\kappa}\cdots\left[b_{q}\right]_{\kappa}}\,Z_{\kappa}\!\left(\mathbf{T}\right).$

Convergence Properties

If $-a_{j}+\tfrac{1}{2}(k+1)\in\mathbb{N}$ for some $j,k$ satisfying $1\leq j\leq p$, $1\leq k\leq m$, then the series expansion (35.8.1) terminates. If $p\leq q$, then (35.8.1) converges for all $\mathbf{T}$. If $p=q+1$, then (35.8.1) converges absolutely for $||\mathbf{T}||<1$ and diverges for $||\mathbf{T}||>1$. If $p>q+1$, then (35.8.1) diverges unless it terminates.

§35.8(ii) Relations to Other Functions

35.8.2 ${}_{0}F_{0}\!\left({-\atop-};\mathbf{T}\right)=\operatorname{etr}\left(\mathbf{T}\right)$, $\mathbf{T}\in\boldsymbol{\mathcal{S}}$.

35.8.3 ${}_{2}F_{1}\!\left({a,b\atop b};\mathbf{T}\right)={}_{1}F_{0}\!\left({a\atop-};\mathbf{T}\right)=|\mathbf{I}-\mathbf{T}|^{-a}$, $\boldsymbol{0}<\mathbf{T}<\mathbf{I}$.

35.8.4 $A_{\nu}\left(\mathbf{T}\right)=\dfrac{1}{\Gamma_{m}\left(\nu+\frac{1}{2}(m+1)\right)}\,{}_{0}F_{1}\!\left({-\atop\nu+\frac{1}{2}(m+1)};-\mathbf{T}\right)$, $\mathbf{T}\in\boldsymbol{\mathcal{S}}$.

Kummer Transformation

Let $c=b_{1}+b_{2}-a_{1}-a_{2}-a_{3}$. Then

35.8.5 ${}_{3}F_{2}\!\left({a_{1},a_{2},a_{3}\atop b_{1},b_{2}};\mathbf{I}\right)=\frac{\Gamma_{m}\left(b_{2}\right)\Gamma_{m}\left(c\right)}{\Gamma_{m}\left(b_{2}-a_{3}\right)\Gamma_{m}\left(c+a_{3}\right)}\,{}_{3}F_{2}\!\left({b_{1}-a_{1},b_{1}-a_{2},a_{3}\atop b_{1},c+a_{3}};\mathbf{I}\right)$, $\Re(b_{2}),\Re(c)>\frac{1}{2}(m-1)$.

Pfaff–Saalschutz Formula

Let $a_{1}+a_{2}+a_{3}+\frac{1}{2}(m+1)=b_{1}+b_{2}$; one of the $a_{j}$ be a negative integer; $\Re(b_{1}-a_{1})$, $\Re(b_{1}-a_{2})$, $\Re(b_{1}-a_{3})$, $\Re(b_{1}-a_{1}-a_{2}-a_{3})>\frac{1}{2}(m-1)$. Then

35.8.6 ${}_{3}F_{2}\!\left({a_{1},a_{2},a_{3}\atop b_{1},b_{2}};\mathbf{I}\right)=\frac{\Gamma_{m}\left(b_{1}-a_{1}\right)\Gamma_{m}\left(b_{1}-a_{2}\right)}{\Gamma_{m}\left(b_{1}\right)\Gamma_{m}\left(b_{1}-a_{1}-a_{2}\right)}\,\frac{\Gamma_{m}\left(b_{1}-a_{3}\right)\Gamma_{m}\left(b_{1}-a_{1}-a_{2}-a_{3}\right)}{\Gamma_{m}\left(b_{1}-a_{1}-a_{3}\right)\Gamma_{m}\left(b_{1}-a_{2}-a_{3}\right)}.$

Thomae Transformation

Again, let $c=b_{1}+b_{2}-a_{1}-a_{2}-a_{3}$.
Then

35.8.7 ${}_{3}F_{2}\!\left({a_{1},a_{2},a_{3}\atop b_{1},b_{2}};\mathbf{I}\right)=\frac{\Gamma_{m}\left(b_{1}\right)\Gamma_{m}\left(b_{2}\right)\Gamma\left(c\right)}{\Gamma_{m}\left(a_{1}\right)\Gamma_{m}\left(c+a_{2}\right)\Gamma\left(c+a_{3}\right)}\,{}_{3}F_{2}\!\left({b_{1}-a_{1},b_{2}-a_{2},c\atop c+a_{2},c+a_{3}};\mathbf{I}\right)$, $\Re(b_{1})$, $\Re(b_{2})$, $\Re(c)>\frac{1}{2}(m-1)$.

Value at $\mathbf{T}=\boldsymbol{0}$

35.8.8 ${}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};\boldsymbol{0}\right)=1.$

Confluence

35.8.9 $\lim_{\gamma\to\infty}{}_{p+1}F_{q}\!\left({a_{1},\dots,a_{p},\gamma\atop b_{1},\dots,b_{q}};\gamma^{-1}\mathbf{T}\right)={}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};\mathbf{T}\right),$

35.8.10 $\lim_{\gamma\to\infty}{}_{p}F_{q+1}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q},\gamma};\gamma\mathbf{T}\right)={}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};\mathbf{T}\right).$

Invariance

35.8.11 ${}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};\mathbf{H}\mathbf{T}\mathbf{H}^{-1}\right)={}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};\mathbf{T}\right)$, $\mathbf{H}\in\mathbf{O}(m)$.

Laplace Transform

35.8.12 $\int_{\boldsymbol{\Omega}}\operatorname{etr}\left(-\mathbf{T}\mathbf{X}\right)|\mathbf{X}|^{\gamma-\frac{1}{2}(m+1)}\,{}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};-\mathbf{X}\right)d\mathbf{X}=\Gamma_{m}\left(\gamma\right)|\mathbf{T}|^{-\gamma}\,{}_{p+1}F_{q}\!\left({a_{1},\dots,a_{p},\gamma\atop b_{1},\dots,b_{q}};-\mathbf{T}^{-1}\right)$, $\Re(\gamma)>\frac{1}{2}(m-1)$.

Euler Integral

35.8.13 $\int_{\boldsymbol{0}<\mathbf{X}<\mathbf{I}}|\mathbf{X}|^{a_{1}-\frac{1}{2}(m+1)}\,{|\mathbf{I}-\mathbf{X}|}^{b_{1}-a_{1}-\frac{1}{2}(m+1)}\,{}_{p}F_{q}\!\left({a_{2},\dots,a_{p+1}\atop b_{2},\dots,b_{q+1}};\mathbf{T}\mathbf{X}\right)d\mathbf{X}=\frac{1}{\mathrm{B}_{m}\left(b_{1}-a_{1},a_{1}\right)}\,{}_{p+1}F_{q+1}\!\left({a_{1},\dots,a_{p+1}\atop b_{1},\dots,b_{q+1}};\mathbf{T}\right)$, $\Re(b_{1}-a_{1}),\Re(a_{1})>\frac{1}{2}(m-1)$.

§35.8(v) Mellin–Barnes Integrals

Multidimensional Mellin–Barnes integrals are established in Ding et al. (1996) for the functions ${}_{p}F_{q}$ and ${}_{p+1}F_{p}$ of matrix argument. A similar result for the ${}_{0}F_{1}$ function of matrix argument is given in Faraut and Korányi (1994, p. 346). These multidimensional integrals reduce to the classical Mellin–Barnes integrals (§5.19(ii)) in the special case $m=1$.
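For orientation (an added illustration, not part of the DLMF text): in the special case $m=1$ the only partition of $k$ with at most one part is $\kappa=(k)$, for which $Z_{(k)}(t)=t^{k}$ and $[a]_{(k)}=(a)_{k}$, the Pochhammer symbol, so (35.8.1) reduces to the classical generalized hypergeometric series

${}_{p}F_{q}\!\left({a_{1},\dots,a_{p}\atop b_{1},\dots,b_{q}};t\right)=\sum_{k=0}^{\infty}\frac{(a_{1})_{k}\cdots(a_{p})_{k}}{(b_{1})_{k}\cdots(b_{q})_{k}}\,\frac{t^{k}}{k!},\qquad (a)_{k}=a(a+1)\cdots(a+k-1),$

consistent with the remark under §35.8(v) that the case $m=1$ recovers the classical functions.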
2014-10-21T16:49:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 149, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987480044364929, "perplexity": 1525.4082865344385}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444582.16/warc/CC-MAIN-20141017005724-00153-ip-10-16-133-185.ec2.internal.warc.gz"}
https://math.libretexts.org/Bookshelves/Precalculus/Book%3A_Precalculus_-_An_Investigation_of_Functions_(Lippman_and_Rasmussen)/9%3A_Conics/9.1%3A_Ellipses
# 9.1: Ellipses

The National Statuary Hall in Washington, D.C. is an oval-shaped room called a whispering chamber because the shape makes it possible for sound to reflect from the walls in a special way. Two people standing in specific places are able to hear each other whispering even though they are far apart. To determine where they should stand, we will need to better understand ellipses.

Figure 1: Image used with permission (CC BY; Gary Palmer)

An ellipse is a type of conic section, a shape resulting from intersecting a plane with a cone and looking at the curve where they intersect. They were discovered by the Greek mathematician Menaechmus over two millennia ago. The figure below shows two types of conic sections. When a plane is perpendicular to the axis of the cone, the shape of the intersection is a circle. A slightly tilted plane creates an oval-shaped conic section called an ellipse.

Figure 2: Pbroks13 (https://commons.wikimedia.org/wiki/F...with_plane.svg), “Conic sections with plane”, cropped to show only ellipse and circle by L Michaels, CC BY 3.0

An ellipse can be drawn by placing two thumbtacks in a piece of cardboard then cutting a piece of string longer than the distance between the thumbtacks. Tack each end of the string to the cardboard, and trace a curve with a pencil held taut against the string. An ellipse is the set of all points where the sum of the distances from two fixed points is constant. The length of the string is the constant, and the two thumbtacks are the fixed points, called foci.

Figure 3:

Definitions: Ellipse Definition and Vocabulary

An ellipse is the set of all points $$Q\left( {x,y} \right)$$ for which the sum of the distance to two fixed points $$F_1 \left( x_1,y_1 \right)$$ and $$F_2 \left( x_2,y_2 \right)$$, called the foci (plural of focus), is a constant k: $$d\left( {Q,{F_1}} \right) + d\left( {Q,{F_2}} \right) = k$$.

• The major axis is the line passing through the foci.
• Vertices are the points on the ellipse which intersect the major axis.
• The major axis length is the length of the line segment between the vertices.
• The center is the midpoint between the vertices (or the midpoint between the foci).
• The minor axis is the line perpendicular to the major axis passing through the center.
• Minor axis endpoints are the points on the ellipse which intersect the minor axis.
• The minor axis endpoints are also sometimes called co-vertices.
• The minor axis length is the length of the line segment between minor axis endpoints.

Note that which axis is major and which is minor will depend on the orientation of the ellipse. In the ellipse shown at right, the foci lie on the y axis, so that is the major axis, and the x axis is the minor axis. Because of this, the vertices are the endpoints of the ellipse on the y axis, and the minor axis endpoints (co-vertices) are the endpoints on the x axis.

## Ellipses Centered at the Origin

From the definition above we can find an equation for an ellipse. We will find it for an ellipse centered at the origin $$C\left( {0,0} \right)$$ with foci at $${F_1}\left( {c,0} \right)$$ and $${F_2}\left( { - c,0} \right)$$ where c > 0. Suppose $$Q\left( {x,y} \right)$$ is some point on the ellipse.
The distance from F1 to Q is

$d\left( Q,{F_1} \right) = \sqrt {{{\left( {x - c} \right)}^2} + {{\left( {y - 0} \right)}^2}} = \sqrt {{{\left( {x - c} \right)}^2} + {y^2}}$

Likewise, the distance from F2 to Q is

$d\left( {Q,{F_2}} \right) = \sqrt {{{\left( {x - \left( { - c} \right)} \right)}^2} + {{\left( {y - 0} \right)}^2}} = \sqrt {{{\left( {x + c} \right)}^2} + {y^2}}$

From the definition of the ellipse, the sum of these distances should be constant:

$d\left( {Q,{F_1}} \right) + d\left( {Q,{F_2}} \right) = k$

so that

$\sqrt {{{\left( {x - c} \right)}^2} + {y^2}} + \sqrt {{{\left( {x + c} \right)}^2} + {y^2}} = k$

If we label one of the vertices $$\left( {a,0} \right)$$, it should satisfy the equation above since it is a point on the ellipse. This allows us to write k in terms of a.

$\sqrt {{{\left( {a - c} \right)}^2} + {0^2}} + \sqrt {{{\left( {a + c} \right)}^2} + {0^2}} = k$

$\left| {a - c} \right| + \left| {a + c} \right| = k$

Since a > c, these will be positive

$(a - c) + (a + c) = k$

$2a = k$

Substituting that into our equation, we will now try to rewrite the equation in a friendlier form.

$$\sqrt {{{\left( {x - c} \right)}^2} + {y^2}} + \sqrt {{{\left( {x + c} \right)}^2} + {y^2}} = 2a$$

Move one radical: $$\sqrt {{{\left( {x - c} \right)}^2} + {y^2}} = 2a - \sqrt {{{\left( {x + c} \right)}^2} + {y^2}}$$

Square both sides: $${\left( {\sqrt {{{\left( {x - c} \right)}^2} + {y^2}} } \right)^2} = {\left( {2a - \sqrt {{{\left( {x + c} \right)}^2} + {y^2}} } \right)^2}$$

Expand: $${\left( {x - c} \right)^2} + {y^2} = 4{a^2} - 4a\sqrt {{{\left( {x + c} \right)}^2} + {y^2}} + {\left( {x + c} \right)^2} + {y^2}$$

Expand more: $${x^2} - 2xc + {c^2} + {y^2} = 4{a^2} - 4a\sqrt {{{\left( {x + c} \right)}^2} + {y^2}} + {x^2} + 2xc + {c^2} + {y^2}$$

Combining like terms and isolating the radical leaves: $$4a\sqrt {{{\left( {x + c} \right)}^2} + {y^2}} = 4{a^2} + 4xc$$

Divide by 4: $$a\sqrt {{{\left( {x + c} \right)}^2} + {y^2}} = {a^2} + xc$$

Square both sides: $${a^2}\left( {{{\left( {x + c} \right)}^2} + {y^2}} \right) = {a^4} + 2{a^2}xc + {x^2}{c^2}$$

Expand: $${a^2}\left( {{x^2} + 2xc + {c^2} + {y^2}} \right) = {a^4} + 2{a^2}xc + {x^2}{c^2}$$

Distribute: $${a^2}{x^2} + 2{a^2}xc + {a^2}{c^2} + {a^2}{y^2} = {a^4} + 2{a^2}xc + {x^2}{c^2}$$

Combine like terms: $${a^2}{x^2} - {x^2}{c^2} + {a^2}{y^2} = {a^4} - {a^2}{c^2}$$

Factor common terms: $$\left( {{a^2} - {c^2}} \right){x^2} + {a^2}{y^2} = {a^2}\left( {{a^2} - {c^2}} \right)$$

Let $${b^2} = {a^2} - {c^2}$$. Since a > c, we know b > 0.
Substituting $${b^2}$$ for $${a^2} - {c^2}$$ leaves

$${b^2}{x^2} + {a^2}{y^2} = {a^2}{b^2}$$

Divide both sides by $${a^2}{b^2}$$:

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$

This is the standard equation for an ellipse. We typically swap a and b when the major axis of the ellipse is vertical.

Definition: Equation of an Ellipse Centered at the Origin in Standard Form

The standard form of an equation of an ellipse centered at the origin $$C\left( {0,0} \right)$$ depends on whether the major axis is horizontal or vertical. The table below gives the standard equation, vertices, minor axis endpoints, foci, and graph for each.

| Major Axis | Horizontal | Vertical |
| --- | --- | --- |
| Standard Equation | $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ | $$\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$$ |
| Vertices | (−a, 0) and (a, 0) | (0, −a) and (0, a) |
| Minor Axis Endpoints | (0, −b) and (0, b) | (−b, 0) and (b, 0) |
| Foci | (−c, 0) and (c, 0) where $${b^2} = {a^2} - {c^2}$$ | (0, −c) and (0, c) where $${b^2} = {a^2} - {c^2}$$ |
| Graph | (figure) | (figure) |

Example $$\PageIndex{1}$$

Put the equation of the ellipse $$9{x^2} + {y^2} = 9$$ in standard form. Find the vertices, minor axis endpoints, length of the major axis, and length of the minor axis. Sketch the graph, then check using a graphing utility.

Solution

The standard equation has a 1 on the right side, so this equation can be put in standard form by dividing by 9:

$$\frac{x^2}{1} + \frac{y^2}{9} = 1$$

Since the y-denominator is greater than the x-denominator, the ellipse has a vertical major axis.
Comparing to the general standard form equation

$$\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$$

we see the value of $$a = \sqrt 9 = 3$$ and the value of $$b = \sqrt 1 = 1$$.

• The vertices lie on the y-axis at (0, ±a) = (0, ±3).
• The minor axis endpoints lie on the x-axis at (±b, 0) = (±1, 0).
• The length of the major axis is $$2\left( a \right) = 2\left( 3 \right) = 6$$.
• The length of the minor axis is $$2\left( b \right) = 2\left( 1 \right) = 2$$.

To sketch the graph we plot the vertices and the minor axis endpoints. Then we sketch the ellipse, rounding at the vertices and the minor axis endpoints.

To check on a graphing utility, we must solve the equation for y. Isolating $${y^2}$$ gives us $${y^2} = 9\left( {1 - {x^2}} \right)$$. Taking the square root of both sides we get $$y = \pm 3\sqrt {1 - {x^2}}$$.

Under Y= on your graphing utility enter the two halves of the ellipse as $$y = 3\sqrt {1 - {x^2}}$$ and $$y = - 3\sqrt {1 - {x^2}}$$. Set the window to a comparable scale to the sketch with xmin = -5, xmax = 5, ymin = -5, and ymax = 5. Here’s an example output on a TI-84 calculator:

Sometimes we are given the equation. Sometimes we need to find the equation from a graph or other information.

Example $$\PageIndex{2}$$

Find the standard form of the equation for an ellipse centered at (0,0) with horizontal major axis length 28 and minor axis length 16.

Solution

Since the center is at (0,0) and the major axis is horizontal, the ellipse equation has the standard form $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$. The major axis has length $$2a = 28$$ or a = 14. The minor axis has length $$2b = 16$$ or b = 8. Substituting gives $$\frac{x^2}{{14}^2} + \frac{y^2}{8^2} = 1$$ or $$\frac{x^2}{196} + \frac{y^2}{64} = 1$$.

Exercise $$\PageIndex{1}$$

Find the standard form of the equation for an ellipse with horizontal major axis length 20 and minor axis length 6.

Example $$\PageIndex{3}$$

Find the standard form of the equation for the ellipse graphed here.

Solution

The center is at (0,0) and the major axis is vertical, so the standard form of the equation will be $$\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$$. From the graph we can see the vertices are (0,4) and (0,-4), giving a = 4. The minor-axis endpoints are (2,0) and (-2,0), giving b = 2.
The equation will be $$\frac{x^2}{2^2} + \frac{y^2}{4^2} = 1$$ or $$\frac{x^2}{4} + \frac{y^2}{16} = 1$$.

## Ellipses Not Centered at the Origin

Not all ellipses are centered at the origin. The graph of such an ellipse is a shift of the graph centered at the origin, so the standard equation for one centered at (h, k) is slightly different. We can shift the graph right h units and up k units by replacing x with x – h and y with y – k, similar to what we did when we learned transformations.

Definition: Equation of an Ellipse Centered at (h, k) in Standard Form

The standard form of an equation of an ellipse centered at the point C$$\left( {h,k} \right)$$ depends on whether the major axis is horizontal or vertical. The table below gives the standard equation, vertices, minor axis endpoints, foci, and graph for each.
| Major Axis | Horizontal | Vertical |
| --- | --- | --- |
| Standard Equation | $$\frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1$$ | $$\frac{(x-h)^2}{b^2} + \frac{(y-k)^2}{a^2} = 1$$ |
| Vertices | (h ± a, k) | (h, k ± a) |
| Minor Axis Endpoints | (h, k ± b) | (h ± b, k) |
| Foci | (h ± c, k) where $${b^2} = {a^2} - {c^2}$$ | (h, k ± c) where $${b^2} = {a^2} - {c^2}$$ |
| Graph | (figure) | (figure) |

Example 4

Put the equation of the ellipse $${x^2} + 2x + 4{y^2} - 24y = - 33$$ in standard form. Find the vertices, minor axis endpoints, length of the major axis, and length of the minor axis. Sketch the graph.

To rewrite this in standard form, we will need to complete the square, twice.

Looking at the x terms, $${x^2} + 2x$$, we like to have something of the form $${(x + n)^2}$$. Notice that if we were to expand this, we’d get $${x^2} + 2nx + {n^2}$$, so in order for the coefficient on x to match, we’ll need $${(x + 1)^2} = {x^2} + 2x + 1$$. However, we don’t have a +1 on the left side of the equation to allow this factoring. To accommodate this, we will add 1 to both sides of the equation, which then allows us to factor the left side as a perfect square:

$${x^2} + 2x + 1 + 4{y^2} - 24y = - 33 + 1$$

$${(x + 1)^2} + 4{y^2} - 24y = - 32$$

Repeating the same approach with the y terms, first we’ll factor out the 4.

$$4{y^2} - 24y = 4({y^2} - 6y)$$

Now we want to be able to write $$4{\left( {y + n} \right)^2}$$. For the coefficient of y to match, n will have to be -3, giving $$4{(y - 3)^2} = 4\left( {{y^2} - 6y + 9} \right) = 4{y^2} - 24y + 36$$.
To allow this factoring, we can add 36 to both sides of the equation.

$${(x + 1)^2} + 4{y^2} - 24y + 36 = - 32 + 36$$

$${(x + 1)^2} + 4\left( {{y^2} - 6y + 9} \right) = 4$$

$${(x + 1)^2} + 4{\left( {y - 3} \right)^2} = 4$$

Dividing by 4 gives the standard form of the equation for the ellipse

$$\frac{(x + 1)^2}{4} + \frac{(y - 3)^2}{1} = 1$$

Since the x-denominator is greater than the y-denominator, the ellipse has a horizontal major axis. From the general standard equation

$$\frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1$$

we see the value of $$a = \sqrt 4 = 2$$ and the value of $$b = \sqrt 1 = 1$$.

The center is at (h, k) = (-1, 3). The vertices are at (h ± a, k) or (-3, 3) and (1, 3). The minor axis endpoints are at (h, k ± b) or (-1, 2) and (-1, 4). The length of the major axis is $$2\left( a \right) = 2\left( 2 \right) = 4$$. The length of the minor axis is $$2\left( b \right) = 2\left( 1 \right) = 2$$.

To sketch the graph we plot the vertices and the minor axis endpoints. Then we sketch the ellipse, rounding at the vertices and the minor axis endpoints.

Example 5

Find the standard form of the equation for an ellipse centered at (-2,1), a vertex at (-2,4) and passing through the point (0,1).

The center at (-2,1) and vertex at (-2,4) means the major axis is vertical since the x-values are the same.
The ellipse equation has the standard form $$\frac{(x-h)^2}{b^2} + \frac{(y-k)^2}{a^2} = 1$$.

The value of a = 4 - 1 = 3. Substituting a = 3, h = -2, and k = 1 gives $$\frac{(x+2)^2}{b^2} + \frac{(y-1)^2}{9} = 1$$.

Substituting for x and y using the point (0,1) gives $$\frac{(0+2)^2}{b^2} + \frac{(1-1)^2}{9} = 1$$. Solving for b gives b = 2.
The equation of the ellipse in standard form is $$\frac{(x + 2)^2}{2^2} + \frac{(y - 1)^2}{3^2} = 1$$ or $$\frac{(x + 2)^2}{4} + \frac{(y - 1)^2}{9} = 1$$.

Try it Now 2.

Find the center, vertices, minor axis endpoints, length of the major axis, and length of the minor axis for the ellipse $${\left( {x - 4} \right)^2} + \frac{(y + 2)^2}{4} = 1$$.

## Bridges with Semielliptical Arches

Arches have been used to build bridges for centuries, like in the Skerton Bridge in England which uses five semielliptical arches for support. Semielliptical arches can have engineering benefits such as allowing for longer spans between supports.

Example 6

A bridge over a river is supported by a single semielliptical arch. The river is 50 feet wide. At the center, the arch rises 20 feet above the river. The roadway is 4 feet above the center of the arch. What is the vertical distance between the roadway and the arch 15 feet from the center?

Put the center of the ellipse at (0,0) and make the span of the river the major axis.
Since the major axis is horizontal, the equation has the form $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$.

The value of $$a = \frac{1}{2}(50) = 25$$ and the value of b = 20, giving $$\frac{x^2}{25^2} + \frac{y^2}{20^2} = 1$$.

Substituting x = 15 gives $$\frac{15^2}{25^2} + \frac{y^2}{20^2} = 1$$. Solving for y, $$y = 20\sqrt {1 - \frac{225}{625}} = 16$$.

The roadway is 20 + 4 = 24 feet above the river. The vertical distance between the roadway and the arch 15 feet from the center is 24 − 16 = 8 feet.

Ellipse Foci

The location of the foci can play a key role in ellipse application problems. Standing on a focus in a whispering gallery allows you to hear someone whispering at the other focus.
To find the foci, we need to find the length from the center to the foci, c, using the equation $${b^2} = {a^2} - {c^2}$$. It looks similar to, but is not the same as, the Pythagorean Theorem.

Example 7

The National Statuary Hall whispering chamber is an elliptical room 46 feet wide and 96 feet long. To hear each other whispering, two people need to stand at the foci of the ellipse. Where should they stand?

We could represent the hall with a horizontal ellipse centered at the origin. The major axis length would be 96 feet, so $$a = \frac{1}{2}(96) = 48$$, and the minor axis length would be 46 feet, so $$b = \frac{1}{2}(46) = 23$$. To find the foci, we can use the equation $${b^2} = {a^2} - {c^2}$$.

$${23^2} = {48^2} - {c^2}$$

$${c^2} = {48^2} - {23^2}$$

$$c = \sqrt {1775} \approx \pm 42$$ ft.

To hear each other whisper, two people would need to stand 2(42) = 84 feet apart along the major axis, each about 48 – 42 = 6 feet from the wall.

Example 8

Find the foci of the ellipse $$\frac{(x - 2)^2}{4} + \frac{(y + 3)^2}{29} = 1$$.

The ellipse is vertical with an equation of the form $$\frac{(x-h)^2}{b^2} + \frac{(y-k)^2}{a^2} = 1$$.

The center is at (h, k) = (2, −3). The foci are at (h, k ± c).

To find length c we use $${b^2} = {a^2} - {c^2}$$. Substituting gives $$4 = 29 - {c^2}$$ or $$c = \sqrt {25} = 5$$.

The ellipse has foci (2, −3 ± 5), or (2, −8) and (2, 2).

Example 9

Find the standard form of the equation for an ellipse with foci (-1,4) and (3,4) and major axis length 10.
Since the foci differ in the x-coordinates, the ellipse is horizontal with an equation of the form $$\frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1$$.

The center is at the midpoint of the foci $$\left( {\frac{ - 1 + 3}{2},\frac{4 + 4}{2}} \right) = \left( {\frac{2}{2},\frac{8}{2}} \right) = \left( {1,\;4} \right)$$.

The value of a is half the major axis length: $$a = \frac{1}{2}(10) = 5$$. The value of c is half the distance between the foci: $$c = \frac{1}{2}(3 - ( - 1)) = \frac{1}{2}(4) = 2$$.

To find length b we use $${b^2} = {a^2} - {c^2}$$. Substituting a and c gives $${b^2} = {5^2} - {2^2} = 21$$.

The equation of the ellipse in standard form is $$\frac{(x - 1)^2}{5^2} + \frac{(y - 4)^2}{21} = 1$$ or $$\frac{(x - 1)^2}{25} + \frac{(y - 4)^2}{21} = 1$$.

Try it Now 3.
Find the standard form of the equation for an ellipse with focus (2,4), vertex (2,6), and center (2,1).

## Planetary Orbits

It was long thought that planetary orbits around the sun were circular. Around 1600, Johannes Kepler discovered they were actually elliptical. His first law of planetary motion says that planets travel around the sun in an elliptical orbit with the sun as one of the foci. The length of the major axis can be found by measuring the planet’s aphelion, its greatest distance from the sun, and perihelion, its shortest distance from the sun, and summing them together.

Example 10

Mercury’s aphelion is 35.98 million miles and its perihelion is 28.58 million miles. Write an equation for Mercury’s orbit.

Let the center of the ellipse be (0,0) and its major axis be horizontal so the equation will have form $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$.

The length of the major axis is $$2a = 35.98 + 28.58 = 64.56$$ giving $$a = 32.28$$ and $${a^2} = 1041.9984$$.

Since the perihelion is the distance from the focus to one vertex, we can find the distance between the foci by subtracting twice the perihelion from the major axis length: $$2c = 64.56 - 2\left( {28.58} \right) = 7.4$$ giving $$c = 3.7$$.

Substitution of a and c into $${b^2} = {a^2} - {c^2}$$ yields $${b^2} = {32.28^2} - {3.7^2} = 1028.3084$$.

The equation is $$\frac{x^2}{1041.9984} + \frac{y^2}{1028.3084} = 1$$.

## Important Topics of This Section

• Ellipse Definition
• Ellipse Equations in Standard Form
• Ellipse Foci
• Applications of Ellipses

Try it Now Answers

1. 2a = 20, so a = 10. 2b = 6, so b = 3. $$\frac{x^2}{100} + \frac{y^2}{9} = 1$$

2. Center (4, -2). Vertical ellipse with a = 2, b = 1.
Vertices at (4, -2 ± 2) = (4, 0) and (4, -4), minor axis endpoints at (4 ± 1, -2) = (3, -2) and (5, -2), major axis length 4, minor axis length 2.

3. Vertex, center, and focus have the same x-value, so it’s a vertical ellipse. Using the vertex and center, a = 6 – 1 = 5. Using the center and focus, c = 4 – 1 = 3. $${b^2} = {5^2} - {3^2}$$, so b = 4. $$\frac{(x - 2)^2}{16} + \frac{(y - 1)^2}{25} = 1$$

This chapter is part of Precalculus: An Investigation of Functions © Lippman & Rasmussen 2017. This material is licensed under a Creative Commons CC-BY-SA license. This chapter contains content remixed from work by Lara Michaels and work from OpenStax Precalculus (OpenStax.org), CC-BY 3.0.
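As a quick numeric companion to the formulas above, here is a small, illustrative Python sketch (not part of the original chapter) that computes the center, vertices, minor axis endpoints, and foci of an ellipse given in the standard form (x−h)²/A + (y−k)²/B = 1.

```python
# Illustrative helper (not from the textbook): given the standard form
# (x-h)^2/A + (y-k)^2/B = 1, report center, vertices, co-vertices, and foci.
# The larger denominator is a^2 (major axis), the smaller is b^2, and
# c^2 = a^2 - b^2 locates the foci along the major axis.
import math

def ellipse_features(h, k, A, B):
    a2, b2 = max(A, B), min(A, B)
    a, b, c = math.sqrt(a2), math.sqrt(b2), math.sqrt(a2 - b2)
    horizontal = A >= B  # major axis along x if the x-denominator is larger
    if horizontal:
        vertices = [(h - a, k), (h + a, k)]
        co_vertices = [(h, k - b), (h, k + b)]
        foci = [(h - c, k), (h + c, k)]
    else:
        vertices = [(h, k - a), (h, k + a)]
        co_vertices = [(h - b, k), (h + b, k)]
        foci = [(h, k - c), (h, k + c)]
    return {"center": (h, k), "vertices": vertices,
            "minor_axis_endpoints": co_vertices, "foci": foci}

# Example 8 above: (x-2)^2/4 + (y+3)^2/29 = 1 has foci (2, -8) and (2, 2).
print(ellipse_features(2, -3, 4, 29)["foci"])
```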
2019-06-17T03:47:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272517323493958, "perplexity": 458.9214017069593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998369.29/warc/CC-MAIN-20190617022938-20190617044938-00436.warc.gz"}
https://www.usgs.gov/media/images/geologists-volcano
# Geologists on Volcano

### Detailed Description

Two HVO geologists are standing on the east rim of Puʻu ʻŌʻō cone, triangulating the depth of several degassing vents inside the crater. An infrared camera is being used to see the vents through the fume. The plume in the background is coming from the east wall vent.

Public Domain.
2023-03-29T05:18:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18322138488292694, "perplexity": 9361.6541869493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00303.warc.gz"}
https://www.usgs.gov/node/129921
# The types of data needed for assessing the environmental and human health impacts of coal

December 31, 1998

Coal is one of the most important sources of energy. Its worldwide use will continue to expand during the next several decades, particularly in rapidly developing countries such as China and India. Unfortunately, coal use may bring with it environmental and human health costs. Many of the environmental and health problems attributed to coal combustion are due to mobilization of potentially toxic elements. Some of these problems could be minimized or even avoided if comprehensive databases containing appropriate coal quality information were available to decision makers so that informed decisions could be made regarding coal use. Among the coal quality parameters that should be included in these databases are: C, H, N, O, pyritic sulfur, organic sulfur, major, minor, and trace element concentrations, modes of occurrence of environmentally sensitive elements, cleanability, mineralogy, organic chemistry, petrography, and leachability.
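One hedged way to picture such a database is a per-sample record that groups the listed parameters. The field names and types below are illustrative only; the paper does not prescribe a schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class CoalQualityRecord:
    """One coal sample's entry in a coal-quality database (illustrative schema)."""
    sample_id: str
    ultimate_analysis: Dict[str, float]        # C, H, N, O contents (weight %)
    pyritic_sulfur: float                      # weight %
    organic_sulfur: float                      # weight %
    element_concentrations: Dict[str, float]   # major, minor, and trace elements
    modes_of_occurrence: Dict[str, str]        # for environmentally sensitive elements
    cleanability: Optional[float] = None
    mineralogy: List[str] = field(default_factory=list)
    organic_chemistry: Dict[str, float] = field(default_factory=dict)
    petrography: Dict[str, float] = field(default_factory=dict)
    leachability: Dict[str, float] = field(default_factory=dict)
```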
2022-01-23T09:53:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8153938055038452, "perplexity": 3963.258019888455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00639.warc.gz"}
https://tjyj.stats.gov.cn/CN/Y2014/V31/I7/3
• Paper •

### Impact Mechanism of Hidden Income on the Underestimation of China's Consumption Rate -- A Discussion Based on Principles and Practices of National Accounting

Gao Minxue

• Online: 2014-07-15 Published: 2014-07-14

Abstract: The fact that the ratio of household consumption to GDP (the consumption rate) is underestimated bears on the basic judgment about China's macroeconomic structure. Although the underestimation has been discussed from the viewpoint of gray costs, it has never been analyzed within the national accounting framework. Analyzing the backgrounds and forms of gray costs, in the broader sense of hidden income and hidden consumption, helps in studying how the underestimation of gray costs leads to the underestimation of consumption. Under the symmetric GDP accounting framework, this paper analyzes hidden income and its effect on hidden consumption based on Chinese accounting practices. The results show that although the Chinese consumption rate is underestimated because of hidden income, the underestimation of the consumption rate and the underestimation of GDP occur together. Therefore, it is not reasonable to re-estimate the consumption rate merely by adjusting the proportions of investment, government consumption, and household consumption, and it is even less reasonable to treat an overestimation of investment in the economic sense as an overestimation in the accounting sense.
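A small numeric illustration of the point about concurrent underestimation (the numbers are invented for the example, not taken from the paper): if consumption financed by hidden income is missing from both household consumption and GDP, adding it back raises the numerator and the denominator together, so the corrected consumption rate rises by much less than a numerator-only adjustment would suggest.

```python
# Illustrative numbers only -- not estimates from the paper.
official_consumption = 40.0   # household consumption in the official accounts
official_gdp = 100.0          # GDP in the official accounts
hidden_consumption = 10.0     # consumption financed by hidden income, missing from both

official_rate = official_consumption / official_gdp
# Asymmetric adjustment: add hidden consumption to the numerator only.
numerator_only_rate = (official_consumption + hidden_consumption) / official_gdp
# Symmetric adjustment: the hidden consumption is also unrecorded value added, so GDP rises too.
symmetric_rate = (official_consumption + hidden_consumption) / (official_gdp + hidden_consumption)

print(f"official: {official_rate:.3f}, numerator-only: {numerator_only_rate:.3f}, "
      f"symmetric: {symmetric_rate:.3f}")
# official: 0.400, numerator-only: 0.500, symmetric: 0.455
```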
2022-07-05T05:42:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39307135343551636, "perplexity": 2365.4675112209807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00168.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Aleitmann.george
# zbMATH — the first resource for mathematics ## Leitmann, George Compute Distance To: Author ID: leitmann.george Published as: Leitmann, G.; Leitmann, G. L.; Leitmann, Georg; Leitmann, George Homepage: https://www.me.berkeley.edu/people/faculty/george-leitmann External Links: Wikidata · dblp Documents Indexed: 210 Publications since 1959, including 15 Books Biographic References: 6 Publications all top 5 #### Co-Authors 62 single-authored 20 Corless, Martin J. 10 Reithmeier, Eduard 9 Blaquiere, Austin 9 Carlson, Dean A. 9 Lee, Chun-Shing 7 Stalford, Harold L. 6 Wan, Henry Y. jun. 6 Yu, Po-Lung 5 Kaitala, Veijo T. 5 Lambertini, Luca 5 Ryan, Eugene P. 5 Udwadia, Firdaus E. 4 Garofalo, Franco 4 Lee, Chee Sing 4 Pandey, Sandeep 4 Seube, Nicolas 3 Clemhout, Simone 3 Gutman, Shaul 3 Moitie, Rodéric 3 Schmitendorf, William E. 3 Skowronski, Jaislaw M. 3 Vincent, Thomas L. 2 Barmish, B. Ross 2 Breinl, W. 2 Chen, Ying-Hsiu 2 Dragone, Davide 2 Feichtinger, Gustav 2 Haurie, Alain B. 2 Kai, Xiong Zhong 2 Liu, Pan-Tai 2 Palestini, Arsen 2 Rocklin, Sol M. 2 Soldatos, Argiris G. 2 Stipanović, Dušan M. 2 Tomlin, Claire J. 1 Amemiya, Takashi 1 Avula, Xavier J. R. 1 Chen, Santiago Fei-Hung 1 Chouinard, Leo G. II 1 Cruck, Eva 1 Dauer, Jerald P. 1 Dockner, Engelbert J. 1 Flashner, Henryk 1 Getz, Wayne M. 1 Goh, Bean San 1 Goldsmith, Werner 1 Hildén, Mikael 1 Hofer, Eberhard P. 1 Hsu, Chia-Sheng 1 Jörgl, Hanns Peter 1 Kelly, James M. 1 Kryazhimskiĭ, Arkadiĭ Viktorovich 1 Litt, F.-X. 1 Liu, Haosen 1 Marzollo, Angelo 1 Mote, C. D. jun. 1 Novak, Andreas J. 1 Petty, Clinton M. 1 Pickl, Stefan Wolfgang 1 Rodellar, José 1 Rodin, Ervin Y. 1 Saunders, K. V. 1 Stadler, Werner 1 Steinberg, Alan N. 1 Tolwinski, Boleslaw 1 Torres, Delfim Fernando Marado 1 Troger, Hans 1 Wang, Gaizhen 1 Wang, Zhengyu 1 Weber, Hans Ingo 1 Wilson, David J. 1 Wrzaczek, Stefan all top 5 #### Serials 41 Journal of Optimization Theory and Applications 7 International Journal of Non-Linear Mechanics 6 Computers & Mathematics with Applications 6 Applied Mathematics and Computation 6 IEEE Transactions on Automatic Control 6 Journal of Dynamic Systems, Measurement and Control 5 Journal of Mathematical Analysis and Applications 4 International Journal of Control 4 Dynamics and Control 3 Journal of the Franklin Institute 3 Mathematical and Computer Modelling 3 Dynamics and Stability of Systems 2 Applied Mathematics and Optimization 2 Automatica 2 SIAM Journal on Control and Optimization 2 Optimal Control Applications & Methods 2 Zagadnienia Drgań Nieliniowych 2 Buletinul Institutului Politehnic din Iaşi, New Series 1 Acta Astronautica 1 AIAA Journal 1 International Journal of Systems Science 1 Journal of Applied Mechanics 1 Mathematical Biosciences 1 Econometrica 1 Journal of Economic Theory 1 Kibernetika 1 Kybernetika 1 Monatshefte für Mathematik 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Regelungstechnik 1 Large Scale Systems 1 Annals of Operations Research 1 Journal of Global Optimization 1 European Journal of Operational Research 1 Problems of Control and Information Theory 1 ZOR. Zeitschrift für Operations Research 1 Archive of Applied Mechanics 1 International Journal of Robust and Nonlinear Control 1 Set-Valued Analysis 1 Mathematical Modelling and Scientific Computing 1 Dynamics of Continuous, Discrete and Impulsive Systems 1 Vychislitel’nye Tekhnologii 1 Discrete Dynamics in Nature and Society 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. 
Mathematical Analysis 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series B. Applications & Algorithms 1 Nonlinear Dynamics and Systems Theory 1 Cubo Matemática Educacional 1 International Journal of Control, I. Series 1 Journal of the Aerospace Sciences 1 PMM, Journal of Applied Mathematics and Mechanics 1 Journal of Cybernetics 1 CISM International Centre for Mechanical Sciences. Courses and Lectures 1 Mathematics in Science and Engineering 1 Numerical Algebra, Control and Optimization 1 Nonlinear Analysis. Theory, Methods & Applications all top 5 #### Fields 105 Systems theory; control (93-XX) 76 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 51 Calculus of variations and optimal control; optimization (49-XX) 20 Ordinary differential equations (34-XX) 15 Biology and other natural sciences (92-XX) 11 Operations research, mathematical programming (90-XX) 9 General and overarching topics; collections (00-XX) 9 Mechanics of particles and systems (70-XX) 5 History and biography (01-XX) 5 Mechanics of deformable solids (74-XX) 4 Statistics (62-XX) 3 Dynamical systems and ergodic theory (37-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Numerical analysis (65-XX) 2 Fluid mechanics (76-XX) 1 Real functions (26-XX) 1 Computer science (68-XX) 1 Geophysics (86-XX) 1 Mathematics education (97-XX) #### Citations contained in zbMATH Open 165 Publications have been cited 2,193 times in 1,246 Documents Cited by Year Continuous state feedback guaranteeing uniform ultimate boundedness for uncertain dynamic systems. Zbl 0473.93056 Corless, Martin J.; Leitmann, George 1981 A new class of stabilizing controllers for uncertain dynamical systems. Zbl 0503.93049 Barmish, B. Ross; Corless, M.; Leitmann, G. 1983 On the efficacy of nonlinear control in uncertain linear systems. Zbl 0473.93055 Leitmann, G. 1981 On ultimate boundedness control of uncertain systems in the absence of matching assumptions. Zbl 0469.93043 Barmish, B. Ross; Leitmann, G. 1982 Guaranteed asymptotic stability for some linear systems with bounded uncertainties. Zbl 0416.93077 Leitmann, G. 1979 The calculus of variations and optimal control. An introduction. Zbl 0475.49003 Leitmann, George 1981 An introduction to optimal control. Zbl 0196.46302 Leitmann, George 1966 Robustness of uncertain systems in the absence of matching assumptions. Zbl 0623.93023 Chen, Y. H.; Leitmann, G. 1987 Optimal control of a prey-predator system. Zbl 0297.92013 Goh, Bean San; Leitmann, George; Vincent, Thomas L. 1974 Adaptive control of systems containing uncertain functions and unknown functions with uncertain bounds. Zbl 0497.93028 Corless, M.; Leitmann, G. 1983 Compromise solutions, domination structures, and Salukvadze’s solution. Zbl 0362.90111 Yu, P. L.; Leitmann, G. 1974 Cooperative equilibria in differential games. Zbl 0607.90097 Tolwinski, B.; Haurie, A.; Leitmann, G. 1986 Profit maximization through advertising: A nonzero sum differential game approach. Zbl 0383.90019 Leitmann, G.; Schmitendorf, W. E. 1978 On generalized Stackelberg strategies. Zbl 0372.90137 Leitmann, G. 1978 Cooperative and non-cooperative many players differential games. Course held at the Department of Automation and Information, July 1973, Udine 1974. Zbl 0358.90085 Leitmann, George 1974 Jeux quantitatifs. (Quantitative games). Zbl 0228.90061 Blaquière, A.; Leitmann, G. 1969 Bounded controllers for robust exponential convergence. Zbl 0791.93022 Corless, M.; Leitmann, G. 1993 Avoidance control. 
Zbl 0346.93025 Leitmann, G.; Skowronski, J. 1977 Qualitative differential games with two targets. Zbl 0497.90097 Getz, W. M.; Leitmann, G. 1979 Some sufficiency conditions for Pareto-optimal control. Zbl 0298.49005 Leitmann, G.; Schmitendorf, W. 1973 Optimization techniques. With applications to aerospace systems. (Mathematics in Science and Engineering. Vol. 5). Zbl 0133.05602 Leitmann, G. (ed.) 1962 Robust control design for interconnected systems with time-varying uncertainties. Zbl 0758.93021 Chen, Y. H.; Leitmann, G.; Kai, Xiong Zhong 1991 Continuous feedback guaranteeing uniform ultimate boundedness for uncertain linear delay systems: An application to river pollution control. Zbl 0673.93052 Lee, C. S.; Leitmann, G. 1988 Feedback control of uncertain systems: robustness with respect to neglected actuator and sensor dynamics. Zbl 0588.93056 Leitmann, G.; Ryan, E. P.; Steinberg, A. 1986 Nondominated decisions and cone convexity in dynamic multicriteria decision problems. Zbl 0273.90003 Yu, P. L.; Leitmann, G. 1974 Guaranteed ultimate boundedness for a class of uncertain linear dynamical systems. Zbl 0388.93060 Leitmann, G. 1978 A note on absolute extrema of certain integrals. Zbl 0148.10702 Leitmann, G. 1967 Sufficiency conditions for Nash equilibria in N-person differential games. Zbl 0325.90075 Stalford, H.; Leitmann, G. 1973 Sufficiency theorems for optimal control. Zbl 0175.39005 Leitmann, G. 1968 A sufficiency theorem for optimal control. Zbl 0208.17306 Leitmann, G.; Stalford, H. 1971 Control space properties of cooperative games. Zbl 0185.24002 Vincent, T. L.; Leitmann, G. 1970 A differential game related to terrorism: Nash and Stackelberg strategies. Zbl 1188.91037 Novak, A. J.; Feichtinger, G.; Leitmann, G. 2010 On a class of direct optimization problems. Zbl 0983.49002 Leitmann, G. 2001 Stabilizing control for linear systems with bounded parameter and input uncertainty. Zbl 0334.93030 Gutman, Shaul; Leitmann, G. 1976 On the global asymptotic stability of equilibrium solutions for open-loop differential games. Zbl 0548.90103 Haurie, A.; Leitmann, G. 1984 Guaranteed avoidance strategies. Zbl 0419.90096 Leitmann, G. 1980 Optimal strategies in the neighborhood of a collision course. Zbl 0366.90139 Gutman, Shaul; Leitmann, G. 1976 On optimal long-term management of some ecological systems subject to uncertain disturbances. Zbl 0511.90052 Lee, C. S.; Leitmann, G. 1983 Guaranteeing ultimate boundedness and exponential rate of convergence for a class of nominally linear uncertain systems. Zbl 0714.93013 Garofalo, F.; Leitmann, G. 1989 A method for designing a stabilizing control for a class of uncertain linear delay systems. Zbl 0800.93951 Amemiya, Takashi; Leitmann, George 1994 Aircraft control for flight in an uncertain environment: Takeoff in windshear. Zbl 0728.93023 Leitmann, G.; Pandey, S. 1991 Robust control of base-isolated structures under earthquake excitation. Zbl 0596.93033 Kelly, J. M.; Leitmann, G.; Soldatos, A. G. 1987 Adaptive control for uncertain dynamical systems. Zbl 0556.93042 Corless, M.; Leitmann, G. 1984 Sufficient conditions for optimality in two-person zero-sum differential games with state and strategy constraints. Zbl 0225.90055 Stalford, Harold L.; Leitmann, George 1971 A note on a sufficiency theorem for optimal control. Zbl 0164.39903 Leitmann, G. 1969 Some extensions to a direct optimization method. Zbl 0999.49017 Leitmann, G. 2001 Deterministic control of uncertain systems. Zbl 0667.93060 Corless, M.; Leitmann, G. 
1988 Practical stabilizability of uncertain dynamical systems: Application to robotic tracking. Zbl 0549.93044 Ryan, E. P.; Leitmann, G.; Corless, M. 1985 A note on avoidance control. Zbl 0528.49004 Leitmann, G.; Skowronski, J. 1983 A note on control space properties of cooperative games. Zbl 0235.90059 Leitmann, G.; Rocklin, S.; Vincent, T. L. 1972 A direct method of optimization and its application to a class of differential games. Zbl 1084.49031 Leitmann, George 2004 Coordinate transformations and derivation of open-loop Nash equilibria. Zbl 0989.91013 Dockner, E. J.; Leitmann, G. 2001 Adaptive control for avoidance or evasion in an uncertain environment. Zbl 0633.90108 Corless, M.; Leitmann, G.; Skowronski, J. M. 1987 Deterministic control of uncertain systems. Zbl 0444.70017 Leitmann, G. 1980 Cooperative and non-cooperative differential games. Zbl 0317.90069 Leitmann, G. 1975 On a class of linear differential games. Zbl 0296.90054 Gutman, Shaul; Leitmann, G. 1975 Errate corrige: A differential game model of labor-management negotiation during a strike. Zbl 0288.90101 Leitmann, G.; Liu, P. T. 1974 A differential game model of oligopoly. Zbl 0288.90102 Clemhout, S.; Leitmann, G.; Wan, H. Y. jun. 1973 A differential game model of duopoly. Zbl 0237.90076 Clemhout, S.; Leitmann, G.; Wan, H. Y. jun. 1971 Some remarks on Hamilton’s principle. Zbl 0127.39405 Leitmann, G. 1963 Contrasting two transformation-based methods for obtaining absolute extrema. Zbl 1147.49002 Torres, D. F. M.; Leitmann, G. 2008 Output feedback control of uncertain coupled systems. Zbl 0781.93035 Rodellar, J.; Leitmann, G.; Ryan, E. P. 1993 Tracking in the presence of bounded uncertainties. Zbl 0586.93017 Corless, M.; Leitmann, G.; Ryan, E. P. 1985 A simple derivation of necessary conditions for Pareto optimality. Zbl 0288.49016 Schmitendorf, W. E.; Leitmann, G. 1974 Topics in optimization. Zbl 0199.48602 Leitmann, George (ed.) 1967 On a class of variational problems in rocket flight. Zbl 0095.17701 Leitmann, G. 1959 A dynamical model of terrorism. Zbl 1211.91215 Udwadia, Firdaus; Leitmann, George; Lambertini, Luca 2006 Coordinate transformation method for the extremization of multiple integrals. Zbl 1183.49018 Carlson, D. A.; Leitmann, G. 2005 On a method of direct optimization. Zbl 1030.49032 Leitmann, G. 2002 Zustandsrueckfuehrung für dynamische Systeme mit Parameterunsicherheiten. Zbl 0504.93023 Breinl, W.; Leitmann, G. 1983 Sufficiency for optimal control with state and control constraints. Zbl 0269.49021 Leitmann, G. 1970 Fields of extremals and sufficient conditions for the simplest problem of the calculus of variations. Zbl 1194.49022 Carlson, Dean A.; Leitmann, George 2008 A direct method for open-loop dynamic games for affine control systems. Zbl 1182.91041 Carlson, Dean A.; Leitmann, George 2005 An extension of the coordinate transformation method for open-loop Nash equilibria. Zbl 1107.91029 Carlson, D. A.; Leitmann, G. 2004 Exponential convergence for uncertain systems with component-wise bounded controllers. Zbl 0857.93084 Corless, Martin; Leitmann, George 1996 A drug administration problem. Zbl 0754.92009 Lee, C. S.; Leitmann, G. 1991 Guaranteed asymptotic stability for a class of uncertain linear dynamical systems. Zbl 0377.93064 Leitmann, G. 1979 Multicriteria decision making and differential games. Zbl 0349.00026 Leitmann, George (ed.) 1976 Collective bargaining: A differential game. Zbl 0243.90067 Leitmann, G. 1973 Monotone approximations of minimum and maximum functions and multi-objective problems. 
Zbl 1257.49042 Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George 2012 Dynamical systems and control. Selected papers from the 11th international workshop on dynamics and control, Rio de Janeiro, Brazil, October 9–11, 2000. Zbl 1050.00009 Udwadia, Firdaus E. (ed.); Weber, H. I. (ed.); Leitmann, George (ed.) 2004 Robust vibration control of dynamical systems based on the derivative of the state. Zbl 1068.70544 Reithmeier, E.; Leitmann, G. 2003 Dynamics and control. Selected papers from the 8th workshop held in Sopron, Hungary, July 23–27, 1995. Zbl 0997.93504 Leitmann, G. (ed.); Udwadia, F. E. (ed.); Kryazhimskii, A. V. (ed.) 1999 On one aspect of science policy based on an uncertain model. Zbl 0941.93045 Lee, C. S.; Leitmann, G. 1999 One approach to the control of uncertain dynamical systems. Zbl 0825.93649 Leitmann, G. 1995 Lyapunov stability theory based control of uncertain dynamical systems. Zbl 0794.93080 Leitmann, G. 1993 Stabilizing uncertain systems with bounded control. Zbl 0727.93059 Soldatos, A. G.; Corless, M.; Leitmann, G. 1991 Deterministic control of uncertain systems via a constructive use of Lyapunov stability theory. Zbl 0708.93027 Leitmann, George 1990 State feedback for uncertain dynamical systems. Zbl 0626.93061 Breinl, W.; Leitmann, G. 1987 Evasion in the plane. Zbl 0375.90091 Leitmann, G.; Liu, H. S. 1978 Stabilization of dynamical systems under bounded input disturbance and parameter uncertainty. Zbl 0399.93035 Leitmann, G. 1977 Bargaining under strike: A differential game view. Zbl 0318.90071 Clemhout, S.; Leitmann, G.; Wan, H. Y. jun. 1975 Further geometric aspects of optimal processes: multiple-stage dynamic systems. Zbl 0214.40304 Blaquiére, A.; Leitmann, G. 1967 Hamiltonian potential functions for differential games. Zbl 1329.49069 Dragone, Davide; Lambertini, Luca; Leitmann, George; Palestini, Arsen 2015 The direct method for a class of infinite horizon dynamic games. Zbl 1142.91353 Carlson, Dean A.; Leitmann, George 2005 Aircraft take-off in windshear: A viability approach. Zbl 0965.93076 Seube, N.; Moitie, R.; Leitmann, G. 2000 Componentwise bounded controllers for robust exponential convergence. Zbl 0883.93046 Corless, M.; Leitmann, G. 1997 Stabilizing employment in a fluctuating resource economy. Zbl 0687.90025 Kaitala, V.; Leitmann, G. 1990 On a student-related optimal control problem. Zbl 0675.49012 Lee, C. S.; Leitmann, G. 1990 Guaranteed ultimate boundedness for a class of uncertain linear dynamical systems. Zbl 0403.93040 Leitmann, G. 1979 R&D for green technologies in a dynamic oligopoly: Schumpeter, Arrow and inverted-U’s. Zbl 1346.91179 Feichtinger, Gustav; Lambertini, Luca; Leitmann, George; Wrzaczek, Stefan 2016 Hamiltonian potential functions for differential games. Zbl 1329.49069 Dragone, Davide; Lambertini, Luca; Leitmann, George; Palestini, Arsen 2015 Multi-agent optimal control problems and variational inequality based reformulations. Zbl 1315.49001 Leitmann, George; Pickl, Stefan; Wang, Zhengyu 2014 Monotone approximations of minimum and maximum functions and multi-objective problems. Zbl 1257.49042 Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George 2012 A penalty method approach for open-loop variational games with equality constraints. Zbl 1296.91033 Carlson, Dean A.; Leitmann, George 2012 A differential game related to terrorism: Nash and Stackelberg strategies. Zbl 1188.91037 Novak, A. J.; Feichtinger, G.; Leitmann, G. 2010 A stochastic optimal control model of pollution abatement. 
Zbl 1211.91184 Dragone, D.; Lambertini, L.; Leitmann, G.; Palestini, A. 2010 Fields of extremals and sufficient conditions for the simplest problem of the calculus of variations in $$n$$-variables. Zbl 1182.49019 Carlson, Dean A.; Leitmann, George 2009 Contrasting two transformation-based methods for obtaining absolute extrema. Zbl 1147.49002 Torres, D. F. M.; Leitmann, G. 2008 Fields of extremals and sufficient conditions for the simplest problem of the calculus of variations. Zbl 1194.49022 Carlson, Dean A.; Leitmann, George 2008 A dynamical model of terrorism. Zbl 1211.91215 Udwadia, Firdaus; Leitmann, George; Lambertini, Luca 2006 Coordinate transformation method for the extremization of multiple integrals. Zbl 1183.49018 Carlson, D. A.; Leitmann, G. 2005 A direct method for open-loop dynamic games for affine control systems. Zbl 1182.91041 Carlson, Dean A.; Leitmann, George 2005 The direct method for a class of infinite horizon dynamic games. Zbl 1142.91353 Carlson, Dean A.; Leitmann, George 2005 A direct method of optimization and its application to a class of differential games. Zbl 1084.49031 Leitmann, George 2004 An extension of the coordinate transformation method for open-loop Nash equilibria. Zbl 1107.91029 Carlson, D. A.; Leitmann, G. 2004 Dynamical systems and control. Selected papers from the 11th international workshop on dynamics and control, Rio de Janeiro, Brazil, October 9–11, 2000. Zbl 1050.00009 Udwadia, Firdaus E. (ed.); Weber, H. I. (ed.); Leitmann, George (ed.) 2004 Robust vibration control of dynamical systems based on the derivative of the state. Zbl 1068.70544 Reithmeier, E.; Leitmann, G. 2003 A direct method of optimization and its application to a class of differential games. Zbl 1162.91316 Leitmann, George 2003 On a method of direct optimization. Zbl 1030.49032 Leitmann, G. 2002 Viability analysis of an aircraft flight domain for take-off in a windshear. Zbl 1032.76030 Seube, N.; Moitie, R.; Leitmann, G. 2002 On a class of direct optimization problems. Zbl 0983.49002 Leitmann, G. 2001 Some extensions to a direct optimization method. Zbl 0999.49017 Leitmann, G. 2001 Coordinate transformations and derivation of open-loop Nash equilibria. Zbl 0989.91013 Dockner, E. J.; Leitmann, G. 2001 Structural vibration control. Zbl 1169.74497 Reithmeier, E.; Leitmann, G. 2001 Aircraft take-off in windshear: A viability approach. Zbl 0965.93076 Seube, N.; Moitie, R.; Leitmann, G. 2000 Analysis and control of a communicable disease. Zbl 1158.93377 Corless, M.; Leitmann, G. 2000 Dynamics and control. Selected papers from the 8th workshop held in Sopron, Hungary, July 23–27, 1995. Zbl 0997.93504 Leitmann, G. (ed.); Udwadia, F. E. (ed.); Kryazhimskii, A. V. (ed.) 1999 On one aspect of science policy based on an uncertain model. Zbl 0941.93045 Lee, C. S.; Leitmann, G. 1999 A bounded harvest strategy for an ecological system in the presence of uncertain disturbances. Zbl 0959.92028 Lee, C. S.; Leitmann, G. 1999 The use of screening for the control of an endemic disease. Zbl 0929.92029 Leitmann, Georg 1998 Componentwise bounded controllers for robust exponential convergence. Zbl 0883.93046 Corless, M.; Leitmann, G. 1997 Destabilization via active stiffness. Zbl 0884.93048 Corless, Martin; Leitmann, George 1997 Exponential convergence for uncertain systems with component-wise bounded controllers. Zbl 0857.93084 Corless, Martin; Leitmann, George 1996 One approach to the control of uncertain dynamical systems. Zbl 0825.93649 Leitmann, G. 
1995 A control scheme based on ER-materials for vibration attenuation of dynamical systems. Zbl 0844.70020 Leitmann, G.; Reithmeier, E. 1995 Control strategies for an endemic disease in the presence of uncertainty. Zbl 0880.92032 Lee, C. S.; Leitmann, G. 1995 A method for designing a stabilizing control for a class of uncertain linear delay systems. Zbl 0800.93951 Amemiya, Takashi; Leitmann, George 1994 Stabilizing management and structural development of open-access fisheries. Zbl 0809.90027 Hildén, Mikael; Kaitala, Veijo; Leitmann, George 1994 A stabilizing harvesting strategy for an uncertain model of an ecological system. Zbl 0813.90021 Lee, C. S.; Leitmann, G. 1994 Robust exponential convergence with bounded controllers. Zbl 0828.93058 Corless, Martin; Leitmann, G. 1994 An ER-material based control scheme for vibration suppression of dynamical systems with uncertain excitation. Zbl 0925.93061 Leitmann, G.; Reithmeier, E. 1994 Bounded controllers for robust exponential convergence. Zbl 0791.93022 Corless, M.; Leitmann, G. 1993 Output feedback control of uncertain coupled systems. Zbl 0781.93035 Rodellar, J.; Leitmann, G.; Ryan, E. P. 1993 Lyapunov stability theory based control of uncertain dynamical systems. Zbl 0794.93080 Leitmann, G. 1993 Adaptive control of aircraft in windshear. Zbl 0800.93675 Leitmann, George; Pandey, Sandeep; Ryan, Eugene 1993 A discrete stabilizing study strategy for a student related problem under uncertainty. Zbl 0800.93761 Leitmann, G.; Lee, C. S. 1993 Reduced order feedback control for a two-compartment drug administration model in the presence of model parameter uncertainty. Zbl 0800.93439 Leitmann, G.; Lee, C. S. 1993 Robust control design for interconnected systems with time-varying uncertainties. Zbl 0758.93021 Chen, Y. H.; Leitmann, G.; Kai, Xiong Zhong 1991 Aircraft control for flight in an uncertain environment: Takeoff in windshear. Zbl 0728.93023 Leitmann, G.; Pandey, S. 1991 A drug administration problem. Zbl 0754.92009 Lee, C. S.; Leitmann, G. 1991 Stabilizing uncertain systems with bounded control. Zbl 0727.93059 Soldatos, A. G.; Corless, M.; Leitmann, G. 1991 Some stabilizing study strategies for a student-related problem under uncertainty. Zbl 0729.93058 Lee, C. S.; Leitmann, G. L. 1991 Tracking and force control for a class of robotic manipulators. Zbl 0752.93048 Reithmeier, E.; Leitmann, G. 1991 Robust aircraft take-off control: A comparison of aircraft performance under different windshear conditions. Zbl 0800.93913 Kaitala, Veijo; Leitmann, George; Pandey, Sandeep 1991 Stabilizing management of fluctuating resources. Zbl 0739.90010 Kaitala, V.; Leitmann, G. 1991 Deterministic control of uncertain systems via a constructive use of Lyapunov stability theory. Zbl 0708.93027 Leitmann, George 1990 Stabilizing employment in a fluctuating resource economy. Zbl 0687.90025 Kaitala, V.; Leitmann, G. 1990 On a student-related optimal control problem. Zbl 0675.49012 Lee, C. S.; Leitmann, G. 1990 Aircraft control under conditions of windshear. Zbl 0709.93557 1990 Guaranteeing ultimate boundedness and exponential rate of convergence for a class of nominally linear uncertain systems. Zbl 0714.93013 Garofalo, F.; Leitmann, G. 1989 Adaptive controllers for avoidance or evasion in an uncertain environment: Some examples. Zbl 0699.90106 Corless, M.; Leitmann, G. 1989 Stabilizing management of fishery resources in a fluctuating environment. Zbl 0678.90027 Kaitala, Veijo; Leitmann, George 1989 Deterministic control of uncertain systems. 
Zbl 0676.93056 Corless, M.; Leitmann, George 1989 Guaranteeing ultimate boundedness and exponential rate of convergence for a class of uncertain systems. Zbl 0727.93054 Corless, M.; Garofalo, F.; Leitmann, G. 1989 Controlling singularly perturbed uncertain dynamical systems. Zbl 0697.93046 Leitmann, G. 1989 Continuous feedback guaranteeing uniform ultimate boundedness for uncertain linear delay systems: An application to river pollution control. Zbl 0673.93052 Lee, C. S.; Leitmann, G. 1988 Deterministic control of uncertain systems. Zbl 0667.93060 Corless, M.; Leitmann, G. 1988 A composite controller ensuring ultimate boundedness for a class of singularly perturbed uncertain systems. Zbl 0662.93033 Garofalo, F.; Leitmann, G. 1988 Adaptive controllers for uncertain dynamical systems. Zbl 0652.93034 Corless, Martin; Leitmann, George 1988 Robustness of uncertain systems in the absence of matching assumptions. Zbl 0623.93023 Chen, Y. H.; Leitmann, G. 1987 Robust control of base-isolated structures under earthquake excitation. Zbl 0596.93033 Kelly, J. M.; Leitmann, G.; Soldatos, A. G. 1987 Adaptive control for avoidance or evasion in an uncertain environment. Zbl 0633.90108 Corless, M.; Leitmann, G.; Skowronski, J. M. 1987 State feedback for uncertain dynamical systems. Zbl 0626.93061 Breinl, W.; Leitmann, G. 1987 Cooperative equilibria in differential games. Zbl 0607.90097 Tolwinski, B.; Haurie, A.; Leitmann, G. 1986 Feedback control of uncertain systems: robustness with respect to neglected actuator and sensor dynamics. Zbl 0588.93056 Leitmann, G.; Ryan, E. P.; Steinberg, A. 1986 The calculus of variations and optimal control. An introduction. 3rd printing. Zbl 0696.49001 Leitmann, George 1986 Practical stabilizability of uncertain dynamical systems: Application to robotic tracking. Zbl 0549.93044 Ryan, E. P.; Leitmann, G.; Corless, M. 1985 Tracking in the presence of bounded uncertainties. Zbl 0586.93017 Corless, M.; Leitmann, G.; Ryan, E. P. 1985 Adaptive long-term management of some ecological systems subject to uncertain disturbances. Zbl 0621.92022 Corless, M.; Leitmann, G. 1985 Properties of matrices used in uncertain linear control systems. Zbl 0568.93015 Chouinard, L. G.; Dauer, J. P.; Leitmann, G. 1985 On the global asymptotic stability of equilibrium solutions for open-loop differential games. Zbl 0548.90103 Haurie, A.; Leitmann, G. 1984 Adaptive control for uncertain dynamical systems. Zbl 0556.93042 Corless, M.; Leitmann, G. 1984 A new class of stabilizing controllers for uncertain dynamical systems. Zbl 0503.93049 Barmish, B. Ross; Corless, M.; Leitmann, G. 1983 Adaptive control of systems containing uncertain functions and unknown functions with uncertain bounds. Zbl 0497.93028 Corless, M.; Leitmann, G. 1983 On optimal long-term management of some ecological systems subject to uncertain disturbances. Zbl 0511.90052 Lee, C. S.; Leitmann, G. 1983 A note on avoidance control. Zbl 0528.49004 Leitmann, G.; Skowronski, J. 1983 Zustandsrueckfuehrung für dynamische Systeme mit Parameterunsicherheiten. Zbl 0504.93023 Breinl, W.; Leitmann, G. 1983 On ultimate boundedness control of uncertain systems in the absence of matching assumptions. Zbl 0469.93043 Barmish, B. Ross; Leitmann, G. 1982 Continuous state feedback guaranteeing uniform ultimate boundedness for uncertain dynamic systems. Zbl 0473.93056 Corless, Martin J.; Leitmann, George 1981 On the efficacy of nonlinear control in uncertain linear systems. Zbl 0473.93055 Leitmann, G. 1981 The calculus of variations and optimal control. 
An introduction. Zbl 0475.49003 Leitmann, George 1981 Guaranteed avoidance strategies. Zbl 0419.90096 Leitmann, G. 1980 Deterministic control of uncertain systems. Zbl 0444.70017 Leitmann, G. 1980 Guaranteed avoidance feedback control. Zbl 0445.49013 Leitmann, G. 1980 Labour-management bargaining modelled as a dynamic game. Zbl 0437.90120 Chen, Santiago Fei-Hung; Leitmann, George 1980 Guaranteed asymptotic stability for some linear systems with bounded uncertainties. Zbl 0416.93077 Leitmann, G. 1979 Qualitative differential games with two targets. Zbl 0497.90097 Getz, W. M.; Leitmann, G. 1979 Guaranteed asymptotic stability for a class of uncertain linear dynamical systems. Zbl 0377.93064 Leitmann, G. 1979 Guaranteed ultimate boundedness for a class of uncertain linear dynamical systems. Zbl 0403.93040 Leitmann, G. 1979 ...and 65 more Documents all top 5 #### Cited by 1,533 Authors 77 Leitmann, George 17 Qu, Zhihua 16 Chen, YunHai 16 Stalford, Harold L. 15 Vincent, Thomas L. 14 Corless, Martin J. 13 Feichtinger, Gustav 13 Haurie, Alain B. 12 Lambertini, Luca 12 Schmitendorf, William E. 11 Shinar, Josef 11 Wu, Hansheng 11 Yeung, David Wing-Kay 11 Zaccour, Georges 10 Dawson, Darren M. 10 Dockner, Engelbert J. 10 Galperin, Efim A. 10 Goh, Bean San 10 Mahmoud, Magdi Sadik Mostafa 10 Torres, Delfim Fernando Marado 9 Jørgensen, Steffen 9 Kaitala, Veijo T. 9 Morgan, Jacqueline 9 Park, Juhyun (Jessie) 9 Ryan, Eugene P. 8 Barmish, B. Ross 8 Chen, Ye-Hwa 8 Viscolani, Bruno 8 Yu, Peilong 7 Assunção, Edvaldo 7 Glizer, Valery Y. 7 Hammami, Mohamed Ali 7 Lee, Chun-Shing 7 Martín-Herrán, Guiomar 7 Miele, Angelo 7 Petersen, Ian Richard 6 Amemiya, Takashi 6 Chen, Chung-Cheng 6 Dorsey, John F. 6 Fateh, Mohammad Mehdi 6 Hayek, Naïla 6 Lewis, Frank Leroy 6 Mizukami, Koichi 6 Novak, Andreas J. 6 Patsko, Valerii S. 6 Prasad, U. R. 6 Skowronski, Jaislaw M. 6 Stipanović, Dušan M. 6 Taras’ev, Aleksandr Mikhaĭlovich 6 Turetsky, Vladimir 6 Wu, Huaining 6 Yu, Po-Lung 5 Ahmed, Nasir Uddin 5 Cardim, Rodrigo 5 Carlson, Dean A. 5 Engwerda, Jacob Christiaan 5 Goodall, David P. 5 Hämäläinen, Raimo P. 5 Hsieh, Jer-Guang 5 Li, Peng 5 Malinowska, Agnieszka Barbara 5 Mallozzi, Lina 5 Reithmeier, Eduard 5 Rodellar, José 5 Shieh, Leang-San 5 Spurgeon, Sarah K. 5 Stadler, Werner 5 Teixeira, Marcelo C. M. 5 Watanabe, Chihiro 5 Xie, Lihua 5 Yavin, Yaakov 5 Żak, Stanislaw H. 4 Aboussoror, Abdelmalek 4 Averboukh, Yuriĭ Vladimirovich 4 Chao, Chi H. 4 Chen, Ying-Hsiu 4 Clemhout, Simone 4 Colbaugh, Richard 4 Duan, Zhisheng 4 Ehtamo, Harri 4 Fong, I-Kong 4 Fu, Li-Chen 4 Gearhart, William B. 4 Getz, Wayne M. 4 Grosset, Luca 4 Hartl, Richard F. 4 Jiang, Zhong-Ping 4 Khalil, Hassan K. 4 Kuo, Teson 4 Lee, Chee Sing 4 Li, Zhongkui 4 Mehlmann, Alexander 4 Phan Thanh Nam 4 Quincampoix, Marc 4 Saberi, Ali 4 Samanta, Guru Prasad 4 Shi, Peng 4 Spong, Mark W. 4 Stonier, Russel J. 
4 Summers, Danny ...and 1,433 more Authors all top 5 #### Cited in 177 Serials 265 Journal of Optimization Theory and Applications 116 Automatica 84 International Journal of Control 55 International Journal of Systems Science 39 Systems & Control Letters 38 Journal of Mathematical Analysis and Applications 38 Applied Mathematics and Computation 37 Journal of the Franklin Institute 30 Computers & Mathematics with Applications 29 European Journal of Operational Research 19 Dynamics and Control 16 Journal of Economic Dynamics & Control 15 International Journal of Robust and Nonlinear Control 15 Dynamic Games and Applications 14 Nonlinear Dynamics 12 Mathematical Biosciences 12 Optimal Control Applications & Methods 10 Operations Research Letters 10 Mathematical and Computer Modelling 9 Fuzzy Sets and Systems 9 Theoretical Population Biology 9 Mathematical Problems in Engineering 9 International Game Theory Review 8 Optimization 8 Journal of Robotic Systems 8 European Journal of Control 8 Journal of Vibration and Control 7 Journal of Economic Theory 7 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 7 Journal of Global Optimization 7 Journal of Dynamical and Control Systems 6 Annals of Operations Research 6 Asian Journal of Control 5 Acta Mechanica 5 Bulletin of Mathematical Biology 5 Applied Mathematics and Optimization 5 Journal of Mathematical Economics 5 Kybernetika 5 MCSS. Mathematics of Control, Signals, and Systems 5 International Journal of Adaptive Control and Signal Processing 5 Automation and Remote Control 5 International Journal of Control, I. Series 5 PMM, Journal of Applied Mathematics and Mechanics 5 International Journal of Systems Science. Principles and Applications of Systems and Integration 4 Journal of Applied Mathematics and Mechanics 4 Journal of Engineering Mathematics 4 Journal of Mathematical Biology 4 Chaos, Solitons and Fractals 4 Journal of Differential Equations 4 Journal of Soviet Mathematics 4 Dynamics and Stability of Systems 4 Journal of Intelligent & Robotic Systems 4 Applied Mathematical Modelling 4 Journal of Mathematical Sciences (New York) 4 Journal of Systems Science and Complexity 4 Nonlinear Analysis. Theory, Methods & Applications 3 Computer Methods in Applied Mechanics and Engineering 3 Mathematical Methods in the Applied Sciences 3 Reports on Mathematical Physics 3 Prikladnaya Matematika i Mekhanika 3 Information Sciences 3 Numerical Functional Analysis and Optimization 3 OR Spektrum 3 Circuits, Systems, and Signal Processing 3 Stochastic Analysis and Applications 3 Complexity 3 Games 3 Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp’yuternye Nauki 2 Applicable Analysis 2 ZAMP. Zeitschrift für angewandte Mathematik und Physik 2 Mathematics and Computers in Simulation 2 SIAM Journal on Control and Optimization 2 Theoretical Computer Science 2 Journal of Information & Optimization Sciences 2 Computational Mechanics 2 Applied Mathematics Letters 2 Journal of Elasticity 2 ZOR. 
Zeitschrift für Operations Research 2 Journal of Computer and Systems Sciences International 2 Economic Theory 2 Journal of Difference Equations and Applications 2 Abstract and Applied Analysis 2 International Journal of Applied Mathematics and Computer Science 2 Journal of Applied Mathematics and Computing 2 Journal of Intelligent and Fuzzy Systems 2 Mediterranean Journal of Mathematics 2 Proceedings of the Steklov Institute of Mathematics 2 Optimization Letters 2 Journal of Control Science and Engineering 2 Decision Analysis 2 Journal of Dynamics and Games 2 Mathematics 1 International Journal of Modern Physics B 1 Artificial Intelligence 1 International Journal of Mathematical Education in Science and Technology 1 Rocky Mountain Journal of Mathematics 1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 1 Czechoslovak Mathematical Journal 1 International Journal of Game Theory 1 International Journal for Numerical Methods in Engineering ...and 77 more Serials all top 5 #### Cited in 38 Fields 677 Systems theory; control (93-XX) 364 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 284 Calculus of variations and optimal control; optimization (49-XX) 165 Operations research, mathematical programming (90-XX) 104 Ordinary differential equations (34-XX) 87 Biology and other natural sciences (92-XX) 72 Mechanics of particles and systems (70-XX) 30 Numerical analysis (65-XX) 27 Mechanics of deformable solids (74-XX) 22 Linear and multilinear algebra; matrix theory (15-XX) 19 Dynamical systems and ergodic theory (37-XX) 19 Computer science (68-XX) 12 Probability theory and stochastic processes (60-XX) 10 Partial differential equations (35-XX) 7 Real functions (26-XX) 7 Statistics (62-XX) 7 Fluid mechanics (76-XX) 5 Difference and functional equations (39-XX) 5 Geophysics (86-XX) 4 History and biography (01-XX) 4 Information and communication theory, circuits (94-XX) 3 Functional analysis (46-XX) 3 Optics, electromagnetic theory (78-XX) 3 Quantum theory (81-XX) 2 General and overarching topics; collections (00-XX) 2 Mathematical logic and foundations (03-XX) 2 Combinatorics (05-XX) 2 Topological groups, Lie groups (22-XX) 2 Operator theory (47-XX) 2 General topology (54-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Relativity and gravitational theory (83-XX) 2 Mathematics education (97-XX) 1 Measure and integration (28-XX) 1 Integral equations (45-XX) 1 Convex and discrete geometry (52-XX) 1 Algebraic topology (55-XX) 1 Classical thermodynamics, heat transfer (80-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
2021-05-11T07:57:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5937039852142334, "perplexity": 9150.344533875688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991904.6/warc/CC-MAIN-20210511060441-20210511090441-00396.warc.gz"}
http://dlmf.nist.gov/LaTeXML/manual/cssclasses/
# Appendix K CSS Classes When the target format is in the HTML family (XHTML, HTML or HTML5), LaTeXML adds various classes to the generated html elements. This provides a trail back to the originating markup, and leverage to apply CSS styling to the results. Recall that the class attribute is a space-separated list of class names. This appendix describes the class names used. The basic strategy is the following: ltx_element with element being the LaTeXML element name that generated the html element. These elements reflect the original TeX/LaTeX markup, but are not identical. See Appendix I for details. ltx_font_font where font can indicate any of the font characteristics: family : serif, sansserif, typewriter, caligraphic, fraktur, script; series : bold, medium; shape : upright, italic, slanted, smallcaps; These sets are open-ended. ltx_align_alignment where alignment indicates the alignment of the contents within the element. horizontally : left, right, center, justify; vertically : top, bottom, baseline, middle. ltx_border_edges indicates single or double borders on an element with edges being: t, r, b, l, tt, rr, bb, ll; these are typically used for table cells. ltx_role_role reflects the distinct uses a particular LaTeXML elements serve which is indicated by the role attribute. Examples include creator, for ‘document creators’, where the role may be author, editor, translator or others. Thus, depending on your purposes and the expected markup, you might choose to write CSS rules for ltx_creator or ltx_role_author. Similarly, quote is stretched to accomodate translation or verse. ltx_title_section marks the titles of various sectional units. For example, a chapter’s title will have two classes: ltx_title and ltx_title_chapter. ltx_theorem_type marks various types of ‘theorem-like’ objects, where the type is whatever was used in \newtheorem. ltx_float_type marks various types of floating objects, such as might be defined using the float package using \newfloat. ltx_lst_role reflects the various roles of items within listings, such as those created using the listings package (whose containing element would have class ltx_lstlisting). Such classes include: ltx_lst_language_lang, ltx_lst_keywordclass, ltx_lxt_line, ltx_lst_linenum. ltx_bib_item indicates various items in bibliographys, typically generated via BibTeX; the items include key, number, type, author, editor, year, title, author-year, edition, series, part, journal, volume, number, status, pages, language, publisher, place, status, crossref, external, cited and others. ltx_toclist_type, ltx_tocentry_type reflects the levels of Table of Contents lists: they carry the ltx_toclist class, from the element used to represent them, and also ltx_toclist_section naming the sectional unit for which this list applies to assist in styling. A nested TOC for a chapter might thus have ul’s carrying ltx_toclist_chapter and ltx_toclist_section. Additionally, ltx_toc_compact and ltx_toc_verycompact can be added to style compact and very compact styles (eg single line). Note that the generated li items will have class ltx_tocentry and ltx_tocentry_type, for the type of the entry. ltx_ref_item hypertext links, whether within or across documents, whether created from \ref or \href, will get ltx_ref and, sometimes, extra classes applied. For example, a reference that ends up pointing to the current page is marked with ltx_ref_self. 
Cross-referencing material used to fill-in the contents of the reference is marked: a reference number gets ltx_ref_tag; a title ltx_ref_title. ltx_note_part reflects the separate parts of notes; Note that the kind of note is generally reflected in the role attribute, such as footnote, endnote, etc. The parts are separated to facilitate formatting, hover effects, etc: outer contains the whole; mark for the mark, if any; content the actual contents of the note. type is for an extra span indicating the type of note if it is unusual. ltx_page_item reflects page layout components created during the XSLT; items include: main, content, header, footer, navbar logo, columns, column1, column2. ltx_eqn_item reflects different parts related to equation formatting: pad reflects padding to align equations on the page; eqnarray and lefteqn arise from LaTeX’s eqnarray environment; gather and align arise from AMS environments; intertext arises from text injected between aligned equations. Any other explicit use of the addClass(class) function or of the \lxAddClass{class} macro from the latexml package will add the given class as is, without any additional ltx_ prefix. Two oddball items that may get refactored away are: ltx_phantom and ltx_centering. The latter seems slightly distinct from ltx_align_center.
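As a rough illustration of how these class names can be used downstream, here is a sketch that selects chapter titles and theorem-like environments from LaTeXML-style HTML with BeautifulSoup. The sample markup is hand-written to follow the conventions above rather than being actual LaTeXML output, and the bs4 dependency is an assumption of the example.

```python
from bs4 import BeautifulSoup

# Hand-written sample following the naming scheme described above;
# real LaTeXML output carries more classes and attributes.
html = """
<section class="ltx_chapter">
  <h1 class="ltx_title ltx_title_chapter">Introduction</h1>
  <div class="ltx_theorem ltx_theorem_lemma">
    <h6 class="ltx_title ltx_title_theorem">Lemma 1</h6>
    <p class="ltx_p"><span class="ltx_font_italic">Every</span> widget is a gadget.</p>
  </div>
</section>
"""

soup = BeautifulSoup(html, "html.parser")

# Chapter titles carry both the generic and the unit-specific title class.
for h in soup.select(".ltx_title_chapter"):
    print("chapter:", h.get_text(strip=True))

# Theorem-like environments are tagged ltx_theorem_<type>, here <type> = lemma.
for thm in soup.select(".ltx_theorem_lemma"):
    title = thm.select_one(".ltx_title")
    print("lemma:", title.get_text(strip=True) if title else "(untitled)")
```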
2018-01-19T01:51:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.566776692867279, "perplexity": 8239.274865018302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887692.13/warc/CC-MAIN-20180119010338-20180119030338-00280.warc.gz"}
https://www.usgs.gov/center-news/volcano-watch-chronology-a-volcanic-disaster
# Volcano Watch - Chronology of a volcanic disaster

The worst volcanic disaster of the 20th century occurred in 1902 on Martinique, an island in the French West Indies. The north end of the island is dominated by Mount Pelee, whose name—"bald" or "peeled" mountain—refers to the scarcity of vegetation at its summit when French colonists arrived in 1635. Its baldness was in marked contrast to the lush vegetation that characterized the rest of the island. Sugar cane thrived in the rich volcanic soil and became the foundation of the island's economy. Twenty large sugar mills and 113 rum distilleries were in operation by 1902.

The first eruption of Mount Pelee witnessed by colonists was phreatic (steam-driven) and occurred in 1792. Other volcanic activity—also phreatic but more vigorous and more sustained—occurred from the summer of 1851 into January 1852. Sulfurous fumes were more common, explosions were larger and more numerous, water appeared in the previously dry crater, and the flow of water in a local river increased. Saint-Pierre, the chief city of Martinique, was dusted with ash.

On the scale of volcanic magnitude, the 1792 and 1851-52 activity barely registered. When Mount Pelee reawakened in 1902, citizens of Saint-Pierre expected the volcano to mimic its past sluggish behavior, and they assumed that their city offered a safe haven from any lethal volcanic activity.

In late April 1902, earthquakes were felt in Saint-Pierre, and phreatic explosions began on Mount Pelee. Within days, the vigor of the explosions exceeded anything witnessed since the island was settled. The intensity then subsided for a few days. Such "roller coaster" behavior is common when long dormant volcanoes reawaken.

The roller coaster explosions increased again—to a higher level—as the eruptions returned on May Day. Lightning laced the eruption clouds, and trade winds dumped ash on villages to the west. Heavy ashfall at times caused total darkness, breathing was difficult, and domestic animals cried out in terror. Some of the afflicted residents panicked and headed for the perceived safety of larger settlements, especially Saint-Pierre, about 10 km (6 miles) south of Pelee's summit.

Saint-Pierre received its first ashfall on May 3. Mount Pelee was relatively quiet for most of the next two days. But on the afternoon of May 5, a mudflow swept down a river on the southwest flank of the volcano, destroying a sugar mill. The massive flow crushed 23 people and generated a series of three tsunamis as it hit the sea. The tsunamis swept along the coast, damaging buildings and boats.

The explosions resumed the night of May 5. The following morning, parts of the eruption plume became incandescent, signifying that the character of the eruption had changed. The phreatic explosions had finally given way to magmatic explosions as magma reached the surface. The explosions continued through the next day and night.

A brief lull was shattered by a tremendous explosion at about 8:00 a.m. on May 8. A ground-hugging cloud of incandescent lava particles suspended by searing turbulent gases moved at hurricane speed down the southwest flank of the volcano, reaching Saint-Pierre at 8:02 a.m. Escape from the city was virtually impossible. Almost everyone within the city proper—about 26,000 people—died horrifically, burned or buried by falling masonry. The hot ash ignited a firestorm, fueled by smashed buildings and countless casks of rum.
Only two survived within the city, along with a few tens of people caught within the margins of the cloud. All survivors were badly burned.

The phenomenon that destroyed Saint-Pierre—unknown to science in 1902—is now called a pyroclastic flow and has been witnessed at many other volcanoes around the world. Pyroclastic flows are usually produced by volcanoes whose lavas have a high proportion of silica. Fortunately, Hawaiian lavas have rather low silica content and do not produce pyroclastic flows.

### Volcano Activity Update

Eruptive activity at the Puu Oo vent of Kīlauea Volcano continued unabated during the past week. Most lava flows have been at the lower and upper ends of the rootless shield complex along the Mother's Day lava tube south of Puu Oo. On March 4, a lava flow issued from vents at the south base of Puu Oo and moved some 3 km (2 miles) southward; this flow has been sporadically active throughout the week. Vents within the crater of Puu Oo remain incandescent. No active flows are on Pulama pali or the coastal flat below Paliuli, and no lava is entering the ocean. No earthquakes were reported felt on the island during the past week.

Mauna Loa is not erupting. The summit region continues to inflate slowly. Seismic activity remains very low, with no earthquakes located in the summit area since early February 19.
2019-11-11T20:41:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17556296288967133, "perplexity": 11598.023739272398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664437.49/warc/CC-MAIN-20191111191704-20191111215704-00353.warc.gz"}
https://indico.bnl.gov/event/13873/
BNL Physics Colloquia

# How do we discover Majorana particles in nanowires?

## by Prof. Sergey Frolov (Univ. of Pittsburgh)

US/Eastern, Online

#### Description

Majorana particles are real solutions of the Dirac equation, representing their own antiparticles. In the condensed matter context, Majorana refers to electronic modes in nanostructures described by peculiar ‘pulled-apart’ wavefunctions and by hypothesized non-Abelian exchange. This last property makes them interesting for quantum computing. I will present our efforts to generate and verify Majorana modes in semiconductor nanowires coupled to superconductors. In particular, how can we tell Majorana signatures apart from similar Andreev states that do not have non-Abelian properties? While we may not have a verified Majorana observation now, I will talk about ways to get there: through careful experiments, improved nanowires and device fabrication and with eyes open for alternative explanations.

Join ZoomGov Meeting https://bnl.zoomgov.com/j/1605020278?pwd=cHJ1bDRuK1FDNnZLSnpxVkZhcDQ3QT09

Meeting ID: 160 502 0278 Passcode: E=mc2
2022-08-08T10:10:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5796505808830261, "perplexity": 4812.886266381186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00713.warc.gz"}
https://www.nist.gov/ncnr/spin-filters/references/nsf-triple-axis-info
# NSF TRIPLE AXIS INFO

## STANDARD SETUP

| SEQUENCE NAME | DESCRIPTION |
| --- | --- |
| PB_polarizesetup.seq.txt | This creates all the devices and opens Python communication with the 3He instrument rack. This must be done at the beginning of the experiment. |
| PB_Qmode.seq.txt | This sets the sample and guide field currents for P $$\parallel$$ Q. |
| PB_Vmode.seq.txt | This sets the sample and guide field currents for P $$\perp$$ Q. |
| PB_polarizeddestroy.seq.txt | This destroys all the devices and closes Python communication with the 3He instrument rack. This must be done at the end of the experiment. |

NOTE: If the server re-starts, destroy all devices first and then run the sequence file to create devices.

## 7T SCM SETUP

| SEQUENCE NAME | DESCRIPTION |
| --- | --- |
| PB_polarizesetup7T.seq.txt | This creates all the devices and opens Python communication with the 3He instrument rack. This must be done at the beginning of the experiment. |
| PB_SC7TGuides.seq.txt | This sets the guide field currents for P $$\perp$$ Q. |
| PB_polarizeddestroy7T.seq.txt | This destroys all the devices and closes Python communication with the 3He instrument rack. This must be done at the end of the experiment. |

NOTE: If the server re-starts, destroy all devices first and then run the sequence file to create devices.

## NSF CELL FLIPPING

| SEQUENCE NAME | DESCRIPTION |
| --- | --- |
| PB_OffOff.seq.txt | This corresponds to the 'A' state. |
| PB_ONOff.seq.txt | This corresponds to the 'B' state. |
| PB_OffON.seq.txt | This corresponds to the 'C' state. |
| PB_ONON.seq.txt | This corresponds to the 'D' state. |

## MEASUREMENT MODIFIER

| DESCRIPTION | MODIFIER(S) |
| --- | --- |
| Beam profile: size, slits, etc | • pb wide • pb 1 cm |
| Experiment type | • pb 999 (inelastic or powder) • pb 60 (elastic) |
| Flipping ratio | • pb fr |
| Empty cell transmission | • pb mt |
| Polarizer cell transmission / Polarizer killed cell | • pb p cell name • pb po cell name |
| Analyzer cell transmission / Analyzer cell killed | • pb a cell name • pb ao cell name |
2020-01-19T21:04:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38298746943473816, "perplexity": 8055.176680489695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595282.35/warc/CC-MAIN-20200119205448-20200119233448-00345.warc.gz"}
https://wikimho.com/us/q/astronomy/33899
### Does the sun cross other spiral arms in its movement around the galaxy's center? • Today, Ansa.it released an article that states: [...]. In questo suo peregrinare galattico, il Sole ha attraversato anche i due bracci della Via Lattea Perseo e Centauro. "Sono zone di alta densità stellare, in corrispondenza delle quali il Sole e le stelle intorno rallentano e possono anche fermarsi, [...] (my) translation to English: In this galactic wandering, the sun traversed the two spiral arms of the Milky Way, Perseus and Centaurus. These are regions of high density of stars, in correspondence of which the Sun and other surrounding stars slow down and can even stop, [...] The article is complemented with a suggestive video-clip that depicts the Sun orbiting around the galaxy's center and crossing various spiral arms. Given my lack of an education on the topic, I cannot reconcile the existence of an entity called "spiral arm" with the notion that a star can freely cross several "spiral arms" in its movement around the galaxy's center. My intuition is that if every star within the Milky Way could freely cross the "spiral arms" several times during their life-span, then there would be no such thing as a "spiral arm" at all (because this is supposed to be a mass grouping wherein all matter moves --please forgive me the abuse of the word-- "together"). • Where am I wrong? • Is the above description of the movement of the Sun accurate? • In case of an affirmative answer, is it a very special case or is it a defining characteristic of the entire Orion's arm? • In the latter case, can other spiral arms cross each other? • pela Correct answer 3 years ago ### What is a spiral arm? The reason that the Sun, in principle (but see below), may cross spiral arms is that galactic spiral arms are not rigid entities consisting of some particular stars; rather they are "waves" with a temporary increase in density. An often-used analogy is the pile-up of cars behind a slow-moving truck: At all times, all cars are moving forward, but for a while, a car behind the truck will be moving slow, until it overtakes and speeds up. Similarly, stars may overtake, or be overtaken by, the spiral arms. Inside a certain distance from the center of the galaxy called the corotation radius ($$R_\mathrm{c}$$) stars move faster than the arms, while outside they move slower. Since the stars and the interstellar gas follow the rotation of the arms for a while once they're inside, the density of the arms is higher than outside, but only by a factor of a few (e.g. Rix & Rieke 1993). When interstellar gas falls into the potential well it is compressed, triggering star formation. Since the most luminous stars burn their fuel fast, they will mostly have died once they leave an arm. Hence, what we see as spiral arms is not so much the extra stars, but mostly due to the light from the youngest stars which are still inside the arms. Since most luminous also means hottest, their light peaks in the bluish region — hence spiral arms appear blue. ### Origin of the spiral structure At least the most prominent spiral arms (especially grand designs) are thought to be created by these long-lived, quasi-stationary density waves (Lin & Shu 1964). The reason that the density waves exist in the first place is not well-understood, I think, but may have to do with anisotropic gravitational potentials and/or tidal forces from nearby galaxies (e.g. Semczuk et al. 2017). 
But in fact even small perturbations may spawn gravitational instabilities that propagate as density waves. In computer simulations of galaxy formation, even numerical instabilities may cause this, so the fact that your simulated galaxy has spiral arms doesn't necessarily mean that you got your physics right. When the luminous and hence massive stars die, they explode as supernovae. The feedback from this process, as well as that exerted by the radiation pressure before they die, may help maintain the density waves, at least in flocculent galaxies (Mueller & Arnett 1972). Perhaps this so-called Stochastic Self-Propagating Star Formation may also initiate the density waves (see discussion in Aschwanden et al. 2018).

The rotation speed of the material in the galactic disk is roughly constant with distance from the center (this is mainly due to the dark matter halo hosting a galaxy). Hence, stars close to the center complete a revolution faster than those farther away. In contrast, the spiral pattern rotates more like a rigid disk such that, in an inertial frame, the pattern can be described by a constant angular speed $$\Omega_\mathrm{p}$$ throughout the disk. However, note that spiral arms are transient phenomena; they appear and disappear with lifetimes of the order of (a few) Myr (e.g. Grand et al. 2012; 2014). Sometimes you also see multiple spiral patterns propagating with different velocities.

### The Sun in the Milky Way

Note: This section first contained errors based on dubious values for the angular speed of the spiral arms, as pointed out by @PeterErwin and @eagle275.

In the case of our Sun, we happen to be located very near the corotational radius $$R_\mathrm{c}$$; we sit at a distance of $$R_0 = 8.32\,\mathrm{kpc}$$ from the center of the galaxy (Gillessen et al. 2017), while $$R_\mathrm{c} = 8.51\,\mathrm{kpc}$$ (Dias et al. 2019). Using Gaia data, Dias et al. (2019) find a pattern angular speed $$\Omega_\mathrm{p} = 28.2 \pm 2.1\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{kpc}^{-1}$$. At the location of the Sun ($$R_0$$) this implies a pattern speed of $$\simeq235\,\mathrm{km}\,\mathrm{s}^{-1}$$. This is only a little bit slower than (and in fact statistically consistent with) the Sun's velocity of $$239\pm5\,\mathrm{km}\,\mathrm{s}^{-1}$$ (Planck Collaboration et al. 2018). If the spiral arms were "permanent"*, the timescale for the Sun crossing an arm would hence be gigayears. However, because as described above they're quite transient, we might once in a while overtake a spiral arm. I haven't been able to find firm evidence for whether we have or haven't crossed any arms; Gies & Helsel (2005) argue that we have crossed an arm four times within the last 500 Myr, but base this on matching glaciation epochs with passages through spiral arms (and admit that this requires a lower but still acceptable pattern speed).

### The article you link to

I now wrote to Jesse Christiansen (who the linked article quotes) and asked her if she knows whether or not we are moving in and out of spiral arms; she replied within roughly 8 seconds, tagging Karen Masters who chimed in even faster — they both agree that this is an ongoing debate with no conclusive evidence.

Anyway, the article seems to have misunderstood the tweet from Jesse Christiansen. In her animation she shows the journey of the Sun, but shows the galaxy itself as being static, which she did on purpose to keep it simple. Hence, you see the Sun traversing the arms unnaturally fast.
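As a quick numerical check of the figures quoted above, here is a small Python sketch (not part of the original answer; the ~13 kpc inter-arm spacing is an assumed round number taken from the comment discussion below) that converts the quoted pattern speed into a rough arm-crossing timescale for a hypothetical permanent arm.

```python
# Back-of-the-envelope check of the numbers quoted above (illustrative only).
# Omega_p and R0 are the Dias et al. (2019) / Gillessen et al. (2017) values
# quoted in the answer; the ~13 kpc inter-arm spacing is a rough assumption.

KM_PER_KPC = 3.086e16      # kilometres in one kiloparsec
SEC_PER_GYR = 3.156e16     # seconds in one gigayear

omega_p = 28.2             # pattern angular speed [km/s/kpc]
r0 = 8.32                  # Sun's galactocentric radius [kpc]
v_sun = 239.0              # Sun's circular speed [km/s]

v_pattern = omega_p * r0   # speed of the spiral pattern at R0 [km/s]
dv = v_sun - v_pattern     # drift of the Sun relative to the pattern [km/s]

arm_spacing_kpc = 13.0     # assumed distance between arms along the orbit [kpc]
crossing_time_gyr = arm_spacing_kpc * KM_PER_KPC / dv / SEC_PER_GYR

print(f"pattern speed at R0: {v_pattern:.0f} km/s")        # ~235 km/s
print(f"relative drift:      {dv:.1f} km/s")               # only a few km/s
print(f"time to next arm:    {crossing_time_gyr:.1f} Gyr") # gigayears, as stated
```

With the quoted uncertainties the drift is statistically consistent with zero, so the crossing time could be even longer; the point is only that, for a Sun this close to corotation, permanent arms would be crossed on gigayear timescales.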
+1 This simulation shows stars entering and leaving the density waves (spiral arms) https://www.youtube.com/watch?v=9B9i4vjj5D4

@DaveGremlin Very nice illustration, but after watching it my room spins in the opposite direction. The assumed rate of overtake seems too high for my "nose".. The sun takes roughly 250 to 280 million years for a rotation and your equation says we overtake roughly 2 arms per rotation .. I would assume with this speed difference and the path the sun takes, it's maybe closer to 1 overtake every couple of rotations

@eagle275 I'm not sure I understand… Two arms per rotation — i.e. per ~250 Myr — gives one arm per ~125 Myr. Or, in other words, the circumference at $R_0$ is ~50 kpc, so ~13 kpc between arms. And since at $R_0$ the vel. diff. is ~100 km/s, it should take 50 kpc / 100 km/s, i.e. ~130 Myr, right?

Where do your 100 km/s come into play ... you say speed difference is 11.9 km/s / kpc - but the difference I see is only 0.2 kpc .. so I see a speed differential of 2.26 km/s .. and with this 2.26 your overtake rate slows down by a factor of 1/44 .. or once per 5.4 billion years

@LightnessRaceswithMonica Interesting, yours has the spirals themselves rotating around the centre, while the earlier link has the spirals remain in a fixed position. I wonder which is correct

@JBentley -- The "simulation" video linked to by DaveGremlin isn't a real spiral galaxy simulation; it's a kludge using a Solar System simulator where the elliptical orbits are carefully aligned to produce a *stationary* spiral pattern. The second video (based on an actual N-body simulation) is much more correct.

Comments are not for extended discussion; this conversation has been moved to chat.
2022-12-05T21:29:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6774670481681824, "perplexity": 1688.5550946307403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00254.warc.gz"}
https://par.nsf.gov/biblio/10077684-diazotrophic-lt-gt-trichodesmium-lt-gt-impact-uvvis-radiance-pigment-composition-western-tropical-south-pacific
Diazotrophic *Trichodesmium* impact on UV–Vis radiance and pigment composition in the western tropical South Pacific

Abstract. We assessed the influence of the marine diazotrophic cyanobacterium Trichodesmium on the bio-optical properties of western tropical South Pacific (WTSP) waters (18–22°S, 160°E–160°W) during the February–March 2015 OUTPACE cruise. We performed measurements of backscattering and absorption coefficients, irradiance, and radiance in the euphotic zone with a Satlantic MicroPro free-fall profiler and took Underwater Vision Profiler 5 (UVP5) pictures for counting the largest Trichodesmium spp. colonies. Pigment concentrations were determined by fluorimetry and high-performance liquid chromatography and picoplankton abundance by flow cytometry. Trichome concentration was estimated from pigment algorithms and validated by surface visual counts. The abundance of large colonies counted by the UVP5 (maximum 7093 colonies m−3) was well correlated to the trichome concentrations (maximum 2093 trichomes L−1) with an aggregation factor of 600. In the Melanesian archipelago, a maximum of 4715 trichomes L−1 was enumerated in pump samples (3.2 m) at 20°S, 167°30′E. High Trichodesmium abundance was always associated with absorption peaks of mycosporine-like amino acids (330, 360 nm) and high particulate backscattering, but not with high Chl a fluorescence or blue particulate absorption (440 nm). Along the west-to-east transect, Trichodesmium together with Prochlorococcus represented the major part of total chlorophyll concentration; the […]

Authors:
Award ID(s):
Publication Date:
NSF-PAR ID: 10077684
Journal Name: Biogeosciences
Volume: 15
Issue: 16
Page Range or eLocation-ID: 5249 to 5269
ISSN: 1726-4189

4. Abstract. Photoacoustic spectroscopy (PAS) has become a popular technique for measuring absorption of light by atmospheric aerosols in both the laboratory and field campaigns. It has low detection limits, measures suspended aerosols, and is insensitive to scattering. But PAS requires rigorous calibration to be applied quantitatively. Often, a PAS instrument is either filled with a gas of known concentration and absorption cross section, such that the absorption in the cell can be calculated from the product of the two, or the absorption is measured independently with a technique such as cavity ring-down spectroscopy. Then, the PAS signal can be regressed upon the known absorption to determine a calibration slope that reflects the sensitivity constant of the cell and microphone. Ozone has been used for calibrating PAS instruments due to its well-known UV–visible absorption spectrum and the ease with which it can be generated. However, it is known to photodissociate up to approximately 1120 nm via the $\mathrm{O}_3 + h\nu\,(>1.1\,\mathrm{eV}) \rightarrow \mathrm{O}_2(^3\Sigma_g^-) + \mathrm{O}(^3P)$ pathway, which is likely to lead to inaccuracies in aerosol measurements. Two recent studies have investigated the use of O3 for PAS […]
2023-03-28T01:42:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44787999987602234, "perplexity": 13053.143290964596}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00012.warc.gz"}
https://itl.nist.gov/div898/software/dataplot/refman2/auxillar/diffhdle.htm
# DIFFERENCE OF HODGES-LEHMANN

Name: DIFFERENCE OF HODGES-LEHMANN (LET)
Type: Let Subcommand
Purpose: Compute the difference between the Hodges-Lehmann location estimators for two response variables.
Description: The Hodges-Lehmann location estimate is based on ranks. This makes it more resistant, as defined above, than the mean. This estimator also has high efficiency for symmetric distributions. It may be less successful with some skewed distributions. Specifically, the Hodges-Lehmann estimate for location is defined as

$$\hat{\mu} = \mbox{median} \frac{X_i + X_j} {2} \hspace{0.5in} 1 \le i \le j \le n$$

Dataplot uses ACM algorithm 616 (HLQEST, written by John Monahan) to compute the estimate. This is a fast, exact algorithm. One modification is that for n <= 25 Dataplot computes the estimate directly from the definition. For the difference of the Hodges-Lehmann location estimates, the Hodges-Lehmann location estimate is computed for each of two samples and then their difference is taken.
Syntax: LET <par> = DIFFERENCE OF HODGES-LEHMANN <y1> <y2> <SUBSET/EXCEPT/FOR qualification>
where <y1> is the first response variable; <y2> is the second response variable; <par> is a parameter where the computed difference of the Hodges-Lehmann location estimates is stored; and where the <SUBSET/EXCEPT/FOR qualification> is optional.
Examples:
LET A = DIFFERENCE OF HODGES-LEHMANN Y1 Y2
LET A = DIFFERENCE OF HODGES-LEHMANN Y1 Y2 SUBSET X > 1
Note: Dataplot statistics can be used in a number of commands. For details, enter
Default: None
Synonyms: None
Related Commands:
HODGES-LEHMANN = Compute the Hodges-Lehmann location estimate.
MEAN = Compute the mean.
MEDIAN = Compute the median.
TRIMMED MEAN = Compute the trimmed mean.
BIWEIGHT LOCATION = Compute the biweight location.
DIFFERENCE OF MEAN = Compute the difference of the means.
DIFFERENCE OF MEDIAN = Compute the difference of the median.
DIFFERENCE OF TRIMMED MEAN = Compute the difference of the trimmed mean.
DIFFERENCE OF BIWEIGHT LOCATION = Compute the difference of the biweight location.
STATISTICS PLOT = Generate a statistic versus subset plot.
BOOTSTRAP PLOT = Generate a bootstrap plot.
TABULATE = Perform a tabulation for a specified statistic.
Reference:
John Monahan (1984), "Algorithm 616: Fast Computation of the Hodges-Lehmann Location Estimator," ACM Transactions on Mathematical Software, Vol. 10, No. 3, pp. 265-270.
Rand Wilcox (1997), "Introduction to Robust Estimation and Hypothesis Testing," Academic Press.
Applications: Data Analysis
Implementation Date: 2003/03
Program:
SKIP 25
READ IRIS.DAT Y1 TO Y4 X
.
LET A = DIFFERENCE OF HODGES-LEHMANN Y1 Y2
TABULATE DIFFERENCE OF HODGES-LEHMANN Y1 Y2 X
.
XTIC OFFSET 0.2 0.2
X1LABEL GROUP ID
Y1LABEL DIFFERENCE OF HODGES-LEHMANN
CHAR X
LINE BLANK
DIFFERENCE OF HODGES-LEHMANN PLOT Y1 Y2 X
CHAR X ALL
LINE BLANK ALL
BOOTSTRAP DIFFERENCE OF HODGES-LEHMANN PLOT Y1 Y2 X
Dataplot generated the following output.
**************************************************
** LET A = DIFFERENCE OF HODGES LEHMANN Y1 Y2 **
**************************************************
THE COMPUTED VALUE OF THE CONSTANT A = 0.27500002E+01
*****************************************************
** TABULATE DIFFERENCE OF HODGES LEHMANN Y1 Y2 X **
*****************************************************
* Y1 AND Y2 X * DIFFERENCE OF HODGES-LEHMANN
**********************************************
1.00000 * 1.60000
2.00000 * 3.10000
3.00000 * 3.60000
GROUP-ID AND STATISTIC WRITTEN TO FILE DPST1F.DAT
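For readers who want to check the statistic outside of Dataplot, here is a minimal Python sketch of the same calculation. It implements the definition above directly (median of pairwise averages with i <= j), not the fast ACM 616 algorithm, and the example data are made up rather than read from IRIS.DAT.

```python
# Minimal sketch of DIFFERENCE OF HODGES-LEHMANN from the definition above
# (median of (x_i + x_j)/2 over i <= j); O(n^2) memory, fine for small samples.
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of (x_i + x_j)/2 for i <= j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))        # all index pairs with i <= j (includes i == j)
    return np.median((x[i] + x[j]) / 2.0)

def difference_of_hodges_lehmann(y1, y2):
    """Difference of the Hodges-Lehmann estimates of two response variables."""
    return hodges_lehmann(y1) - hodges_lehmann(y2)

# Example with synthetic data (the Dataplot example above uses IRIS.DAT instead):
rng = np.random.default_rng(0)
y1 = rng.normal(loc=5.8, scale=0.8, size=150)
y2 = rng.normal(loc=3.0, scale=0.4, size=150)
print(difference_of_hodges_lehmann(y1, y2))
```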
2021-12-02T13:20:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.646619439125061, "perplexity": 8752.895447778259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362219.5/warc/CC-MAIN-20211202114856-20211202144856-00371.warc.gz"}
https://zbmath.org/authors/?q=ai%3Amassey.william-s
# zbMATH — the first resource for mathematics ## Massey, William S. Compute Distance To: Author ID: massey.william-s Published as: Massey, W. S.; Massey, William S. External Links: MGP · Wikidata · GND Documents Indexed: 56 Publications since 1949, including 10 Books all top 5 #### Co-Authors 41 single-authored 5 Blakers, Albert Laurence 3 Peterson, Franklin P. 2 Traldi, Lorenzo 1 Auslander, Louis 1 Green, Leon W. 1 Hahn, Frank J. 1 Markus, Lawrence 1 Rolfsen, Dale 1 Stallings, John R. 1 Szczarba, Robert H. 1 Uehara, Hiroshi all top 5 #### Serials 7 Annals of Mathematics. Second Series 5 Pacific Journal of Mathematics 4 Indiana University Mathematics Journal 4 Proceedings of the American Mathematical Society 4 Graduate Texts in Mathematics 3 Boletín de la Sociedad Matemática Mexicana. Segunda Serie 2 American Mathematical Monthly 2 Duke Mathematical Journal 2 Topology and its Applications 2 Bulletin of the American Mathematical Society 1 American Journal of Mathematics 1 Geometriae Dedicata 1 Memoirs of the American Mathematical Society 1 Tohoku Mathematical Journal. Second Series 1 Topology 1 Transactions of the American Mathematical Society 1 Proceedings of the National Academy of Sciences of the United States of America 1 Journal of Knot Theory and its Ramifications 1 Journal of Mathematics and Mechanics 1 Annals of Mathematics Studies 1 Princeton Mathematical Series 1 Pure and Applied Mathematics, Marcel Dekker all top 5 #### Fields 20 Manifolds and cell complexes (57-XX) 18 Algebraic topology (55-XX) 4 Group theory and generalizations (20-XX) 2 History and biography (01-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Global analysis, analysis on manifolds (58-XX) #### Citations contained in zbMATH 48 Publications have been cited 1,239 times in 1,102 Documents Cited by Year Algebraic topology: An introduction. Zbl 0153.24901 Massey, William S. 1967 A basic course in algebraic topology. Zbl 0725.55001 Massey, William S. 1991 Algebraic topology: An introduction. 4th corr. print. Zbl 0361.55002 Massey, William S. 1977 Flows on homogeneous spaces. Appendix by L. Greenberg. Zbl 0106.36802 Auslander, Louis; Green, Leon W.; Hahn, Frank J.; Markus, Lawrence; Massey, William S. 1963 Singular homology theory. Zbl 0442.55001 Massey, William S. 1980 Homology and cohomology theory. An approach based on Alexander-Spanier cochains. Zbl 0377.55004 Massey, William S. 1978 Exact couples in algebraic topology. I.-V. Zbl 0049.24002 Massey, W. S. 1953 How to give an exposition of the Cech-Alexander-Spanier type homology theory. Zbl 0377.55005 Massey, W. S. 1978 Proof of a conjecture of Whitney. Zbl 0198.56701 Massey, W. S. 1969 The cohomology structure of certain fibre spaces. I. Zbl 0132.19103 Massey, W. S.; Peterson, F. P. 1965 The mod 2 cohomology structure of certain fibre spaces. Zbl 0168.44002 Massey, W. S.; Peterson, F. P. 1967 Some higher order cohomology operations. Zbl 0123.16103 Massey, W. S. 1958 On the Stiefel-Whitney classes of a manifold. Zbl 0089.39301 Massey, W. S. 1960 Surfaces of Gaussian curvature zero in Euclidean 3-space. Zbl 0114.36903 Massey, W. S. 1962 Obstructions to the existence of almost complex structures. Zbl 0192.29601 Massey, W. S. 1961 On the normal bundle of a sphere imbedded in Euclidean space. Zbl 0094.36002 Massey, W. S. 1959 Homotopy classification of higher dimensional links. Zbl 0575.57011 Massey, W. S.; Rolfsen, D. 1985 The Jacobi identity for Whitehead products. Zbl 0077.36402 Uehara, Hiroshi; Massey, W. S. 1957 The homotopy groups of a triad. II. 
Zbl 0046.40604 Blakers, A. L.; Massey, W. S. 1952 On the cohomology ring of a sphere bundle. Zbl 0089.39204 Massey, W. S. 1958 Cross products of vectors in higher dimensional Euclidean spaces. Zbl 0532.55011 Massey, W. S. 1983 Products in homotopy theory. Zbl 0053.12802 Blakers, A. L.; Massey, W. S. 1953 Products in exact couples. Zbl 0057.15204 Massey, W. S. 1954 The homotopy groups of a triad I. Zbl 0042.17301 Blakers, A. L.; Massey, W. S. 1951 The quotient space of the complex projective plane under conjugation is a 4-sphere. Zbl 0273.57019 Massey, W. S. 1973 Completion of link modules. Zbl 0464.57001 Massey, W. S. 1980 On the dual Stiefel-Whitney classes of a manifold. Zbl 0121.18005 Massey, W. S.; Peterson, F. P. 1963 Some problems in algebraic topology and the theory of fibre bundles. Zbl 0068.16205 Massey, W. S. 1955 Algebraic topology: An introduction. (Algebraiceskaja topologija. Vvedenie.). Zbl 0361.55003 Massey, William S.; Stallings, John 1977 On the Stiefel-Whitney classes of a manifold. II. Zbl 0109.15902 Massey, W. S. 1962 Higher order linking numbers. Zbl 0911.57009 Massey, W. S. 1998 Pontryagin squares in the Thom space of a bundle. Zbl 0188.28504 Massey, W. S. 1969 Imbedding of projective planes and related manifolds in spheres. Zbl 0285.57016 Massey, W. S. 1974 Higher order linking numbers. Zbl 0212.55904 Massey, W. S. 1969 Some new algebraic methods in topology. Zbl 0055.16601 Massey, W. S. 1954 The homotopy groups of a triad. III. Zbl 0053.12901 Blakers, A. L.; Massey, W. S. 1953 Non-existence of almost complex structures on quaternionic projective spaces. Zbl 0112.14702 Massey, W. S. 1962 Normal vector fields on manifolds. Zbl 0100.19301 Massey, W. S. 1961 On a conjecture of K. Murasugi. Zbl 0563.57003 Massey, William S.; Traldi, Lorenzo 1986 Reminiscences of forty years as a mathematician. Zbl 0666.01009 Massey, W. S. 1989 Algebraic topology: An introduction. 5th corr. printing. Zbl 0457.55001 Massey, William S. 1981 Finite covering spaces of 2-manifolds with boundary. Zbl 0291.57001 Massey, W. S. 1974 Line elements fields on manifolds. Zbl 0112.38002 Massey, W. S.; Szczarba, R. H. 1962 On the imbeddability of the real projective spaces in Euclidean space. Zbl 0094.36003 Massey, W. S. 1959 A history of cohomology theory. Zbl 1001.55002 Massey, William S. 1999 The homotopy type of certain configuration spaces. Zbl 0853.57022 Massey, W. S. 1992 Homotopy classification of 3-component links of codimension greater than 2. Zbl 0717.57009 Massey, W. S. 1990 A generalization of the Alexander duality theorem. Zbl 0483.55001 Massey, W. S. 1981 A history of cohomology theory. Zbl 1001.55002 Massey, William S. 1999 Higher order linking numbers. Zbl 0911.57009 Massey, W. S. 1998 The homotopy type of certain configuration spaces. Zbl 0853.57022 Massey, W. S. 1992 A basic course in algebraic topology. Zbl 0725.55001 Massey, William S. 1991 Homotopy classification of 3-component links of codimension greater than 2. Zbl 0717.57009 Massey, W. S. 1990 Reminiscences of forty years as a mathematician. Zbl 0666.01009 Massey, W. S. 1989 On a conjecture of K. Murasugi. Zbl 0563.57003 Massey, William S.; Traldi, Lorenzo 1986 Homotopy classification of higher dimensional links. Zbl 0575.57011 Massey, W. S.; Rolfsen, D. 1985 Cross products of vectors in higher dimensional Euclidean spaces. Zbl 0532.55011 Massey, W. S. 1983 Algebraic topology: An introduction. 5th corr. printing. Zbl 0457.55001 Massey, William S. 1981 A generalization of the Alexander duality theorem. Zbl 0483.55001 Massey, W. 
S. 1981 Singular homology theory. Zbl 0442.55001 Massey, William S. 1980 Completion of link modules. Zbl 0464.57001 Massey, W. S. 1980 Homology and cohomology theory. An approach based on Alexander-Spanier cochains. Zbl 0377.55004 Massey, William S. 1978 How to give an exposition of the Cech-Alexander-Spanier type homology theory. Zbl 0377.55005 Massey, W. S. 1978 Algebraic topology: An introduction. 4th corr. print. Zbl 0361.55002 Massey, William S. 1977 Algebraic topology: An introduction. (Algebraiceskaja topologija. Vvedenie.). Zbl 0361.55003 Massey, William S.; Stallings, John 1977 Imbedding of projective planes and related manifolds in spheres. Zbl 0285.57016 Massey, W. S. 1974 Finite covering spaces of 2-manifolds with boundary. Zbl 0291.57001 Massey, W. S. 1974 The quotient space of the complex projective plane under conjugation is a 4-sphere. Zbl 0273.57019 Massey, W. S. 1973 Proof of a conjecture of Whitney. Zbl 0198.56701 Massey, W. S. 1969 Pontryagin squares in the Thom space of a bundle. Zbl 0188.28504 Massey, W. S. 1969 Higher order linking numbers. Zbl 0212.55904 Massey, W. S. 1969 Algebraic topology: An introduction. Zbl 0153.24901 Massey, William S. 1967 The mod 2 cohomology structure of certain fibre spaces. Zbl 0168.44002 Massey, W. S.; Peterson, F. P. 1967 The cohomology structure of certain fibre spaces. I. Zbl 0132.19103 Massey, W. S.; Peterson, F. P. 1965 Flows on homogeneous spaces. Appendix by L. Greenberg. Zbl 0106.36802 Auslander, Louis; Green, Leon W.; Hahn, Frank J.; Markus, Lawrence; Massey, William S. 1963 On the dual Stiefel-Whitney classes of a manifold. Zbl 0121.18005 Massey, W. S.; Peterson, F. P. 1963 Surfaces of Gaussian curvature zero in Euclidean 3-space. Zbl 0114.36903 Massey, W. S. 1962 On the Stiefel-Whitney classes of a manifold. II. Zbl 0109.15902 Massey, W. S. 1962 Non-existence of almost complex structures on quaternionic projective spaces. Zbl 0112.14702 Massey, W. S. 1962 Line elements fields on manifolds. Zbl 0112.38002 Massey, W. S.; Szczarba, R. H. 1962 Obstructions to the existence of almost complex structures. Zbl 0192.29601 Massey, W. S. 1961 Normal vector fields on manifolds. Zbl 0100.19301 Massey, W. S. 1961 On the Stiefel-Whitney classes of a manifold. Zbl 0089.39301 Massey, W. S. 1960 On the normal bundle of a sphere imbedded in Euclidean space. Zbl 0094.36002 Massey, W. S. 1959 On the imbeddability of the real projective spaces in Euclidean space. Zbl 0094.36003 Massey, W. S. 1959 Some higher order cohomology operations. Zbl 0123.16103 Massey, W. S. 1958 On the cohomology ring of a sphere bundle. Zbl 0089.39204 Massey, W. S. 1958 The Jacobi identity for Whitehead products. Zbl 0077.36402 Uehara, Hiroshi; Massey, W. S. 1957 Some problems in algebraic topology and the theory of fibre bundles. Zbl 0068.16205 Massey, W. S. 1955 Products in exact couples. Zbl 0057.15204 Massey, W. S. 1954 Some new algebraic methods in topology. Zbl 0055.16601 Massey, W. S. 1954 Exact couples in algebraic topology. I.-V. Zbl 0049.24002 Massey, W. S. 1953 Products in homotopy theory. Zbl 0053.12802 Blakers, A. L.; Massey, W. S. 1953 The homotopy groups of a triad. III. Zbl 0053.12901 Blakers, A. L.; Massey, W. S. 1953 The homotopy groups of a triad. II. Zbl 0046.40604 Blakers, A. L.; Massey, W. S. 1952 The homotopy groups of a triad I. Zbl 0042.17301 Blakers, A. L.; Massey, W. S. 1951 all top 5 #### Cited by 1,402 Authors 9 Han, Sang-Eon 9 Thomas, Paul Emery 8 Suciu, Alexander I. 7 Carter, J. Scott 6 Church, Philip T. 
6 Host, Bernard 6 Kra, Bryna 6 Reidys, Christian Michael 6 Smith, Larry 6 Timourian, J. G. 5 Dłotko, Paweł 5 Gross, Jonathan L. 5 Kamada, Seiichi 5 Mahowald, Mark Edward 5 Massey, William S. 5 Mohar, Bojan 5 Mrozek, Marian 5 Peralta-Salas, Daniel 5 Singer, William M. 5 Širáň, Jozef 5 Wang, He 4 Dydak, Jerzy 4 Fernández Rodríguez, Marisa 4 Huang, Fenix W. D. 4 Hurder, Steven E. 4 Kasuya, Naohiko 4 Koschorke, Ulrich 4 Levine, Jerome P. 4 Lukina, Olga 4 Nedela, Roman 4 Parry, Gareth P. 4 Repovš, Dušan D. 4 Schrijver, Alexander 4 Selvakumar, Krishnan 4 Skopenkov, Arkadiĭ Borisovich 4 Spanier, Edwin Henry 4 Toda, Hirosi 4 Tucker, Thomas W. 4 Zieschang, Heiner 3 Alpert, Seth R. 3 Arquès, Didier G. 3 Bödi, Richard 3 Bressan, Alberto 3 Brown, Robert F. 3 Buijs, Urtzi 3 Chen, Kuo-Tsai 3 Daverman, Robert J. 3 Ding, Xie Ping 3 Enciso, Alberto 3 Fiala, Jiří 3 Finashin, Sergey Mikhailovich 3 Geiges, Hansjörg 3 Glasner, Eli 3 Golasiński, Marek 3 Gonçalves, Daciberg Lima 3 Gorbatsevich, Vladimir V. 3 Hardie, Keith A. 3 Harper, John R. 3 Helmke, Uwe R. 3 Honda, Atsufumi 3 Hsiang, Wu-Chung 3 James, Ioan Mackenzie 3 Kaczynski, Tomasz 3 Khashyarmanesh, Kazem 3 Komendarczyk, Rafal 3 Lannes, Jean E. 3 Löwen, Rainer 3 Mardešić, Sibe 3 Martin, John Rowlay 3 May, Jon Peter 3 McClendon, James F. 3 Mdzinarishvili, Leonard 3 Melikhov, Sergey Aleksandrovich 3 Molnár, Emil 3 Moreno-Fernández, José Manuel 3 Mundici, Daniele 3 Muñoz Velázquez, Vicente 3 Nicks, Rachel 3 Palazzo, Reginaldo jun. 3 Rafat, Hesham 3 Saito, Masahico 3 Satoh, Shin 3 Schirmer, Helga 3 Schwartz, Lionel 3 Shonkwiler, Clayton 3 Škoviera, Martin 3 Smirnov, Vladimir Alekseevich 3 Specogna, Ruben 3 Stong, Robert E. 3 Swann, Andrew F. 3 Szczarba, Robert H. 3 Walters, Peter 3 Wei, Guofang 3 Whitehead, George W. 3 Wood, John W. 3 Yang, Huijun 3 Zarati, Said 3 Ziou, Djemel 2 Acquistapace, Francesca 2 Afkhami, Mojgan ...and 1,302 more Authors all top 5 #### Cited in 253 Serials 100 Transactions of the American Mathematical Society 72 Topology and its Applications 50 Proceedings of the American Mathematical Society 34 Mathematische Zeitschrift 26 Journal of Pure and Applied Algebra 20 Journal of Algebra 20 Journal of Combinatorial Theory. Series B 20 Mathematische Annalen 20 Bulletin of the American Mathematical Society 19 Discrete Mathematics 19 Inventiones Mathematicae 15 Advances in Mathematics 14 Annales de l’Institut Fourier 13 Manuscripta Mathematica 12 Ergodic Theory and Dynamical Systems 11 Duke Mathematical Journal 11 Tohoku Mathematical Journal. Second Series 11 Discrete & Computational Geometry 11 Algebraic & Geometric Topology 10 Archiv der Mathematik 9 Communications in Mathematical Physics 9 Geometriae Dedicata 9 Differential Geometry and its Applications 8 European Journal of Combinatorics 8 Linear Algebra and its Applications 7 Rocky Mountain Journal of Mathematics 7 Journal of Geometry and Physics 7 Journal of Differential Equations 7 Journal of Functional Analysis 7 Nagoya Mathematical Journal 7 Theoretical Computer Science 7 Bulletin of the American Mathematical Society. New Series 7 Nonlinear Analysis. Theory, Methods & Applications 6 Israel Journal of Mathematics 6 Journal d’Analyse Mathématique 6 Journal of Mathematical Analysis and Applications 6 Mathematical Notes 6 Mathematical Proceedings of the Cambridge Philosophical Society 6 Nuclear Physics. 
B 6 Information Sciences 6 Journal of Computer and System Sciences 6 Mathematical Systems Theory 6 International Journal of Mathematics 6 Journal of Mathematical Sciences (New York) 6 Geometry & Topology 6 Journal of Homotopy and Related Structures 5 Communications in Algebra 5 Journal of Mathematical Physics 5 Annali di Matematica Pura ed Applicata. Serie Quarta 5 Bulletin de la Société Mathématique de France 5 Journal of Combinatorial Theory. Series A 5 Journal of Soviet Mathematics 5 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 5 Advances in Applied Mathematics 5 Journal of Symbolic Computation 5 Calculus of Variations and Partial Differential Equations 5 Annals of Mathematics. Second Series 5 Comptes Rendus. Mathématique. Académie des Sciences, Paris 5 Proceedings of the Steklov Institute of Mathematics 5 Journal of Fixed Point Theory and Applications 4 Chaos, Solitons and Fractals 4 The Mathematical Intelligencer 4 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 4 Acta Mathematica 4 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 4 Applied Mathematics and Computation 4 Cahiers de Topologie et Géométrie Différentielle Catégoriques 4 Glasgow Mathematical Journal 4 Journal of Geometry 4 Mathematika 4 Annals of Pure and Applied Logic 4 Computational Geometry 4 The Journal of Geometric Analysis 4 Journal of Elasticity 4 Journal of Knot Theory and its Ramifications 3 Archive for Rational Mechanics and Analysis 3 Computers & Mathematics with Applications 3 International Journal of Theoretical Physics 3 Mathematical Biosciences 3 Compositio Mathematica 3 Michigan Mathematical Journal 3 Proceedings of the Edinburgh Mathematical Society. Series II 3 Siberian Mathematical Journal 3 Acta Applicandae Mathematicae 3 Annals of Global Analysis and Geometry 3 $$K$$-Theory 3 Forum Mathematicum 3 Science in China. Series A 3 Pattern Recognition 3 Indagationes Mathematicae. New Series 3 Journal of Algebraic Combinatorics 3 Documenta Mathematica 3 Acta Mathematica Sinica. 
English Series 3 Kodai Mathematical Seminar Reports 3 Journal of Topology 2 Bulletin of the Australian Mathematical Society 2 Computer Methods in Applied Mechanics and Engineering 2 Communications on Pure and Applied Mathematics 2 International Journal of Control 2 International Journal of Mathematical Education in Science and Technology ...and 153 more Serials all top 5 #### Cited in 57 Fields 341 Manifolds and cell complexes (57-XX) 283 Algebraic topology (55-XX) 124 Differential geometry (53-XX) 111 Combinatorics (05-XX) 89 Dynamical systems and ergodic theory (37-XX) 74 Group theory and generalizations (20-XX) 71 Global analysis, analysis on manifolds (58-XX) 70 Algebraic geometry (14-XX) 65 Computer science (68-XX) 63 General topology (54-XX) 51 Category theory; homological algebra (18-XX) 44 Topological groups, Lie groups (22-XX) 43 Several complex variables and analytic spaces (32-XX) 34 Associative rings and algebras (16-XX) 33 Commutative algebra (13-XX) 33 Quantum theory (81-XX) 30 Convex and discrete geometry (52-XX) 28 Partial differential equations (35-XX) 27 Functions of a complex variable (30-XX) 22 Number theory (11-XX) 19 Nonassociative rings and algebras (17-XX) 19 Functional analysis (46-XX) 19 Geometry (51-XX) 17 Measure and integration (28-XX) 17 Operator theory (47-XX) 17 Numerical analysis (65-XX) 15 Mathematical logic and foundations (03-XX) 15 Calculus of variations and optimal control; optimization (49-XX) 14 Ordinary differential equations (34-XX) 14 Systems theory; control (93-XX) 13 Fluid mechanics (76-XX) 13 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 12 $$K$$-theory (19-XX) 11 Linear and multilinear algebra; matrix theory (15-XX) 10 Biology and other natural sciences (92-XX) 9 Order, lattices, ordered algebraic structures (06-XX) 9 Mechanics of particles and systems (70-XX) 9 Relativity and gravitational theory (83-XX) 7 Probability theory and stochastic processes (60-XX) 7 Mechanics of deformable solids (74-XX) 6 Statistical mechanics, structure of matter (82-XX) 6 Operations research, mathematical programming (90-XX) 5 Field theory and polynomials (12-XX) 5 Abstract harmonic analysis (43-XX) 5 Optics, electromagnetic theory (78-XX) 4 General and overarching topics; collections (00-XX) 4 Potential theory (31-XX) 4 Statistics (62-XX) 3 History and biography (01-XX) 3 General algebraic systems (08-XX) 3 Information and communication theory, circuits (94-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 1 Real functions (26-XX) 1 Difference and functional equations (39-XX) 1 Approximations and expansions (41-XX) 1 Integral equations (45-XX) 1 Classical thermodynamics, heat transfer (80-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
2021-04-18T06:27:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4977671802043915, "perplexity": 4225.581495806803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038468066.58/warc/CC-MAIN-20210418043500-20210418073500-00547.warc.gz"}
https://www.doitpoms.ac.uk/tlplib/stereographic/HTML5/Diehls_rule.html
Single crystal slip and Diehl's rule

In single crystals, slip typically occurs on the closely-packed planes in well defined directions, i.e. in ccp metals, the slip system is {111}<110>. This slip can be examined using the stereogram, and can be predicted in the same way.

Firstly, we need to look at slip itself. Slip occurs on a slip plane, which has a normal at an angle of $$\phi$$° to the tensile axis. Slip occurs in the slip direction, at an angle of $$\lambda$$° to the tensile axis. The slip plane area is $$A / \cos\phi$$, and the force is $$F \cos \lambda$$ in the slip direction. We can show this geometry on a stereogram (figure not reproduced here).

The slip is caused by the tensile force, resolved into the slip direction, giving a resolved shear stress of:

$$\tau = \frac{F}{A} \cos \phi \cos \lambda$$

The factor $$\cos \phi \cos \lambda$$ is called the Schmid factor. Slip occurs when $$\tau$$ reaches a critical value on the slip plane(s) for which the Schmid factor is a maximum.

During slip, if the sample is not constrained, the ends move relative to each other. Typically, in a tensile test the ends are constrained, meaning that the slip causes only internal motion. This results in rotation of the slip plane, so that the direction of slip rotates towards the tensile axis, or equivalently, the tensile axis rotates towards the slip direction. Now, the slip direction is constrained to lie in the slip plane. Therefore, on the stereogram, this rotation is represented by the rotation of the tensile axis towards the slip direction, along the great circle joining the tensile axis to the slip direction.

The system on which slip occurs is the one with the highest Schmid factor. Diehl's rule can be used to find the slip system on which slip occurs, by using a stereographic method. This can be used to identify slip in ccp crystals: Slip system {111}<110> and bcc crystals: Slip system {110} <111>.

Use of Diehl's rule for ccp crystals

First, start with a standard cubic stereogram, showing all poles of the form {100}, {110} and {111}. The great circles are right-angled spherical triangles, 24 in the northern hemisphere and 24 in the southern hemisphere. The next step is to identify the triangle containing the tensile axis. For example if the tensile axis is [123], it is located within the triangle constructed from 001, 011 and 111. It can be plotted using the techniques already covered in this TLP. For cubic crystals, the vector [123] is parallel to the normal to the plane (123), and so all we need to do is identify the location of the normal to the (123) planes.

[Figure: reflection plane]

To find the slip plane, take the {111} type pole in this triangle, and reflect it across the side of the triangle which it is opposite, i.e. reflecting 111 in the great circle containing 001 and 011, giving the slip plane as (111).

[Figure: reflection plane]

To find the slip direction, the <110> type pole in the triangle that contains the tensile axis is reflected in the great circle opposite to it. This analysis allows us to quickly produce the slip system. We can now see that slip occurs on the (111) plane, and in the [101] direction. Since we know that the tensile axis moves towards the slip direction, we can now predict the movement of the tensile axis. It heads towards [101], i.e. towards the 101 pole on the stereogram, or equivalently, the location of the normal to the (101) planes since we are dealing here with cubic crystals.
When it reaches the edge of the triangle, two slip systems are of equal Schmid factor. The slip system operating can change as the tensile axis moves. At this point, the slip direction becomes the vector sum of the slip directions of the two slip systems, i.e. in this case, the slip direction now becomes [112].
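As a cross-check on the stereographic construction, the Schmid factors can also be computed directly. The sketch below is not part of the original TLP page; it simply enumerates the 12 fcc {111}<110> slip systems for a chosen tensile axis and reports the one with the largest Schmid factor (note that plane normals and directions are reported only up to an overall sign, and some overbars in the Miller indices quoted above may have been lost in transcription).

```python
# Brute-force Schmid factor calculation for the fcc {111}<110> slip systems.
# Illustrative sketch; the tensile axis [123] matches the worked example above.
import itertools
import numpy as np

def schmid_factors(tensile_axis):
    t = np.asarray(tensile_axis, dtype=float)
    t /= np.linalg.norm(t)
    # The four distinct {111} plane normals (up to sign).
    normals = [np.array(n, dtype=float) for n in
               [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]]
    # All signed <110> vectors: exactly one zero component and two components of +/-1.
    slip_dirs = [np.array(d, dtype=float)
                 for d in itertools.product((-1, 0, 1), repeat=3)
                 if sorted(map(abs, d)) == [0, 1, 1]]
    results = []
    for n in normals:
        for d in slip_dirs:
            if abs(np.dot(n, d)) > 1e-9:   # slip direction must lie in the slip plane
                continue
            cos_phi = np.dot(t, n) / np.linalg.norm(n)
            cos_lam = np.dot(t, d) / np.linalg.norm(d)
            m = abs(cos_phi * cos_lam)     # Schmid factor m = cos(phi) * cos(lambda)
            results.append((round(m, 4),
                            tuple(int(x) for x in n),
                            tuple(int(x) for x in d)))
    return sorted(results, reverse=True)

best = schmid_factors([1, 2, 3])[0]
print(best)   # highest Schmid factor, ~0.47, for a tensile axis along [123]
```

For a [123] tensile axis this picks out a single primary system with Schmid factor of about 0.47, consistent with the system identified graphically by Diehl's rule above.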
2023-01-27T05:17:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4758763909339905, "perplexity": 1323.8445911604763}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00570.warc.gz"}
https://par.nsf.gov/biblio/10361559-tracing-birth-properties-stars-abundance-clustering
Tracing Birth Properties of Stars with Abundance Clustering

Abstract. To understand the formation and evolution of the Milky Way disk, we must connect its current properties to its past. We explore hydrodynamical cosmological simulations to investigate how the chemical abundances of stars might be linked to their origins. Using hierarchical clustering of abundance measurements in two Milky Way–like simulations with distributed and steady star formation histories, we find that groups of chemically similar stars comprise different groups in birth place (Rbirth) and time (age). Simulating observational abundance errors (0.05 dex), we find that to trace distinct groups of (Rbirth, age) requires a large vector of abundances. Using 15 element abundances (Fe, O, Mg, S, Si, C, P, Mn, Ne, Al, N, V, Ba, Cr, Co), up to ≈10 groups can be defined with ≈25% overlap in (Rbirth, age). We build a simple model to show that in the context of these simulations, it is possible to infer a star's age and Rbirth from abundances with precisions of ±0.06 Gyr and ±1.17 kpc, respectively. We find that abundance clustering is ineffective for a third simulation, where low-α stars form distributed in the disk and early high-α stars form more rapidly in clumps that sink toward the Galactic center as their constituent stars evolve […]

Authors:
Award ID(s):
Publication Date:
NSF-PAR ID: 10361559
Journal Name: The Astrophysical Journal
Volume: 924
Issue: 2
Page Range or eLocation-ID: Article No. 60
ISSN: 0004-637X
Publisher: DOI PREFIX: 10.3847

1. ABSTRACT Using a sample of red giant stars from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) Data Release 16, we infer the conditional distribution $p([\alpha /{\rm Fe}]\, |\, [{\rm Fe}/{\rm H}])$ in the Milky Way disk for the α-elements Mg, O, Si, S, and Ca. In each bin of [Fe/H] and Galactocentric radius R, we model p([α/Fe]) as a sum of two Gaussians, representing ‘low-α’ and ‘high-α’ populations with scale heights $z_1=0.45\, {\rm kpc}$ and $z_2=0.95\, {\rm kpc}$, respectively. By accounting for age-dependent and z-dependent selection effects in APOGEE, we infer the [α/Fe] distributions that would be found for a fair sample of long-lived stars covering all z. Near the Solar circle, this distribution is bimodal at sub-solar [Fe/H], with the low-α and high-α peaks clearly separated by a minimum at intermediate [α/Fe]. In agreement with previous results, we find that the high-α population is more prominent at smaller R, lower [Fe/H], and larger |z|, and that the sequence separation is smaller for Si and Ca than for Mg, O, and S. We find significant intrinsic scatter in [α/Fe] at fixed [Fe/H] for both the low-α and high-α populations, typically ∼0.04 dex. The means, dispersions, and relative amplitudes of this […]
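The clustering step described in the first abstract can be sketched with off-the-shelf tools. The snippet below is purely illustrative and is not the authors' pipeline: it uses scikit-learn's agglomerative (hierarchical) clustering on a mock abundance matrix, with the 15-element abundance vector, the 0.05 dex noise level, and the choice of ~10 clusters taken from the abstract, and everything else (sample size, group structure, purity metric) assumed.

```python
# Hedged sketch: hierarchical clustering of stars in a mock 15-element abundance space.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)

n_stars, n_elements = 2000, 15                 # 15 abundances, as in the abstract
true_groups = rng.integers(0, 10, n_stars)     # pretend (Rbirth, age) birth groups
centers = rng.normal(0.0, 0.15, size=(10, n_elements))      # group-mean abundances [dex]
X = centers[true_groups] + rng.normal(0.0, 0.05, size=(n_stars, n_elements))  # 0.05 dex noise

# Ward-linkage hierarchical clustering into ~10 chemical groups.
labels = AgglomerativeClustering(n_clusters=10, linkage="ward").fit_predict(X)

# Crude purity check: how often a chemical cluster is dominated by one birth group.
matched = 0
for k in range(10):
    members = true_groups[labels == k]
    if members.size:
        matched += np.bincount(members).max()
print(f"fraction of stars in their cluster's dominant birth group: {matched / n_stars:.2f}")
```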
2023-03-26T21:05:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6670097708702087, "perplexity": 4885.778948087517}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00563.warc.gz"}
https://www.rba.gov.au/publications/rdp/2019/2019-06/does-mortgage-debt-affect-spending.html
# RDP 2019-06: The Effect of Mortgage Debt on Consumer Spending: Evidence from Household-level Data

4. Does Mortgage Debt Affect Spending?

## 4.1 Identification

As discussed above, there are several challenges associated with identifying the effect of mortgage debt on spending. To see this, consider a simple regression of spending on mortgage debt at the household level:

$E_{h,t} = \beta_0 + \beta_1 D_{h,t} + \varepsilon_{h,t}$

where the dependent variable ($E_{h,t}$) is the level of non-housing spending of household h in year t, and the key variable of interest is the level of owner-occupier mortgage debt ($D_{h,t}$).

First, reverse causality is a problem. A household may choose to spend more than it earns, implying more borrowing and hence a higher stock of debt.[10] To partly mitigate this, we estimate the relationship between households' current spending and the previous year's mortgage debt. It is also worth noting that reverse causality would drive a positive bias in the coefficient estimate, implying that it would be harder to pin down a potential negative debt overhang effect.

Second, omitted variables can influence both spending and mortgage debt. For example, an increase in households' income expectations may lift both their intention to spend and their desire to take on debt. Alternatively, an increase in risk aversion may discourage both spending and borrowing. Again, this is most likely to induce a positive correlation between spending and debt and attenuate any negative debt overhang effect.

Some of the challenges in identifying the causal effect of mortgage debt on spending are highlighted in an ‘event study’ around the time of home purchase. Most home purchases in Australia are financed at least in part through mortgage debt. When a household buys a home for the first time (or when they trade up to a larger/higher-quality home) they typically take on a large amount of debt. And when they buy a new home they also tend to spend more on either furnishing the new home or renovating their existing home for sale. Given the HILDA Survey is longitudinal in nature we can observe the spending, income and debt of a household both before and after home purchase (Figure 4, left panel). In the year of home purchase, there is a notable jump in spending on durable goods. In the year after home purchase, durable goods spending returns to the pre-purchase level. This pattern of home purchase-related debt accumulation and spending in one year followed by lower spending the next year would lead us to empirically find a negative correlation between current spending and lagged mortgage debt. But the relationship would not be causal – it would be driven by an omitted variable – the decision to buy a new home. This would be similar to the ‘spending normalisation’ hypothesis (Andersen et al 2016). In contrast to the findings of Gross (2017) for the United States, we find little evidence of a fall in non-durable spending around the time of home purchase.

Another notable feature of this event study is the rise in household income in the years leading up to the purchase. We find that this is partly due to households working longer hours, presumably to save for a home deposit. But, in line with Gross (2017), it also seems to reflect a ‘selection effect’; the households that choose to buy a new home are those that received an increase in income (through, say, a promotion or bonus).
Either way, this event study highlights the need to control for factors that influence both debt and spending behaviour, such as the age, income, wealth and labour force characteristics of the household.

A similar event study can be undertaken around the year in which households fully pay off their mortgage. Under the PIH, household spending should not respond to anticipated changes in scheduled debt. We would expect spending to remain constant. At odds with this prediction, and suggesting that debt may constrain spending, we find that durables spending increases in the year that households fully repay their mortgage debt and non-durables spending increases in the years following (Figure 4, right panel). The increase in spending is larger than both the observed increase in disposable income and the average mortgage payment prior to paying off the debt, which suggests that the spending response cannot be fully explained by cash flow effects.

## 4.2 Household Fixed Effects Model

To deal with the issues highlighted above, we exploit the rich longitudinal information available in the HILDA Survey. To test whether debt levels directly influence spending, we first estimate the following regression model, which we refer to as the fixed effects (FE) model:

$E_{h,t} = \beta_0 + \beta_1 D_{h,t-1} + \beta_2 Y_{h,t} + \beta_3 A_{h,t-1} + \gamma X_{h,t} + \delta_h + \varepsilon_{h,t}$

This model includes the lagged level of owner-occupier mortgage debt ($D_{h,t-1}$) as the key variable of interest, household disposable income ($Y_{h,t}$) and the lagged reported home value ($A_{h,t-1}$). The model also includes a set of control variables ($X_{h,t}$), to summarise the other observed determinants of spending, including factors associated with a household's permanent income, such as age, education and labour force status of the household reference person.[11] The model includes a household fixed effect ($\delta_h$) which captures household characteristics that determine spending but are plausibly invariant over time (e.g. degree of impatience and risk aversion). Estimates are presented with and without the household fixed effect to gauge the importance of these characteristics. Our results are robust to including year fixed effects.

## 4.3 Instrumental Variables Model

To further alleviate any endogeneity concerns about unobserved time-varying confounding factors (such as changes in income expectations or local labour demand shocks), we also adopt an instrumental variables approach. For this, we exploit the home purchase history of each owner-occupier household in the survey. We use information on the timing of their most recent home purchase relative to other home owners in the same postcode as an instrument for the level of owner-occupier housing debt held by the household. The logic behind this instrument is that households living in the same area are exposed to identical time-varying local demand shocks, but differ in their debt holdings based on when they happened to purchase their home. The instrument should therefore be correlated with outstanding mortgage debt but uncorrelated with differences in spending for borrowers in the same postcode other than their level of debt.[12]

To take a hypothetical example, suppose there are two households that own identical homes in the same street. The only difference between them is that household A bought before a local housing boom happened while household B bought after.
It is plausible that household A borrowed less (in dollars) than household B because housing prices in the area were lower when household A made their purchase decision. The timing of the purchase decision should not affect the spending of household A relative to household B over and above its impact on their respective levels of indebtedness.

Australia experienced a large housing price boom in the early 2000s. The timing of this boom varied by state, generally starting in 2001 in the larger capital cities of Sydney, Melbourne and Brisbane, and in 2002 in other capital cities. We can think of the households that bought just before the boom in housing prices as the ‘lucky’ households while the comparable households that bought just after the boom are ‘unlucky’. To the extent there are other differences between households that bought before and after the boom that affect their consumption behaviour (such as the level of housing wealth), we can only control for observable differences in the model.

To gauge the relevance of the instrument, we compare the average initial debt holdings of households purchasing homes just before and just after the boom (Figure 5). There is a clear jump in average mortgage debt for those households that were ‘unlucky’ to buy just after the housing boom compared to the ‘lucky’ households that bought just before the boom.[13] Based on the weak identification test in Stock and Yogo (2005), the instrument is found to be significantly correlated with the household's current holdings of mortgage debt (even after the age of loan is taken into account).

This second model is based on a two-stage least squares regression, which we refer to as the instrumental variables (IV) model:

$$D_{hp,t} = \alpha_0 + \alpha_1 BOOM_{hp} + \alpha_2 Y_{hp,t} + \alpha_3 A_{hp,t-1} + \rho X_{hp,t} + \sigma_p + \mu_{hp,t}$$

$$E_{hp,t} = \beta_0 + \beta_1 \hat{D}_{hp,t-1} + \beta_2 Y_{hp,t} + \beta_3 A_{hp,t-1} + \gamma X_{hp,t} + \theta_p + \varepsilon_{hp,t}$$

where most of the variables are as denoted before and p denotes the postcode in which household h lives. The main difference is that owner-occupier housing debt is estimated in the first-stage regression using a dummy variable ($BOOM_{hp}$) as an instrument. The dummy variable takes the value of one if a household purchased their home after the early 2000s housing price boom in the state of purchase and zero otherwise.[14] To control for the location choice of the household in their home purchase decision, we drop the household fixed effects and instead include postcode fixed effects ($\theta_p$) in the model. Otherwise the timing of home purchase (the ‘birth cohort’ of the mortgage) will be absorbed by the household fixed effect. Thus, whilst this approach may help to alleviate endogeneity concerns by controlling for unobserved time-varying factors such as local demand shocks, the resulting estimator will include the effects of household debt on spending both within households (over time) and between households (at a point in time).

## 4.4 Results

### 4.4.1 Baseline models

Table 1 presents the key results from estimating the OLS, FE and IV models for durables, non-durables and total spending (see Appendix F for full table of results).
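Before the results, the following is a minimal illustrative sketch (not the code used in the paper) of how the FE and IV specifications above could be estimated in Python with statsmodels. The data file and variable names (`hilda_panel.csv`, `log_spend`, `log_debt_lag`, `log_income`, `log_home_value_lag`, `boom_dummy`, `hh_id`, `postcode`) are hypothetical placeholders rather than actual HILDA variable names, and the sketch assumes a complete-case panel with only the main controls.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical household-year panel; column names are illustrative only.
df = pd.read_csv("hilda_panel.csv")
cluster = {"groups": df["hh_id"]}  # cluster standard errors by household

# FE model: C(hh_id) adds household dummies, the least-squares dummy variable
# equivalent of the household fixed effect (a within transformation or a
# dedicated panel package would scale better with many households).
fe = smf.ols(
    "log_spend ~ log_debt_lag + log_income + log_home_value_lag + C(hh_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds=cluster)

# IV model, written out as two explicit stages with postcode fixed effects.
# First stage: lagged debt on the post-boom purchase dummy plus controls.
first = smf.ols(
    "log_debt_lag ~ boom_dummy + log_income + log_home_value_lag + C(postcode)",
    data=df,
).fit(cov_type="cluster", cov_kwds=cluster)
df["log_debt_hat"] = first.fittedvalues

# Second stage: spending on instrumented debt. Hand-rolled 2SLS like this
# leaves the standard errors uncorrected for the generated regressor; a
# packaged 2SLS routine would handle that automatically.
second = smf.ols(
    "log_spend ~ log_debt_hat + log_income + log_home_value_lag + C(postcode)",
    data=df,
).fit(cov_type="cluster", cov_kwds=cluster)

print(fe.params["log_debt_lag"])      # debt overhang effect, FE model
print(first.params["boom_dummy"])     # instrument relevance, first stage
print(second.params["log_debt_hat"])  # debt overhang effect, IV model
```

In practice a dedicated panel or IV package (for example PanelOLS and IV2SLS from the linearmodels library) does the same job more efficiently and reports corrected second-stage standard errors.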
Table 1: The Debt Overhang Effect – Baseline Models
By type of spending and model

| | Non-durables 2006–17, OLS | Non-durables 2006–17, FE | Non-durables 2006–17, IV | Durables 2006–10, OLS | Durables 2006–10, FE | Durables 2006–10, IV | Total 2006–10, OLS | Total 2006–10, FE | Total 2006–10, IV |
|---|---|---|---|---|---|---|---|---|---|
| Lagged mortgage debt | >−0.00 (0.77) | −0.01** (0.03) | −0.15*** (<0.00) | −0.07* (0.05) | −0.10* (0.10) | −0.80*** (<0.00) | −0.01 (0.11) | −0.03*** (0.01) | −0.20*** (<0.00) |
| Income | 0.26*** (<0.00) | 0.10*** (<0.00) | 0.28*** (<0.00) | 1.02*** (<0.00) | 0.36** (0.02) | 1.09*** (<0.00) | 0.30*** (<0.00) | 0.09*** (<0.00) | 0.33*** (<0.00) |
| Lagged home value | 0.21*** (<0.00) | 0.17*** (<0.00) | 0.32*** (<0.00) | 0.55*** (<0.00) | 0.28 (0.28) | 1.01*** (<0.00) | 0.26*** (<0.00) | 0.11** (0.04) | 0.38*** (<0.00) |
| First-stage: Boom dummy | | | 0.46*** (<0.00) | | | 0.45*** (<0.00) | | | 0.45*** (<0.00) |
| Household FE | No | Yes | No | No | Yes | No | No | Yes | No |
| Postcode FE | No | No | Yes | No | No | Yes | No | No | Yes |
| Observations | 21,460 | 21,460 | 21,460 | 6,622 | 6,622 | 6,622 | 6,622 | 6,622 | 6,622 |

Notes: The sample excludes non-indebted households in the previous year and the top and bottom 1 per cent of income growth, spending growth and housing price growth; controls include household income, lagged home value, age dummies, education dummies, number of children, number of adults, marital status, unemployment and not in the labour force status; standard errors are clustered by household; *, **, *** represent statistical significance at the 10, 5 and 1 per cent levels, respectively; p-values are in parentheses

Sources: Authors' calculations; HILDA Survey Release 17.0

We find that higher mortgage debt reduces household spending across all specifications. The overall similarity between our FE estimates and the OLS estimates suggests that unobservable time-invariant variables do not play a major role after controlling for a range of household socio-economic characteristics. The FE model specification suggests that the effect of debt on spending is relatively small with a 10 per cent increase in debt reducing households' non-durables, durables and total spending by 0.1, 1.0 and 0.3 per cent, respectively. The stronger response of durables relative to non-durables spending is consistent with the common finding of larger wealth effects for durables spending.

The IV model estimates provide further evidence that the negative debt overhang effect is not driven by unobservable, time-varying factors such as local demand shocks. The first-stage regression estimates indicate that our instrument is relevant and that households that bought after the boom held mortgage debt levels that were about 45 per cent higher than comparable households living in the same area that bought before the boom. The second-stage IV estimates are more economically significant than those in the FE model and suggest that a 10 per cent increase in debt lowers non-durables, durables, and total spending by 1.5, 8.0 and 2.0 per cent, respectively.

While the IV model may help to further remove time-varying confounding factors, it does not account for unobserved household fixed effects. We therefore take the FE model estimates as our benchmark estimates for the remainder of the paper.

In these regressions, we control for household income and the lagged value of gross housing assets. As expected, higher household income and housing prices raise spending.
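Since spending, debt, income and home values enter these models in logs (which is what the reported magnitudes imply), the coefficients in Table 1 can be read as elasticities. As a worked reading of the table rather than an additional result, the FE estimate for non-durables gives

$$\Delta \ln E_{h,t} \approx \beta_1 \, \Delta \ln D_{h,t-1} = -0.01 \times 0.10 = -0.001,$$

that is, a 10 per cent rise in lagged mortgage debt is associated with non-durables spending that is about 0.1 per cent lower; the income and home-value coefficients can be read in the same way.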
In contrast to the estimated effect on debt, the OLS coefficient estimates on income and home value are considerably larger than the FE estimates, suggesting that unobserved time-invariant factors are significant drivers behind the positive relationship between spending and both income and housing prices. For example, it may be the case that impatient households spend more and buy more expensive homes than patient households, and this partly explains the positive link between spending and home values.

We also estimate all three models using three additional debt measures (the debt-to-income, debt-to-assets and debt service-to-income ratios) that are commonly used in the literature. The results are presented in Appendix E. Using the IV model, we find some evidence that higher debt ratios negatively affect household spending, consistent with a number of papers that have found evidence of a significant negative relationship between these debt ratios and spending. In particular, we find that a 10 percentage point increase in the debt-to-assets ratio or debt-servicing ratio significantly reduces total household spending by 0.1 per cent.

### 4.4.2 Gross versus net housing wealth

In the previous regressions, we control for gross housing wealth. Another possibility is to control for net housing wealth (housing equity), equal to the difference between the reported value of the home and any outstanding mortgage debt. This allows us to directly test whether the composition of household balance sheets matters for spending. The results can be found in Table 2.

Overall, our estimates of the effect of debt on household spending are broadly unchanged when we control for households' housing equity instead of the value of their home. Importantly, this implies that households lower their spending when the gross value of both their debt and their assets increases. In other words, we find that a deepening of household balance sheets is associated with less household spending, even if it is not associated with rising net indebtedness. This directly violates conventional consumption theories such as the PIH that assume the composition of a household's balance sheet does not affect consumption (Garriga and Hedlund 2017). Our results suggest a small, and occasionally negative, effect of lagged housing equity on spending. When using contemporaneous housing equity, we recover the expected positive effect.

Table 2: The Debt Overhang Effect – Baseline Models (Housing Equity)
By type of spending and model

| | Non-durables 2006–17, OLS | Non-durables 2006–17, FE | Non-durables 2006–17, IV | Durables 2006–10, OLS | Durables 2006–10, FE | Durables 2006–10, IV | Total 2006–10, OLS | Total 2006–10, FE | Total 2006–10, IV |
|---|---|---|---|---|---|---|---|---|---|
| Lagged mortgage debt | 0.02*** (<0.00) | −0.01 (0.19) | −0.12*** (<0.00) | −0.01 (0.74) | −0.08 (0.17) | −0.75*** (<0.00) | 0.02* (0.06) | −0.03** (0.04) | −0.17*** (<0.00) |
| Income | 0.33*** (<0.00) | 0.12*** (<0.00) | 0.34*** (<0.00) | 1.19*** (<0.00) | 0.38** (0.01) | 1.22*** (<0.00) | 0.38*** (<0.00) | 0.10*** (<0.00) | 0.38*** (<0.00) |
| Lagged housing equity | <0.00*** (0.01) | >−0.00 (0.85) | −0.01*** (<0.00) | 0.01 (0.47) | 0.01 (0.59) | −0.05*** (<0.00) | 0.01** (0.01) | <0.00 (0.31) | −0.01*** (<0.00) |
| First-stage: Boom dummy | | | 0.47*** (<0.00) | | | 0.44*** (<0.00) | | | 0.44*** (<0.00) |
| Household FE | No | Yes | No | No | Yes | No | No | Yes | No |
| Postcode FE | No | No | Yes | No | No | Yes | No | No | Yes |
| Observations | 21,460 | 21,460 | 21,460 | 6,622 | 6,622 | 6,622 | 6,622 | 6,622 | 6,622 |

Note: See notes to Table 1

Sources: Authors' calculations; HILDA Survey Release 17.0

## Footnotes

[10] Households in Australia can use their mortgage debt to buy a consumption item (e.g. car, holiday) through their offset or redraw facilities; see Appendix B for more information.

[11] See Table D1 for definitions of the variables used in the regression models. We identify the household reference person as the individual with the longest household membership, the highest personal income, or the highest age, in that order.

[12] We assume that households make mortgage prepayments at the same speed across the age of the loan.

[13] The jump in average household debt levels is unique to the reference years chosen around state housing price booms. We find little difference in the average debt levels when using alternative reference years.

[14] Purchased after 2001 in New South Wales, Victoria and Queensland, and after 2002 in all other states and territories.
2022-09-24T20:00:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3163534700870514, "perplexity": 3706.5868234740524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00689.warc.gz"}
http://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/empqua.htm
# EMPIRICAL QUANTILE FUNCTION

Name: EMPIRICAL QUANTILE FUNCTION (LET)

Type: Let Subcommand

Purpose: Compute the empirical quantile function.

Description: The quantile function is the inverse of the cumulative distribution function, F,

$$Q(u) = F^{-1}(u) \hspace{0.2in} 0 < u < 1$$

Given a set of ordered data, $$x_{(1)} \le x_{(2)} \le \ldots \le x_{(n)}$$, an empirical estimate of the quantile function can be obtained from the following piecewise linear function

$$\hat{Q}(u) = (nu - j + \tfrac{1}{2})\, x_{(j+1)} + (j + \tfrac{1}{2} - nu)\, x_{(j)}, \hspace{0.2in} \frac{2j - 1}{2n} \le u \le \frac{2j + 1}{2n}$$

This will be computed for a specified number of equi-spaced points between the lower and upper limits. Dataplot will use the number of points in the sample if this is greater than 1,000. Otherwise 1,000 points will be used.

Syntax:

    LET <y> <u> = EMPIRICAL QUANTILE FUNCTION <x> <SUBSET/EXCEPT/FOR qualification>

where <x> is the response variable; <y> is a variable containing the empirical quantile function; <u> is a variable containing the values where the empirical quantile function is computed; and where the <SUBSET/EXCEPT/FOR qualification> is optional.

Examples:

    LET Y U = EMPIRICAL QUANTILE FUNCTION X
    LET Y U = EMPIRICAL QUANTILE FUNCTION X SUBSET X > 0

Default: None

Synonyms: None

Related Commands:

    EMPIRICAL QUANTILE PLOT = Generates an empirical quantile plot.
    EMPIRICAL CDF PLOT = Generates an empirical CDF plot.
    KAPLAN MEIER PLOT = Generates a Kaplan Meier plot.
    PROBABILITY PLOT = Generates a probability plot.
    INFORMATIVE QUANTILE FUNCTION = Compute the informative quantile function.

References:

    "MIL-HDBK-17-1F Volume 1: Guidelines for Characterization of Structural Materials", Department of Defense, pp. 8-36, 8-37, 2002.
    Parzen (1983), "Informative Quantile Functions and Identification of Probability Distribution Types", Technical Report No. A-26, Texas A&M University.

Applications: Distributional Analysis

Implementation Date: 2017/02

Program:

    . Step 1: Define some default plot control features
    .
    title offset 2
    title case asis
    case asis
    label case asis
    line color blue red
    multiplot scale factor 2
    multiplot corner coordinates 5 5 95 95
    .
    . Step 2: Create 50, 100, 200, and 1000 normal random numbers and
    .         compute the empirical quantile function
    .
    let nv = data 50 100 200 1000
    let p = sequence 0.01 0.01 .99
    let y2 = norppf(p)
    .
    . Step 3: Loop through the four cases and compute and plot the
    .         empirical quantile function with overlaid NORPPF
    .
    multiplot 2 2
    loop for k = 1 1 4
        let n = nv(k)
        let x = norm rand numb for i = 1 1 n
        let y u = empirical quantile function x
        title N: ^n
        plot y u and
        plot y2 p
    end of loop
    end of multiplot
    .
    justification center
    move 50 97
    text Empirical Quantile Functions (blue) Overlaid with ...
    NORPPF (red) for Normal Random Numbers
    move 50 5
    text u
    direction vertical
    move 5 50
    text Q(u)

NIST is an agency of the U.S. Commerce Department.

Date created: 07/20/2017
Last updated: 07/20/2017
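For readers working outside Dataplot, here is a small NumPy sketch (not part of Dataplot) of the same piecewise linear estimate. The function name and the 1,000-point default mirror the description above but are otherwise illustrative.

```python
import numpy as np

def empirical_quantile_function(x, num_points=None):
    """Piecewise linear empirical quantile estimate Q(u) described above.

    The sorted data are placed at the plotting positions u_j = (2j - 1)/(2n)
    and linearly interpolated; beyond the extreme positions the estimate is
    held constant at the smallest/largest observation.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    m = n if n > 1000 else 1000            # mirrors the Dataplot default
    if num_points is not None:
        m = num_points
    u = (np.arange(1, m + 1) - 0.5) / m    # equi-spaced points in (0, 1)
    positions = (2.0 * np.arange(1, n + 1) - 1.0) / (2.0 * n)
    q = np.interp(u, positions, x)         # linear between order statistics
    return q, u

# Example: empirical quantile function of 200 standard normal draws,
# which can be overlaid against the exact normal quantile function.
rng = np.random.default_rng(1)
q, u = empirical_quantile_function(rng.standard_normal(200))
```

Using np.interp keeps the edge behaviour simple: for u below 1/(2n) or above (2n − 1)/(2n) the estimate is clamped to the extreme order statistics.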
2017-10-24T02:27:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5834603309631348, "perplexity": 7327.1706070325745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827853.86/warc/CC-MAIN-20171024014937-20171024034937-00854.warc.gz"}
https://zbmath.org/authors/?q=ai%3Abrenner.susanne-c
## Brenner, Susanne Cecelia Compute Distance To: Author ID: brenner.susanne-c Published as: Brenner, Susanne C.; Brenner, S. C.; Brenner, Susanne; Brenner, Susanne Cecelia more...less Homepage: https://www.math.lsu.edu/~brenner/ External Links: MGP · Wikidata · GND · IdRef Documents Indexed: 137 Publications since 1989, including 3 Books 8 Contributions as Editor Co-Authors: 58 Co-Authors with 87 Joint Publications 2,435 Co-Co-Authors all top 5 ### Co-Authors 38 single-authored 8 Gudi, Thirupathi 7 Cui, Jintao 6 Davis, Christopher B. 6 Gedicke, Joscha 5 Li, Fengyan 5 Neilan, Michael 5 Owens, Luke 4 Barker, Andrew T. 4 Park, Eunhee 4 Wang, Kening 3 Oh, Duk-Soon 3 Scott, Larkin Ridgway 3 Tan, Zhiyu 3 Zhang, Hongchao 2 Carstensen, Carsten 2 Diegel, Amanda E. 2 Garay, José C. 2 Li, Hengguang 2 Liu, Sijing 2 Monk, Peter B. 2 Oh, Minah 2 Porwal, Kamana 2 Sun, Jiguang 2 Szyld, Daniel B. 2 Wollner, Winnifried 2 Zhao, Jie 1 Antonietti, Paola Francesca 1 Ayuso de Dios, Blanca 1 Ben Belgacem, Faker 1 Bjørstad, Petter Erling 1 Bunde, Armin 1 Cai, Xiao-Chuan 1 Çeşmelioğlu, Ayçıl 1 Demkowicz, Leszek F. 1 Gander, Martin Jakob 1 Govindan, R. B. 1 Gu, Shiyuan 1 Guan, Qingguang 1 Halpern, Lawrence 1 Havlin, Shlomo 1 He, Qingmi 1 Hoppe, Ronald H. W. 1 Kawecki, Ellya L. 1 Kim, Hyea Hyun 1 Klawonn, Axel 1 Kornhuber, Ralf 1 Leykekhman, Dmitriy 1 Pollock, Sara 1 Rahman, Talal 1 Reiser, Armin 1 Rivière, Beatrice M. 1 Sarkis, Marcus V. 1 Schedensack, Mira 1 Schellnhuber, Hans-Joachim 1 Sharma, Natasha S. 1 Shparlinski, Igor E. 1 Shu, Chi-Wang 1 Vexler, Boris 1 Vjushin, Dmitry 1 Wang, Zhuo 1 Widlund, Olof B. 1 Wriggers, Peter 1 Xu, Yuesheng all top 5 ### Serials 16 Mathematics of Computation 13 Numerische Mathematik 12 SIAM Journal on Numerical Analysis 11 Journal of Scientific Computing 10 ETNA. Electronic Transactions on Numerical Analysis 7 Journal of Computational and Applied Mathematics 6 Computational Methods in Applied Mathematics 4 Computer Methods in Applied Mechanics and Engineering 4 Oberwolfach Reports 3 Numerical Functional Analysis and Optimization 3 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 3 Numerical Linear Algebra with Applications 3 Texts in Applied Mathematics 2 IMA Journal of Numerical Analysis 2 Calcolo 2 Applied Numerical Mathematics 2 SIAM Journal on Scientific Computing 2 Results in Applied Mathematics 1 Computers & Mathematics with Applications 1 Houston Journal of Mathematics 1 Mathematical Methods in the Applied Sciences 1 Physica A 1 BIT 1 SIAM Journal on Control and Optimization 1 RAIRO. Modélisation Mathématique et Analyse Numérique 1 Numerical Methods for Partial Differential Equations 1 Applied Mathematics Letters 1 East-West Journal of Numerical Mathematics 1 Advances in Computational Mathematics 1 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 1 Wuhan University Journal of Natural Sciences (WUJNS) 1 Optimization and Engineering 1 The ANZIAM Journal 1 Journal of Numerical Mathematics 1 ANACM. Applied Numerical Analysis and Computational Mathematics 1 International Journal of Numerical Analysis and Modeling 1 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 1 Contemporary Mathematics 1 The IMA Volumes in Mathematics and its Applications 1 Lecture Notes in Computational Science and Engineering 1 JNAIAM. 
Journal of Numerical Analysis, Industrial and Applied Mathematics all top 5 ### Fields 137 Numerical analysis (65-XX) 66 Partial differential equations (35-XX) 26 Mechanics of deformable solids (74-XX) 19 Calculus of variations and optimal control; optimization (49-XX) 11 Optics, electromagnetic theory (78-XX) 9 Fluid mechanics (76-XX) 7 General and overarching topics; collections (00-XX) 4 Functional analysis (46-XX) 3 Potential theory (31-XX) 3 Global analysis, analysis on manifolds (58-XX) 2 History and biography (01-XX) 2 Operations research, mathematical programming (90-XX) 1 Number theory (11-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Associative rings and algebras (16-XX) 1 Real functions (26-XX) 1 Approximations and expansions (41-XX) 1 Geophysics (86-XX) ### Citations contained in zbMATH Open 126 Publications have been cited 4,943 times in 3,571 Documents Cited by Year The mathematical theory of finite element methods. 3rd ed. Zbl 1135.65042 Brenner, Susanne C.; Scott, L. Ridgway 2008 The mathematical theory of finite element methods. Zbl 0804.65101 Brenner, Susanne C.; Scott, L. Ridgway 1994 The mathematical theory of finite element methods. 2nd ed. Zbl 1012.65115 Brenner, Susanne C.; Scott, L. Ridgway 2002 Poincaré–Friedrichs inequalities for piecewise $$H^{1}$$ functions. Zbl 1045.65100 Brenner, Susanne C. 2003 $$C^0$$ interior penalty methods for fourth order elliptic boundary value problems on polygonal domains. Zbl 1071.65151 Brenner, Susanne C.; Sung, Li-Yeng 2005 Korn’s inequalities for piecewise $$H^1$$ vector fields. Zbl 1055.65118 Brenner, Susanne C. 2004 Linear finite element methods for planar linear elasticity. Zbl 0766.73060 Brenner, Susanne C.; Sung, Li-Yeng 1992 Two-level additive Schwarz preconditioners for nonconforming finite element methods. Zbl 0859.65124 Brenner, Susanne C. 1996 Virtual element methods on meshes with small edges or faces. Zbl 1393.65049 Brenner, Susanne C.; Sung, Li-Yeng 2018 BDDC and FETI-DP without matrices or vectors. Zbl 1173.65363 Brenner, Susanne C.; Sung, Li-Yeng 2007 $$\mathcal{C}^{0}$$ penalty methods for the fully nonlinear Monge-Ampère equation. Zbl 1228.65220 Brenner, Susanne C.; Gudi, Thirupathi; Neilan, Michael; Sung, Li-Yeng 2011 Some estimates for virtual element methods. Zbl 1434.65237 Brenner, Susanne C.; Guan, Qingguang; Sung, Li-Yeng 2017 A weakly over-penalized symmetric interior penalty method. Zbl 1171.65077 Brenner, Susanne C.; Owens, Luke; Sung, Li-Yeng 2008 An a posteriori error estimator for a quadratic $$C^{0}$$-interior penalty method for the biharmonic problem. Zbl 1201.65197 Brenner, Susanne C.; Gudi, Thirupathi; Sung, Li-Yeng 2010 Convergence of nonconforming multigrid methods without full elliptic regularity. Zbl 0912.65099 Brenner, Susanne C. 1999 An optimal-order multigrid method for P1 nonconforming finite elements. Zbl 0664.65103 Brenner, Susanne C. 1989 A two-level additive Schwarz preconditioner for nonconforming plate elements. Zbl 0855.73071 Brenner, Susanne C. 1996 A locally divergence-free interior penalty method for two-dimensional curl-curl problems. Zbl 1168.65068 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2008 A multigrid algorithm for the lowest-order Raviart-Thomas mixed triangular finite element method. Zbl 0759.65080 Brenner, Susanne C. 1992 Multigrid methods for the computation of singular solutions and stress intensity factors. I: Corner singularities. Zbl 1043.65136 Brenner, Susanne C. 
1999 A locally divergence-free nonconforming finite element method for the time-harmonic Maxwell equations. Zbl 1126.78017 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2007 A quadratic $$C^0$$ interior penalty method for the displacement obstacle problem of clamped Kirchhoff plates. Zbl 1263.65110 Brenner, Susanne C.; Sung, Li-Yeng; Zhang, Hongchao; Zhang, Yi 2012 Poincaré–Friedrichs inequalities for piecewise $$H^2$$ functions. Zbl 1072.65147 Brenner, Susanne C.; Wang, Kening; Zhao, Jie 2004 An optimal-order nonconforming multigrid method for the biharmonic equation. Zbl 0679.65083 Brenner, Susanne C. 1989 Convergence of multigrid algorithms for interior penalty methods. Zbl 1073.65117 Brenner, Susanne C.; Zhao, Jie 2005 A $$\mathcal C^0$$ interior penalty method for a fourth order elliptic singular perturbation problem. Zbl 1225.65108 Brenner, Susanne C.; Neilan, Michael 2011 Forty years of the Crouzeix-Raviart element. Zbl 1310.65142 Brenner, Susanne C. 2015 A Morley finite element method for the displacement obstacle problem of clamped Kirchhoff plates. Zbl 1290.65108 Brenner, Susanne C.; Sung, Li-yeng; Zhang, Hongchao; Zhang, Yi 2013 $$C^{0}$$ interior penalty methods. Zbl 1248.65120 Brenner, Susanne C. 2012 Finite element methods for the displacement obstacle problem of clamped plates. Zbl 1250.74023 Brenner, Susanne C.; Sung, Li-Yeng; Zhang, Yi 2012 A nonconforming finite element method for a two-dimensional curl-curl and grad-div problem. Zbl 1166.78006 Brenner, S. C.; Cui, J.; Li, F.; Sung, L.-Y. 2008 $$C^0$$ Interior penalty Galerkin method for biharmonic eigenvalue problems. Zbl 1349.65439 Brenner, Susanne C.; Monk, Peter; Sun, Jiguang 2015 A nonconforming mixed multigrid method for the pure displacement problem in planar linear elasticity. Zbl 0767.73068 Brenner, Susanne C. 1993 The condition number of the Schur complement in domain decomposition. Zbl 0936.65141 Brenner, Susanne C. 1999 Multigrid methods for the symmetric interior penalty method on graded meshes. Zbl 1224.65288 Brenner, S. C.; Cui, J.; Sung, L.-Y. 2009 Multigrid methods for parameter dependent problems. Zbl 0848.73062 Brenner, Susanne C. 1996 Hodge decomposition methods for a quad-curl problem on planar domains. Zbl 1398.65290 Brenner, Susanne C.; Sun, Jiguang; Sung, Li-yeng 2017 A $$C^0$$ interior penalty method for a von Kármán plate. Zbl 1457.65181 Brenner, Susanne C.; Neilan, Michael; Reiser, Armin; Sung, Li-Yeng 2017 Hodge decomposition for divergence-free vector fields and two-dimensional Maxwell’s equations. Zbl 1274.78089 Brenner, S. C.; Cui, J.; Nan, Z.; Sung, L.-Y. 2012 Convergence of the multigrid $$V$$-cycle algorithm for second-order boundary value problems without full elliptic regularity. Zbl 0990.65121 Brenner, Susanne C. 2002 An adaptive $$P_1$$ finite element method for two-dimensional transverse magnetic time harmonic Maxwell’s equations with general material properties and general boundary conditions. Zbl 1373.78416 Brenner, S. C.; Gedicke, J.; Sung, L.-Y. 2016 Finite element approximations of the three dimensional Monge-Ampère equation. Zbl 1272.65088 Brenner, Susanne Cecelia; Neilan, Michael 2012 A nonconforming multigrid method for the stationary Stokes equations. Zbl 0705.76027 Brenner, Susanne C. 1990 Convergence of nonconforming $$V$$-cycle and $$F$$-cycle multigrid algorithms for second order elliptic boundary value problems. Zbl 1052.65102 Brenner, Susanne C. 2004 A quadratic $$C^0$$ interior penalty method for an elliptic optimal control problem with state constraints. 
Zbl 1282.65074 Brenner, S. C.; Sung, L.-Y.; Zhang, Y. 2014 A new convergence analysis of finite element methods for elliptic distributed optimal control problems with pointwise state constraints. Zbl 1370.49006 Brenner, Susanne C.; Sung, Li-yeng 2017 Two-level additive Schwarz preconditioners for $$C^0$$ interior penalty methods. Zbl 1088.65108 Brenner, Susanne C.; Wang, Kening 2005 A partition of unity method for a class of fourth order elliptic variational inequalities. Zbl 1425.65070 Brenner, Susanne C.; Davis, Christopher B.; Sung, Li-yeng 2014 A $$W$$-cycle algorithm for a weakly over-penalized interior penalty method. Zbl 1173.65368 Brenner, Susanne C.; Owens, Luke 2007 Balancing domain decomposition for nonconforming plate elements. Zbl 0937.74060 Brenner, Susanne C.; Sung, Li-yeng 1999 A quadratic $$C^0$$ interior penalty method for linear fourth order boundary value problems with boundary conditions of the Cahn-Hilliard type. Zbl 1256.65101 Brenner, Susanne C.; Gu, Shiyuan; Gudi, Thirupathi; Sung, Li-Yeng 2012 Multigrid algorithms for symmetric discontinuous Galerkin methods on graded meshes. Zbl 1229.65224 Brenner, S. C.; Cui, J.; Gudi, T.; Sung, L.-Y. 2011 Multigrid algorithms for $$C^0$$ interior penalty methods. Zbl 1114.65151 Brenner, Susanne C.; Sung, Li-yeng 2006 A weakly over-penalized symmetric interior penalty method for the biharmonic problem. Zbl 1205.65311 Brenner, Susanne C.; Gudi, Thirupathi; Sung, Li-Yeng 2010 A weakly over-penalized non-symmetric interior penalty method. Zbl 1145.65095 Brenner, Susanne C.; Owens, Luke 2007 Hodge decomposition for two-dimensional time-harmonic Maxwell’s equations: impedance boundary condition. Zbl 1361.78007 Brenner, S. C.; Gedicke, J.; Sung, L.-Y. 2017 Higher order weakly over-penalized symmetric interior penalty methods. Zbl 1273.65174 Brenner, Susanne C.; Owens, Luke; Sung, Li-Yeng 2012 A nonconforming mixed multigrid method for the pure traction problem in planar linear elasticity. Zbl 0809.73064 Brenner, Susanne C. 1994 Two-level additive Schwarz preconditioners for nonconforming finite elements. Zbl 0817.65107 Brenner, Susanne C. 1994 Multigrid methods for the computation of singular solutions and stress intensity factors. II: Crack singularities. Zbl 0890.73060 Brenner, S. C.; Sung, L.-Y. 1997 Schwarz methods for a preconditioned WOPSIP method for elliptic problems. Zbl 1284.65162 Antonietti, Paola F.; Ayuso de Dios, Blanca; Brenner, Susanne C.; Sung, Li-yeng 2012 A posteriori error control for a weakly over-penalized symmetric interior penalty method. Zbl 1203.65230 Brenner, Susanne C.; Gudi, Thirupathi; Sung, Li-Yeng 2009 An intrinsically parallel finite element method. Zbl 1203.65244 Brenner, S. C.; Gudi, T.; Owens, L.; Sung, L.-Y. 2010 Some nonstandard finite element estimates with applications to $$3D$$ Poisson and Signorini problems. Zbl 0981.65131 Ben Belgacem, Faker; Brenner, Susanne C. 2001 A $$C^0$$ interior penalty method for elliptic distributed optimal control problems in three dimensions with pointwise state constraints. Zbl 1384.65038 Brenner, Susanne C.; Oh, Minah; Pollock, Sara; Porwal, Kamana; Schedensack, Mira; Sharma, Natasha S. 2016 Two-level additive Schwarz preconditioners for a weakly over-penalized symmetric interior penalty method. Zbl 1230.65124 Barker, A. T.; Brenner, S. C.; Park, E.-H.; Sung, L.-Y. 2011 Isoparametric $$C ^{0}$$ interior penalty methods for plate bending problems on smooth domains. 
Zbl 1341.74147 Brenner, Susanne C.; Neilan, Michael; Sung, Li-Yeng 2013 A balancing domain decomposition by constraints preconditioner for a weakly over-penalized symmetric interior penalty method. Zbl 1313.65049 Brenner, Susanne C.; Park, Eun-Hee; Sung, Li-Yeng 2013 Multigrid methods for saddle point problems: Stokes and Lamé systems. Zbl 1298.76067 Brenner, Susanne C.; Li, Hengguang; Sung, Li-Yeng 2014 A nonconforming penalty method for a two-dimensional curl-curl problem. Zbl 1168.78315 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2009 $$C^0$$ interior penalty methods for an elliptic distributed optimal control problem on nonconvex polygonal domains with pointwise state constraints. Zbl 1396.49003 Brenner, Susanne C.; Gedicke, Joscha; Sung, Li-yeng 2018 A Morley finite element method for an elliptic distributed optimal control problem with pointwise state and control constraints. Zbl 1412.49025 Brenner, Susanne C.; Gudi, Thirupathi; Porwal, Kamana; Sung, Li-Yeng 2018 Nonconforming Maxwell eigensolvers. Zbl 1203.65236 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2009 $$C^0$$ interior penalty methods for an elliptic state-constrained optimal control problem with Neumann boundary condition. Zbl 07006472 Brenner, Susanne C.; Sung, Li-yeng; Zhang, Yi 2019 Discrete Sobolev and Poincaré inequalities for piecewise polynomial functions. Zbl 1065.65128 Brenner, Susanne C. 2004 Analysis of two-dimensional FETI-DP preconditioners by the standard additive Schwarz framework. Zbl 1065.65136 Brenner, Susanne 2003 An a posteriori analysis of $$C^0$$ interior penalty methods for the obstacle problem of clamped Kirchhoff plates. Zbl 1381.74129 Brenner, Susanne C.; Gedicke, Joscha; Sung, Li-Yeng; Zhang, Yi 2017 Virtual enriching operators. Zbl 1471.65192 Brenner, Susanne C.; Sung, Li-Yeng 2019 Overcoming corner singularities using multigrid methods. Zbl 0914.65117 Brenner, Susanne C. 1998 Adaptive $$C^0$$ interior penalty methods for Hamilton-Jacobi-Bellman equations with cordes coefficients. Zbl 1458.65145 Brenner, Susanne C.; Kawecki, Ellya L. 2021 An adaptive $$P_1$$ finite element method for two-dimensional Maxwell’s equations. Zbl 1266.78027 Brenner, S. C.; Gedicke, J.; Sung, L.-Y. 2013 Post-processing procedures for an elliptic distributed optimal control problem with pointwise state constraints. Zbl 1320.65169 Brenner, Susanne C.; Sung, Li-Yeng; Zhang, Yi 2015 Overlapping Schwarz domain decomposition preconditioners for the local discontinuous Galerkin method for elliptic problems. Zbl 1232.65164 Barker, A. T.; Brenner, S. C.; Sung, L.-Y. 2011 Multigrid methods based on Hodge decomposition for a quad-curl problem. Zbl 1420.65119 Brenner, Susanne C.; Cui, Jintao; Sung, Li-yeng 2019 Lower bounds for two-level additive Schwarz preconditioners with small overlap. Zbl 0959.65062 Brenner, Susanne C. 2000 A mixed finite element method for the Stokes equations based on a weakly over-penalized symmetric interior penalty approach. Zbl 1306.65276 Barker, Andrew T.; Brenner, Susanne C. 2014 A partition of unity method for the displacement obstacle problem of clamped Kirchhoff plates. Zbl 1293.74424 Brenner, Susanne C.; Davis, Christopher B.; Sung, Li-yeng 2014 A robust solver for a mixed finite element method for the Cahn-Hilliard equation. Zbl 1407.65179 Brenner, Susanne C.; Diegel, Amanda E.; Sung, Li-Yeng 2018 $$P_1$$ finite element methods for an elliptic state-constrained distributed optimal control problem with Neumann boundary conditions. Zbl 1443.49007 Brenner, S. C.; Oh, M.; Sung, L.-Y. 
2020 Preconditioning complicated finite elements by simple finite elements. Zbl 0857.65114 Brenner, Susanne C. 1996 Multigrid methods for the computation of singular solutions and stress intensity factors. III: Interface singularities. Zbl 1054.74047 Brenner, Susanne C.; Sung, Li-yeng 2003 A one-level additive Schwarz preconditioner for a discontinuous Petrov-Galerkin method. Zbl 1382.65432 Barker, Andrew T.; Brenner, Susanne C.; Park, Eun-Hee; Sung, Li-Yeng 2014 A quadratic nonconforming vector finite element for $$H(\text{curl}; \varOmega)\cap H(\text{div}; \varOmega)$$. Zbl 1170.65091 Brenner, Susanne C.; Sung, Li-Yeng 2009 Multigrid methods for saddle point problems: Oseen system. Zbl 1402.65176 Brenner, Susanne C.; Li, Hengguang; Sung, Li-yeng 2017 A two-level additive Schwarz preconditioner for macro-element approximations of the plate bending problem. Zbl 0838.73069 Brenner, Susanne C. 1995 An iterative substructuring algorithm for a $$C^{0}$$ interior penalty method. Zbl 1321.65181 Brenner, Susanne C.; Wang, Kening 2012 A nonconforming finite element method for an acoustic fluid-structure interaction problem. Zbl 1456.65148 Brenner, Susanne C.; Çeşmelioǧlu, Ayçıl; Cui, Jintao; Sung, Li-Yeng 2018 Lower bounds for nonoverlapping domain decomposition preconditioners in two dimensions. Zbl 0974.65114 Brenner, Susanne C.; Sung, Li-Yeng 2000 Smoothers, mesh dependent norms, interpolation and multigrid. Zbl 1022.65134 Brenner, Susanne C. 2002 Lower bounds for three-dimensional nonoverlapping domain decomposition algorithms. Zbl 1054.65043 Brenner, Susanne C.; He, Qingmi 2003 Adaptive $$C^0$$ interior penalty methods for Hamilton-Jacobi-Bellman equations with cordes coefficients. Zbl 1458.65145 Brenner, Susanne C.; Kawecki, Ellya L. 2021 Additive Schwarz preconditioners for a localized orthogonal decomposition method. Zbl 1473.65337 Brenner, Susanne C.; Garay, José C.; Sung, Li-Yeng 2021 A $$C^1$$ virtual element method for an elliptic distributed optimal control problem with pointwise state constraints. Zbl 1478.65115 Brenner, Susanne C.; Sung, Li-Yeng; Tan, Zhiyu 2021 $$P_1$$ finite element methods for an elliptic state-constrained distributed optimal control problem with Neumann boundary conditions. Zbl 1443.49007 Brenner, S. C.; Oh, M.; Sung, L.-Y. 2020 A robust solver for a second order mixed finite element method for the Cahn-Hilliard equation. Zbl 1478.65080 Brenner, Susanne C.; Diegel, Amanda E.; Sung, Li-Yeng 2020 $$P_1$$ finite element methods for an elliptic optimal control problem with pointwise state constraints. Zbl 1464.65061 Brenner, Susanne C.; Sung, Li-yeng; Gedicke, Joscha 2020 A cubic $$C^{\mathbf{0}}$$ interior penalty method for elliptic distributed optimal control problems with pointwise state and control constraints. Zbl 1464.49021 Brenner, Susanne C.; Sung, Li-yeng; Tan, Zhiyu 2020 Finite element methods for elliptic distributed optimal control problems with pointwise state constraints (survey). Zbl 1440.49037 Brenner, Susanne C. 2020 A one dimensional elliptic distributed optimal control problem with pointwise derivative constraints. Zbl 1447.49001 Brenner, Susanne C.; Sung, Li-yeng; Wollner, Winnifried 2020 Multigrid methods for saddle point problems: optimality systems. Zbl 1433.49003 Brenner, Susanne C.; Liu, Sijing; Sung, Li-yeng 2020 $$C^0$$ interior penalty methods for an elliptic state-constrained optimal control problem with Neumann boundary condition. Zbl 07006472 Brenner, Susanne C.; Sung, Li-yeng; Zhang, Yi 2019 Virtual enriching operators. 
Zbl 1471.65192 Brenner, Susanne C.; Sung, Li-Yeng 2019 Multigrid methods based on Hodge decomposition for a quad-curl problem. Zbl 1420.65119 Brenner, Susanne C.; Cui, Jintao; Sung, Li-yeng 2019 Virtual element methods on meshes with small edges or faces. Zbl 1393.65049 Brenner, Susanne C.; Sung, Li-Yeng 2018 $$C^0$$ interior penalty methods for an elliptic distributed optimal control problem on nonconvex polygonal domains with pointwise state constraints. Zbl 1396.49003 Brenner, Susanne C.; Gedicke, Joscha; Sung, Li-yeng 2018 A Morley finite element method for an elliptic distributed optimal control problem with pointwise state and control constraints. Zbl 1412.49025 Brenner, Susanne C.; Gudi, Thirupathi; Porwal, Kamana; Sung, Li-Yeng 2018 A robust solver for a mixed finite element method for the Cahn-Hilliard equation. Zbl 1407.65179 Brenner, Susanne C.; Diegel, Amanda E.; Sung, Li-Yeng 2018 A nonconforming finite element method for an acoustic fluid-structure interaction problem. Zbl 1456.65148 Brenner, Susanne C.; Çeşmelioǧlu, Ayçıl; Cui, Jintao; Sung, Li-Yeng 2018 Multigrid methods for saddle point problems: Darcy systems. Zbl 1408.76361 Brenner, Susanne C.; Oh, Duk-Soon; Sung, Li-Yeng 2018 Multigrid methods for $$H(\text{div})$$ in three dimensions with nonoverlapping domain decomposition smoothers. Zbl 06987013 Brenner, Susanne C.; Oh, Duk-Soon 2018 Additive Schwarz preconditioners for the obstacle problem of clamped Kirchhoff plates. Zbl 1408.65084 Brenner, Susanne C.; Davis, Christopher B.; Sung, Li-Yeng 2018 Some estimates for virtual element methods. Zbl 1434.65237 Brenner, Susanne C.; Guan, Qingguang; Sung, Li-Yeng 2017 Hodge decomposition methods for a quad-curl problem on planar domains. Zbl 1398.65290 Brenner, Susanne C.; Sun, Jiguang; Sung, Li-yeng 2017 A $$C^0$$ interior penalty method for a von Kármán plate. Zbl 1457.65181 Brenner, Susanne C.; Neilan, Michael; Reiser, Armin; Sung, Li-Yeng 2017 A new convergence analysis of finite element methods for elliptic distributed optimal control problems with pointwise state constraints. Zbl 1370.49006 Brenner, Susanne C.; Sung, Li-yeng 2017 Hodge decomposition for two-dimensional time-harmonic Maxwell’s equations: impedance boundary condition. Zbl 1361.78007 Brenner, S. C.; Gedicke, J.; Sung, L.-Y. 2017 An a posteriori analysis of $$C^0$$ interior penalty methods for the obstacle problem of clamped Kirchhoff plates. Zbl 1381.74129 Brenner, Susanne C.; Gedicke, Joscha; Sung, Li-Yeng; Zhang, Yi 2017 Multigrid methods for saddle point problems: Oseen system. Zbl 1402.65176 Brenner, Susanne C.; Li, Hengguang; Sung, Li-yeng 2017 A two-level additive Schwarz domain decomposition preconditioner for a flat-top partition of unity method. Zbl 1404.65299 Brenner, Susanne C.; Davis, Christopher B.; Sung, Li-Yeng 2017 A finite element method for the one-dimensional prescribed curvature problem. Zbl 1380.65361 Brenner, Susanne C.; Sung, Li-Yeng; Wang, Zhuo; Xu, Yuesheng 2017 A BDDC preconditioner for a symmetric interior penalty Galerkin method. Zbl 1368.65230 Brenner, Susanne C.; Park, Eun-Hee; Sung, Li-Yeng 2017 An adaptive $$P_1$$ finite element method for two-dimensional transverse magnetic time harmonic Maxwell’s equations with general material properties and general boundary conditions. Zbl 1373.78416 Brenner, S. C.; Gedicke, J.; Sung, L.-Y. 2016 A $$C^0$$ interior penalty method for elliptic distributed optimal control problems in three dimensions with pointwise state constraints. 
Zbl 1384.65038 Brenner, Susanne C.; Oh, Minah; Pollock, Sara; Porwal, Kamana; Schedensack, Mira; Sharma, Natasha S. 2016 Topics in numerical partial differential equations and scientific computing. Based on the presentations at the 2nd IMA’s Women in Applied Mathematics, WhAM!, research collaboration workshop, Minneapolis, MN, USA, August 12–15, 2014. Zbl 1353.65001 2016 Forty years of the Crouzeix-Raviart element. Zbl 1310.65142 Brenner, Susanne C. 2015 $$C^0$$ Interior penalty Galerkin method for biharmonic eigenvalue problems. Zbl 1349.65439 Brenner, Susanne C.; Monk, Peter; Sun, Jiguang 2015 Post-processing procedures for an elliptic distributed optimal control problem with pointwise state constraints. Zbl 1320.65169 Brenner, Susanne C.; Sung, Li-Yeng; Zhang, Yi 2015 A partition of unity method for the obstacle problem of simply supported Kirchhoff plates. Zbl 1342.74177 Brenner, Susanne C.; Davis, Christopher B.; Sung, Li-yeng 2015 Piecewise $$\mathrm{H}^1$$ functions and vector fields associated with meshes generated by independent refinements. Zbl 1311.65143 Brenner, Susanne C.; Sung, Li-Yeng 2015 A quadratic $$C^0$$ interior penalty method for an elliptic optimal control problem with state constraints. Zbl 1282.65074 Brenner, S. C.; Sung, L.-Y.; Zhang, Y. 2014 A partition of unity method for a class of fourth order elliptic variational inequalities. Zbl 1425.65070 Brenner, Susanne C.; Davis, Christopher B.; Sung, Li-yeng 2014 Multigrid methods for saddle point problems: Stokes and Lamé systems. Zbl 1298.76067 Brenner, Susanne C.; Li, Hengguang; Sung, Li-Yeng 2014 A mixed finite element method for the Stokes equations based on a weakly over-penalized symmetric interior penalty approach. Zbl 1306.65276 Barker, Andrew T.; Brenner, Susanne C. 2014 A partition of unity method for the displacement obstacle problem of clamped Kirchhoff plates. Zbl 1293.74424 Brenner, Susanne C.; Davis, Christopher B.; Sung, Li-yeng 2014 A one-level additive Schwarz preconditioner for a discontinuous Petrov-Galerkin method. Zbl 1382.65432 Barker, Andrew T.; Brenner, Susanne C.; Park, Eun-Hee; Sung, Li-Yeng 2014 A Morley finite element method for the displacement obstacle problem of clamped Kirchhoff plates. Zbl 1290.65108 Brenner, Susanne C.; Sung, Li-yeng; Zhang, Hongchao; Zhang, Yi 2013 Isoparametric $$C ^{0}$$ interior penalty methods for plate bending problems on smooth domains. Zbl 1341.74147 Brenner, Susanne C.; Neilan, Michael; Sung, Li-Yeng 2013 A balancing domain decomposition by constraints preconditioner for a weakly over-penalized symmetric interior penalty method. Zbl 1313.65049 Brenner, Susanne C.; Park, Eun-Hee; Sung, Li-Yeng 2013 An adaptive $$P_1$$ finite element method for two-dimensional Maxwell’s equations. Zbl 1266.78027 Brenner, S. C.; Gedicke, J.; Sung, L.-Y. 2013 An additive analysis of multiplicative Schwarz methods. Zbl 1271.65147 Brenner, Susanne C. 2013 A quadratic $$C^0$$ interior penalty method for the displacement obstacle problem of clamped Kirchhoff plates. Zbl 1263.65110 Brenner, Susanne C.; Sung, Li-Yeng; Zhang, Hongchao; Zhang, Yi 2012 $$C^{0}$$ interior penalty methods. Zbl 1248.65120 Brenner, Susanne C. 2012 Finite element methods for the displacement obstacle problem of clamped plates. Zbl 1250.74023 Brenner, Susanne C.; Sung, Li-Yeng; Zhang, Yi 2012 Hodge decomposition for divergence-free vector fields and two-dimensional Maxwell’s equations. Zbl 1274.78089 Brenner, S. C.; Cui, J.; Nan, Z.; Sung, L.-Y. 
2012 Finite element approximations of the three dimensional Monge-Ampère equation. Zbl 1272.65088 Brenner, Susanne Cecelia; Neilan, Michael 2012 A quadratic $$C^0$$ interior penalty method for linear fourth order boundary value problems with boundary conditions of the Cahn-Hilliard type. Zbl 1256.65101 Brenner, Susanne C.; Gu, Shiyuan; Gudi, Thirupathi; Sung, Li-Yeng 2012 Higher order weakly over-penalized symmetric interior penalty methods. Zbl 1273.65174 Brenner, Susanne C.; Owens, Luke; Sung, Li-Yeng 2012 Schwarz methods for a preconditioned WOPSIP method for elliptic problems. Zbl 1284.65162 Antonietti, Paola F.; Ayuso de Dios, Blanca; Brenner, Susanne C.; Sung, Li-yeng 2012 An iterative substructuring algorithm for a $$C^{0}$$ interior penalty method. Zbl 1321.65181 Brenner, Susanne C.; Wang, Kening 2012 $$\mathcal{C}^{0}$$ penalty methods for the fully nonlinear Monge-Ampère equation. Zbl 1228.65220 Brenner, Susanne C.; Gudi, Thirupathi; Neilan, Michael; Sung, Li-Yeng 2011 A $$\mathcal C^0$$ interior penalty method for a fourth order elliptic singular perturbation problem. Zbl 1225.65108 Brenner, Susanne C.; Neilan, Michael 2011 Multigrid algorithms for symmetric discontinuous Galerkin methods on graded meshes. Zbl 1229.65224 Brenner, S. C.; Cui, J.; Gudi, T.; Sung, L.-Y. 2011 Two-level additive Schwarz preconditioners for a weakly over-penalized symmetric interior penalty method. Zbl 1230.65124 Barker, A. T.; Brenner, S. C.; Park, E.-H.; Sung, L.-Y. 2011 Overlapping Schwarz domain decomposition preconditioners for the local discontinuous Galerkin method for elliptic problems. Zbl 1232.65164 Barker, A. T.; Brenner, S. C.; Sung, L.-Y. 2011 An a posteriori error estimator for a quadratic $$C^{0}$$-interior penalty method for the biharmonic problem. Zbl 1201.65197 Brenner, Susanne C.; Gudi, Thirupathi; Sung, Li-Yeng 2010 A weakly over-penalized symmetric interior penalty method for the biharmonic problem. Zbl 1205.65311 Brenner, Susanne C.; Gudi, Thirupathi; Sung, Li-Yeng 2010 An intrinsically parallel finite element method. Zbl 1203.65244 Brenner, S. C.; Gudi, T.; Owens, L.; Sung, L.-Y. 2010 Multigrid methods for the symmetric interior penalty method on graded meshes. Zbl 1224.65288 Brenner, S. C.; Cui, J.; Sung, L.-Y. 2009 A posteriori error control for a weakly over-penalized symmetric interior penalty method. Zbl 1203.65230 Brenner, Susanne C.; Gudi, Thirupathi; Sung, Li-Yeng 2009 A nonconforming penalty method for a two-dimensional curl-curl problem. Zbl 1168.78315 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2009 Nonconforming Maxwell eigensolvers. Zbl 1203.65236 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2009 A quadratic nonconforming vector finite element for $$H(\text{curl}; \varOmega)\cap H(\text{div}; \varOmega)$$. Zbl 1170.65091 Brenner, Susanne C.; Sung, Li-Yeng 2009 The mathematical theory of finite element methods. 3rd ed. Zbl 1135.65042 Brenner, Susanne C.; Scott, L. Ridgway 2008 A weakly over-penalized symmetric interior penalty method. Zbl 1171.65077 Brenner, Susanne C.; Owens, Luke; Sung, Li-Yeng 2008 A locally divergence-free interior penalty method for two-dimensional curl-curl problems. Zbl 1168.65068 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2008 A nonconforming finite element method for a two-dimensional curl-curl and grad-div problem. Zbl 1166.78006 Brenner, S. C.; Cui, J.; Li, F.; Sung, L.-Y. 2008 BDDC and FETI-DP without matrices or vectors. 
Zbl 1173.65363 Brenner, Susanne C.; Sung, Li-Yeng 2007 A locally divergence-free nonconforming finite element method for the time-harmonic Maxwell equations. Zbl 1126.78017 Brenner, Susanne C.; Li, Fengyan; Sung, Li-Yeng 2007 A $$W$$-cycle algorithm for a weakly over-penalized interior penalty method. Zbl 1173.65368 Brenner, Susanne C.; Owens, Luke 2007 A weakly over-penalized non-symmetric interior penalty method. Zbl 1145.65095 Brenner, Susanne C.; Owens, Luke 2007 Multigrid algorithms for $$C^0$$ interior penalty methods. Zbl 1114.65151 Brenner, Susanne C.; Sung, Li-yeng 2006 $$C^0$$ interior penalty methods for fourth order elliptic boundary value problems on polygonal domains. Zbl 1071.65151 Brenner, Susanne C.; Sung, Li-Yeng 2005 Convergence of multigrid algorithms for interior penalty methods. Zbl 1073.65117 Brenner, Susanne C.; Zhao, Jie 2005 Two-level additive Schwarz preconditioners for $$C^0$$ interior penalty methods. Zbl 1088.65108 Brenner, Susanne C.; Wang, Kening 2005 Korn’s inequalities for piecewise $$H^1$$ vector fields. Zbl 1055.65118 Brenner, Susanne C. 2004 Poincaré–Friedrichs inequalities for piecewise $$H^2$$ functions. Zbl 1072.65147 Brenner, Susanne C.; Wang, Kening; Zhao, Jie 2004 Convergence of nonconforming $$V$$-cycle and $$F$$-cycle multigrid algorithms for second order elliptic boundary value problems. Zbl 1052.65102 Brenner, Susanne C. 2004 Discrete Sobolev and Poincaré inequalities for piecewise polynomial functions. Zbl 1065.65128 Brenner, Susanne C. 2004 Poincaré–Friedrichs inequalities for piecewise $$H^{1}$$ functions. Zbl 1045.65100 Brenner, Susanne C. 2003 Analysis of two-dimensional FETI-DP preconditioners by the standard additive Schwarz framework. Zbl 1065.65136 Brenner, Susanne 2003 Multigrid methods for the computation of singular solutions and stress intensity factors. III: Interface singularities. Zbl 1054.74047 Brenner, Susanne C.; Sung, Li-yeng 2003 Lower bounds for three-dimensional nonoverlapping domain decomposition algorithms. Zbl 1054.65043 Brenner, Susanne C.; He, Qingmi 2003 An additive Schwarz preconditioner for the FETI method. Zbl 1030.65115 Brenner, Susanne C. 2003 The mathematical theory of finite element methods. 2nd ed. Zbl 1012.65115 Brenner, Susanne C.; Scott, L. Ridgway 2002 Convergence of the multigrid $$V$$-cycle algorithm for second-order boundary value problems without full elliptic regularity. Zbl 0990.65121 Brenner, Susanne C. 2002 Smoothers, mesh dependent norms, interpolation and multigrid. Zbl 1022.65134 Brenner, Susanne C. 2002 A new look at FETI. Zbl 1026.65096 Brenner, Susanne C. 2002 Some nonstandard finite element estimates with applications to $$3D$$ Poisson and Signorini problems. Zbl 0981.65131 Ben Belgacem, Faker; Brenner, Susanne C. 2001 Long-range correlations and trends in global climate models: Comparison with real data. Zbl 0978.86005 Govindan, R. B.; Vjushin, D.; Brenner, S.; Bunde, A.; Havlin, S.; Schellnhuber, H.-J. 2001 Lower bounds for two-level additive Schwarz preconditioners with small overlap. Zbl 0959.65062 Brenner, Susanne C. 2000 ...and 26 more Documents all top 5 ### Cited by 3,517 Authors 84 Brenner, Susanne Cecelia 71 Carstensen, Carsten 43 Huang, Jianguo 41 Huang, Yunqing 38 Zhang, Shangyou 36 Beirão da Veiga, Lourenço 33 Zhang, Zhimin 32 Li, Jichun 31 Gudi, Thirupathi 31 Han, Weimin 31 Yang, Yidu 30 Chen, Shaochun 29 Larson, Mats G. 28 Neilan, Michael 26 Antonietti, Paola Francesca 26 Huang, Xuehai 26 Nataraj, Neela 26 Rebholz, Leo G. 
25 Feng, Xiaobing 25 Girault, Vivette 25 Ye, Xiu 24 Bi, Hai 24 Nochetto, Ricardo Horacio 23 Li, Hengguang 23 Xie, Hehu 22 He, Yinnian 22 Lamichhane, Bishnu Prasad 21 Manzini, Gianmarco 21 Pani, Amiya Kumar 21 Shi, Zhongci 20 Hansbo, Peter 20 Mao, Shipeng 20 Shi, Dongyang 19 Bartels, Sören 19 Vabishchevich, Pëtr Nikolaevich 18 Chen, Jinru 18 Chen, Long 18 Di Pietro, Daniele Antonio 18 Gedicke, Joscha 18 Hu, Jun 18 Scott, Larkin Ridgway 18 Vacca, Giuseppe 18 Xu, Xuejun 17 Badia, Santiago 17 Burman, Erik 17 Chung, Tsz Shun Eric 17 Mora, David 17 Peterseim, Daniel 17 Xu, Jinchao 17 Yang, Suh-Yuh 16 Bacuta, Constantin 16 Cockburn, Bernardo 16 Ern, Alexandre 16 Gatica, Gabriel N. 16 Guo, Hailong 16 Zhao, Jikun 15 Casas, Eduardo 15 Dassi, Franco 15 Kwak, Do Young 15 Mu, Lin 15 Park, Eun-Jae 15 Wohlmuth, Barbara I. 14 Codina, Ramon 14 Hiptmair, Ralf 14 Huang, Qiumei 14 Marcinkowski, Leszek 14 Wheeler, Mary Fanett 14 Wu, Haijun 14 Yang, Wei 13 Chen, Yanping 13 Cui, Jintao 13 Droniou, Jérôme 13 Duan, Huoyuan 13 Han, Jiayu 13 Makridakis, Charalambos G. 13 Prohl, Andreas 13 Rivière, Beatrice M. 13 Verani, Marco 13 Wang, Cheng 13 Wollner, Winnifried 13 Yang, Min 12 Bi, Chunjia 12 Guan, Hongbo 12 Hinze, Michael 12 Kim, Hyea Hyun 12 Liu, Jiangguo 12 Mascotto, Lorenzo 12 Monk, Peter B. 12 Porwal, Kamana 12 Qiu, Weifeng 12 Schwab, Christoph 12 Sinha, Rajen Kumar 12 Tsuchiya, Takuya 12 Wang, Junping 12 Zhang, Shuo 12 Zikatanov, Ludmil T. 11 An, Rong 11 Brezzi, Franco 11 Chen, Zhangxin 11 Gunzburger, Max D. ...and 3,417 more Authors all top 5 ### Cited in 260 Serials 279 Journal of Scientific Computing 276 Computer Methods in Applied Mechanics and Engineering 251 Mathematics of Computation 227 Numerische Mathematik 225 Journal of Computational and Applied Mathematics 192 Computers & Mathematics with Applications 144 SIAM Journal on Numerical Analysis 127 Applied Numerical Mathematics 110 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 109 Journal of Computational Physics 97 Applied Mathematics and Computation 94 Numerical Methods for Partial Differential Equations 91 SIAM Journal on Scientific Computing 65 Advances in Computational Mathematics 62 Computational Methods in Applied Mathematics 57 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 44 Calcolo 35 Journal of Mathematical Analysis and Applications 32 BIT 27 International Journal for Numerical Methods in Engineering 27 Numerical Functional Analysis and Optimization 27 Journal of Numerical Mathematics 25 Japan Journal of Industrial and Applied Mathematics 24 Numerical Algorithms 23 Applied Mathematics Letters 22 Computational Optimization and Applications 21 International Journal of Computer Mathematics 21 Science China. Mathematics 20 Mathematical Methods in the Applied Sciences 19 Computational Geosciences 18 Advances in Applied Mathematics and Mechanics 17 Applications of Mathematics 17 Numerical Linear Algebra with Applications 17 M2AN. Mathematical Modelling and Numerical Analysis. ESAIM, European Series in Applied and Industrial Mathematics 17 Communications in Computational Physics 14 Multiscale Modeling & Simulation 14 International Journal of Numerical Analysis and Modeling 13 Communications in Numerical Methods in Engineering 12 Computers and Fluids 12 Computing 12 Discrete and Continuous Dynamical Systems. 
Series B 11 Engineering Analysis with Boundary Elements 11 Mathematical Problems in Engineering 10 Mathematics and Computers in Simulation 10 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 10 Foundations of Computational Mathematics 10 Comptes Rendus. Mathématique. Académie des Sciences, Paris 10 East Asian Journal on Applied Mathematics 9 Science in China. Series A 9 Computational and Applied Mathematics 9 Results in Applied Mathematics 8 Applicable Analysis 8 SIAM Journal on Control and Optimization 8 Computational Mechanics 8 Applied Mathematical Modelling 8 Electronic Research Archive 7 Journal of Approximation Theory 7 Applied Mathematics and Mechanics. (English Edition) 7 Journal of Computational Mathematics 7 ETNA. Electronic Transactions on Numerical Analysis 7 Discrete and Continuous Dynamical Systems. Series S 7 Mathematical Control and Related Fields 6 Abstract and Applied Analysis 6 Communications on Pure and Applied Analysis 6 Journal of Applied Mathematics and Computing 6 International Journal of Computational Methods 6 SIAM/ASA Journal on Uncertainty Quantification 6 Communications on Applied Mathematics and Computation 5 Archive for Rational Mechanics and Analysis 5 Journal of Optimization Theory and Applications 5 Acta Applicandae Mathematicae 5 Journal of Integral Equations and Applications 5 Computational Mathematics and Mathematical Physics 5 Applied Mathematics. Series B (English Edition) 5 Communications in Nonlinear Science and Numerical Simulation 5 Central European Journal of Mathematics 5 Analysis and Applications (Singapore) 4 International Journal for Numerical Methods in Fluids 4 Journal of the Mechanics and Physics of Solids 4 Applied Mathematics and Optimization 4 RAIRO. Modélisation Mathématique et Analyse Numérique 4 Journal of Complexity 4 Journal of Nonlinear Science 4 Journal of Inverse and Ill-Posed Problems 4 Russian Journal of Numerical Analysis and Mathematical Modelling 4 Mathematics and Mechanics of Solids 4 Computing and Visualization in Science 4 Journal of Mathematical Fluid Mechanics 4 Archives of Computational Methods in Engineering 4 Journal of Applied Mathematics 4 Advances in Numerical Analysis 4 SMAI Journal of Computational Mathematics 3 Indian Journal of Pure & Applied Mathematics 3 Inverse Problems 3 Meccanica 3 Computer Aided Geometric Design 3 Acta Mathematicae Applicatae Sinica. 
English Series 3 Mathematical and Computer Modelling 3 SIAM Journal on Matrix Analysis and Applications 3 Journal of Elasticity ...and 160 more Serials all top 5 ### Cited in 46 Fields 3,120 Numerical analysis (65-XX) 1,684 Partial differential equations (35-XX) 780 Fluid mechanics (76-XX) 630 Mechanics of deformable solids (74-XX) 236 Calculus of variations and optimal control; optimization (49-XX) 180 Optics, electromagnetic theory (78-XX) 75 Approximations and expansions (41-XX) 67 Probability theory and stochastic processes (60-XX) 55 Biology and other natural sciences (92-XX) 51 Statistical mechanics, structure of matter (82-XX) 49 Functional analysis (46-XX) 40 Potential theory (31-XX) 36 Real functions (26-XX) 36 Operator theory (47-XX) 33 Geophysics (86-XX) 31 Ordinary differential equations (34-XX) 29 Computer science (68-XX) 28 Operations research, mathematical programming (90-XX) 27 Integral equations (45-XX) 25 Global analysis, analysis on manifolds (58-XX) 23 Classical thermodynamics, heat transfer (80-XX) 23 Systems theory; control (93-XX) 21 Linear and multilinear algebra; matrix theory (15-XX) 16 Statistics (62-XX) 16 Quantum theory (81-XX) 15 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 11 Harmonic analysis on Euclidean spaces (42-XX) 9 Dynamical systems and ergodic theory (37-XX) 9 Information and communication theory, circuits (94-XX) 8 Special functions (33-XX) 8 Differential geometry (53-XX) 7 Mechanics of particles and systems (70-XX) 4 General and overarching topics; collections (00-XX) 4 Combinatorics (05-XX) 4 Astronomy and astrophysics (85-XX) 3 History and biography (01-XX) 3 Mathematical logic and foundations (03-XX) 3 Measure and integration (28-XX) 2 Difference and functional equations (39-XX) 2 Convex and discrete geometry (52-XX) 1 Category theory; homological algebra (18-XX) 1 Functions of a complex variable (30-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Integral transforms, operational calculus (44-XX) 1 General topology (54-XX) 1 Relativity and gravitational theory (83-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
2022-11-30T03:47:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7758504748344421, "perplexity": 9891.16328941113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710719.4/warc/CC-MAIN-20221130024541-20221130054541-00255.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Ade-boor.carl
# zbMATH — the first resource for mathematics ## de Boor, Carl Compute Distance To: Author ID: de-boor.carl Published as: De Boor, C.; De Boor, Carl; de Boor, C.; de Boor, C. R.; de Boor, Carl External Links: MGP · Wikidata · dblp · GND Documents Indexed: 166 Publications since 1962, including 11 Books all top 5 #### Co-Authors 83 single-authored 19 Höllig, Klaus H. 18 Ron, Amos 10 DeVore, Ronald A. 9 Riemenschneider, Sherman D. 7 Swartz, Blair K. 5 Pinkus, Allan M. 4 Rice, John R. 3 Birkhoff, Garrett 3 Conte, Samuel D. 3 Jia, Rong-Qing 2 de Hoog, Frank Robert 2 Dyn, Nira 2 Golub, Gene Howard 2 Schoenberg, Isaac Jacob 2 Shen, Zuowei 2 Weiss, Richard 1 Askey, Richard Allen 1 Daniel, James W. 1 Fix, George J. 1 Friedland, Shmuel 1 Jerome, Joseph W. 1 Keller, Herbert Bishop 1 Kreiss, Heinz-Otto 1 Lyche, Tom 1 Lynch, Robert E. 1 Nevai, Paul G. 1 Rosser, John Barkley 1 Sabin, Malcolm A. 1 Saff, Edward Barry 1 Schumaker, Larry L. 1 Shekhtman, Boris 1 Stahl, Dominik 1 Wendroff, Burton all top 5 #### Serials 25 Journal of Approximation Theory 10 SIAM Journal on Numerical Analysis 9 Mathematics of Computation 7 Constructive Approximation 7 Linear Algebra and its Applications 6 Proceedings of the American Mathematical Society 4 Transactions of the American Mathematical Society 4 Journal of Mathematics and Mechanics 3 ACM Transactions on Mathematical Software 3 Computer Aided Geometric Design 3 Applied Mathematical Sciences 2 Advances in Mathematics 2 Mathematische Zeitschrift 2 SIAM Journal on Scientific and Statistical Computing 2 Numerical Algorithms 2 SIAM Journal on Mathematical Analysis 2 Advances in Computational Mathematics 2 Journal of the Society for Industrial & Applied Mathematics 1 American Mathematical Monthly 1 Journal d’Analyse Mathématique 1 Journal of Mathematical Analysis and Applications 1 Mathematics Magazine 1 American Journal of Mathematics 1 Illinois Journal of Mathematics 1 Indiana University Mathematics Journal 1 Journal of Computational and Applied Mathematics 1 Journal of Functional Analysis 1 Journal of the London Mathematical Society. Second Series 1 Numerische Mathematik 1 Pacific Journal of Mathematics 1 Advances in Applied Mathematics 1 Applied Numerical Mathematics 1 Approximation Theory and its Applications 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Journal of Elasticity 1 SIAM Review 1 Annals of Numerical Mathematics 1 Surveys in Approximation Theory (SAT) 1 Bulletin of the American Mathematical Society 1 Journal of Mathematics and Physics 1 Classics in Applied Mathematics 1 Proceedings of the Steklov Institute of Mathematics 1 Proceedings of Symposia in Applied Mathematics 1 Journal of Numerical Analysis and Approximation Theory all top 5 #### Fields 117 Approximations and expansions (41-XX) 56 Numerical analysis (65-XX) 11 Linear and multilinear algebra; matrix theory (15-XX) 8 Functional analysis (46-XX) 6 Harmonic analysis on Euclidean spaces (42-XX) 6 Operator theory (47-XX) 5 Ordinary differential equations (34-XX) 3 General and overarching topics; collections (00-XX) 3 History and biography (01-XX) 3 Commutative algebra (13-XX) 3 Difference and functional equations (39-XX) 2 Special functions (33-XX) 1 Number theory (11-XX) 1 Real functions (26-XX) 1 Partial differential equations (35-XX) 1 Differential geometry (53-XX) 1 Mechanics of deformable solids (74-XX) #### Citations contained in zbMATH Open 149 Publications have been cited 5,401 times in 4,198 Documents Cited by Year A practical guide to splines. 
Zbl 0406.41003 De Boor, Carl 1978 A practical guide to splines. Rev. ed. Zbl 0987.65015 De Boor, Carl 2001 On calculating with B-splines. Zbl 0239.41006 de Boor, Carl 1972 Elementary numerical analysis. An algorithmic approach. 3rd ed. Zbl 0496.65001 Conte, S. D.; de Boor, Carl 1980 Collocation at Gaussian points. Zbl 0232.65065 de Boor, Carl; Swartz, Blair 1973 The structure of finitely generated shift-invariant spaces in $$L_ 2(\mathbb{R}^ d)$$. Zbl 0806.46030 de Boor, Carl; DeVore, Ronald A.; Ron, Amos 1994 Box splines. Zbl 0814.41012 De Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1993 Approximation from shift-invariant subspaces of $$L_ 2(\mathbb{R}^ d)$$. Zbl 0790.41012 de Boor, Carl; Devore, Ronald A.; Ron, Amos 1994 Spline approximation by quasiinterpolants. Zbl 0279.41008 de Boor, C.; Fix, G. J. 1973 On the construction of multivariate (pre)wavelets. Zbl 0773.41013 de Boor, Carl; DeVore, Ronald A.; Ron, Amos 1993 Elementary numerical analysis, an algorithmic approach. 2nd ed. Zbl 0257.65002 Conte, S. D.; de Boor, Carl 1972 B-splines from parallelepipeds. Zbl 0534.41007 de Boor, C.; Höllig, K. 1983 High accuracy geometric Hermite interpolation. Zbl 0646.65004 de Boor, Carl; Höllig, Klaus; Sabin, Malcolm 1987 Splines as linear combinations of B-splines. A survey. Zbl 0343.41011 de Boor, Carl 1976 The numerically stable reconstruction of a Jacobi matrix from spectral data. Zbl 0388.15010 de Boor, C.; Golub, G. H. 1978 On multivariate polynomial interpolation. Zbl 0719.41006 de Boor, Carl; Ron, Amos 1990 Good approximation by splines with variable knots. Zbl 0255.41007 de Boor, Carl 1973 Approximation by smooth multivariate splines. Zbl 0529.41010 de Boor, C.; DeVore, R. 1983 Divided differences. Zbl 1071.65027 de Boor, Carl 2005 Bicubic spline interpolation. Zbl 0108.27103 de Boor, C. 1962 On uniform approximation by splines. Zbl 0193.02502 de Boor, Carl 1968 Error bounds for spline interpolation. Zbl 0143.28503 Birkhoff, G.; de Boor, C. 1964 Rayleigh-Ritz approximation by piecewise cubic polynomials. Zbl 0143.38002 Birkhoff, G.; de Boor, C.; Swartz, B.; Wendroff, B. 1966 Proof of the conjectures of Bernstein and Erdős concerning the optimal nodes for polynomial interpolation. Zbl 0412.41002 De Boor, Carl; Pinkus, Allan 1978 A bound on the $$L_\infty$$-norm of $$L_2$$-approximation by splines in terms of a global mesh ratio. Zbl 0345.65004 de Boor, Carl 1976 Piecewise polynomial interpolation and approximation. Zbl 0136.04703 Birkhoff, G.; de Boor, C. R. 1965 Fourier analysis of the approximation power of principal shift-invariant spaces. Zbl 0801.41027 de Boor, Carl; Ron, Amos 1992 Computational aspects of polynomial interpolation in several variables. Zbl 0767.41003 De Boor, Carl; Ron, Amos 1992 Bivariate box splines and smooth pp functions on a three direction mesh. Zbl 0521.41009 de Boor, C.; Hoellig, K. 1983 On splines and their minimum properties. Zbl 0185.20501 de Boor, C.; Lynch, R. E. 1966 On the convergence of odd-degree spline interpolation. Zbl 0174.09902 de Boor, C. 1968 Package for calculating with B-splines. Zbl 0364.65008 de Boor, Carl 1977 Backward error analysis for totally positive linear systems. Zbl 0336.65020 de Boor, Carl; Pinkus, Allan 1977 Best approximation properties of spline functions of odd degree. Zbl 0116.27601 de Boor, C. 1963 The approximation of a totally positive band matrix by a strictly banded totally positive one. Zbl 0479.15015 de Boor, Carl 1982 The least solution for the polynomial interpolation problem. 
Zbl 0735.41001 de Boor, Carl; Ron, Amos 1992 The quasi-interpolant as a tool in elementary polynomial spline theory. Zbl 0317.41007 de Boor, Carl 1973 Cutting corners always works. Zbl 0637.41014 de Boor, Carl 1987 Controlled approximation and a characterization of the local approximation order. Zbl 0592.41027 de Boor, C.; Jia, R.-Q. 1985 The polynomials in the linear span of integer translates of a compactly supported function. Zbl 0624.41013 de Boor, Carl 1987 Bivariate cardinal interpolation by splines on a three-direction mesh. Zbl 0586.41005 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1985 Piecewise monotone interpolation. Zbl 0367.41001 de Boor, Carl; Swartz, Blair 1977 On polynomial ideals of finite codimension with applications to box spline theory. Zbl 0743.41013 De Boor, Carl; Ron, Amos 1991 Good approximation by splines with variable knots. II. Zbl 0343.65005 de Boor, Carl 1974 On ’best’ interpolation. Zbl 0314.41001 de Boor, Carl 1976 Approximation order from bivariate $$C^ 1$$-cubics: A counterexample. Zbl 0545.41017 de Boor, C.; Höllig, K. 1983 Cardinal interpolation and spline functions VIII. The Budan-Fourier theorem for splines and applications. Zbl 0319.41010 de Boor, Carl; Schoenberg, I. J. 1976 A remark concerning perfect splines. Zbl 0286.41010 de Boor, Carl 1974 On bounding spline interpolation. Zbl 0302.41006 de Boor, Carl 1975 Ideal interpolation. Zbl 1126.41003 de Boor, Carl 2005 Fundamental solutions for multivariate difference equations. Zbl 0679.39001 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1989 Total positivity of the spline collocation matrix. Zbl 0311.41008 de Boor, Carl 1976 Approximation power of smooth bivariate pp functions. Zbl 0616.41010 de Boor, C.; Höllig, K. 1988 Extremal polynomials with application to Richardson iteration for indefinite linear systems. Zbl 0476.65021 De Boor, Carl; Rice, John R. 1982 Efficient computer manipulation of tensor products. Zbl 0405.65011 de Boor, Carl 1979 Approximation orders of FSI spaces in $$L_2(\mathbb{R}^d)$$. Zbl 0919.41009 de Boor, C.; DeVore, R. A.; Ron, A. 1998 Bounding the error in spline interpolation. Zbl 0272.65006 de Boor, Carl 1974 The exponentials in the span of the multiinteger translates of a compactly supported function; quasiinterpolation and approximation order. Zbl 0757.41012 de Boor, Carl; Ron, Amos 1992 On local linear functionals which vanish at all B-splines but one. Zbl 0346.41007 de Boor, Carl 1976 A sharp upper bound on the approximation order of smooth bivariate pp functions. Zbl 0784.41010 de Boor, C.; Jia, R. Q. 1993 Multivariate piecewise polynomials. Zbl 0796.41009 de Boor, C. 1993 Partitions of unity and approximation. Zbl 0547.41007 de Boor, Carl; DeVore, Ronald R. 1985 Recurrence relations for multivariate B-splines. Zbl 0506.41008 de Boor, Carl; Hoellig, Klaus 1982 Collocation approximation to eigenvalues of an ordinary differential equation: The principle of the thing. Zbl 0444.65053 De Boor, Carl; Swartz, Blair 1980 Quasiinterpolants and approximation power of multivariate splines. Zbl 0694.41018 de Boor, Carl 1990 A geometric proof of total positivity for spline interpolation. Zbl 0599.41021 de Boor, C.; DeVore, R. 1985 How does Agee’s smoothing method work? Zbl 0449.65004 De Boor, Carl 1979 On the approximation by $$\gamma$$-polynomials. Zbl 0273.41014 de Boor, Carl 1969 Multivariate polynomial interpolation: conjectures concerning GC-sets. Zbl 1123.41003 de Boor, Carl 2007 Finite sequences of orthogonal polynomials connected by a Jacobi matrix. 
Zbl 0614.65035 de Boor, Carl; Saff, Edward B. 1986 How small can one make the derivatives of an interpolating function? Zbl 0293.41007 de Boor, Carl 1975 On the evaluation of box splines. Zbl 0798.65012 de Boor, Carl 1993 On two polynomial spaces associated with a box spline. Zbl 0678.41009 de Boor, Carl; Dyn, Nira; Ron, Amos 1991 The exact condition of the B-spline basis may be hard to determine. Zbl 0687.41011 de Boor, Carl 1990 Convergence of cardinal series. Zbl 0624.41023 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1986 Convergence of abstract splines. Zbl 0477.41012 De Boor, Carl 1981 Dichotomies for band matrices. Zbl 0453.15002 de Boor, Carl 1980 On the cardinal spline interpolant to $$e^{iut}$$. Zbl 0341.41008 de Boor, Carl 1976 On cubic spline functions that vanish at all knots. Zbl 0326.41010 de Boor, Carl 1976 On local spline approximation by moments. Zbl 0162.08402 de Boor, C. 1968 Approximation orders of FSI spaces in $$L_2(\mathbb{R}^d)$$. Zbl 0948.41012 de Boor, C.; DeVore, R. A.; Ron, A. 1998 The error in polynomial tensor-product, and Chung-Yao, interpolation. Zbl 1133.41314 de Boor, Carl 1997 An adaptive algorithm for multivariate approximation giving optimal convergence rates. Zbl 0411.41008 De Boor, Carl; Rice, John R. 1979 On calculating with B-splines. II: Integration. Zbl 0338.65002 de Boor, Carl; Lyche, Tom; Schumaker, Larry L. 1976 Quadratic spline interpolation and the sharpness of Lebesgue’s inequality. Zbl 0338.41014 de Boor, Carl 1976 Odd-degree spline interpolation at a biinfinite knot sequence. Zbl 0337.41004 de Boor, Carl 1976 On the pointwise limits of bivariate Lagrange projectors. Zbl 1149.41017 De Boor, C.; Shekhtman, B. 2008 A naive proof of the representation theorem for isotropic, linear asymmetric stress-strain relations. Zbl 0582.73021 de Boor, Carl 1985 The stability of one-step schemes for first-order two-point boundary value problems. Zbl 0529.65052 de Boor, C.; de Hoog, F.; Keller, H. B. 1983 SOLVEBLOK: A package for solving almost block diagonal linear systems. Zbl 0434.65013 De Boor, Carl; Weiss, Richard 1980 Mathematical aspects of finite elements in partial differential equations. Proceedings of a symposium conducted by the Mathematics Research Center, The University of Wisconsin, Madison April 1-3, 1974. Zbl 0324.00023 de Boor, Carl (ed.) 1974 Interpolation from spaces spanned by monomials. Zbl 1116.65009 de Boor, C. 2007 Computational aspects of multivariate polynomial interpolation: Indexing the coefficients. Zbl 0944.41002 de Boor, Carl 2000 Chebyshev approximation by a $$\Pi(x-r_ i)/(x+s_ i)$$ and application to ADI iteration. Zbl 0116.04503 de Boor, C.; Rice, J. R. 1963 An asymptotic expansion for the error in a linear map that reproduces polynomials of a certain order. Zbl 1075.41020 de Boor, Carl 2005 Local corner cutting and the smoothness of the limiting curve. Zbl 0704.65008 de Boor, Carl 1990 On the condition of the linear systems associate with discretized BVPs of ODEs. Zbl 0613.65085 de Boor, C.; Kreiss, H.-O. 1986 What is the main diagonal of a biinfinite band matrix? Zbl 0457.41014 De Boor, Carl 1980 A multivariate divided difference. Zbl 1137.41302 de Boor, C. 1995 Approximation order without quasi-interpolants. Zbl 0767.41002 de Boor, Carl 1993 Elementary numerical analysis. An algorithmic approach. Updated with MATLAB. Reprint of the third edition 1980. Zbl 1392.65002 Conte, S. 
D.; de Boor, Carl 2018 On the (bi)infinite case of Shadrin’s theorem concerning the $$L_{\infty}$$-boundedness of the $$L_{2}$$-spline projector. Zbl 1298.41016 de Boor, Carl 2012 The way things were in multivariate splines: a personal view. Zbl 1202.41006 de Boor, Carl 2009 Multivariate polynomial interpolation: Aitken-Neville sets and generalized principal lattices. Zbl 1181.41006 de Boor, Carl 2009 On the pointwise limits of bivariate Lagrange projectors. Zbl 1149.41017 De Boor, C.; Shekhtman, B. 2008 Box splines revisited: Convergence and acceleration methods for the subdivision and the cascade algorithms. Zbl 1162.65068 de Boor, Carl; Ron, Amos 2008 Multivariate polynomial interpolation: conjectures concerning GC-sets. Zbl 1123.41003 de Boor, Carl 2007 Interpolation from spaces spanned by monomials. Zbl 1116.65009 de Boor, C. 2007 Ideal interpolation: Mourrain’s condition vs. $$D$$-invariance. Zbl 1348.41005 de Boor, C. 2006 Divided differences. Zbl 1071.65027 de Boor, Carl 2005 Ideal interpolation. Zbl 1126.41003 de Boor, Carl 2005 An asymptotic expansion for the error in a linear map that reproduces polynomials of a certain order. Zbl 1075.41020 de Boor, Carl 2005 The B-spline recurrence relations of Chakalov and of Popoviciu. Zbl 1028.41009 de Boor, Carl; Pinkus, Allan 2003 A divided difference expansion of a divided difference. Zbl 1022.65024 de Boor, Carl 2003 A Leibniz formula for multivariate divided differences. Zbl 1053.41006 de Boor, Carl 2003 A practical guide to splines. Rev. ed. Zbl 0987.65015 De Boor, Carl 2001 Calculation of the smoothing spline with weighted roughness measure. Zbl 1012.65013 de Boor, Carl 2001 Computational aspects of multivariate polynomial interpolation: Indexing the coefficients. Zbl 0944.41002 de Boor, Carl 2000 Polynomial interpolation to data on flats in $$\mathbb{R}^d$$. Zbl 0960.41003 de Boor, Carl; Dyn, Nira; Ron, Amos 2000 On Pták’s derivation of the Jordan normal form. Zbl 0974.15004 de Boor, Carl 2000 Approximation orders of FSI spaces in $$L_2(\mathbb{R}^d)$$. Zbl 0919.41009 de Boor, C.; DeVore, R. A.; Ron, A. 1998 Approximation orders of FSI spaces in $$L_2(\mathbb{R}^d)$$. Zbl 0948.41012 de Boor, C.; DeVore, R. A.; Ron, A. 1998 The error in polynomial tensor-product, and Chung-Yao, interpolation. Zbl 1133.41314 de Boor, Carl 1997 The multiplicity of a spline zero. Zbl 0887.41013 de Boor, Carl 1997 On ascertaining inductively the dimension of the joint kernel of certain commuting linear operators. Zbl 0999.47001 de Boor, Carl; Ron, Amos; Shen, Zuowei 1996 On ascertaining inductively the dimension of the joint kernel of certain commuting linear operators. II. Zbl 0917.47001 de Boor, Carl; Ron, Amos; Shen, Zuowei 1996 On the Sauer-Xu formula for the error in multivariate polynomial interpolation. Zbl 0852.41003 De Boor, Carl 1996 A multivariate divided difference. Zbl 1137.41302 de Boor, C. 1995 The structure of finitely generated shift-invariant spaces in $$L_ 2(\mathbb{R}^ d)$$. Zbl 0806.46030 de Boor, Carl; DeVore, Ronald A.; Ron, Amos 1994 Approximation from shift-invariant subspaces of $$L_ 2(\mathbb{R}^ d)$$. Zbl 0790.41012 de Boor, Carl; Devore, Ronald A.; Ron, Amos 1994 Gauss elimination by segments and multivariate polynomial interpolation. Zbl 0851.65008 de Boor, C. 1994 Box splines. Zbl 0814.41012 De Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1993 On the construction of multivariate (pre)wavelets. 
Zbl 0773.41013 de Boor, Carl; DeVore, Ronald A.; Ron, Amos 1993 A sharp upper bound on the approximation order of smooth bivariate pp functions. Zbl 0784.41010 de Boor, C.; Jia, R. Q. 1993 Multivariate piecewise polynomials. Zbl 0796.41009 de Boor, C. 1993 On the evaluation of box splines. Zbl 0798.65012 de Boor, Carl 1993 Approximation order without quasi-interpolants. Zbl 0767.41002 de Boor, Carl 1993 Fourier analysis of the approximation power of principal shift-invariant spaces. Zbl 0801.41027 de Boor, Carl; Ron, Amos 1992 Computational aspects of polynomial interpolation in several variables. Zbl 0767.41003 De Boor, Carl; Ron, Amos 1992 The least solution for the polynomial interpolation problem. Zbl 0735.41001 de Boor, Carl; Ron, Amos 1992 The exponentials in the span of the multiinteger translates of a compactly supported function; quasiinterpolation and approximation order. Zbl 0757.41012 de Boor, Carl; Ron, Amos 1992 On the error in multivariate polynomial interpolation. Zbl 0759.41001 De Boor, C. 1992 On polynomial ideals of finite codimension with applications to box spline theory. Zbl 0743.41013 De Boor, Carl; Ron, Amos 1991 On two polynomial spaces associated with a box spline. Zbl 0678.41009 de Boor, Carl; Dyn, Nira; Ron, Amos 1991 An alternative approach to (the teaching of) rank, basis, and dimension. Zbl 0724.15003 de Boor, Carl 1991 Box-spline tilings. Zbl 0742.41013 de Boor, Carl; Höllig, Klaus 1991 On multivariate polynomial interpolation. Zbl 0719.41006 de Boor, Carl; Ron, Amos 1990 Quasiinterpolants and approximation power of multivariate splines. Zbl 0694.41018 de Boor, Carl 1990 The exact condition of the B-spline basis may be hard to determine. Zbl 0687.41011 de Boor, Carl 1990 Local corner cutting and the smoothness of the limiting curve. Zbl 0704.65008 de Boor, Carl 1990 Splinefunktionen. Zbl 0719.41013 de Boor, Carl 1990 Fundamental solutions for multivariate difference equations. Zbl 0679.39001 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1989 A local basis for certain smooth bivariate pp spaces. Zbl 0682.41046 de Boor, Carl 1989 Polynomial ideals and multivariate splines. Zbl 0682.41023 de Boor, Carl; Ron, Amos 1989 Approximation power of smooth bivariate pp functions. Zbl 0616.41010 de Boor, C.; Höllig, K. 1988 The condition of the B-spline basis for polynomials. Zbl 0651.41005 de Boor, Carl 1988 High accuracy geometric Hermite interpolation. Zbl 0646.65004 de Boor, Carl; Höllig, Klaus; Sabin, Malcolm 1987 Cutting corners always works. Zbl 0637.41014 de Boor, Carl 1987 The polynomials in the linear span of integer translates of a compactly supported function. Zbl 0624.41013 de Boor, Carl 1987 Minimal support for bivariate splines. Zbl 0682.41022 de Boor, C.; Höllig, K. 1987 Some qualitative properties of bivariate Euler-Frobenius polynomials. Zbl 0635.41012 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1987 Finite sequences of orthogonal polynomials connected by a Jacobi matrix. Zbl 0614.65035 de Boor, Carl; Saff, Edward B. 1986 Convergence of cardinal series. Zbl 0624.41023 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1986 On the condition of the linear systems associate with discretized BVPs of ODEs. Zbl 0613.65085 de Boor, C.; Kreiss, H.-O. 1986 Stability of finite difference schemes for two-point boundary value problems. Zbl 0603.65056 de Boor, C.; de Hoog, F. 1986 Controlled approximation and a characterization of the local approximation order. Zbl 0592.41027 de Boor, C.; Jia, R.-Q. 
1985 Bivariate cardinal interpolation by splines on a three-direction mesh. Zbl 0586.41005 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1985 Partitions of unity and approximation. Zbl 0547.41007 de Boor, Carl; DeVore, Ronald R. 1985 A geometric proof of total positivity for spline interpolation. Zbl 0599.41021 de Boor, C.; DeVore, R. 1985 A naive proof of the representation theorem for isotropic, linear asymmetric stress-strain relations. Zbl 0582.73021 de Boor, Carl 1985 Convergence of bivariate cardinal interpolation. Zbl 0606.41004 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1985 The limits of multivariate cardinal splines. Zbl 0571.41008 de Boor, Carl; Höllig, Klaus; Riemenschneider, Sherman 1985 On bivariate cardinal interpolation. Zbl 0596.41006 de Boor, C.; Höllig, K.; Riemenschneider, S. D. 1984 B-splines from parallelepipeds. Zbl 0534.41007 de Boor, C.; Höllig, K. 1983 Approximation by smooth multivariate splines. Zbl 0529.41010 de Boor, C.; DeVore, R. 1983 Bivariate box splines and smooth pp functions on a three direction mesh. Zbl 0521.41009 de Boor, C.; Hoellig, K. 1983 Approximation order from bivariate $$C^ 1$$-cubics: A counterexample. Zbl 0545.41017 de Boor, C.; Höllig, K. 1983 The stability of one-step schemes for first-order two-point boundary value problems. Zbl 0529.65052 de Boor, C.; de Hoog, F.; Keller, H. B. 1983 Approximation order from smooth bivariate pp functions. Zbl 0538.41013 de Boor, C.; DeVore, R.; Höllig, K. 1983 The approximation of a totally positive band matrix by a strictly banded totally positive one. Zbl 0479.15015 de Boor, Carl 1982 Extremal polynomials with application to Richardson iteration for indefinite linear systems. Zbl 0476.65021 De Boor, Carl; Rice, John R. 1982 Recurrence relations for multivariate B-splines. Zbl 0506.41008 de Boor, Carl; Hoellig, Klaus 1982 Structure of invertible (bi)infinite totally positive matrices. Zbl 0504.15014 de Boor, C.; Jia, Rong-qing; Pinkus, A. 1982 The inverse of a totally positive bi-infinite band matrix. Zbl 0502.47014 De Boor, Carl 1982 Inverses of infinite sign regular matrices. Zbl 0502.47015 De Boor, C.; Friedland, S.; Pinkus, A. 1982 Topics in multivariate approximation theory. Zbl 0501.41001 de Boor, C. 1982 Convergence of abstract splines. Zbl 0477.41012 De Boor, Carl 1981 Local piecewise polynomial projection methods for an O.D.E. which give high-order convergence at knots. Zbl 0456.65056 de Boor, Carl; Swartz, Blair 1981 Collocation approximation to eigenvalues of an ordinary differential equation: Numerical illustrations. Zbl 0456.65055 de Boor, Carl; Swartz, Blair 1981 On a max-norm bound for the least-squares spline approximant. Zbl 0487.41014 de Boor, C. 1981 Elementary numerical analysis. An algorithmic approach. 3rd ed. Zbl 0496.65001 Conte, S. D.; de Boor, Carl 1980 Collocation approximation to eigenvalues of an ordinary differential equation: The principle of the thing. Zbl 0444.65053 De Boor, Carl; Swartz, Blair 1980 Dichotomies for band matrices. Zbl 0453.15002 de Boor, Carl 1980 SOLVEBLOK: A package for solving almost block diagonal linear systems. Zbl 0434.65013 De Boor, Carl; Weiss, Richard 1980 What is the main diagonal of a biinfinite band matrix? Zbl 0457.41014 De Boor, Carl 1980 FFT as nested multiplication, with a twist. Zbl 0459.65098 De Boor, Carl 1980 Mixed norm n-widths. Zbl 0458.41018 De Boor, C.; Devore, R.; Höllig, K. 1980 ALGORITHM 546: SOLVEBLOK $$[F4]$$. 
Zbl 0434.65014 De Boor, Carl; Weiss, Richard 1980 Efficient computer manipulation of tensor products. Zbl 0405.65011 de Boor, Carl 1979 How does Agee’s smoothing method work? Zbl 0449.65004 De Boor, Carl 1979 ...and 49 more Documents all top 5 #### Cited by 5,188 Authors 43 de Boor, Carl 37 Chui, Charles Kam-tai 34 Ron, Amos 34 Sbibih, Driss 34 Speleers, Hendrik 33 Manni, Carla 29 Peña, Juan Manuel 27 Micchelli, Charles A. 25 Sangalli, Giancarlo 23 Dyn, Nira 23 Wang, Renhong 22 Sablonnière, Paul 22 Ward, Joseph Dinneen 21 Dahmen, Wolfgang A. 21 Lian, Heng 20 Dagnino, Catterina 20 Floater, Michael S. 20 Jia, Rong-Qing 20 Shen, Zuowei 19 Lyche, Tom 19 Peters, Jorg 19 Yang, Lijian 18 Höllig, Klaus H. 18 Ibáñez, María J. 17 Barrera, Domingo 17 Schumaker, Larry L. 17 Tijini, Ahmed 16 Goodman, Timothy N. T. 16 Hughes, Thomas J. R. 16 Lai, Mingjun 16 Smith, Philip W. 15 Buffa, Annalisa 15 Jia, Rongqing 15 Kobza, Jiří 14 Bownik, Marcin 14 Carnicer, Jésus Miguel 14 Lim, Jae Kun 14 Sauer, Tomas 13 Cabrelli, Carlos A. 13 Farouki, Rida T. 13 Johnson, Michael James 13 Li, Chin-Shang 13 Liang, Hua 13 Shekhtman, Boris 13 Sun, Qiyu 12 Beirão da Veiga, Lourenço 12 Dierckx, Paul 12 García, Antonio G. 12 Garoni, Carlo 12 Ghosal, Subhashis 12 Kim, Hong Oh 12 Kozak, Jernej 12 Lang, Feng-Gong 12 Remogna, Sara 12 Wu, Zongmin 11 Dehghan Takht Fooladi, Mehdi 11 Jüttler, Bert 11 Kohler, Michael 11 Li, Yunzhang 11 Ma, Shujie 11 Nürnberger, Günther 11 Pelosi, Francesca 11 Pérez-Villalón, Gerardo 11 Serghini, Abdelhafid 11 Serra-Capizzano, Stefano 11 Skopina, Maria A. 11 Stöckler, Joachim 11 Unser, Michael A. 11 Xu, Xiaoping 11 Yang, Hu 10 Calo, Victor Manuel 10 Cramer, Erhard 10 Han, Bin 10 Huang, Jianhua Z. 10 Jaklič, Gašper 10 Kim, Rae Young 10 Krajnc, Marjeta 10 Paternostro, Victoria 10 Riemenschneider, Sherman D. 10 Russell, Robert D. 10 Sampoli, Maria Lucia 10 Tahrichi, Mohamed 10 Varga, Richard Steven 10 Verhoosel, Clemens V. 10 Zhang, Shugong 10 Zidna, Ahmed 9 Abbas, Muhammad 9 Cao, Jiguo 9 de Borst, René 9 Geum, Young Hee 9 Giannelli, Carlotta 9 Jetter, Kurt 9 Kilgore, Theodore A. 
9 Kim, Young Ik 9 Sestini, Alessandra 9 Volkov, Yuriĭ Stepanovich 9 Xu, Yuesheng 8 Bialecki, Bernard 8 Böhm, Wolfgang 8 Buhmann, Martin Dietrich ...and 5,088 more Authors all top 5 #### Cited in 477 Serials 300 Journal of Computational and Applied Mathematics 294 Journal of Approximation Theory 223 Computer Aided Geometric Design 145 Applied Mathematics and Computation 133 Computer Methods in Applied Mechanics and Engineering 124 Numerische Mathematik 109 Journal of Computational Physics 99 Mathematics of Computation 97 Applied and Computational Harmonic Analysis 91 Computers & Mathematics with Applications 91 Computational Statistics and Data Analysis 88 Linear Algebra and its Applications 75 Journal of Mathematical Analysis and Applications 73 Constructive Approximation 64 Advances in Computational Mathematics 59 Applied Numerical Mathematics 59 Numerical Algorithms 53 BIT 42 Computing 41 Journal of Multivariate Analysis 39 Journal of Statistical Planning and Inference 33 International Journal for Numerical Methods in Engineering 32 Computer Physics Communications 32 Mathematics and Computers in Simulation 31 The Journal of Fourier Analysis and Applications 31 Journal of Nonparametric Statistics 28 The Annals of Statistics 28 Transactions of the American Mathematical Society 28 International Journal of Computer Mathematics 27 Proceedings of the American Mathematical Society 26 Computational Statistics 24 Applied Mathematical Modelling 22 Journal of Scientific Computing 22 Electronic Journal of Statistics 21 Statistics & Probability Letters 21 Journal of Statistical Computation and Simulation 19 Calcolo 19 Statistics and Computing 18 Journal of Optimization Theory and Applications 18 Numerical Functional Analysis and Optimization 18 European Journal of Operational Research 18 International Journal of Wavelets, Multiresolution and Information Processing 17 Computers and Fluids 17 Journal of Functional Analysis 17 Communications in Statistics. Theory and Methods 17 Engineering Analysis with Boundary Elements 16 International Journal for Numerical Methods in Fluids 15 Annals of the Institute of Statistical Mathematics 15 Communications in Statistics. Simulation and Computation 15 Test 14 Applied Mathematics Letters 14 SIAM Journal on Scientific Computing 13 Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica 13 Automatica 13 Science China. Mathematics 12 Applicable Analysis 12 Journal of Complexity 12 Journal of Systems Science and Complexity 11 Journal of the American Statistical Association 11 SIAM Journal on Numerical Analysis 11 Acta Mathematica Sinica. English Series 10 Bulletin of the Australian Mathematical Society 10 Results in Mathematics 10 Computational Mechanics 10 Statistical Papers 9 Mathematical Biosciences 9 Journal of Econometrics 9 Numerical Methods for Partial Differential Equations 9 Applied Mathematics. Series B (English Edition) 9 Journal of Applied Mathematics and Computing 9 Proceedings of the Steklov Institute of Mathematics 8 Linear and Multilinear Algebra 8 Mathematical Notes 8 Mathematische Zeitschrift 8 Acta Applicandae Mathematicae 8 Applications of Mathematics 8 Computational Mathematics and Mathematical Physics 8 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 8 BIT. Nordisk Tidskrift for Informationsbehandling 8 The Annals of Applied Statistics 7 International Journal of Systems Science 7 ZAMP. 
Zeitschrift für angewandte Mathematik und Physik 7 Acta Mathematica Hungarica 7 Acta Mathematicae Applicatae Sinica. English Series 7 Mathematical and Computer Modelling 7 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 7 Journal of Mathematical Sciences (New York) 7 Journal of Applied Statistics 7 Mediterranean Journal of Mathematics 6 Acta Mechanica 6 International Journal of Control 6 Journal of the Franklin Institute 6 Journal of Mathematical Physics 6 Metrika 6 Psychometrika 6 Scandinavian Journal of Statistics 6 Ukrainian Mathematical Journal 6 Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM) 6 Advances in Mathematics 6 Fuzzy Sets and Systems ...and 377 more Serials all top 5 #### Cited in 59 Fields 2,271 Numerical analysis (65-XX) 1,307 Approximations and expansions (41-XX) 648 Statistics (62-XX) 398 Harmonic analysis on Euclidean spaces (42-XX) 285 Partial differential equations (35-XX) 204 Ordinary differential equations (34-XX) 178 Mechanics of deformable solids (74-XX) 174 Fluid mechanics (76-XX) 153 Computer science (68-XX) 118 Linear and multilinear algebra; matrix theory (15-XX) 118 Functional analysis (46-XX) 106 Operations research, mathematical programming (90-XX) 103 Integral equations (45-XX) 102 Information and communication theory, circuits (94-XX) 98 Operator theory (47-XX) 76 Systems theory; control (93-XX) 74 Biology and other natural sciences (92-XX) 73 Probability theory and stochastic processes (60-XX) 71 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 60 Calculus of variations and optimal control; optimization (49-XX) 51 Real functions (26-XX) 41 Dynamical systems and ergodic theory (37-XX) 40 Combinatorics (05-XX) 40 Mechanics of particles and systems (70-XX) 38 Commutative algebra (13-XX) 38 Special functions (33-XX) 37 Quantum theory (81-XX) 35 Difference and functional equations (39-XX) 34 Functions of a complex variable (30-XX) 33 Differential geometry (53-XX) 31 Statistical mechanics, structure of matter (82-XX) 30 Geophysics (86-XX) 28 Abstract harmonic analysis (43-XX) 27 Classical thermodynamics, heat transfer (80-XX) 25 Optics, electromagnetic theory (78-XX) 22 Convex and discrete geometry (52-XX) 17 Number theory (11-XX) 16 Algebraic geometry (14-XX) 15 Integral transforms, operational calculus (44-XX) 14 History and biography (01-XX) 13 Field theory and polynomials (12-XX) 13 Potential theory (31-XX) 11 Geometry (51-XX) 10 Topological groups, Lie groups (22-XX) 8 Global analysis, analysis on manifolds (58-XX) 7 General and overarching topics; collections (00-XX) 7 Group theory and generalizations (20-XX) 7 Measure and integration (28-XX) 6 Associative rings and algebras (16-XX) 6 Several complex variables and analytic spaces (32-XX) 6 Astronomy and astrophysics (85-XX) 5 Sequences, series, summability (40-XX) 4 Mathematical logic and foundations (03-XX) 2 $$K$$-theory (19-XX) 2 General topology (54-XX) 2 Relativity and gravitational theory (83-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Nonassociative rings and algebras (17-XX) 1 Mathematics education (97-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
2021-06-21T23:40:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6096513867378235, "perplexity": 8810.041704920055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00051.warc.gz"}
https://par.nsf.gov/biblio/10364681-how-obtain-redshift-distribution-from-probabilistic-redshift-estimates
How to Obtain the Redshift Distribution from Probabilistic Redshift Estimates Abstract A reliable estimate of the redshift distribution n(z) is crucial for using weak gravitational lensing and large-scale structures of galaxy catalogs to study cosmology. Spectroscopic redshifts for the dim and numerous galaxies of next-generation weak-lensing surveys are expected to be unavailable, making photometric redshift (photo-z) probability density functions (PDFs) the next best alternative for comprehensively encapsulating the nontrivial systematics affecting photo-z point estimation. The established stacked estimator of n(z) avoids reducing photo-z PDFs to point estimates but yields a systematically biased estimate of n(z) that worsens with a decreasing signal-to-noise ratio, the very regime where photo-z PDFs are most necessary. We introduce Cosmological Hierarchical Inference with Probabilistic Photometric Redshifts (CHIPPR), a statistically rigorous probabilistic graphical model of redshift-dependent photometry that correctly propagates the redshift uncertainty information beyond the best-fit estimator of n(z) produced by traditional procedures and is provably the only self-consistent way to recover n(z) from photo-z PDFs. We present the chippr prototype code, noting that the mathematically justifiable approach incurs computational cost. The CHIPPR approach is applicable to any one-point statistic of any random variable, provided the prior probability density used to produce the posteriors is explicitly known; if the prior is implicit, as may be the case for popular photo-z techniques, then the resulting posterior PDFs cannot be used for … Authors: ; Publication Date: NSF-PAR ID: 10364681 Journal Name: The Astrophysical Journal Volume: 928 Issue: 2 Page Range or eLocation-ID: Article No. 127 ISSN: 0004-637X Publisher: DOI PREFIX: 10.3847
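The contrast the abstract draws between stacking and hierarchical inference can be sketched numerically. The snippet below is not the chippr code and does not claim to reproduce the paper's implementation; it is a minimal illustration, assuming each galaxy's photo-z posterior is tabulated on a shared redshift grid and that the explicit interim prior used to produce those posteriors is known. All function and variable names are invented for the example.

```python
import numpy as np

def log_hyperlikelihood(nz, z_grid, interim_posteriors, interim_prior):
    """Log-likelihood of a candidate redshift distribution n(z), given
    per-galaxy interim posterior PDFs p_i(z) tabulated on z_grid and the
    explicit interim prior pi(z) under which they were produced."""
    dz = np.gradient(z_grid)
    nz = nz / np.sum(nz * dz)                    # normalise the candidate n(z)
    weights = nz / interim_prior                 # re-weight away the interim prior
    per_galaxy = np.sum(interim_posteriors * weights * dz, axis=1)
    return np.sum(np.log(per_galaxy))

# The stacked estimator, by contrast, simply averages the PDFs:
# n_stacked = interim_posteriors.mean(axis=0)
```

Maximising (or sampling) such a re-weighted likelihood over candidate n(z) is what distinguishes a hierarchical approach of this kind from stacking, which ignores the interim prior entirely.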
2023-03-24T00:25:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8288502097129822, "perplexity": 5906.196173852713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00223.warc.gz"}
https://www.anl.gov/article/chain-reaction-innovations-announces-expanded-call-for-applications-to-join-its-4th-cohort-of
# Chain Reaction Innovations announces expanded call for applications to join its 4th Cohort of innovators at Argonne Chain Reaction Innovations, the entrepreneurship program at Argonne National Laboratory, will now be accepting applications in any technology area that can leverage the vast resources available at Argonne. Chain Reaction Innovations (CRI), the entrepreneurship program that embeds innovators for two years at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory, is expanding and will now be accepting applications in any technology area that can be accelerated to market by leveraging the vast resources available at Argonne National Laboratory. Previously, applications were limited to technologies specifically related to advanced manufacturing. The CRI program supports the next generation of energy entrepreneurs moving their energy innovations to market, choosing members of each new cohort with an annual call for applications. The application period will open on Sept. 17, 2019, and will run through 5 p.m. Central Daylight Time on Oct. 31, 2019. “Basically, we’re opening this year’s call to encourage applications from anyone who feels Argonne can help them de-risk their technologies more efficiently than can be accomplished in the private sector,” said Adria Wilson, CRI’s entrepreneurial program lead. This follows a number of successful outcomes from CRI’s first graduating cohort of innovators. To date, CRI startups have raised more than $12.5 million in funding and created over 60 jobs, while working with world-class scientists and resources at Argonne. Examples of new technology areas that are synergistic with Argonne strengths include innovations related to electrification of the economy (smart grid, grid reliability and resiliency, and mobility), nanotechnology, advanced materials, water-energy nexus and advanced simulation to support the optimization of a variety of technologies. “Argonne has a long history of expertise in advanced materials, catalysis, transportation and [also] in nanotech, and is also actively developing new strengths in edge computing, big data, quantum computing, etc. We’re interested in getting applications from innovators working in all these spaces,” said CRI director John Carlisle. Currently CRI is supported by the Advanced Manufacturing Office of the U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE). CRI will begin accepting applications September 17 and will close the solicitation on October 31. Semi-finalists for Cohort 4 will be selected in early January 2020; finalists will present their technologies at a pitch competition held at Argonne in early February. The new cohort will join the program from June 2020 – May 2022. CRI will host a series of three informational webinars in August, September and October for interested applicants to learn more about the program and the availability of resources at Argonne that can potentially accelerate the development of their technologies. If selected, innovators will receive support including salary, benefits, travel and $220,000 in support of technical work at Argonne.
CRI also provides innovators with mentors in the Chicago energy ecosystem, including the Polsky Center for Entrepreneurship and Innovation at the University of Chicago; mHUB, the innovation center for physical product development and manufacturing; and others. More information about applying to the program can be found at http://chainreaction.anl.gov/apply/. Chain Reaction Innovations provides innovators with the laboratory tools, seed capital, and collaborators needed to grow their early-stage technologies to enable them to attract the long-term capital and commercial partners needed to scale and launch into the marketplace. CRI is part of the Lab-Embedded Entrepreneurship Programs from the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE). EERE created the Lab-Embedded Entrepreneurship Programs to provide an institutional home for innovative postdoctoral researchers to build their research into products and train to be entrepreneurs. The two-year program for each innovator is funded by EERE’s Advanced Manufacturing Office (AMO). The Office of Energy Efficiency and Renewable Energy supports early-stage research and development of energy efficiency and renewable energy technologies to strengthen U.S. economic growth, energy security, and environmental quality. Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
2020-01-24T14:24:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24263420701026917, "perplexity": 3783.4939603045177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250620381.59/warc/CC-MAIN-20200124130719-20200124155719-00035.warc.gz"}
https://docs.nersc.gov/current/
# Current known issues ## Perlmutter **Perlmutter is not a production resource.** Perlmutter is not a production resource and usage is not charged against your allocation of time. While we will attempt to make the system available to users as much as possible, it is subject to unannounced and unexpected outages, reconfigurations, and periods of restricted access. Please visit the timeline page for more information about changes we've made in our recent upgrades. NERSC has automated monitoring that tracks failed nodes, so please only open tickets for node failures if the node consistently has poor performance relative to other nodes in the job or if the node repeatedly causes your jobs to fail. ### New issues • Users will encounter problems linking CUDA math libraries (cufft.h, cusolver.h, etc.) with any CUDAToolkit. A temporary workaround is to prepend the `$CPATH` or the `$CMAKE_PREFIX_PATH` (if using CMake) to point to the math_libs folder (note: change the CUDA compiler version to match your CUDAToolkit; below is what you need for cudatoolkit/21.9_11.4): `export CPATH=/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/include:$CPATH` • To prepend your CMake path: `export CMAKE_PREFIX_PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/21.9/math_libs/11.4:$CMAKE_PREFIX_PATH` • Spack on Perlmutter has gotten out-of-date with reference to the installed compilers and other PE components, and consequently mostly does not work at the moment. We're working to update the configuration. ### Ongoing issues • MPI users may hit segmentation fault errors when trying to launch an MPI job with many ranks due to incorrect allocation of GPU memory. We provide more information and a suggested workaround. • Some users may see messages like `-bash: /usr/common/usg/bin/nersc_host: No such file or directory` when you log in. This means you have outdated dotfiles that need to be updated. To stop this message, you can either delete this line from your dotfiles or check if NERSC_HOST is set before overwriting it. Please see our environment page for more details. • Known issues for Machine Learning applications • Nodes on Perlmutter currently do not get a constant hostid (IP address) response. • collabsu is not available. Please create a direct login with sshproxy to log in to Perlmutter, or switch to a collaboration account on Cori and then log in to Perlmutter. **Be careful with NVIDIA Unified Memory to avoid crashing nodes.** In your code, NVIDIA Unified Memory might look something like `cudaMallocManaged`. At the moment, we do not have the ability to control this kind of memory and keep it under a safe limit. Users who allocate a large pool of this kind of memory may end up crashing nodes if the UVM memory does not leave enough room for necessary system tools like our filesystem client. We expect a fix in early 2022. In the meantime, please keep the size of memory pools allocated via UVM relatively small. If you have questions about this, please contact us. ## Cori The Burst Buffer on Cori has a number of known issues, documented at Cori Burst Buffer.
2022-01-23T18:27:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21115170419216156, "perplexity": 3232.0830263857406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00701.warc.gz"}
https://gea.esac.esa.int/archive/documentation/GDR1/Data_processing/chap_astpre/sec_cu3pre_process.html
# 2.4 Processing steps Author(s): Claus Fabricius As mentioned above (Section 2.1.1) the pre-processing is run in almost real time on a daily basis, as well as much later during each data processing cycle. Processing steps are of course somewhat different in the two cases, and not all of them need to be repeated cyclically. The demand of always being up to date in the daily processing leads to a complication when the processing has been stopped for some days due to maintenance, or when the data volume is very high, as happens when the spin axis is close to the Galactic poles. The adopted solution in some cases is to skip some processing steps for data of lower priority in these specific situations, and essentially postpone the full treatment to the cyclic pre-processing. This is relevant for Gaia DR1, which is based on the daily pre-processing. ## 2.4.1 Overview Author(s): Claus Fabricius The major complication for the daily pre-processing is the ambition to process a given time span before all the telemetry has arrived at the processing centre, and without knowing for sure if data that appears to be missing will in fact ever arrive. The driver is the wish to keep a close eye on the instrument and to issue alerts on interesting new sources. As a rule, housekeeping telemetry and the so-called auxiliary science data (ASD, see Section 2.2.2) are sent to ground first, followed by the actual observations for selected magnitude ranges. This is meant to be the minimum required for the monitoring tasks. The other magnitude ranges follow later, unless memory becomes short on board and the data in the down-link queue are overwritten. Data can be received at different ground stations, and may also for that reason arrive unordered at the processing centre. The daily pre-processing is known as the Initial Data Treatment (IDT) and is described below (see Section 2.4.2). It is followed immediately by a quality assessment and validation known as First-Look (FL, see Section 2.5.1 and Section 2.5.2), which takes care of the monitoring tasks and the daily calibrations. The cyclic pre-processing, known as the Intermediate Data Updating (see Section 2.4.2), runs over well-defined data sets and can be executed in a much more orderly manner. It consists of three major tasks, viz. calibrations, image parameter determination, and crossmatch, as well as several minor tasks as described below. ## 2.4.2 Daily and cyclic processing Author(s): Jordi Portell, Claus Fabricius, Javier Castañeda As previously explained, there are good reasons for performing a daily pre-processing, even if that leads to not fully consistent data outputs. That is fixed regularly in the cyclic pre-processing task. Some algorithms and tasks are only run in the daily systems (mainly the raw data reconstruction, which feeds all of the DPAC systems), whereas other tasks can only reliably run on a cyclic basis over the accumulated data. There are also intermediate cases, that is, tasks that must run on a daily basis but over quite consolidated inputs. That is achieved by means of the First-Look (FL) system, which is able to generate some preliminary calibrations and detailed diagnostics. Table 2.6 provides an overview of the main tasks executed in these two types of data pre-processing systems. Please note that most of the ‘Final’ tasks mentioned in IDU are not included in the present release: only the crossmatch is included. The final determination of spectro-photometric image parameters (that is, BP/RP processing) is done in PhotPipe (see Section 5).
### Initial Data Treatment (IDT) IDT includes several major tasks. It must establish a first on-ground attitude (see Section 2.4.5), to know where the telescopes are pointing at every moment; it must calibrate the bias, and it must calibrate the sky background (see Section 2.4.6). Only with those pieces in place can it start attacking the actual observations. For the observations, the first thing is to reconstruct all relevant circumstances of the data acquisition, as explained in Section 2.4.3. From the BP and RP windows we can determine a source colour, and then proceed to determine the image parameters (see Section 2.4.8). The final step of IDT is the crossmatch between the on-board detections and a catalogue of astronomical sources, having filtered detections deemed spurious (see Section 2.4.9). One catalogue source is assigned to each detection, and if none is found, a new source is added. ### Intermediate Data Updating (IDU) The Intermediate Data Updating (IDU) is the most demanding instrument calibration and data reduction system across DPAC in terms of data volume and processing power. IDU includes some of the most challenging Gaia calibration tasks and aims to provide: • Updated crossmatch table using the latest attitude, geometric calibration and source catalogue available. • Updated calibrations for CCD bias and astrophysical background (see Section 2.4.6). • Updated instrument LSF/PSF model (see Section 2.3.2). • Updated astrometric image parameters: locations and fluxes (see Section 2.4.8). All these tasks have been integrated in the same system due to the strong relation between them. They are also run in the same environment, the Marenostrum supercomputer hosted by the Barcelona Supercomputing Centre (BSC) (Spain). This symbiosis facilitates the delivery of suitable observations to the calibrations, and of calibration data to IDU tasks. As anticipated in Section 2.1.1, IDU plays an essential role in the iterative data reduction; the successive iterations between IDU, AGIS and PhotPipe (as shown in Figure 2.10) are what will make it possible to achieve the high accuracies envisaged for the final Gaia catalogue. Fundamentally, IDU incorporates the astrometric solution from AGIS, resulting in an improved crossmatch, and also incorporates the photometric solution from PhotPipe within the LSF/PSF model calibration, obtaining improved image parameters. These improved results are the starting point for the next iterative reduction loop. Without IDU, Gaia would not be able to provide the envisaged accuracies, and its presence is key to obtaining the optimum convergence of the iterative process on which all the data processing of the spacecraft is based. ## 2.4.3 Raw data reconstruction Author(s): Javier Castañeda The raw data reconstruction establishes the detailed circumstances for each observation, including SM, AF, BP, and RP windows of normal observations, RVS windows for some brighter sources, as well as the BAM windows. The result is stored in persistent raw data records, separately for SM and AF (into AstroObservations); for BP and RP (into PhotoObservations); for RVS (into SpectroObservations); and for BAM (into BamObservations). These records need no later updates and are therefore only created in IDT. The telemetry star packets with the individual observations include of course the samples of each window, but do not include several vital pieces of information, e.g.
the AC position of each window line, if some lines of the window are gated, or if there is a charge injection within or close to the window. These details, which are common to many observations, are instead sent as auxiliary science data (ASD). As previously described, there are several kinds of ASD files. ASD1 files detail the AC offsets for each CCD for each telescope. These offsets give the AC positions of window lines in the CCD at a given instant, relative to the position of the window in AF1. Due to the precession of the spin axis, the stellar images will have a drift in the AC direction, which can reach 4–5 pixels while transiting a single CCD. This shift changes during a revolution and must therefore be updated regularly. When an update occurs, it affects all window lines immediately, but differently for the two telescopes. Windows may therefore end up with a non-rectangular shape, or windows may suddenly enter into conflict with a window from the other telescope. The regular charge injections for AF and BP/RP CCDs are recorded in the ASD5 records. IDT must then determine the situation of each window with respect to the most recent charge injection. This task has an added twist, because charge injections that encounter a closed gate will be held back for a while, and actually diluted. Also the gating is recorded in an ASD file, and here IDT must determine the gate corresponding to each window line. The detection causing the gate will have the same gate activated for the full window, but other sources observed around the same time will have gating in only a part of their windows. Any awkward combination may occur. An added twist is that the samples immediately after the release of a gate will be contaminated by the charge held back by the gate, and are therefore useless. ## 2.4.4 Basic angle variation determination Author(s): Alcione Mora The Gaia measurement principle is that differences in the transit time between stars observed by each telescope can be translated into angular measurements. All these measurements are affected if the basic angle (the angle between telescopes, $\Gamma=106.5^{\circ}$) is variable. Either it needs to be stable, or its variations must be known at the mission accuracy level (of order $\mu$as). Gaia is largely self-calibrating (calibration parameters are estimated from observations). Low-frequency variations ($f<1/(2P_{\rm rot})$) can be fully eliminated by self-calibration. High-frequency random variations are also not a concern because they are averaged during all transits. However, intermediate-frequency variations are difficult to eliminate by self-calibration, especially if they are synchronised with the spacecraft spin phase, and the residuals can introduce systematic errors in the astrometric results (Michalik and Lindegren 2016, Sect. 2). Thus, such intermediate-frequency changes need to be monitored by metrology. The BAM device is continuously measuring differential changes in the basic angle. It basically generates one artificial fixed star per telescope by introducing two collimated laser beams into each primary mirror (see Fig. 2.11). The BAM is composed of two optical benches in charge of producing the interference pattern for each telescope. A number of optical fibres, polarisers, beam splitters and mirrors are used to generate all four beams from one common light source. See Gielesen et al. (2012) for further details.
Each Gaia telescope then generates an image on the same dedicated BAM CCD, which is an interference pattern due to the coherent input light source. The relative AL displacement between the two fringe patterns is a direct measurement of the basic-angle variations. A detailed description of the BAM data model, the data collection, fitting, and daily processing is given in Section 7 of Fabricius et al. (2016).

## 2.4.5 On-ground attitude reconstruction (OGA1 & 2)

Author(s): David Hobbs, Michael Biermann, Jordi Portell

The processing of attitude telemetry from the Gaia spacecraft is unique due to the high accuracy requirements of the mission. Normally, the attitude measured on board by the star trackers, in the form of attitude quaternions, would be sufficient for the scientific data reduction, perhaps requiring some degree of smoothing and improvement before use. This raw attitude is accurate to the order of a few arcseconds (${}^{\prime\prime}$), but for the Gaia mission an attitude accurate to a few tens of $\mu$as is required. This is achieved through a series of processing steps as illustrated in Figure 2.12.

The raw attitude is received in TM and stored in the IDTFL database. This is then available to IDT, which performs the IOGA step: a set of B-spline coefficients is fitted to the available TM, resulting in an array of B-spline coefficients and the associated knot times (see Section 3.3.4). The output from IOGA can then be used as the input to OGA1, which is a Kalman filter designed to smooth the attitude and to improve its accuracy to the order of 50 milli-arcseconds (mas). At this point more Gaia-specific processing begins. For Gaia, a First Look (FL) process is employed to do a direct astrometric solution on a single day's worth of data, known as the One Day Astrometric Solution (ODAS). This is basically a quality check on the Gaia data, but it also results in an order of magnitude improvement in the attitude accuracy and will be available in the form of B-splines and quaternions. The results of this process, known as OGA2, were the intended nominal input to AGIS, although it would also be possible to use OGA1 from IDT as input to AGIS. However, in practice, mainly due to data gaps and discontinuities between OGA1 segments, it has been found that a simple spline fit to the commanded attitude is sufficient for initializing AGIS processing. AGIS is the final step in the attitude improvement, where all the available observations for primary stars are used together with the available attitude and calibration parameters to iteratively arrive at the final solution with the targeted accuracy. This final AGIS attitude is referred to as the On-Ground-Attitude-3 (OGA3).

The attitude-related tasks in IDT are (see Section 2.4.2):

• ingest the ancillary science data (ASD), star packets (SP1) for the brightest detections ($G<14$) and raw attitude from the IDTFL DB, noting the time intervals covered;
• compute the Initial OGA (IOGA) for suitable time intervals;
• extract a list of sources from the Attitude Star Catalogue using IOGA, and identify (crossmatch) those sources corresponding to the mentioned bright detections;
• determine OGA1 by correcting IOGA with the match distances to the catalogue by means of an Extended Kalman Filter (EKF).

### IOGA

In IDT the raw attitude values from the AOCS are processed to obtain a mathematical representation of the attitude as a set of spline coefficients. The details of the spline fitting are outlined in Appendix A of the AGIS paper (Lindegren et al. 2012).
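The following is a minimal sketch — not the actual IOGA implementation, whose spline formulation is that of Lindegren et al. (2012), Appendix A — of the basic idea: fit one smoothing B-spline per quaternion component to noisy raw attitude samples and re-normalise the interpolated quaternion. The toy spin rate, noise level and smoothing factor are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)

# Toy "raw attitude": a uniform rotation about the z axis sampled at 1 Hz, with a
# few arcseconds of noise injected into the quaternion components (an attitude
# error of angle d corresponds to component perturbations of order d/2 radians).
t = np.arange(0.0, 600.0, 1.0)                       # seconds
spin = 2.0 * np.pi * t / 21600.0                     # 6 h spin period
q_true = np.column_stack([np.zeros_like(t), np.zeros_like(t),
                          np.sin(spin / 2.0), np.cos(spin / 2.0)])
sigma = 0.5 * np.deg2rad(3.0 / 3600.0)               # ~3 arcsec attitude noise
q_raw = q_true + rng.normal(0.0, sigma, q_true.shape)

# One cubic smoothing B-spline per quaternion component; the smoothing factor s is
# matched to the expected residual sum of squares for len(t) samples of scatter sigma.
tck = [splrep(t, q_raw[:, i], k=3, s=len(t) * sigma**2) for i in range(4)]

# Evaluate the smoothed attitude on a finer grid and re-normalise each quaternion,
# since independent per-component fits do not preserve the unit norm exactly.
t_fine = np.linspace(t[0], t[-1], 5 * t.size)
q_fit = np.column_stack([splev(t_fine, c) for c in tck])
q_fit /= np.linalg.norm(q_fit, axis=1, keepdims=True)
```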
The result of this fitting process is the Initial OGA (IOGA). The time intervals processed can be defined by natural boundaries, like interruptions in the observations, e.g. due to micro-meteorites. The boundaries can also be defined by practical circumstances, like the end of a data transmission contact, or the need to start processing. Using IOGA, a list of sources is extracted from the Attitude Star Catalogue (ASC) in the bands covered by Gaia during the time interval being processed. The ASC will in the early phases of the mission be a subset of the IGSL, but can later be replaced by stars from the MDB catalogue. This allows the next process, OGA1, to run efficiently, knowing in advance if a given observation is likely to belong to an ASC star.

### OGA1

The main objective of OGA1 is to reconstruct the non-real-time First On-Ground Attitude (OGA1) for the Gaia mission with very high accuracy for further processing. The accuracy requirements for the OGA1 determination (along and across scan) can be set to 50 milliarcsec for the first 9 months, to be improved later on in the mission to 5 mas.

OGA1 relies on an extended Kalman filter (KF) to estimate the orientation, $\mathbf{q}$, and angular velocity, $\boldsymbol{\omega}$, of the spacecraft with respect to the Satellite Reference System (SRS), defining the state vector

$\boldsymbol{x}=\left(\begin{matrix}\mathbf{q}\\ \boldsymbol{\omega}\end{matrix}\right)\,.$ (2.16)

#### System model

The system model is fully described by two sets of differential equations, the first one describing the satellite's attitude following the quaternion representation

$\mathbf{\dot{q}}(t)=\frac{1}{2}\boldsymbol{\Omega}(\boldsymbol{\omega})\mathbf{q}(t)$ (2.17)

where

$\boldsymbol{\Omega}(\boldsymbol{\omega})=\begin{bmatrix}0&\omega_{z}&-\omega_{y}&\omega_{x}\\ -\omega_{z}&0&\omega_{x}&\omega_{y}\\ \omega_{y}&-\omega_{x}&0&\omega_{z}\\ -\omega_{x}&-\omega_{y}&-\omega_{z}&0\end{bmatrix}$ (2.18)

and the second one using Euler's equations

$\dot{\boldsymbol{\omega}}(t)=I_{\rm sc}^{-1}(T_{\rm e}-\boldsymbol{\omega}\times I_{\rm sc}\boldsymbol{\omega})$ (2.19)

where $I_{\rm sc}$ is the moment of inertia of the satellite and $T_{\rm e}$ is the total of the disturbance and control torques acting on the spacecraft. The satellite is assumed to be represented as a freely rotating rigid body, which implies setting the external torques to zero in Equation 2.19. If this simplification does not work well enough to reconstruct the Gaia attitude, then the proper $T_{\rm e}$ required to follow the NSL should be taken into account.

#### Process and measurement model

The process model predicts the evolution of the state vector $\boldsymbol{x}$ and describes the influence of a random variable $\mu(t)$, the process noise. For non-linear systems, the process dynamics is described as follows:

$\dot{\boldsymbol{x}}(t)=\boldsymbol{f}(\boldsymbol{x}(t),t)+\boldsymbol{G}(\boldsymbol{x}(t),t)\,\mu(t)$ (2.20)

where $\boldsymbol{f}$ and $\boldsymbol{G}$ are functions defining the system properties. For OGA1, $\boldsymbol{f}$ is given by Equation 2.17 and Equation 2.19, and $\mu(t)$ is a discrete Gaussian white noise process with variance matrix $\boldsymbol{Q}(t)$:

$\mu(t)\sim N(0,\boldsymbol{Q}(t))\,.$ (2.21)

The measurement model relates the measurement value $\boldsymbol{y}$ to the value of the state vector $\boldsymbol{x}$ and also describes the influence of a random variable $\nu(t)$, the measurement noise of the measured value.
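Before the measurement model is written out explicitly, the system model above (Equations 2.17–2.19) can be illustrated with a small numerical sketch that propagates the seven-element state of a free rigid body. The inertia tensor and initial conditions below are placeholders, not the real Gaia values, and the integrator is a straightforward fourth-order Runge–Kutta step.

```python
import numpy as np

def omega_matrix(w):
    """The 4x4 matrix Omega(omega) of Equation 2.18 (scalar-last quaternion)."""
    wx, wy, wz = w
    return np.array([[0.0,  wz, -wy,  wx],
                     [-wz, 0.0,  wx,  wy],
                     [ wy, -wx, 0.0,  wz],
                     [-wx, -wy, -wz, 0.0]])

def state_derivative(x, I_sc, T_e=np.zeros(3)):
    """Equations 2.17 and 2.19: time derivative of the state (q, omega)."""
    q, w = x[:4], x[4:]
    dq = 0.5 * omega_matrix(w) @ q
    dw = np.linalg.solve(I_sc, T_e - np.cross(w, I_sc @ w))
    return np.concatenate([dq, dw])

def rk4_step(x, dt, I_sc):
    """One fourth-order Runge-Kutta step of the free rigid-body model."""
    k1 = state_derivative(x, I_sc)
    k2 = state_derivative(x + 0.5 * dt * k1, I_sc)
    k3 = state_derivative(x + 0.5 * dt * k2, I_sc)
    k4 = state_derivative(x + dt * k3, I_sc)
    x_new = x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    x_new[:4] /= np.linalg.norm(x_new[:4])   # keep the quaternion normalised
    return x_new

# Placeholder inertia tensor and initial state (illustrative numbers only).
I_sc = np.diag([7.0e3, 7.0e3, 1.0e4])        # kg m^2, not the real Gaia values
x = np.array([0.0, 0.0, 0.0, 1.0,            # identity attitude quaternion
              0.0, 0.0, np.deg2rad(60.0) / 3600.0])   # ~60 arcsec/s spin rate
for _ in range(600):                          # propagate 10 minutes at 1 s steps
    x = rk4_step(x, 1.0, I_sc)
```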
The generalized form of the model equation is:

$\boldsymbol{y}_{k}=\boldsymbol{h}(\boldsymbol{x}(t_{k}),t)+\nu(t)$ (2.22)

where $\boldsymbol{h}$ is the function defining the measurement principle, and $\nu(t)$ is a discrete Gaussian white noise process with variance matrix $\boldsymbol{R}(t)$ (with standard deviations of 0.1 mas and 0.5 mas along and across scan, respectively):

$\nu(t)\sim N(0,\boldsymbol{R}(t))\,.$ (2.23)

In order to estimate the state, the equations expressing the two models must be linearized around the current estimate ($\boldsymbol{x}^{-}_{k}$) for the propagation periods and update events, so that the KF model equations can be used. This procedure yields the following two matrices, the Jacobians of the $\boldsymbol{f}$ and $\boldsymbol{h}$ functions with respect to the state:

$\boldsymbol{F}=\left.\frac{\partial\boldsymbol{f}(\boldsymbol{x}(t),t)}{\partial\boldsymbol{x}(t)}\right|_{\boldsymbol{x}=\boldsymbol{x}^{-}_{k}}$ (2.24)

and

$\boldsymbol{H}_{k}=\left.\frac{\partial\boldsymbol{h}(\boldsymbol{x}(t),t)}{\partial\boldsymbol{x}(t_{k})}\right|_{\boldsymbol{x}=\boldsymbol{x}^{-}_{k}}\,.$ (2.25)

#### KF propagation equations

The KF propagation equations consist of two parts: the state system model and the state covariance equations. The first one,

$\dot{\boldsymbol{x}}(t)=\boldsymbol{F}(t)\boldsymbol{x}(t)+\boldsymbol{G}(t)\mu(t)$ (2.26)

can be propagated using a numerical integrator, such as the fourth-order Runge-Kutta method. The $\boldsymbol{F}$ matrix is called the transition matrix, $\boldsymbol{Q}$ the system noise covariance matrix and $\boldsymbol{G}$ the system noise covariance coupling matrix. The transition matrix can be expressed as:

$\boldsymbol{F}=\begin{bmatrix}0.5\,\boldsymbol{\Omega}(\boldsymbol{\omega})&0.5\,\boldsymbol{\Theta}(\mathbf{q})\\ 0_{3\times 4}&\boldsymbol{F}_{\dot{\boldsymbol{\omega}}\boldsymbol{\omega}}\end{bmatrix}$ (2.27)

where

$\boldsymbol{\Theta}(\mathbf{q})=\begin{bmatrix}q_{w}&-q_{z}&q_{y}\\ q_{z}&q_{w}&-q_{x}\\ -q_{y}&q_{x}&q_{w}\\ -q_{x}&-q_{y}&-q_{z}\end{bmatrix}$ (2.28)

and

$\boldsymbol{F}_{\dot{\boldsymbol{\omega}}\boldsymbol{\omega}}=-\left(I_{\rm sc}^{-1}([\boldsymbol{\omega}\times]I_{\rm sc}-[(I_{\rm sc}\boldsymbol{\omega})\times])\right)\,.$ (2.29)

Here the matrix notation $[\boldsymbol{a}\times]$ represents the skew-symmetric matrix of the generic vector $\boldsymbol{a}$. For the state covariance propagation, the Riccati formulation is used:

$\dot{\boldsymbol{P}}=\boldsymbol{F}\boldsymbol{P}+\boldsymbol{P}\boldsymbol{F}^{T}+\boldsymbol{G}\boldsymbol{Q}\boldsymbol{G}^{T}\,.$ (2.30)

Its prediction can be carried out through the application of the fundamental matrix $\boldsymbol{\Phi}$ (i.e. a first-order approximation using the Taylor series) of $\boldsymbol{F}$, which now becomes

$\boldsymbol{\Phi}(\Delta t)\approx\boldsymbol{I}+\boldsymbol{F}\cdot\Delta t$ (2.31)

where $\Delta t$ represents the propagation step. The process noise matrix $\boldsymbol{Q}$ used for the Riccati propagation Equation 2.30 is considered to be

$\boldsymbol{G}\boldsymbol{Q}\boldsymbol{G}^{T}={\rm diag}\left(\left[(10^{-8})^{2},(10^{-8})^{2},(10^{-8})^{2},(10^{-8})^{2},(10^{-8})^{2},(10^{-8})^{2},(10^{-8})^{2}\right]\right)$ (2.32)

since OGA1 will depend more on the measurements (even if not so accurate at this stage) than on the system dynamic model.

#### KF update equations

The KF update equations correct the state and the covariance estimates with the measurements coming from the satellite.
In fact, the measurement vector $\boldsymbol{y}_{k}$ consists of the so-called measured along-scan angle $\eta_{m}$ and the measured across-scan angle $\zeta_{m}$, which are the values as read from the AF1 CCDs. On the other hand, the calculated field angles ($\boldsymbol{h}(\boldsymbol{x}_{t})=[\eta_{c},\zeta_{c}]$) are the field angles calculated from the ASC for each time of observation. The set of update equations is listed below:

$\hat{\boldsymbol{x}}^{+}_{k}=\hat{\boldsymbol{x}}^{-}_{k}+\boldsymbol{K}_{k}\left[\boldsymbol{y}_{k}-\boldsymbol{h}_{k}(\hat{\boldsymbol{x}}^{-}_{k})\right]$ (2.33)

$\boldsymbol{P}^{+}_{k}=\left[\boldsymbol{I}-\boldsymbol{K}_{k}\boldsymbol{H}_{k}(\hat{\boldsymbol{x}}^{-}_{k})\right]\boldsymbol{P}_{k}^{-}$ (2.34)

$\boldsymbol{K}^{+}_{k}=\boldsymbol{P}_{k}^{-}\boldsymbol{H}_{k}^{T}\left[\boldsymbol{H}_{k}\boldsymbol{P}_{k}^{-}\boldsymbol{H}_{k}^{T}+\boldsymbol{R}_{k}\right]^{-1}$ (2.35)

where the measurement sensitivity matrix $\boldsymbol{H}_{k}$ is given by

$\boldsymbol{H}_{k}=\begin{bmatrix}\frac{\partial\eta_{k}}{\partial\mathbf{q}}&0_{1\times 3}\\ \frac{\partial\zeta_{k}}{\partial\mathbf{q}}&0_{1\times 3}\end{bmatrix}$ (2.36)

and the measurement noise matrix $\boldsymbol{R}$ is chosen such that

$\boldsymbol{R}=\begin{bmatrix}\sigma_{\eta}^{2}&0\\ 0&\sigma_{\zeta}^{2}\end{bmatrix}$ (2.37)

The standard deviations of the field angle errors along and across scan are computed and provided by IDT.

#### The processing scheme

The OGA1 process can be divided into three main parts: input, processing and output steps.

1) The inputs are:

• Oga1Observations (OGA1 needs these in time sequence from IDT), composed essentially of:
  • transit identifiers (TransitId)
  • observation time (TObs)
  • observed field angles (FAs) including geometry calibration
• A raw attitude (IOGA), with about 7${}^{\prime\prime}$ noise (in B-splines).
• A crossmatch table with pairs of SourceId–TransitId, plus the proper direction to the star at the instant of observation.

2) The processing steps:

The OGA1 determination is a Kalman filter (KF) process, i.e. essentially an optimization loop over the individual observations, plus at the end a spline-fitting of the resulting quaternions. The main steps are:

• Sort the list of elementaries by time. Then sort the list of crossmatch sources by transit identifier, aligning them with the sorted list of elementaries. The unmatched elementary transits are simply discarded.
• Initialize the KF: interpolate the B-spline (IOGA attitude format) in order to get the first quaternion and angular velocity to start the filter. Optionally, the external torque can be reconstructed in order to have a better accuracy for the dynamical system model.
• Forward KF: for a generic time $t_{i}$, predict the attitude quaternion from the state vector at the time $t_{i-1}$ of the preceding observation.
• Backward KF: for a generic time $t_{i-1}$, predict the attitude quaternion from the state vector at the later time $t_{i}$ of the following observation.
• From the pixel coordinates, compute the field coordinates from SM and AF measurements, using a Gaia calibration file. As observed field angles, OGA1 will use the AF2 values.
• Compute the calculated field coordinates of known stars (from the ASC).
• Correct the state using the difference between the observed and the calculated measurements in the along-scan ($\Delta\eta$) and across-scan ($\Delta\zeta$) directions.
• At the end of the loop over the measurements, generate a B-spline representation for the whole time interval for output.
• A last consolidation step is carried out in order to remove any attitude spike that may appear, for example due to a wrong crossmatch record.

The OGA1 determination process was found to need backward propagation of the KF in order to sufficiently reduce the errors near the start of the time interval. This was identified as a problem during testing and is a well-known issue for Kalman filters. The problem was solved by introducing the backward filtering, which resulted in a uniform distribution of errors.

3) The outputs are the improved attitude (for each observation time $t_{i}$) in two formats:

• quaternions $\mathbf{q}_{OGA1}(t_{i})$ in the form of an array of doubles;
• a B-spline representation from the OGA1 quaternions.

The OGA1 process, being a Kalman filter, needs its measurements in strict time sequence. In order to keep the OGA1 process simple, the baseline is thus to use the CCD transits separately. OGA1 must use two-dimensional (2D) astrometric measurements from Gaia. That is, the CCD transits of stars used by OGA1 must have produced 2D windows. OGA1 furthermore requires at least about one such 2D measurement per second and per FoV, in order to obtain the required precision.

### OGA2 and ODAS source positions

The main objective of the ODAS is to produce a daily high-precision astrometric solution that is analysed by First Look Scientists in order to judge Gaia's instrument health and scientific data quality (see also Section 2.5.2). The resulting attitude reconstruction, OGA2, together with source position updates computed in the framework of the First Look system, is also used as input by the photometric and spectroscopic wavelength calibrations (for the last mission data segment in each Gaia data release). For the first two data segments, the OGA2 is accurate to the 50 mas level because it is (like OGA1) tied to the system of the ground-based catalogues, but it is precise at the sub-mas level, i.e. internally consistent except for a global rotation w.r.t. the ICRF. The OGA2 accuracy will improve during the mission with each catalogue produced by DPAC.

The computation of the OGA2 is not a separate task but one of the outputs of the ODAS (One-Day Astrometric Solution) software of the First Look System (see Section 2.5.2). This process can be divided into three main parts: input, processing and output steps:

1. The OGA2 inputs are outputs of the IDT system, namely:
• AstroElementaries with transit IDs and observation times,
• OGA1 quaternions $\mathbf{q}_{OGA1}(t_{i})$,
• a source catalogue with source IDs, and
• a crossmatch table with pairs of source IDs and transit IDs.
2. The processing steps: The attitude OGA2 is determined in one go together with daily geometric instrument calibration parameter updates and updated source positions, in the framework of the First Look ODAS system, which is a weighted least-squares method.
3. The OGA2 outputs are the B-spline representation of the improved attitude (as a function of observation time $t_{i}$).

Along with the OGA2, FL produces on a daily basis improved source positions, which are also used as input by the photometric and spectroscopic wavelength calibrations (for the last mission data segment in each Gaia data release). The accuracy and precision levels of the source positions are the same as those of the OGA2.
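To make the OGA1 filter algebra above concrete, here is a minimal numerical sketch of one measurement update following Equations 2.33–2.35, applied to a seven-element state (quaternion plus angular velocity). The covariance, Jacobian and innovation values are purely illustrative placeholders; only the measurement noise (0.1 mas along scan, 0.5 mas across scan) is taken from the text.

```python
import numpy as np

MAS = np.deg2rad(1.0 / 3.6e6)      # one milliarcsecond in radians

def ekf_update(x_pred, P_pred, y, h_x, H, R):
    """One EKF measurement update (Equations 2.33-2.35): Kalman gain,
    state correction and covariance update, with quaternion re-normalisation."""
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # gain (Eq. 2.35)
    x_upd = x_pred + K @ (y - h_x)                   # state update (Eq. 2.33)
    P_upd = (np.eye(x_pred.size) - K @ H) @ P_pred   # covariance update (Eq. 2.34)
    x_upd[:4] /= np.linalg.norm(x_upd[:4])           # keep the quaternion unit-norm
    return x_upd, P_upd

# Illustrative predicted state, covariance and field-angle sensitivities.
x_pred = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 2.9e-4])
P_pred = np.diag([1e-14, 1e-14, 1e-14, 1e-14, 1e-18, 1e-18, 1e-18])
H = np.zeros((2, 7))
H[0, :4] = [0.0, 0.0, 2.0, 0.0]                      # d(eta)/dq  (toy values)
H[1, :4] = [2.0, 0.0, 0.0, 0.0]                      # d(zeta)/dq (toy values)
R = np.diag([(0.1 * MAS) ** 2, (0.5 * MAS) ** 2])    # Eq. 2.37 with the quoted sigmas

y   = np.array([0.8 * MAS, -0.4 * MAS])              # measured field angles (toy)
h_x = np.array([0.0, 0.0])                           # field angles predicted from x_pred
x_new, P_new = ekf_update(x_pred, P_pred, y, h_x, H, R)
```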
## 2.4.6 Bias and astrophysical background determination Author(s): Nigel Hambly As mentioned previously in Section 2.3.5 concerning bias, on-ground monitoring of the electronic offset levels is enabled via pre-scan telemetry that arrives in one second bursts approximately once per hour per device. IDT simply analyses these bursts by recording robust mean and dispersion measures for each burst for each device, and low-order spline interpolation is employed to provide model offset levels at arbitrary times when processing samples from the CCDs. Regarding the offset non-uniformities mentioned previously we reiterate that only the readout-independent offset between the prescan level and the offset level during the image section part of the serial scan is corrected in IDT — all other small effects are ignored. The approach to modelling the ‘large-scale’ background is to use high priority observations to measure a two-dimensional background surface independently for each device so that model values can be provided at arbitrary along-scan time and across-scan position during downstream processing (e.g. when making astrometric and photometric measurements from all science windows). A combination of empty windows (VOs) and a subset of leading/trailing samples from faint star windows are used as the input data to a linear least-squares determination of the spline surface coefficients. The procedure is iterative to enable outlier rejection of those samples adversely affected by prompt-particle events (commonly known as ‘cosmic rays’) and other perturbing phenomena. For numerical robustness the least-squares implementation employs Householder decomposition (van Leeuwen 2007) for the matrix manipulations. Some example large-scale astrophysical background models are illustrated in Figure 2.13. Following the large-scale background determination a set of residuals for a subset of the calibrating data are saved temporarily for use downstream in the charge release calibration process which is implemented as a ‘one day calibration’ in the First Look subsystem (see later). Residuals folded by distance from last charge injection are analysed by determining the robust mean value and formal error on that value in each TDI line after the injection. The across-scan injection profile, also determined in a one day calibration employing empty windows that happen to lie over injection lines, is used to factor out the power-law dependency of release signal versus injection level. Note that in this way new calibrations of the charge injection profile and the charge release signature are produced each day. This is done to follow the assumed slow evolution in their characteristics as on-chip radiation damage accumulates. The new calibrations are fed back into the daily pipeline at regular intervals (see later) such that an up-to-date injection/release calibration is available to all processing that requires them. Figure 2.14 shows some example charge release curves typical of those during the Gaia DR1 observation period. Example across-scan charge injection profiles are shown in Figure 2.15. ## 2.4.7 Spectro-Photometric Image Parameters determination Author(s): Anthony Brown Although the BP and RP data are treated from scratch again in the photometric processing (see Section 5), a pre-processing of these data within IDT is needed in order to derive instantaneous source colour information (which may differ from the mean source colour). 
The source colours are needed in the astrometric image parameters determination (Section 2.4.8). The photometric processing is described in detail in the Gaia data release paper (see Sections 5.3 and 5.4 in Fabricius et al. 2016).

## 2.4.8 Astrometric Image Parameters determination

Author(s): Claus Fabricius, Lennart Lindegren

The image parameter determination needs to know the relevant PSF or LSF, and as the image shape depends, among other things, on the source colour, we first need to determine the colour. We start with the determination of quick and simple image parameters in AF using Tukey's bi-weight method. The resulting positions and fluxes serve two purposes. They are used as starting points for the final image parameter determination, and they are also used to propagate the image location from the AF field to the BP and RP field in order to obtain reliable colours. This process is explained in more detail in Fabricius et al. (2016), Sect. 3.3. Note, however, that for Gaia DR1 the colour dependence of the image shapes was not yet calibrated.

The final image parameters, viz. transit time, flux, and for 2D windows also the AC position, were determined with a maximum likelihood method described below. For converting the fluxes from digital units to ${\rm e}^{-}/{\rm s}$, gain factors determined before launch were used. The resulting parameters are stored as intermediate data for later use in the astrometric and photometric core processes.

### A general Maximum-Likelihood algorithm for CCD modelling

The general principle for Maximum-Likelihood (ML) fitting of arbitrary models in the presence of Poissonian noise is quite simple and can be formulated in a general framework which is independent of the precise model. In this way it should be possible to use the same fitting procedure for 1D and 2D profile fitting to CCD sample data, as well as for more complex fitting (e.g. for estimating the parameters of the LSF model). Here we outline the basic model for this framework.

#### Model of sample data

The basic input for the estimation procedure consists of data and a parametrised model. The estimation procedure will adjust the model parameters until the predicted data agree as well as possible with the observed data. At the same time it will provide an estimate of the covariance matrix of the estimated parameters and a measure of the goodness-of-fit. The ML criterion is used for the fit, which in principle requires that the probability distribution of the data is known as a function of the model parameters. In practice a simplified noise model is used; this is believed to be accurate enough and leads to simple and efficient algorithms.

Let $\{N_{k}\}$ be the sample data, $\boldsymbol{\theta}=\{\theta_{i}\}$ the model parameters, and $\{\lambda_{k}(\boldsymbol{\theta})\}$ the sample values predicted by the model for given parameters. Thus, if the model is correct and $\boldsymbol{\theta}$ are the true model parameters, we have for each $k$

$\mbox{E}(N_{k})=\lambda_{k}(\boldsymbol{\theta})$ (2.38)

Using a noise model, we have in addition

$\mbox{Var}(N_{k})=\lambda_{k}(\boldsymbol{\theta})+r^{2}$ (2.39)

where $r$ is the standard deviation of the readout noise. More precisely, the adopted continuous probability density function (pdf) for the random variable $N_{k}$ is given by

$p(N|\lambda,r)=\mbox{const}\times\frac{(\lambda+r^{2})^{N+r^{2}}}{\Gamma(N+r^{2}+1)}\,e^{-\lambda-r^{2}}$ (2.40)

valid for any real value $N\geq-r^{2}$.
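As an illustration of this sample model, the sketch below generates toy window samples whose mean and variance follow Equations 2.38–2.39: Poisson photon noise on an intensity model $\lambda_{k}(\boldsymbol{\theta})$ plus Gaussian readout noise of standard deviation $r$. The Gaussian-plus-constant intensity model and all numbers (window size, flux, background, readout noise) are placeholders, not the calibrated Gaia LSF.

```python
import numpy as np

rng = np.random.default_rng(42)

def lambda_model(theta, k, background=20.0, sigma_px=1.1):
    """Toy 1D intensity model lambda_k(theta): a Gaussian 'LSF' of fixed width on a
    flat background, with theta = (flux, location). All values in e- per sample."""
    flux, x0 = theta
    lsf = np.exp(-0.5 * ((k - x0) / sigma_px) ** 2)
    lsf /= lsf.sum()
    return background + flux * lsf

k = np.arange(12)                   # a 12-sample along-scan window (illustrative)
theta_true = (8000.0, 5.3)          # flux [e-] and location [pixels]
r = 8.0                             # readout noise [e- per sample] (placeholder)

lam = lambda_model(theta_true, k)
# Poisson photon noise plus Gaussian readout noise gives
# E(N_k) = lambda_k  and  Var(N_k) = lambda_k + r^2, as in Equations 2.38-2.39.
N = rng.poisson(lam) + rng.normal(0.0, r, size=k.size)
```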
It is assumed that $N_{k}$, $\lambda_{k}$ and $r$ are all expressed in electrons per sample (not in arbitrary AD units, voltages, or similar). In particular, $N_{k}$ is the sample value after correction for bias and gain, but including dark signal and background. The readout noise $r$ is assumed to be known; it is never one of the parameters to be estimated by the methods described here.

The functions $\lambda_{k}(\boldsymbol{\theta})$ are in principle defined by the various source, attitude and calibration models, including the LSF, PSF and CDM models. The set of parameters included in the vector $\boldsymbol{\theta}$ varies depending on the application. For example, in the 1D image centroiding algorithm $\boldsymbol{\theta}$ may consist of just two parameters representing the intensity and location of the image; in the LSF calibration process, $\boldsymbol{\theta}$ will contain the parameters (e.g. spline coefficients) defining the LSF for a particular class of stars; and so on. The intensity model $\lambda_{k}(\boldsymbol{\theta})$ is left completely open here; the only thing we need to know about it is the number of free parameters, $n=\dim(\boldsymbol{\theta})$.

#### Maximum Likelihood estimation

Given a set of sample data $\{N_{k}\}$, the ML estimation of the parameter vector $\boldsymbol{\theta}$ is done by maximizing the likelihood function

$L(\boldsymbol{\theta}|\{N_{k}\})=\prod_{k}p(N_{k}|\lambda_{k}(\boldsymbol{\theta}),r)$ (2.41)

where $p(N|\lambda,r)$ is the pdf of the sample value from the adopted noise model (Equation 2.40). Mathematically equivalent, but more convenient in practice, is to maximize the log-likelihood function

$\ell(\boldsymbol{\theta}|\{N_{k}\})=\sum_{k}\ln p(N_{k}|\lambda_{k}(\boldsymbol{\theta}),r)$ (2.42)

Using the modified Poissonian model, Equation 2.40, we have

$\ell(\boldsymbol{\theta}|\{N_{k}\})=\mbox{const}+\sum_{k}\left[(N_{k}+r^{2})\ln\left(\lambda_{k}(\boldsymbol{\theta})+r^{2}\right)-\lambda_{k}(\boldsymbol{\theta})\right]$ (2.43)

where the additive constant absorbs all terms that do not depend on $\boldsymbol{\theta}$. (Remember that $r$ is never one of the free model parameters.) The maximum of Equation 2.43 is obtained by solving the $n$ simultaneous likelihood equations

$\frac{\partial\ell(\boldsymbol{\theta}|\{N_{k}\})}{\partial\boldsymbol{\theta}}=\boldsymbol{0}$ (2.44)

Using Equation 2.43 these equations become

$\sum_{k}\frac{N_{k}-\lambda_{k}(\boldsymbol{\theta})}{\lambda_{k}(\boldsymbol{\theta})+r^{2}}\,\frac{\partial\lambda_{k}}{\partial\boldsymbol{\theta}}=\boldsymbol{0}$ (2.45)

## 2.4.9 Crossmatch (XM) processing

Author(s): Javier Castañeda

The crossmatch provides the link between the Gaia detections and the entries in the Gaia working catalogue. It consists of a single source link for each detection, and consequently a list of linked detections for each source. When a detection has more than one source candidate fulfilling the match criterion, in principle only one is linked, the principal match, while the others are registered as ambiguous matches. To facilitate the identification of working catalogue sources with existing astronomical catalogues, the crossmatch starts from an initial source list, as explained in Section 2.2.3, but this initial catalogue is far from complete. The resolution of the crossmatch will therefore often require the creation of new source entries. These new sources can be created directly from the unmatched Gaia detections.
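Closing the maximum-likelihood aside of Section 2.4.8 before the crossmatch steps are described in detail: the sketch below continues the simulated window from the previous code block and recovers flux and location by maximizing the log-likelihood of Equation 2.43 (here via a general-purpose minimizer applied to its negative; the operational implementation solves the likelihood equations with the calibrated LSF instead of this toy model).

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, N, k, r):
    """Negative of Equation 2.43, up to the theta-independent constant,
    using the toy lambda_model() and simulated samples N from the sketch above."""
    lam = lambda_model(theta, k)
    return -np.sum((N + r ** 2) * np.log(lam + r ** 2) - lam)

# Crude starting values: background-subtracted total flux and the brightest sample.
theta0 = (float(N.sum() - 20.0 * k.size), float(k[np.argmax(N)]))
fit = minimize(neg_log_likelihood, theta0, args=(N, k, r), method="Nelder-Mead")
flux_ml, x0_ml = fit.x
print(f"ML estimates: flux = {flux_ml:.0f} e-, location = {x0_ml:.3f} px")
```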
A first, preliminary crossmatching pre-processing is done on a daily basis, in IDT, to bootstrap downstream DPAC systems during the first months of the mission, as well as to process the most recent data before it reaches cyclic pre-processing in IDU. By definition, such daily crossmatching cannot be completely accurate, as some data will typically arrive with a delay of some hours or even days to IDT. On the other hand, the final crossmatching (also for the present release) is executed by IDU over the complete set of accumulated data. This provides better consistency as having all of the data available for the resolution allows a more efficient resolution of dense sky regions, multiple stars, high proper motion sources and other complex cases. Additionally in the cyclic processing, the crossmatch is revised using the improvements on the working catalogue, of the calibrations, and of the removal of spurious detections (see Section 2.4.9). Some of the crossmatching algorithms and tasks are nearly identical in the daily and cyclic executions, but the most important ones are only executed in the final crossmatching done by IDU. For the cyclic executions of the crossmatch the data volume is rather small. However the number of detections will be huge at the end of the mission, reaching $\sim 10^{11}$ records. Ideally, the crossmatch should handle all these detections in a single process, which is clearly not an efficient approach, especially when deploying the software in a computer cluster. The solution is to arrange the detections by spatial index, such as HEALPix (Górski et al. 2005), and then distribute and treat the arranged groups of detections separately. However, this solution presents some disadvantages: • Complicated treatment of detections close to the region boundaries of the adopted spatial arrangement. • Handling of detections of high proper motion stars which cannot be easily bounded to any fixed region. • Repeated accessing to time-based data such as attitude and geometric calibration from spatially distributed jobs. These issues could in principle be solved but would introduce more complexity into the software. Therefore another procedure better adapted to Gaia operations has been developed. This processing splits the crossmatch task into three different steps. Detection Processor In this first step, the input observations are processed in time order to compute the detection sky coordinates and obtain the preliminary source candidates for each individual detection. Covered in Section 2.4.9 and Section 2.4.9. Sky Partitioner This second step is in charge of grouping the results from the previous step according to the source candidates provided for each individual detection. The objective is to determine isolated groups of detections, all located in a rather small and confined sky region which are related to each other according to the source candidates. Therefore, this step does not perform any scientific processing but provides an efficient spatial data arrangement by solving any region boundary issues and high proper motion scenarios. Therefore, this stage acts as a bridge between the time-based and the final spatial-based processing. See Section 2.4.9. Match Resolver Final step where the crossmatch is resolved and the final data products are produced. 
This step is ultimately a spatially-based processing where all detections from a given isolated sky region are treated together, thus taking into account all observations of the sources within that region from the different scans. See Section 2.4.9.

In the following subsections we describe the main processing steps and algorithms involved in the crossmatching, focusing on the cyclic (final) case.

### Sky Coordinates determination

The images detected on board, in the real-time analysis of the sky mapper data, are propagated to their expected transit positions in the first strip of astrometric CCDs, AF1, i.e. their transit time and AC column are extrapolated and expressed as a reference acquisition pixel. This pixel is the key to all further on-board operations and to the identification of the transit. For consistency, the crossmatch does not use any image analysis other than the on-board detection, and is therefore based on the reference pixel of each detection, even if the actual image in AF1 may be slightly offset from it. This decision was made because, in general, we do not have the same high-resolution SM and AF1 images on ground as the ones used on board.

The first step of the crossmatch is the determination of the sky coordinates of the Gaia detections, but only for those considered genuine. As mentioned, the sky coordinates are computed using the reference acquisition pixel in AF1. The precision is therefore limited by the pixel resolution as well as by the precision of the on-board image parameter determination.

The conversion from the observed positions on the focal plane to celestial coordinates, e.g. right ascension and declination, involves several steps and reference systems as shown in Figure 2.16. The reference system for the source catalogue is the Barycentric Celestial Reference System (BCRS/ICRS), which is a quasi-inertial, relativistic reference system non-rotating with respect to distant extra-galactic objects. Gaia observations are more naturally expressed in the Centre-of-Mass Reference System (CoMRS), which is defined from the BCRS by special relativistic coordinate transformations. This system moves with the Gaia spacecraft and is defined to be kinematically non-rotating with respect to the BCRS/ICRS. BCRS is used to define the positions of the sources and to model the light propagation from the sources to Gaia. Observable proper directions towards the sources as seen by Gaia are then defined in CoMRS. The computation of observable directions requires several sorts of additional data like the Gaia orbit, solar system ephemeris, etc.

As a next step, we introduce the Scanning Reference System (SRS), which is co-moving and co-rotating with the body of the Gaia spacecraft, and is used to define the satellite attitude. Celestial coordinates in the SRS differ from those in the CoMRS only by a spatial rotation given by the attitude quaternions. The attitude used to derive the sky coordinates for the crossmatch is the initial attitude reconstruction OGA1 described in Section 2.4.5. We now introduce separate reference systems for each telescope, called the Field of View Reference Systems (FoVRS), with their origins at the centre of mass of the spacecraft and with the primary axis pointing to the optical centre of each of the fields, while the third axis coincides with that of the SRS. Spherical coordinates in this reference system, the already mentioned field angles ($\eta,\zeta$), are defined for convenience of the modelling of the observations and instruments.
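A minimal sketch of the central step in this chain — rotating a proper direction from the CoMRS into the spacecraft frame with an attitude quaternion and reading off spherical coordinates there — is given below. The quaternion convention (scalar-last, active rotation) and all numbers are illustrative assumptions; the actual sign and ordering conventions, and the further fixed rotation to each telescope's FoVRS, follow the AGIS definitions rather than this toy.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate the 3-vector v by the unit quaternion q = (qx, qy, qz, qw),
    used here as the CoMRS -> SRS frame rotation (conventions are illustrative)."""
    qv, qw = np.asarray(q[:3]), q[3]
    t = 2.0 * np.cross(qv, v)
    return v + qw * t + np.cross(qv, t)

def unit_vector(ra, dec):
    """Unit direction vector from right ascension and declination (radians)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

# Illustrative attitude: a 30 degree rotation about the spacecraft spin (z) axis.
half = 0.5 * np.deg2rad(30.0)
q_att = np.array([0.0, 0.0, np.sin(half), np.cos(half)])

u_comrs = unit_vector(np.deg2rad(250.0), np.deg2rad(-20.0))   # proper direction
u_srs = quat_rotate(q_att, u_comrs)

# Spherical coordinates in the rotated frame; the field angles (eta, zeta) of a
# given telescope follow after the additional fixed rotation about the spin axis.
lon = np.arctan2(u_srs[1], u_srs[0])
lat = np.arcsin(np.clip(u_srs[2], -1.0, 1.0))
```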
Celestial coordinates in each of the FoVRS differ from those in the SRS only by a fixed nominal spatial rotation around the spacecraft rotation axis, namely by half the basic angle of 106 degrees. Finally, and through the optical projections of each instrument, we reach the focal plane reference system (FPRS), which is the natural system for expressing the location of each CCD and each pixel. It is also convenient to extend the FPRS to express the relevant parameters of each detection, specifically the field of view, CCD, gate, and pixel. This is the Window Reference System (WRS). In practical applications, the relation between the WRS and the FoVRS must be modelled. This is done through a geometric calibration, expressed as corrections to nominal field angles as detailed in Section 3.3.5. The geometric calibration used in the daily pipeline is derived by the First-Look system in the ‘One-Day Astrometric Solution’ (ODAS), see Section 2.4.5 whereas the calibration for cyclic system is produced by AGIS. ### Scene determination The scene is in charge of providing a prediction of the objects scanned by the two fields of view of Gaia according to the spacecraft attitude and orbit, the planetary ephemeris and the source catalogue. It was originally introduced to track the illumination history of the CCDs columns for the parametrization of the CTI mitigation. However, this information is also relevant for: • The astrophysical background estimation and the LSF/PSF profile calibration, to identify the nearby sources that may be affecting a given observation. The scene can easily reveal if the transit is disturbed or polluted by a parasitic source. • The crossmatch, to identify sources that will probably not be detected directly, but still leave many spurious detections, for example from diffraction spikes or internal reflections. Therefore, the scene does not only include the sources actually scanned by both fields of view but it also identifies: • Sources without the corresponding Gaia observations. This can happen in the case of: • Very bright sources (brighter than 6th magnitude) and SSO transits not detected in the Sky Mapper (SM) or not finally confirmed in the first CCD of the Astrometric Field (AF1). • Very high proper motion SSO, detected in SM but not successfully confirmed in AF1. • High density regions where the on board resources are not able to cope with all the crossing objects. • Very close sources where the detection and acquisition of two separate observations is not feasible due to the capacity of the Video Processing Unit. • Data losses due to: on board storage overflow, data transfer issues or processing errors. • Sources falling into the edges and between CCD rows. • Sources falling out of both fields of view but so bright that they may disturb or pollute nearby observations. It must be specially noted that the scene is established not from the individual observations, but from the catalogue sources and planetary ephemeris and is therefore limited by the completeness and quality of those input tables. ### Spurious Detections identification The Gaia on-board detection software was built to detect point-like images on the SM CCDs and to autonomously discriminate star images from cosmic rays, etc. For this, parametrised criteria of the image shape are used, which need to be calibrated and tuned. There is clearly a trade-off between a high detection probability for stars at 20 mag and keeping the detections from diffraction spikes (and other disturbances) at a minimum. 
A study of the detection capability, in particular for non-saturated stars, double stars, unresolved external galaxies, and asteroids is provided by de Bruijne et al. (2015). The main problem with spurious detections arises from the fact that they are numerous (15–20% of all detections), and that each of them may lead to the creation of a (spurious) new source during the crossmatch. Therefore, a classification of the detections as either genuine or spurious is needed to only consider the former in the crossmatch. The main categories of spurious detections found in the data so far are: • Spurious detections around and along the diffraction spikes of sources brighter than approximately 16 mag. For very bright stars there may be hundreds or even thousands of spurious detections generated in a single transit, especially along the diffraction spikes in the AL direction, see Fig. 2.17 for an extreme example. • Spurious detections in one telescope originating from a very bright source in the other telescope, due to unexpected light paths and reflections within the payload. • Spurious detections from major planets. These transits can pollute large sky regions with thousands of spurious detections, see Fig. 2.18, but they can be easily removed. • Detections from extended and diffuse objects. Fig. 2.19 shows that Gaia is actually detecting not only stars but also filamentary structures of high surface brightness. These detections are not strictly spurious, but require a special treatment, and are not processed for Gaia DR1. • Duplicated detections produced from slightly asymmetric images where more than one local maximum is detected. These produce redundant observations and must be identified during the crossmatch. • Spurious detections due to cosmic rays. A few manage to get through the on-board filters, but these are relatively harmless as they happen randomly across the sky. • Spurious detections due to background noise or hot CCD columns. Most are caught on-board, so they are few and cause no serious problems. No countermeasures are yet in place for Gaia DR1 for the last two categories, but this has no impact on the published data, as these detections happen randomly on the sky and there will be no corresponding stellar images in the astrometric (AF) CCDs. For Gaia DR1 we identify spurious detections around bright source transits, either using actual Gaia detections of those or the predicted transits obtained in the scene, and we select all the detections contained within a predefined set of boxes centred on the brightest transit. The selected detections are then analysed, and they are classified as spurious if certain distance and magnitude criteria are met. These predefined boxes have been parametrised with the features and patterns seen in the actual data according to the magnitude of the source producing the spurious detections. For very bright sources (brighter than 6 mag) and for the major planets this model has been extended. For these cases, larger areas around the predicted transits are considered. Also both fields of view are scanned for possible spurious detections. Identifying spurious detections around fainter sources (down to 16 mag) is more difficult, since there are often only very few or none. In these cases, a multi-epoch treatment is required to know if a given detection is genuine or spurious – i.e. checking if more transits are in agreement and resolve to the same new source entry. 
These cases will be addressed in future data releases as the data reduction cycles progress and more information from that sky region is available.

Finally, spurious new sources can also be introduced by excursions of the on-ground attitude reconstruction used to project the detections on the sky (i.e. short intervals of large errors in OGA1), leading to misplaced detections. Therefore, the attitude is carefully analysed to identify and clean up these excursions before the crossmatch is run.

### Detection Processor

This processing step is in charge of providing an initial list of source candidates for each individual observation. The first step is the determination of the sky coordinates as described in Section 2.4.9. This step is executed in multiple tasks split by time interval blocks. All Gaia observations enter this step, with the exception of Virtual Objects and data from dedicated calibration campaigns. Also, all the observations positively classified as spurious detections are filtered out.

Once the observation sky coordinates are available, these are compared with a list of sources. In this step, the Obs–Src Match, the sources that cover the sky seen by Gaia in the time interval of each task are extracted from the Gaia catalogue. These sources are propagated with respect to parallax, proper motion, orbital motion, etc. to the relevant epoch. The candidate sources are selected based on a pure distance criterion (a minimal sketch of such a distance-based candidate search is given below). The decision to use only distance was taken because the position of a source changes slowly and predictably, whereas other parameters such as the magnitude may change in an unpredictable way. Additionally, the initial Gaia catalogue is quite heterogeneous, exhibiting different accuracies and errors, which suggests the need for a match criterion adapted to the provenance of the source data. In later stages of the mission, when the source catalogue is dominated by Gaia astrometry, this dependency can be removed and the criterion should then be updated to take advantage of the better accuracy of the detection in the along-scan direction. At that point it will be possible to use separate along- and across-scan criteria, or to use an ellipse with the major axis oriented across scan, which will benefit the resolution of the most complex cases.

A special case is the treatment of solar system object observations. The processing of these objects is the responsibility of CU4, and for this reason no special considerations have been implemented in the crossmatch. These observations will have Gaia Catalogue entries created on a daily basis by IDT and those entries will remain, so the corresponding observations will be matched again and again to their respective sources without any major impact on the other observations.

An additional process may be required when we find observations with no source candidates at all after this observation-to-source matching process. In principle this situation should be rare, as IDT has already treated all observations before IDU runs. However, unmatched observations may arise because of IDT processing failures, updates in the detection classification, updates in the source catalogue, or simply the usage of a stricter match criterion in IDU. Thus, this additional process is basically in charge of processing the unmatched observations and creating temporary sources as needed, just to remove all the unmatched observations in a second run of the source matching process.
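The sketch below illustrates a purely distance-based candidate search with astropy, pairing each detection with its nearest catalogue source and flagging the rest as unmatched. It is an illustration only: the coordinates and the match radius are toy values, and the real Obs–Src Match keeps every candidate within the criterion (and records ambiguous matches), not just the nearest neighbour shown here.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

# Toy detection positions and a toy propagated source list (ICRS, degrees).
detections = SkyCoord(ra=[10.00010, 10.05230, 250.30000] * u.deg,
                      dec=[-5.00005, -5.04990, 33.10000] * u.deg)
sources = SkyCoord(ra=[10.00000, 10.05235, 250.90000] * u.deg,
                   dec=[-5.00000, -5.04985, 33.40000] * u.deg)

MATCH_RADIUS = 1.5 * u.arcsec        # illustrative distance criterion only

idx, sep2d, _ = detections.match_to_catalog_sky(sources)
for i, (j, sep) in enumerate(zip(idx, sep2d)):
    if sep < MATCH_RADIUS:
        print(f"detection {i}: principal match -> source {j} ({sep.to(u.arcsec):.2f})")
    else:
        print(f"detection {i}: unmatched -> create a new (temporary) source")
```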
The new sources created by these tasks will ultimately be resolved (by confirmation or deletion) in the last crossmatch step. Summarising, the result of this first step is a set of MatchCandidates for the whole accumulated mission data. Each MatchCandidate corresponds to a single detection and contains a list of source candidates. Together with the MatchCandidates, an auxiliary table is also produced to track the number of links created to each source, the SourceLinksCount. Results are stored in a spatially-based structure using HEALPix (Górski et al. 2005) for convenience of the next processing steps.

### Sky Partitioner

The Sky Partitioner task is in charge of grouping the results of the Obs–Src Match according to the source candidates provided for each individual detection. The purpose of this process is to create self-contained groups of MatchCandidates. The process starts by loading all MatchCandidates for a given sky region. From the loaded entries, the unique list of matched sources is identified and the corresponding SourceLinksCount information is loaded. Once loaded, a recursive process is followed to find the isolated and self-contained groups of detections and sources. The final result of this process is a set of MatchCandidateGroups (as shown in Figure 2.20) where all the input observations are included. In summary, within a group all observations are related to each other by links to source candidates. Consequently, sources present in a given group are not present in any other group.

In early runs, there is a certain risk of ending up with unmanageably big groups. For those cases we have introduced a limit on the number of sources per group so that the processing is not stopped. The adopted approach may create spurious or duplicated sources in the overlapping area of these groups. However, as the cyclic processing progresses, these cases should disappear (groups will be reduced) due to better precision in the catalogue, improved attitude and calibration, and the adoption of a smaller match radius. So far we have not encountered any of these cases and therefore we have not reached the practical limit for the number of sources per group. After this process each MatchCandidateGroup can be processed independently of the others, as the observations and sources from two different groups do not have any relation between them.

### Crossmatch resolution

The final step of the crossmatch is the most complex, resolving the final matches and consolidating the final new sources. We distinguish three main cases to solve:

• Duplicate matches: when two (or more) detections close in time are matched to the same source. This will typically be either newly resolved binaries or spurious double detections.
• Duplicate sources: when a pair of sources from the catalogue have never been observed simultaneously, i.e. two detections have never been identified within the same time frame, but they have the same matches. This can be caused by double entries in the working catalogue.
• Unmatched observations: observations without any valid source candidate.

For the first cyclic processing, the resolution algorithm has been based on a nearest-neighbour solution where the conflict between two given observations is resolved independently from the other observations included in the group. This is a very simple and quick conflict resolution algorithm. However, this approach does not minimize the number of new sources created when more than two observations close in time have the same source as primary match.
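As a hedged illustration of that nearest-neighbour idea (and not of the actual IDU implementation, which handles many more cases), the sketch below resolves duplicate matches within a group: when several detections close in time share the same principal source, the closest one keeps the match and the others are flagged to spawn new sources.

```python
from collections import defaultdict

# Toy principal matches: (transit_id, obs_time, source_id, distance_arcsec);
# times are in arbitrary units and all values are purely illustrative.
matches = [
    (1001, 100.000, 555, 0.12),
    (1002, 100.001, 555, 0.34),   # same source, close in time -> conflict
    (1003, 100.060, 555, 0.20),   # same source, but a later scan -> no conflict
    (1004, 100.002, 777, 0.05),
]
TIME_WINDOW = 0.01                 # "close in time" threshold (illustrative)

by_source = defaultdict(list)
for m in matches:
    by_source[m[2]].append(m)

principal, new_sources = {}, []
for src, ms in by_source.items():
    for m in ms:
        clash = [n for n in ms if n is not m and abs(n[1] - m[1]) < TIME_WINDOW]
        if clash and any(n[3] < m[3] for n in clash):
            new_sources.append(m[0])       # loses the conflict: spawn a new source
        else:
            principal[m[0]] = src          # keeps the principal match

print(principal)      # {1001: 555, 1003: 555, 1004: 777}
print(new_sources)    # [1002]
```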
The crossmatch resolution algorithms in forthcoming Gaia data releases will be based on much more sophisticated approaches. In particular, the next crossmatch will use clustering solutions and algorithms where all the relations between the observations contained in each group are taken into account to generate the best possible resolution.
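Finally, looking back at the Sky Partitioner described above: the grouping of detections and candidate sources into self-contained MatchCandidateGroups is essentially a connected-components problem. The minimal union–find sketch below uses toy identifiers, not the DPAC data model, to show the idea.

```python
# Detections and their candidate sources form a graph; each connected
# component becomes one self-contained "MatchCandidateGroup".

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Toy MatchCandidates: detection id -> list of candidate source ids.
candidates = {
    "det1": ["srcA"],
    "det2": ["srcA", "srcB"],   # bridges srcA and srcB into one group
    "det3": ["srcB"],
    "det4": ["srcC"],           # isolated: its own group
}

nodes = set(candidates) | {s for cands in candidates.values() for s in cands}
parent = {n: n for n in nodes}
for det, cands in candidates.items():
    for src in cands:
        union(parent, det, src)

groups = {}
for det in candidates:
    groups.setdefault(find(parent, det), []).append(det)
print(list(groups.values()))   # [['det1', 'det2', 'det3'], ['det4']]
```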
Image credit: Pixabay--Creative Commons CC0 Bye-Bye BIPV. Hello, Co-creation of Knowledge Time for a lesson in commodities, systems integration, and hopefully a little comment on the value of jointly intentional group agency to develop solutions, nerded-down with the smooth phrasing: co-creation of knowledge. But first, the News: GreenTech media posted that: “Dow Chemical Sheds Its Solar Shingle Business”. My gut reaction was: finally, it’s about time. Dow was about the last of the niche photovoltaic (PV) manufacturers who were trying to carve out a commercial space for upper/upper-middle class homeowners to purchase conspicuous displays of wealth and/or to purchase carbon indulgences. They had a great idea born of the 1990s. It didn’t work. Why? First, because PV modules are global commodities, surprisingly similar to barrels of oil. Second, because solar adoption is now ultimately about people, working together to co-create the new identity of solar in that framework that I keep calling Solar Ecology. Let’s take a look. Let’s say you just finished dropping off your kids at the swim/soccer/theatre summer practice and you see your tank is empty (yes, you know that you still have a car or truck, and I don’t care if you ride bike to commute sometimes this is a story, shutup). When you go to the gas station, do you request a super-special “magic octane” fuel made by Apple, or do you just pump some regular in and call it a day (soon to be followed with a beer, do you know how much kids summer swim/soccer/theatre costs?). You get the regular. Why? because special gas does jack for your car, and oil or gas is a fungible commodity–meaning gas is gas is gas (or petrol is petrol is petrol). You buy regular gasoline in State College, PA, and it’s the same darn thing as in Hibbing, MN. Both are capable of mutual substitution, and both products originated as barrels of oil bought and sold on a global market. Take a look at an excerpt explaining crude oil markets from one of our energy courses taught at Penn State by Prof. Seth Blumsack (EME 801: “Energy Markets, Policy, and Regulation”): “Crude oil is one of the most economically mature commodity markets in the world. Even though most crude oil is produced by a relatively small number of companies, and often in remote locations that are very far from the point of consumption, trade in crude oil is robust and global in nature. Nearly 80% of international crude oil transactions involve delivery via waterway in supertankers. Oil traders are able to quickly redirect transactions towards markets where prices are higher. Oil and coal are global commodities that are shipped all over the world. Thus, global supply and demand determines prices for these energy sources. Events around the world can affect our prices at home for oil-based energy such as gasoline and heating oil. Oil prices are high right now because of rapidly growing demand in the developing world (primarily Asia). As demand in these places grows, more oil cargoes head towards these countries. Prices in other countries must rise as a result. Political unrest in some oil-producing nations also contributes to high prices - basically, there is a fear that political instability could shut down oil production in these countries. 
OPEC, the large oil-producing cartel, does have some ability to influence world prices, but OPEC’s influence in the world oil market is shrinking rapidly as new supplies in non-OPEC countries are discovered and developed.” –credit: Seth Blumsack In a similar fashion, solar PV modules (the blue or black panels) can make electricity from sunlight with relatively similar conversion efficiencies, and have competitive costs per unit of power conversion. Plus, you can ship a crate of PV modules around the planet as easily as a crate of smart phones. Meaning: the unit cost (dollars per peak watt) of PV is comparable globally, and everybody just wants regular PV power). Don’t have quite enough efficiency? This is flow-based systems thinking (e.g. oil is stock-based systems thinking)…just add another PV panel or two. Here’s the rub. Nobody really cares what kind of PV module you buy in the end–and really, nobody but you, your state Public Utilities Commission, and your suburban homeowners association cares if the PV is on your roof at all. Electricity is electricity is electricity. I imagine looking back in 20 years time: there will be a blip when people will have cared about publicly displaying their use of PV to generate low-carbon electricity. Even now, there is even a phenomenal contagion effect to having PV on your house and your neighbors getting PV on their house. But someday folks are going to find out that they don’t really need the PV exactly on the roof, and it will be like finding out that you don’t really need a desktop computer in your house to check your email (working in the “cloud” means email is in your smart phone, at the library, school, or any number of distributed devices). Solar PV is a Commodity During the 80s, 90s, and 00s, solar grew to be interpreted exclusively as PV (solar photovoltaics to convert light into electricity). Solar was still expensive, and the emerging goal from the top was to cram photovoltaics into the built environment–arguably using poorly construed financial and engineering arguments. Many years and many dollars went into developing “nifty” variants of PV tech for what was called BIPV (Building Integrated PV). It was like a supply-side techies wet dream. You replace the façade of your home (roof/window/wall) with a specialized PV module, and somehow reduce the cost of the PV module…(the financial analyses get very grey here, as the BIPV tech was ultimately more expensive than the commodity). Unfortunately, a systems integration message was lost along the way, and you often designed a unique device that worked worse than a traditional PV panel (overheating is undesirable in PV), and a building element that functioned worse than a traditional roof/window/wall. A snippet from my textbook Solar Energy Conversion Systems (Academic Press (2013)): One of the systems that are popular targets for integration are buildings, hence the term Building Integrated PV (BIPV). However, we should note that a building itself is composed of many complex systems such as roofs, windows, awnings, and walls–each system composed of real components that functioned together as a whole. The historical meaning of BIPV originated in the 1980s as a cost reduction strategy, whereby a designer was expected to remove a functional part of a wall system or roof system within the built environment and replace that component with a photovoltaic module. 
The exchange in cost for the removed component (such as a shingle or a pane of glass) was supposed to defray the net cost of installing the photovoltaic module–in essence, sacrificing a functional piece from the whole to make room for cost. Unfortunately, integration for better building costs does not imply integration with the goals of the system. From this perspective, a window–mounted air conditioning unit would be classified as “building integrated AC,” rather than the actual BIAC from a forced air exchange and integrated ducting… –credit J. R. S. Brownson In essence, Bill Gates and peers have been working to save the world (or to grow their own vision of the world) by proposing “energy miracles”–a better technology to capitalize upon from the top-down. Meanwhile, the rest of the world is working with what we’ve got, just making good use of what is available from the bottom-up, or deciding to not make use of a newer technology at all. A “special” kind of PV panel is no longer anyone’s vision in the solar field–because they are fungible commodities: interchangeable, substitutes abound, and there is no significant cult of product to be found in a PV module at present. And so, there are very few niche PV industries (e.g. ECD, Evergreen, or Solyndra) really left standing; the belated Dow solar shingle enterprise was ultimately just one of the later additions to the graveyard, perhaps propped up by a very large petrochemical industry parent company. For an annual update on the recently deceased players of the solar manufacturing world, see Rest in Peace: The Fallen Solar Companies of…. All is not lost in though, as one can observe when solar technology cycles (and the emergent shakeouts) are compared to semiconductor memory business cycles. While working with the Solar Decathlon team from 2007-2009, we developed a new term to be more specific, called Systems Integrative Photovoltaics. SIPV was used to describe our work in GRIPV systems (Green Roof Integrative PV), but still leaves out the dynamics of the locale and the stakeholders from solar design solutions. SIPV: Systems Integrative Photovoltaics. Coupling performance between system and surroundings, including microclimate effects of irradiance and temperature. From a sustainability systems perspective, SIPV would include increasing ecosystem resilience. As I noted in my work on Solar Energy Conversion Systems, it is well past the time to move beyond removing parts to make room for PV. There is also the deep challenge of incorporating solar awareness and the story of solar into our mindsets. So not only is solar a commodity exposed to a global market, but it also is a technology that has not experienced a significant period of social reflection as to its place in the new energy-information economy. As such, entrepreneurs and communities will really benefit from thinking about what behavioral economist Prof. Sendhil Mullainathan calls “the last mile” problem. Problems that we seem to know how to solve, but collectively don’t achieve. Here he is talking about technology solutions in medicine and agriculture, and the challenge of technology when the innovators miss the opportunity to “recognize the complexity of the human mind”, and address the nudge to solve social challenges that are linked with technology. Change is in Co-Creation The challenge: Solar energy as a resource is thoroughly decentralized, pretty much non-exclusive (you can’t block the Sun for the most part), and highly-context dependent. 
Solar energy design and planning depends a lot on societal norms, meteorological and climate regimes, and yes, economics. Think about local organic farming, energy efficient building design, even grape growing for wines. It’s all about the locale: what we would call the contextual constraints in space and time the local solar resource, local ecosystems, local energy costs, and local policies (i.e. all solar is local). It’s also closely tied to the preference and values of the participating stakeholders surrounding a community. If the people don’t see a local value to solar (let’s keep using PV as a solar example), no amount of nifty technology will sway them in the short term. As such, there are relatively few “turn-key business” solutions in solar photovoltaic energy to deploy at a major commercial level (that certainly doesn’t stop innovative companies from trying, though). So we need a shift, a transformation in solar discovery that addresses local community and personal needs, rewards local knowledge, and aligns with the regional ecosystems supporting the community at hand. The key, embodied in the Solar Ecology school of thought, is working with people: working with stakeholders to co-design or co-create a systems integrative solution; working with our communities in the places where they live, and listening to the unique and place-based ways that they embrace the space that they live and work in. I do love words, and co-creation is a beautiful term that has emerged to describe working among all stakeholders, to collectively align and form a localized integrative design strategy. When used skillfully, co-creation of knowledge can mean a multi-stakeholder engagement process for learning and implementation of localized, meaningful solutions–here in solar design. In the building design world, this would be called the integrative design process, tested in practice for years by architects and by my friends and peers at 7group, based in PA. And so, almost a decade after beginning solar team work at Penn State with the advent of the Solar Decathlon, I am continuing my work among large teams of people as an act of design. I have found amazing discoveries occur while working with communities, here in Central PA, and as far off as Burkina Faso, West Africa. I am rediscovering that the key to solar transformation and solar adoption: pay more attention to people and place, and less attention to the flashy novelty of the newest tech or materials. Now get going and dig in with your own community for new discoveries in Solar Ecology. There is a new age of energy exploration out there! -JRSB
2019-01-21T16:59:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24606217443943024, "perplexity": 3197.385509018123}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583795042.29/warc/CC-MAIN-20190121152218-20190121174218-00403.warc.gz"}
https://alldimensions.fandom.com/wiki/Realm
The 3rd dimension, the realm, has length, width, and height. Points on the realm can be described with three coordinates, written in the form $(x,y,z)$. Space is also used to describe things in a physical realm. The smallest major terrestrial body in space is a Planet. ## Polyhedra Main Article: List of Polyhedra By Type The platonic solids have all their faces be the same type, all their edges be the same length, and all vertices have the same number of edges coming from them, and are convex. They are sometimes associated with the classical elements. All platonic solids can also be used as dice, with the number of possible values being equal to the number of faces. ## Dimension Name: Polyhedron Prev: Plane Next: Flune, or sometimes time (though time is somewhat useless when talking about geometry).
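A minimal sketch in plain Python (standard library only, not tied to any wiki tooling) of the two concrete statements above: a point in the realm is a triple $(x,y,z)$, and each platonic solid can serve as a die with as many outcomes as it has faces.

```python
# Plain-Python illustration: 3-D points as (x, y, z) triples, and the five
# platonic solids used as dice with one outcome per face.
import math
import random

p = (1.0, 2.0, 3.0)          # a point written as (x, y, z)
q = (4.0, 6.0, 3.0)
print(math.dist(p, q))       # straight-line distance between the two points

PLATONIC_FACES = {"tetrahedron": 4, "cube": 6, "octahedron": 8,
                  "dodecahedron": 12, "icosahedron": 20}
for solid, faces in PLATONIC_FACES.items():
    print(solid, random.randint(1, faces))   # one roll of each platonic die
```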
2020-05-28T17:45:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6043357253074646, "perplexity": 1126.3406872303808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399830.24/warc/CC-MAIN-20200528170840-20200528200840-00221.warc.gz"}
https://pdglive.lbl.gov/DataBlock.action?node=M082W&home=MXXX005
# ${{\boldsymbol f}_{{J}}{(2220)}}$ WIDTH

| VALUE (MeV) | CL% | EVTS | DOCUMENT ID | TECN | COMMENT |
| --- | --- | --- | --- | --- | --- |
| $\bf{ 23 {}^{+8}_{-7}}$ | | | OUR AVERAGE | | |
| $19$ ${}^{+13}_{-11}$ $\pm12$ | | 74 | 1996 B | BES | ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ |
| $20$ ${}^{+20}_{-15}$ $\pm17$ | | 46 | 1996 B | BES | ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit K}^{+}}{{\mathit K}^{-}}$ |
| $20$ ${}^{+25}_{-16}$ $\pm14$ | | 23 | 1996 B | BES | ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ |
| $15$ ${}^{+12}_{-9}$ $\pm9$ | | 32 | 1996 B | BES | ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit p}}{{\overline{\mathit p}}}$ |
| $60$ ${}^{+107}_{-57}$ | | | 1988 F | LASS | 11 ${{\mathit K}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}^{+}}{{\mathit K}^{-}}{{\mathit \Lambda}}$ |
| $80$ $\pm30$ | | | 1988 | SPEC | 40 ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ ${{\mathit n}}$ |
| $26$ ${}^{+20}_{-16}$ $\pm17$ | | 93 | 1986 D | MRK3 | ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit K}^{+}}{{\mathit K}^{-}}$ |
| $18$ ${}^{+23}_{-15}$ $\pm10$ | | 23 | 1986 D | MRK3 | ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ |

• • • We do not use the following data for averages, fits, limits, etc. • • •

| VALUE (MeV) | CL% | EVTS | DOCUMENT ID | TECN | COMMENT |
| --- | --- | --- | --- | --- | --- |
| $8.6 \pm2.5$ | | | 2008 [1] | SPEC | 40 ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ ${{\mathit n}}$ +m${{\mathit \pi}^{0}}$ |
| $\text{<80}$ | 90 | | 1987 C | GAM2 | 38 ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \eta}^{\,'}}{{\mathit \eta}}{{\mathit n}}$ |

[1] $\mathit J{}^{PC} = 2{}^{++}$. Systematic uncertainties not evaluated.

References:
- PRL 76 3502, Studies of ${{\mathit \xi}{(2230)}}$ in ${{\mathit J / \psi}}$ Radiative Decays
- PL B215 199, Evidence for a $\mathit J{}^{PC} = 4++$ ${{\mathit K}}{{\overline{\mathit K}}}$ State at 2.2 ${\mathrm {GeV/}}\mathit c{}^{2}$ from ${{\mathit K}^{-}}{{\mathit p}}$ Interaction at 11 ${\mathrm {GeV/}}\mathit c$
- NP B309 426, ${{\mathit \theta}{(1700)}}$ and ${{\mathit \xi}{(2230)}}$ Resonances in the ${{\mathit \pi}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit K}_S^0}$ ${{\mathit K}_S^0}$ ${{\mathit n}}$ Reaction at Momentum 40 ${\mathrm {GeV/}}\mathit c$
- SJNP 45 255, 2.22 GeV ${{\mathit \eta}}{{\mathit \eta}^{\,'}}$ Structure Observed in 38 and 100 ${\mathrm {GeV/}}\mathit c$ ${{\mathit \pi}^{-}}{{\mathit p}}$ Collisions
- PRL 56 107, Observation of a Narrow ${{\mathit K}}{{\overline{\mathit K}}}$ State in ${{\mathit J / \psi}}$ Radiative Decays
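As a rough illustration only of how measurements like those tabulated above can be combined: the sketch below forms a plain inverse-variance-weighted mean of the eight averaged entries, crudely symmetrising the asymmetric errors and adding statistical and systematic parts in quadrature. The PDG's actual averaging procedure (asymmetric error treatment, scale factors) is more involved, so this need not reproduce the quoted "OUR AVERAGE".

```python
# Illustrative inverse-variance-weighted mean of the width measurements above.
# Asymmetric errors are symmetrised as the mean of (+err, -err); stat and syst
# are combined in quadrature.  This is NOT the PDG averaging procedure.
import math

# (value, err_plus, err_minus, syst) in MeV; syst = 0 where none is quoted
measurements = [
    (19, 13, 11, 12),
    (20, 20, 15, 17),
    (20, 25, 16, 14),
    (15, 12,  9,  9),
    (60, 107, 57, 0),
    (80, 30, 30, 0),
    (26, 20, 16, 17),
    (18, 23, 15, 10),
]

weighted = []
for value, ep, em, syst in measurements:
    stat = 0.5 * (ep + em)              # symmetrise the asymmetric error
    sigma = math.hypot(stat, syst)      # add stat and syst in quadrature
    weighted.append((value, 1.0 / sigma**2))

wsum = sum(w for _, w in weighted)
mean = sum(v * w for v, w in weighted) / wsum
sigma_mean = 1.0 / math.sqrt(wsum)
print(f"weighted mean = {mean:.1f} +/- {sigma_mean:.1f} MeV")
```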
2021-02-28T22:58:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8962470293045044, "perplexity": 1512.9214661125459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00493.warc.gz"}
http://herschel.esac.esa.int/twiki/bin/view/Public/HipeWhatsNew5x?sortcol=5;table=13;up=0
What's New in HIPE 5 Interested in what's new for other versions of HIPE? See this page for links to all the What's New documents from HIPE 4.0 onwards. This document lists the changes in HIPE version 5.0 with respect to the 4.x series. Additional pages list changes between minor versions of the 5.x series. See also the HIPE known issues and the Data products known issues pages for last-minute information on known problems. Core system HIPE Editors & Views • Files ending with extension ".csv" are recognized as tables, and opened as such with double click in the Navigator view or with File > Open. • Autocompletion in the Console has been improved. • A variable value can be copied by right clicking on the variable in the Variables view and selecting Copy value; it can be pasted in the Console or Jython editor afterwards. This works only for strings and numbers. • Drag & drop a variable from the Variables view to the Console to write the variable name. The same behaviour can be achieved when dropping it to a Jython editor. • Drag & drop selected text from a Jython editor to the Console or vice-versa to copy & paste that text. • The small level-2 preview shown in the top-right corner of the Observation Context viewer can be enlarged by clicking on it. Startup & Shutdown If the running Java platform is not supported, a warning is displayed. Feedback A feedback button has been added for documentation and HIPE functionality. Look & Feel Aqua look & feel has been replaced in Mac OS X by the HIPE look & feel used on the other platforms. Plug-ins HIPE now supports plug-ins, small packages that anyone can easily create and distribute to share code (Java or Jython) and data (LocalStores). Plug-ins can be downloaded and installed with a single-click. Plotting • Title and label can now accept the LaTeX commands \textrm \textit \textbf \mathrm \mathit \mathbf. For example: V$_{\textrm{LSR}}$ Astronomical utilities • RadialVelocity now uses TimeCorrProduct to select pointing data. • SSO now uses TimeCorrProduct to select pointing data. • Improved documentation. Numeric routines Data fitting • A complete new GUI for the SpectrumFitter was added. Start it from the spectrum toolbox menu, select SpectrumFitterGUI. Besides the functionality that was available in the SpectrumFitterTool, the GUI has an export tab that enables you to export the residual, total model, and individual models as an object with the type of the input object and as text. • The SpectrumFitter (commandline version) accepts Container with PointSpectrum and SpectralSegment index. • The SpectrumFitter (commandline version) has methods to export the data as the input type and as text. • The SpectrumFitter (commandline version) has method that returns a Jython script. Random Numbers • New random numbers generators from different distribution: Uniform, Gaussian, Poisson, Gamma, Cauchy (Lorentz), Exponential and Jeffreys. Statistics functions • RMS has been renamed to QRMS. • RMS implements a new functionality (see the User Reference Manual). • Improved StdDev and Variance algorithms. • Improved CorrelateMatrix algorithm. New functions • GeoMean, which calculates the geometric mean of a array of numeric data types with 1 to 5 dimensions. • Mode, which returns the mode(s), or the most common element(s), of an array of numeric data with 1 to 5 dimensions. • Covariance, which returns the covariance between two random variables/vectors x and y with finite second moments. 
If one vector is longer than the other, only the values up to the length of the shorter vector will be taken into account. • CovarianceMatrix, which returns the covariance matrix of the input M x N matrix. The result is a N x N matrix with each i, j value equal to the covariance of the ith and jth columns of the original matrix. Other functions • SigClip • Median mode uses median absolute deviation instead of standard deviation. • Iterative use is allowed. Images Display • Images can be opened as RgbImage directly from the Navigator in HIPE. • Regular image files (jpg, gif, png, ...) are shown as preview in the outline when being clicked on in the navigator view of HIPE. • Two images can be compared by setting the opacity of the image. This can be done using the setOpacity(float) method or using a slider in the Image Display. Analysis • Added the methods containsNorthCelestialPole and containsSouthCelestialPole to the Wcs. • Regridding an image on the grid of another image using RegridTask. • Speed improvement of createRgbImageTask. • Update of documentation in the User Reference Manual. • You can now crop an image by drawing a rectangle on it • Correction of the calculation of the dimensions of a mosaic. Spectra Display • Task & Tools Toolbox: All task and tool panels are displayed in the same toolbox panel and selectable through a drop-down list. The task GUI components have become clearer as well. • The new spectrum toolbox panel is opened by clicking on in the toolbar • Instead of dragging a selection of spectra to the dataset parameter, the task is applied on either the displayed or the selected spectra within the plot. • Load selectable datasets from a product: When opening the spectrum explorer on a product, a selection table is displayed showing all datasets that are stored in this product along with certain characteristics. Just as within the original spectrum selection panel, these datasets can be sorted by clicking on the header columns. To load a set of datasets instead of just one or all, select each row and right click to load the selected datasets. • Realtime statistics information: Selecting the Statistics task within the Toolbox displays realtime statistics while dragging ranges in the plot. • Added relativistic frequency to velocity transform: When changing the unit of the wave axis to velocity the transform is now relativistic. Analysis • Gaussian resampling introduced. • 'Varient' and 'overwrite' options removed from ExtractSpectrum task Herschel Science Archive • Support for exporting data in packed HSA format. • Import and Export Views improvements: • Better error reporting • File dialogue windows can show hidden files • Versions of observations inside pools are now shown Pipeline processing • Crated a daemon dispatcher mechanism. • Aliases set tool has been modified for performing the required verifications before a tag is ingested. • HsaPoolDaemon • Releases memory calling Java garbage collector if it is needed. • Can write compressed files. • Can save a binary representation of a product if the FITS save operation fails. • HsaStagePool can export products into a LocalStore. • HSA ingestion procedure can generate an XML ingestion request file even if a previous version does not contain its own XML. Calibration sources A new view has been added to HIPE for easy access to the Calibration Sources Database (Calsdb). Products and datasets • Product comparison API is available in the ia_toolbox_util module. 
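The statistics updates listed earlier in this section (QRMS, GeoMean, Mode, Covariance, and the SigClip change to median/MAD with iterative use) describe behaviour rather than call syntax. The sketch below is a plain-numpy illustration of what median/MAD sigma-clipping does; it is not the HIPE/Jython SigClip API, and the function name and defaults are assumptions made for the example.

```python
# Plain-numpy illustration of iterative sigma-clipping using the median and
# the median absolute deviation (MAD), as described for SigClip above.
# This is NOT the HIPE SigClip call signature, only the underlying idea.
import numpy as np

def mad_sigma_clip(data, nsigma=3.0, iterations=3):
    """Return a boolean mask of values kept after iterative median/MAD clipping."""
    data = np.asarray(data, dtype=float)
    keep = np.isfinite(data)
    for _ in range(iterations):
        med = np.median(data[keep])
        mad = np.median(np.abs(data[keep] - med))
        sigma = 1.4826 * mad                     # MAD -> Gaussian-equivalent sigma
        new_keep = keep & (np.abs(data - med) <= nsigma * sigma)
        if new_keep.sum() == keep.sum():         # converged, stop iterating
            break
        keep = new_keep
    return keep

x = np.concatenate([np.random.normal(0.0, 1.0, 1000), [25.0, -30.0]])
mask = mad_sigma_clip(x)
print("kept", mask.sum(), "of", x.size, "samples")
```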
Observation context • Included new metadata slewTime returning the scheduled start time of the slew as a FineTime. • Specific accessors, getSlewStartTime() and setSlewStartTime(), where added to the ObservationContext class. • For old observations in the HSA without this value, the method will return a null value indicating that this field was never set before. History A new history dataset viewer is available: When you click on the History dataset in your product two tables appear: • An overview table which shows all the tasks used to generate the product. For each task, the name, execution time, build number and used calibration files are shown. • When you click on a row in this overview table then a second table is shown with details for the selected task. This table contains the details of this task: For each parameter its simple value and some product information is shown (when available). Product Access Layer Cache Several bugfixes to improve the effectiveness and robustness of caching have been introduced. One of these is a check to verify that the default LocalStore directory is not modified at any time during the cache's existence, which is a common source of corruption of the cache. HsaPool • HsaReadPool/HsaXmlPool can work with compressed data. Preferences Product storages and pools can be managed in the new Data Access > Storages & Pools preferences panel. • Updated documentation: • Wiki chapters • Standardised __ doc __ of general tasks (signature, description, parameter descriptions) • Examples of general tasks in the URM are now tested automatically • Compress and Decompress tasks: they support zip, tar, gzip, tgz in a portable way. Decompress can guess the compression of a file. • FitsReader supports compressed FITS files. Quality control • New metadata into the QualityContext product. The field is fixed and the main interest is to include this information into the FITS file associated to the product. FITS keyword Value PCAVEATS Please refer to http://herschel.esac.esa.int/DpKnownIssues.shtml for known problems in products • Perform bulk actions on several quality control reports simultaneously. • Flags colour codes depending on their importance. Systematic product generation • New framework to process/combine data from several observations in the same pipeline. • This will allow, for instances, better results using MADmap algorithm with cross-scan data from two or more observations. • Automatic processing: • New start-time algorithm based on the DTCP time of the OD being processed instead of fixed timestamps. • New check on TM data based on the gap checker reports. Only those observations with complete tm data will be processed at this point. • Uplink plugin is executed without problems in different conditions, even for manual commanding and no uplink data at all. • Obsids shown also in decimal format in all the application tables. • Include proper motion data into the ObservationContext. • SPG stand-alone application can now run on Mac OS with no Versant libraries available. • On-demand request are now recorded into a report file. Data input-output • Improved error reporting in AsciiTableTool. • FitsArchive: • ESO Hierarch keyword implemented when reading FITS files. • FITS translation dictionary updated: Herschel keyword FITS keyword pmRA PMRA pmDEC PMDEC state STATUS action ACTION slewTime SLEWTIME proCaveats PCAVEATS • FitsArchive allow flag for compression when saving FITS files. 
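The FITS translation dictionary update above is just a keyword mapping. The snippet below is a minimal plain-Python sketch of applying such a mapping to product metadata; `translate_keywords` is a hypothetical helper for illustration only, since the real translation is done internally by FitsArchive when writing FITS files.

```python
# Herschel-to-FITS keyword translation, using exactly the pairs listed in the
# dictionary update above.  `translate_keywords` is a hypothetical helper.
HERSCHEL_TO_FITS = {
    "pmRA": "PMRA",
    "pmDEC": "PMDEC",
    "state": "STATUS",
    "action": "ACTION",
    "slewTime": "SLEWTIME",
    "proCaveats": "PCAVEATS",
}

def translate_keywords(metadata):
    """Map Herschel metadata keys to their FITS keywords, keeping unknown keys."""
    return {HERSCHEL_TO_FITS.get(key, key): value for key, value in metadata.items()}

print(translate_keywords({"pmRA": 12.5, "state": "NOMINAL", "obsid": 1342190000}))
```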
Installer There was a major restructuring of the pre-installation steps: • The Basic/Advanced selection page has been removed. • The memory, proxy and Versant settings were restructured. There are no separate panels asking if you want to set a proxy or a Versant database. There is a dedicated page for the proxy setting and one for the Versant settings. • Instead of the old instrument selection page, you are now offered the following list with check boxes for selection: Option Description PACS (preselected by default) PACS pipelines and analysis tools for astronomers. PACS expert Extra instrument-related applications for Calibration Scientists. SPIRE (preselected by default) SPIRE pipelines and analysis tools for astronomers. SPIRE expert Extra instrument-related applications for Calibration Scientists. HIFI (preselected by default) HIFI pipelines and analysis tools for astronomers. HIFI expert Extra instrument-related applications for Calibration Scientists. • The instrument selection window allows you to click in any combination of items to install those components. • You can also deselect all the instruments to get a barebone HIPE framework only for FITS I/O and numeric analysis. • If possible, the installer informs you of the amount of data to download for the selected combination of modules. Documentation To improve the quality of the examples, we have introduced automatic testing of (many of) the source code examples occurring in manuals such as the Data Analysis Guide. We have also introduced automatic testing of the examples occurring in the User's Reference Manual. HIFI HIFI Software Configuration and Properties • Generally most HIFI properties have moved from the hcss.dp.hifi build to the hcss.icc.hifi (expert) area and it is recommended that all internal ICC users always install the expert build in order to be able to run the pipeline from level -1.0. • Those using the expert build and wishing to run the pipeline from level zero will need to set the property var.hcss.instrument = hifi ' this so the system knows to use the HIFI binstruct settings for reading the packets. Those using the legacy hifi-new.props file will already have this property set. • The hcss.dp.hifi (non-expert) build has pre-configured pools to access the HSA and the local pool via the Product Browser and PAL Storage Manager • Unnecessary and deprecated properties files have been removed from the system HIFI Calibration • The Spur table has been updated to reflect the improvements in band 1a. For data taken after OD 516 the spur table now only shows three entries: • 1a 537500.00 3000.0 LO_settling 1,2,3,4 • 1a 540929.25 1000.0 Strong_spur 1 • 1a 542306.25 1000.0 Weak_spur 1 • The minimum and maximum chopper threshold values have been changed. • HkThresholdsContext_H fpuChopperMin -> -0.04 • HkThresholdsContext_V fpuChopperMin -> -0.04 • HkThresholdsContext_H fpuChopperMax -> 0.06 • HkThresholdsContext_H fpuChopperMax -> 0.06 Previous values used were those appropriate for the Hot and Cold black bodies, which are not representative of the full, planned range of the chopper. As a result, the quality control flag "FPU Check: Chopper measured values differ from the commanded" had been frequently raised more than necessary. • The upper limit of the mixer current quality control check for HEBs (bands 6 and 7) have been decreased from 0.08 mA to 0.055 mA, where tests have shown the mixers start to be under-pumped. 
• Update to calibration tree for results of Mars beam measurements The following items have been added to the HIFI Calibration tree for the aperture efficiencies (H and V), main beam efficiencies (H and V), and the half power beam widths (H and V) : • apertureEfficiency-H • apertureEfficiency-V • beamEfficiency-H • beamEfficiency-V • beamWidths-H • beamWidths-V The approach so far is to strictly use observed numbers (see tables below with initial results), without involving fitting results or empirical formulas. In particular this means: • For beam widths the fitted widths of circular Gaussians, corrected for the convolution with the Mars disk. Open question: is the fit of a circular Gaussian correct? Need to check by how much Mars moved during an integration. • For main beam efficiencies the values derived using the observed beam widths for the beam coupling. It is not yet decided if an average value or a linear interpolation between (and beyond) frequencies produces the best results • For aperture efficiencies the values are derived, again, using observed beam widths for the coupling of the beam to the source. In addition, the already available entries for the forward efficiencies (H and V) are updated from 1.0 to 0.96 (in all bands, all frequencies) to be compatible with above results. This requires the change of constant 1.0 to 0.96 . Aperture Efficiency (H) Band Frequency (GHz) Eta_A 1a 489.0 0.676 1a 512.0 0.675 1a 548.0 0.675 1b 564.0 0.675 1b 595.0 0.674 1b 627.0 0.673 2a 641.0 0.673 2a 677.0 0.672 2a 710.0 0.671 2b 724.0 0.671 2b 756.0 0.670 2b 792.0 0.669 3a 809.0 0.669 3a 830.0 0.668 3a 850.0 0.668 3b 868.0 0.667 3b 910.0 0.666 3b 945.0 0.665 4a 959.0 0.664 4a 1006.0 0.663 4a 1052.0 0.661 4b 1064.0 0.661 4b 1086.0 0.660 4b 1112.0 0.659 5a 1118.0 0.562 5a 1179.0 0.560 5a 1240.0 0.558 5b 1148.0 0.561 5b 1207.0 0.559 5b 1268.0 0.557 6a 1431.0 0.646 6a 1503.0 0.642 6a 1574.0 0.639 6b 1579.0 0.638 6b 1637.0 0.635 6b 1694.0 0.632 7a 1708.0 0.631 7a 1768.0 0.628 7a 1813.0 0.626 7b 1720.0 0.631 7b 1815.0 0.625 7b 1897.0 0.621 Aperture Efficiency (V) Band Frequency (GHz) Eta_A 1a 489.0 0.676 1a 512.0 0.675 1a 548.0 0.675 1b 564.0 0.675 1b 595.0 0.674 1b 627.0 0.673 2a 641.0 0.673 2a 677.0 0.672 2a 710.0 0.671 2b 724.0 0.671 2b 756.0 0.670 2b 792.0 0.669 3a 809.0 0.669 3a 830.0 0.668 3a 850.0 0.668 3b 868.0 0.667 3b 910.0 0.666 3b 945.0 0.665 4a 959.0 0.664 4a 1006.0 0.663 4a 1052.0 0.661 4b 1064.0 0.661 4b 1086.0 0.660 4b 1112.0 0.659 5a 1118.0 0.562 5a 1179.0 0.560 5a 1240.0 0.558 5b 1148.0 0.561 5b 1207.0 0.559 5b 1268.0 0.557 6a 1431.0 0.646 6a 1503.0 0.642 6a 1574.0 0.639 6b 1579.0 0.638 6b 1637.0 0.635 6b 1694.0 0.632 7a 1708.0 0.631 7a 1768.0 0.628 7a 1813.0 0.626 7b 1720.0 0.631 7b 1815.0 0.625 7b 1897.0 0.621 Beam Efficiency (H) Band frequency (GHz) eta_mb 1a 489.0 0.755 1a 512.0 0.755 1a 548.0 0.754 1b 564.0 0.754 1b 595.0 0.753 1b 627.0 0.752 2a 641.0 0.752 2a 677.0 0.751 2a 710.0 0.750 2b 724.0 0.750 2b 756.0 0.749 2b 792.0 0.748 3a 809.0 0.747 3a 830.0 0.747 3a 850.0 0.746 3b 868.0 0.746 3b 910.0 0.744 3b 945.0 0.743 4a 959.0 0.742 4a 1006.0 0.741 4a 1052.0 0.739 4b 1064.0 0.738 4b 1086.0 0.738 4b 1112.0 0.737 5a 1118.0 0.639 5a 1179.0 0.637 5a 1240.0 0.635 5b 1148.0 0.638 5b 1207.0 0.636 5b 1268.0 0.634 6a 1431.0 0.722 6a 1503.0 0.718 6a 1574.0 0.714 6b 1579.0 0.713 6b 1637.0 0.710 6b 1694.0 0.707 7a 1708.0 0.706 7a 1768.0 0.702 7a 1813.0 0.699 7b 1720.0 0.705 7b 1815.0 0.699 7b 1897.0 0.694 Beam Efficiency (V) Band Frequency (GHz) eta_mb 1a 489.0 0.755 1a 512.0 0.755 1a 
548.0 0.754 1b 564.0 0.754 1b 595.0 0.753 1b 627.0 0.752 2a 641.0 0.752 2a 677.0 0.751 2a 710.0 0.750 2b 724.0 0.750 2b 756.0 0.749 2b 792.0 0.748 3a 809.0 0.747 3a 830.0 0.747 3a 850.0 0.746 3b 868.0 0.746 3b 910.0 0.744 3b 945.0 0.743 4a 959.0 0.742 4a 1006.0 0.741 4a 1052.0 0.739 4b 1064.0 0.738 4b 1086.0 0.738 4b 1112.0 0.737 5a 1118.0 0.639 5a 1179.0 0.637 5a 1240.0 0.635 5b 1148.0 0.638 5b 1207.0 0.636 5b 1268.0 0.634 6a 1431.0 0.722 6a 1503.0 0.718 6a 1574.0 0.714 6b 1579.0 0.713 6b 1637.0 0.710 6b 1694.0 0.707 7a 1708.0 0.706 7a 1768.0 0.702 7a 1813.0 0.699 7b 1720.0 0.705 7b 1815.0 0.699 7b 1897.0 0.694 Beam Widths (H) Band Frequency (GHz) HPBW 1a 489.0 43.362 1a 512.0 41.414 1a 548.0 38.694 1b 564.0 37.596 1b 595.0 35.637 1b 627.0 33.818 2a 641.0 33.080 2a 677.0 31.321 2a 710.0 29.865 2b 724.0 29.287 2b 756.0 28.048 2b 792.0 26.773 3a 809.0 26.210 3a 830.0 25.547 3a 850.0 24.946 3b 868.0 24.429 3b 910.0 23.301 3b 945.0 22.438 4a 959.0 22.111 4a 1006.0 21.078 4a 1052.0 20.156 4b 1064.0 19.929 4b 1086.0 19.525 4b 1112.0 19.068 5a 1118.0 18.966 5a 1179.0 17.985 5a 1240.0 17.100 5b 1148.0 18.470 5b 1207.0 17.568 5b 1268.0 16.722 6a 1431.0 14.818 6a 1503.0 14.108 6a 1574.0 13.471 6b 1579.0 13.429 6b 1637.0 12.953 6b 1694.0 12.517 7a 1708.0 12.415 7a 1768.0 11.993 7a 1813.0 11.696 7b 1720.0 12.328 7b 1815.0 11.683 7b 1897.0 11.178 Beam Widths (V) Band Frequency (GHz) HPBW 1a 489.0 43.362 1a 512.0 41.414 1a 548.0 38.694 1b 564.0 37.596 1b 595.0 35.637 1b 627.0 33.818 2a 641.0 33.080 2a 677.0 31.321 2a 710.0 29.865 2b 724.0 29.287 2b 756.0 28.048 2b 792.0 26.773 3a 809.0 26.210 3a 830.0 25.547 3a 850.0 24.946 3b 868.0 24.429 3b 910.0 23.301 3b 945.0 22.438 4a 959.0 22.111 4a 1006.0 21.078 4a 1052.0 20.156 4b 1064.0 19.929 4b 1086.0 19.525 4b 1112.0 19.068 5a 1118.0 18.966 5a 1179.0 17.985 5a 1240.0 17.100 5b 1148.0 18.470 5b 1207.0 17.568 5b 1268.0 16.722 6a 1431.0 14.818 6a 1503.0 14.108 6a 1574.0 13.471 6b 1579.0 13.429 6b 1637.0 12.953 6b 1694.0 12.517 7a 1708.0 12.415 7a 1768.0 11.993 7a 1813.0 11.696 7b 1720.0 12.328 7b 1815.0 11.683 7b 1897.0 11.178 HIFI Pipeline • The path/filename for a user defined/edited pipeline algorithm can now be passed to the hifiPipeline task via the command line, previously this was only possible in the GUI. obs =hifiPipeline (obs=obs, level2AlgoPath="/home/Me/myLevel2Algo.py") The older method of defining a function with the def keyword in Jython and passing that function to the hifiPipeline task still exists obs =hifiPipeline (obs=obs, level2Algo=level2PipelineAlgo) • hifiPipeline will make an ObservationContext even if creation of trend products fails • hifiPipeline GUI layout improved to make more efficient use of space • When reprocessing with the hifiPipeline task it is not recommend to set aux=True when using the HSA to retrieve your data as not all Auxiliary data products will be updated correctly. The default option of aux=False is valid for all observations created with SPG v4.0+ (which should be all HIFI observations in the HSA). Level 0 Pipeline • H and V spectrometer astrometry SpectrumDatasets contain columns, in decimal degrees, for the commanded pointing (a position between the H and V beams known as the synthesized aperture), labelled longitude_cmd and latitude_cmd, while the actual coordinates for the spectrometer beam are found in the longitude and latitude columns. Note that HIPE 5 RC 9 and later versions only should be used for this. 
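The Level 0 pipeline change just above adds commanded-pointing columns (longitude_cmd, latitude_cmd) alongside the actual beam coordinates (longitude, latitude), all in decimal degrees. A rough sketch of comparing the two, assuming the columns have already been extracted into numpy arrays: the column names come from the text, while the function, the flat-sky approximation, and the toy values are assumptions made for the example.

```python
# Approximate angular offset between actual and commanded pointing, assuming
# the four columns described above are available as numpy arrays of decimal
# degrees.  Small-angle flat-sky approximation, for illustration only.
import numpy as np

def pointing_offset_arcsec(lon, lat, lon_cmd, lat_cmd):
    """Angular separation (arcsec) between actual and commanded positions."""
    dlon = (np.asarray(lon) - np.asarray(lon_cmd)) * np.cos(np.radians(lat))
    dlat = np.asarray(lat) - np.asarray(lat_cmd)
    return 3600.0 * np.hypot(dlon, dlat)

# toy values in decimal degrees
lon, lat = np.array([83.6331]), np.array([22.0145])
lon_cmd, lat_cmd = np.array([83.6330]), np.array([22.0144])
print(pointing_offset_arcsec(lon, lat, lon_cmd, lat_cmd))
```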
WBS Pipeline • Fixes for frequency calibration in the case that line centres fall precisely in the middle of two pixels Level 2 Pipeline • DoMainBeamTemp now multiplies by the forward efficiency. In earlier versions of HIPE this was not an issue because a value of Feff=1 was assumed. This update was required as a consequence of the adoption of Feff=0.96 in the calibration tree. • The ra and dec in cube meta data is now set to be the centre of the cube. Previously the ra and dec in the first dataset of the HTP had been used. HIFI Products • Modifications to HifiSpectrumDatasets are now reflected in SpectrumExplorer when the HTP is viewed there. • It is now possible to create a SimpleSpectrum from a HIFI product or dataset containing only one spectrum. A SimpleSpectrum is easily written to FITS and maintains metadata. The conversion is done as: simpleSpectrum = convertSingleHifiSpectrum(spectra=data) HIFI Data Processing Tools • PolarPairs now has a tolerance of 100. Previously it as 5 and this sometimes caused the task to fail resulting in the output being identical to the first input product. • A task to allow conversion of Kelvin to Jansky, ConverK2Jy, is introduced Standing Wave Removal • It is now possible to remove the smoothed baseline calculated by FitHifiFringe from the spectrum, in addition to the fitted sine waves. This option (sub_base) should be used with care as real spectral features can be removed this way. For interactive fitting of polynomial baselines, the new task FitBaseline is recommended. • Tooltips implemented in the FitHifiFringe GUI • Improved error messaging from FitHifiFringe • Bug that appeared in early versions of HIPE 5 where usermask max/min boxes in FitHifiFringe GUI disappeared fixed. • A Deep Fourier Transform (DFT) module is available (not a part of FitHifiFringe) Baseline removal • Introduction of a new task, FitBaseline, to interactively fit and subtract or divide out baselines in HIFI spectra. • A bug that appeared in early versions of HIPE 5 where domask=2 in did not work in FitBaseline (preventing automatic line masking) now fixed. • A bug that appeared in early versions of HIPE 5 in which fitBaseline would not save on an intermediate quit is resolved. • The fitting routine in fitBaseline has a condition set such that no polynomials of order n are fitted if less than n+1 channels are available. This prevents crashes of the task in the case of heavily flagged data. Deconvolution • More understandable error message when deconvolution fails (usual due to poor data input). • Plotting of DSB solution as deconvolutions runs now possible when diag_mode_on selected. HiClass export tool • HiClass will now appear an an applicable task for HTP as well as ObservationContexts • HiClass now exports the correct frequencies for bands 6 and 7 • A bug that appeared in early versions of HIPE 5 preventing export of Spectral Scans now fixed • HiClass now exports the source velocity as provided by the astronomer in the AOR into a new FITS header keyword. At this time, the CLASS software will not do anything with this information, but the requirement to provide ALL the information is satisfied by having the velocity information present in the FITS header somewhere. The keywords used, which are slightly different than the usual ones used to reflect the fact that only the velocity in the proposal is exported, are tabulated below. 
reference frame usual keyword proposed keyword topocentric VELO-OBS PVEL-OBS geocentric VELO-GEO PVEL-GEO heliocentric VELO-HEL PVEL-HEL LSRk VELO-LSR PVEL-LSR LSRd VELO-LSR PVEL-LSR LSR VELO-LSR PVEL-LSR galactocentric VELO-GAL PVEL-GAL source VELOREST PVELREST unknown VELOCITY PVELCITY These keyword takes care of the value, the unit and the reference frame. The PVELTYPE keyword will identify the type of 'velocity' (velocity, redshift, radio or optical convention). HIFI Documentation • New chapters in the HIFI User's Manual: What was done to my data? and HIFI Baseline Removal. • Updates to the HIFI Users Manual: HIFI cookbook, Data Primer, Running the HIFI Pipeline, HIFI Standing Wave Removal Tool , How to make Spectral Cubes and Exporting HIFI data to CLASS. • Updates to HIFI Pipeline Specification document: improved pipeline flow diagrams, improved information about Euler resampling used in DoFreq grid task of pipeline. • Creation of FitHifiFringe URM entry. • The HIFI cookbook can now be selected from the Help menu in the HIPE toolbar under Help -> Data Reduction -> HIFI PACS Pipeline • PACS "New Style Sliced" pipeline implementation is now activated. • Phot and spec ipipe script names were changed to be more logic and intuitive. • Pipeline Script access: • HIPE -> Menu -> pipeline -> PACS • Calibration framework: Calibration products are no longer delivered with the build in a local store. Calibration products are now delivered as a directory of FITS files. This directory with calibration products can be found in the data/pcal sub-directory of your installation directory. HIPE> fm = getCalTree(verbose=1) The calibration framework is adapted and will read these FITS files by default. The getCalTree() command and all access to calibration data has not changed, under the hood, the local store has been replaced by the directory of FITS files. Other small changes: • getCalTree(), getCalProduct() now understand the 'verbose=True' keyword and will print information on the location of the calibration products. • getCalProduct() the third argument, 'version', is no longer mandatory, if not given, the last version of the calibration product will be returned. • A new Pacs Documentation page has been opened on the PacsDocumentation public wiki. Photometry • Extended Madmap (using scan and cross scan) for interactive usage • HIPE -> Menu -> pipeline -> PACS -> L2_scanMap.py A prototype for the extended Madmap pipeline is now available in HIPE for interactive usage. The extended Madmap pipeline is designed to combine scans and cross-scans in order to produce optional map. Unlike DP pipelines (which normally starts with one ObservationContext), this pipeline takes an obsList (an ArrayList) which is a list of ObservationContext. The pipeline will run even there is only one obs in the obsList. Below is an example creating an obsList: HIPE> obsList=ArrayList() • New script for scan Map pipeline • Two-stage photometer pipeline with masked highpass filtering • Now the standard processing script for scan maps • Photometer Mapping • Photometer mapping accepts a variable input pixelsize (parameter pixfrac in the PhotProjectTask and the MapIndexTask). • The MapIndexViewer has a Sigclip gui to display the effect of different Sigclip parameters on the second level deglitching. Now it is not necessary any more to guess Sigclip parameters, run the deglitching and check the quality afterwards. The new option allows to find the best parameter quickly before deglitching has been applied. 
The MapIndexViewer now also displays the signal vectors in the "timeordered" mode. • The MapIndexViewer shows mean/median +/- error permanently in the signal plot • The Full MapIndex needs less memory than before • Wcs4mapTask got a crota option to manually set the requested rotation angle of a map. • CorrectRaDec4SsoTask is now also accepting the new Horizons class definition. An example how to use it can be found in the attached. • photProject error map has been checked for NaN. • Exposure map has been removed in favour of the coverage map. • Noise treatment • A new module herschel.pacs.spg.phot.PhotAddNoisePerPixelTask provides four ways to calculate the initial photometer noise. *Noise propagation is also implemented in the HighpassFilter, Mean- and MedianFilters. • Pointing • New Horizons Product used in PACS pipeline (SSO observations). • Residual spatial offset PACS photometer blue/green vs red. • Corners shall follow changes in PhotAssignRaDec. • Separate blue and green in ArrayInstrument calfile and the code using it. • Astrometric offset between two scan maps in parallel mode on the same field. • Others • PACS EDP: phot pipeline adaptation to cope with variable scan velocity. • framesOut = photMaskFrames(framesIn [,beforeFirstScanLeg] [,noScanData] [,afterCalblock=0] ) • beforeFirstScanLeg: Mask data befor the first scan leg • noScanData: mask data where the ScanBBID is not set (includes turn arounds) • afterCalblock: Mask n data after calibration block • Sigclip can now be applied n-times in one go. Spectrometry • new AOR mode "unchopped" including improved spatial calibration: • First draft of the unchopped mode pipeline added to the build. • A "newstyle" unchopped mode pipeline is in place which can be run automatically or interactively. • New task specSubtractOffPosition implemented for unchopped pipeline . • New interactive tool SpectrumExplorer: • PACS Product interface to SpectrumExplorer • Improved wavelength calibration: • New wavelength calibration applied in waveCalc task using the wavePolynomes cal file. • spectrometer trend analysis product for the calibration blocks: • Calibration block trendAnalysis product added to the observation context. • Extra options to specDiffChop (note these are expert options for the analysis of special observations!). • Extra option 'extend' added which allows for other pairwise differencing schemes. These schemes will subtract more datapoints from each other than the standard scheme. • Extra option 'chopNodScheme' which provides different scheme for determining which label is on-source and which is off-source for a given nodding position. Needed for some special test AORs. • Different bands in a range scan are now stored in their own blocks in the block table. This means that it is now possible to slice SEDs per band. • Several fixes to specWaveRebin, etc. to reduce the amount of NaNs considerably in the rebinned data. • SpecProject sets the flux to Jy. Calibration Photometry • Residual spatial offset PACS photometer blue/green vs red --> PCalPhotometer_ArrayInstrument_FM_v6.fits. Spectrometry • New Wavelength Calibration Product. • Version 8 of the Pacs Spectrometer Array to Instrument and version 4 of the Module2Array cal file are inserted in the system. This generates correct values also for center chopper positions (unchopped and waveswitch mode). • Updated spectrometer spatial cal file --> PCalSpectrometer_ArrayInstrument_FM_v6.fits. • The RSRF calibration tables contain "NaN" which are then introduced into the signal by rsrfCal. 
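Several of the spectrometer notes above concern NaNs, both in the RSRF calibration tables and in rebinned data (specWaveRebin). The sketch below is a generic numpy illustration of NaN-aware rebinning, showing how flagged samples can simply be ignored per output bin; it is not the PACS specWaveRebin implementation, and the function name and toy data are assumptions.

```python
# NaN-aware rebinning sketch: average the finite samples falling in each
# wavelength bin and leave NaN where a bin has no valid data.
import numpy as np

def nan_rebin(wave, flux, bin_edges):
    """Mean flux per wavelength bin, ignoring NaNs; empty bins stay NaN."""
    wave, flux = np.asarray(wave), np.asarray(flux)
    out = np.full(len(bin_edges) - 1, np.nan)
    idx = np.digitize(wave, bin_edges) - 1
    for i in range(len(out)):
        sel = (idx == i) & np.isfinite(flux)
        if sel.any():
            out[i] = flux[sel].mean()
    return out

wave = np.linspace(50.0, 60.0, 11)
flux = np.array([1, 2, np.nan, 4, 5, np.nan, np.nan, 8, 9, 10, 11], dtype=float)
print(nan_rebin(wave, flux, np.array([50.0, 55.0, 60.0])))
```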
Analysis tools Spectrometry • Compatibility of PACS products with HCSS tools improved: you can now properly select data points and use the fitter tool on PACS frames and cubes. SPIRE Calibration Products Photometer Updated 1. Flux Calibration updated for the nominal voltages. 2. Temperature Drift Correction to match the new flux conversion. Spectrometer New 1. Non-linear phase • This calibration product is used by the phase correction module to make the data symmetric prior to application of the Fourier transform. 2. Spectrometer Instrument RSRF • This calibration product is used, along with the instrument temperatures from the housekeeping telemetry, to compute a model for the contribution to the measured spectra from the SPIRE instrument. 3. Telescope RSRF • This calibration product is used, along with the Telescope temperatures from the housekeeping telemetry, to compute a model for the contribution to the measured spectra from the Herschel Telescope. • This calibration product is also used for extended source flux calibration. Updated 1. Point-source RSRF • This calibration product has been updated to use a higher signal-to-noise observation of Uranus. Calibration Framework • Updated to match the new products and dependencies. • The spireCal task now has an option to read from a local pool. Common Pipeline • Minor changes in logs and documentation. Photometer Pipeline • Pipeline scripts: • Comment out optical and electrical crosstalk corrections. • Removed plotting blocks. Spectrometer Pipeline The spectrometer pipelines have undergone major changes for version 5. It is HIGHLY recommended that users with data processed with older versions update to version 5 to reprocess their data. No change. Level-0.5 -> Level 1 (IFGM) • Electrical crosstalk correction has been commented out of the SPG pipelines. Level 1 (IFGM) -> Single-sided Fourier transform • Subtraction of the Instrument/Telescope contribution has been moved to the spectral domain. • Phase correction now uses a Jiggle dependent empirically-derived Non-linear phase cal product. Single-sided Fourier transform -> Level 1 (SPEC) • Correction for the contribution of the SPIRE instrument. A new processing step has been added to remove from the measured spectrum the contribution from the SPIRE instrument. • Correction for the contribution of the Herschel Telescope. A new processing step has been added to remove from the measured spectrum the contribution from the Herschel Telescope. • Optical crosstalk correction has been commented out of the SPG pipelines. Level 1 (SPEC) -> Level 2 1. SOF1 Sparse spatial sampling mode • A minor change in that the point source calibration product is now tailored to spectral resolution and apodization. 2. Mapping Modes • Added NaiveProjectionTask, a new task for creating spectral simple cubes from preprocessed spectral data. This task creates spectral cubes by averaging the flux of samples within each pixel. Pixels without samples are given a value of NaN. Interactive Analysis and Tools • Added the new DTMosaic class that allows the user to plot data recorded by many detectors during one observation on a single page to compare data more effectively (mosaic plot). This features is available from HIPE command line or within DetectorTimelineExplorer. • DetectorTimelineExplorer and DetectorTimelineExplorerComponent: • Added a new features that allows the creation of mosaic plot of the detectors via right-click on the desired array. 
• A new SpireMultiObs task to process several observations through the pipeline at once.
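The NaiveProjectionTask described above is characterised as averaging the flux of the samples that fall in each pixel, with empty pixels set to NaN. The sketch below is a compact numpy illustration of that averaging scheme only; it is not the SPIRE implementation, and the function name, pixel indexing, and toy inputs are assumptions.

```python
# Naive projection sketch: accumulate flux and sample counts per output pixel,
# then divide; pixels that receive no samples stay NaN, as described above.
import numpy as np

def naive_project(ix, iy, flux, shape):
    """Average `flux` samples into a 2-D grid indexed by integer pixel (ix, iy)."""
    total = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(total, (iy, ix), flux)
    np.add.at(count, (iy, ix), 1)
    with np.errstate(invalid="ignore"):
        image = total / count          # 0/0 -> NaN for empty pixels
    return image

ix = np.array([0, 0, 1, 3])
iy = np.array([0, 0, 2, 1])
flux = np.array([1.0, 3.0, 5.0, 7.0])
print(naive_project(ix, iy, flux, shape=(4, 4)))
```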
2020-08-05T23:01:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24652117490768433, "perplexity": 9879.93280583692}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735989.10/warc/CC-MAIN-20200805212258-20200806002258-00151.warc.gz"}
http://dergipark.gov.tr/alku/issue/39176/445056
## In-Plane Free Vibration Frequencies of Stepped Circular Beams

#### Timuçin Alp Aslan, Beytullah Temel, Ahmad Reshad Noori

ABSTRACT: In this study, in-plane natural frequencies of stepped circular beams are presented. The material of the beam is considered to be isotropic, homogeneous and elastic. The effects of shear deformation and rotary inertia are considered in the necessary assumptions and formulations. The obtained canonical form of governing equations is solved numerically by the complementary functions method (CFM) in the Laplace domain. To obtain the natural frequencies of the considered structures the Laplace parameter is replaced with the parameter of free vibration. The 5th order Runge Kutta (RK5) algorithm is applied to solve the initial-value problem based on the CFM. To examine the in-plane free vibration of Timoshenko beams with stepped circular cross-section a program is coded in FORTRAN. In order to demonstrate the accuracy of the current scheme, present results are compared with those of literature. The accuracy and efficiency of the presented results are observed.
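The abstract couples the complementary functions method with a fifth-order Runge-Kutta solution of the resulting initial-value problems. As a generic, hedged illustration only, the snippet below solves an initial-value problem with an off-the-shelf explicit Runge-Kutta method in SciPy; the placeholder ODE (a simple oscillator) is not the governing beam equations of the paper, and this is not the authors' FORTRAN implementation.

```python
# Generic initial-value-problem solve with an explicit Runge-Kutta method
# (SciPy's RK45).  The ODE below is a placeholder harmonic oscillator, used
# only to illustrate the IVP-solving step mentioned in the abstract.
import numpy as np
from scipy.integrate import solve_ivp

def oscillator(t, y, omega=2.0):
    # y = [displacement, velocity]
    return [y[1], -omega**2 * y[0]]

sol = solve_ivp(oscillator, t_span=(0.0, 10.0), y0=[1.0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-10)
print(sol.y[0, -1], np.cos(2.0 * 10.0))   # numerical vs exact cos(omega * t)
```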
Primary Language: English. Authors: Timuçin Alp Aslan (Country: Turkey), Beytullah Temel, Ahmad Reshad Noori (Corresponding Author). Published in ALKÜ Fen Bilimleri Dergisi, Volume 1, Issue 1 (January 2019), pages 1-7.
2019-03-26T08:52:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2786000370979309, "perplexity": 13908.795494302833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204885.27/warc/CC-MAIN-20190326075019-20190326101019-00514.warc.gz"}
http://c51.lbl.gov/~walkloud/callat/software
Because we leverage computational techniques, we need software. Below we describe the software we've developed or improved for accomplishing good science. # BIGSTICK BIGSTICK is a configuration-interaction shell model code originally developed by Johnson and Ormand, under UNEDF's SciDAC2 program. The code employs the Lanczos algorithm to solve the Schrödinger equation using an m-scheme occupation-number representation of Slater determinants. It can be used to find extremum eigenvalues and eigenvectors of a large matrix, but can also calculate inclusive operators, including total response functions and Green's functions. BIGSTICK's development continued under SciDAC3 and is MPI/OpenMP aware and has been used to treat bases of size $10^{11}$. Its on-the-fly Hamiltonian construction alleviates communications and storage problems that typically affect Lanczos algorithms on massively parallel machines. BIGSTICK is also well-suited for implementing large HOBET calculations. # HDF5 in QDP++ HDF5 is a widely-used and professionally-maintained file format and I/O library. It is commonly installed on large supercomputers, and the format can be understood by many third-party software packages such as python or Mathematica. We integrated HDF5 into the widely-used QDP++ library, a lattice QCD framework. We achieved a 30% performance improvement over the commonly-used QIO library. # hypre hypre is a linear solver library that has many advanced techniques for solving linear systems. hypre previously was restricted to at most three dimensions---we generalized it to any number of dimensions. hypre also previously only handled real numbers--we incorporated complex numbers. Both of these improvements are essential for using hypre for lattice QCD. We are interested in applying cutting-edge multigrid methods to lattice QCD problems, as these methods avoid the performance hits that traditional methods suffer from at physically realistic pion mass. # latscat latscat is a measurement code for multi-particle observables in lattice QCD. Built on top of the USQCD software stack, it uses the baryon block formalism and sparse matrix multiplication to efficiently compute the needed Wick contractions. It takes advantage of Fourier acceleration via FFTW for convolutions and uses HDF5 for I/O. With latscat we performed the first parity-odd two-nucleon scattering calculation from lattice QCD. # METAQ METAQ is a small suite of bash scripts that increase the ease and efficiency with which collaborators can use supercomputing resources. Supercomputing time is almost always awarded through a competitive proposal process, and there's usually not enough to make everybody happy. So, if you receive an allocation, you want to be sure you use it as best you can. METAQ helps you avoid wasted cycles, by making it simple to group computational tasks together and to slot smaller and shorter tasks into otherwise-wasted cycles. METAQ reduced our wasted cycles dramatically, leading to an equivalent of a 25% global software speedup. # mpi_jm mpi_jm is a C++-level resource manager that stresses the shared resources on a supercomputer substantially less than METAQ and allows for much finer control over the high-performance resources. Users' executables are compiled against the mpi_jm library, and tasks are described via a python interface, allowing for easy, high-level description and manipulation. mpi_jm is under development, and we plan to make a first production-quality version available soon.
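The HDF5 section above notes that the format can be read by third-party packages such as python. The snippet below is a minimal, generic h5py round-trip illustrating that interoperability; it is not the QDP++/HDF5 binding itself, and the file name, dataset path, and toy array are assumptions made for the example.

```python
# Write and read back a small array with h5py, as a generic illustration of
# HDF5 interoperability from Python.  The QDP++/HDF5 integration described
# above is a separate C++ code path; this only shows the file-format side.
import numpy as np
import h5py

lattice = np.random.rand(4, 4, 4, 8)          # toy stand-in for lattice data

with h5py.File("example.h5", "w") as f:
    f.create_dataset("config/field", data=lattice, compression="gzip")
    f["config"].attrs["beta"] = 6.0           # metadata travels with the data

with h5py.File("example.h5", "r") as f:
    field = f["config/field"][...]
    print(field.shape, f["config"].attrs["beta"])
```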
2018-07-19T20:40:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4087694585323334, "perplexity": 2824.778228724655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591296.46/warc/CC-MAIN-20180719203515-20180719223515-00054.warc.gz"}
https://phys.libretexts.org/TextBooks_and_TextMaps/College_Physics/Book%3A_College_Physics_(OpenStax)/07._Work%2C_Energy%2C_and_Energy_Resources/7.02%3A_Work%3A_The_Scientific_Definition
# 7.1: Work: The Scientific Definition

# What It Means to Do Work

The scientific definition of work differs in some ways from its everyday meaning. Certain things we think of as hard work, such as writing an exam or carrying a heavy load on level ground, are not work as defined by a scientist. The scientific definition of work reveals its relationship to energy—whenever work is done, energy is transferred.

For work, in the scientific sense, to be done, a force must be exerted and there must be motion or displacement in the direction of the force. Formally, the work done on a system by a constant force is defined to be the product of the component of the force in the direction of motion times the distance through which the force acts. For one-way motion in one dimension, this is expressed in equation form as $W = |\vec{F}| \, \cos \, \theta |\vec{d}| \label{eq1}$ where $$W$$ is work, $$d$$ is the displacement of the system, and $$\theta$$ is the angle between the force vector $$\vec{F}$$ and the displacement vector $$\vec{d}$$, as in Figure $$\PageIndex{1}$$. We can also write Equation \ref{eq1} as $W = F \, d \, \cos \, \theta \label{eq2}$ To find the work done on a system that undergoes motion that is not one-way or that is in two or three dimensions, we divide the motion into one-way one-dimensional segments and add up the work done over each segment.

What is Work?

The work done on a system by a constant force is the product of the component of the force in the direction of motion times the distance through which the force acts. For one-way motion in one dimension, this is expressed in equation form as $W = F \, d \, \cos \, \theta$ where $$W$$ is work, $$F$$ is the magnitude of the force on the system, $$d$$ is the magnitude of the displacement of the system, and $$\theta$$ is the angle between the force vector $$F$$ and the displacement vector $$d$$.

Figure $$\PageIndex{1}$$: Examples of work. (a) The work done by the force $$F$$ on this lawn mower is $$Fd \, cos \,\theta$$. Note that $$F \, cos \, \theta$$ is the component of the force in the direction of motion. (b) A person holding a briefcase does no work on it, because there is no motion. No energy is transferred to or from the briefcase. (c) The person moving the briefcase horizontally at a constant speed does no work on it, and transfers no energy to it. (d) Work is done on the briefcase by carrying it upstairs at constant speed, because there is necessarily a component of force $$F$$ in the direction of the motion. Energy is transferred to the briefcase and could in turn be used to do work. (e) When the briefcase is lowered, energy is transferred out of the briefcase and into an electric generator. Here the work done on the briefcase by the generator is negative, removing energy from the briefcase, because $$F$$ and $$d$$ are in opposite directions.

To examine what the definition of work means, let us consider the other situations shown in Figure $$\PageIndex{1}$$. The person holding the briefcase in Figure $$\PageIndex{1b}$$ does no work, for example. Here $$d = 0$$, so $$W = 0$$. Why is it you get tired just holding a load? The answer is that your muscles are doing work against one another, but they are doing no work on the system of interest (the "briefcase-Earth system" - see Gravitational Potential Energy for more details). There must be motion for work to be done, and there must be a component of the force in the direction of the motion.
For example, the person carrying the briefcase on level ground in Figure $$\PageIndex{1c}$$ does no work on it, because the force is perpendicular to the motion. That is, $$\cos \, 90^o = 0$$, so $$W = 0$$. In contrast, when a force exerted on the system has a component in the direction of motion, such as in Figure $$\PageIndex{1d}$$, work is done—energy is transferred to the briefcase. Finally, in Figure $$\PageIndex{1e}$$, energy is transferred from the briefcase to a generator. There are two good ways to interpret this energy transfer. One interpretation is that the briefcase's weight does work on the generator, giving it energy. The other interpretation is that the generator does negative work on the briefcase, thus removing energy from it. The drawing shows the latter, with the force from the generator upward on the briefcase, and the displacement downward. This makes $$\theta = 180^o$$, and $$\cos \, 180^o = -1$$, therefore $$W$$ is negative.

# Calculating Work

Work and energy have the same units. From the definition of work, we see that those units are force times distance. Thus, in SI units, work and energy are measured in newton-meters. A newton-meter is given the special name joule (J), and $$1 \, J = 1 \, N \cdot m = 1 \, kg \, m^2/s^2$$. One joule is not a large amount of energy; it would lift a small 100-gram apple a distance of about 1 meter.

Example $$\PageIndex{1}$$: Calculating the Work You Do to Push a Lawn Mower Across a Large Lawn

How much work is done on the lawn mower by the person in Figure $$\PageIndex{1a}$$ if he exerts a constant force of 75.0 N at an angle $$35^o$$ below the horizontal and pushes the mower 25.0 m on level ground? Convert the amount of work from joules to kilocalories and compare it with this person's average daily intake of 10,000 kJ (about 2400 kcal) of food energy. One calorie (1 cal) of heat is the amount required to warm 1 g of water by $$1^o C$$ and is equivalent to 4.184 J, while one food calorie (1 kcal) is equivalent to 4,184 J.

Strategy

We can solve this problem by substituting the given values into the definition of work done on a system, stated in the equation $$W = Fd \, cos \, \theta$$. The force, angle, and displacement are given, so that only the work $$W$$ is unknown.

Solution

The equation for the work is (Equation \ref{eq2}): $W = Fd \, \cos \, \theta \nonumber$ Substituting the known values gives \begin{align*} W &= (75 \, N)(25.0 \, m)(cos \, 35^o) \\[5pt] &= 1536 \, J \nonumber \\[5pt] &= 1.54 \times 10^3 \, J \nonumber \end{align*} Converting the work in joules to kilocalories yields $$W = (1536 \, J)(1 \, kcal/4184 \, J) = 0.367 kcal.$$ The ratio of the work done to the daily consumption is $\dfrac{W}{2400 \, kcal} = 1.53 \times 10^{-4}. \nonumber$

Discussion

This ratio is a tiny fraction of what the person consumes, but it is typical. Very little of the energy released in the consumption of food is used to do work. Even when we "work" all day long, less than 10% of our food energy intake is used to do work and more than 90% is converted to thermal energy or stored as chemical energy in fat.

# Summary

• Work is the transfer of energy by a force acting on an object as it is displaced.
• The work $$W$$ that a force $$F$$ does on an object is the product of the magnitude $$F$$ of the force, times the magnitude $$d$$ of the displacement, times the cosine of the angle $$\theta$$ between them. In symbols, $W = Fd \, \cos \, \theta.$
• The SI unit for work and energy is the joule (J), where $$1 \, J = 1 \, N \cdot m = 1 \, kg \, m^2/s^2$$.
• The work done by a force is zero if the displacement is either zero or perpendicular to the force.
• The work done is positive if the force and displacement have the same direction, and negative if they have opposite direction.

## Glossary

energy: the ability to do work

work: the transfer of energy by a force that causes an object to be displaced; the product of the component of the force in the direction of the displacement and the magnitude of the displacement

joule: SI unit of work and energy, equal to one newton-meter

## Contributors

Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
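As a quick cross-check of Example $$\PageIndex{1}$$ above, here is a short Python sketch (an illustrative addition, not part of the original text) that evaluates $$W = Fd\cos\theta$$ and the kilocalorie conversion using the numbers given in the example.

```python
# Sketch: verify the lawn-mower work calculation from Example 1.
import math

F = 75.0                       # force in newtons
d = 25.0                       # displacement in meters
theta = math.radians(35.0)     # angle between force and displacement

W = F * d * math.cos(theta)    # work in joules
print(f"W = {W:.0f} J")                              # ~1536 J
print(f"W = {W / 4184:.3f} kcal")                    # ~0.367 kcal
print(f"fraction of 2400 kcal: {W / 4184 / 2400:.2e}")  # ~1.5e-4
```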
2018-12-10T00:22:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.860988199710846, "perplexity": 319.564720028941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823228.36/warc/CC-MAIN-20181209232026-20181210013526-00460.warc.gz"}
https://lammps.sandia.gov/doc/velocity.html
# velocity command

## Syntax

velocity group-ID style args keyword value ...

• group-ID = ID of group of atoms whose velocity will be changed
• style = create or set or scale or ramp or zero
  create args = temp seed
    temp = temperature value (temperature units)
    seed = random # seed (positive integer)
  set args = vx vy vz
    vx,vy,vz = velocity value or NULL (velocity units)
    any of vx,vy,vz can be a variable (see below)
  scale arg = temp
    temp = temperature value (temperature units)
  ramp args = vdim vlo vhi dim clo chi
    vdim = vx or vy or vz
    vlo,vhi = lower and upper velocity value (velocity units)
    dim = x or y or z
    clo,chi = lower and upper coordinate bound (distance units)
  zero arg = linear or angular
    linear = zero the linear momentum
    angular = zero the angular momentum
• zero or more keyword/value pairs may be appended
• keyword = dist or sum or mom or rot or temp or bias or loop or rigid or units
  dist value = uniform or gaussian
  sum value = no or yes
  mom value = no or yes
  rot value = no or yes
  temp value = temperature compute ID
  bias value = no or yes
  loop value = all or local or geom
  rigid value = fix-ID
    fix-ID = ID of rigid body fix
  units value = box or lattice

## Examples

velocity all create 300.0 4928459 rot yes dist gaussian
velocity border set NULL 4.0 v_vz sum yes units box
velocity flow scale 300.0
velocity flow ramp vx 0.0 5.0 y 5 25 temp mytemp
velocity all zero linear

## Description

Set or change the velocities of a group of atoms in one of several styles. For each style, there are required arguments and optional keyword/value parameters. Not all options are used by each style. Each option has a default as listed below.

The create style generates an ensemble of velocities using a random number generator with the specified seed at the specified temperature.

The set style sets the velocities of all atoms in the group to the specified values. If any component is specified as NULL, then it is not set. Any of the vx,vy,vz velocity components can be specified as an equal-style or atom-style variable. If the value is a variable, it should be specified as v_name, where name is the variable name. In this case, the variable will be evaluated, and its value used to determine the velocity component. Note that if a variable is used, the velocity it calculates must be in box units, not lattice units; see the discussion of the units keyword below. Equal-style variables can specify formulas with various mathematical functions, and include thermo_style command keywords for the simulation box parameters or other parameters. Atom-style variables can specify the same formulas as equal-style variables but can also include per-atom values, such as atom coordinates. Thus it is easy to specify a spatially-dependent velocity field.

The scale style computes the current temperature of the group of atoms and then rescales the velocities to the specified temperature.

The ramp style is similar to that used by the compute temp/ramp command. Velocities ramped uniformly from vlo to vhi are applied to dimension vx, or vy, or vz. The value assigned to a particular atom depends on its relative coordinate value (in dim) from clo to chi. For the example above, an atom with y-coordinate of 10 (1/4 of the way from 5 to 25) would be assigned an x-velocity of 1.25 (1/4 of the way from 0.0 to 5.0). Atoms outside the coordinate bounds (less than 5 or greater than 25 in this case) are assigned velocities equal to vlo or vhi (0.0 or 5.0 in this case).
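To make the ramp arithmetic above concrete, here is a small Python sketch (not LAMMPS code, just an illustration of the linear interpolation the ramp style performs) reproducing the vx ramp from the example, with clamping outside the coordinate bounds.

```python
# Sketch of the interpolation behind "velocity flow ramp vx 0.0 5.0 y 5 25":
# atoms are assigned vx between vlo and vhi according to their y coordinate.
def ramp_velocity(coord, vlo=0.0, vhi=5.0, clo=5.0, chi=25.0):
    frac = (coord - clo) / (chi - clo)
    frac = min(max(frac, 0.0), 1.0)   # clamp to the coordinate bounds
    return vlo + frac * (vhi - vlo)

print(ramp_velocity(10.0))  # 1.25, matching the worked example
print(ramp_velocity(30.0))  # 5.0, outside the upper bound -> vhi
```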
The zero style adjusts the velocities of the group of atoms so that the aggregate linear or angular momentum is zero. No other changes are made to the velocities of the atoms. If the rigid option is specified (see below), then the zeroing is performed on individual rigid bodies, as defined by the fix rigid or fix rigid/small commands. In other words, zero linear will set the linear momentum of each rigid body to zero, and zero angular will set the angular momentum of each rigid body to zero. This is done by adjusting the velocities of the atoms in each rigid body.

All temperatures specified in the velocity command are in temperature units; see the units command. The units of velocities and coordinates depend on whether the units keyword is set to box or lattice, as discussed below.

For all styles, no atoms are assigned z-component velocities if the simulation is 2d; see the dimension command.

The keyword/value options are used in the following ways by the various styles.

The dist keyword is used by create. The ensemble of generated velocities can be a uniform distribution from some minimum to maximum value, scaled to produce the requested temperature. Or it can be a gaussian distribution with a mean of 0.0 and a sigma scaled to produce the requested temperature.

The sum keyword is used by all styles, except zero. The new velocities will be added to the existing ones if sum = yes, or will replace them if sum = no.

The mom and rot keywords are used by create. If mom = yes, the linear momentum of the newly created ensemble of velocities is zeroed; if rot = yes, the angular momentum is zeroed.

If specified, the temp keyword is used by create and scale to specify a compute that calculates temperature in a desired way, e.g. by first subtracting out a velocity bias, as discussed on the Howto thermostat doc page. If this keyword is not specified, create and scale calculate temperature using a compute that is defined internally as follows:

compute velocity_temp group-ID temp

where group-ID is the same ID used in the velocity command, i.e. the group of atoms whose velocity is being altered. This compute is deleted when the velocity command is finished. See the compute temp command for details. If the calculated temperature should have degrees-of-freedom removed due to fix constraints (e.g. SHAKE or rigid-body constraints), then the appropriate fix command must be specified before the velocity command is issued.

The bias keyword with a yes setting is used by create and scale, but only if the temp keyword is also used to specify a compute that calculates temperature in a desired way. If the temperature compute also calculates a velocity bias, the bias is subtracted from atom velocities before the create and scale operations are performed. After the operations, the bias is added back to the atom velocities. See the Howto thermostat doc page for more discussion of temperature computes with biases. Note that the velocity bias is only applied to atoms in the temperature compute specified with the temp keyword.

As an example, assume atoms are currently streaming in a flow direction (which could be separately initialized with the ramp style), and you wish to initialize their thermal velocity to a desired temperature. In this context thermal velocity means the per-particle velocity that remains when the streaming velocity is subtracted.
This can be done using the create style with the temp keyword specifying the ID of a compute temp/ramp or compute temp/profile command, and the bias keyword set to a yes value. The loop keyword is used by create in the following ways. If loop = all, then each processor loops over all atoms in the simulation to create velocities, but only stores velocities for atoms it owns. This can be a slow loop for a large simulation. If atoms were read from a data file, the velocity assigned to a particular atom will be the same, independent of how many processors are being used. This will not be the case if atoms were created using the create_atoms command, since atom IDs will likely be assigned to atoms differently. If loop = local, then each processor loops over only its atoms to produce velocities. The random number seed is adjusted to give a different set of velocities on each processor. This is a fast loop, but the velocity assigned to a particular atom will depend on which processor owns it. Thus the results will always be different when a simulation is run on a different number of processors. If loop = geom, then each processor loops over only its atoms. For each atom a unique random number seed is created, based on the atom’s xyz coordinates. A velocity is generated using that seed. This is a fast loop and the velocity assigned to a particular atom will be the same, independent of how many processors are used. However, the set of generated velocities may be more correlated than if the all or local keywords are used. Note that the loop geom keyword will not necessarily assign identical velocities for two simulations run on different machines. This is because the computations based on xyz coordinates are sensitive to tiny differences in the double-precision value for a coordinate as stored on a particular machine. The rigid keyword only has meaning when used with the zero style. It allows specification of a fix-ID for one of the rigid-body fix variants which defines a set of rigid bodies. The zeroing of linear or angular momentum is then performed for each rigid body defined by the fix, as described above. The units keyword is used by set and ramp. If units = box, the velocities and coordinates specified in the velocity command are in the standard units described by the units command (e.g. Angstroms/fmsec for real units). If units = lattice, velocities are in units of lattice spacings per time (e.g. spacings/fmsec) and coordinates are in lattice spacings. The lattice command must have been previously used to define the lattice spacing. ## Restrictions Assigning a temperature via the create style to a system with rigid bodies or SHAKE constraints may not have the desired outcome for two reasons. First, the velocity command can be invoked before all of the relevant fixes are created and initialized and the number of adjusted degrees of freedom (DOFs) is known. Thus it is not possible to compute the target temperature correctly. Second, the assigned velocities may be partially canceled when constraints are first enforced, leading to a different temperature than desired. A workaround for this is to perform a run 0 command, which insures all DOFs are accounted for properly, and then rescale the temperature to the desired value before performing a simulation. 
For example:

velocity all create 300.0 12345
run 0                        # temperature may not be 300K
velocity all scale 300.0     # now it should be

## Default

The keyword defaults are dist = uniform, sum = no, mom = yes, rot = no, bias = no, loop = all, and units = lattice. The temp and rigid keywords are not defined by default.
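As an illustration of what the scale style (and the run 0 / rescale workaround above) accomplishes, here is a hedged Python sketch, not LAMMPS source. It uses reduced units with the particle mass and Boltzmann constant set to 1, and it ignores constraint and center-of-mass corrections to the degrees of freedom; both are assumptions made only to keep the example short.

```python
# Sketch: rescale velocities so the kinetic temperature matches a target.
import numpy as np

rng = np.random.default_rng(12345)
v = rng.normal(size=(1000, 3))           # some initial per-atom velocities

def kinetic_temperature(v):
    # T = 2*KE / (dof*kB) with m = kB = 1 and dof = 3N (no constraints assumed)
    return np.sum(v**2) / (3 * len(v))

T_target = 1.5
v *= np.sqrt(T_target / kinetic_temperature(v))   # the rescale step

print(round(kinetic_temperature(v), 6))  # 1.5
```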
2018-12-15T09:28:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6297051310539246, "perplexity": 1626.4907365450365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826842.56/warc/CC-MAIN-20181215083318-20181215105318-00568.warc.gz"}
https://www.usgs.gov/center-news/volcano-watch-important-almost-forgotten-eruption-k-lauea
# Volcano Watch — An important but almost forgotten eruption of Kīlauea Release Date: March 5 is the 36th anniversary of one of Kīlauea's most important eruptions—the 1965 eruption that formed Makaopuhi lava lake. It was the fifth of six rift eruptions between the summit eruptions of 1961 and 1967-68, and it was the longest and largest. Volcanic tremor began at Makaopuhi 8:02 a.m. and suddenly increased in strength at 9:21, when fountaining probably started. HVO staff members rushed down the Chain of Craters Road, which then extended to Makaopuhi. When they arrived at 9:50, a line of fountains was nearly continuous from Makaopuhi downrift past Napau Crater. The west end of the fissure was in the wall of the deep pit in the Makaopuhi double crater. The activity stopped in Makaopuhi in the afternoon, but two other centers briefly erupted farther downrift; one built Puu Kimo 2.5 km (1.5 miles) west of Kalalua cone. A third, 1.6 km (1 mile) east of Napau, was active for 22 hours on March 9-10. After a 15-hour pause, the eruption resumed in Makaopuhi on March 6 and lasted until March 15, forming the famous lava lake. Most visual observations were made from the scenic overlook at the crater, remains of which can be seen near today's trail to Napau, though you have to look closely amid the flood of younger lava flows from Mauna Ulu. Adventurous trips were made to the rising lake, 210 m (700 ft) below the rim of the crater, in order to collect samples and measure how fast the lake deepened. The final lake was 84 m (275 feet) deep and 800 m (2,600 feet) wide. The visual observations document, better than before or since, how lava circulates in a filling lava lake. Most notable are descriptions of how crust forms on a pond of lava, breaks up, and sinks. Floating surface crust became gravitationally unstable as rising gas bubbles collected beneath it. Periodically, plates of the relatively heavy crust turned on end and dove under like sinking ships. The process of crustal overturning swept across the lake like falling dominoes, renewing the crust in minutes. Surveying on the lake surface began within 72 hours after a permanent crust had formed. Within a month, an aerial tramway was set up just east of the overlook and anchored in the middle of the crusted lake. The tramway carried supplies for drilling and scientific experiments. People climbed into the crater, using a fixed rope across the upper cliff and scrambling down loose talus for most of the way. The trip up was a lesson in dedication. Core-drilling into the molten lava began on April 19, headed by Tom Wright and Reggie Okamura. Twenty-seven holes were drilled, the last in February 1969. Temperature measurements down each hole traced the cooling of lava and thickening of crust. Hundreds of samples were obtained from the core, including some from molten lava itself. When the drill bit entered lava, water used to cool the bit quenched the lava to a black glass. The bit was quickly raised to keep from becoming stuck in the hardened glass. Then drilling resumed into the glass, with luck recovering an intact core that on one occasion was 1.2 m (4 feet) long. The drilling was an art form perfected by Reggie. Too little water resulted in no glass; too much resulted in geyser-like eruptions of scalding-hot glass sand from the hole. The samples, together with numerous experiments and measurements, made the 1965 eruption of Makaopuhi scientifically famous. 
The results were ahead of their time and still form one of the most valuable data sets for quantitatively studying the cooling and crystallization of basaltic lava. But all good things must end; Mauna Ulu lava covered the lake a few years later, ending that marvelous natural experiment. ### Volcano Activity Update Eruptive activity of Kīlauea Volcano continued unabated at the Puu Oo vent during the past week and provided visitors with an occasional glimpse of surface flow activity on Pulama pali and on the coastal flats. Lava is pooling in the coastal flats and not entering the ocean at this time. The closest flow is 0.8 km (0.5 mi) away from the sea coast. No earthquakes were reported felt during the week ending on March 1.
2019-11-14T16:55:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2929299473762512, "perplexity": 5773.344139484512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668529.43/warc/CC-MAIN-20191114154802-20191114182802-00149.warc.gz"}
http://kcchao.wikidot.com/fourier-series
Fourier Series

• A Fourier series is an expansion of a periodic function $f(x)$ in terms of an infinite sum of sines and cosines.

• period = $2\pi$

$f(x) = a_0 + \sum_{n=1}^{\infty}a_n\cos(nx)+\sum_{n=1}^{\infty}b_n\sin(nx)$

where

$a_0 = \frac{1}{2 \pi}\int_{-\pi}^{\pi}f(x)dx$
$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)dx$
$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx)dx$

• period = 2L

$f(x) = a_0+\sum_{n=1}^{\infty}a_n\cos(n\frac{\pi}{L}x)+\sum_{n=1}^{\infty}b_n\sin(n\frac{\pi}{L}x)$

or

$f(x) = a_0 + \sum_{n=1}^{\infty} D_n \cos(n\frac{\pi}{L}x+\theta_n)$

where

$a_0 = \frac{1}{2L}\int_{-L}^{L}f(x)dx$
$a_n = \frac{1}{L}\int_{-L}^{L}f(x)\cos(n\frac{\pi}{L}x)dx$
$b_n = \frac{1}{L}\int_{-L}^{L}f(x)\sin(n\frac{\pi}{L}x)dx$
$D_n \angle \theta_n = 2 c_n = a_n - i b_n$

• period p

$f(x) = a_0 + \sum_{n=1}^{\infty}(a_n \cos(n\omega_0 x) + b_n \sin(n\omega_0 x))$, $\omega_0 = \frac{2\pi}{p}$

$a_0 = \frac{1}{p}\int_{-p/2}^{p/2} f(x) dx$
$a_n = \frac{2}{p}\int_{-p/2}^{p/2} f(x)\cos(n\omega_0 x) dx$
$b_n = \frac{2}{p}\int_{-p/2}^{p/2} f(x)\sin(n\omega_0 x) dx$

$f(x) = a_0 + \sum_{n=1}^{\infty}c_n \cos(n\omega_0 x + \delta_n)$, where $a_n \cos(n\omega_0 x) + b_n \sin(n\omega_0 x) = c_n \cos(n\omega_0 x + \delta_n)$, $c_n = \sqrt{a_n^2+b_n^2}$, $\delta_n = \tan^{-1}(-\frac{b_n}{a_n})$
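To illustrate how these coefficient formulas are used in practice, here is a small Python sketch (an illustrative addition, not part of the original page) that numerically evaluates the period-$2\pi$ coefficients for a square wave and sums a truncated series.

```python
# Sketch: numerical Fourier coefficients for a 2*pi-periodic square wave
# f(x) = +1 on (0, pi) and -1 on (-pi, 0), then a truncated partial sum.
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
f = np.sign(x)                       # the square wave over one period

def a(n):
    return np.trapz(f * np.cos(n * x), x) / np.pi

def b(n):
    return np.trapz(f * np.sin(n * x), x) / np.pi

a0 = np.trapz(f, x) / (2 * np.pi)    # note the 1/(2*pi) factor, as in the formulas above

N = 9
xs = np.linspace(-np.pi, np.pi, 7)
partial = a0 + sum(a(n) * np.cos(n * xs) + b(n) * np.sin(n * xs) for n in range(1, N + 1))
print(np.round(partial, 3))          # approximates sign(x) away from the jumps
# For this wave, a_n ~ 0 and b_n ~ 4/(n*pi) for odd n, which the integrals reproduce.
```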
2019-05-24T11:04:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8749714493751526, "perplexity": 2721.5416716194004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257605.76/warc/CC-MAIN-20190524104501-20190524130501-00009.warc.gz"}
https://www.business.qld.gov.au/running-business/finance/improve-performance/ratios
# Financial ratios and calculators

There are a range of ratios you can use – the most important financial ratios are explained in our quick reference guide to financial ratios (JPG, 340KB).

## Quick reference guide: financial ratios

Assess the performance of your business by focusing on 4 types of financial ratios:

• profitability ratios
• liquidity ratios
• operating efficiency ratios
• leverage ratios.

Use our quick reference ratios infographic (JPG, 340KB) to understand how to calculate each ratio.

## Profitability ratios

These ratios are an effective measure of the amount of money you take home after all of your expenses and debts are paid. They're also a valuable measure of business performance. Comparing your net and gross profit margins with sector or industry-wide ratios:

• provides relevant benchmarks and figures for comparing against others in your industry, sector or location
• identifies areas for improvement in your margins.

### Common profitability ratios

Gross profit is the amount of money your business has left over from total revenue once your cost of goods sold has been deducted.

Formula: Gross profit = Total revenue – Cost of goods sold

#### Calculate gross profit

$$\text{Gross profit} = \text{Total revenue} - \text{Cost of goods sold}$$

The gross profit margin ratio compares the gross profit of your business to its total revenue to show how much profit your business is making after paying your cost of goods sold. This ratio shows the percentage margin between what you receive for your product or service and what it costs you in cost of sales. Your gross profit margin shows whether your sales are sufficient to cover your costs of goods sold. It also allows you to compare your performance with other businesses, or over time, and is a good measure of how efficient your business is at converting products and services into revenue.

Formula: Gross profit margin (%) = (Gross profit ÷ Total revenue) x 100

Aim for: Your figure will depend on your industry or sector. For example, professional services might have 80% or higher, while manufacturing or construction industry might have between 45% and 60%.

#### Calculate gross profit margin

$$\text{Gross profit margin} = \frac{\text{Gross profit}}{\text{Total revenue}} \times 100$$

### Gross profit margin ratio example

Brett's Bakery has a total sales revenue of $450,000. After subtracting the $300,000 cost of raw materials (e.g. flour, eggs, sugar) and wages directly involved in baking and selling the goods, the bakery has a gross profit of $150,000. Based on these sales and costs, they have a gross profit margin of 33.33%.

The net profit margin ratio compares the net profit of your business to your total revenue to determine operating efficiency. Your net profit margin is one of the most important indicators of your business's health. It can help you assess:

• if your business is generating enough profit from sales
• if your operating costs and overhead costs are being managed.

Formula: Net profit margin (%) = (Net profit ÷ Total revenue) × 100

Aim for: 10% (average), 20% (high), 5% (low). This varies by industry and other factors.

#### Calculate net profit margin

$$\text{Net profit margin} = \frac{\text{Net profit}}{\text{Total revenue}} \times 100$$

#### Net profit margin ratio example

Brett's Bakery has a total sales revenue of $450,000. After subtracting their $405,000 total operating expenses, this leaves a net profit of $45,000. Based on these sales and costs, Brett's Bakery has a net profit margin of 10%.
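The two margin examples above lend themselves to a quick calculation. Here is a short Python sketch (illustrative only, using the Brett's Bakery figures quoted above) that reproduces them.

```python
# Sketch: gross and net profit margins for the Brett's Bakery example.
revenue = 450_000
cost_of_goods_sold = 300_000
operating_expenses = 405_000   # total expenses used in the net profit example

gross_profit = revenue - cost_of_goods_sold
net_profit = revenue - operating_expenses

print(f"gross profit margin: {gross_profit / revenue * 100:.2f}%")  # 33.33%
print(f"net profit margin:   {net_profit / revenue * 100:.2f}%")    # 10.00%
```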
The return on assets ratio quantifies how well your business uses its assets to generate profit. This ratio is useful to help assess a business's financial strength and its efficiency in using all available resources. This ratio provides a valuable business benchmark when compared with other businesses in your sector or industry.

Formula: Return on assets ratio (%) = (Net profit ÷ Total assets) × 100

Aim for: 5% (good), 20% or higher (excellent). This varies by industry.

#### Calculate return on assets

$$\text{Return on assets} = \frac{\text{Net profit}}{\text{Total assets}} \times 100$$

#### Return on assets ratio example

Brett's Bakery has a net profit of $45,000, and a total value of the business's assets of $600,000 (e.g. cash, stock, equipment, delivery vehicle). This gives a return on assets ratio of 7.5%.

The return on equity ratio measures whether all the effort put into the business is returning an appropriate return on the owner's equity generated. A sustainable and increasing return on equity over time can mean your business is good at generating value for you. A declining return on equity can mean that you're making poor decisions on reinvesting capital in unproductive assets.

Formula: Return on equity (%) = (Net profit ÷ Owner's equity) × 100

Owner's equity can also be read as: Money invested by the owner of the business + Profits – Money owed – Money taken out of the business by the owner.

Aim for: A high return on equity, as this indicates your business can generate cash internally.

#### Calculate return on equity

$$\text{Return on equity} = \frac{\text{Net profit}}{\text{Owner's equity}} \times 100$$

The earnings to sales ratio measures your profits against your sales to make sure you're not spending more than you're making. You can use this ratio (expressed as a percentage) to measure how well you are containing your expenses. For example, you might set yourself a goal to achieve better than 18%.

Formula: Earnings to sales ratio (%) = (Net profit ÷ Total sales) x 100

#### Calculate your earnings to sales ratio

$$\text{Earnings to sales ratio} = \frac{\text{Net profit}}{\text{Total sales}} \times 100$$

The material to sales ratio indicates how much of your sales dollar is consumed by the cost of direct materials.

Formula: Material to sales ratio (%) = (Cost of direct materials ÷ Sales) x 100

#### Calculate your materials to sales ratio

$$\text{Materials to sales ratio} = \frac{\text{Cost of direct materials}}{\text{Sales}} \times 100$$

#### Material to sales ratio example

Brett's Bakery has a cost of direct materials of $85,000 and sales of $145,000. This gives a material to sales ratio of 58.6%.

## Liquidity ratios

Liquidity ratios measure your business's ability to turn assets into cash to repay debts or make purchases and investments. A higher liquidity ratio means you have more current assets than current liabilities. This indicates you should be able to withstand periods of tight cash flow.

### Common liquidity ratios

The current ratio, also known as a working capital ratio, measures your business's ability to pay off short-term liabilities (due within a year) with current assets.

Formula: Current ratio = Current assets ÷ Current liabilities

Aim for: Between 1.5 and 2 (for most industries). There is no indication of 'too high' but a very high current ratio may indicate the misuse of excess cash. A current ratio less than 1 can result in reduced opportunities or deregistration if your business is registered under a regulatory body.
#### Calculate current ratio

$$\text{Current ratio} = \frac{\text{Current assets}}{\text{Current liabilities}}$$

## Operating efficiency ratios

Operating efficiency ratios, also known as activity financial ratios, measure how well your business is using assets and resources to determine the effectiveness of your operations. These ratios show:

• how quickly stock is being replaced
• frequency of customer debt collection
• frequency of supplier payments.

Benchmarking your operating efficiency ratios with sector businesses will help to identify possible areas for improvement.

### Common operating efficiency ratios

The accounts receivable days, also known as debtors turnover, measures how often your business can convert debtors into cash over a given period. Manage your cash flow by trying to collect cash from your debtors before paying your creditors. Aim for your accounts receivable figure to be less than your creditors turnover figure.

Formula: Accounts receivable days = (Accounts receivable ÷ Total credit sales) × 365

Aim for: Less than 40 days (depending on a range of factors). If debtors are too slow in being converted to cash, the liquidity of your business will be severely affected.

#### Calculate accounts receivable days

Accounts receivable days are sometimes called 'debtor days'.

$$\text{Accounts receivable days} = \frac{\text{Accounts receivable}}{\text{Total credit sales}} \times 365 \text{ days}$$

The accounts payable days, also known as creditors turnover, is a measure of how well you are managing creditors over a given period. A lower number of days indicates your business is paying off debts quickly.

Formula: Accounts payable days = (Accounts payable ÷ Total purchases on account) × 365

Aim for: A lower number of days (good, but dependent on a range of factors). A higher number of days (bad) indicates slow payments to suppliers, which could damage supplier relationships.

#### Calculate accounts payable days

Accounts payable days are sometimes called 'creditor days'.

$$\text{Accounts payable days} = \frac{\text{Accounts payable}}{\text{Total purchases on account}} \times 365 \text{ days}$$

The stock turnover measures how many times your business's inventory (in dollars) is sold and replaced (stock management) over a given period. Benchmark your stock turnover ratio against industry averages. A high stock turnover indicates you are selling goods faster. A low turnover rate indicates weak sales and excess inventories. Your stock turnover will be different depending on your industry or sector. For example, a food business might have a stock turnover of less than 5 (perishables) or manufacturing might have a stock turnover of 40+.

Formulas:

• Stock (inventory) turnover = Cost of goods sold ÷ Average inventory
• Average inventory = (Beginning inventory + Ending inventory) ÷ 2

Aim for: Between 5 and 10 (good for most industries).

#### Calculate inventory turnover

Inventory turnover is also known as 'stock turnover'.

$$\text{Inventory turnover} = \frac{\text{Cost of goods sold}}{\text{Average inventory}}$$

The asset turnover ratio measures your business's ability to generate sales from assets.

Formulas:

• Asset turnover ratio = Net revenue ÷ Total assets
• Net revenue = Total revenue – (Returns + discounts)

Aim for: A high asset turnover, as this indicates you're efficient at generating revenue from your assets. This can vary across industries.
#### Calculate asset turnover ratio

$$\text{Asset turnover} = \frac{\text{Net revenue}}{\text{Total assets}}$$

The error rate ratio will help you evaluate the quality of your production and whether the cost of producing your goods is too high. A high number may indicate you need to look at your production processes. For example, your goal might be to achieve an error rate of less than 1% (i.e. less than 10 items rejected in 1,000 produced).

Formula: Error rate ratio (%) = (Total items rejected ÷ Total items produced) x 100

#### Calculate your error rate ratio

$$\text{Error rate ratio} = \frac{\text{Total items rejected}}{\text{Total items produced}} \times 100$$

#### Error rate ratio example

Brett's Bakery produces 20,000 products per month. Out of these, they reject 230 products that get damaged during production. This gives the bakery an error rate ratio of 1.15%.

The labour to sales ratio indicates how much of your sales dollar will be spent in direct labour.

Formula: Labour to sales ratio (%) = (Cost of direct labour ÷ Sales) x 100

#### Calculate your labour to sales ratio

$$\text{Labour to sales ratio} = \frac{\text{Cost of direct labour}}{\text{Sales}} \times 100$$

#### Labour to sales ratio example

Brett's Bakery has a cost of direct labour of $85,000 and records sales of $190,000. This gives the bakery a labour to sales ratio of 44.7%.

The operating expense margin indicates how much of the sales dollar will be used for operating expenses, such as rent, gas and electricity.

Formula: Operating expense margin (%) = (Operating expenses ÷ Total revenue) x 100

#### Calculate operating expense margin

$$\text{Operating expense margin} = \frac{\text{Operating expenses}}{\text{Total revenue}} \times 100$$

#### Operating expense margin example

Brett's Bakery has operating expenses of $20,000 and records a total revenue of $245,000. This gives the bakery an operating expense margin of 8.1%.

## Leverage ratios

Leverage ratios indicate your business's ability to meet its debt obligations from sources other than cash flow. The debt ratio measures the proportion of your business's assets that are supported by debt.

Formula: Debt ratio = Total liabilities ÷ Total assets

Aim for: Below 1.0 (safe). 2.0 or higher is risky. Investors generally look for between 0.3 and 0.6.

The debt to asset ratio may be used by your creditors to identify:

• your ability to repay debts
• whether you'll be awarded additional finance.

Lenders and investors want to know that your business is:

• solvent
• able to meet your current and longer-term financial commitments
• able to generate a return on their investment.

If this ratio consistently increases, it can signal an impending default.

#### Calculate debt ratio

$$\text{Debt ratio} = \frac{\text{Total liabilities}}{\text{Total assets}}$$
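To round out the examples, here is a brief Python sketch showing how the day-count, liquidity and leverage formulas above are applied. It is illustrative only; the receivables and balance-sheet figures are hypothetical, since the page does not supply them.

```python
# Sketch: applying the operating-efficiency, liquidity and leverage formulas.
# All input figures below are invented for illustration.
accounts_receivable = 30_000
total_credit_sales = 365_000
current_assets = 90_000
current_liabilities = 50_000
total_liabilities = 250_000
total_assets = 600_000

receivable_days = accounts_receivable / total_credit_sales * 365
current_ratio = current_assets / current_liabilities
debt_ratio = total_liabilities / total_assets

print(f"accounts receivable days: {receivable_days:.0f}")  # 30 days
print(f"current ratio:            {current_ratio:.2f}")    # 1.80
print(f"debt ratio:               {debt_ratio:.2f}")       # 0.42
```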
2023-03-29T06:42:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21069446206092834, "perplexity": 7477.192429158906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00163.warc.gz"}
https://oscars.bnl.gov/examples/AllAbout_PowerDensity.html
# Power Density - All About

This document gives some of the details about power density calculations. At the moment the power density calculations require points in space and the surface normal vector at each point. There are some utilities to easily create rectangular surfaces (made up of such point and normal vectors) and some parametric surfaces. It is also possible to calculate the power density at arbitrary points in space. These arbitrary points in space are difficult to connect as a surface for visualization by most plotting software (typically they must be on some mesh), nevertheless the calculation is possible.

For the builtin surface types there are simple ways to rotate and translate them in space, typically by giving the power density calculation function the arguments 'rotations=[rx, ry, rz]' and 'translation=[tx, ty, tz]'. Rotations are done before translation and in the order of rx (rotation about x-axis), ry, rz, and are given in radians. The translation is done after and given in meters.

Any of these can be run in multi-threaded, GPU, or MPI mode. Results from running on separate nodes on grid/cloud computing can be combined.

In [1]:
# matplotlib plots inline
%matplotlib inline
# Import the OSCARS SR module
import oscars.sr
# Import OSCARS plots (matplotlib)
from oscars.plots_mpl import *
# Import OSCARS 3D tools (matplotlib)
from oscars.plots3d_mpl import *
# Import OSCARS parametric surfaces
from oscars.parametric_surfaces import *

OSCARS v2.1.8 - Open Source Code for Advanced Radiation Simulation
Brookhaven National Laboratory, Upton NY, USA
http://oscars.bnl.gov
[email protected]

In [2]:
# Create a new OSCARS object. Default to 8 threads and always use the GPU if available
osr = oscars.sr.sr(nthreads=8, gpu=1)

In [3]:
# For these examples we will make use of a simple undulator field
osr.add_bfield_undulator(bfield=[0, 1, 0], period=[0, 0, 0.042], nperiods=31)
# Plot the field
plot_bfield(osr)

## Beam

Add a basic beam somewhat like NSLS2. Filament beam for simple studies.

In [4]:
# Add a basic electron beam with zero emittance
osr.set_particle_beam(energy_GeV=3, x0=[0, 0, -1], current=0.500)
# You MUST set the start and stop time for the calculation
osr.set_ctstartstop(0, 2)
# Plot trajectory
plot_trajectory_position(osr.calculate_trajectory())

## Power density - Rectangular planes

Calculate the power density on simple rectangular planes. The 'plane' argument is the initial plane that the surface is in. From there it can be rotated and translated. The order of 'XY' or any combination matters since the normal vector is given by X x Y (in the order written in the input). By default the coordinates in the returned power density are relative to the center of the rectangular grid (not absolute space). If you want the coordinates in absolute space you can specify dim=3, but I give the warning that this makes plotting rotated surfaces difficult.

Note: This is the recommended method for calculating power densities on rectangular surfaces. One may also do the same with parametric surfaces (shown later in this tutorial).

In [5]:
# Calculate power density 30 [m] downstream in the XY plane.
power_density = osr.calculate_power_density_rectangle(
    plane='XY',
    width=[0.030, 0.030],
    npoints=[51, 51],
    translation=[0, 0, 30]
)
plot_power_density(power_density)

In [6]:
# We can easily rotate the above, here about the z axis.
power_density = osr.calculate_power_density_rectangle(
    plane='XY',
    width=[0.030, 0.030],
    npoints=[51, 51],
    rotations=[0, 0, osr.pi()/4],
    translation=[0, 0, 30]
)
plot_power_density(power_density)

In [7]:
# Calculate the power density on a surface in the XZ plane.
# This can be thought of as the power density on the upper beampipe inner surface
power_density = osr.calculate_power_density_rectangle(
    plane='XZ',
    width=[0.020, 1.000],
    npoints=[51, 51],
    translation=[0, 0.004, 2]
)
plot_power_density(power_density)

In [8]:
# We'll now take the above plane and tilt it slightly as if it were a tapered beampipe
power_density = osr.calculate_power_density_rectangle(
    plane='XZ',
    width=[0.020, 1.000],
    npoints=[51, 51],
    rotations=[0.002, 0, 0],
    translation=[0, 0.004, 2]
)
plot_power_density(power_density)

## Power Density - Parametric Surfaces

We will now calculate the power density on a few parametric surfaces. A parametric surface in OSCARS is represented by a class. There is another tutorial which explains how to build your own. We begin with a simple rectangular surface, then explore some others. These surfaces come from the oscars.parametric_surfaces module.

In [9]:
# First create the surface of interest
rectangle = PSRectangle(L=0.030, W=0.030, nu=51, nv=51)
# Run calculation and plotting
pd = power_density_3d(osr, rectangle, translation=[0, 0, 30])

In [10]:
# It is easy to rotate the above and see it in 3D
pd = power_density_3d(osr, rectangle, rotations=[0, osr.pi()/4, 0], translation=[0, 0, 30])

In [11]:
# First create the surface of interest
sphere = PSSphere(R=0.020, nu=51, nv=51)
# Run calculation and plotting
pd = power_density_3d(osr, sphere, translation=[0, 0, 30])

In [12]:
# First create the surface of interest
cylinder = PSCylinder(R=0.020, L=0.010, nu=51, nv=51)
# Run calculation and plotting
pd = power_density_3d(osr, cylinder, rotations=[osr.pi()/2, 0, 0], translation=[0, 0, 30])

In [13]:
# Let's do a cylinder that is around the photon beam, downstream a bit
cylinder = PSCylinder(R=0.005, L=1.000, nu=51, nv=51)
# Run calculation and plotting. Here we needed to invert the normal due to the way
# it is defined in the PSCylinder class
pd = power_density_3d(osr, cylinder, translation=[0, 0, 5], normal=-1)

## Multi-particle power density

It is possible to run the power density calculations in multi-particle mode in the case that you have a very large beam, or multiple beams. It is shown here for completeness.

In [14]:
# Add a basic electron beam with non-zero emittance
osr.set_particle_beam(
    energy_GeV=3,
    x0=[0, 0, -1],
    current=0.500,
    sigma_energy_GeV=0.001*3,
    beta=[1.5, 0.8],
    emittance=[0.9e-9, 0.008e-9]
)
# You MUST set the start and stop time for the calculation
osr.set_ctstartstop(0, 2)

In [15]:
# Calculate power density 30 [m] downstream in the XY plane.
power_density = osr.calculate_power_density_rectangle(
    plane='XY',
    width=[0.030, 0.030],
    npoints=[51, 51],
    translation=[0, 0, 30],
    nparticles=3
)
plot_power_density(power_density)

## On Precision

The default relative precision is 0.01 (1%) and is controlled by the parameter:

• precision=0.01 (default)

You may retrieve the relative precision for all points in a calculation by including the parameter:

• quantity='precision'

Should you not reach the desired precision within max_level you will receive a warning message. To increase max_level you have two options:

• max_level=25
• max_level_extended=(some number above max_level)

The maximum max_level is 25 due to typical memory restrictions (because it is faster).
The 'extended' version runs in non-memory mode which allows higher precision at the cost of CPU time. Only in rare instances will you need this. You can also retrieve the 'level' of convergence for all points (which will show -1 for non-converged points) with the addition of: • quantity='level' In [16]: # Show the precision reached for each point. power_density = osr.calculate_power_density_rectangle( plane='XY', width=[0.030, 0.030], npoints=[51, 51], translation=[0, 0, 30], quantity='precision' ) plot_power_density(power_density, title='Precision') # Show the level reached for each point. power_density = osr.calculate_power_density_rectangle( plane='XY', width=[0.030, 0.030], npoints=[51, 51], translation=[0, 0, 30], quantity='level' ) plot_power_density(power_density, title='Level')
2023-03-29T06:03:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6249284148216248, "perplexity": 4137.14027746632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00575.warc.gz"}
http://scstatehouse.gov/sess120_2013-2014/SJ13/20130319.htm
South Carolina General Assembly 120th Session, 2013-2014 Journal of the Senate Tuesday, March 19, 2013 (Statewide Session) Indicates Matter Stricken Indicates New Matter The Senate assembled at 12:00 Noon, the hour to which it stood adjourned, and was called to order by the PRESIDENT. A quorum being present, the proceedings were opened with a devotion by the Chaplain as follows: From Micah we are reminded: "And what does the Lord require of you? To act justly and to love mercy   and to walk humbly with your God."     (Micah 6:8) Glorious Lord, indeed, the Bills and Resolutions and other formal edicts from this Senate are quite significant. They matter greatly to individuals and organizations across our forty-six counties. But perhaps just as important, dear God, are the messages sent forth by each of these Senators personally: how they conduct themselves, what they declare to be of great importance for them, what they value. We pray, O Lord, that the most meaningful messages are those found in how each of these leaders lives out their days. May they and their staff members be known far and wide as individuals who act justly, who love mercy, and who walk humbly with You. In Your wondrous name we pray, Lord. Amen. The PRESIDENT called for Petitions, Memorials, Presentments of Grand Juries and such like papers. RECESS At 12:05 P.M., on motion of Senator COURSON, the Senate receded from business not to exceed five minutes. At 12:13 P.M., the Senate resumed. Expression of Personal Interest Senator LARRY MARTIN rose for an Expression of Personal Interest. REGULATION RESUBMITTED Document No. 4261 Agency: State Board of Education Chapter: 43 Statutory Authority: 1976 Code Sections 59-5-60, 59-29-100, and 20 U.S.C. 6301, et seq. Received by Lieutenant Governor January 8, 2013 Referred to Education Committee Legislative Review Expiration May 8, 2013 Withdrawn due to end of two-year session June 8, 2012 Resubmitted with no substantive changes January 8, 2013 Received by Lieutenant Governor & Speaker May 8, 2013 House Committee Requested Withdrawal March 6, 2013 120 Day Period Tolled Withdrawn and Resubmitted March 15, 2013 REGULATIONS WITHDRAWN AND RESUBMITTED Document No. 4285 Agency: State Board of Education Chapter: 43 Statutory Authority: 1976 Code Sections 59-19-90, 59-63-30, 59-63-420, 59-63-470, 59-63-480, 59-63-490, 59-63-500, 59-63-510, 59-63-520, 59-63-530, 59-65-30, 59-65-90, and 20 U.S.C. 7165 SUBJECT: Transfers and Withdrawals Received by Lieutenant Governor January 8, 2013 Referred to Education Committee Legislative Review Expiration May 8, 2013 House Committee Requested Withdrawal March 6, 2013 120 Day Period Tolled Withdrawn and Resubmitted   March 13, 2013 Document No. 4294 Agency: State Board of Education Chapter: 43 Statutory Authority: 1976 Code Sections 59-5-60, 59-18-110, 59-29-10, et seq., 59-29-200, 59-33-30, 59-53-1810, 20 U.S.C. 1232(g), and 20 U.S.C. 6301, et seq. Received by Lieutenant Governor January 8, 2013 Referred to Education Committee Legislative Review Expiration May 8, 2013 House Committee Requested Withdrawal March 6, 2013 120 Day Period Tolled Withdrawn and Resubmitted March 15, 2013 Document No. 4308 Agency: State Board of Education Chapter: 43 Statutory Authority: 1976 Code Sections 59-5-60 and 59-29-170 Received by Lieutenant Governor January 15, 2013 Referred to Education Committee Legislative Review Expiration May 15, 2013 House Committee Requested Withdrawal March 6, 2013 120 Day Period Tolled Withdrawn and Resubmitted   March 15, 2013 Document No. 
4309 Agency: State Board of Education Chapter: 43 Statutory Authority: 1976 Code Sections 59-5-60, 59-40-10, et seq., and Public Law 111-117, December 16, 2001, Consolidated Appropriations Act, 2010 SUBJECT: Procedures and Standards for Review of Charter School Applications House Committee Requested Withdrawal March 6, 2013 120 Day Period Tolled Withdrawn and Resubmitted   March 15, 2013 Doctor of the Day Senator FAIR introduced Dr. John Evans of Greenville, S.C., Doctor of the Day. Leave of Absence On motion of Senator LARRY MARTIN, at 12:05 P.M., Senator HAYES was granted a leave of absence until 1:00 P.M. Leave of Absence On motion of Senator THURMOND, at 1:00 P.M., Senator SHEHEEN was granted a leave of absence for the balance of the day. Leave of Absence On motion of Senator JOHNSON, at 1:00 P.M., Senator ALLEN was granted a leave of absence for the balance of the day. S. 173 (Word version)     Sen. Campsen S. 237 (Word version)     Sen. Alexander S. 261 (Word version)     Sen. Campsen S. 313 (Word version)     Sen. Malloy S. 445 (Word version)     Sen. Campsen S. 514 (Word version)     Sen. Ford S. 515 (Word version)     Sens. Campsen, Gregory Privilege of the Chamber On motion of Senator BRYANT, with unanimous consent, the provisions of Rule 35B were waived and the Privilege of the Chamber, to that area behind the rail, was extended on behalf of Senator BENNETT to President George Benson of the College of Charleston. Expression of Personal Interest Senator RANKIN rose for an Expression of Personal Interest. Remarks by Senator RANKIN Thank you, Mr. PRESIDENT. If I can get the Senators from Horry County to join me up here -- I see the Senator from North Myrtle Beach joining me. Senator CLEARY is outside tending to other matters. I want to come to you today on behalf of Horry County. If I could get some attention over there from the upstate Senators, please -- Senator from Greenville, I want you to hear this if you will, because you have long been a supporter of the emergency response agencies as our State and our counties and our cities have gone about their business. I want to give you an example of how well, how dutifully, and how beautifully emergency response was put into action and on display Saturday, in the Carolina Forest community, which is between Conway and Myrtle Beach. That area is the fastest growing area in the county and has seen incredible growth. On Saturday, a fire broke out. Twenty-six buildings in a multi-family condominium project, known as the Winter Green area, were destroyed in a matter of fifteen minutes. It is being called an accident at this point with no sense of any criminal or intentional act, but totally accidental. A fire broke out and blew through one hundred nine individually owned units in the Carolina Forest area of Horry County. I am proud to tell you that no life was lost, other than a few pets, which we regret for those who owned them, but most were able to be saved that could be. Yesterday, we toured that area, and a number of other times over the weekend to survey the work being done. From the State Fire Marshal, from the county, from the city, and many of your areas in Columbia as well, you all pitched in to address a need -- a crucial and heart-breaking need for one hundred eighty-nine people whose homes were totally destroyed. We looked at the devastation and the quickness through which the fire ripped through a community. 
Were it not for the brave efforts of the firemen, it could have crossed over the road and destroyed another community, and more homes, and more trees in that neighborhood. It is a beautiful thing to report to you that no lives were lost. It is a beautiful thing to report to you that almost like a ballet in terms of choreography, the work of our folks, again aided by yours, this State, the fire marshal, and the Forestry Commission were able to come together to address this area. The Governor came down yesterday -- and we thank her for coming -- and applauding the efforts of everyone involved and speaking to those who lost everything. I'm sure you have seen it on the news sources, but it is a sad site. Once a beautiful row of homes, is now a vacant barren pond, only a footprint of a home that existed before. So on behalf of the Senators from Horry County -- Senator CLEARY as well -- our hearts break for those folks. We are saddened by their loss. We are proud to report that agencies and volunteers are working as quickly as humanly possible to hopefully allow these folks to start anew. There were restoration and insurance companies on site that day, along with numerous volunteers -- the Red Cross in particular -- who gave a tremendous outpouring of support. The folks of Horry County and across this State are working to address the needs of these victims in helping them rebuild their lives and start anew. It is a beautiful testament to the fabric of our county and State, that in a time of need, we do what our fellow South Carolinians have done in this instance. So there is good to report from this tragic experience. I want to touch on a personal tragedy, though not family and not physical. The home that my father grew up in, which was built in 1903, in the Allsbrook community, was likewise destroyed in a fire, the Saturday night after the fire in Carolina Forest. The heritage of our family place is now gone, but the memories will live on forever. Thank you to the community who responded and helped try to put out the fire. The home was built out of almost all wood, so if you struck a spark walking, it would be gone like that. It was a beautifully restored home though, and is a loss to all of us. I personally want to thank the community, the families, and the emergency response folks for doing what they do best. We have a lot to be proud of. This fabric of loss and defeat brings forth hope as we are now in the spring and Easter season from the deep depths of defeat, and into a bright horizon of tomorrow. Horry County has a lot to be thankful for and a lot of people to be thankful for. On behalf of the South Carolina Senate, I want to urge those victims who have loss, to embrace the hope of a brighter tomorrow. Thank you. Expression of Personal Interest Senator HEMBREE rose for an Expression of Personal Interest. Remarks by Senator HEMBREE I have just a few very brief remarks after what Senator RANKIN has already said very well. It was a situation that could make you proud to be from Horry County and proud to be a South Carolinian. To see people who pulled together, not only in government, but in nonprofit areas -- volunteers, school districts all across the board -- that they all moved so very swiftly and dropped what they were doing to take care of their neighbors. It really was the right thing to do. It would make you proud to be from Horry County. South Carolinians, please keep those folks in your prayers. Thanks. 
On motion of Senator HAYES, with unanimous consent, the remarks of Senators RANKIN and HEMBREE were ordered printed in the Journal. INTRODUCTION OF BILLS AND RESOLUTIONS The following were introduced: S. 535 (Word version) -- Senators Peeler, Alexander, L. Martin, McGill, Coleman, Jackson, Campbell, Setzler, Cromer, O'Dell, Sheheen, Turner, Fair, Ford, Nicholson, McElveen and Hayes: A BILL TO AMEND THE CODE OF LAWS OF SOUTH CAROLINA, 1976, BY ADDING ARTICLE 11 TO CHAPTER 119, TITLE 59, ENACTING "THE CLEMSON UNIVERSITY ENTERPRISE ACT", SO AS TO ALLOW THE BOARD OF TRUSTEES OF CLEMSON UNIVERSITY BY RESOLUTION TO ESTABLISH AN ENTERPRISE DIVISION AS PART OF CLEMSON UNIVERSITY, TO PROVIDE THAT CERTAIN ASSETS, PROGRAMS, AND OPERATIONS OF CLEMSON UNIVERSITY MAY BE TRANSFERRED TO THE ENTERPRISE DIVISION, TO PROVIDE THAT THE ENTERPRISE DIVISION IS EXEMPT FROM VARIOUS STATE LAWS GOVERNING PROCUREMENT, HUMAN RESOURCES, PERSONNEL, AND DISPOSITION OF REAL AND PERSONAL PROPERTY WITH SOME SUCH EXEMPTIONS APPLYING AUTOMATICALLY AND OTHERS REQUIRING ADDITIONAL ACTIONS BY THE BOARD OF TRUSTEES, TO PROVIDE THAT BONDS, NOTES, OR OTHER EVIDENCE OF INDEBTEDNESS MAY BE ISSUED FOR THE ENTERPRISE DIVISION AND PROVIDE AUDIT AND REPORTING REQUIREMENTS; AND TO AMEND SECTIONS 8-11-260, 8-17-370, AND 11-35-710, ALL AS AMENDED, AND RELATING RESPECTIVELY TO EXEMPTIONS FROM STATE PERSONNEL ADMINISTRATIONS, THE STATE EMPLOYEE GRIEVANCE PROCEDURE ACT, AND THE SOUTH CAROLINA CONSOLIDATED PROCUREMENT CODE, SO AS TO ADD EXEMPTIONS CONFORMING TO THE CLEMSON UNIVERSITY ENTERPRISE ACT. l:\council\bills\bbm\10823htc13.docx Read the first time and referred to the Committee on Finance. S. 536 (Word version) -- Senators Gregory, Reese, McElveen, Hembree, Hutto, Lourie, Campsen, Cleary, Allen, Shealy, O'Dell, Campbell, Cromer and Hayes: A BILL TO AMEND THE CODE OF LAWS OF SOUTH CAROLINA, 1976, SO AS TO ENACT THE "ENERGY SYSTEM FREEDOM OF OWNERSHIP ACT" BY ADDING ARTICLE 14 TO CHAPTER 52, TITLE 48 SO AS TO PROVIDE THAT A THIRD PARTY MAY SELL ELECTRICITY PRODUCED BY A RENEWABLE ENERGY FACILITY AS DEFINED IN THIS ACT, TO DEFINE CERTAIN TERMS, TO PROVIDE THAT THE SALE OF ELECTRICITY FROM A RENEWABLE ENERGY FACILITY BY THIRD PARTIES DOES NOT SUBJECT THE SELLER TO REGULATION AS A PUBLIC UTILITY, TO PROVIDE RELATED RESPONSIBILITIES OF THE STATE ENERGY OFFICE, TO IMPOSE CERTAIN REQUIREMENTS ON FEES CHARGED BY A UTILITY TO A RENEWABLE ENERGY FACILITY; AND TO PROVIDE THAT THE STATE ENERGY OFFICE MAY PROMULGATE NECESSARY REGULATIONS; AND BY ADDING SECTION 58-27-25 SO AS TO EXEMPT RENEWABLE ENERGY FACILITIES FROM PROVISIONS GOVERNING ELECTRIC UTILITIES AND ELECTRIC COOPERATIVES. l:\council\bills\agm\19944ab13.docx Read the first time and referred to the Committee on Judiciary. S. 537 (Word version) -- Senators Verdin, Sheheen, Cleary, Grooms, Matthews, McGill, Peeler, Shealy, O'Dell, Cromer, Alexander, Fair, Davis, Bright, Bryant and Corbin: A BILL TO AMEND SECTION 1-23-110(A)(3) OF THE 1976 CODE, RELATING TO PUBLIC HEARINGS CONCERNING PROPOSED REGULATIONS, TO REQUIRE PUBLIC MEETINGS PRIOR TO AN AGENCY PROMULGATING, AMENDING, OR REPEALING A REGULATION; AND TO AMEND SECTION 1-23-110(C) TO PROVIDE THAT ALL WRITTEN AND ORAL SUBMISSIONS FROM THE PUBLIC CONCERNING A REGULATION MUST BE TRANSMITTED TO THE SMALL BUSINESS REGULATORY REVIEW COMMITTEE. l:\s-res\dbv\018publ.kmm.dbv.docx Read the first time and referred to the Committee on Judiciary. S. 538 (Word version) -- Senators Malloy, Hayes and L. 
Martin: A BILL TO ADD SECTION 62-7-816A, SOUTH CAROLINA CODE OF LAWS, 1976, RELATING TO THE POWERS OF A TRUSTEE, SO AS TO AUTHORIZE A TRUSTEE TO AMEND THE PROVISIONS OF AN IRREVOCABLE TRUST WHEN DOING SO IS IN THE BEST INTEREST OF THE BENEFICIARIES AND IN FURTHERANCE OF THE PURPOSE OF THE TRUST; TO AMEND SECTION 62-7-903, RELATING TO THE SOUTH CAROLINA UNIFORM PRINCIPAL AND INCOME ACT, SO AS TO CLARIFY THE TERMS OF THE PROVISION; TO AMEND SECTION 62-7-904, RELATING TO THE POWERS OF A TRUSTEE TO ADJUST, SO AS TO INCLUDE IN THESE POWERS FOR THE CONVERSION OF A TRUST TO A UNITRUST; AND TO ADD SECTIONS 62-7-904A THROUGH 62-7-904P, RELATING TO UNITRUSTS, SO AS TO PROVIDE FOR THE REQUIREMENTS TO ESTABLISH A UNITRUST AND ITS ADMINISTRATION. l:\s-jud\bills\malloy\jud0055.kw.docx Read the first time and referred to the Committee on Judiciary. S. 539 (Word version) -- Labor, Commerce and Industry Committee: A JOINT RESOLUTION TO APPROVE REGULATIONS OF THE OCCUPATIONAL THERAPY BOARD, RELATING TO REQUIREMENTS OF LICENSURE FOR OCCUPATIONAL THERAPISTS, DESIGNATED AS REGULATION DOCUMENT NUMBER 4328, PURSUANT TO THE PROVISIONS OF ARTICLE 1, CHAPTER 23, TITLE 1 OF THE 1976 CODE. l:\council\bills\dbs\31120ac13.docx Read the first time and ordered placed on the Calendar without reference. S. 540 (Word version) -- Labor, Commerce and Industry Committee: A JOINT RESOLUTION TO APPROVE REGULATIONS OF THE PERPETUAL CARE CEMETERY BOARD, RELATING TO PERPETUAL CARE CEMETERY BOARD, DESIGNATED AS REGULATION DOCUMENT NUMBER 4168, PURSUANT TO THE PROVISIONS OF ARTICLE 1, CHAPTER 23, TITLE 1 OF THE 1976 CODE. l:\council\bills\dbs\31117ac13.docx Read the first time and ordered placed on the Calendar without reference. S. 541 (Word version) -- Labor, Commerce and Industry Committee: A JOINT RESOLUTION TO APPROVE REGULATIONS OF THE DEPARTMENT OF LABOR, LICENSING AND REGULATION - PANEL FOR DIETETICS, RELATING TO CODE OF ETHICS, INTERPRETATION OF STANDARDS, AND REPORTING OF DISCIPLINARY ACTIONS, DESIGNATED AS REGULATION DOCUMENT NUMBER 4327, PURSUANT TO THE PROVISIONS OF ARTICLE 1, CHAPTER 23, TITLE 1 OF THE 1976 CODE. l:\council\bills\dbs\31119ac13.docx Read the first time and ordered placed on the Calendar without reference. S. 542 (Word version) -- Labor, Commerce and Industry Committee: A JOINT RESOLUTION TO APPROVE REGULATIONS OF THE DEPARTMENT OF LABOR, LICENSING AND REGULATION - BUILDING CODES COUNCIL, RELATING TO INTERNATIONAL BUILDING CODE, INTERNATIONAL FIRE CODE, INTERNATIONAL FUEL GAS CODE, AND NATIONAL ELECTRICAL CODE, DESIGNATED AS REGULATION DOCUMENT NUMBER 4320, PURSUANT TO THE PROVISIONS OF ARTICLE 1, CHAPTER 23, TITLE 1 OF THE 1976 CODE. l:\council\bills\dbs\31118ac13.docx Read the first time and ordered placed on the Calendar without reference. S. 543 (Word version) -- Senators Courson, L. Martin, Grooms, Shealy, Hayes and Bennett: A CONCURRENT RESOLUTION TO DECLARE APRIL 2013 AS "HOMESCHOOL RECOGNITION MONTH" IN SOUTH CAROLINA, TO RECOGNIZE THE DILIGENT EFFORTS OF HOMESCHOOLING PARENTS AND THE ACADEMIC SUCCESS OF THEIR STUDENTS, AND TO EXPRESS SINCERE APPRECIATION FOR THEIR FOCUS ON THE WELL-BEING AND OVERALL ACHIEVEMENTS OF THEIR CHILDREN. l:\council\bills\rm\1165ac13.docx The Concurrent Resolution was introduced and referred to the Committee on Education. S. 
544 (Word version) -- Senators Hayes, Coleman, Gregory and Peeler: A CONCURRENT RESOLUTION TO REQUEST THAT THE DEPARTMENT OF TRANSPORTATION NAME THE PORTION OF SOUTH CAROLINA HIGHWAY 72 IN YORK COUNTY FROM ITS INTERSECTION WITH RAWLSVILLE ROAD TO ITS INTERSECTION WITH CRAIG ROAD "EZRA DEWITT MEMORIAL HIGHWAY" AND ERECT APPROPRIATE MARKERS OR SIGNS ALONG THIS PORTION OF HIGHWAY THAT CONTAIN THE WORDS "EZRA DEWITT MEMORIAL HIGHWAY". l:\council\bills\swb\5159cm13.docx The Concurrent Resolution was introduced and referred to the Committee on Transportation. S. 545 (Word version) -- Senator Williams: A CONCURRENT RESOLUTION TO REQUEST THAT THE DEPARTMENT OF TRANSPORTATION NAME THE INTERSECTION LOCATED AT THE JUNCTURE OF SOUTH CAROLINA HIGHWAYS 76 AND 576 AT WAHEE ROAD IN MARION COUNTY "ROBERT J. MCINTYRE, SR. INTERSECTION" AND ERECT APPROPRIATE MARKERS OR SIGNS AT THIS INTERSECTION THAT CONTAIN THE WORDS "ROBERT J. MCINTYRE, SR. INTERSECTION". l:\s-res\kmw\001mcin.mrh.kmw.docx The Concurrent Resolution was introduced and referred to the Committee on Transportation. S. 546 (Word version) -- Senator Thurmond: A CONCURRENT RESOLUTION TO RECOGNIZE AND HONOR LAWRENCE MCGOWAN "LARRY" TODD, ASSISTANT SOLICITOR OF THE 9TH JUDICIAL CIRCUIT OF CHARLESTON COUNTY, UPON THE OCCASION OF HIS RETIREMENT AFTER EIGHTEEN YEARS OF OUTSTANDING SERVICE, AND TO WISH HIM CONTINUED SUCCESS AND HAPPINESS IN ALL HIS FUTURE ENDEAVORS. l:\council\bills\gm\29648ahb13.docx The Concurrent Resolution was adopted, ordered sent to the House. S. 547 (Word version) -- Senators Sheheen, Coleman and Hayes: A SENATE RESOLUTION TO RECOGNIZE AND CONGRATULATE THE REVEREND HERBERT C. CRUMP, JR., OF ROCK HILL, FOUNDER AND SENIOR PASTOR OF FREEDOM TEMPLE MINISTRIES, INC., UPON HIS ELEVATION TO BISHOP IN THE MT. CALVARY HOLY CHURCH OF AMERICA ON APRIL 6, 2013, AND TO WISH HIM GOD'S RICHEST BLESSINGS AS HE CONTINUES TO SERVE THE LORD. l:\council\bills\nbd\11186ac13.docx S. 548 (Word version) -- Senators Scott, Alexander, Allen, Bennett, Bright, Bryant, Campbell, Campsen, Cleary, Coleman, Corbin, Courson, Cromer, Davis, Fair, Ford, Gregory, Grooms, Hayes, Hembree, Hutto, Jackson, Johnson, Leatherman, Lourie, Malloy, L. Martin, S. Martin, Massey, Matthews, McElveen, McGill, Nicholson, O'Dell, Peeler, Pinckney, Rankin, Reese, Setzler, Shealy, Sheheen, Thurmond, Turner, Verdin, Williams and Young: A SENATE RESOLUTION TO EXPRESS THE PROFOUND SORROW OF THE MEMBERS OF THE SOUTH CAROLINA SENATE UPON THE PASSING OF HARRY GENE BERRY OF BLYTHEWOOD AND TO EXTEND THE DEEPEST SYMPATHY TO HIS FAMILY AND MANY FRIENDS. l:\council\bills\rm\1189ahb13.docx S. 549 (Word version) -- Senator Verdin: A SENATE RESOLUTION TO HONOR AND RECOGNIZE JAMES HILL UPON THE OCCASION OF HIS RETIREMENT AND TO WISH HIM WELL IN ALL HIS FUTURE ENDEAVORS. l:\s-res\dbv\023hill.mrh.dbv.docx S. 550 (Word version) -- Senator Verdin: A SENATE RESOLUTION TO HONOR AND RECOGNIZE MR. DICK CODA FOR HIS MANY TALES OF ADVENTURE TRAPPING IN LAURENS COUNTY AND TO WISH HIM HEALTH AND HAPPINESS IN ALL HIS FUTURE ENDEAVORS. l:\s-res\dbv\024coda.mrh.dbv.docx REPORTS OF STANDING COMMITTEES Appointments Reported Senator ALEXANDER from the Committee on Labor, Commerce and Industry submitted a favorable report on: Statewide Appointment Initial Appointment, Jobs Economic Development Authority, with the term to commence July 27, 2012, and to expire July 12, 2015 5th Congressional District: Gregory A. 
Thompson, 1820 Stadium Road, Sumter, SC 29154 VICE Hampton Atkins Senator CAMPSEN from the Committee on Fish, Game and Forestry submitted a favorable report on: Statewide Appointment Reappointment, Governing Board of Department of Natural Resources, with the term to commence July 1, 2012, and to expire July 1, 2016 4th Congressional District: Norman F. Pulliam, Sr., 1150 Woodburn Road, Spartanburg, SC 29302 HOUSE CONCURRENCES S. 522 (Word version) -- Senators Campbell and Grooms: A CONCURRENT RESOLUTION TO RECOGNIZE THE SIGNIFICANT CONTRIBUTIONS AND ACCOMPLISHMENTS OF THE ALCOA MT. HOLLY PLANT IN GOOSE CREEK, SOUTH CAROLINA, UPON THEIR ONE HUNDRED TWENTY-FIFTH ANNIVERSARY AND TO DECLARE MARCH 20, 2013, AS "ALCOA APPRECIATION DAY" IN SOUTH CAROLINA. Returned with concurrence. S. 480 (Word version) -- Senators Alexander, Hutto and Rankin: A CONCURRENT RESOLUTION TO FIX NOON ON WEDNESDAY, MAY 1, 2013, AS THE TIME TO ELECT A SUCCESSOR TO THE MEMBER OF THE PUBLIC SERVICE COMMISSION FOR THE FIRST DISTRICT FOR A TERM EXPIRING ON JUNE 30, 2016; TO ELECT A SUCCESSOR TO THE MEMBER OF THE PUBLIC SERVICE COMMISSION FOR THE THIRD DISTRICT FOR A TERM EXPIRING ON JUNE 30, 2016; TO ELECT A SUCCESSOR TO THE MEMBER OF THE PUBLIC SERVICE COMMISSION FOR THE FIFTH DISTRICT FOR A TERM EXPIRING ON JUNE 30, 2016; AND TO ELECT A PUBLIC SERVICE COMMISSIONER FOR THE SEVENTH DISTRICT, AS A SUCCESSOR TO THE PUBLIC SERVICE COMMISSIONER FOR THE AT-LARGE SEAT, FOR A TERM EXPIRING ON JUNE 30, 2016. Returned with concurrence. THE SENATE PROCEEDED TO A CALL OF THE UNCONTESTED LOCAL AND STATEWIDE CALENDAR. The following Bill was read the third time and ordered sent to the House of Representatives: S. 237 (Word version) -- Senators Shealy, Setzler, Courson, Turner, Cromer, Massey, Young and Alexander: A BILL TO AMEND SECTION 10-1-161 OF THE 1976 CODE, RELATING TO STATE CAPITOL BUILDING FLAGS FLOWN AT HALF-STAFF, TO PROVIDE THAT FLAGS ATOP THE STATE CAPITOL BUILDING MUST BE LOWERED TO HALF-STAFF FOR MEMBERS OF THE UNITED STATES MILITARY SERVICES, WHO WERE RESIDENTS OF THIS STATE AND WHO LOST THEIR LIVES IN THE LINE OF DUTY, ON THE DAY WHEN THEIR NAMES ARE RELEASED TO THE GENERAL PUBLIC, AND THE FLAGS SHALL REMAIN AT HALF-STAFF UNTIL AT LEAST DAWN THE SECOND DAY AFTER FUNERAL SERVICES ARE CONDUCTED. PREVIOUSLY PROPOSED AMENDMENT WITHDRAWN AMENDED, READ THE SECOND TIME S. 261 (Word version) -- Senators Leatherman, Setzler, Ford and Campsen: A BILL TO AMEND SECTION 12-6-40, AS AMENDED, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO THE APPLICATION OF THE INTERNAL REVENUE CODE TO STATE INCOME TAX LAWS, SO AS TO UPDATE THE REFERENCE TO THE INTERNAL REVENUE CODE TO JANUARY 2, 2013, AND TO DELETE AN INAPPLICABLE SUBITEM. The Senate proceeded to a consideration of the Bill, the question being the adoption of the previously proposed amendment and printed in the journal of March 13, 2013. Senator HUTTO spoke on the amendment. Senator HUTTO asked unanimous consent to withdraw the amendment. The amendment was withdrawn. Senator HUTTO proposed the following amendment (261R007.CBH) , which was adopted: Amend the bill, as and if amended, by adding a new SECTION to read: /   SECTION   ___.     A.     
Section 12-6-50 of the 1976 Code, as last amended by Act 126 of 2012, is further amended by adding appropriately numbered items to read: "( )   Section 68 relating to the reduction on itemized deductions and Section 151(d)(3) relating to the reduction on the personal exemption for: (a)   a joint return or surviving spouse with an adjusted gross income exceeding three hundred thousand dollars or the same adjusted gross income adjusted for inflation pursuant to Section 68, whichever is higher; (b)   a head of household with an adjusted gross income exceeding two hundred seventy-five thousand dollars or the same adjusted gross income adjusted for inflation pursuant to Section 68, whichever is higher; and (c)   an individual who is not married and who is not a surviving spouse or head of household with an adjusted gross income exceeding two hundred fifty thousand dollars or the same adjusted gross income adjusted for inflation pursuant to Section 68, whichever is higher." B.     From existing funds, the Department of Revenue shall create and distribute the forms and worksheets necessary to aid taxpayers in utilizing the provisions of this SECTION.     / Renumber sections to conform. Amend title to conform. Senator HUTTO explained the amendment. The question then was second reading of the Bill. The "ayes" and "nays" were demanded and taken, resulting as follows: Ayes 36; Nays 0 AYES Alexander Bennett Bright Bryant Campbell Campsen Cleary Coleman Corbin Courson Cromer Davis Fair Ford Gregory Hembree Hutto Johnson Leatherman Malloy Martin, Larry Martin, Shane McElveen McGill Nicholson O'Dell Peeler Pinckney Scott Setzler Shealy Sheheen Thurmond Turner Verdin Young Total--36 NAYS Total--0 There being no further amendments, the Bill was read the second time, passed and ordered to a third reading. S. 261--Ordered to a Third Reading On motion of Senator LEATHERMAN, with unanimous consent, S. 261 was ordered to receive a third reading on Wednesday, March 20, 2013. S. 143 (Word version) -- Senators Malloy, Ford, Massey, S. Martin and Hayes: A BILL TO AMEND ARTICLES 1, 2, 3, AND 4 OF TITLE 62, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO THE SOUTH CAROLINA PROBATE CODE, SO AS TO, AMONG OTHER THINGS, DEFINE THE JURISDICTION OF THE PROBATE CODE, TO DETERMINE INTESTATE SUCCESSION, TO PROVIDE FOR THE PROCESS OF EXECUTING A WILL, TO PROVIDE FOR THE PROCESS TO PROBATE AND ADMINISTER A WILL, AND TO PROVIDE FOR LOCAL AND FOREIGN PERSONAL REPRESENTATIVES; AND TO AMEND ARTICLES 6 AND 7 OF TITLE 62, RELATING TO THE SOUTH CAROLINA PROBATE CODE, SO AS TO PROVIDE FOR THE GOVERNANCE OF NONPROBATE TRANSFERS, AND TO AMEND THE SOUTH CAROLINA TRUST CODE. The Senate proceeded to a consideration of the Bill, the question being the adoption of the amendment proposed by the Committee on Judiciary. 
Senator MALLOY proposed the following amendment (JUD0143.007), which was adopted: Amend the committee report, as and if amended, by striking page [143-6], lines 1-8 in their entirety and inserting the following: //   Amend the bill further, as and if amended, by striking page 411, lines 11-23, in their entirety and inserting the following: /   (3)   The terms of the second trust may not contain any provision nor reduce any fixed income, annuity, or unitrust interest of a beneficiary in the assets of an original trust document if the inclusion of the provision or reduction in the original trust document would have disqualified any assets of the original trust for any federal or state income, estate, or gift tax deduction received on account of any assets of the original trust, or if the inclusion of the provision or reduction in the original trust would have reduced the amount of any federal or state income, estate, or gift tax deduction received. In addition, the terms of the second trust may not reduce any retained interest of a beneficiary of the original trust if the interest is a qualified interest under Internal Revenue Code Section 2702.     / Amend the bill further, as and if amended, page 413, lines 1-21 in their entirety and inserting the following: /   (h)   The provisions of this section shall not be construed to create or imply a duty of the trustee to exercise the power to distribute principal or income, or to create an inference of impropriety made as a result of a trustee not exercising the power to appoint principal or income conferred under subsection (a) of this section. The provisions of this section shall not be construed to abridge the right of any trustee who has a power to appoint property in further trust that arises under the terms of the original trust or under any other section of this article or under another provision of law or under common law. The terms of an original trust may modify or waive the notice requirements under subsection (g), reduce or increase restrictions on altering the interests of beneficiaries under subsection (d), and may otherwise contain provisions that are inconsistent with the requirements of this section. (i)     A trustee or beneficiary may commence a proceeding to approve or disapprove a proposed exercise of the trustee's special power to appoint to another trust pursuant to subsection (a) of this section.     / Amend the bill further, as and if amended, by striking page 326, lines 10-14 in their entirety and inserting the following: /   when the conditions have changed). See also All Saints Parish, Waccamaw, a South Carolina non-profit corporation, a/k/a The Episcopal Church of All Saints and a/k/a The Vestry and Church Wardens of the Episcopal Church of All Saints Parish, 358 S.C. 209, 595 S.E. 2d 253 (Ct. App 2004), rev'd on other grounds, 385 S. C. 428, 685 S.E. 2d 163 (2009).   /   // Renumber sections to conform. Amend title to conform. Senator MALLOYexplained the perfecting amendment. The Committee on Judiciary proposed the following amendment (JUD0143.006), which was adopted: Amend the bill, as and if amended, by striking page 178, lines 11-36 in their entirety and inserting the following: /   (7)   (A)   A legal proceeding pending on the date of a decedent's death in which the decedent was a necessary party shall be suspended until a personal representative is appointed to administer the decedent's estate, unless a court otherwise orders. 
(B)   Pursuant to Section 62-3-104, this subsection does not apply to a proceeding by a secured creditor of a decedent to enforce the secured creditor's right to its security. It does apply to a proceeding for a deficiency judgment against a decedent or the estate of a decedent." This section establishes the mechanism for presenting claims. The claim may be delivered to the personal representative and must be filed with the court. Certain information must be included for claims not yet due, contingent, unliquidated, and secured claims. In lieu of presenting a claim, a proceeding may be commenced against a personal representative in any appropriate court, but the commencement must occur within the time for presenting claims. No claim is required in matters which were pending at the time of decedent's death. Actions on claims must be commenced within the thirty days after the personal representative has mailed a notice of disallowance, but the personal representative or the court may consent prior to the expiration of the thirty-day period to extensions that do not run beyond the applicable statute of limitations. The 2013 amendment requires a creditor seeking appointment to attach a written statement of its claim to the application or petition for appointment. Allowing a creditor to present a claim in this manner creates an exception to the general rule of Section 62-3-104 and Section 62-3-804(6), otherwise precluding the presentation of a claim prior to the appointment of a personal representative. The 2013 amendment further clarifies that, as earlier stated in Section 62-3-104, an in rem proceeding by a secured creditor is not suspended until a personal representative is appointed, unless that proceeding includes an action for a deficiency judgment against a decedent or his estate.     / Amend the bill further, as and if amended, by striking page 300, lines 21-43; page 301, lines 1-43; page 302, lines 1-43, and page 303, lines 1-32 in their entirety and inserting the following: /     Section 62-7-401.   (a)(1)   A trust described in Section 62-7-102 may be created by: (1)(i)   transfer of property to another person as trustee during the settlor's lifetime or by will or other disposition taking effect upon the settlor's death; (2)(ii)   written declaration signed by the owner of property that the owner holds identifiable property as trustee; or (3)(iii)   exercise of a power of appointment in favor of a trustee. (2)   To be valid, a trust of real property, created by transfer in trust or by declaration of trust, must be proved by some writing signed by the party creating the trust. A transfer in trust of personal property does not require written evidence, but must be proven by clear and convincing evidence, pursuant to Section 62-7-407. (b)   When any conveyance shall be made of any lands or tenements by which a trust or confidence shall or may arise or result by the implication or construction of law or be transferred or extinguished by act or operation of law, such trust or confidence shall be of like force and effect as it would have been without Section 62-7-401(a) A trust that arises by act or operation of law does not require the existence of a writing. 
(c)   A revocable inter vivos trust may be created either by declaration of trust or by a transfer of property and is not rendered invalid because the settler retains substantial control over the trust including, but not limited to, (i) a right of revocation, (ii) substantial beneficial interests in the trust, or (iii) the power to control investments or reinvestments. This subsection does not prevent a finding that a revocable inter vivos trust, enforceable for other purposes, is illusory for purposes of determining a spouse's elective share rights pursuant to Article 2, Title 62. A finding that a revocable inter vivos trust is illusory and thus invalid for purposes of determining a spouse's elective share rights pursuant to Article 2, Title 62 does not render that revocable inter vivos trust invalid, but allows inclusion of the trust assets as part of the probate estate of the settlor only for the purpose of calculating the elective share. In that event, the trust property that passes or has passed to the surviving spouse, including a beneficial interest of the surviving spouse in that trust property, must be applied first to satisfy the elective share and to reduce contributions due from other recipient of transfers including the probate estate, and the trust assets are available for satisfaction of the elective share only to any remaining extent necessary pursuant to Section 62-2-207. REPORTER'S COMMENT This section is based on Restatement (Third) of Trusts Section 10 (Tentative Draft No. 1, approved 1996), and Restatement (Second) of Trusts Section 17 (1959). Under the methods specified for creating a trust in this section, a trust is not created until it receives property. For what constitutes an adequate property interest, see Restatement (Third) of Trusts Sections 40-41 (Tentative Draft No. 2, approved 1999); Restatement (Second) of Trusts Sections 74-86 (1959). The property interest necessary to fund and create a trust need not be substantial. A revocable designation of the trustee as beneficiary of a life insurance policy or employee benefit plan has long been understood to be a property interest sufficient to create a trust. See Section 62-7-103(11) ("property" defined). Furthermore, the property interest need not be transferred contemporaneously with the signing of the trust instrument. A trust instrument signed during the settlor's lifetime is not rendered invalid simply because the trust was not created until property was transferred to the trustee at a much later date, including by contract after the settlor's death. A pourover devise to a previously unfunded trust is also valid and may constitute the property interest creating the trust. See Unif Testamentary Additions to Trusts Act Section 1 (1991), codified at Uniform Probate Code Section 2-511 and SCPC Section 62-2-510 (pourover devise to trust valid regardless of existence, size, or character of trust corpus). See also Restatement (Third) of Trusts Section 19 (Tentative Draft No. 1, approved 1996). Section 62-7-401(a) provides different methods to create a trust, creating a distinction between third-party-trusteed trusts in subsection (a)(1)(i) and self-trusteed trusts in subsection (a)(1)(ii). Subsection (a)(1)(i) provides that, if a third party is to serve as trustee, transfer of property to that other person, whether during life or at death, is sufficient to create a trust; no writing is required. 
Subsection (a)(1)(ii) requires that, if the settlor is also to be the trustee, then some written declaration signed by the settlor is required to create the trust. Such a declaration need not be a trust agreement, but can be some written evidence signed by the settlor sufficient to establish that the settlor intended to hold the property in trust. While this section refers to transfer of property to a trustee, a trust can be created even though for a period of time no trustee is in office. See Restatement (Third) of Trusts Section 2 cmt. g (Tentative Draft No. 1, approved 1996); Restatement (Second) of Trusts Section 2 cmt. i (1959). A trust can also be created without notice to or acceptance by a trustee or beneficiary. See Restatement (Third) of Trusts Section 14 (Tentative Draft No. 1, approved 1996); Restatement (Second) of Trusts Sections 35-36 (1959). The methods set out in Section 62-7-401 are not the exclusive methods to create a trust as recognized by Section 62-7-102.   A trust can also be created by a promise that creates enforceable rights in a person who immediately or later holds these rights as trustee. See Restatement (Third) of Trusts Section 10(e) (Tentative Draft No. 1, approved 1996). A trust thus created is valid notwithstanding that the trustee may resign or die before the promise is fulfilled. Unless expressly made personal, the promise can be enforced by a successor trustee. For examples of trusts created by means of promises enforceable by the trustee, see Restatement (Third) of Trusts Section 10 cmt. g (Tentative Draft No. 1, approved 1996); Restatement (Second) of Trusts Sections 14 cmt. h, 26 cmt. n (1959). Pre-SCTC South Carolina law made a distinction between trusts for personal property and trusts in land. Trusts in personal property could be proved, as well as created, by parol declarations. See Harris v. Bratton, 34 S.C. 259. 13 S.E. 447 (1891). On the other hand, for a trust of any "land, tenements, or hereditaments" to be valid, former South Carolina Probate Code Section 62-7-101 mandated that the trust be proved by a writing signed by the party creating the trust. An exception to the requirement of a writing to establish a trust in land was found in former SCPC Section 62-7-103 for trusts arising by operation of law, such as resulting and constructive trusts. Because the SCTC applies only to express trusts and not to trusts implied in law (Section 62-7-102), Sections 62-7-401(a)(1)(i) and (1)(ii) codify existing law that trusts of real property must be established by a writing, transfers in trust of personal property do not have the same requirement, and trusts containing real property that arise by operation of law do not require evidence of writing to be valid. Former SCPC Section 62-7-112 has been retained as SCTC Section 62-7-401(c). Former SCPC Section 62-7-112 was enacted after the Siefert decision, Seifert v. Southern Nat'l Bank of South Carolina, 305 S.C. 353, 409 S.E. 2d 337 (1991), to clarify that the settlor's retention of substantial control over a trust, such as a right to revoke, does not render that trust invalid. While a trust created by will may come into existence immediately at the testator's death and not necessarily only upon the later transfer of title from the personal representative, Section 62-7-701 makes clear that the nominated trustee does not have a duty to act until there is an acceptance of the trusteeship, express or implied. 
To avoid an implied acceptance, a nominated testamentary trustee who is monitoring the actions of the personal representative but who has not yet made a final decision on acceptance should inform the beneficiaries that the nominated trustee has assumed only a limited role. The failure so to inform the beneficiaries could result in liability if misleading conduct by the nominated trustee causes harm to the trust beneficiaries. See Restatement (Third) of Trusts Section 35 cmt. b (Tentative Draft No 2, approved 1999). While this section confirms the familiar principle that a trust may be created by means of the exercise of a power of appointment (paragraph ((a)(1)(iii)), this Code does not legislate comprehensively on the subject of powers of appointment but addresses only selected issues. See Section 62-7-302 (representation by holder of general testamentary power of appointment). For the law on powers of appointment generally, see Restatement (Second) of Property: Donative Transfers Sections 11.1-24.4 (1986); Restatement (Third) of Property: Wills and Other Donative Transfers (in progress).         / Amend the bill further, as and if amended, by striking page 326, lines 10-14 in their entirety and inserting the following: /   when the conditions have changed). See also All Saints Parish, Waccamaw, a South Carolina non-profit corporation, a/k/a The Episcopal Church of All Saints and a/k/a The Vestry and Church Wardens of the Episcopal Church of All Saints Parish, 358 S.C. 209, 595 S.E. 2d 253 (Ct. App 2004), rev'd on other grounds, 385 S. C. 428, 685 S.E. 2d 163 (2009).       / Renumber sections to conform. Amend title to conform. Senator MALLOY explained the committee amendment. The question then was second reading of the Bill, as amended. The "ayes" and "nays" were demanded and taken, resulting as follows: Ayes 38; Nays 1 AYES Alexander Allen Bennett Bryant Campbell Campsen Cleary Coleman Corbin Courson Cromer Davis Fair Ford Gregory Hutto Johnson Leatherman Lourie Malloy Martin, Larry Martin, Shane Matthews McElveen McGill Nicholson O'Dell Peeler Rankin Scott Setzler Shealy Sheheen Thurmond Turner Verdin Williams Young Total--38 NAYS Bright Total--1 There being no further amendments, the Bill was read the second time, passed and ordered to a third reading. S. 515 (Word version) -- Senators Grooms, Campsen and Gregory: A JOINT RESOLUTION TO PROHIBIT TREE REMOVAL IN THE MEDIAN OF A PORTION OF INTERSTATE 26 UNTIL THE TRANSPORTATION REVIEW COMMITTEE HAS REVIEWED AND COMMENTED ON THE PROJECT. The Senate proceeded to a consideration of the Resolution, the question being the second reading of the Joint Resolution. The "ayes" and "nays" were demanded and taken, resulting as follows: Ayes 34; Nays 1 AYES Alexander Allen Bennett Bright Campbell Campsen Cleary Coleman Corbin Courson Cromer Davis Fair Ford Gregory Hayes Hembree Hutto Leatherman Malloy Martin, Larry Martin, Shane Matthews Nicholson Peeler Pinckney Rankin Scott Setzler Shealy Thurmond Turner Verdin Young Total--34 NAYS Bryant Total--1 The Resolution was read the second time and ordered placed on the Third Reading Calendar. H. 3786 (Word version) -- Reps. Erickson, M.S. McLeod, Spires, Alexander, Allison, Anderson, Anthony, Atwater, Bales, Ballentine, Bannister, Barfield, Bedingfield, Bernstein, Bingham, Bowen, Bowers, Branham, Brannon, G.A. Brown, R.L. Brown, Chumley, Clemmons, Clyburn, Cobb-Hunter, Cole, H.A. Crawford, K.R. 
Crawford, Crosby, Daning, Delleney, Dillard, Douglas, Edge, Felder, Finlay, Forrester, Funderburk, Gagnon, Gambrell, George, Gilliard, Goldfinch, Govan, Hamilton, Hardee, Hardwick, Harrell, Hart, Hayes, Henderson, Herbkersman, Hiott, Hixon, Hodges, Horne, Hosey, Howard, Huggins, Jefferson, Kennedy, King, Knight, Limehouse, Loftis, Long, Lowe, Lucas, Mack, McCoy, McEachern, W.J. McLeod, Merrill, Mitchell, D.C. Moss, V.S. Moss, Munnerlyn, Murphy, Nanney, Neal, Newton, Norman, Ott, Owens, Parks, Patrick, Pitts, Pope, Powers Norrell, Putnam, Quinn, Ridgeway, Riley, Rivers, Robinson-Simpson, Rutherford, Ryhal, Sabb, Sandifer, Sellers, Simrill, Skelton, G.M. Smith, G.R. Smith, J.E. Smith, J.R. Smith, Sottile, Southard, Stavrinakis, Stringer, Tallon, Taylor, Thayer, Toole, Vick, Weeks, Wells, Whipper, White, Whitmire, Williams, Willis and Wood: A CONCURRENT RESOLUTION TO RECOGNIZE THAT ABUSE AND NEGLECT OF CHILDREN IS A SIGNIFICANT PROBLEM AND TO DECLARE TUESDAY, APRIL 9, 2013, AS "CHILDREN'S ADVOCACY DAY" IN SOUTH CAROLINA. The Concurrent Resolution was adopted, ordered returned to the House. S. 313 (Word version) -- Senators Hayes, Courson, Setzler, Matthews, Lourie, Hutto, Jackson, Rankin, L. Martin and O'Dell: A BILL TO AMEND THE CODE OF LAWS OF SOUTH CAROLINA, 1976, BY ADDING CHAPTER 62 TO TITLE 59 SO AS TO ESTABLISH A SCHOOL DISTRICT CHOICE PROGRAM AND OPEN ENROLLMENT PROGRAM WITHIN THE PUBLIC SCHOOL SYSTEM OF THIS STATE, TO PROVIDE FOR A VOLUNTARY PILOT TESTING OF THE PROGRAM BEFORE FULL IMPLEMENTATION, TO DEFINE CERTAIN TERMS, TO PROVIDE FOR AN APPLICATION PROCESS FOR STUDENTS WISHING TO TRANSFER, TO PROVIDE RESPONSIBILITIES, STANDARDS, AND CRITERIA CONCERNING SENDING AND RECEIVING SCHOOLS AND SCHOOL DISTRICTS, TO PROVIDE STANDARDS OF APPROVAL, PRIORITIES FOR ACCEPTING STUDENTS AND CRITERIA FOR DENYING STUDENTS, TO PROVIDE THAT WITH CERTAIN EXCEPTIONS THE PARENT IS RESPONSIBLE FOR TRANSPORTING THE STUDENT TO SCHOOL, TO PROVIDE THAT DISTRICTS SHALL RECEIVE ONE HUNDRED PERCENT OF THE BASE STUDENT COST FROM THE STATE FOR NONRESIDENT STUDENTS ENROLLED PURSUANT TO THIS CHAPTER, TO PROVIDE THAT A STUDENT GENERALLY MAY NOT PARTICIPATE IN INTERSCHOLASTIC ATHLETIC CONTESTS AND COMPETITIONS FOR ONE YEAR AFTER HIS DATE OF ENROLLMENT, TO PROVIDE THAT A RECEIVING DISTRICT SHALL ACCEPT CERTAIN CREDITS TOWARD A STUDENT'S REQUIREMENTS FOR GRADUATION AND SHALL AWARD A DIPLOMA TO A NONRESIDENT STUDENT WHO MEETS ALL REQUIREMENTS FOR GRADUATION, TO PROVIDE THAT A SCHOOL DISTRICT MAY CONTRACT WITH CERTAIN ENTITIES FOR THE PROVISION OF SERVICES, TO PROVIDE THAT THE STATE DEPARTMENT OF EDUCATION ANNUALLY SHALL SURVEY SCHOOL DISTRICTS TO DETERMINE PARTICIPATION IN THE OPEN ENROLLMENT PROGRAM AND PROVIDE CERTAIN DELETED REPORTS ON THE PROGRAM TO THE GENERAL ASSEMBLY, TO PROVIDE A DISTRICT MAY RECEIVE CERTAIN WAIVERS CONCERNING THE IMPLEMENTATION OF THIS ACT, AND TO PROVIDE THAT IMPLEMENTATION OF THIS PROGRAM EACH FISCAL YEAR IS CONTINGENT UPON THE APPROPRIATION OF ADEQUATE FUNDING BY THE GENERAL ASSEMBLY. The Senate proceeded to a consideration of the Bill, the question being the adoption of the amendment proposed by the Committee on Education. The Committee on Education proposed the following amendment (AGM\313C003.AGM.AB13), which was adopted: Amend the bill, as and if amended, Section 59-62-10(B), as contained in SECTION 1, page 2, line 25, so as to delete / make / and insert / promote student achievement by making /. 
Amend the bill further, Section 59-62-30(E), as contained in SECTION 1, page 5, lines 3-9, by deleting the subsection in its entirety and inserting: /   (E)   The State Board of Education shall promulgate regulations that list factors to be considered in determining school capacity. In developing these regulations, a task force must be established with membership to include, but not be limited to, school board members, superintendents, principals, parents, and business and community leaders. The membership of the task force must reflect urban and rural areas of the State. / Amend the bill further, Section 59-62-70, as contained in SECTION 1, by deleting the SECTION in its entirety and inserting: /   Section 59-62-70.   (A)   In implementing the provisions of this chapter, a student who currently resides in the attendance zone of a school or who qualifies to attend schools within the attendance zone pursuant to Section 59-63-30 must not be displaced by a student transferring from outside the attendance zone. (B)   A school district is not required to: (1)   accept students at a particular school residing outside the school's attendance area in excess of three percent of the school's highest average daily membership in any year from the preceding ten-year period. The acceptance of students residing outside of the attendance area for a particular school must be phased in at a yearly increase of one percent of the average daily membership of the school in the immediately preceding year. Enrolled students residing outside of the school's attendance zone must continue to be counted in the acceptance percentage of the receiving school until the student is no longer enrolled in a receiving school; (2)   make alterations in the structure of a requested school; (3)   establish and offer a particular program in a school if the program is not currently offered in the requested school; or (4)   alter or waive an established eligibility criteria for participation in a particular program, including age requirements, course prerequisites, or required levels of performance. (C)(1)   The school board of trustees shall adopt specific policies regarding capacity standards, standards of approval, and priorities of acceptance. Standards of approval may include consideration of the capacity of a program, class, or grade level. Standards must not be based on ethnicity, national origin, income level, or disabling conditions, English proficiency level, or previous disciplinary proceedings, except that an expulsion from another district, offenses committed that would result in expulsion, or suspensions from the previous school year that total ten days may be included. However, the school board may provide for provisional enrollment of students with prior behavior problems and may establish conditions under which enrollment of nonresident students would be permitted or continued. These standards may include an applicant's gender, previous academic achievement, and athletic, artistic, or other extracurricular ability, but only if enrollment in that program or school is based upon specific levels of performance uniformly applied to all applicants seeking enrollment to that program or school. 
(2)(a)   In the assignment of students, priority must be given as follows unless a district has a procedure in place and that procedure was implemented in the school year prior to implementation of this chapter: (i)     first, to returning students who continue to meet the requirements of the program or school; (ii)   second, to students residing within the district including students currently enrolled in private schools and home schools, but who desire to attend a school outside their attendance zone; (iii)   third, to students who meet the requirements of the program or school and who seek to attend the designated school in the district's feeder pattern; (iv)   fourth, to the siblings of students residing in the same household already enrolled in the school, provided that any siblings seeking priority under this section meet the requirements of the program or school; and (v)   fifth, to students whose parent or legal guardian is assigned to the school as his primary place of employment. (b)   The policies must not have the purpose or effect of causing racial segregation in a school or the school district. (D)(1)   A receiving school only may deny resident students living outside the attendance zone or nonresident students permission to enroll when: (a)   there is a lack of capacity in the school or program requested; (b)   the school requested does not offer the appropriate programs or is not structured or equipped with the necessary facilities to meet special needs of a student; (c)   the student does not meet established eligibility criteria for participation in a particular program, including age requirements, course prerequisites, or required levels of performance; (d)   a voluntary or court-ordered desegregation plan is in effect for the school district, and the denial is necessary in order to enable compliance with the desegregation plan; (e)   the student was suspended for ten days or more the previous school year, was expelled, has committed offenses that would result in expulsion, or is in the process of being suspended or expelled; or (f)   a student who qualifies to attend a school in a school district pursuant to Section 59-63-30, including the requirement that the student own real estate in the district that has an assessed value of three hundred dollars or more, may attend the schools within the attendance zone where the property is located without having to apply for enrollment to schools in that attendance zone pursuant to this chapter and the receiving school may not deny the student permission to enroll at the school. (2)   A nonresident student may appeal the decision of a district to deny enrollment to the superintendent of the receiving school district or his designee, and the district or the student may appeal an adverse decision to the board of trustees of the school district. The denial by the receiving district's board of an appeal of an enrollment request is final. (E)   A sending school district only may deny resident students a transfer to a receiving school when the transfer would violate a voluntary or court-ordered desegregation plan in effect for that district. However, if the percentage of students seeking to transfer to receiving schools exceeds twenty percent of the sending district's enrollment, the sending district must concur with any additional students transferring from the school to attend a receiving school. 
If a school within the sending district has transfer requests which exceed twenty percent of its enrollment resulting in the school being at least twenty percent below capacity, the State Board of Education shall appoint an external review team to study educational programs in the school, identify factors contributing to the transfer requests of students, and make recommendations to the district. (F)   A district may not take action to prohibit or prevent application by resident students to attend school in a nonresident school district or to attend another school within the resident district. (G)   Each school district annually shall submit capacity figures for each of its schools to the department. Each district is responsible for annually posting school capacities on the district and school websites. Additionally, information regarding the current enrollment of the school and its percentage of capacity must be included. This information must be provided to the department and posted on the district and school websites by February fifteenth of each school year as it relates to capacity capabilities for the following school year. / Amend the bill further, Section 59-62-135, as contained in SECTION 1, page 13-14, by adding an appropriately lettered subsection at the end to read: /   ( )   The State Board of Education shall promulgate a regulation to define the term 'good cause' as it is to be applied in this section. / Renumber sections to conform. Amend title to conform. Senator HAYES explained the committee amendment. On motion of Senator HAYES, the Bill was carried over, as amended. CARRIED OVER S. 521 (Word version) -- Senators Campsen, Sheheen and Scott: A BILL TO AMEND SECTION 59-3-10 OF THE 1976 CODE, RELATING TO THE ELECTION OF THE STATE SUPERINTENDENT OF EDUCATION, TO PROVIDE FOR THE APPOINTMENT OF THE SUPERINTENDENT BY THE GOVERNOR, WITH THE ADVICE AND CONSENT OF THE SENATE, AND TO PROVIDE FOR THE TERM, QUALIFICATIONS, AND FILLING OF A VACANCY IN THE OFFICE SUPERINTENDENT; AND TO REPEAL SECTION 59-3-20. On motion of Senator PEELER, the Bill was carried over. H. 3620 (Word version) -- Reps. Sandifer and Gambrell: A BILL TO AMEND SECTION 38-90-160, AS AMENDED, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO THE EXEMPTION OF CAPTIVE INSURANCE COMPANIES FROM CERTAIN PROVISIONS OF TITLE 38, SO AS TO PROVIDE AN INDUSTRIAL INSURED CAPTIVE INSURANCE COMPANY IS SUBJECT TO CERTAIN REQUIREMENTS CONCERNING REPORTS FOR RISK-BASED CAPITAL, ACQUISITIONS DISCLOSURE, AND ASSET DISPOSITION, AND CEDED REINSURANCE AGREEMENTS, AND TO PROVIDE THE DIRECTOR OF THE DEPARTMENT OF INSURANCE MAY ELECT NOT TO TAKE REGULATORY ACTION CONCERNING RISK-BASED CAPITAL IN SPECIFIC CIRCUMSTANCES. On motion of Senator SCOTT, the Bill was carried over. H. 3621 (Word version) -- Reps. Sandifer and Gambrell: A BILL TO AMEND SECTION 38-5-120, AS AMENDED, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO THE REVOCATION OR SUSPENSION OF A CERTIFICATE OF AUTHORITY TO TRANSACT BUSINESS IN THIS STATE BY AN INSURER, SO AS TO REVISE PROVISIONS CONCERNING A REVOCATION OF THE LICENSEE OF A HAZARDOUS INSURER. On motion of Senator SCOTT, the Bill was carried over. THE CALL OF THE UNCONTESTED CALENDAR HAVING BEEN COMPLETED, THE SENATE PROCEEDED TO THE MOTION PERIOD. On motion of Senator PEELER, the Senate agreed to dispense with the Motion Period. THE SENATE PROCEEDED TO THE INTERRUPTED DEBATE. RETURNED TO THE STATUS OF SPECIAL ORDER S. 92 (Word version) -- Senators Davis, S. 
Martin, Verdin, Grooms, Bryant and Bright: A BILL TO AMEND THE CODE OF LAWS OF SOUTH CAROLINA, 1976, TO ENACT THE "NDAA NULLIFICATION ACT OF 2013", BY ADDING SECTION 8-1-15, RELATING TO PUBLIC OFFICERS, AND EMPLOYEES, TO PROHIBIT ANY OFFICER OR EMPLOYEE OF THE STATE OR ANY OFFICER OR EMPLOYEE OF A POLITICAL SUBDIVISION FROM AIDING THE DETENTION OF ANY UNITED STATES CITIZEN WITHOUT TRIAL BY THE UNITED STATES ARMED FORCES IN VIOLATION OF THE CONSTITUTION OF SOUTH CAROLINA. The Senate proceeded to a consideration of the Bill, the question being the adoption of the amendment proposed by the Committee on Judiciary. Senator DAVIS spoke on the Bill. Amendment No. P-2 Senator HUTTO proposed the following Amendment No. P-2 (92MW2), which was adopted: Amend the committee amendment, as and if amended, by striking SECTION 2   and inserting the following: /   SECTION   2.   Chapter 1, Title 8 of the 1976 Code is amended by adding: "Section 8-1-15.   No agency of the State, officer, or employee of this State, solely on official state duty, may engage in an activity that aids an agency of the armed forces of the United States in execution of 50 U.S.C. 1541, as provided by the National Defense Authorization Act for Fiscal Year 2012, or any subsequent provision of this law in the detainment of any citizen of the United States in violation of Section 3, Article I and Section 14, Article I of the South Carolina Constitution."/ Renumber sections to conform. Amend title to conform. Senator HUTTO explained the amendment. Senator DAVIS moved that the amendment be adopted. Amendment No. P5 Senator HUTTO proposed the following Amendment No. P5 (92MW3), which was withdrawn: Amend the committee amendment, as and if amended, by striking SECTION 2   and inserting the following: /   SECTION   2.   Chapter 1, Title 8 of the 1976 Code is amended by adding: "Section 8-1-15 (a)   No agency of the State, agency of a political subdivision of the State, officer, or employee of the State, officer or employee of a political subdivision of the State, acting in his official capacity, to include any member of the South Carolina Military Department solely on official state duty, or employees of any state or local detention facility solely on official state duty, may engage in an activity that aids an agency of the armed forces of the United States in execution of 50 U.S.C. 1541, as provided by the National Defense Authorization Act for Fiscal Year 2012, or any subsequent provision of this law in the detainment of any citizen of the United States in violation of Section 3, Article I and Section 14, Article I of the South Carolina Constitution." (b) The intent of subsection (a) is to nullify actions of the Government of the United States that conflict with the Constitution of South Carolina.   / Renumber sections to conform. Amend title to conform. Senator HUTTO explained the amendment. Senator HUTTO asked unanimous consent to make a motion to withdraw the amendment. Senator BRIGHT objected. Senator HUTTO resumed explaining the amendment. Point of Order Senator BRIGHT raised a Point of Order that the Senator was out of order inasmuch as he was impugning the motives of two Senators. The PRESIDENT did not sustain the Point of Order. Senator HUTTO resumed explaining the amendment. On motion of Senator HUTTO, with unanimous consent, Amendment No. P5 was withdrawn. The question then was the adoption of the committee amendment. 
The Committee on Judiciary proposed the following amendment (JUD0092.008), which was adopted: Amend the bill, as and if amended, by striking all after the enacting words and inserting therein the following: /   SECTION   1.   The General Assembly declares that authority for this act is the following: (1)   The Tenth Amendment to the United States Constitution provides that the United States federal government is authorized to exercise only those powers delegated to it in the Constitution. (2)   Article VI, Clause 2 of the Constitution of the United States provides that laws of the United States are the supreme law of the land provided that they are made in pursuance of the powers delegated to the federal government in the Constitution. (3)   Article I, Section 9, Clause 2 of the Constitution provides that the privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it. (4)   The First Amendment provides that the Congress of the United States shall make no law prohibiting the right of the people to petition the government for a redress of grievances. (5)   The Fourth Amendment provides that the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated. (6)   The Fifth Amendment provides that the people have a right to be free from deprivation of life, liberty, or property, without due process of law. (7)   The Sixth Amendment provides that the people have a right in criminal prosecutions to enjoy a speedy trial by an impartial jury in the state and district where the crime shall have been committed; to be informed of the nature and cause of the accusation; to confront witnesses; and to counsel. (8)   The Fourteenth Amendment provides that the people are to be free from deprivation of life, liberty, or property, without due process of law. SECTION   2.   Chapter 1, Title 8 of the 1976 Code is amended by adding: "Section 8-1-15.   No agency of the State, agency of a political subdivision of the State, officer or employee of the State, officer or employee of a political subdivision of the State, acting in his official capacity, to include any member of the South Carolina Military Department solely on official state duty, or employees of any state or local detention facility solely on official state duty, may engage in an activity that aids an agency of the armed forces of the United States in execution of 50 U.S.C. 1541, as provided by the National Defense Authorization Act for Fiscal Year 2012, or any subsequent provision of this law in the detainment of any citizen of the United States in violation of Section 3, Article I, and Section 14, Article I of the South Carolina Constitution." SECTION   3.   This act takes effect upon approval by the Governor./ Renumber sections to conform. Amend title to conform. The committee amendment was adopted, as amended. Amendment No. 1 Senator LARRY MARTIN proposed the following Amendment No. 
1 (JUD0092.009), which was adopted: Amend the bill, as and if amended, by striking the title on page 1, lines 11-20 and inserting therein the following: /   TO AMEND THE CODE OF LAWS OF SOUTH CAROLINA, 1976, BY ADDING SECTION 8-1-15, RELATING TO AGENCIES OF THE STATE, PUBLIC OFFICERS, AND EMPLOYEES, TO PROHIBIT ANY STATE AGENCY, OFFICER, OR EMPLOYEE OR ANY OFFICER OR EMPLOYEE OF A POLITICAL SUBDIVISION FROM AIDING THE DETENTION OF ANY UNITED STATES CITIZEN WITHOUT TRIAL BY THE UNITED STATES ARMED FORCES IN VIOLATION OF THE CONSTITUTION OF SOUTH CAROLINA.   / Renumber sections to conform. Amend title to conform. Senator LARRY MARTIN explained the amendment. Recorded Vote Senators BRYANT, BRIGHT and SHANE MARTIN desired to be recorded as voting against the adoption of the amendment. The question then was the second reading of the Bill. The "ayes" and "nays" were demanded and taken, resulting as follows: Ayes 25; Nays 15 AYES Alexander Bennett Bright Bryant Campbell Campsen Cleary Corbin Courson Cromer Davis Fair Gregory Hayes Hembree Martin, Larry Martin, Shane O'Dell Peeler Rankin Shealy Thurmond Turner Verdin Young Total--25 NAYS Coleman Ford Hutto Jackson Johnson Malloy Matthews McElveen McGill Nicholson Pinckney Reese Scott Setzler Williams Total--15 The Bill was read the second time, passed and ordered to a third reading. Statement by Senator LEATHERMAN I was not in the Chamber during the vote on second reading of this Bill, but had I been present I would have voted for passage. I was not present because I was with hundreds of constituents who came to Columbia to meet with me. The Bill was returned to the status of Special Order. On motion of Senator CROMER, with unanimous consent, the Senate stood adjourned out of respect to the memory of Mr. Owen Junior "Mayor" Smith of Columbia, S.C. Mr. Smith was the beloved husband of Ruth B. Smith and devoted and wonderful father to Pamela S. Pierce, and doting grandfather of Drew and McKenzie. He is also survived by his sister-in-law, Jane Smith, and special niece, Sandra Moore Snead, and numerous other nieces and nephews. He was the son of the late Charles Thomas Smith and Dorothy Keys Smith. A veteran of the U. S. Navy during the Korean Conflict, he was awarded the Korean Service Medal with 4 engagement stars, a good conduct medal and a United Nations medal. Mr. Smith had retired as an administrator after over 40 years service with the S.C. State Department of Transportation. Mr. Smith lived his life marked by unselfish and unfailing service to others and by his love for family and friends. and On motion of Senator HUTTO, with unanimous consent, the Senate stood adjourned out of respect to the memory of Mr. Thomas N. Rhoad of Branchville, S.C., former colleague and friend who served in the House of Representatives from District 90 for twenty-three years.
2014-12-21T19:32:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2713851034641266, "perplexity": 8662.742857643352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772265.125/warc/CC-MAIN-20141217075252-00114-ip-10-231-17-201.ec2.internal.warc.gz"}
https://drops.dagstuhl.de/opus/frontdoor.php?source_opus=5303
### On Approximating Node-Disjoint Paths in Grids

License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following DOI: 10.4230/LIPIcs.APPROX-RANDOM.2015.187
URN: urn:nbn:de:0030-drops-53032
URL: https://drops.dagstuhl.de/opus/volltexte/2015/5303/

### Abstract

In the Node-Disjoint Paths (NDP) problem, the input is an undirected n-vertex graph G, and a collection {(s_1,t_1),...,(s_k,t_k)} of pairs of vertices called demand pairs. The goal is to route the largest possible number of the demand pairs (s_i,t_i), by selecting a path connecting each such pair, so that the resulting paths are node-disjoint. NDP is one of the most basic and extensively studied routing problems. Unfortunately, its approximability is far from being well-understood: the best current upper bound of O(sqrt(n)) is achieved via a simple greedy algorithm, while the best current lower bound on its approximability is Omega(log^{1/2-delta}(n)) for any constant delta. Even for seemingly simpler special cases, such as planar graphs, and even grid graphs, no better approximation algorithms are currently known. A major reason for this impasse is that the standard technique for designing approximation algorithms for routing problems is LP-rounding of the standard multicommodity flow relaxation of the problem, whose integrality gap for NDP is Omega(sqrt(n)) even on grid graphs. Our main result is an O(n^{1/4} * log(n))-approximation algorithm for NDP on grids. We distinguish between demand pairs with both vertices close to the grid boundary, and pairs where at least one of the two vertices is far from the grid boundary. Our algorithm shows that when all demand pairs are of the latter type, the integrality gap of the multicommodity flow LP-relaxation is at most O(n^{1/4} * log(n)), and we deal with demand pairs of the former type by other methods. We complement our upper bounds by proving that NDP is APX-hard on grid graphs.

### BibTeX - Entry

@InProceedings{chuzhoy_et_al:LIPIcs:2015:5303,
  author =    {Julia Chuzhoy and David H. K. Kim},
  title =     {{On Approximating Node-Disjoint Paths in Grids}},
  booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =     {187--211},
  series =    {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =      {978-3-939897-89-7},
  ISSN =      {1868-8969},
  year =      {2015},
  volume =    {40},
  editor =    {Naveen Garg and Klaus Jansen and Anup Rao and Jos{\'e} D. P. Rolim},
  publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =   {Dagstuhl, Germany},
  URL =       {http://drops.dagstuhl.de/opus/volltexte/2015/5303},
  URN =       {urn:nbn:de:0030-drops-53032},
  doi =       {10.4230/LIPIcs.APPROX-RANDOM.2015.187},
  annote =    {Keywords: Node-disjoint paths, approximation algorithms, routing and layout}
}

Keywords: Node-disjoint paths, approximation algorithms, routing and layout
Collection: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)
Issue Date: 2015
Date of publication: 13.08.2015
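The O(sqrt(n)) greedy baseline mentioned in the abstract is simple enough to sketch. The snippet below is an editorial illustration only: it assumes Python with networkx (not something the paper uses), and it shows the problem statement plus the greedy baseline, not the paper's O(n^{1/4} * log(n))-approximation algorithm.

```python
import networkx as nx

def greedy_ndp(G, pairs):
    """Greedy baseline for Node-Disjoint Paths: repeatedly route the demand
    pair with the currently shortest connecting path, then delete that path's
    vertices so later paths stay node-disjoint."""
    H = G.copy()
    remaining = list(pairs)
    routed = []
    while remaining:
        best = None
        for (s, t) in remaining:
            if s in H and t in H and nx.has_path(H, s, t):
                path = nx.shortest_path(H, s, t)
                if best is None or len(path) < len(best[1]):
                    best = ((s, t), path)
        if best is None:        # no remaining pair can still be connected
            break
        (s, t), path = best
        routed.append(((s, t), path))
        remaining.remove((s, t))
        H.remove_nodes_from(path)   # enforce node-disjointness
    return routed

# Tiny demo on a grid graph, the setting studied in the paper.
G = nx.grid_2d_graph(6, 6)
demand_pairs = [((0, 0), (5, 5)), ((0, 5), (5, 0)), ((2, 0), (2, 5))]
for pair, path in greedy_ndp(G, demand_pairs):
    print(pair, "routed via", path)
```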
2022-12-02T19:20:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6493831276893616, "perplexity": 3341.6701530265996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.40/warc/CC-MAIN-20221202183117-20221202213117-00225.warc.gz"}
http://www.popflock.com/learn?s=Talk:Material_conditional
Talk:Material Conditional

WikiProject Mathematics (Rated Start-class, Low-priority; Field: Basics)
WikiProject Philosophy (Rated Start-class, Mid-importance)

## Second hatnote

The 2012 discussion about this matter did not reveal a single instance where material implication (rule of inference) is called "material conditional" or by some other name which redirects here, or may be mistyped in a way which gets a reader to this article. At least, I do not see there any concrete direction. The [1] edit was anything else than an attempt to circumvent the due process. Incnis Mrsi (talk) 07:32, 8 May 2013 (UTC)

## Confusing

At present the lead section does not define "material conditional". Furthermore, it assumes some understanding of formal logic, but never actually positions "material conditional" within the study of logic. More detail, more basic explanation, and a definition of the concept would be appreciated. Cnilep (talk) 01:22, 10 May 2013 (UTC)

I've reworked the lead to try and make it clearer and more informative. I omitted the distinction between material implication and logical implication from the lead because it wasn't clearly explained and the introduction of that distinction in the lead seemed excessive and confusing. I've tried to describe the meaning of the operator in a clear manner and I've also pointed out a common confusion of beginners to formal logic in relation to that operator. I've also added two citations and extended the segment that listed logical equivalents. Your feedback would be most welcome. AnotherPseudonym (talk) 14:59, 28 May 2013 (UTC)

## p->q is logically equivalent to ...

"Reworking" undone. I will revert on sight any edits which inject knowledge like (whatever a college student can derive from laws of Boolean logic), because a propositional calculus is not necessarily classical/Boolean. There is no such thing as the propositional calculus. Incnis Mrsi (talk) 07:04, 29 May 2013 (UTC)

I've reverted your revert. There may be no such thing as the propositional calculus, but removing Boolean propositional calculus from the lead would be wrong. -- Arthur Rubin (talk) 03:36, 30 May 2013 (UTC)

It means that you sided with ignorance in this particular case, not more, and not less. You also pointed to some (unexplained) problems with my " " and other regularization of typography and some (unspecified) "other problems", but this does not change much.
I do not know who user:AnotherPseudonym is, but we know who user:Arthur Rubin is. How do you, Arthur Rubin, explain removal of the Stanford Encyclopedia of Philosophy reference and its replacement with (technically broken) ones to a book written by a certain Paul Teller? Incnis Mrsi (talk) 12:42, 3 June 2013 (UTC)

Using an encyclopedia to create another encyclopedia is a pointless exercise. An article should be created from primary sources not from another encyclopedia. Why not just replace the entirety of the article with a link to the Stanford Encyclopedia of Philosophy entry on the topic? There is nothing wrong with the book by Teller and it has the virtue of being online in complete form, as does the other referenced book. Also it doesn't matter who I am or who Arthur Rubin is or who you are. AnotherPseudonym (talk) 11:07, 4 June 2013 (UTC)

I take your point but I think that is rather heavy-handed. You could just qualify what you identified as too general. Yes a college student can derive the equivalences but the point is to provide a concise description of the operator in the lead. AnotherPseudonym (talk) 07:45, 31 May 2013 (UTC)

I have added the necessary qualification. Regarding the original lead, it was a mess. Amongst other things the original lead had redundancies, it employed terms without first defining them or linking to a definition, sometimes the term "compound" was used and other times "statement" was used, the relatively minor matter of material implication vs. logical entailment was too long, used an awful example and just confused what preceded it. In the lead it would have been sufficient to just say something like: "The material conditional is to be distinguished from logical entailment (which is usually symbolised using [double turnstile])." The distinction can then be detailed in the body of the article. Also the failure to even mention propositional calculus -- which is the context in which someone is most likely to look up the meaning of the operator -- was an unacceptable omission. By the time someone reaches the study of paraconsistent logical systems they will likely have no need to look up what a material conditional is on Wikipedia. A novice is most likely to look up this entry in Wikipedia and they will most likely have encountered the operator in the context of classical/Boolean propositional calculus. AnotherPseudonym (talk) 08:06, 31 May 2013 (UTC)

Propositional calculus has no special relevance to the topic, because the leading statement already says that "->" is a logical connective: try to think what follows from this. I do not see any point to stress the use of "->" namely in propositional calculi (not in a first-order logic or whatever). Paraconsistent logical systems also have no special relevance to the question raised and I do not realize why I should read anything about these. Which logical systems, except for Boolean-based, have the material conditional equivalent to ${\displaystyle \neg (p\land \neg q)}$? If you do not yet realize what I mean, then read logical connective #Redundancy please. Incnis Mrsi (talk) 12:42, 3 June 2013 (UTC)

I'm not arguing for the inclusion of anything about paraconsistent logical systems, I am arguing against that.
If I'm not mistaken, paraconsistent logics are a species of non-Boolean logic and your contention -- if I am understanding it -- that the article lead should possess a generality that does not preclude non-Boolean logics amounts to a position that paraconsistent logics -- amongst other non-Boolean logics -- should bear on the composition of the lead. If you write a thoroughly generic lead it will retain the vagueness and lack of clarity that was originally complained about. The special relevance of propositional calculus and first-order predicate logic is that they are the most likely context in which a novice will encounter the operator and will seek clarification from an encyclopedia. I believe I know what you mean but I don't agree with the completely generic form of the lead that you support. Such an article will be a useless piece of formalism. Anyone that is even aware that there are logical systems that are non-Boolean will have no need to consult a general encyclopedia regarding the material conditional. Those that are likely to consult an encyclopedia -- those encountering the material conditional in the context of Boolean first-order logic -- will not gain anything from a generic article that takes account of non-Boolean logics in all of its descriptions. Technically what you are arguing is correct but from a pragmatic and pedagogical position it is misguided. AnotherPseudonym (talk) 06:49, 4 June 2013 (UTC)

Could you bring your pedagogical positions to wikiprojects which really need it? Wikipedia is an encyclopedia. Study Guides is not, Wikiversity is not, but Wikipedia is! And WP:Wikipedia is not a textbook. It is a harmful misconception that Wikipedia serves college students, not experts. There are things which experts can more likely find here than in Google, because not all structures of knowledge are detectable by text search engines. Incnis Mrsi (talk) 08:35, 4 June 2013 (UTC)

Actually, I have no objections against your current version except a minor one: I'd say "in classical logic", dropping "propositional calculus" from the lead altogether. I spend my time arguing in this talk page only because I hate when a guy like Arthur Rubin thinks that WP:BRD is a guideline for some other (minor) editors, not for himself. Incnis Mrsi (talk) 11:10, 4 June 2013 (UTC)

Done. Looking at the lead now I don't think it has lost anything significant by not explicitly referencing propositional calculus and first-order logic. AnotherPseudonym (talk) 11:52, 4 June 2013 (UTC)

## Symbols are neither defined nor linked

In the paragraph "In classical logic p \rightarrow q is logically equivalent to \neg(p \and \neg q) and by De Morgan's Law to \neg p \or q" the symbols for negation, logical and and logical or are neither explained nor linked to other Wikipedia articles. In this way, the article is not understandable to the layman. --84.150.172.61 (talk) 09:06, 16 December 2013 (UTC)

## Monotonicity

I think that what is said about monotonicity (under "Formal properties") is confusing. It is simple to see that the material conditional is anti-monotonic in the first argument and monotonic in the second argument. Still, if we lift the reasoning from truth values to the inference process, then it is true that "if we know more, we cannot derive less" (in classical logic). Saying, as it is in the article, that if a->b then ∀c. (a∧c)->b doesn't mean that -> is monotonic: the property is indeed true due to the anti-monotonicity of -> in the first argument!
(Adding "and c" to the premise can only decrease its truth value and thus increase the truth value of the whole implication, where "decrease" and "increase" refer to the total ordering of the boolean lattice ⊥ < ⊤). Is there anyone who thinks that these two levels should be clarified and kept distinct? Grace.malibran (talk) 14:01, 9 January 2014 (UTC)

## Challenge to causality

"But unlike as the English construction may, the conditional statement "p->q" does not specify a causal relationship...." I doubt that. Can anybody give an example of "If p, then q" which implies causality? I can think of many examples which might give rise to a suspicion of causality, but none in which the suspicion could be considered justifiable. "If you hit the ramp going less than 50 MPH, you're not going to make it." That suggests cause and effect, but I say it doesn't imply it; it just expresses a correlation. --Marshall "Unfree" Price 208.54.85.219 (talk) 01:58, 26 May 2014 (UTC)

The current version (March 2016) reads: "However, unlike the English construction, the material conditional statement p -> q does not specify a causal relationship between p and q." This still seems specious, as the English construction does not necessarily specify a causal relationship between p and q. For example, in English I could say, "If I'm at a Fourth of July picnic, then there are going to be fireworks tonight." That does not specify any causal relationship between the two statements. (My presence at the picnic is not causing the fireworks, and the fireworks are not causing my presence at the picnic.) 74.71.76.34 (talk) 14:06, 19 March 2016 (UTC)

## Opposites

I came to this article hoping to discover whether "material conditional" was the exact opposite of "counterfactual conditional". I suspect there might be a "factual conditional", in which case, I'll have to go on another errand, seeking the opposite of "material conditional". Oh, maybe it's "immaterial conditional". Who knows? --Marshall "Unfree" Price 172.56.26.37 (talk) 02:09, 26 May 2014 (UTC)

## Diagrams are not labeled

I get it, a minimalistic style approach was taken making these diagrams, but why are they not labeled? http://upload.wikimedia.org/wikipedia/commons/thumb/1/1e/Venn1011.svg/440px-Venn1011.svg.png -- Preceding unsigned comment added by Scire9 (talk · contribs) 20:43, 17 June 2014 (UTC)

## Venn Diagram is wrong

The top of page Venn diagram of A --> B is wrong. One circle (A) should be completely inside the other (B). For example in the following image, Whale --> Mammal. http://faculty.ycp.edu/~dhovemey/fall2006/mat111/lecture/figures/whalesAreMammals.png. If it is a whale then it must be a mammal but if it is a mammal it may not be a whale. Hence whales are a subset of mammals. John Middlemas (talk) 23:05, 31 July 2014 (UTC)

I expanded the caption a little. Does that explain for you? Paradoctor (talk) 23:53, 31 July 2014 (UTC)

The average reader will wonder where is A and where is B? They are not labelled. It can be deciphered from your wording that the A is left but with effort. Also the red outside of both circles is not necessary and confusing. The white part is also irrelevant and detracts from the understanding that all of A should be inside B which is the only point anyway. All you want is a smaller red 'A' circle inside a larger white 'B' circle and white surround. The red signifies member of A which is what we have assumed. Sorry, but I think all that complication will just confuse the real meaning of A --> B. Better the whale/mammal pic.
88.203.90.14 (talk) 01:02, 1 August 2014 (UTC) I'm beginning to wonder whether this diagram is actually helpful. It is correct, though. The idea is to represent statements through sets. A statement ${\displaystyle A}$ is true iff ${\displaystyle x\in A}$ for all ${\displaystyle x}$. Note that we're using the same name for a statement and the set representing it. ${\displaystyle A\rightarrow B}$ is false only if ${\displaystyle A}$ is true and ${\displaystyle B}$ is false. This means that the set ${\displaystyle A\rightarrow B}$ excludes exactly those ${\displaystyle x}$ for which ${\displaystyle x\in A}$ and ${\displaystyle x\notin B}$, which corresponds to the white area. Do you see why the areas outside the circles must be red? Paradoctor (talk) 01:55, 1 August 2014 (UTC) ## "But unlike as the English construction may, the conditional statement "p->q" does not" The sentence in the intro which begins "But unlike as the English construction may, the conditional statement "p->q" does not" does not parse ## Deriving the Truth Table and the "Definition" of Material Implication It is trivial to prove the following using the rules of natural deduction: ${\displaystyle A\land B\implies [A\implies B]}$ (Truth table, line 1) ${\displaystyle A\land \neg B\implies \neg [A\implies B]}$ (Truth table, line 2) ${\displaystyle \neg A\implies [A\implies B]}$ (Truth table, lines 3-4) ${\displaystyle [A\implies B]\iff \neg [A\land \neg B]}$ (Often given as the definition of material implication -- not required in the above derivation of the truth table.) It makes me wonder why so many folks believe that material implication is somehow different from the usage of implications in natural language. What's wrong with: If pigs could fly, then I'd be the King France? They should understand that anything that is true or false will follow from a falsehood. Danchristensen (talk) 03:23, 5 January 2018 (UTC) ## Article Intro The introduction is appallingly bad, it begins with reference to a currently non-existent diagram. Please would someone add an appropriate simple picture to show what this means ? And shouldn't the second paragraph should be the first ? There is a lot of this article (and a lot of argument on this page) which is quite incomprehensible to the ordinary reader. Please would all WP editors concentrate on wording articles to inform and educate those who are not familiar with specialist subjects ? Darkman101 (talk) 23:52, 6 July 2018 (UTC) This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
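The equivalence and monotonicity claims discussed in the threads above are easy to check mechanically. The following snippet is an editorial illustration (Python is assumed; it is not part of the original talk page): it encodes the material conditional by its truth table and verifies that p->q is equivalent to ¬(p∧¬q) and to ¬p∨q, and that the connective is anti-monotone in its antecedent and monotone in its consequent.

```python
from itertools import product

# Material conditional given directly by its truth table.
TRUTH_TABLE = {(True, True): True, (True, False): False,
               (False, True): True, (False, False): True}

def implies(p, q):
    return TRUTH_TABLE[(p, q)]

# Equivalences from the "p->q is logically equivalent to ..." thread.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == (not (p and not q))   # p->q  <=>  not(p and not q)
    assert implies(p, q) == ((not p) or q)        # p->q  <=>  (not p) or q

# The "Monotonicity" thread: replacing p by the stronger (p and c) can only
# make p->q "more true" (anti-monotone antecedent); replacing q by the
# stronger (q and c) can only make it "less true" (monotone consequent).
for p, q, c in product([False, True], repeat=3):
    assert implies(p and c, q) >= implies(p, q)
    assert implies(p, q and c) <= implies(p, q)

print("all truth-table checks pass")
```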
2020-04-10T17:53:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.641261100769043, "perplexity": 1263.014118816124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370511408.40/warc/CC-MAIN-20200410173109-20200410203609-00451.warc.gz"}
https://tjyj.stats.gov.cn/CN/10.19343/j.cnki.11-1302/c.2019.04.011
### Sparse Principal Component Analysis with Fused Penalty

Zhang Bo & Liu Xiaoqian

Online: 2019-04-25; Published: 2019-04-22

Abstract: This paper studies sparse principal component analysis with a fused penalty, aimed at problems whose features are naturally ordered or whose variables are correlated with, or even equal to, their neighbors. First, we propose a simple approach to obtain sparse PCs from the perspective of regression. A new generalized sparse PCA model is introduced, namely generalized sparse PCA (GSPCA), and the corresponding algorithm is offered. We also prove that the solution of GSPCA is equivalent to that of SPC, an existing sparse PCA model, when the penalty is the 1-norm. Next, we propose combining the fused penalty with sparse PCA to obtain a fused sparse PCA method, and introduce the corresponding model in two forms, based on PMD and on regression. After theoretical derivation, we find that the solutions of the two forms are consistent, so we refer to both simply as FSPCA. Simulations show that FSPCA performs well on datasets where variables are correlated with, or even equal to, their neighbors. Finally, we apply FSPCA to handwritten digit recognition. Compared with SPC, FSPCA extracts PCs with better interpretability, which gives FSPCA higher practical value.
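Because the abstract describes FSPCA only at a high level, a small sketch may help locate where the fused penalty enters. The code below is an editorial illustration under stated assumptions (NumPy, a rank-one PMD-style alternating scheme, and a plain L1 soft-thresholding step); it is not the authors' algorithm. FSPCA, as described above, would replace the soft-thresholding step with the proximal operator of the fused penalty lam1*||v||_1 + lam2*sum_j |v_j - v_(j-1)|, which additionally shrinks differences between neighboring loadings.

```python
import numpy as np

def soft_threshold(z, lam):
    """Elementwise soft-thresholding: the proximal operator of lam*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def rank1_sparse_pca(X, lam=0.5, n_iter=200):
    """Rank-one sparse PCA in the PMD style: alternate a power-iteration step
    for the left factor u with a penalized update of the loading vector v.
    An FSPCA-style variant would swap soft_threshold for a fused-lasso
    (1-D total-variation) proximal step."""
    n, p = X.shape
    v = np.ones(p) / np.sqrt(p)            # initial loading vector
    for _ in range(n_iter):
        u = X @ v
        u /= np.linalg.norm(u) + 1e-12     # unit-norm left factor
        v = soft_threshold(X.T @ u, lam)   # sparse loading update
        nv = np.linalg.norm(v)
        if nv == 0:                        # penalty too strong: all loadings shrunk to 0
            break
        v /= nv
    return v

# Toy data whose informative variables are contiguous (naturally ordered
# features), the situation that motivates the fused penalty.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(10), np.ones(5), np.zeros(10)])
X = rng.normal(size=(100, 1)) @ signal[None, :] + 0.3 * rng.normal(size=(100, 25))
print(np.round(rank1_sparse_pca(X), 2))
```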
2023-01-30T14:50:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5055777430534363, "perplexity": 1253.6989074247142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00082.warc.gz"}
https://pdglive.lbl.gov/DataBlock.action?node=M062EM&home=MXXX035
#### ${\mathit m}_{{{\mathit D}^{*}{(2010)}^{+}}}–{\mathit m}_{{{\mathit D}^{*}{(2007)}^{0}}}$ VALUE (MeV) DOCUMENT ID TECN  COMMENT • • We do not use the following data for averages, fits, limits, etc. • • $2.6$ $\pm1.8$ 1 1977 LGW ${{\mathit e}^{+}}{{\mathit e}^{-}}$ 1 Not independent of FELDMAN 1977B mass difference above, PERUZZI 1977 ${{\mathit D}^{0}}$ mass, and GOLDHABER 1977 ${{\mathit D}^{*}{(2007)}^{0}}$ mass. References: PERUZZI 1977 PRL 39 1301 Study of ${{\mathit D}}$ Mesons Produced in the Decay of the ${{\mathit \psi}{(3772)}}$
2023-01-29T10:07:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529846668243408, "perplexity": 13231.006824287222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00595.warc.gz"}
https://math.libretexts.org/Bookshelves/Calculus/Map%3A_Calculus_-_Early_Transcendentals_(Stewart)/2%3A_Limits_and_Derivatives/2.6%3A_Limits_at_Infinity%3B_Horizontal_Asymptotes
# 2.6: Limits at Infinity; Horizontal Asymptotes

In Definition 1 we stated that in the equation $$\lim\limits_{x\to c}f(x) = L$$, both $$c$$ and $$L$$ were numbers. In this section we relax that definition a bit by considering situations when it makes sense to let $$c$$ and/or $$L$$ be "infinity."

As a motivating example, consider $$f(x) = 1/x^2$$, as shown in Figure 1.30. Note how, as $$x$$ approaches 0, $$f(x)$$ grows very, very large. It seems appropriate, and descriptive, to state that $\lim\limits_{x\rightarrow 0} \frac1{x^2}=\infty.$ Also note that as $$x$$ gets very large, $$f(x)$$ gets very, very small. We could represent this concept with notation such as $\lim\limits_{x\rightarrow \infty} \frac1{x^2}=0.$

$$\text{FIGURE 1.30}$$: Graphing $$f(x)=1/x^2$$ for values of $$x \text{ near }0$$.

We explore both types of use of $$\infty$$ in turn.

Definition 5: Limit of infinity

We say $$\lim\limits_{x\rightarrow c} f(x)=\infty$$ if for every $$M>0$$ there exists $$\delta>0$$ such that for all $$x\neq c$$, if $$|x-c|<\delta$$, then $$f(x)\geq M$$.

This is just like the $$\epsilon$$-$$\delta$$ definition from Section 1.2. In that definition, given any (small) value $$\epsilon$$, if we let $$x$$ get close enough to $$c$$ (within $$\delta$$ units of $$c$$) then $$f(x)$$ is guaranteed to be within $$\epsilon$$ of $$f(c)$$. Here, given any (large) value $$M$$, if we let $$x$$ get close enough to $$c$$ (within $$\delta$$ units of $$c$$), then $$f(x)$$ will be at least as large as $$M$$. In other words, if we get close enough to $$c$$, then we can make $$f(x)$$ as large as we want. We can define limits equal to $$-\infty$$ in a similar way.

It is important to note that by saying $$\lim\limits_{x\to c}f(x) = \infty$$ we are implicitly stating that *the* limit of $$f(x)$$, as $$x$$ approaches $$c$$, does not exist. A limit only exists when $$f(x)$$ approaches an actual numeric value. We use the concept of limits that approach infinity because it is helpful and descriptive.

Example 26: Evaluating limits involving infinity

Find $$\lim\limits_{x\rightarrow 1}\frac1{(x-1)^2}$$ as shown in Figure 1.31.

$$\text{FIGURE 1.31}$$: Observing infinite limit as $$x\to 1$$ in Example 26.

Solution: In Example 4 of Section 1.1, by inspecting values of $$x$$ close to 1 we concluded that this limit does not exist. That is, it cannot equal any real number. But the limit could be infinite. And in fact, we see that the function does appear to be growing larger and larger, as $$f(.99)=10^4$$, $$f(.999)=10^6$$, $$f(.9999)=10^8$$. A similar thing happens on the other side of 1. In general, let a "large" value $$M$$ be given. Let $$\delta=1/\sqrt{M}$$. If $$x$$ is within $$\delta$$ of 1, i.e., if $$|x-1|<1/\sqrt{M}$$, then: \begin{align*}|x-1| &< \frac{1}{\sqrt{M}} \\ (x-1)^2 &< \frac{1}{M}\\ \frac{1}{(x-1)^2} &> M,\end{align*} which is what we wanted to show. So we may say $$\lim\limits_{x\rightarrow 1}1/{(x-1)^2}=\infty$$.

Example 27: Evaluating limits involving infinity

Find $$\lim\limits_{x\rightarrow 0}\frac1x$$, as shown in Figure 1.32.

$$\text{FIGURE 1.32}$$: Evaluating $$\lim\limits_{x\to 0}\frac{1}{x}$$.

Solution: It is easy to see that the function grows without bound near 0, but it does so in different ways on different sides of 0. Since its behavior is not consistent, we cannot say that $$\lim\limits_{x\to 0}\frac{1}{x}=\infty$$.
However, we can make a statement about one--sided limits. We can state that $$\lim\limits_{x\rightarrow 0^+}\frac1x=\infty$$ and $$\lim\limits_{x\rightarrow 0^-}\frac1x=-\infty$$. ## Vertical Asymptotes If the limit of $$f(x)$$ as $$x$$ approaches $$c$$ from either the left or right (or both) is $$\infty$$ or $$-\infty$$, we say the function has a vertical asymptote at $$c$$. Example 28: Finding vertical asymptotes Find the vertical asymptotes of $$f(x)=\dfrac{3x}{x^2-4}$$. $$\text{FIGURE 1.33}$$: Graphing $$f(x) = \frac{3x}{x^2-4}$$. Solution: Vertical asymptotes occur where the function grows without bound; this can occur at values of $$c$$ where the denominator is 0. When $$x$$ is near $$c$$, the denominator is small, which in turn can make the function take on large values. In the case of the given function, the denominator is 0 at $$x=\pm 2$$. Substituting in values of $$x$$ close to $$2$$ and $$-2$$ seems to indicate that the function tends toward $$\infty$$ or $$-\infty$$ at those points. We can graphically confirm this by looking at Figure 1.33. Thus the vertical asymptotes are at $$x=\pm2$$. When a rational function has a vertical asymptote at $$x=c$$, we can conclude that the denominator is 0 at $$x=c$$. However, just because the denominator is 0 at a certain point does not mean there is a vertical asymptote there. For instance, $$f(x)=(x^2-1)/(x-1)$$ does not have a vertical asymptote at $$x=1$$, as shown in Figure 1.34. While the denominator does get small near $$x=1$$, the numerator gets small too, matching the denominator step for step. In fact, factoring the numerator, we get$f(x)=\frac{(x-1)(x+1)}{x-1}.$ Canceling the common term, we get that $$f(x)=x+1$$ for $$x\not=1$$. So there is clearly no asymptote, rather a hole exists in the graph at $$x=1$$. $$\text{FIGURE 1.34}$$: Graphically showing that $$f(x)=\frac{x^2-1}{x-1}$$ does not have an asymptote at $$x=1$$. The above example may seem a little contrived. Another example demonstrating this important concept is $$f(x)= (\sin x)/x$$. We have considered this function several times in the previous sections. We found that $$\lim\limits_{x\to0}\frac{\sin x}{x}=1$$; i.e., there is no vertical asymptote. No simple algebraic cancellation makes this fact obvious; we used the Squeeze Theorem in Section 1.3 to prove this. If the denominator is 0 at a certain point but the numerator is not, then there will usually be a vertical asymptote at that point. On the other hand, if the numerator and denominator are both zero at that point, then there may or may not be a vertical asymptote at that point. This case where the numerator and denominator are both zero returns us to an important topic. ## Indeterminate Forms We have seen how the limits $\lim\limits_{x\rightarrow 0}\frac{\sin x}{x}\quad \text{and}\quad \lim\limits_{x\to1}\frac{x^2-1}{x-1}$each return the indeterminate form "$$0/0$$'' when we blindly plug in $$x=0$$ and $$x=1$$, respectively. However, $$0/0$$ is not a valid arithmetical expression. It gives no indication that the respective limits are 1 and 2. With a little cleverness, one can come up $$0/0$$ expressions which have a limit of $$\infty$$, 0, or any other real number. That is why this expression is called indeterminate. A key concept to understand is that such limits do not really return $$0/0$$. Rather, keep in mind that we are taking limits. What is really happening is that the numerator is shrinking to 0 while the denominator is also shrinking to 0. 
The respective rates at which they do this are very important and determine the actual value of the limit. An indeterminate form indicates that one needs to do more work in order to compute the limit. That work may be algebraic (such as factoring and canceling) or it may require a tool such as the Squeeze Theorem. In a later section we will learn a technique called l'Hospital's Rule that provides another way to handle indeterminate forms. Some other common indeterminate forms are $$\infty-\infty$$, $$\infty\cdot 0$$, $$\infty/\infty$$, $$0^0$$, $$\infty^0$$ and $$1^{\infty}$$. Again, keep in mind that these are the "blind'' results of evaluating a limit, and each, in and of itself, has no meaning. The expression $$\infty-\infty$$ does not really mean "subtract infinity from infinity.'' Rather, it means "One quantity is subtracted from the other, but both are growing without bound.'' What is the result? It is possible to get every value between $$-\infty$$ and $$\infty$$ Note that $$1/0$$ and $$\infty/0$$ are not indeterminate forms, though they are not exactly valid mathematical expressions, either. In each, the function is growing without bound, indicating that the limit will be $$\infty$$, $$-\infty$$, or simply not exist if the left- and right-hand limits do not match. ## Limits at Infinity and Horizontal Asymptotes At the beginning of this section we briefly considered what happens to $$f(x) = 1/x^2$$ as $$x$$ grew very large. Graphically, it concerns the behavior of the function to the "far right'' of the graph. We make this notion more explicit in the following definition. Definition 6: Limits at Infinity and Horizontal Asymptote 1. We say $$\lim\limits_{x\rightarrow\infty} f(x)=L$$ if for every $$\epsilon>0$$ there exists $$M>0$$ such that if $$x\geq M$$, then $$|f(x)-L|<\epsilon$$. 2. We say $$\lim\limits_{x\rightarrow-\infty} f(x)=L$$ if for every $$\epsilon>0$$ there exists $$M<0$$ such that if $$x\leq M$$, then $$|f(x)-L|<\epsilon$$. 3. If $$\lim\limits_{x\rightarrow\infty} f(x)=L$$ or $$\lim\limits_{x\rightarrow-\infty} f(x)=L$$, we say that $$y=L$$ is a horizontal asymptote of $$f$$. We can also define limits such as $$\lim\limits_{x\rightarrow\infty}f(x)=\infty$$ by combining this definition with Definition 5. Example 29: Approximating horizontal asymptotes Approximate the horizontal asymptote(s) of $$f(x)=\frac{x^2}{x^2+4}$$. Solution: We will approximate the horizontal asymptotes by approximating the limits $\lim\limits_{x\to-\infty} \frac{x^2}{x^2+4}\quad \text{and}\quad \lim\limits_{x\to\infty} \frac{x^2}{x^2+4}.$Figure 1.35(a) shows a sketch of $$f$$, and part (b) gives values of $$f(x)$$ for large magnitude values of $$x$$. It seems reasonable to conclude from both of these sources that $$f$$ has a horizontal asymptote at $$y=1$$. $$\text{FIGURE 1.35}$$: Using a graph and a table to approximate a horizontal asymptote in Example 29. Later, we will show how to determine this analytically. Horizontal asymptotes can take on a variety of forms. Figure 1.36(a) shows that $$f(x) = x/(x^2+1)$$ has a horizontal asymptote of $$y=0$$, where 0 is approached from both above and below. Figure 1.36(b) shows that $$f(x) =x/\sqrt{x^2+1}$$ has two horizontal asymptotes; one at $$y=1$$ and the other at $$y=-1$$. Figure 1.36(c) shows that $$f(x) = (\sin x)/x$$ has even more interesting behavior than at just $$x=0$$; as $$x$$ approaches $$\pm\infty$$, $$f(x)$$ approaches 0, but oscillates as it does this. $$\text{FIGURE 1.36}$$: Considering different types of horizontal asymptotes. 
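The numerical approach of Example 29 and the three behaviors illustrated in Figure 1.36 can also be checked with a computer algebra system. The short script below uses SymPy; that choice is an assumption of this illustration, not something the text itself relies on.

```python
import sympy as sp

x = sp.symbols('x')
examples = {
    "x**2/(x**2 + 4)":   x**2 / (x**2 + 4),      # Example 29: asymptote y = 1
    "x/(x**2 + 1)":      x / (x**2 + 1),         # Figure 1.36(a): asymptote y = 0
    "x/sqrt(x**2 + 1)":  x / sp.sqrt(x**2 + 1),  # Figure 1.36(b): asymptotes y = 1 and y = -1
    "sin(x)/x":          sp.sin(x) / x,          # Figure 1.36(c): asymptote y = 0
}
for name, f in examples.items():
    print(name,
          "limit at +oo:", sp.limit(f, x, sp.oo),
          "limit at -oo:", sp.limit(f, x, -sp.oo))
```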
We can analytically evaluate limits at infinity for rational functions once we understand $$\lim\limits_{x\rightarrow\infty} 1/x$$. As $$x$$ gets larger and larger, the $$1/x$$ gets smaller and smaller, approaching 0. We can, in fact, make $$1/x$$ as small as we want by choosing a large enough value of $$x$$. Given $$\epsilon$$, we can make $$1/x<\epsilon$$ by choosing $$x>1/\epsilon$$. Thus we have $$\lim\limits_{x\rightarrow\infty} 1/x=0$$. It is now not much of a jump to conclude the following: $\lim\limits_{x\rightarrow\infty}\frac1{x^n}=0\quad \text{and}\quad \lim\limits_{x\rightarrow-\infty}\frac1{x^n}=0.$

Now suppose we need to compute the following limit: $\lim\limits_{x\rightarrow\infty}\frac{x^3+2x+1}{4x^3-2x^2+9}.$ A good way of approaching this is to divide through the numerator and denominator by $$x^3$$ (hence dividing by 1), which is the largest power of $$x$$ to appear in the function. Doing this, we get \begin{align*}\lim\limits_{x\rightarrow\infty}\frac{x^3+2x+1}{4x^3-2x^2+9} &=\lim\limits_{x\rightarrow\infty}\frac{1/x^3}{1/x^3}\cdot\frac{x^3+2x+1}{4x^3-2x^2+9}\\ &=\lim\limits_{x\rightarrow\infty}\frac{x^3/x^3+2x/x^3+1/x^3}{4x^3/x^3-2x^2/x^3+9/x^3}\\ &= \lim\limits_{x\rightarrow\infty}\frac{1+2/x^2+1/x^3}{4-2/x+9/x^3}.\end{align*} Then using the rules for limits (which also hold for limits at infinity), as well as the fact about limits of $$1/x^n$$, we see that the limit becomes $\frac{1+0+0}{4-0+0}=\frac14.$

This procedure works for any rational function. In fact, it gives us the following theorem.

Theorem 11: Limits of Rational Functions at Infinity

Let $$f(x)$$ be a rational function of the following form: $f(x)=\frac{a_nx^n + a_{n-1}x^{n-1}+\dots + a_1x + a_0}{b_mx^m + b_{m-1}x^{m-1} + \dots + b_1x + b_0},$ where any of the coefficients may be 0 except for $$a_n$$ and $$b_m$$.

1. If $$n=m$$, then $$\lim\limits_{x\rightarrow\infty} f(x) = \lim\limits_{x\rightarrow-\infty} f(x) = \frac{a_n}{b_m}$$.
2. If $$n<m$$, then $$\lim\limits_{x\rightarrow\infty} f(x) = \lim\limits_{x\rightarrow-\infty} f(x) = 0$$.
3. If $$n>m$$, then $$\lim\limits_{x\rightarrow\infty} f(x)$$ and $$\lim\limits_{x\rightarrow-\infty} f(x)$$ are both infinite.

We can see why this is true. If the highest power of $$x$$ is the same in both the numerator and denominator (i.e., $$n=m$$), we will be in a situation like the example above, where we will divide by $$x^n$$ and in the limit all the terms will approach 0 except for $$a_nx^n/x^n$$ and $$b_mx^m/x^n$$. Since $$n=m$$, this will leave us with the limit $$a_n/b_m$$. If $$n<m$$, then after dividing through by $$x^m$$, all the terms in the numerator will approach 0 in the limit, leaving us with $$0/b_m$$ or 0. If $$n>m$$, and we try dividing through by $$x^n$$, we end up with all the terms in the denominator tending toward 0, while the $$x^n$$ term in the numerator does not approach 0. This is indicative of some sort of infinite limit.

Intuitively, as $$x$$ gets very large, all the terms in the numerator are small in comparison to $$a_nx^n$$, and likewise all the terms in the denominator are small compared to $$b_mx^m$$. If $$n=m$$, looking only at these two important terms, we have $$(a_nx^n)/(b_mx^m)$$. This reduces to $$a_n/b_m$$. If $$n<m$$, the function behaves like $$a_n/(b_mx^{m-n})$$, which tends toward 0.
If $$n>m$$, the function behaves like $$a_nx^{n-m}/b_m$$, which will tend to either $$\infty$$ or $$-\infty$$ depending on the values of $$n$$, $$m$$, $$a_n$$, $$b_m$$ and whether you are looking for $$\lim\limits_{x\rightarrow\infty} f(x)$$ or $$\lim\limits_{x\rightarrow-\infty} f(x)$$.

With care, we can quickly evaluate limits at infinity for a large number of functions by considering the largest powers of $$x$$. For instance, consider again $$\lim\limits_{x\to\pm\infty}\frac{x}{\sqrt{x^2+1}},$$ graphed in Figure 1.36(b). When $$x$$ is very large, $$x^2+1 \approx x^2$$. Thus $\sqrt{x^2+1}\approx \sqrt{x^2} = |x|,\quad \text{and}\quad \frac{x}{\sqrt{x^2+1}} \approx \frac{x}{|x|}.$ This expression is 1 when $$x$$ is positive and $$-1$$ when $$x$$ is negative. Hence we get asymptotes of $$y=1$$ and $$y=-1$$, respectively.

Example 30: Finding a limit of a rational function

Confirm analytically that $$y=1$$ is the horizontal asymptote of $$f(x) = \frac{x^2}{x^2+4}$$, as approximated in Example 29.

Solution: Before using Theorem 11, let's use the technique of evaluating limits at infinity of rational functions that led to that theorem. The largest power of $$x$$ in $$f$$ is 2, so divide the numerator and denominator of $$f$$ by $$x^2$$, then take limits. \begin{align*}\lim\limits_{x\to\infty}\frac{x^2}{x^2+4} &= \lim\limits_{x\to\infty}\frac{x^2/x^2}{x^2/x^2+4/x^2}\\ &=\lim\limits_{x\to\infty}\frac{1}{1+4/x^2}\\ &=\frac{1}{1+0}\\ &= 1. \end{align*} We can also use Theorem 11 directly; in this case $$n=m$$ so the limit is the ratio of the leading coefficients of the numerator and denominator, i.e., 1/1 = 1.

Example 31: Finding limits of rational functions

Use Theorem 11 to evaluate each of the following limits. \begin{align}&1.\,\,\lim\limits_{x\rightarrow-\infty}\frac{x^2+2x-1}{x^3+1} \qquad\qquad &&3.\,\,\lim\limits_{x\rightarrow\infty}\frac{x^2-1}{3-x} \\ &2.\,\,\lim\limits_{x\rightarrow\infty}\frac{x^2+2x-1}{1-x-3x^2} && \\ \end{align}

$$\text{FIGURE 1.37}$$: Visualizing the functions in Example 31.

Solution:

1. The highest power of $$x$$ is in the denominator. Therefore, the limit is 0; see Figure 1.37(a).
2. The highest power of $$x$$ is $$x^2$$, which occurs in both the numerator and denominator. The limit is therefore the ratio of the coefficients of $$x^2$$, which is $$-1/3$$. See Figure 1.37(b).
3. The highest power of $$x$$ is in the numerator so the limit will be $$\infty$$ or $$-\infty$$. To see which, consider only the dominant terms from the numerator and denominator, which are $$x^2$$ and $$-x$$. The expression in the limit will behave like $$x^2/(-x) = -x$$ for large values of $$x$$. Therefore, the limit is $$-\infty$$. See Figure 1.37(c).

## Chapter Summary

In this chapter we:

• defined the limit,
• found accessible ways to approximate their values numerically and graphically,
• developed a not-so-easy method of proving the value of a limit ($$\epsilon-\delta$$ proofs),
• explored when limits do not exist,
• defined continuity and explored properties of continuous functions, and
• considered limits that involved infinity.

Why? Mathematics is famous for building on itself and calculus proves to be no exception. In the next chapter we will be interested in "dividing by 0." That is, we will want to divide a quantity by a smaller and smaller number and see what value the quotient approaches. In other words, we will want to find a limit.
These limits will enable us to, among other things, determine exactly how fast something is moving when we are only given position information. Later, we will want to add up an infinite list of numbers. We will do so by first adding up a finite list of numbers, then take a limit as the number of things we are adding approaches infinity. Surprisingly, this sum often is finite; that is, we can add up an infinite list of numbers and get, for instance, 42. These are just two quick examples of why we are interested in limits. Many students dislike this topic when they are first introduced to it, but over time an appreciation is often formed based on the scope of its applicability. ### Contributors • Gregory Hartman (Virginia Military Institute). Contributions were made by Troy Siemers and Dimplekumar Chalishajar of VMI and Brian Heinold of Mount Saint Mary's University. This content is copyrighted by a Creative Commons Attribution - Noncommercial (BY-NC) License. http://www.apexcalculus.com/
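As a closing cross-check of Theorem 11 and Example 31, the three limits can be confirmed symbolically. SymPy is assumed here purely for illustration; it is not part of the original text.

```python
import sympy as sp

x = sp.symbols('x')
# The three limits of Example 31, one for each case of Theorem 11.
print(sp.limit((x**2 + 2*x - 1) / (x**3 + 1), x, -sp.oo))       # n < m  ->  0
print(sp.limit((x**2 + 2*x - 1) / (1 - x - 3*x**2), x, sp.oo))  # n = m  ->  -1/3
print(sp.limit((x**2 - 1) / (3 - x), x, sp.oo))                 # n > m  ->  -oo
```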
2019-11-18T07:19:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824876189231873, "perplexity": 162.2310375752633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00505.warc.gz"}