url
stringlengths 14
1.76k
| text
stringlengths 100
1.02M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://arxiv-export-lb.library.cornell.edu/abs/2203.11580 | math.AG
(what is this?)
# Title: The second cohomology of regular semisimple Hessenberg varieties from GKM theory
Abstract: We describe the second cohomology of a regular semisimple Hessenberg variety by generators and relations explicitly in terms of GKM theory. The cohomology of a regular semisimple Hessenberg variety becomes a module of a symmetric group $\mathfrak{S}_n$ by the dot action introduced by Tymoczko. As an application of our explicit description, we give a formula describing the isomorphism class of the second cohomology as an $\mathfrak{S}_n$-module. Our formula is not exactly the same as the known formula by Chow or Cho-Hong-Lee but they are equivalent. We also discuss its higher degree generalization.
Comments: 23 pages, 3 figures Subjects: Algebraic Geometry (math.AG); Algebraic Topology (math.AT); Representation Theory (math.RT) MSC classes: 57S12 (Primary), 14M15 (Secondary) Cite as: arXiv:2203.11580 [math.AG] (or arXiv:2203.11580v1 [math.AG] for this version)
## Submission history
From: Takashi Sato [view email]
[v1] Tue, 22 Mar 2022 10:03:56 GMT (21kb)
Link back to: arXiv, form interface, contact. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44325342774391174, "perplexity": 2082.6744948234614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00516.warc.gz"} |
https://www.beatthegmat.com/jackson-invested-300-000-dividing-it-all-unequally-between-account-p-and-account-q-at-the-end-of-the-year-it-turned-t328234.html?&view=next&sid=35b81814e3512a0d67f6b9b5af0fbbdd | ## The rear wheels of a car corssed a certain line 0.5 second after the front wheels crossed the same line. If the centers
##### This topic has expert replies
Moderator
Posts: 2042
Joined: 15 Oct 2017
Followed by:6 members
### The rear wheels of a car corssed a certain line 0.5 second after the front wheels crossed the same line. If the centers
by BTGmoderatorLU » Tue Nov 23, 2021 12:58 pm
00:00
A
B
C
D
E
## Global Stats
Source: GMAT Prep
The rear wheels of a car crossed a certain line 0.5 second after the front wheels crossed the same line. If the centers of the front and rear wheels are 20 feet apart and the car traveled in a straight line at a constant speed, which of the following gives the speed of the car in miles per hour? (5280 feet = 1 mile)
A. $$\left(\dfrac{20}{5280}\right)\left(\dfrac{60^2}{0.5}\right)$$
B. $$\left(\dfrac{20}{5280}\right)\left(\dfrac{60}{0.5}\right)$$
C. $$\left(\dfrac{20}{5280}\right)\left(\dfrac{0.5}{60^2}\right)$$
D. $$\dfrac{(20)(5280)}{(60^2)(0.5)}$$
E. $$\dfrac{(20)(5280)}{(60)(0.5)}$$
The OA is A
• Page 1 of 1 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17248642444610596, "perplexity": 982.7975211105783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662675072.99/warc/CC-MAIN-20220527174336-20220527204336-00159.warc.gz"} |
http://soho.nascom.nasa.gov/soc/JOPs/jop171/ | SOHO/CDS,EIT,MDI + (TRACE)
JOP PROPOSAL
SOLAR NETWORK: VARIABILITY AND DYNAMICS OF THE OUTER SOLAR ATMOSPHERE
P. Gömöry, J. Rybák, N. Brynildsen, A. Kucera
Astronomical Institute, Slovak Academy of Sciences
SK-05960 Tatranská Lomnica, Slovakia
Institute of Theoretical Astrophysics, University of Oslo
P. O. Box 1029 Blindern, 0315 Oslo, Norway
SCIENTIFIC OBJECTIVE:
Study of variability and dynamics of the outer solar atmospheric layers in solar network using line intensities and Doppler shifts of the chromospheric, transition region and coronal emission lines is proposed. Network located in the quiet atmosphere, in a coronal hole as well as network connected with loops outside active regions will be investigated searching for correlations between the line intensities and the Doppler shifts of different spectral lines. In particular effects of network transient events on mutual relations of the atmospheric layers in/above network will be determined. Limitations on the possible physical mechanisms which control energy transfer and mass motion in the network of the solar outer atmosphere will be derived. Relation of the emergence and motions of the photospheric magnetic flux to these mechanisms will be analyzed.
SCIENTIFIC JUSTIFICATION:
Previous observations performed with different instruments (HRTS, SOHO/CDS, SOHO/SUMER) have revealed that the upper chromosphere, transition region and corona are not static, but display significant changes and variability even in the case of the quiet solar atmosphere. Several apparently different transient events take place in the outer layers of the solar atmosphere (e.g., explosive events, blinkers, spicules, bright points). Overall mutual relations of plasma in ranges of chromospheric, transition region and coronal temperatures should depend on presence and properties of these transient events. Especially solar network is of the particular interest as energy could be channeled through the outer atmospheric layers in this target from the photosphere to corona. Moreover different global magnetic configuration (quiet sun, coronal hole) can influence energy transport and dynamics of plasma in/above the solar network as well. Because of this we propose the simultaneous spectral line measurements with the SOHO/CDS spectrometer in all layers of the outer solar atmosphere. Our interest is focused on long time series of observations in the solar network. Results of our previous measurements (JOP 078, Gömöry et al., 2003, Hvar Observatory Bulletin 27, 67-741) have shown that effects of the transient events can be traced but longer series of data than that one taken during the JOP 078 are needed. The main parameters used in this study (tested on the previous CDS data) will be the intensities and the Doppler shifts of the selected spectral lines. Autocorrelations and cross-correlations of these quantities allow us to derive some information about energy transfer and mass motion in solar outer atmosphere above/in the solar network (Gömöry et al., 2003, contribution to the SOHO13 workshop2).
PARTICIPATING INSTRUMENTS:
The core of the program - the spectral measurements performed by the SOHO/CDS spectrometer - have already been tested on July 23, 2003 as the VARDYN1-4 programs. The data have been inspected and it was clarified that all 4 programs for the CDS spectrometers are prepared well providing the data intended to be acquired. Extension of these tested programs to the proposed operation of CDS is described below. Additional observational data are proposed to be acquired simultaneously with the proposed CDS spectral measurements, namely: EIT patrol observations in all channels and measurements in 195 Åspectral channel during the whole CDS run, MDI measurements of B, and finally TRACE filtergrams in the selected bandpasses. The TRACE observations are optional.
OPERATIONAL CONSIDERATIONS:
It is very important for our aim to run the main part of our CDS measurements (VARDYN4) within the interval created by two successive EIT patrol observations (FULL SUN 171/284/195/304), e.g., between 01:25-07.00 or within other such intervals. The total duration of the proposed observational run for CDS is 8.3hours. The run should be repeated in several days at different positions near the disk center and in coronal hole. Direct selection of the target can be performed several hours or one day before the start of the particular run. No near real time commanding of SOHO instruments and TRACE is required. The program is intended to be performed within the MEDOC12 campaign in the period November, 17-30, 2003.
DETAILED OBSERVING SEQUENCES - CDS:
All data are proposed to be obtained with the Normal Incidence Spectrometer (NIS). Various modes of CDS observations (1D position - full detector, 2D raster - selected lines, 1D sequences - selected lines) are proposed in a specific procedure. All modes (observing sequences) of CDS have already been tested (July, 23, 2003). The brightest CDS spectral lines were selected for our aim with an attempt to cover wide range of temperatures (10K-10K). No rotation compensation is proposed for CDS so the slit should cover several different structures during each run. Alternation of the 4'' and 2'' slits are planned for the CDS sequence part 4.
CDS sequence part 1: full detector (VARDYN1)
Slit: 2''240''
Number of exposures: 2
Exposure time: 10s (+ app. 890s overhead) 900s
Duration: 2900s=1800s = 30min
Telemetry/Compression: truncate to 12bits
CDS sequence part 2: full detector (VARDYN2)
Slit: 2''240''
Number of exposures: 2
Exposure time: 100s (+ app. 800s overhead) 900s
Duration: 2900s=1800s = 30min
Telemetry/Compression: truncate to 12bits
CDS sequence part 3: 2D sequential raster near the disk center centred at the position of the slit where the next CDS sequence part will be taken (VARDYN3)
Lines: HeI 584.33Å(210K), OIII 599.59Å(810K), OV 629.74Å
(2.510K), NeVI 562.80Å(410K), MgIX 386.04Å(110K),
SiXII 520.67Å(210K)
Slit: 4''240''
Detector area (V): 70 pixels=142'', (H): 21 pixels
Steps (x,y): 2'', 0''
Number of positions per raster: 20
Number of rasters: 4
Number of exposures: 20 (per raster)4 (rasters)=80
Exposure time: 10s (+ app. 6s overhead) 16s
Duration: 20416s=1280s=22 min
Telemetry/Compression: truncate to 12bits
CDS sequence part 4: 1D position in center of previous 2D raster (VARDYN4)
Rotation compensation: OFF
Lines: HeI 584.33Å(210K), OIII 599.59Å(810K), OV 629.74Å
(2.510K), NeVI 562.80Å(?10K), MgIX 386.04Å(110K),
SiXII 520.67Å(210K)
Slit (x,y): 4''240''
Detector area (V): 70 pixels=142''
(H): 21 pixels
Steps: 0'', 0''
Number of exposures: 145
Exposure time: 10s (+ app. 5.3s overhead) 15.3s
Duration: 14515.3s2220s =37min
Telemetry/Compression: truncate to 12bits
Repetition: 9 times
Total duration: 92220s=19980s=333min=5h 33min
CDS sequence part 5: 2D sequential raster near the disk center centred at the position of the slit in the previous sequence part VARDYN4 (VARDYN3)
CDS sequence part 6: full detector (VARDYN2)
CDS sequence part 7: full detector (VARDYN1)
DETAILED OBSERVING SEQUENCES - EIT:
It is necessary for our aim to obtain the patrol measurements of EIT (FULL SUN 171/284/195/304) within the time interval in which the main part (CDS sequence parts 3-5) of our CDS measurements will take place. The usual patrol measurements taken at 01:00 UT, 07:00 UT, 13:00 UT and 19:00 UT are sufficient. We need EIT observations in 195 Åspectral channel (usual CME watch) during the whole CDS run.
DETAILED OBSERVING SEQUENCES - MDI:
Campaign type: 3 frames (the Doppler velocities, continuum intensities, the longitudinal magnetic field) taken and transfered in each minute, operation mode: high resolution (if possible) or low resolution-full disk
DETAILED OBSERVING SEQUENCES - TRACE:
The TRACE part is scheduled as a repetition of image acquisition in 5 wavelengths in a cycle for a long time interval covering the SOHO/CDS observing run. Pointing of TRACE should follow the CDS pointing. Fine co-alignment of TRACE images with other (CDS, EIT) data will be possible as the TRACE images should be taken in the same time as the EIT patrol measurements and the CDS rasters.
Channels: Ly, UV continuum, CIV 1500Å, white light, FeIX171Å
In case of some constraints of instrument the full set of images in all channels can be taken only during the EIT patrol measurements and the CDS rasters and a limited set of TRACE channels (Ly, UV continuum, FeIX171Å) or just one channel (FeIX171Å) can be defined for the main part of the CDS run (VARDYN4).
The TRACE observations with the partial requirements are not strictly necessary for our aim, but they could help us significantly to fulfil our scientific plans. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8188611268997192, "perplexity": 5399.433376670279}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163994768/warc/CC-MAIN-20131204133314-00099-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://gitlab.eclipse.org/eclipse/ice/ice/-/commit/be4fd3fbbd6687287cd22229d9ac80363187bdf4 | Commit be4fd3fb by Daniel Bluhm
### Fix references to org.eclipse.ice.tests.util.data
```
And minor formatting fixes
Signed-off-by: Daniel Bluhm <[email protected]>```
parent 16056bdc
... ... @@ -16,6 +16,7 @@ ../org.eclipse.ice.tests.util ../org.eclipse.ice.data ../org.eclipse.ice.dev ../org.eclipse.ice.archetypes ... ...
... ... @@ -21,7 +21,7 @@ \$ mvn clean verify In both cases one can skip the tests by including `-DskipTests` in your build. ### Dependencies All dependencies are noted in the `pom` file, and all but one are within maven central. The only non-centralized dependency is the ICE package `org.eclipse.ice.tests.data`. To install it, perform the following commands (after cloning the ICE repositiory) so that the Commands package can build successfully: All dependencies are noted in the `pom` file, and all but one are within maven central. The only non-centralized dependency is the ICE package `org.eclipse.ice.tests.util.data`. To install it, perform the following commands (after cloning the ICE repositiory) so that the Commands package can build successfully: ```shell \$ cd org.eclipse.ice.data ... ...
... ... @@ -22,14 +22,14 @@ org.eclipse.ice org.eclipse.ice.tests.data org.eclipse.ice.tests.util.data 3.0.0-SNAPSHOT log4j log4j 1.2.17 org.apache.sshd sshd-core ... ... @@ -51,4 +51,4 @@ 4.5.10 \ No newline at end of file | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995512962341309, "perplexity": 18874.649963760505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00536.warc.gz"} |
http://www.physicsforums.com/showthread.php?t=541952 | Caculating heat loss from cooling fins?
by rsalmon
Tags: heat loss, thermodynamics, transformer
P: 12 Hello, I am currently studying transformer design, in particular cooling methods. I am tyring to calculate the cooling effect (increased heat transfer) of adding cooling fins to a transformer. Does anyone know of some general equations/methods for calculating the increased heat radiation achieved by adding cooling fins. Thanks, Rob
P: 688 Basically, fins increase the heat transfer rate by adding surface area. But there is an efficiency associated with the fins. The heat transfer rate is: q_Total = Nfin * eff * hfin * Af * (Tsurface - Tsurr) + hbase * Ab * (Tsurface - Tsurr) q_Total = heat transfer rate Nfin = number of fins eff = fin efficiency hfin = convection heat transfer coefficient of fin Af = surface area of one fin Tsurface = surface temperature Tsurr = surroundings temperature hbase = convection heat transfer coefficient of base Af = surface area of base Notice the first term in the above equation is the heat transfer from the fin and the second term is from the base. These calculations are a bit of an art. The trick is to make simplyfying assumptions that do not introduce too much error. Let me know if you need help getting started.
PF Gold P: 244 Just have in mind that adding metals close to a transformer will generally reduce the transformers efficiency. When a heat sink is placed upon a toroide transformer for example, the transformer is "seeing" a short circuit secondary winding because the magnetic field wants to induce electric current through the heat sink. As the heat sink (usually made by copper or aluminum) is a good conductor, it will for sure reduce the efficiency of the transformer - which in turn makes it even hotter than without heat sink at all. Vidar
Related Discussions Mechanical Engineering 1 Introductory Physics Homework 8 Calculus & Beyond Homework 1 Engineering, Comp Sci, & Technology Homework 0 Classical Physics 28 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859066903591156, "perplexity": 1494.110634089055}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://www.realmagick.com/digamma-function/ | realmagick.com The shrine of knowledge.
# Digamma Function
A selection of articles related to digamma function.
Bringing it Down to Earth: A Fractal Approach
'Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.' B. Mandelbrot W e want to think about the future - it's our nature. Unlike other creatures, humans possess an...
Mystic Sciences >> Astrology
>> Modern Science
Select Cross-Cultural and Historical Personifications of Death
This extensive introduction includes some of the more well known, along with some lesser known Death "incarnations", and I use that term loosely, as in many cultures, the Angel of Death can be quite an adept shapeshifter. We have tried to cull...
Mystic Sciences >> Necromantic Studies
Hod & Netzach
Just as the previous pairs of Sephiroth, (Binah - Chokmah, Chessed - Gedulah), needed to be explained together to be fully understood, (Binah depends on that of Chokmah, and Chessed relies on Geburah, visa versa), so it is with Hod and Netzach. Hod is form...
Magick >> Qabalah
Survivalists' Guide for the New Millennium: Chapter 4
THE TRANSFORMATION OF CATERPILLAR TO BUTTERFLY It is not difficult to see that the whole Agenda behind owning a bigger home, a newer car and more land is one of ego. This is an orientation toward the building of an empire of material wealth, in a vain...
Philosophy >> Survivalists Guide for the New Millennium
What is hypnotic trance? Does it provide unusual physical or mental capacities?
2.1 'Trance;' descriptive or misleading? Most of the classical notions of hypnosis have long held that hypnosis was special in some way from other types of interpersonal communication and that an induction (preparatory process considered by some to be...
Parapsychology >> Hypnosis
Survivalists' Guide for the New Millennium: Chapter 6
AS THE WORM TURNS Health and well being are part of the natural birthright of the human being. With all of its organs intact, the right diet, exercise and mental focus, a human body can overcome any disease. Even so, the effects of living in this...
Philosophy >> Survivalists Guide for the New Millennium
Digamma Function is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Digamma Function books and related discussion.
## Suggested Pdf Resources
Introduction to the Gamma Function
INFINITE FAMILY OF APPROXIMATIONS OF THE DIGAMMA
tions to the Digamma function Ψ.
The integrals in Gradshteyn and Ryzhik. Part 10: The digamma
be expressed in terms of the digamma function ψ(x) = d dx log Γ(x). In this note Here Γ(x) is the gamma function defined by. (1.
ON THE GAMMA FUNCTION AND ITS APPLICATIONS 1
Factorial, Gamma and Beta Functions Outline
integer values, the Gamma function was later presented in its traditional The first reported use of the gamma symbol for this function was by Legendre in 1839.
## Suggested Web Resources
Digamma Function -- from Wolfram MathWorld
is therefore frequently used for the digamma function itself, and Erdélyi et al. ( 1981) use the notation psi(z) for Psi(z) .
Digamma function
Digamma function - Wikipedia, the free encyclopedia
In mathematics, the digamma function is defined as the logarithmic derivative of the gamma function: \psi(x) =\frac{d}{dx} \ln.
PlanetMath: digamma and polygamma function
The digamma function is defined as the logarithmic derivative of the gamma function: \psi (z) = {d \over dz} \log \Gamma (z) = {\Gamma' (z) \over \Gamma (z )}.
Digamma Function - GNU Scientific Library -- Reference Manual
These routines compute the digamma function \psi(n) for positive integer n . The digamma function is also called the Psi function.
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We appreciate your suggestions and comments on further improvements of the site. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7824010252952576, "perplexity": 3624.889296532742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646180.24/warc/CC-MAIN-20141024030046-00132-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://gmatclub.com/forum/gmat-diagnostic-test-question-79304-20.html | It is currently 19 Nov 2017, 00:17
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# GMAT Diagnostic Test Question 1
Author Message
Intern
Joined: 12 Jan 2011
Posts: 3
Kudos [?]: [0], given: 2
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
26 Aug 2011, 09:10
this ques is very eazy but difficult if don't know you perfect squares.
I was able to get the first part very quick but I used calculator for the second part
which is not allowed in the GMAT Exams
Kudos [?]: [0], given: 2
Intern
Joined: 13 Feb 2012
Posts: 20
Kudos [?]: 58 [0], given: 14
WE: Other (Transportation)
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
13 Feb 2012, 16:18
LU wrote:
Based on Math formula its 324 - 289 = 35
And it works for any number regardless if it is perfect square or not.
Thanks
LU
Sorry, but 36^(1/2) - 25^(1/2)= 6 - 5 = 1
but 36 - 25 = 11
Do I miss anything about the formula??
Kudos [?]: 58 [0], given: 14
Intern
Joined: 13 Feb 2012
Posts: 20
Kudos [?]: 58 [0], given: 14
WE: Other (Transportation)
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
13 Feb 2012, 16:26
If you cannot remember the squares of the first 20 numbers I think the best approach is to play with the units digits only. The root of 324 should end to 2 or 8 and the root of 289 should end to 3 or 7.
The next step is to take those figures in pairs and check the digit of their sum.
So (2,3) --> units digit 5 (Answer D)
(2,7) --> units digit 9 (No Answer)
(8,3) --> units digit 1 (No Answer)
(8,7) --> units digit 5 (Answer D)
I don't even care about which numbers they actually are!
Kudos [?]: 58 [0], given: 14
Intern
Joined: 30 May 2011
Posts: 18
Kudos [?]: 7 [0], given: 4
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
04 Sep 2012, 10:43
bb wrote:
GMAT Diagnostic Test Question 1
Diffculty: 650
Field: Arithmetic, Roots
Rating:
$$\sqrt{324} + \sqrt{289} = ?$$
(A). 32
(B). 33
(C). 34
(D). 35
(E). 36
The Unit digit of square root of 324 is 2, the unit digit for square root of 289 is 3, so the unit digit for the two must be 5, only D fits.
Kudos [?]: 7 [0], given: 4
Intern
Joined: 28 Aug 2012
Posts: 45
Kudos [?]: 48 [0], given: 3
Location: Austria
GMAT 1: 770 Q51 V42
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
04 Sep 2012, 11:56
linglinrtw wrote:
The Unit digit of square root of 324 is 2, the unit digit for square root of 289 is 3, so the unit digit for the two must be 5, only D fits.
Correct answer, wrong solution. The unit's digit of square root of 324 can be 8, too. And the unit's digit of square root of 289 can be 7, too.
Kudos [?]: 48 [0], given: 3
Intern
Status: Preparing for GMAT
Joined: 19 Sep 2012
Posts: 19
Kudos [?]: 10 [0], given: 8
Location: India
GMAT Date: 01-31-2013
WE: Information Technology (Computer Software)
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
19 Sep 2012, 21:54
Memorise the squares and cubes upto 20...it will help in faster calculation...
else seeing the last digit.. 9 so square root of 7 is 49 so try 17
and same way 4 so 8 * 8 64 ends with 4 also next to 17..
so 17 + 18 = 35.
D
_________________
Rajeev Nambyar
Chennai, India.
Kudos [?]: 10 [0], given: 8
Intern
Joined: 26 Apr 2013
Posts: 1
Kudos [?]: [0], given: 0
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
26 Apr 2013, 12:46
tom09b wrote:
LU wrote:
Based on Math formula its 324 - 289 = 35
And it works for any number regardless if it is perfect square or not.
Thanks
LU
Sorry, but 36^(1/2) - 25^(1/2)= 6 - 5 = 1
but 36 - 25 = 11
Do I miss anything about the formula??
=> 324^(1/2) + 289^(1/2) = 18+17 = 35
-alsukran
Kudos [?]: [0], given: 0
Intern
Joined: 18 Nov 2011
Posts: 36
Kudos [?]: 15 [0], given: 0
Concentration: Strategy, Marketing
GMAT Date: 06-18-2013
GPA: 3.98
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
26 Apr 2013, 16:43
LU wrote:
Based on Math formula its 324 - 289 = 35
And it works for any number regardless if it is perfect square or not.
Thanks
LU
"Based on math formula" - To which math formula in particular are you refering? Expand please.
Kudos [?]: 15 [0], given: 0
Intern
Joined: 31 Aug 2013
Posts: 15
Kudos [?]: 2 [0], given: 1
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
04 Oct 2013, 03:12
$$\sqrt{324} + \sqrt{289} = ?$$
(A). 32
(B). 33
(C). 34
(D). 35
(E). 36[/quote]
324 is the square root of 18 and 289 is the square root of 17.
ie 18 + 17 = 35.
Kudos [?]: 2 [0], given: 1
Intern
Joined: 13 May 2013
Posts: 29
Kudos [?]: 10 [0], given: 28
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
14 Apr 2014, 02:37
LU wrote:
Based on Math formula its 324 - 289 = 35
And it works for any number regardless if it is perfect square or not.
Thanks
LU
What is this "Math formula" ?
It works for any number? The square root of 64 + the square root of 25 is NOT equal to 64 - 25.
Or what am I missing here?
Kudos [?]: 10 [0], given: 28
Math Expert
Joined: 02 Sep 2009
Posts: 42249
Kudos [?]: 132643 [1], given: 12326
Re: GMAT Diagnostic Test Question 1 [#permalink]
### Show Tags
14 Apr 2014, 02:53
1
KUDOS
Expert's post
Saabs wrote:
LU wrote:
Based on Math formula its 324 - 289 = 35
And it works for any number regardless if it is perfect square or not.
Thanks
LU
What is this "Math formula" ?
It works for any number? The square root of 64 + the square root of 25 is NOT equal to 64 - 25.
Or what am I missing here?
Yes, that's not true.
_________________
Kudos [?]: 132643 [1], given: 12326
Re: GMAT Diagnostic Test Question 1 [#permalink] 14 Apr 2014, 02:53
Go to page Previous 1 2 [ 31 posts ]
Display posts from previous: Sort by
# GMAT Diagnostic Test Question 1
Moderator: Bunuel
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27414000034332275, "perplexity": 8519.1426867758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805417.47/warc/CC-MAIN-20171119061756-20171119081756-00570.warc.gz"} |
https://drizzlepac.readthedocs.io/en/latest/adrizzle.html | # Image Drizzling Step¶
The operation of drizzling each input image needs to be performed twice during processing:
• single drizzle step: this initial step drizzles each image onto the final output WCS as separate images
• final drizzle step: this step produces the final combined image based on the cosmic-ray masks determined by AstroDrizzle
Interfaces to main drizzle functions.
drizzlepac.adrizzle.drizzle(input, outdata, wcsmap=None, editpars=False, configObj=None, **input_dict)[source]
Each input image gets drizzled onto a separate copy of the output frame. When stacked, these copies would correspond to the final combined product. As separate images, they allow for treatment of each input image separately in the undistorted, final WCS system. These images provide the information necessary for refining image registration for each of the input images. They also provide the images that will be succeedingly combined into a median image and then used for the subsequent blot and cosmic ray detection steps.
Aside from the input parameters, this step requires:
• valid input images with SCI extensions
• valid distortion coefficients tables
• any optional secondary distortion correction images
• numpy object (in memory) for static mask
This step produces:
• singly drizzled science image (simple FITS format)
• singly drizzled weight images (simple FITS format)
These images all have the same WCS based on the original input parameters and those provided for this step; specifically, output shape, pixel size, and orientation, if any have been specified at all.
Other Parameters:
driz_separate : bool (Default = No)
This parameter specifies whether or not to drizzle each input image onto separate output images. The separate output images will all have the same WCS as the final combined output frame. These images are used to create the median image, needed for cosmic ray rejection.
driz_sep_kernel : {‘square’, ‘point’, ‘gaussian’, ‘turbo’, ‘tophat’, ‘lanczos3’} (Default = ‘turbo’)
Used for the initial separate drizzling operation only, this parameter specifies the form of the kernel function used to distribute flux onto the separate output images. The current options are:
• square: original classic drizzling kernel
• point: this kernel is a point so each input pixel can only contribute to the single pixel that is closest to the output position. It is equivalent to the limit as pixfrac -> 0, and is very fast.
• gaussian: this kernel is a circular gaussian with a FWHM equal to the value of pixfrac, measured in input pixels.
• turbo: this is similar to kernel=”square” but the box is always the same shape and size on the output grid, and is always aligned with the X and Y axes. This may result in a significant speed increase.
• tophat: this kernel is a circular “top hat” shape of width pixfrac. It effects only output pixels within a radius of pixfrac/2 from the output position.
• lanczos3: a Lanczos style kernel, extending a radius of 3 pixels from the center of the detection. The Lanczos kernel is a damped and bounded form of the “sinc” interpolator, and is very effective for resampling single images when scale=pixfrac=1. It leads to less resolution loss than other kernels, and typically results in reduced correlated noise in outputs.
Warning
The 'lanczos3' kernel tends to result in much slower processing as compared to other kernel options. This option should never be used for pixfrac!=1.0, and is not recommended for scale != 1.0.
The default for this step is “turbo” since it is much faster than “square”, and it is quite satisfactory for the purposes of generating the median image. More information about the different kernels can be found in the help file for the drizzle task.
driz_sep_wt_scl : float (Default = exptime)
This parameter specifies the weighting factor for input image. If driz_sep_wt_scl=exptime, then the scaling value will be set equal to the exposure time found in the image header. The use of the default value is recommended for producing optimal behavior for most scenarious. It is possible to set wt_scl=’expsq’ for weighting by the square of the exposure time, which is optimal for read-noise dominated images.
driz_sep_pixfrac : float (Default = 1.0)
Fraction by which input pixels are “shrunk” before being drizzled onto the output image grid, given as a real number between 0 and 1. This specifies the size of the footprint, or “dropsize”, of a pixel in units of the input pixel size. If pixfrac is set to less than 0.001, the kernel parameter will be reset to ‘point’ for more efficient processing. In the step of drizzling each input image onto a separate output image, the default value of 1.0 is best in order to ensure that each output drizzled image is fully populated with pixels from the input image. For more information, see the help for the drizzle task.
driz_sep_fillval : int or INDEF (Default = INDEF)
Value to be assigned to output pixels that have zero weight, or that receive flux from any input pixels during drizzling. This parameter corresponds to the fillval parameter of the drizzle task. If the default of INDEF is used, and if the weight in both the input and output images for a given pixel are zero, then the output pixel will be set to the value it would have had if the input had a non-zero weight. Otherwise, if a numerical value is provided (e.g. 0), then these pixels will be set to that value.
driz_sep_bits : int (Default = 0)
Integer sum of all the DQ bit values from the input image’s DQ array that should be considered ‘good’ when building the weighting mask. This can also be used to reset pixels to good if they had been flagged as cosmic rays during a previous run of AstroDrizzle, by adding the value 4096 for ACS and WFPC2 data. Please see the section on Selecting the Bits Parameter for a more detailed discussion.
driz_sep_wcs : bool (Default = No)
Define custom WCS for seperate output images?
driz_sep_refimage : str (Default = ‘’)
Reference image from which a WCS solution can be obtained.
driz_sep_rot : float (Default = INDEF)
Position Angle of output image’s Y-axis relative to North. A value of 0.0 would orient the final output image to be North up. The default of INDEF specifies that the images will not be rotated, but will instead be drizzled in the default orientation for the camera with the x and y axes of the drizzled image corresponding approximately to the detector axes. This conserves disk space, as these single drizzled images are only used in the intermediate step of creating a median image.
driz_sep_scale : float (Default = INDEF)
Linear size of the output pixels in arcseconds/pixel for each separate drizzled image (used in creating the median for cosmic ray rejection). The default value of INDEF specifies that the undistorted pixel scale for the first input image will be used as the pixel scale for all the output images.
driz_sep_outnx : int (Default = INDEF)
Size, in pixels, of the X axis in the output images that each input will be drizzled onto. If no value is specified, the smallest size that can accommodate the full dithered field will be used.
driz_sep_outny : int (Default = INDEF)
Size, in pixels, of the Y axis in the output images that each input will be drizzled onto. If no value is specified, the smallest size that can accommodate the full dithered field will be used.
driz_sep_ra : float (Default = INDEF)
Right ascension (in decimal degrees) specifying the center of the output image. If this value is not designated, the center will automatically be calculated based on the distribution of image dither positions.
driz_sep_dec : float (Default = INDEF)
Declination (in decimal degrees) specifying the center of the output image. If this value is not designated, the center will automatically be calculated based on the distribution of image dither positions.
Notes
These tasks are designed to work together seemlessly when run in the full AstroDrizzle interface. More advanced users may wish to create specialized scripts for their own datasets, making use of only a subset of the predefined AstroDrizzle tasks, or add additional processing, which may be usefull for their particular data. In these cases, individual access to the tasks is important.
Something to keep in mind is that the full AstroDrizzle interface will make backup copies of your original files and place them in the OrIg/ directory of your current working directory. If you are working with the stand alone interfaces, it is assumed that the user has already taken care of backing up their original datafiles as the input file with be directly altered.
There are two user interface function for this task, one to allow you to create seperately drizzled images of each image in your list and the other to create one single output drizzled image, which is the combination of all of them:
def drizSeparate(imageObjectList,output_wcs,configObj,wcsmap=wcs_functions.WCSMap)
def drizFinal(imageObjectList, output_wcs, configObj,build=None,wcsmap=wcs_functions.WCSMap)
if configObj[single_step]['driz_separate']:
drizSeparate(imgObjList,outwcs,configObj,wcsmap=wcsmap)
else:
drizFinal(imgObjList,outwcs,configObj,wcsmap=wcsmap)
Examples
Basic example of how to call static yourself from a python command line, using the default parameters for the task.
>>> from drizzlepac import adrizzle
drizzlepac.adrizzle.run(configObj, wcsmap=None)[source]
Interface for running wdrizzle from TEAL or Python command-line.
This code performs all file I/O to set up the use of the drizzle code for a single exposure to replicate the functionality of the original wdrizzle.
drizzlepac.adrizzle.drizSeparate(imageObjectList, output_wcs, configObj, wcsmap=None, procSteps=None)[source]
drizzlepac.adrizzle.drizFinal(imageObjectList, output_wcs, configObj, build=None, wcsmap=None, procSteps=None)[source]
drizzlepac.adrizzle.mergeDQarray(maskname, dqarr)[source]
Merge static or CR mask with mask created from DQ array on-the-fly here.
drizzlepac.adrizzle.updateInputDQArray(dqfile, dq_extn, chip, crmaskname, cr_bits_value)[source]
drizzlepac.adrizzle.buildDrizParamDict(configObj, single=True)[source]
drizzlepac.adrizzle.interpret_maskval(paramDict)[source]
Apply logic for interpreting final_maskval value…
drizzlepac.adrizzle.run_driz(imageObjectList, output_wcs, paramDict, single, build, wcsmap=None)[source]
Perform drizzle operation on input to create output. The input parameters originally was a list of dictionaries, one for each input, that matches the primary parameters for an IRAF drizzle task.
This method would then loop over all the entries in the list and run drizzle for each entry.
Parameters required for input in paramDict:
build,single,units,wt_scl,pixfrac,kernel,fillval, rot,scale,xsh,ysh,blotnx,blotny,outnx,outny,data
drizzlepac.adrizzle.run_driz_img(img, chiplist, output_wcs, outwcs, template, paramDict, single, num_in_prod, build, _versions, _numctx, _nplanes, chipIdxCopy, _outsci, _outwht, _outctx, _hdrlist, wcsmap)[source]
Perform the drizzle operation on a single image. This is separated out from run_driz() so as to keep together the entirety of the code which is inside the loop over images. See the run_driz() code for more documentation.
drizzlepac.adrizzle.run_driz_chip(img, chip, output_wcs, outwcs, template, paramDict, single, doWrite, build, _versions, _numctx, _nplanes, _numchips, _outsci, _outwht, _outctx, _hdrlist, wcsmap)[source]
Perform the drizzle operation on a single chip. This is separated out from run_driz_img so as to keep together the entirety of the code which is inside the loop over chips. See the run_driz code for more documentation.
drizzlepac.adrizzle.do_driz(insci, input_wcs, inwht, output_wcs, outsci, outwht, outcon, expin, in_units, wt_scl, wcslin_pscale=1.0, uniqid=1, pixfrac=1.0, kernel='square', fillval='INDEF', stepsize=10, wcsmap=None)[source]
Core routine for performing ‘drizzle’ operation on a single input image All input values will be Python objects such as ndarrays, instead of filenames. File handling (input and output) will be performed by calling routine.
drizzlepac.adrizzle.get_data(filename)[source]
drizzlepac.adrizzle.create_output(filename)[source]
drizzlepac.adrizzle.help(file=None)[source]
Print out syntax help for running astrodrizzle
Parameters: file : str (Default = None) If given, write out help to the filename specified by this parameter Any previously existing file with this name will be deleted before writing out the help.
drizzlepac.adrizzle.getHelpAsString(docstring=False, show_ver=True)[source]
return useful help from a file in the script directory called __taskname__.help | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2179080992937088, "perplexity": 2866.6347366105742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594808.94/warc/CC-MAIN-20210423131042-20210423161042-00275.warc.gz"} |
https://itprospt.com/num/1411233/problem-2-8-ii-consider-the-operatorsr2r2-and-u-r-r2-wherealdpoint | 1
# Problem 2.8(ii)) Consider the operatorsR2R2 and U R?R2 wherealdpoint) Prove that 0 adl & are linear mapS . Are and & composites of rotations #d rellections ...
## Question
###### Problem 2.8(ii)) Consider the operatorsR2R2 and U R?R2 wherealdpoint) Prove that 0 adl & are linear mapS . Are and & composites of rotations #d rellections in R?? point ) Compute (the formulas for) 02 = 00 0 ad 42 = 0o 0 points) Compute @ 0 and (0 0 Vo 0-
Problem 2.8(ii)) Consider the operators R2 R2 and U R? R2 where ald point) Prove that 0 adl & are linear mapS . Are and & composites of rotations #d rellections in R?? point ) Compute (the formulas for) 02 = 00 0 ad 42 = 0o 0 points) Compute @ 0 and (0 0 Vo 0-
#### Similar Solved Questions
##### Which of the following - types of radiation has the most energy? microwavesvsible lightinfrared radiation utraviolet lightQUESTIon 2Calculate the wavelength of light emitted when & electron in a hydrogen atom falls trom the n-4 orbit to the n=1 orbit: (Rydberg Constant; RH =2.18X 10-18 J) 97 nm 650 nm 380 nm 6400 nm
Which of the following - types of radiation has the most energy? microwaves vsible light infrared radiation utraviolet light QUESTIon 2 Calculate the wavelength of light emitted when & electron in a hydrogen atom falls trom the n-4 orbit to the n=1 orbit: (Rydberg Constant; RH =2.18X 10-18 J) 97...
##### (10)+VR =VNote that the same time constant t=RC appears in the charging equations as well at any time Today we will be verifying the above equations by measuring the various voltages in both charging and discharging capacitor circuits This will be accomplished by measuring the time constant for various combinations of R and C. PreLab Question 2: Consider RC charging circuit as in Figure with R= IxIO ohms; C-IxlO" Farads, and Vaprlicd 9.0 Volts: How long will it take t0 charge the capacitor
(10) +VR =V Note that the same time constant t=RC appears in the charging equations as well at any time Today we will be verifying the above equations by measuring the various voltages in both charging and discharging capacitor circuits This will be accomplished by measuring the time constant for va...
##### Part Ill: Bottlenecks and(b) Genotype frequencies ((a) Allele frequencies data type R R Rr rr R ObservedPopulationPredictedUse the third image in the handout to calculate genotype frequencies for this population. b. Calculate the allele frequencies for the population and fill them in above: Check the example from class if you have issues with the calculations. The grey boxes are ones you don't fill in Use those allele frequencies to predict genotype frequencies using H-W equation:
Part Ill: Bottlenecks and (b) Genotype frequencies ((a) Allele frequencies data type R R Rr rr R Observed Population Predicted Use the third image in the handout to calculate genotype frequencies for this population. b. Calculate the allele frequencies for the population and fill them in above: Chec...
##### Use the test to test the hypotheses In part (d) at a 0.05 level of significance_ Present the results In the analysis of variance table format.Set up the ANOVA table: (Round your values for MSE and to two declmal places; and your P-value to three decimal places_Source of VarlationSumDegrees 0f squares 0f FreedomMean squareP-valueRegressionErorTotalFlnd the value of the test statistic. (Round vour answer to two decima places:)FInd the p-value_ (Round your answer to three decimal places ) p-value
Use the test to test the hypotheses In part (d) at a 0.05 level of significance_ Present the results In the analysis of variance table format. Set up the ANOVA table: (Round your values for MSE and to two declmal places; and your P-value to three decimal places_ Source of Varlation Sum Degrees 0f sq...
##### Match each R-squered value to the correct cataset label:5.1%Choose62.1%Chojte9858Chooze |
Match each R-squered value to the correct cataset label: 5.1% Choose 62.1% Chojte 9858 Chooze |...
##### Question 63 pESuppose you are asked to construct 95% confidence interval for an unknown population mean /. The population standard deviation is unknown and thc sample used to construct the interval has sample size n =19. What critical value should be used in the formula? (do not round your final answer)
Question 6 3 pE Suppose you are asked to construct 95% confidence interval for an unknown population mean /. The population standard deviation is unknown and thc sample used to construct the interval has sample size n =19. What critical value should be used in the formula? (do not round your final a...
##### Particle A of charge $3.00 \times$ $10^{-4} \mathrm{C}$ is at the origin, particle B of charge $-6.00 \times 10^{-4} \mathrm{C}$ is at $(4.00 \mathrm{m}, 0),$ and particle $\mathrm{C}$ of charge $1.00 \times 10^{-4} \mathrm{C}$ is at $(0,$, $3.00 \mathrm{m}$ ). We wish to find the net electric force on C. (a) What is the $x$ component of the electric force exerted by $\mathrm{A}$ on $\mathrm{C}$ ? (b) What is the $y$ component of the force exerted by A on C? (c) Find the magnitude of the force e
Particle A of charge $3.00 \times$ $10^{-4} \mathrm{C}$ is at the origin, particle B of charge $-6.00 \times 10^{-4} \mathrm{C}$ is at $(4.00 \mathrm{m}, 0),$ and particle $\mathrm{C}$ of charge $1.00 \times 10^{-4} \mathrm{C}$ is at $(0,$, $3.00 \mathrm{m}$ ). We wish to find the net electric force...
##### How much work is required to set Up the arrangement of charges shown in Figure Kf Q= _ 3 AC? Assume that the charged particles are initially infinitely far apart and at Iest Take the potential to be zero at infinity:3 mLa Cl= Select one:
How much work is required to set Up the arrangement of charges shown in Figure Kf Q= _ 3 AC? Assume that the charged particles are initially infinitely far apart and at Iest Take the potential to be zero at infinity: 3 m La Cl= Select one:...
##### In Exercises $55-58$, determine whether the statement is true or false. If it is false, explain why or give an example that shows it is false.Between any two relative minima of $f,$ there must be at least one relative maximum of $f$.
In Exercises $55-58$, determine whether the statement is true or false. If it is false, explain why or give an example that shows it is false. Between any two relative minima of $f,$ there must be at least one relative maximum of $f$....
##### Par AWhich vessels open into the left alrium?suporior vona cavapulmonary voinpulmonary artoryeona
Par A Which vessels open into the left alrium? suporior vona cava pulmonary voin pulmonary artory eona...
##### Solve the following triangles (4 points each)
Solve the following triangles (4 points each)...
##### The Coca Cola produced 2.5 liter bottle of soda. The productiondepartment reported that the standard deviation of this bottle is0.38 liter. The quality control department conduct a randomchecking on the content of the bottles and obtained 2.45 litersfrom 100 2-liter bottles. Test if there's enough evidence that theaverage amount in bottles is different from the standard 2 liters.Use 0.05 level of significance.
The Coca Cola produced 2.5 liter bottle of soda. The production department reported that the standard deviation of this bottle is 0.38 liter. The quality control department conduct a random checking on the content of the bottles and obtained 2.45 liters from 100 2-liter bottles. Test if there's...
##### How much carbon dioxide (g) will be produced after burning- 20-pound propane tank?How much carbon monoxide (g) will be produced after burning 20-pound propane tank with limited access to oxveen (incomolete combustior
How much carbon dioxide (g) will be produced after burning- 20-pound propane tank? How much carbon monoxide (g) will be produced after burning 20-pound propane tank with limited access to oxveen (incomolete combustior...
##### Fnd t iotal plarsure 01 te mirtue(2 poinis)wnd i tha moks kacion dd gach g88 pregeth in &e miuno ?(0 poines)
Fnd t iotal plarsure 01 te mirtue (2 poinis) wnd i tha moks kacion dd gach g88 pregeth in &e miuno ?(0 poines)...
##### If the wavelength absorbed is at 412 nm, then what color will webe able to see with our eyes?
If the wavelength absorbed is at 412 nm, then what color will we be able to see with our eyes?...
##### 3 value of B for which the following V equation 1 ai; exact is.14Determine the values ofr for has 1 Tl (ibi; equation 4131 shovn 13 9'
3 value of B for which the following V equation 1 ai; exact is.14 Determine the values ofr for has 1 Tl (ibi; equation 413 1 shovn 1 3 9 '... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8915154337882996, "perplexity": 2615.368549607612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00665.warc.gz"} |
https://qbnets.wordpress.com/2009/08/ | # Quantum Bayesian Networks
## August 29, 2009
### Tulsi
Filed under: Uncategorized — rrtucci @ 4:35 pm
Check out:
“Indian prodigy boy completes PhD in physics at the age of 21”
Bombay News.Net
Friday 28th August, 2009 (ANI)
Excerpt:
“The young Indian scientist has an invite from the Institute for Quantum Computing at the University of Waterloo, Canada, for post- doctoral work.
But he wants to continue his research in software development for quantum computing, the super fast future of number crunching in India given a chance and proper funding.
He said that he hopes to set up his own quantum computing company someday and is working hard for it.”
Huh? This must be a misquote. Surely what Tulsi really said must have been that he wants to be a prestigious quantum complexity theorist, not a lowly quantum computer programmer. 🙂
Wikipedia on Tathagat Avatar Tulsi
### Quantum Internet Snake Oil
Filed under: Uncategorized — rrtucci @ 6:27 am
Bizarre/Hilarious.
Caltech and MIT snake oil sale:
“Entangled Light, Quantum Money”
Technology Review, September/October 2009
by Mark Williams
Some excerpts:
…Kimble made one easily graspable assertion: “Our society’s technical base is information commerce. In the next 20 years, quantum information science–a fusion of computer science and quantum mechanics that didn’t exist 20 years ago–will radically change that commerce.”
The revolutionary technology that Kimble envisions is large quantum networks, resembling the Internet but relying on entanglement. What inherent advantages would promote the development and adoption of such networks?
Substantial ones….
MIT’s Seth Lloyd has given some thought to the design options for quantum networks.
So Kimble has a reasonable argument that quantum networks are feasible. And the advantages that he envisions–absolute data security, no latency, and a further exponential gain in computational power–would hardly be negligible in the world of information commerce.
Futures traders who use near-instantaneous quantum networks will have clear advantages over those who don’t.
Other commercial applications are possible as well. Scott Aaronson suggested one of them in a paper called “Quantum Copy-Protection and Quantum Money.”
The first generation of money emerged with the invention of coins in Lydia nearly 3,000 years ago, its second generation with the paper bills of exchange issued by the banks of Renaissance Italy, and its third with electronic money and the virtual economy of the modern era. If scientists like Kimble and Aaronson are correct, quantum networks may soon give rise to a further generation of money.
Replacing the classical internet by a quantum one is a really dumb idea; doing so would be phenomenally expensive and totally unnecessary. There are known classical cryptographic codes which cannot be broken by a classical or quantum computer, so there is no need for quantum cryptography, or a “quantum Internet”, or “quantum money”, ever. Why should a carpenter replace his perfectly adequate steel hammer (classical networks and classical encryption methods) by a pure gold one?
My previous blog posts on this subject:
## August 28, 2009
### xkcd comic on AI
Filed under: Uncategorized — rrtucci @ 4:22 pm
xkcd comics, “genetic algorithms” panel:
### Military Uses of Quantum Computers
Filed under: Uncategorized — rrtucci @ 3:55 pm
Both hawks and doves should be interested in this topic. We would be dangerously naive and irresponsible if we didn’t think about this topic, starting at the early stages of QC development.
Since the dawn of civilization, science has been used to design weapons, the instruments with which we defend our nation, wage war against other nations, kill and maim. Many highly successful, peaceful applications of science were initially invented or improved with warfare in mind. A modern example is the internet, which was initially funded by DARPA.
Unless humans learn to stop killing each other, which is highly unlikely, it’s inevitable that eventually, someone somewhere will use quantum computers to build weapons. So, what types of weapons could be built with quantum computers? Some obvious near term military applications of quantum computers are
cryptographic decoding – Thanks to Shor’s algorithm, quantum computers can decode efficiently certain types of cryptographic codes such are RSA, which is the backbone of our current commercial encryption systems. Luckily, there are other known classical cryptographic codes which cannot be decoded by a QC.
AI/pattern recognition – Bayesian network methods (a subset of which is MCMC (Markov Chain Monte Carlo) methods) have numerous, wonderful, peaceful applications. Nevertheless, Bayesian network methods have military applications too. For example, they are already in use (or their use is being tested) to discriminate between missiles and decoys in Star Wars defense systems. (I know this for a fact, since I was once interviewed (and rejected 🙂 ) for a defense job to write Bayesian net software for this purpose). We will soon know how to do B. net calculations on a quantum computer. (See my previous posts about quantum simulated annealing). QCs will perform such calculations much faster than classical computers. The speed at which one can perform discrimination is crucial for defensive systems like a Star Wars defense shield.
Bioinformatics – A frightful possibility in the not too distant future is bio-engineered terrorism or accidents. Suppose someone were to engineer a very harmful germ and unleash it upon us. Bioinformaticists might be enlisted to analyze this germ, as their findings might help to find an antidote. Clearly, how long it takes to analyze the germ is critical in this scenario. One common tool in bioinformatics is MCMC, so if MCMC can be performed faster with a quantum computer than a classical computer, that would help bioinformaticists do their job more quickly.
## August 25, 2009
### Quantum (Zeno, Anti-Zeno, Hamlet) Effects
Filed under: Uncategorized — rrtucci @ 8:54 pm
New fodder for quantum algorithm composers and quantum computer programmers:
“Quantum Hamlet Effect” by Vladan Panković, arXiv 0908.1301
Imagine a play in which:
As he stands near the edge of the quantum precipice,
Zeno hears voices saying: “Don’t jump!”,
Anti-Zeno hears voices saying: “Jump!”,
and poor Hamlet hears both.
Quite a drama!
Unfortunately, as already pointed out by Wolfgang, Vladan incorrectly assumes he can take, for his proposed Hamlet experiment, the limit of infinitely many measurements. (If you could take such a limit, then the sum of the time intervals between measurements would be infinite, so it would take an infinitely long time to perform the experiment).
By the way, the Wikipedia article on the standard quantum Zeno and anti-Zeno effects is excellent, even discussing experimental tests of the idea.
## August 23, 2009
### The X-Prize for Quantum Computing, Why Not?
Filed under: Uncategorized — rrtucci @ 11:47 am
In his latest blog post, Jack Woehr ponders about the factors that might determine when quantum computers will become practical, to the point where people like him “can play with them”.
In my opinion, quantum computing research is currently mostly a government funded, academic endeavor. Private sector investment in QCs is currently almost nil. It will take an eternity to develop a large scale quantum computer this way. If the private sector were more involved, that would certainly speed up the development of QCs. So the question in my mind is, how can we encourage more private investment in QCs. This reminded me of the great automobile and airplane contests of the early 20th century (Great auto race of 1908, Orteig aviation prize of 1919), and of the current X prizes.
The first X Prize was the Ansari X Prize, announced in 1996, $10 million to the first private company to fly a man to near space (100 km altitude) (twice in two weeks, in the same “reusable” vehicle). 26 teams from around the world participated. It has been estimated that 10 X$10 million was invested in pursuit of the prize. The prize was won in 2004 by SpaceShipOne.
An important goal of the Ansari X prize was to encourage private investment in the space industry, so participants were not allowed to have any government funding.
Since the Ansari X prize, 4 more X prizes have been announced and still lay unclaimed (Google Lunar, Progressive Automotive, Archon Genomics, Northrop Grumman Lunar Lander) and 4 more are in the planning stages (Energy and Environment, Exploration, Education & Global Development, Life Sciences).
As of today, a search of the official X Prize website yields no hits for the keyword “quantum computer”. However, note that there is a very enticing “Propose an X Prize” button on the home page.
Other similar contemporary prizes: DARPA Grand Challenge, etc.
I end with an inspiring quote which I got from this news article. In an interview, Peter Diamandis, co-founder of the X prize, paid tribute to his friend and mentor, Arthur C. Clarke, by relating the following story about Clarke:
“He told me something once that I thought was incredibly valuable. He said, ‘Peter, there are three phases of a good idea. The first phase is, people tell you it’s a crazy idea, it’ll never work. The next phase is, they say, it might work but it’s not worth doing. And the third phase is when people tell you, “I told you that was a great idea all along.”‘
“The X Prize has definitely gone through those three phases, and I think of Arthur every time I talk about that. I’m thankful for his support … and also for his absolute passion regarding the need of the human race to evolve beyond the earth.”
## August 19, 2009
### Tocqueville
Filed under: Uncategorized — rrtucci @ 5:09 pm
In 1831, a 25 year old French aristocrat named Alexis de Tocqueville toured the young U.S. nation for 9 months. After this, he published his insightful thoughts and observations about the U.S. in a book entitled “Democracy in America”.
Jack Woehr, a long time denizen of the world of Computer Programming, has been recently touring the new world of Quantum Computing. He has just posted in his blog a 3 part series (Aug 11, 12, 19) on the ion trappers of the Colorado-NIST.
## August 15, 2009
### Szegedy Ops
Filed under: Uncategorized — rrtucci @ 10:07 pm
Fig.1-Szegedy Operator W(M) when M acts on 2 bits (i.e., is 4 by 4 matrix)
QuSAnn implements in software some cool theoretical techniques, such as the quantum walk operators $W(M)$ invented by Mario Szegedy ( a two times Gödel prize winner). These are unitary operators which have the following highly desirable property.
Suppose you have a Markov chain with transition probability matrix $M(y|x)$ and stationary state $\pi(x)$, where $x,y\in\Omega$. Then the state $\sum_x \sqrt{\pi(x)} |x\rangle\otimes |0\rangle$ is a stationary state of $W(M)$. (Here $0$ is an arbitrary, fixed element of $\Omega$ ). Thus, if $M$ acts on $N_B$ bits (i.e., on a $2^{N_B}$ dimensional vector space), then $W(M)$ acts on $2N_B$ bits.
For example, Fig.1 shows a circuit diagram for $W(M)$ in case $N_B=2$. (For instructions on how to decipher Fig.1, see the QuSAnn documentation). Fig.1 uses multiplexor gates. If you want to express these in terms of simpler gates, like multiply controlled NOTs and qubit rotations, you can do this with Multiplexor Expander, an utility application that comes with QuSAnn.
## August 13, 2009
### Have a Hot Dog at the Ballpark of Quantum Computing
Filed under: Uncategorized — rrtucci @ 12:06 am
A conversation overheard at Rick’s Café Américain-
So why would anyone be interested in using a quantum computer, shweetheart?
-Cigarettes, gum, cryptography, anyone?
-No cryptography thank you. Just cigarettes, shweetie.
Let’s face it Miss, most people (including most scientists and engineers) are not interested in understanding cryptography. They want to use cryptography to protect their data (e.g., in monetary transactions), but that’s about it. They don’t give a hill of beans about the underlying theory, or Shor factorization or RSA. Understanding the arcane details of cryptography is just not necessary or important to their disciplines (Is cryptography part of the curriculum of a physicist or biologist or chemist, like calculus and linear algebra are? Nope.). Besides, quantum computers can break RSA, but there are other known cryptographic codes, not breakable by a QC, that could easily replace RSA.
So what’s left for a poor QC prospector down on his luck? Grover’s algorithm?
Most people who have given a curshory look at Grover’s algo are totally baffled by the claim that it can be used to search a database. Not the type of databases (e.g., a telephone directory) I am familiar with. Not unless you first know how to express your “database” as an efficient oracle (that is, as an algorithm that is computable with polynomial efficiency), and how do you do that for a telephone directory? Beats me.
So…to recap, the average Joe thinks that there are only two known, important algorithms for a quantum computer, see, and those two algorithms sound quite lame to him.
Now, now. It ain’t that bad kid. Just remember darling, we’ll always have Paris, and… quantum simulated annealing and other applications of MCMC (Markov Chain Monte Carlo) for a quantum computer.
While most scientist and engineers and businessmen don’t give a hill of beans about breaking RSA or searching a misnomer-ed database, they are keenly interested in minimizing functions. They are confronted with such problems on a daily basis. And quite often, function minimization cannot be performed with polynomial efficiency. In those difficult cases, classical simulated annealing enables them to get an answer that is fairly close to the true minimum, using a thermodynamic method that Nature herself loves and uses frequently.
And… Somma et al. (arXiv:0712.1008) and Wocjan et al. (arXiv:0804.4259) have proven that one can do simulated annealing much faster on a quantum computer than on a classical one. (More precisely, for any $\epsilon>0$, their quantum algorithm requires order $1/\sqrt{\delta}$ elementary operations to find, with probability greater than $1-\epsilon$, the minimum of a function. Here $\delta$ is the distance between the two largest eigenvalue magnitudes of the transition probability matrix for the Metropolis Markov chain. Their quantum algorithm outperforms the classical simulated annealing algorithm, which requires order $1/\delta$ elementary operations to do the same thing.)
Relax. No need to fear the Germans while you’re here at Rick’s Cafe. Have a drink at the bar and play our new Monte Carlo roulette game, QuSAnn. QuSAnn outputs a quantum circuit for doing simulated annealing on a quantum computer. The quantum circuit implements the algorithm of Wocjan et al., which improves on the original algorithm of Somma et al.
Appendix
Bogie quotes:
“A hot dog at the ball park is better than steak at the Ritz.”(his own words)
“I was born when you kissed me. I died when you left me. I lived a few weeks while you loved me.”(In A Lonely Place)
“help a poor American down on his luck”(Treasure of the Sierra Madre)
“The stuff that dreams are made of.”(Maltese Falcon)
Casablanca:
“Of all the gin joints in all the towns in all the world, she walks into mine.”(Casablanca)
Ilsa: Play it once, Sam. For old times’ sake.
Sam: [lying] I don’t know what you mean, Miss Ilsa.
Ilsa: Play it, Sam. Play “As Time Goes By.”
Sam: [lying] Oh, I can’t remember it, Miss Ilsa. I’m a little rusty on it.
Ilsa: I’ll hum it for you. Da-dy-da-dy-da-dum, da-dy-da-dee-da-dum…
[Sam begins playing]
Ilsa: Sing it, Sam.
Sam: [singing] You must remember this / A kiss is still a kiss / A sigh is just a sigh / The fundamental things apply / As time goes by. / And when two lovers woo, / They still say, “I love you” / On that you can rely / No matter what the future brings-…
Rick: [rushing up] Sam, I thought I told you never to play-…
[Sees Ilsa. Sam closes the piano and rolls it away]
“It doesn’t take much to see that the problems of three little people doesn’t add up to a hill of beans in this crazy world. Someday you’ll understand that. Now, now… Here’s looking at you kid.”(Casablanca)
“If that plane leaves the ground and you’re not with him, you’ll regret it. Maybe not today. Maybe not tomorrow, but soon and for the rest of your life.”(Casablanca)
“We’ll always have Paris.”(Casablanca)
“Louis, I think this is the start of a beautiful friendship.”(Casablanca)
## August 5, 2009
### Why Quantum Computers–Galileo’s Telescope
Filed under: Uncategorized — rrtucci @ 12:42 pm
Check out:
Galileo’s Vision
Four hundred years ago, the Italian scientist looked into space and changed our view of the universe, By David Zax, Smithsonian magazine, July 2009
A beautifully written paean to the spirit of scientific discovery. Like Galileo’s telescope, quantum computers, once they are built, will open for us vast new vistas of the universe (into a new type of quantum devices, q. algorithms, q. computer programming, q. measurement theory, q. entanglement, q. decoherence, q. simulation, q. error correction, q. information theory, maybe even AI and q. gravity). Hopefully, QCs will make mankind wiser, kinder and stronger.
Blog at WordPress.com. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3555087745189667, "perplexity": 3150.5415715491454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572235.63/warc/CC-MAIN-20190915175150-20190915201150-00553.warc.gz"} |
https://www.physicsforums.com/threads/proof-hint-help.75348/ | # Proof Hint/Help
1. May 12, 2005
### eckiller
Hi everyone,
Let T be a linear operator on nxn Matrices with real entries defined by
T(A) = transpose(A).
Show that +-1 are the only eigenvalues of T.
Any tips on how to start this. I thought about writing the matrix representation relative to the standard basis, but it seemed really messy/tedious to write that out in general. Is there an easier way, or is that the only way to go?
2. May 12, 2005
### Don Aman
use the fact that T^2 = id
Similar Discussions: Proof Hint/Help | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655972719192505, "perplexity": 859.8040455295247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320707.69/warc/CC-MAIN-20170626101322-20170626121322-00038.warc.gz"} |
https://community.wolfram.com/groups/-/m/t/2364831 | GROUPS:
# Where does the ResourceFunction stored locally?
Posted 6 days ago
169 Views
|
3 Replies
|
4 Total Likes
|
Whenever we pull a function from the Wolfram Function Repository using "ResourceFunction", where does the downloaded files stored in the system?
3 Replies
Sort By:
Posted 6 days ago
Hi Ajit,They are stored somewhere under PersistenceLocation["Local"][[2]] Why do you need to know this? The source code for all WFR functions is available for download. Click on the "Source Notebook" button on the top left of a functions documentation page.
Local caches of resource functions are stored in binary format and are quite small. On my machine, Collatz occupies 21KB. Hardly seems worth the effort to delete. Even on a Raspberry Pi storage is cheap.You might want to try PersistResourceFunction, it provides a clean way to install and uninstall resource functions. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20503461360931396, "perplexity": 3508.9087347872037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056297.61/warc/CC-MAIN-20210918032926-20210918062926-00274.warc.gz"} |
https://learnzillion.com/lesson_plans/5517-find-the-distance-between-two-points-using-absolute-value | Instructional video
# Find the distance between two points using absolute value
teaches Common Core State Standards CCSS.Math.Content.7.NS.A.1c http://corestandards.org/Math/Content/7/NS/A/1/c
## You have saved this instructional video!
Here's where you can access your saved items.
Dismiss | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.962891697883606, "perplexity": 15707.082693742448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949036.99/warc/CC-MAIN-20180427041028-20180427061028-00622.warc.gz"} |
https://socratic.org/questions/how-do-you-evaluate-7k-2s-3q-2q-3k-4k-q-2s#543432 | Algebra
Topics
# How do you evaluate (- 7k + 2s - 3q ) - ( - 2q - 3k ) + ( 4k + q - 2s )?
Jan 30, 2018
0
#### Explanation:
Let’s start by making the equation to addition format (as it is easier to evaluate).
Note: Negative x Negative = Positive
So $\left(- 7 k + 2 s - 3 q\right) - \left(- 2 q - 3 k\right) + \left(4 k + q - 2 s\right)$
Becomes $\left(- 7 k + 2 s - 3 q\right) + \left(2 q + 3 k\right) + \left(4 k + q - 2 s\right)$
Now we can group all the like terms together; move all the same variables together. Ill arrange it in brackets for you to get a better visual.
$\left(- 7 k + 2 s - 3 q\right) + \left(2 q + 3 k\right) + \left(4 k + q - 2 s\right)$
$\left(- 7 k + 3 k + 4 k\right) + \left(2 s - 2 s\right) + \left(- 3 q + 2 q + q\right)$
$\left(- 7 k + 7 k\right) + \left(2 s - 2 s\right) + \left(- 3 q + 3 q\right)$
$0 + 0 + 0 = 0$
Therefore answer is 0
##### Impact of this question
489 views around the world
You can reuse this answer | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753573298454285, "perplexity": 2132.582553450494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104678225.97/warc/CC-MAIN-20220706212428-20220707002428-00734.warc.gz"} |
https://socratic.org/questions/how-do-you-factor-9a-2-9a | Algebra
Topics
# How do you factor -9a^2-9a?
Nov 24, 2015
$- 9 a \left(a + 1\right)$
#### Explanation:
What is a common factor of the equation?
First lets look at $9$
The two numbers $- 9$ and $- 9$ both have a common factor of $- 9$, so we can take out a $- 9$ by dividing the equation by $- 9$ and placing it on the outside of the brackets.
Factoring out -9
$- 9 \left(\frac{\cancel{- 9} {a}^{2}}{\cancel{- 9}} + \frac{\cancel{- 9} a}{\cancel{- 9}}\right)$
New equation: $- 9 \left({a}^{2} + a\right)$
Now lets look at $a$
The common factor of ${a}^{2}$ and $a$ is $a$. Take out $a$ by dividing the inside of the brackets by $a$ and placing it on the outside
Factoring out the a
$- 9 a \left(\frac{{a}^{\cancel{2}}}{\cancel{a}} + \frac{\cancel{a}}{\cancel{a}}\right)$
New equation: $- 9 a \left(a + 1\right)$
Note: Keep in mind that $\frac{a}{a}$ is $1$, not zero, because $\frac{1}{1}$ is not zero either, so we have to keep a $1$ after cancelling out $\frac{a}{a}$.
##### Impact of this question
656 views around the world
You can reuse this answer | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 22, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469085335731506, "perplexity": 330.2622470535782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00426.warc.gz"} |
https://sriasat.wordpress.com/ | ## SL(2,IR) is the commutator subgroup of GL(2,IR)
Here is a proof of the above fact.
Let $N$ be the commutator subgroup of the general linear group $GL(2,\mathbb R)$; i.e.,
$N=\langle ABA^{-1}B^{-1}:A,B\in GL(2,\mathbb R)\rangle$.
First, it is clear that $N$ is contained in the special linear group $SL(2,\mathbb R)$, since $\det(ABA^{-1}B^{-1})=1$ for any $A,B\in GL(2,\mathbb R)$. Next, we claim that $N$ contains all matrices
$\begin{pmatrix} 1 & b\\ 0 & 1\end{pmatrix}$.
This follows from noting that
$\begin{pmatrix} 1 & b\\ 0 & 1\end{pmatrix}=\begin{pmatrix} 1 & b\\ 0 & b\end{pmatrix}\begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & b\\ 0 & b\end{pmatrix}^{-1}\begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix}^{-1}$.
By taking transposes, it also follows that $N$ contains all matrices
$\begin{pmatrix} 1 & 0\\ c & 1\end{pmatrix}$.
Further, $N$ contains all matrices
$\begin{pmatrix} a & 0\\ 0 & 1/a\end{pmatrix}$
since
$\begin{pmatrix} a & 0\\ 0 & 1/a\end{pmatrix}=\begin{pmatrix} a & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix} a & 0\\ 0 & 1\end{pmatrix}^{-1}\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}^{-1}$
for any $a\neq 0$.
Now let
$\begin{pmatrix} a & b\\ c & d\end{pmatrix}\in SL(2,\mathbb R)$.
Then $ad-bc=1$. Using the above results,
$\begin{pmatrix} a & b\\ c & d\end{pmatrix}=\begin{pmatrix} 1 & 0\\ c/a & 1\end{pmatrix}\begin{pmatrix} 1 & ab\\ 0 & 1\end{pmatrix}\begin{pmatrix} a & 0\\ 0 & 1/a\end{pmatrix}\in N$
if $a\neq 0$, and
$\begin{pmatrix} a & b\\ c & d\end{pmatrix}=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}1&-\frac{d}{b}\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ ab&1\end{pmatrix}\begin{pmatrix}1/b&0\\ 0&b\end{pmatrix}\in N$
if $b\neq 0$, and the latter since
\begin{aligned}\begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix}=&\begin{pmatrix}x&y\\0&-x-y\end{pmatrix}\begin{pmatrix}-x-y&0\\ x&y\end{pmatrix}\begin{pmatrix}x&y\\0&-x-y\end{pmatrix}^{-1}\begin{pmatrix}-x-y&0\\ x&y\end{pmatrix}^{-1}\\ \in &N\end{aligned}
for any $x,y,x+y\neq 0$. Thus $SL(2,\mathbb R)\subseteq N$, i.e., $N=SL(2,\mathbb R)$.
Filed under Linear algebra
## Minkowski’s criterion
Here is a related post that I find interesting.
In linear algebra, Minkowski‘s criterion states the following.
Theorem (Minkowski’s Criterion). Let $A$ be an $n\times n$ matrix with real entries such that the diagonal entries are all positive, off diagonal entries are all negative, and the row sums are all positive. Then $\det(A)\neq 0$.
This is a nice criterion and is not very difficult to prove, but for a random matrix it is asking too much. To decide whether a matrix is singular one usually looks for a row/column consisting of zeros or adding up to zero. The following result gives sufficient conditions for this to work. Unfortunately, it does not generalise Minkowski’s result.
Theorem. Let $A$ be a $n\times n$ matrix with real entries such that its row sums are all $\ge 0$, its lower diagonal entries are $\ge 0$ and its upper diagonal entries are $\le 0$. Then $\det(A)=0$ if and only if $A$ has either a row consisting entirely of zeros or all the row sums equal to zero.
Proof. Suppose that $Ab=0$, where $b=(b_1,\dots,b_n)^T\neq 0$. Assume that $b_1\ge\cdots\ge b_n$. Then there exists $1\le m such that $a_{1,1}',\dots,a_{1,m}'\ge 0$ and $a_{1,m+1}',\dots,a_{1,n}'\le 0$. Hence
\begin{aligned}0=\sum_{j=1}^na_{1,j}'b_j&\ge b_m\sum_{j=1}^ma_{1,j}'+b_{m+1}\sum_{j=m+1}^na_{1,j}\\&\ge (b_m-b_{m+1})\sum_{j=1}^ma_{1,j}'\ge 0.\end{aligned}
So we must have (i) $b_1=\cdots=b_m$, (ii) $b_{m+1}=\cdots=b_n$, (iii) $a_{1,1}'+\cdots+a_{1,n}'=0$ and (iv) either $b_m=b_{m+1}$ or $a_{1,1}'+\cdots+a_{1,m}'=0$. These boil down to having either $b_1=\cdots=b_n$ or $a_{1,1}'=\cdots=a_{1,n}'=0$. Apply this argument to each row of $A$ to obtain the desired conclusion. $\square$
Filed under Linear algebra
## Simple cases of Jacobson’s theorem
A celebrated theorem of Jacobson states that
Theorem. Let $R$ be a ring, not necessarily containing $1$. If, for each $a\in R$ there exists a positive integer $n$ such that $a^n=a$, then $R$ is commutative.
This is a very strong and difficult result (although not very useful in practice). However, we can obtain some special cases via elementary means.
Proposition 1. Let $R$ be a ring such that for each $a\in R$ we have $a^2=a$. Then $R$ is commutative.
Proof. Let $a,b\in R$. Then $a+b=(a+b)^2=a^2+ab+ba+b^2=a+ab+ba+b$, i.e., $ab=-ba$. Again, $a-b=(a-b)^2=a^2-ab-ba+b^2=a-ab-ba+b$, i.e., $ab=-ba+2b$. Thus $2b=0$, i.e., $b=-b$ for each $b\in R$. Thus $ab=-ba=ba$, as desired. $\square$
The next case is already considerably harder.
Proposition 2. Let $R$ be a ring such that for each $a\in R$ we have $a^3=a$. Then $R$ is commutative.
Proof. Let $a,b\in R$. Then $a+b=(a+b)^3$ shows that
$(*)\qquad\qquad\qquad a^2b+aba+ba^2+ab^2+bab+b^2a=0$,
and $a-b=(a-b)^3$ shows that
$a^2b+aba+ba^2=ab^2+bab+b^2a$.
Hence
$(**)\qquad\qquad\qquad\qquad 2(a^2b+aba+ba^2)=0$
for all $a,b\in R$.
Plugging $a=b$ into $(**)$ gives $6a=0$, i.e., $3a=-3a$ for each $a\in R$.
Plugging $b=a^2$ into $(*)$ gives $3(a^2+a)=0$, i.e., $3a^2=3a$ for each $a\in R$. Replacing $a$ by $a+b$ gives $3(ab+ba)=0$, i.e., $3(ab-ba)=0$.
Also, multiplying $(**)$ by $a$ first on the left and then on the right and then subtracting the two gives $2(ab-ba)=0$.
From the last two paragraphs we conclude that $ab-ba=0$ for all $a,b\in R$. $\square$
Corollary. Let $R$ be a ring such that for each $a\in R$ we have $a^n=a$ for some $n\le 3$. Then $R$ is commutative.
Proof. Note that if $a^n=a$ for some $n\le 3$ then $a^3=a$. Hence the result follows by Proposition 2. $\square$
Filed under Algebra
## A nice group theory result
While working on some group theory problems today a friend and I came up with the following result.
Lemma. Let $H$ be a normal subgroup of a finite group $G$ such that $\gcd(|H|,|G/H|)=1$. If the order of $g\in G$ divides $|H|$, then $g\in H$.
Proof. Let $d$ be the order of $g$. Then the order $d'$ of $gH$ in $G/H$ divides both $d$ and $|G/H|$. But $\gcd(d,|G/H|)=1$. Hence $d'=1$, i.e., $gH=H$, i.e., $g\in H$. $\square$
Corollary 1. Let $H$ be a normal subgroup of a finite group $G$ such that $\gcd(|H|,|G/H|)=1$. If $K\le G$ such that $|K|$ divides $|H|$, then $K\le H$.
Proof. Apply the lemma to the elements of $K$. $\square$
Corollary 2. Let $H$ be a normal subgroup of a finite group $G$ such that $\gcd(|H|,|G/H|)=1$. Then $H$ is the unique subgroup of $G$ of order $|H|$.
Proof. Use Corollary 1. $\square$
Here is an example of the lemma in action.
Problem. Show that $S_4$ has no normal subgroup of order $8$ or $3$.
Solution. If $H$ is a normal subgroup of $S_4$ of order $8$, then $\gcd(|H|,|S_4/H|)=1$. Hence every element of order $2$ or $4$ in $S_4$ must lie in $H$. In particular, $(1\ 2),(1\ 2\ 3\ 4)\in H$. By a result in the previous post, $H=S_4$, a contradiction.
Likewise, if $H$ is a normal subgroup of $S_4$ of order $3$, then $H$ must contain every 3-cycle; in particular, $(1\ 2\ 3),(2\ 3\ 4)\in H$. Hence $(1\ 2\ 3)(2\ 3\ 4)=(1\ 2)(3\ 4)\in H$. But this has order 2, and $2\nmid 3$, a contradiction. $\square$
More generally, we can prove the following.
Corollary 3. $S_n$ for $n\ge 4$ has no non-trivial proper normal subgroup $H$ with $\gcd(|H|,|S_n/H|)=1$.
Proof. Suppose otherwise and let $d$ divide $|H|$. Then $H$ must contain all $d$-cycles. So if $|H|$ is even then taking $d=2$ gives $H=S_n$. If $|H|$ is odd, it contains the cycles $\sigma=(1\ \cdots\ d)$ and $\rho=(d\ \cdots\ 2\ n)$ for some $3\le d. Then $\sigma\rho=(1\ 2\ n)\in H$ has order 3. So $|H|$ contains all 3-cycles, i.e., $A_n\le H$. Since $A_n\le S_n$ is maximal, either $H=A_n$ or $H=S_n$, a contradiction. $\square$
Filed under Algebra
## Generating the symmetric group
It’s a fairly well-known fact that the symmetric group $S_n$ can be generated by the transposition $(1\ 2)$ and the $n$cycle $(1\ \cdots\ n)$. One way to prove it is as follows.
1. Show that the transpositions $(a\ b)$ for $a,b\in\{1,\dots,n\}$ generate $S_n$.
2. Show that any transposition $(a\ b)$ can be obtained from $(1\ 2)$ and $(1\ \cdots\ n)$.
We also need the following key lemma the proof of which is routine.
Lemma. $\rho (a\ b)\rho^{-1}=(\rho(a)\ \rho(b))$ for any $\rho\in S_n$ and $a,b\in\{1,\dots,n\}$.
(1) is easily proven by the observation that any permutation of $1,\dots,n$ can be obtained by swapping two elements at a time. (2) is a bit more interesting.
We first use the lemma to observe that any transposition of the form $(a\ a+1)$ can be obtained from $(1\ 2)$ upon repeated conjugation by $(1\ \cdots\ n)$. Now, since $(a\ b)=(b\ a)$, WLOG let $a. Using the lemma, conjugating $(a\ a+1)$ by $(a+1\ a+2)$ gives $(a\ a+2)$, conjugating $(a\ a+2)$ by $(a+2\ a+3)$ gives $(a\ a+3)$, etc. In this way we can eventually get $(a\ b)$. So we are done by (1).
This argument shows that $S_n$ can in fact be generated by $(a\ a+1)$ and $(1\ \cdots\ n)$ for any $a$.
Now let’s consider an arbitrary transposition $\tau$ and an $n$-cycle $\sigma$ in $S_n$. By relabeling $1,\dots,n$, we can assume that $\sigma=(1\ \cdots\ n)$ and $\tau=(a\ b)$ for $a. Note that $\sigma^{-a+1}\tau\sigma^{a-1}=(1\ c)$ where $c=b-a+1$, so WLOG $\tau=(1\ c)$. Then $\sigma^k\tau\sigma^{-k}=(1+k\ c+k)$ for each $k$. In particular, taking $k=c-1$ gives $\sigma^{c-1}\tau\sigma^{-c+1}=(c\ 2c-1)=:\rho$. Then $\rho\tau\rho^{-1}=(1\ 2c-1)$. Repeating this procedure produces $(1\ c+k(c-1))$ for $k=0,1,2,\dots$. Now $\{c+k(c-1):k=0,1,\dots,n-1\}$ is a complete set of residues mod $n$ if and only if $\gcd(c-1,n)=1$, i.e., $\gcd(b-a,n)=1$. So we’ve shown that
Theorem. Let $\tau=(a\ b)$ be a transposition and $\sigma=(c_1\ \cdots\ c_n)$ be an $n$-cycle in $S_n$. Then $\tau$ and $\sigma$ generate $S_n$ if and only if $\gcd(c_b-c_a,n)=1$.
In particular, $(a\ b)$ and $(1\ \cdots\ n)$ generate $S_n$ if and only if $\gcd(b-a,n)=1$.
Corollary. $S_p$ is generated by any transposition and any $p$-cycle for $p$ prime.
1 Comment
Filed under Algebra | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 256, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911993741989136, "perplexity": 780.7777610358659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596051.82/warc/CC-MAIN-20171217132751-20171217154751-00562.warc.gz"} |
https://stats.stackexchange.com/questions/386306/prediction-intervals-for-thief | # Prediction intervals for THieF
I would like to add prediction intervals to a temporal aggregation using the thief package. Can someone point out either how to automatically plot prediction intervals, or a method of calculating them? I've included an example using a sample data set that is very similar to my real data.
library(tidyverse)
library(ggplot2)
library(fpp2)
library(thief)
AEdemand.train <- AEdemand[,1] %>% subset(end = floor(length(.)*0.8)-1)
AEdemand.test <- AEdemand[,1] %>% subset(start = floor(length(.)*0.8))
Horizon <- AEdemand.test %>% length()
ftbats <- function(y,h,...){forecast(tbats(y),h,...)}
AEdemand.thief <- AEdemand.train %>% thief(forecastfunction=ftbats)
AEdemand.plot <- AEdemand.thief %>%
forecast(h=Horizon) %>%
autoplot()+
autolayer(AEdemand.test)+
ylab("Type 1 Departments - Major A&E")
plot(AEdemand.plot) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22166237235069275, "perplexity": 10465.818787678081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205600.75/warc/CC-MAIN-20190326180238-20190326202238-00314.warc.gz"} |
http://mathoverflow.net/users/1090/martin-o?tab=stats | # Martin O
Unregistered less info
reputation
24
bio website location age member for 4 years, 5 months seen Jul 9 '11 at 12:00 profile views 178
11 Thematic Programs for 2010-2011? 8 Are there oriented $4k+2$ manifolds such that $im(H_{2k+1}(M; Z/2)\to H_{2k+1}(M, \partial M; Z/2))$ has odd dimension? 4 Morphisms between supermanifolds R^{0|1}→R^{0|1} 2 How to caculate the internal hom of supermanifolds? 2 Equivariant Surgery problem
# 341 Reputation
+35 How to caculate the internal hom of supermanifolds? +55 Morphisms between supermanifolds R^{0|1}→R^{0|1} +95 Are there oriented $4k+2$ manifolds such that $im(H_{2k+1}(M; Z/2)\to H_{2k+1}(M, \partial M; Z/2))$ has odd dimension? +10 Thematic Programs for 2010-2011?
# 0 Questions
This user has not asked any questions
# 6 Tags
11 gt.geometric-topology × 3 10 at.algebraic-topology × 2 11 career 6 supermanifolds × 2 11 conferences 1 gn.general-topology
# 1 Account
MathOverflow 341 rep 24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3369109034538269, "perplexity": 9877.487400440918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/172292/proof-of-a-b-c-a-b-cup-c | # Proof of $(A - B) - C = A - (B \cup C)$
I have several of these types of problems, and it would be great if I can get some help on one so I have a guide on how I can solve these. I tried asking another problem, but it turned out to be a special case problem, so hopefully this one works out normally.
The question is:
Prove $(A - B) - C = A - (B \cup C)$
I know I must prove both sides are not equivalent to each other to complete this proof. Here's my shot: We start with left side.
• if $x \in C$, then $x \notin A$ and $x \in B$.
• So $x \in (B \cup C)$
• So $A - (B \cup C)$
Is this the right idea? Should I then reverse the proof to prove it the other way around, or is that unnecessary? Should it be more formal?
Thanks!
-
To prove that $X=Y$ you need to show that if $x\in X$ then $x\in Y$ and that if $y\in Y$ then $y\in X$. Even though the two things you need to show can sometimes be proved simultaneously, I strongly advise that you not attempt that. It is too easy to lose control of the logic. – André Nicolas Jul 18 '12 at 12:44
The normal way to prove equalities like this one is in two steps. First, show that if some $x \in$ LHS, then it must also be $\in$ RHS. Then, you must show that if $x\in$ RHS, then it must also be $\in$ RHS. These two together are enough to prove that LHS = RHS.
So, in this specific example, step 1 looks like this.
$x\in (A\backslash B)\backslash C$
$\Rightarrow ( x\in A$ and $x\notin B )$ and $x\notin C$
$\Rightarrow x\in A$ and not $(x\in B$ or $x\in C)$
$\Rightarrow x\in A$ and $x\notin B \cup C$
$\Rightarrow x\in A \backslash ( B \cup C )$
So that's the first half. I have left the second half as an exercise for you to solve yourself.
Note that there are other ways of solving such questions, but this is probably the most straightforward. Good luck.
-
Thank you very much! – pauliwago Jul 18 '12 at 7:29
Would you kindly explain the 3rd step? You went from x $\notin$ B) and x $\notin$ C to not (x $\in$ B or x $\in$ C). Why the or? – pauliwago Jul 18 '12 at 13:53
It's called De Morgan's Laws, I believe. – David Wheeler Jul 18 '12 at 20:03
I think that the sequence you use for the first step is composed all by equivalent assertions. Hence it is valid in reverse for the second step. – Emanuele Paolini Sep 12 '15 at 18:29
In the case of very simple expressions like this, an alternative approach is to use "truth tables". Essentially, one considers, for an element $x$, all possible combinations of whether it is or it not in any given set, and computes whether it is in the set determined by the complex expression. Here, with three sets, we need to consider $8$ possibilities.
Use $0$ to denote that the element is not in the set, and $1$ to denote that it is. Remembering that $x\in X-Y$ if and only if $x\in X$ and $x\notin Y$, we have: $$\begin{array}{|c|c|c||c|c|} \hline A&B&C&A-B&(A-B)-C\\ \hline 0 & 0 & 0& 0 & 0\\ \hline 0 & 0 & 1 & 0 & 0\\ \hline 0 & 1 & 0 & 0 & 0\\ \hline 0 & 1 & 1 & 0 & 0\\ \hline 1 & 0 &0 & 1 & 1\\ \hline 1 & 0 & 1 & 1 & 0\\ \hline 1 & 1 & 0 & 0 & 0\\ \hline 1 & 1 & 1 &0 & 0\\ \hline \end{array}$$
For $A-(B\cup C)$, we have: $$\begin{array}{|c|c|c||c|c|} \hline A&B&C&B\cup C&A-(B\cup C)\\ \hline 0 & 0 & 0& 0 & 0\\ \hline 0 & 0 & 1 & 1 & 0\\ \hline 0 & 1 & 0 & 1 & 0\\ \hline 0 & 1 & 1 & 1 & 0\\ \hline 1 & 0 &0 & 0 & 1\\ \hline 1 & 0 & 1 & 1 & 0\\ \hline 1 & 1 & 0 & 1 & 0\\ \hline 1 & 1 & 1 &1 & 0\\ \hline \end{array}$$
Now look at the final column: since the columns for $A-(B\cup C)$ and for $(A-B)-C$ are identical, it follows that the sets are the same.
Now, for large number of "sets", this technique becomes unwieldly: the table will have $2^n$ rows when the equation involves $n$ different sets. But for small numbers of sets (two or three, usually), it provides a simple, mechanical way of testing for equality. For example, the truth-table method is much simpler to prove the associativity of the symmetric difference than element-wise methods.
-
Depending on how much you are allowed to assume (De Morgan's laws, etc), you can also just show this algebraically via the identity $X\setminus Y = X\cap \bar{Y}$.
It's a couple of applications of that identity, one associativity step and one De Morgan step, to give the outline. (It's really simple, so I don't want to completely give it away)
Also, before the editor gets to this, don't change my $\setminus$ to $-$, I don't like the ambiguity.
-
Working with the Characteristic function of a set makes these problems easy:
$$1_{(A - B) - C}= 1_{A-B} - 1_{A-B}1_C=(1_A- 1_A1_B)-1_A1_C+ 1A1_B1_C \,$$
$$1_{A-(B \cup C)}= 1_{A}- 1_{A}1_{B \cup C}=1_A- 1_A ( 1_B+ 1_C -1_B1_C)\,$$
It is easy now to see that the RHS are equal.
-
In a variation of the currently approved answer, I would prove this through the following simple calculation: for every $x$,
$$\begin{array}{ll} & x \in (A - (B \cup C)) \\ \equiv & \;\;\;\text{"definition of -"} \\ & x \in A \land \lnot (x \in (B \cup C)) \\ \equiv & \;\;\;\text{"definition of \cup"} \\ & x \in A \land \lnot (x \in B \lor x \in C) \\ \equiv & \;\;\;\text{"logic"} \\ & x \in A \land \lnot (x \in B) \land \lnot (x \in C) \\ \equiv & \;\;\;\text{"definition of -, to work towards our goal"} \\ & x \in (A - B) \land \lnot (x \in C) \\ \equiv & \;\;\;\text{"definition of -"} \\ & x \in ((A - B) - C) \\ \end{array}$$
By the definition of set equality, this proves the given theorem.
-
(A-B)-C=A-(BC)
The right hand side means
A ∩ (BC)^c
(c means compliment)
Which, in turn, means, by deMorgan's law
AB^c ∩ C^c
Hence, by the associative property,
(AB^c) ∩ C^c
And, reinstating the minus sign,
(A-B)-C
Thus (A-B)-C=A-(BC)
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9375630617141724, "perplexity": 300.7347743803804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860112727.96/warc/CC-MAIN-20160428161512-00061-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/54909/geodesic-versus-geodesic-loop | # Geodesic versus geodesic loop
Let (M,g) be a closed manifold and let $\alpha$ be an element of $G=\pi_1(M,p)$ we can define the norm of $\alpha$ with respect to p as the infinimum riemannian length of a representative of $\alpha$ . people say this norm is realised by a geodesic loop at p my question is why it is realised by a geodesic loop and not by a geodesic it must be something with the regularity here is a simple proof for why i am saying geodesic and not geodesic loop. fix $\tilde{p}$ in the universal cover of $(M,g)$ let c be a representitve of $\alpha$ , $\tilde{c}$ be the lift of c to $\tilde{M}$ with base point $\tilde{p}$ and let $c\prime$ be the unique minimizing geodsic joinging $\tilde{p}$ to $\alpha(\tilde{p})=\tilde{c}(\tilde{p})$ . obviously $\tilde{c}$ and $c\prime$ are homotopic and hence $\pi oc\prime$ and $\pi o\tilde{c}=c$ are homotopic where $\pi : \tilde{M}\rightarrow M$. and I know the the image of a geodesic by a local isometry is a geodesic hence $\pi oc\prime$ is a closed geodesic at p and sure it minimise the length otherwise we will have a contradiction cause it lift $c\prime$ is minimizing .
-
Hi, welcome to this site. I'm sorry to ask, but: What exactly is your question? It would be great if you could insert a few more periods and make this one block of text into two or three paragraphs. – t.b. Aug 1 '11 at 10:05
my question is the length of alpha is realised by a closed geodesic or closed geodesic loop ? – Alfie Aug 1 '11 at 10:17
On the other hand, if instead of looking at homotopy classes of based maps from $S^1$, one just considers the homotopy classes of maps from $S^1$ into a compact manifold $M$, then such classes are represented by closed geodesics. – Jason DeVito Aug 1 '11 at 16:02
Are you sure you have the definition of a "closed geodesic" and "geodesic loop" correct?
Usually we have
Definition A closed geodesic $\gamma$ on $(M,g)$ is a smooth image of $\mathbb{S}^1$ that is geodesic.
Definition A geodesic loop $\gamma$ on $(M,g)$ is a smooth image of $[0,1]$, that is geodesic, and such that $\gamma(0) = \gamma(1)$.
See, for example, the Springer EOM.
Then by the argument given in your question statement, you have that a minimizing object is automatically a geodesic loop. But it doesn't have to be a closed geodesic as there may be an angle.
Example Imagine a T-shaped pipe formed by joining two cyclinders at right angles. (To make it closed you can glue the ends of the pipe to a big sphere.) Smooth out the junctions. Let your base point $p$ be the point on the horizontal part of the T that sits directly opposite the vertical leg of the T.
-
$\pi \circ c′$ is not smooth at its endpoints. That is, it is not smooth at $p$. Thus $\pi \circ c′$ is not a geodesic. Instead it is a geodesic loop.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9012612104415894, "perplexity": 228.85054481060996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062782.10/warc/CC-MAIN-20150827025422-00228-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://tex.stackexchange.com/questions/3748/in-cell-newlines-in-tabular-environments | # in-cell newlines (in tabular environments)?
I'm working on a table containing multi-line cells and I wonder if the following concept is supported by LaTeX.
The concept itself is simple: allow a cell to span multiple lines by increasing the line height (similar to how it works in Office suites when you press Ctrl-Enter or Alt-Enter to wrap the current cell). Using \\ doesn't work as it causes LaTeX to go to the next table row.
Thus what would be required is an equivalent of \\ which is scoped to a table cell. Is this implemented already (I suppose not in LaTeX, but perhaps there are packages)?
I realize that the same can be achieved with \multirow, but I don't find that approach very user-friendly.
-
Sure, just use \newline instead of \\:
\documentclass{article}
\begin{document}
\begin{tabular}{|p{5cm}|p{5cm}|}
\hline
foo & foo \\
\hline
line\newline break & another \newline break \\
\hline
\end{tabular}
\end{document}
-
Only works if you use p and specify the size of the (of all?!) column(s) – Nils Mar 9 '11 at 11:18
Or \linebreak which has the advantage to give the correct alignment when \centering or \raggedleft is used in the column. – Ulrike Fischer Apr 16 '11 at 14:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9921613931655884, "perplexity": 1955.825430605194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064869.18/warc/CC-MAIN-20150827025424-00298-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://www.mersenneforum.org/showthread.php?s=2cdcb882cf479af0850c929aa982f360&t=4269 | mersenneforum.org > Data P-1: Block of 33: First=20082767
Register FAQ Search Today's Posts Mark Forums Read
2005-06-29, 07:02 #1 dave_0273 Oct 2003 Australia, Brisbane 2·5·47 Posts P-1: Block of 33: First=20082767 These exponents have been LL-tested at least once, but without a matching double-check. They have not been P-1 tested yet. Put this at the start of your worktodo.ini file: Pfactor=20082767,66,1 Pfactor=20082809,66,1 Pfactor=20082991,66,1 Pfactor=20083003,66,1 Pfactor=20083277,66,1 Pfactor=20083717,66,1 Pfactor=20084447,66,1 Pfactor=20085697,66,1 Pfactor=20085763,66,1 Pfactor=20086159,66,1 Pfactor=20086471,66,1 Pfactor=20086519,66,1 Pfactor=20086831,66,1 Pfactor=20087143,66,1 Pfactor=20087231,66,1 Pfactor=20087357,66,1 Pfactor=20088617,66,1 Pfactor=20089507,66,1 Pfactor=20089697,66,1 Pfactor=20090527,66,1 Pfactor=20090561,66,1 Pfactor=20090599,66,1 Pfactor=20090849,66,1 Pfactor=20092057,66,1 Pfactor=20093081,66,1 Pfactor=20093347,66,1 Pfactor=20094101,66,1 Pfactor=20095021,66,1 Pfactor=20095759,66,1 Pfactor=20096899,66,1 Pfactor=20098319,66,1 Pfactor=20098693,66,1 Pfactor=20099477,66,1 Post here to claim this set. When you're finished, manually check in your results at http://www.mersenne.org/ips/manualtests.html and post your results here as well.
2005-06-29, 08:30 #2 ValerieVonck Mar 2004 Belgium 11010010012 Posts Ok I will take this set. Thanx
2005-07-20, 16:12 #3 ValerieVonck Mar 2004 Belgium 15118 Posts Block complete Code: [Mon Jul 04 17:35:48 2005] Trying 1000 iterations for exponent 20082767 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Trying 1000 iterations for exponent 20082767 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24832, using 1280K FFT for exponent 20082767. [Mon Jul 04 21:56:28 2005] M20082767 completed P-1, B1=110000, B2=1732500, Wc1: 07A815FF Trying 1000 iterations for exponent 20082809 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24701, using 1280K FFT for exponent 20082809. [Tue Jul 05 21:40:09 2005] M20082809 completed P-1, B1=110000, B2=1732500, Wc1: 07B31587 Trying 1000 iterations for exponent 20082991 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24619, using 1280K FFT for exponent 20082991. [Wed Jul 06 10:04:07 2005] M20082991 completed P-1, B1=110000, B2=1732500, Wc1: 07B61584 Trying 1000 iterations for exponent 20083003 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24757, using 1280K FFT for exponent 20083003. [Wed Jul 06 14:12:48 2005] M20083003 completed P-1, B1=110000, B2=1732500, Wc1: 07AB15FF Trying 1000 iterations for exponent 20083277 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.2471, using 1280K FFT for exponent 20083277. [Wed Jul 06 18:22:21 2005] M20083277 completed P-1, B1=110000, B2=1732500, Wc1: 07AC1584 Trying 1000 iterations for exponent 20083717 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24698, using 1280K FFT for exponent 20083717. [Wed Jul 06 22:34:40 2005] M20083717 completed P-1, B1=110000, B2=1732500, Wc1: 07891586 Trying 1000 iterations for exponent 20084447 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.2473, using 1280K FFT for exponent 20084447. [Thu Jul 07 21:51:26 2005] M20084447 completed P-1, B1=110000, B2=1732500, Wc1: 07AF158C Trying 1000 iterations for exponent 20085697 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24757, using 1280K FFT for exponent 20085697. [Fri Jul 08 22:02:06 2005] M20085697 completed P-1, B1=110000, B2=1732500, Wc1: 07B4159A Trying 1000 iterations for exponent 20085763 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24883, using 1280K FFT for exponent 20085763. [Sat Jul 09 13:17:39 2005] M20085763 completed P-1, B1=110000, B2=1732500, Wc1: 07A01598 Trying 1000 iterations for exponent 20086159 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24765, using 1280K FFT for exponent 20086159. [Sat Jul 09 17:25:53 2005] M20086159 completed P-1, B1=110000, B2=1732500, Wc1: 07BF15A1 Trying 1000 iterations for exponent 20086471 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24786, using 1280K FFT for exponent 20086471. [Sat Jul 09 21:06:59 2005] M20086471 completed P-1, B1=110000, B2=1732500, Wc1: 07DF1599 Trying 1000 iterations for exponent 20086519 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24815, using 1280K FFT for exponent 20086519. [Sun Jul 10 01:13:36 2005] M20086519 completed P-1, B1=110000, B2=1732500, Wc1: 07D915A8 Trying 1000 iterations for exponent 20086831 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24868, using 1280K FFT for exponent 20086831. [Sun Jul 10 14:36:14 2005] M20086831 completed P-1, B1=110000, B2=1732500, Wc1: 07AA15A0 Trying 1000 iterations for exponent 20087143 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24786, using 1280K FFT for exponent 20087143. [Sun Jul 10 18:47:33 2005] P-1 found a factor in stage #2, B1=110000, B2=1732500. M20087143 has a factor: 70427895743876585930567 Trying 1000 iterations for exponent 20087231 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24804, using 1280K FFT for exponent 20087231. [Mon Jul 11 18:27:14 2005] M20087231 completed P-1, B1=110000, B2=1732500, Wc1: 07CD15B5 Trying 1000 iterations for exponent 20087357 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.2479, using 1280K FFT for exponent 20087357. [Tue Jul 12 18:14:41 2005] M20087357 completed P-1, B1=110000, B2=1732500, Wc1: 07DE15AC Trying 1000 iterations for exponent 20088617 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24939, using 1280K FFT for exponent 20088617. [Wed Jul 13 17:59:17 2005] M20088617 completed P-1, B1=110000, B2=1732500, Wc1: 07DE15B3 Trying 1000 iterations for exponent 20089507 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.24894, using 1280K FFT for exponent 20089507. [Wed Jul 13 22:11:14 2005] M20089507 completed P-1, B1=110000, B2=1732500, Wc1: 07D91442 Trying 1000 iterations for exponent 20089697 using 1024K FFT. If average roundoff error is above 0.242, then a larger FFT will be used. Final average roundoff error is 0.25006, using 1280K FFT for exponent 20089697. [Thu Jul 14 09:44:31 2005] M20089697 completed P-1, B1=110000, B2=1732500, Wc1: 07D4144E [Thu Jul 14 18:44:23 2005] M20090527 completed P-1, B1=110000, B2=1732500, Wc1: 07A11454 [Thu Jul 14 22:57:17 2005] M20090561 completed P-1, B1=110000, B2=1732500, Wc1: 07FC1454 [Fri Jul 15 22:39:18 2005] M20090599 completed P-1, B1=110000, B2=1732500, Wc1: 07CB1450 [Sat Jul 16 11:15:07 2005] M20090849 completed P-1, B1=110000, B2=1732500, Wc1: 07FB1456 [Sat Jul 16 15:26:45 2005] M20092057 completed P-1, B1=110000, B2=1842500, Wc1: 07FCC3EB [Sat Jul 16 19:47:03 2005] M20093081 completed P-1, B1=110000, B2=1842500, Wc1: 07C3C3D9 [Sun Jul 17 00:01:33 2005] M20093347 completed P-1, B1=110000, B2=1842500, Wc1: 07CDC3DE [Sun Jul 17 14:49:30 2005] M20094101 completed P-1, B1=110000, B2=1842500, Wc1: 07CAC3C3 [Sun Jul 17 19:08:57 2005] M20095021 completed P-1, B1=110000, B2=1842500, Wc1: 07E5C3C5 [Mon Jul 18 18:50:20 2005] M20095759 completed P-1, B1=110000, B2=1842500, Wc1: 07F2C3CB [Tue Jul 19 19:13:30 2005] M20096899 completed P-1, B1=110000, B2=1842500, Wc1: 081EC3B8 [Wed Jul 20 09:32:41 2005] M20098319 completed P-1, B1=110000, B2=1842500, Wc1: 080BC3A6 [Wed Jul 20 13:40:20 2005] M20098693 completed P-1, B1=110000, B2=1842500, Wc1: 080BC3A1 [Wed Jul 20 17:49:24 2005] M20099477 completed P-1, B1=110000, B2=1842500, Wc1: 0810C3AB 1 factor found
2005-07-20, 18:07 #4 delta_t Nov 2002 Anchorage, AK 3·7·17 Posts Are the error messages something to be concerned about? They are in the earlier ones, but not the latter ones. I don't think I've seen those before in other results submissions. Last fiddled with by delta_t on 2005-07-20 at 18:09
2005-07-20, 19:32 #5 cheesehead "Richard B. Woods" Aug 2002 Wisconsin USA 22·3·641 Posts Those messages are normal ever since version 22 iintroduced "soft FFT crossovers". Code: New features in Version 22.8 of prime95.exe ------------------------------------------- 1) Soft FFT crossovers have been implemented. If you test an exponent that is within 0.2% of the old hard FFT crossover point, then 1000 test iterations are run to determine if the smaller or larger FFT is appropriate for the exponent. So, for certain small ranges of exponents near the FFT crossover points (only 0.4% -- 0.2% on each side), Prime95 runs a test to determine whether the smaller (faster) FFT size yields an acceptable average roundoff error. If not, Prime 95 will use the larger (slower, but more accurate) FFT size. What you saw were the messages from that procedure. The ranges in which Prime95 does this include only 1/250 of all exponents, which is why you usually don't encounter this.
Similar Threads Thread Thread Starter Forum Replies Last Post ravlyuchenko Msieve 5 2011-05-09 13:16 dave_0273 Completed Missions 2 2005-06-30 20:49 dave_0273 Completed Missions 2 2005-05-20 16:07 dave_0273 Completed Missions 2 2005-05-16 17:55 dave_0273 Completed Missions 3 2005-05-14 12:07
All times are UTC. The time now is 12:23.
Thu Oct 28 12:23:51 UTC 2021 up 97 days, 6:52, 0 users, load averages: 2.59, 2.37, 2.24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8730250000953674, "perplexity": 27787.424910682457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00446.warc.gz"} |
https://www.physicsforums.com/threads/automorphisms-of-z_n.101186/ | # Homework Help: Automorphisms of Z_n
1. Nov 23, 2005
### johnnyboy2005
hey, was just hoping i could get a little help getting started on this one....
Let r be an element in U(n). Prove that the mapping (alpha):Zn-->Zn defined by (alpha)(s) = sr mod n for all s in Zn is an automorphism of Zn.
not expecting the answer, maybe to just push me in the right direction...thanks so much
2. Nov 23, 2005
### matt grime
This is simply treating Zn as an additive group, right? (You should right Z/n or Z/nZ or Z/(n))
Show it is a homomorphism and show it is either invertible (easy given the other use of the word invertible in the question) or just show it is a bijection.
3. Nov 23, 2005
### math-chick_41
Show that its 1-1 onto and operation preserving. since r is in U(n) then r^-1 exists and you will need that little tid bit to show 1-1 then since Zn are finite you get onto.
4. Dec 2, 2005
### johnnyboy2005
so i actually left this question for a bit. This is my soln' so far....
to show it is an automorphism the groups must be one to one and onto (easy to show) and to show that the function is map preserving i'm saying that for any a and b in Z(n) you will have
(alpha)(a+b) = (alpha)(a) + (alpha)(b) = (a)r mod n + (b)r mod n = (a + b)rmodn which is in the automorphism
5. Dec 2, 2005
### matt grime
groups are not one to one and onto, even if we assume that you mean are in bijective correspondence this does not mean any map between them is an isomorphism. if you are proving a map is an isomorphism it might behove you to mention the map in the alleged proof.
what do you mean by "in the automorphism"? Surely you aren't treating maps as sets.
It is a one line proof: (a+b)r=ar+br, and -ar=-(ar) are just general facts of mulitiplying numbers, so trivially it is a homomorphism, now why is it an isomorphism?
6. Dec 2, 2005
### johnnyboy2005
it is an isomorphism because it is one to one and onto....
7. Dec 2, 2005
### matt grime
and have you proved that? i don't think you can get away with saying it is easy to show without showing it; i would be sceptical that you had done so given the presentation of your argument, and the fact that if it were so easy why didn't you write it out? it'd be good if you didn't show one to one and onto but actually pointed out that the map is invertible. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9259262681007385, "perplexity": 1014.3061180580708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155702.33/warc/CC-MAIN-20180918205149-20180918225149-00235.warc.gz"} |
http://mathhelpforum.com/statistics/26733-solved-statistics-binomial-distribution.html | # Math Help - [SOLVED] Statistics - Binomial Distribution
1. ## [SOLVED] Statistics - Binomial Distribution
Question:
Given that D~B(12,0.7), calculate the smallest value of d such that P(D>d)<0.90
n = 12, p = 0.7, q = 1 - 0.7 = 0.3, d=?
2. I may be guilty of promoting myself here....but it's not for profit so I think it's OK.
If you look in the MHF software forum you will find a little program I wrote. I think it would be useful to you.
You can get the answer to your problem using the program but that's not really the point.
If you play around with it a little you might understand the question better.
As for answering the question properly...
Do you know how to calculate P(D=0) or P(D=1) for example?
3. Yes, I know how to calculate P(D=0)
P(D=0) = 12C0 * 0.7^0 * 0.3^(12-0)
= 1 * 1 * 0.000000531
= 0.000000531
I don't know how to get the value of d!
4. Originally Posted by looi76
Question:
Given that D~B(12,0.7), calculate the smallest value of d such that P(D>d)<0.90
n = 12, p = 0.7, q = 1 - 0.7 = 0.3, d=?
It wants you to find the samllest $d$ such that:
$P(D>d)<0.90$
where:
$P(D>d)= \sum_{r=d+1}^{12} b(r;12,0.7) <0.9,\ d$ an integer $\le 12$
where $b(r;12,0.7)$ is the probability of exactly $r$ successes in $12$ Bernoulli trials with a single trial probability of success of $0.7$
Now work backwards from $12$ adding probabilities untill the total is greater than $0.9$, and that last term is your $d$
RonL | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6662911176681519, "perplexity": 583.9558950041792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832662.33/warc/CC-MAIN-20140820021352-00472-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://www.stockarcher.com/2012/09/ | Tuesday, September 18, 2012
Modeling Price Fluctuations
The premise of this post is that the movements in price of a security (e.g. stocks, bonds) can be viewed as a random process. Whether or not this is a valid assumption is somewhat of a philosophical question. The price of a security entirely depends on the factors of supply and demand, which are in turn deterministically governed by a multitude of more subtle factors. But like the outcome of a flip of a coin, which is completely determined by the equations of physics and the parameters of the system, such processes are much to complex to analyze in full generality. As a result, we model it as a stochastic process whose variance comes from all of these latent factors.
An illustration of random walks
Problem Statement and Assumptions
We are given the initial price $$P_0$$ and we want to make inferences about the future stock price $$P_T$$. The random variables $$P_i$$ must also be non-negative. The time scale here is arbitrary and can be made as large or small as necessary.
Our key assumption here is that the changes in price are independent and identically distributed (iid). We characterize the price change as the ratio $C_i = \frac{P_i}{P_{i-1}}$ Note that we didn't use a straightforward difference ($$P_i-P_{i-1}$$). The reason is because the difference most certainly isn't iid (a price of $1 has support on $$[-1,\infty]$$ whereas a price of$2 has support on $$[-2,\infty]$$). You'll notice that our characterization corresponds to a percentage difference (plus one).
The Normal Distribution
The normal distribution (also known as the bell curve, the Gaussian, etc.) is ubiquitous in modeling random variables. And so it would be reasonable to conjecture that $$P_T$$ is normally distributed. $f_{\mu,\sigma^2}(x) = \frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
The normal distribution
However in a similar vein as to why we didn't use the difference in price as our characterization of change, the normal distribution doesn't have the correct support. If we had used the distribution as our model, we would have found that the model would assign a positive probability to the future price being less than 0.
Logarithms to the Rescue
Okay, let's actually do the math without resorting to guessing. The price $$P_{1}$$ can be expressed as $$C_1 \times P_0$$, and $$P_{2}$$ as $$C_2 \times P_1$$, and so on. Inductively continuing this process yields $P_T = C_T C_{T-1} \dots C_1 P_0$ Thus we have that $$P_T$$ is proportional to the product of $$T$$ iid random variables. The trick is to turn this product into a sum so then we can apply the central limit theorem. We do this by taking the logarithm of both sides \begin{align*} \log P_T &= \log(C_T C_{T-1} \dots C_1 P_0) \\ &= \log C_T + \log C_{T-1} + \dots + \log C_1 + \log P_0 \\ &\thicksim N(\mu,\sigma^2) \end{align*} Since the $$C_i$$s are iid, their logarithms must also be iid. Now we can apply the central limit theorem to see that $$\log P_T$$ converges to a normal distribution! The exponential of a normal distribution is known as the log-normal distribution so $$P_T$$ is log-normal. $g_{\mu,\sigma^2}(x) = \frac{1}{x\sqrt{2\pi \sigma^2}}e^{-\frac{(\log x-\mu)^2}{2\sigma^2}}$
The log-normal distribution
As a sanity check, we see that the support of the log-normal is on $$(0,\infty]$$ as expected.
But wait there's more!
In the beginning we noted that the choice of time-scale is arbitrary. By considering smaller time scales, we can view our $$C_i$$s as the product of finer grained ratios. Thus by the same argument as above, each of the $$C_i$$s must also be log-normally distributed.
Experimental Results
I took ~3200 closing stock prices of Microsoft Corporation (MSFT), courtesy of Yahoo! Finance from January 3, 2000 to today. I imported the data set into R and calculated the logarithms of the $$C_i$$s. I then plotted a normalized histogram of the results and overlaid the theoretical normal distribution on top of it. The plot is shown below:
Discussion
As you can see, the theoretical distribution doesn't fit our data exactly. The overall shape is correct, but our derived distribution puts too little mass in the center and too little on the edges.
We now must go back to our assumptions for further scrutiny. Our main assumption was that the changes are independent and identically distributed. In fact, it has been shown in many research papers (e.g. Schwert 1989) that the changes are not identically distributed, but rather vary over time. However, the central limit theorem is fairly robust in practice. Especially under a sufficiently large of samples, each "new" distribution will eventually sum to normality (and the sum of normal distributions is normal).
I suspect that the deviation from normality is primarily caused by dependence between samples. The heavy tails can be explained by the fact that a large drop/rise in price today may be correlated to another drop/rise in the near future. This is particularly true during times of extreme depression or economic growth. A similar argument can be made about the excess of mass in the center of the distribution. It is conceivable that times of low volatility will be followed by another time of low volatility.
Conclusion
While our model might not be perfect in practice, it is a good first step to developing a better model. I think what you should take from this is that it is important to experimentally verify your models rather than blindly taking your assumptions as ground truths. I'll conclude this post with a few closing remarks:
• Many people actually do use the normal distribution to model changes in prices despite the obvious objections stated above. One can justify this by noting that the distribution of $$C_i$$ in practice is usually close to 0. Thus the first order approximation $$e^x \approx 1+x$$ is fairly accurate.
• The histogram and fit shown above can be reproduced for almost any stock or index (e.g. S&P 500, DJIA, NASDAQ)
• R is a great piece of software but has god awful tutorials and documentation. I am not in a position to recommend it yet because of this.
Obligatory Disclaimer
The author is not qualified to give financial, tax, or legal advice and disclaims any and all liability for this information. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9639219045639038, "perplexity": 299.5773294265796}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891196.79/warc/CC-MAIN-20180122073932-20180122093932-00609.warc.gz"} |
http://www.math.gatech.edu/node/123 | ## Differential Equations
Department:
MATH
Course Number:
2552
Hours - Lecture:
3
Hours - Lab:
0
Hours - Recitation:
2
Hours - Total Credit:
4
Typical Scheduling:
Every Semester
Methods for obtaining numerical and analytic solutions of elementary differential equations. Applications are also discussed with an emphasis on modeling.
Prerequisites:
MATH 1502 OR MATH 1512 OR MATH 1555 OR MATH 1504 ((MATH 1552 OR MATH 15X2 OR MATH 1X52) AND (MATH 1522 OR MATH 1553 OR MATH 1554 OR MATH 1564 OR MATH 1X53))
Equivalent to MATH 2403.
Course Text:
Differential Equations: An Introduction to Modern Methods & Applications, by James R. Brannan and William E. Boyce (Third edition); John Wiley and Sons, Inc.
Topic Outline:
Topic Text Sections Lectures
Introduction, linear equations and First order differential equations 1.1-1.3, 2.1-2.7 7
Systems of first order equations 3.1-3.6, 6.1-6.7 13
Second order linear equations 4.1-4.7 7
The Laplace transform 5.1-5.9 9
Nonlinear Differential equations and stability 7.1-7.6 5
Numerical methods such as Euler's method 8.1 and 8.4 1 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8875367641448975, "perplexity": 6343.142883738479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512054.0/warc/CC-MAIN-20171211014442-20171211034442-00286.warc.gz"} |
https://colour.readthedocs.io/en/v0.3.13/generated/colour.dominant_wavelength.html | # colour.dominant_wavelength¶
colour.dominant_wavelength(xy, xy_n, cmfs=XYZ_ColourMatchingFunctions(name='CIE 1931 2 Degree Standard Observer', ...), reverse=False)[source]
Returns the dominant wavelength $$\lambda_d$$ for given colour stimulus $$xy$$ and the related $$xy_wl$$ first and $$xy_{cw}$$ second intersection coordinates with the spectral locus.
In the eventuality where the $$xy_wl$$ first intersection coordinates are on the line of purples, the complementary wavelength will be computed in lieu.
The complementary wavelength is indicated by a negative sign and the $$xy_{cw}$$ second intersection coordinates which are set by default to the same value than $$xy_wl$$ first intersection coordinates will be set to the complementary dominant wavelength intersection coordinates with the spectral locus.
Parameters: xy (array_like) – Colour stimulus xy chromaticity coordinates. xy_n (array_like) – Achromatic stimulus xy chromaticity coordinates. cmfs (XYZ_ColourMatchingFunctions, optional) – Standard observer colour matching functions. reverse (bool, optional) – Reverse the computation direction to retrieve the complementary wavelength. Dominant wavelength, first intersection point xy chromaticity coordinates, second intersection point xy chromaticity coordinates. tuple
References
Examples
Dominant wavelength computation:
>>> from pprint import pprint
>>> xy = np.array([0.54369557, 0.32107944])
>>> xy_n = np.array([0.31270000, 0.32900000])
>>> cmfs = CMFS['CIE 1931 2 Degree Standard Observer']
>>> pprint(dominant_wavelength(xy, xy_n, cmfs)) # doctest: +ELLIPSIS
(array(616...),
array([ 0.6835474..., 0.3162840...]),
array([ 0.6835474..., 0.3162840...]))
Complementary dominant wavelength is returned if the first intersection is located on the line of purples:
>>> xy = np.array([0.37605506, 0.24452225])
>>> pprint(dominant_wavelength(xy, xy_n, cmfs)) # doctest: +ELLIPSIS
(array(-509.0),
array([ 0.4572314..., 0.1362814...]),
array([ 0.0104096..., 0.7320745...])) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5129851698875427, "perplexity": 14707.657886106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057227.73/warc/CC-MAIN-20210921191451-20210921221451-00488.warc.gz"} |
https://www.quantumforest.com/2014/07/comment-on-sustainability-and-innovation-in-staple-crop-production-in-the-us-midwest/ | Comment on Sustainability and innovation in staple crop production in the US Midwest
After writing a blog post about the paper “Sustainability and innovation in staple crop production in the US Midwest” I decided to submit a formal comment to the International Journal of Agricultural Sustainability in July 2013, which was published today. As far as I know, Heinemann et al. provided a rebuttal to my comments, which I have not seen but that should be published soon. This post is an example on how we can use open data (in this case from the USDA and FAO) and free software (R) to participate in scientific discussion (see supplementary material below).
The text below the *** represents my author’s version provided as part of my Green Access rights. The article published in the International Journal of Agricultural Sustainability [copyright Taylor & Francis]; is freely available online at http://dx.doi.org/10.1080/14735903.2014.939842).
While I had many issues with the original article, I decided to focus on three problems—to make the submission brief and fit under the 1,000 words limit enforced by the journal editor. The first point I make is a summary of my previous post on the article, and then move on to two big problems: assuming that only the choice of biotechnology affects yield (making the comparison between USA and Western Europe inadequate) and comparing use of agrochemicals at the wrong scale (national- versus crop-level).
PS 2014-08-05 16:30: Heinemann et al.’s reply to my comment is now available.
• They state that “Apiolaza did not report variability in the annual yield data” and that they had to ‘reconstruct’ my figure. Firstly, the published figure 2 does include error bars and, secondly, the data and R code are available as supplementary information (in contrast to their original paper). Let’s blame the journal for not passing this information to them.
• They also include 2 more years of data (2011 & 2012), although the whole point of my comment is that the conclusions they derived with the original data set were not justified. Heinemann et al. are right in that yields in 2011 & 2012 were higher in Western Europe than in the USA (the latter suffering a huge drought); however, in 2013 it was the opposite: 99,695 Hg/ha for USA vs 83,724 Hg/ha for Western Europe.
• They question that I commented only on one crop, although I did cover another crop (wheat) quickly with respect to the use of pesticides—but not with the detail I wanted, as there was a 1,000 words limit. I would have also mentioned the higher variability for Western European wheat production, as it is weird that they pointed out that high variability is a problems for USA maize production but said nothing for wheat in Europe.
• Furthermore, they also claimed “There is no statistically significant difference in the means over the entire 50-year period” however, a naive paired t-test t.test(FAOarticle$Yield[1:50], FAOarticle$Yield[51:100], paired = TRUE) (see full code below) says otherwise.
***
Abstract
This comment highlight issues when comparing genetically modified (GM) crops to non-GM ones across countries. Ignoring structural differences between agricultural sectors and assuming common yield trajectories before the time of introduction of GM crops results on misestimating the effect of GM varieties. Further data collection and analyses should guide policy-makers to encourage diverse approaches to agriculture, rather than excluding specific technologies (like GM crops) from the onset.
Keywords: genetic modification; biotechnology; productivity; economics
In a recent article Heinemann et al. (2013) focused “on the US staple crop agrobiodiversity, particularly maize” using the contrast between the yield of Western Europe and United States as a proxy for the comparison between genetically modified (GM) maize versus non-GM maize. They found no yield benefit from using GM maize when comparing the United States to Western Europe.
In addition, Heinemann et al. contrasted wheat yields across United States and Western Europe to highlight the superiority of the European biotechnological package from a sustainability viewpoint.
I am compelled to comment on two aspects that led the authors to draw incorrect conclusions on these issues. My statistical code and data are available as supplementary material.
1. Misestimating the effect of GM maize varieties
Heinemann et al. used FAO data[], from 1961 to 2010 inclusive, to fit linear models with yield as the response variable, country and year as predictors. Based on this analysis they concluded, “W. Europe has benefitted from the same, or marginally greater, yield increases without GM”. However, this assumes a common yield trajectory for United States and Western Europe before significant commercial use of GM maize, conflating GM and non-GM yields. GM maize adoption in United States has continually increased from 25% of area of maize planted in 2000 to the current 90% (Figure 1, United States Department of Agriculture 2013[]).
If we fit a linear model from 1961 to 1999 (last year with less than 25% area of GM maize) we obtain the following regression equations $$y = 1094.8 x + 39895.6$$ (United States, R2 = 0.80) and $$y = 1454.5 x + 29802.2$$ (W. Europe, R2 = 0.90). This means that Western Europe started with a considerably lower yield than the USA (29,802.2 vs 39,895.6 hg/ha) in 1961 but increased yields faster than USA (1,454.5 vs 1,094.8 hg/ha per year) before substantial use of GM maize. By 1999 yield in Western Europe was superior to that in United States.
This is even more evident in Figure 2, which shows average yield per decade, removing year-to-year extraneous variation (e.g. due to weather). Western European yields surpassed United States’s during the 1990s (Figure 2). This trend reverses in the 2000s, while United States simultaneously increased the percentage of planted area with GM maize, directly contradicting Heinemann et al.’s claim.
2. Ignoring structural differences between agricultural sectors
When discussing non-GM crops using wheat the authors state, “the combination of biotechnologies used by W. Europe is demonstrating greater productivity than the combination used by the United States”. This sentence summarizes one of the central problems of their article: assuming that, if it were not for the choice of biotechnology bundle, the agricultural sectors would have the same intrinsic yield, making them comparable. However, many inputs beside biotechnology affect yield. For example, Neumann et al. (2010) studied the spatial distribution of yield and found that in the Unites States “access can explain most of the variability in wheat efficiency. In the more remote regions land prices are lower and inputs are therefore often substituted by land leading to lower efficiencies”. Lower yields in United States make sense from an economic point of view, as land replaces more expensive inputs like agrochemicals.
Heinemann et al. support their case by comparing pesticide use between United States and France across all crops. However, what is relevant to the discussion is pesticide use for the crops being compared. European cereals, and wheat in particular, are the most widely fungicide-treated group of crops worldwide (Kucket al. 2012). For example, 27% of the wheat planted area in France was already treated with fungicides by 1979 (Jenkins and Lescar 1980). More than 30 years later in the United States this figure has reached only 19% for winter wheat (which accounts for 70% of planted area, NASS 2013). Fungicide applications result on higher yield responses (Oerke 2006).
Final remarks
Heinemann et al. ignored available data on GM adoption when analysing maize yields. They also mistakenly treated biotechnological bundles as the only/main explanation for non-GMO yield differences between United States and Western Europe. These issues mean that the thrust of their general conclusion is unsupported by the available evidence. Nevertheless, their article also raised issues that deserve more consideration; e.g. the roles of agricultural subsidies and market concentration on food security.
Agricultural sustainability requires carefully matching options in the biotechnology portfolio to site-specific economic, environmental and cultural constraints. Further data collection and analyses should lead policy makers to encourage diverse approaches to agriculture, rather than excluding specific technologies (like GMOs and pesticides) from the onset.
References
Heinemann, J. A., Massaro, M., Coray, D. S., Agapito-Tenfen, S. Z. and Wen, J. D. 2013. Sustainability and innovation in staple crop production in the US Midwest. International Journal of Agricultural Sustainability (available here).
Jenkins, J. E. E. and Lescar, L. 1980. Use of foliar fungicides on cereals in Western Europe. Plant Disease, 64(11): 987-994 (behind paywall).
Kuck, K. H., Leadbeater, A. and Gisi, U. 2012. FRAC Mode of Action Classification and Resistance Risk of Fungicides. In: Krämer, W., Schirmer, U., Jeschke, P. and Witschel, M., eds., Modern Crop Protection Compounds. Wiley. 539-567.
NASS, 2013. Highlights: 2012 Agricultural Chemical Use Survey. Wheat. United States Department of Agriculture (available here).
Neumann, K., Verburg, P. H., Stehfest, E., and Müller, C. 2010. The yield gap of global grain production: a spatial analysis. Agricultural Systems, 103(5), 316–326 (behind paywall).
Oerke E. 2006. Crop losses to pests. Journal of Agriculture Science, 144: 31-43 (behind paywall, or Free PDF).
United States Department of Agriculture. 2013. Adoption of Genetically Engineered Crops in the U.S. USDA Economic Research Service (available here).
Supplementary material
You can replicate the analyses and plots produced in this comment using the following files:
• Maize production data for United States and Western Europe (csv, extracted from FAO). []
• GMO maize penetration data (csv, extracted from USDA). []
• R code for analyses (R file, changed extension to .txt so WordPress would not complain). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.327811062335968, "perplexity": 4693.205857216163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219197.89/warc/CC-MAIN-20180821230258-20180822010258-00093.warc.gz"} |
http://physics.stackexchange.com/users/17341/achiralsarkar?tab=activity&sort=all | # AchiralSarkar
less info
reputation
5
bio website achiralsarkar.wordpress.com location India age 20 member for 11 months seen Nov 24 at 4:58 profile views 33
I am an undergraduate student majoring in Physics.I like to see myself as an autodidact who is also a self-deluding dilettante.
Apart from physics, my interests include mathematics and computer science. I am a computer enthusiast and I enjoy science fiction and computer games. I also dabble in English literature, politics and philosophy.
I am a free thinker and advocate a naturalist world view.
# 17 Actions
May29 accepted What is the physical significance of the off-diagonal moment of inertia matrix elements? May29 asked What is the physical significance of the off-diagonal moment of inertia matrix elements? Jan24 accepted Physical Significance of Fourier Transform and Uncertainty Relationships Jan24 comment Physical Significance of Fourier Transform and Uncertainty Relationships Though the FAQ of Stack Exchange discourages users from adding comments expressing thanks etc. I must say, this was just awesome! Thank you, very much! Jan24 asked Physical Significance of Fourier Transform and Uncertainty Relationships Jan3 answered Books that every physicist should read Jan2 awarded Supporter Jan1 awarded Scholar Jan1 accepted System of Particles and Moment of Mass Jan1 comment System of Particles and Moment of Mass Well, my textbooks says an expectation value of a random variable is the weighted average of all possible values that this random variable can take on.Mathematically,the expected value is the integral of the random variable with respect to its probability measure. And just one more doubt, what you call probability density and 'probability measure' is the same thing, I guess. Correct me if I am wrong Jan1 comment System of Particles and Moment of Mass Well, your explanation is pretty straight forward for the first moment of mass but while talking about the second moment of mass, you say that it is the moment of (moment of mass). Since it is dimensionally correct , I guess, it is right but is it the correct of interpreting 'moment of inertia' which is a tensor when talking about a rigid body rotating in 3 D space? We ca get the components by your definition but what about the $M(X^2+Y^2)$ component of the matrix Jan1 awarded Student Jan1 comment System of Particles and Moment of Mass Can you please elaborate on what you mean by 'expectation value'? Jan1 awarded Editor Jan1 revised System of Particles and Moment of Mass edited body Jan1 asked System of Particles and Moment of Mass Jan1 awarded Autobiographer | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9084765315055847, "perplexity": 1240.233572519809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345775355/warc/CC-MAIN-20131218054935-00011-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://link.springer.com/article/10.1007/BF01941137 | Part II Computer Science
BIT Numerical Mathematics
, Volume 28, Issue 3, pp 605-619
First online:
Terminating general recursion
• Bengt NordströmAffiliated withDepartment of Computer Science, Chalmers University of Technology and the University of Göteborg
Rent the article at a discount
Rent now
* Final gross prices may vary according to local VAT.
Abstract
In Martin-Löf's type theory, general recursion is not available. The only iterating constructs are primitive recursion over natural numbers and other inductive sets. The paper describes a way to allow a general recursion operator in type theory (extended with propositions). A proof rule for the new operator is presented. The addition of the new operator will not destroy the property that all well-typed programs terminate. An advantage of the new program construct is that it is possible to separate the termination proof of the program from the proof of other properties.
D.2.1 D.2.4 D.3.1 F.3.1 F.3.3
Key Words
recursion well-founded induction programming logic fixed point termination proof | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857125043869019, "perplexity": 1456.361700039337}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122268.99/warc/CC-MAIN-20160428161522-00100-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://economics.stackexchange.com/questions/50121/why-does-the-belief-over-information-sets-with-probability-zero-matter-in-perfec | # Why does the belief over information sets with probability zero matter in Perfect Bayesian Equilibrium?
I'm struggling to understand why the notion of "belief revision" is an important concept. In particular, why does the belief over information sets with probability zero matter?
When comparing to the notion of "weak sequential equilibriums" (i.e. an assessment that satisfies sequential rationality and Bayesian updating at reached information sets), since both equilibria satisfy sequential rationality, does this mean that for any profile $$\sigma_W$$ of a weak sequential equilibrium, there exists a profile $$\sigma_P$$ belonging to a perfect Bayesian equilibrium such that $$\sigma_W$$ and $$\sigma_P$$ agree on information sets with positive probability?
Finally, suppose that all information sets have non-zero probability. In this case, is every weak sequential equilibrium also a perfect Bayesian equilibrium?
No, having $$\sigma_W$$ as a weak equilibria might require empty threats or crazy beliefs off-path, so there may not be a PBE supporting the same path of play (the reverse is obviously true). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604870676994324, "perplexity": 821.2111058440925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00377.warc.gz"} |
http://mathhelpforum.com/number-theory/12024-congruence-modulo-m.html | # Math Help - congruence modulo m
1. ## congruence modulo m
Find x, if 1<x<84 such that x = 7 (mod 12).
2. Originally Posted by jenjen
Find x, if 1<x<84 such that x = 7 (mod 12).
This is simple.
Trivally,
x = 7 (mod 12)
Thus, 7 works. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620251059532166, "perplexity": 6061.186436838459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824995.51/warc/CC-MAIN-20160723071024-00312-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://byjus.com/maths/length-of-tangent/ | # Length Of Tangent On A Circle
A tangent to a circle is defined as a line segment that touches the circle exactly at one point. There are some important points regarding tangents:
• A tangent to a circle cannot be drawn through a point which lies inside the circle. It is so because all the lines passing through any point inside the circle, will intersect the circle at two points.
• There is exactly one tangent to a circle which passes through only one point on the circle.
• There are exactly two tangents can be drawn to a circle from a point outside the circle.
In the figure, $P$ is an external point from which tangents are drawn to the circle. $A$ and $B$ are points of contact of the tangent with a circle. The length of a tangent is equal to the length of a line segment with end-points as the external point and the point of contact. So, $PA$ and $PB$ are the lengths of tangent to the circle from an external point $P$.
## Some theorems on length of tangent
Theorem 1: The lengths of tangents drawn from an external point to a circle are equal. It is proved as follows:
Consider the circle with center $O$. $PA$ and $PB$ are the two tangents drawn to the circle from the external point $P$. $OA$ and $OB$ are radii of the circle.
Since tangent on a circle and the radius are perpendicular to each other at the point of tangency,
$∠PAO$ = $∠PBO$ = $90°$
Consider the triangles, $∆PAO$ and $∆PBO$,
$∠PAO$ = $∠PBO$ = $90°$
$PO$ is common side for both the triangles,
$OA$ = $OB$ [Radii of the circle]
Hence, by RHS congruence theorem,
$∆PAO ≅ ∆PBO$
$⇒ PA = PB$ (Corresponding parts of congruent triangles)
This can also be proved by using Pythagoras theorem as follows,
Since,
$∠PAO$ = $∠PBO$ = $90°$
$∆PAO$ and $∆PBO$ are right angled triangles.
$PA^2$ = $OP^2 – OA^2$
Since $OA$ = $OB$,
$PA^2$ = $OP^2 – OB^2$ = $PB^2$
This gives, $PA$ = $PB$
Therefore, tangents drawn to a circle from an external point will have equal lengths. There is an important observation here:
• Since $∠APO$ = $∠BPO$, $OP$ is the angle bisector of $∠APB$.
Therefore, center of the circle lies on angle bisector of the angle made by two tangents to the circle from an external point.
Let’s consider an example for better understanding of the concept of length of the tangent drawn to a circle from an external point.
Example: A circle is inscribed in the quadrilateral $ABCD$, Prove that $AB + CD$ = $AD + BC$.
Tangents drawn from the point $A$ will have equal lengths.
This gives,
$AP$ = $AM$ —(1)
Similarly, for tangents drawn from point $B$,
$BN$ = $BM$ —(2)
From point $C$,
$CN$ = $CO$ —(3)
From point $D$,
$DP$ = $DO$ —(4)
Adding equations(1),(2), (3) and (4) gives,
$AP + BN + CN + DP$ = $AM + BM + CO + DO$
$\Rightarrow AP + PD + BN + NC$ = $AM + MB + DO + OC$
$\Rightarrow AD + BC$ = $AB + CD$
That’s the required proof. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8274152874946594, "perplexity": 174.14126410336752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648343.8/warc/CC-MAIN-20191013221144-20191014004144-00376.warc.gz"} |
https://www.ou.org/torah/halacha/hashoneh-halachos/mon_12_17_12/ | # 447. Favors for the Lender
65:7 If one person lends money to a friend for a certain amount of time so that the friend will reciprocate and will later lend him a larger amount for the same length of time or the same amount for a longer period, this is also interest. If the intention is that the friend will lend him the same amount for the same length of time, the authorities differ as to whether or not it is interest; it is therefore advisable to act stringently in this matter and avoid the situation. However, if no condition was made and the one who initially borrowed later lends to him of his own accord, even if he is only doing so because he had previously borrowed, one may act leniently.
65:8 So long as the debt is outstanding, the lender must be careful not to benefit from the borrower without the borrower’s knowledge. This even applies to something that the borrower would have done for the lender anyway, even if he had not lent him money. The reason for this is that since the lender enjoys something that is not his, it seems that he is relying on the unpaid loan to assume that the borrower would permit this. On the other hand, if the borrower knowingly does the lender a favor, it is permitted so long as it’s something he would have done for him anyway, even without the loan. Nevertheless, such favors may not be made public knowledge. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8030202388763428, "perplexity": 634.6428395374406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721268.95/warc/CC-MAIN-20161020183841-00020-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/abstract-algebra-question.400108/ | # Homework Help: Abstract Algebra question
1. May 1, 2010
### tyrannosaurus
1. The problem statement, all variables and given/known data
(1)To prove this I have to let G be a group, with |G|=p^2.
(2)Use the G/Z(G) theorem to show G must be Abelian.
(3) Use the Fundamental Theorem of Finite Abelian Groups to find all the possible isomorphism types for G.
2. Relevant equations
Z(G) = the center of G (a is an element of G such that ax=xa for all x in G)
3. The attempt at a solution
I can prove it by using Conjugacy classes and gettting that the order of Z(G) must be non-trival and going on from there, but we have not gotten to Conjugacy Classes yet so i can't use this fact. Can anyone help me on this? I know that |Z(G)|=1 or pq when the order of |G|=pq where p and q are not distinct primes. From there I am unsure on how to uise the G/Z(G) thereom to prove that G is abelian. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9149356484413147, "perplexity": 357.38566749262606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864256.26/warc/CC-MAIN-20180621172638-20180621192638-00002.warc.gz"} |
http://hitchhikersgui.de/Extensional_tectonics | # Extensional tectonics
Extensional tectonics is concerned with the structures formed by, and the tectonic processes associated with, the stretching of a planetary body's crust or lithosphere.
## Deformation styles
The types of structure and the geometries formed depend on the amount of stretching involved. Stretching is generally measured using the parameter β, known as the beta factor, where
${\displaystyle \beta ={\frac {t_{0}}{t_{1}}}\,,}$
t0 is the initial crustal thickness and t1 is the final crustal thickness. It is also the equivalent of the strain parameter stretch.[1]
### Low beta factor
In areas of relatively low crustal stretching, the dominant structures are high to moderate angle normal faults, with associated half grabens and tilted fault blocks.[2]
### High beta factor
In areas of high crustal stretching, individual extensional faults may become rotated to too low a dip to remain active and a new set of faults may be generated.[3] Large displacements may juxtapose syntectonic sediments against metamorphic rocks of the mid to lower crust and such structures are called detachment faults. In some cases the detachments are folded such that the metamorphic rocks are exposed within antiformal closures and these are known as metamorphic core complexes.[citation needed]
### Passive margins
Passive margins above a weak layer develop a specific set of extensional structures. Large listric regional (i.e. dipping towards the ocean) faults are developed with rollover anticlines and related crestal collapse grabens. On some margins, such as the Niger Delta, large counter-regional faults are observed, dipping back towards the continent, forming large grabenal mini-basins with antithetic regional faults.[4]
## Geological environments associated with extensional tectonics
Areas of extensional tectonics are typically associated with:
Horst and graben structure, typical rift related structure (direction of extension shown by red arrows).
### Continental rifts
Rifts are linear zones of localized crustal extension. They range in width from somewhat less than 100 km up to several hundred km, consisting of one or more normal faults and related fault blocks.[2] In individual rift segments, one polarity (i.e. dip direction) normally dominates giving a half-graben geometry.[5] Other common geometries include metamorphic core complexes and tilted blocks. Examples of active continental rifts are the Baikal Rift Zone and the East African Rift.
### Divergent plate boundaries
Divergent plate boundaries are zones of active extension as the crust newly formed at the mid-ocean ridge system becomes involved in the opening process.
### Gravitational spreading of zones of thickened crust
Zones of thickened crust, such as those formed during continent-continent collision tend to spread laterally; this spreading occurs even when the collisional event is still in progress.[6] After the collision has finished the zone of thickened crust generally undergoes gravitational collapse, often with the formation of very large extensional faults. Large-scale Devonian extension, for example, followed immediately after the end of the Caledonian orogeny particularly in East Greenland and western Norway.[7][8]
### Releasing bends along strike-slip faults
When a strike-slip fault is offset along strike such as to create a gap i.e. a left-stepping bend on a sinistral fault, a zone of extension or transtension is generated. Such bends are known as releasing bends or extensional stepovers and often form pull-apart basins or rhombochasms. Examples of active pull-apart basins include the Dead Sea, formed at a left-stepping offset of the sinistral sense Dead Sea Transform system, and the Sea of Marmara, formed at a right-stepping offset on the dextral sense North Anatolian Fault system.[9]
### Back-arc basins
Back-arc basins form behind many subduction zones due to the effects of oceanic trench roll-back which leads to a zone of extension parallel to the island arc.
### Passive margins
A passive margin built out over a weaker layer, such as an overpressured mudstone or salt, tends to spread laterally under its own weight. The inboard part of the sedimentary prism is affected by extensional faulting, balanced by outboard shortening.
## References
1. ^ Park, R. G. (1997). Foundations of Structural Geology (3rd ed.). Psychology Press. p. 64. ISBN 978-0-7487-5802-9.
2. ^ a b Kearey, P.; Klepeis, K.A.; Vine, F.J. (2009). "Continental rifts and rifted margins". Global Tectonics. WileyBlackwell. p. 153. ISBN 978-1-4443-0322-3.
3. ^ Proffett, J.M. 1977. Cenozoic geology of the Yerington district, Nevada, and implications for the nature of Basin and Range faulting. Bull. geol. Soc. Am. 88, 247–66.
4. ^ Tuttle, M.L.W., Charpentier, R.R. & Brownfield, M.E. 2002. The Niger Delta Petroleum System: Niger Delta Province, Nigeria, Cameroon, and Equatorial Guinea, Africa. USGS Open-File Report 99-50-H.
5. ^ Ebinger, C.J., Jackson, J.A., Foster, A.N. & Hayward, N.J. 1999. Extensional basin geometry and the elastic lithosphere. Philosophical Transactions of the Royal Society, London, A, 357, 741–765.
6. ^ Tapponier, P. Mercier, J.L., Armijo, R., Tonglin, H, & Ji, Z. 1981. Field evidence for active normal faulting in Tibet. Nature, 294, 410–414.
7. ^ Dunlap, J.W. & Fossen, H. 1998: Early Paleozoic orogenic collapse, tectonic stability, and late Paleozoic continental rifting revealed through thermochronology of K-feldspars, southern Norway. Tectonics 17, 604–620.
8. ^ Hartz, E.H, Andresen, A., Hodges K.V. & Martin, M.W., 2000, The Fjord Region Detachment Zone: A long-lived extensional fault in the East Greenland Caledonides, J. Geol. Soc. London, 158, 795–810.
9. ^ Armijo, R.; Meyer, B.; Navarro, S.; King, G.; Barka, A. (2002), "Asymmetric slip partitioning in the Sea of Marmara pull-apart: a clue to propagation processes of the North Anatolian Fault?" (PDF), Terra Nova, Wiley-Blackwell, 14 (2): 80–86., Bibcode:2002TeNov..14...80A, CiteSeerX 10.1.1.546.4111, doi:10.1046/j.1365-3121.2002.00397.x | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6255154609680176, "perplexity": 24331.958771290952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512015.74/warc/CC-MAIN-20181018214747-20181019000247-00539.warc.gz"} |
https://kr.mathworks.com/help/fixedpoint/ug/scaling-basics.html | ## Scaling
Fixed-point numbers can be encoded according to the scheme
where the slope can be expressed as
The integer is sometimes called the stored integer. This is the raw binary number, in which the binary point assumed to be at the far right of the word. In Fixed-Point Designer™ documentation, the negative of the fixed exponent is often referred to as the fraction length.
The slope and bias together represent the scaling of the fixed-point number. In a number with zero bias, only the slope affects the scaling. A fixed-point number that is only scaled by binary point position is equivalent to a number in [Slope Bias] representation that has a bias equal to zero and a slope adjustment factor equal to one. This is referred to as binary point-only scaling or power-of-two scaling:
or
Fixed-Point Designer software supports both binary point-only scaling and [Slope Bias] scaling.
Note
For examples of binary point-only scaling, see the Fixed-Point Designer Perform Binary-Point Scaling example.
For an example of how to compute slope and bias in MATLAB®, see Compute Slope and Bias
평가판 신청 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.939366340637207, "perplexity": 1532.6885141456078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304345.92/warc/CC-MAIN-20220123232910-20220124022910-00527.warc.gz"} |
http://ilovephilosophy.com/viewtopic.php?f=48&t=193858 | ## Cause of Mass Shootings Revealed:
Discussion of the recent unfolding of history.
### Cause of Mass Shootings Revealed:
Entitlement politics.
In other words, the leftist school of thought that lowers the threshold of emotional tolerance, and gives justification for whatever feelings of rage against people who intend no harm.
My friend Thrasymachus just gave me this insight.
this is pretty solid, I think.
Of course the availability of assault rifles helps, but the cause of the rapid rise in mass shootings is the rapid increase of easily triggered people.
It is so cleanly evident that it appears almost scientific, and the consequences of the insight are thus pretty terrifying. Ill see what you all think.
Before the Light - Tree of Life
The strong do what they can, the weak accept what they must.
- Thucydides
Fixed Cross
Doric Usurper
Posts: 7777
Joined: Fri Jul 15, 2011 12:53 am
Location: the black ships
### Re: Cause of Mass Shootings Revealed:
I said about the same thing viewtopic.php?f=3&t=193836&start=225#p2694175
Entitlement, yup.
Serendipper
Philosopher
Posts: 1114
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Cause of Mass Shootings Revealed:
The temple mount will be rebuilt in Jerusalem and all the nations of the world will be ruled from there. All races, cultures, leaders, and nations will come to bow before the new messiah yet to come. All will come to know the chosen of God who refer themselves as Jews. For every Jew there will be a thousand goyim that will be their slaves as it was ordained by God. Every man, woman, and child will convert to Zionism.
Zero_Sum
New World Order Enthusiast
Posts: 1838
Joined: Thu Nov 30, 2017 7:05 pm
Location: United States- Greater Israel
### Re: Cause of Mass Shootings Revealed:
Fixed Cross wrote:It is so cleanly evident that it appears almost scientific
This is my favorite part. 'Scientific' being things that are clearly evident. Not, say, where you use a particular methodology.
Karpel Tunnel
Thinker
Posts: 869
Joined: Wed Jan 10, 2018 12:26 pm
### Re: Cause of Mass Shootings Revealed:
Fixed Cross wrote:Entitlement politics.
In other words, the leftist school of thought that lowers the threshold of emotional tolerance, and gives justification for whatever feelings of rage against people who intend no harm.
My friend Thrasymachus just gave me this insight.
this is pretty solid, I think.
Of course the availability of assault rifles helps, but the cause of the rapid rise in mass shootings is the rapid increase of easily triggered people.
It is so cleanly evident that it appears almost scientific, and the consequences of the insight are thus pretty terrifying. Ill see what you all think.
Yeah, because conservatives don't have any entitlement politics of their own.
The temple mount will be rebuilt in Jerusalem and all the nations of the world will be ruled from there. All races, cultures, leaders, and nations will come to bow before the new messiah yet to come. All will come to know the chosen of God who refer themselves as Jews. For every Jew there will be a thousand goyim that will be their slaves as it was ordained by God. Every man, woman, and child will convert to Zionism.
Zero_Sum
New World Order Enthusiast
Posts: 1838
Joined: Thu Nov 30, 2017 7:05 pm
Location: United States- Greater Israel
### Re: Cause of Mass Shootings Revealed:
Fixed Cross wrote:Of course the availability of assault rifles helps, but the cause of the rapid rise in mass shootings is the rapid increase of easily triggered people.
And conservative defense of abusive corporate practice, corporate undermining of democracy, corporate manipulation of emotions via media/advertising and now digital device addiction, corporate demand for wars, and conservative long standing fascination with disciplining children and their idiotic tabula rasa let's fill up their brains with the right answers pedagogy
have not added any stress to the equation. We have moved so far to the right in terms of economic issues and corporations have vastly more control of brains than they did 40 years ago, but it's all the Left's fault.
Now we'll get a comment that conservatives today are really leftists as if we are responsible for silly language use.
Karpel Tunnel
Thinker
Posts: 869
Joined: Wed Jan 10, 2018 12:26 pm
### Re: Cause of Mass Shootings Revealed:
Fixed Cross wrote:Entitlement politics.
In other words, the leftist school of thought that lowers the threshold of emotional tolerance, and gives justification for whatever feelings of rage against people who intend no harm.
My friend Thrasymachus just gave me this insight.
this is pretty solid, I think.
Of course the availability of assault rifles helps, but the cause of the rapid rise in mass shootings is the rapid increase of easily triggered people.
It is so cleanly evident that it appears almost scientific, and the consequences of the insight are thus pretty terrifying. Ill see what you all think.
This is quite true. These people cannot regulate their own emotions, and then are taught about how evil society is, other people are, how entitled they are themselves and how victimized.
Leftism in general has done this. The rise of crimes like this corresponds perfectly to the rise of Leftist Statism, the PC virtue signaling neoliberal social justice warrior microaggressions ethos.
I happen to know why leftists are like this: they think that they are making things safer, more peaceful, by denying freedom of speech and gun rights, because free speech implies offensive speech leading to conflicts, and guns imply violence. Therefore, the leftist reasons in perfect Kantian fashion, if I never offend anyone then I’m following the universal maxim of not giving offense, therefore everyone could also do that and things would be perfect; likewise with guns, if the leftist herself doesn’t own a gun then by categorical imperative everyone else could also do the same and there would be no guns.
This sort of feel good soggy moralism doesn’t jive with the real world, of course. It’s just an excuse to live in fantasy land. Sure, we can follow the golden rule whenever possible but that is no substitute for a social philosophy, much less for an Ontology.
Modern Leftist thought is about self-disempowerment, which jives with erecting the nanny state big daddy government as the new god-religion. It’s literslly insane. These people are insane.
Consider how divorced from rational classical liberal thinking these modern and “postmodern” leftists are. Their pathology is deep, it’s no wonder there aren’t even more of these violent incidents.
EIHWAZ PERTHO NAUTHIZ
ANSUZ
URUZ
Philosopher
Posts: 2049
Joined: Tue Nov 10, 2015 12:14 am
Location: The topoi
### Re: Cause of Mass Shootings Revealed:
Plus you have leftists in medicine and pharma handing out psychopills like tic tacs, to young people who already are taught how to hate and blame others and who cannot regulate their emotions. Yeah good idea, screwing up the brain chemistry with ideology and neurosis wasn’t enough, now you have to go in there and physically mess with it at the chemical level too, on top of that.
These killers almost always are on psychopills. And the FBI (deliberate?) failures have also been the supporting cause of many of these incidents.
EIHWAZ PERTHO NAUTHIZ
ANSUZ
URUZ
Philosopher
Posts: 2049
Joined: Tue Nov 10, 2015 12:14 am
Location: The topoi
### Re: Cause of Mass Shootings Revealed:
Antidepressants Are Expensive Tic Tacs
Treats depression by placebo-action; side-effects are the bonus because if your depression doesn't stop, at least you'll have a GOOD reason to be depressed when you can't eat or sleep and get a stiff case of erectile disfunction. You can't lose!
Serendipper
Philosopher
Posts: 1114
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Cause of Mass Shootings Revealed:
UrGod wrote:Plus you have leftists in medicine and pharma handing out psychopills like tic tacs, to young people who already are taught how to hate and blame others and who cannot regulate their emotions. Yeah good idea, screwing up the brain chemistry with ideology and neurosis wasn’t enough, now you have to go in there and physically mess with it at the chemical level too, on top of that.
Wait big pharma is Lefty? Interesting.
Big Pharma's campaign contributions....
Top Recipients, 2017-2018
Candidate Office Amount
Hatch, Orrin G (R-UT) Senate $343,899 Walden, Greg (R-OR) House$302,700
Ryan, Paul (R-WI) House $296,095 McCarthy, Kevin (R-CA) House$227,600
Casey, Bob (D-PA) Senate \$194,911
And which side of the aisle tends to fight attempts to control corporations, make sure that oversight works, etc.
As if the right is more critical of lobbying than the left.
Conservatives and liberals are both in industry pockets. The right has a much harder time questioning corporate abuse of power. And both hate emotions, except when channeled in the PC ways of each group.
It's like Reagan Bush and Bush had no influence on the state of things in the US. Nor their wars. Of course Leftist attitudes add to shit but the victim posturing of the right, as if they have not contributed to the fucked up state of things is just silly.
And now that a classic abusive corporate power is being blamed on the left is just a joke.
The digital addictions, those can be placed a bit more on the left, the way facebook, twitter, etc. cognitive scientists made those media addictive, that is a more lefty crowd. I could buy that better, though of course the rights inability to criticize corporate power and the oligarchy would still have a role.
But PHARMA??????
Are you gonna blame the Left for Monsanto next?
Last edited by Karpel Tunnel on Wed Feb 28, 2018 12:22 pm, edited 1 time in total.
Karpel Tunnel
Thinker
Posts: 869
Joined: Wed Jan 10, 2018 12:26 pm
### Re: Cause of Mass Shootings Revealed:
What's fucked up is how cheap the politicians can be bought for. That's chump change to the industry. Dividends probably add up to more than that for one quarter.
You see...a pimp's love is very different from that of a square.
Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too.
What exactly is logic? -Magnus Anderson
Support the innocence project on AmazonSmile instead of Turd's African savior biker dude.
http://www.innocenceproject.org/
Mr Reasonable
resident contrarian
Posts: 25416
Joined: Sat Mar 17, 2007 8:54 am
Location: pimping a hole straight through the stratosphere itself
### Re: Cause of Mass Shootings Revealed:
Both neo liberals and neo conservatives are owned by corporatism. It's really funny when they try to appear better than the other as an observing outsider. You both serve the same masters.
The temple mount will be rebuilt in Jerusalem and all the nations of the world will be ruled from there. All races, cultures, leaders, and nations will come to bow before the new messiah yet to come. All will come to know the chosen of God who refer themselves as Jews. For every Jew there will be a thousand goyim that will be their slaves as it was ordained by God. Every man, woman, and child will convert to Zionism.
Zero_Sum
New World Order Enthusiast
Posts: 1838
Joined: Thu Nov 30, 2017 7:05 pm
Location: United States- Greater Israel
### Re: Cause of Mass Shootings Revealed:
Mr Reasonable wrote:What's fucked up is how cheap the politicians can be bought for. That's chump change to the industry. Dividends probably add up to more than that for one quarter.
That is actually the actual problem. Politicians have become too cheap. The labor value of corruption has inflated faster than cash value. Or corruption has become a commodity rather than a luxury. A basic necessity even. A human right.
(why Trump is found so abusive?)
Before the Light - Tree of Life
The strong do what they can, the weak accept what they must.
- Thucydides
Fixed Cross
Doric Usurper
Posts: 7777
Joined: Fri Jul 15, 2011 12:53 am
Location: the black ships
### Re: Cause of Mass Shootings Revealed:
Deep state, media and corrupt political elitists will do anything to get rid of Trump. But in the Internet age and increasing public awareness and alternate media, we see what is going on now.
Truth wins.
EIHWAZ PERTHO NAUTHIZ
ANSUZ
URUZ
Philosopher
Posts: 2049
Joined: Tue Nov 10, 2015 12:14 am
Location: The topoi
### Re: Cause of Mass Shootings Revealed:
UrGod wrote:Deep state, media and corrupt political elitists will do anything to get rid of Trump. But in the Internet age and increasing public awareness and alternate media, we see what is going on now.
Truth wins.
Deep state funds both parties including Trump.
The temple mount will be rebuilt in Jerusalem and all the nations of the world will be ruled from there. All races, cultures, leaders, and nations will come to bow before the new messiah yet to come. All will come to know the chosen of God who refer themselves as Jews. For every Jew there will be a thousand goyim that will be their slaves as it was ordained by God. Every man, woman, and child will convert to Zionism.
Zero_Sum
New World Order Enthusiast
Posts: 1838
Joined: Thu Nov 30, 2017 7:05 pm
Location: United States- Greater Israel
### Re: Cause of Mass Shootings Revealed:
So a Georgia state legislator was going to vote for a bill relating to taxes and what have you on jet fuel, which would have benefited Delta airlines who's based out of Atlanta. Now, because Delta ended discounts for NRA members, that same legislator has publicly stated that he's going to squash the deal and refuse to vote for anything that benefits the airline on the grounds that you can't attack conservatives and not expect them to fight back.
Is this guy just openly admitting his corruption to the media? If he thought it was a good vote before, then to change is as retaliation on behalf of a special interest group is corrupt. If he thought it was a bad vote before, then having put it fourth as something he'd vote for would be showing bias to a corporation. Does he not have a duty to vote in a way that serves the interest of his constituents?
Thoughts?
You see...a pimp's love is very different from that of a square.
Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too.
What exactly is logic? -Magnus Anderson
Support the innocence project on AmazonSmile instead of Turd's African savior biker dude.
http://www.innocenceproject.org/
Mr Reasonable
resident contrarian
Posts: 25416
Joined: Sat Mar 17, 2007 8:54 am
Location: pimping a hole straight through the stratosphere itself
### Re: Cause of Mass Shootings Revealed:
Why would anyone want to voluntarily use or support a company like Delta when they openly virtue signal their cheap radical leftist emotionalism by shitting all over their consumers and the US Constitution, none of whom had anything to do with a criminal act committed by a crazy person who was reported to the police and FBI many times but the police and FBI did... nothing to stop him?
Oh yeah, that's right, the FBI was too busy covering their ass over how they allowed themselves to become weaponized by the same leftists that Delta is now kissing the ass of.
EIHWAZ PERTHO NAUTHIZ
ANSUZ
URUZ
Philosopher
Posts: 2049
Joined: Tue Nov 10, 2015 12:14 am
Location: The topoi
### Re: Cause of Mass Shootings Revealed:
Truth sucks when it proves you wrong, doesn’t it?
Deal with it.
EIHWAZ PERTHO NAUTHIZ
ANSUZ
URUZ
Philosopher
Posts: 2049
Joined: Tue Nov 10, 2015 12:14 am
Location: The topoi
### Re: Cause of Mass Shootings Revealed:
The entire philosophy behind citizens owning weapons is that we are free adults and the government serves us, not the other way around. Rights are to and for individuals, not the state. Under certain conditions we agree in law that rights can be removed, such as taking guns away from violent criminals or locking violent criminals up. But to take everyone's guns away is madness, you punish people who did nothing wrong and you destroy the entire philosophical foundation of a free society.
As someone pointed out to me, in America there is a tight relationship between the people and their government. The government is actually afraid of the people, because there are millions of us who do own firearms including auto and semiauto firearms. The rationale of citizen gun rights is not only philosophical in that the citizen is sovereign and that government answers to him, and not only practical in allowing people to defend themselves and others, but is also preventative of future attempts by tyrants to take over. No tyrant would dare try that in America, where there are so many millions of armed citizens.
It is future protection, and creates a background state of fear of the citizenry, which is very important as a deterrent. Gun ownership is also important psychologically since it reaffirms that an individual is sovereign and that rights belong to him, not the state, and that he is a free individual with self-responsibility and the ability every day to exercise that and demonstrate it through responsible firearm ownership. The fact that some people abuse their rights doesn’t speak at all to the vast majority who do not.
But no, crazy leftist traitors want to strip everyone of guns, converting the citizenry into second class citizens before the first class of government. They want humans to be sheep, unable to take personal responsibility and safety into their own hands.
I don’t need a government to give me the right to own weapons, I have that right naturally. It exists in my capacity to defend myself and my need for doing so. I can choose for myself, my rights do now flow from the state. Any society that allows itself to be disarmed by its government is literally no better than sheep in a kennel. Not free human beings, but slaves. Look at European countries that have strict gun control, and look at how little actual political power the citizens there have. They don’t even have free speech, they can’t even maintain borders, their laws are written by super national bureaucrats whom none of them voted for.
That sort of thing is what you get when you allow your own government to strip you of the natural right to own the means of defending yourself and others. Psychologically once you concede on that fundamental issue you are done for, everything else eventually goes.
So no wonder leftists want to disarm us. They are all about turning humans into compliant sheep who always do what they’re told and never bite the hand that feeds. Lol. What a disgrace. How embarrassing to even be on this planet with such filth running around fucking everything up.
Want someone to blame for school shootings? Blame the criminal himself who commits the act. Oh no, that doesn’t allow you to use it for political purposes for your own power and control!! Or you can blame the FBI who failed over and over and over again to stop people whom they actually knew were real threats.
But no, that would also... bite the hand that feeds. Leftists would never bite the hand of their statist globalist masters.
EIHWAZ PERTHO NAUTHIZ
ANSUZ
URUZ
Philosopher
Posts: 2049
Joined: Tue Nov 10, 2015 12:14 am
Location: The topoi
### Re: Cause of Mass Shootings Revealed:
UrGod wrote:Why would anyone want to voluntarily use or support a company like Delta when they openly virtue signal their cheap radical leftist emotionalism by shitting all over their consumers and the US Constitution, none of whom had anything to do with a criminal act committed by a crazy person who was reported to the police and FBI many times but the police and FBI did... nothing to stop him?
Oh yeah, that's right, the FBI was too busy covering their ass over how they allowed themselves to become weaponized by the same leftists that Delta is now kissing the ass of.
Delta deciding not to offer a discount on flight for NRA members shits all over the constitution?
Is that what you're saying?
Did you just try to base your unhinged rant on the notion that not receiving a discount on a flight somehow denies someone the right to bear arms?
Are you ok? Look...there are good arguments to be made for what it is your feeling. But reason first, then make the argument. Don't just get all emotional and vomit out the sort of shit that you just did.
Prove me wrong? What are you 17? Take your meds and settle down.
You see...a pimp's love is very different from that of a square.
Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too.
What exactly is logic? -Magnus Anderson
Support the innocence project on AmazonSmile instead of Turd's African savior biker dude.
http://www.innocenceproject.org/
Mr Reasonable
resident contrarian
Posts: 25416
Joined: Sat Mar 17, 2007 8:54 am
Location: pimping a hole straight through the stratosphere itself
### Re: Cause of Mass Shootings Revealed:
I think I attended my first gun show in the early 80s..when I was still in a diaper. I've surely been to 100s of them. I've owned, bought, sold and traded what must have been 100s of guns over the years. I reloaded my own ammo for years. I've fired tens of thousands of rounds....at least. That being said, I've also been listening to the gun control debate since before you were probably born.
Here's how it works. Something terrible happens because of a person who has a gun. Then, the gun dealers who are all right wing anti-democracy people...they raise hell and tell a bunch of lies about how all the guns are going to dry up. Then, they raise their prices and gouge you idiots for more money, thus enriching themselves in the process off your irrational fear. Here's a few facts for you. There is nothing that you could buy since I was a little kid that you haven't been able to buy this entire time, and that you weren't able to buy at any time between now and then. Nothing. If anything the selection of guns and types of ammo have increased over the years and the availability of them has significantly increased. Even NFA guns can be had by anyone willing to purchase a tax stamp. You can form a trust and be the sole administrator of it and the trust can own NFA weapons without you even having to take the background check.
Idiot. You know jack shit about the facts. When I was a teenager, you could get a crate of 20 sks's for 2000 bucks. Still in the plastic bags soaked in cosmoline. When Clinton took office, the right winger NRA nutjobs raised hell about the NFA, and the price of the sks went to nearly 300 bucks. A lot of people made a lot of money off that alone, and it wasn't the idiots who were rushing out to pay 300 bucks for cheap Chinese piece of shit rifles because they believed that they wouldn't be able to get a semi-auto rifle once Clinton banned all the guns.
Guess what? You can still buy them. You can buy a cetme or an HK, you can get an fnfal, you can get a fucking M1 from the US government if you fill out the forms and submit them...at a discount and you've always been able to do that. There are so many companies making M4 style rifles now that you can choose from 20 different manufacturers, or you can build your own if you buy a lower receiver for 150 bucks. You can go to a fucking hardware store and buy 2 pieces of pipe, a coupling, a bag of rubber bands and a nail and make a gun in 5 minutes without asking anyone.
Seriously. Have you ever considered any of these facts? Or do you just think in terms of your emotions and then spew out rants that are based entirely in fiction?
You see...a pimp's love is very different from that of a square.
Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too.
What exactly is logic? -Magnus Anderson
Support the innocence project on AmazonSmile instead of Turd's African savior biker dude.
http://www.innocenceproject.org/
Mr Reasonable
resident contrarian
Posts: 25416
Joined: Sat Mar 17, 2007 8:54 am
Location: pimping a hole straight through the stratosphere itself
### Re: Cause of Mass Shootings Revealed:
Civilian Marksmanship Program
http://thecmp.org/cmp_sales/
Online classifieds for guns.
http://www.armslist.com/
Get an fucking FFL if you want and you don't even have to pay transfer fees. Hell you can collect fees for transferring guns between others. Make some money.
https://www.atf.gov/resource-center/how ... easy-steps
Now you can have some facts and calm down, stop repeating fallacious political rhetoric and realize that no one is, or ever has infringed on your rights.
You see...a pimp's love is very different from that of a square.
Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too.
What exactly is logic? -Magnus Anderson
Support the innocence project on AmazonSmile instead of Turd's African savior biker dude.
http://www.innocenceproject.org/
Mr Reasonable
resident contrarian
Posts: 25416
Joined: Sat Mar 17, 2007 8:54 am
Location: pimping a hole straight through the stratosphere itself
### Re: Cause of Mass Shootings Revealed:
I think urgod is one of those easily triggered people. A prime example of someone who shouldn't have a gun. Look at how crazy he went over the idea that a right wing politician might have openly admitted to selling his vote to a special interest group. Insane. He thinks that not giving a discount on a flight hinders one's ability and right to bear arms.
A hot headed, easily triggered guy who's cool with corruption and doesn't reason from emotion rather than form his emotions as a result of reasoning.
Yeah...he shouldn't be armed.
Last edited by Mr Reasonable on Sat Mar 03, 2018 8:04 pm, edited 1 time in total.
You see...a pimp's love is very different from that of a square.
Dating a stripper is like eating a noisy bag of chips in church. Everyone looks at you in disgust, but deep down they want some too.
What exactly is logic? -Magnus Anderson
Support the innocence project on AmazonSmile instead of Turd's African savior biker dude.
http://www.innocenceproject.org/
Mr Reasonable
resident contrarian
Posts: 25416
Joined: Sat Mar 17, 2007 8:54 am
Location: pimping a hole straight through the stratosphere itself
### Re: Cause of Mass Shootings Revealed:
Mr Reasonable wrote:I think I attended my first gun show in the early 80s..when I was still in a diaper. I've surely been to 100s of them. I've owned, bought, sold and traded what must have been 100s of guns over the years. I reloaded my own ammo for years. I've fired tens of thousands of rounds....at least. That being said, I've also been listening to the gun control debate since before you were probably born.
Here's how it works. Something terrible happens because of a person who has a gun. Then, the gun dealers who are all right wing anti-democracy people...they raise hell and tell a bunch of lies about how all the guns are going to dry up. Then, they raise their prices and gouge you idiots for more money, thus enriching themselves in the process off your irrational fear. Here's a few facts for you. There is nothing that you could buy since I was a little kid that you haven't been able to buy this entire time, and that you weren't able to buy at any time between now and then. Nothing. If anything the selection of guns and types of ammo have increased over the years and the availability of them has significantly increased. Even NFA guns can be had by anyone willing to purchase a tax stamp. You can form a trust and be the sole administrator of it and the trust can own NFA weapons without you even having to take the background check.
Idiot. You know jack shit about the facts. When I was a teenager, you could get a crate of 20 sks's for 2000 bucks. Still in the plastic bags soaked in cosmoline. When Clinton took office, the right winger NRA nutjobs raised hell about the NFA, and the price of the sks went to nearly 300 bucks. A lot of people made a lot of money off that alone, and it wasn't the idiots who were rushing out to pay 300 bucks for cheap Chinese piece of shit rifles because they believed that they wouldn't be able to get a semi-auto rifle once Clinton banned all the guns.
Guess what? You can still buy them. You can buy a cetme or an HK, you can get an fnfal, you can get a fucking M1 from the US government if you fill out the forms and submit them...at a discount and you've always been able to do that. There are so many companies making M4 style rifles now that you can choose from 20 different manufacturers, or you can build your own if you buy a lower receiver for 150 bucks. You can go to a fucking hardware store and buy 2 pieces of pipe, a coupling, a bag of rubber bands and a nail and make a gun in 5 minutes without asking anyone.
Seriously. Have you ever considered any of these facts? Or do you just think in terms of your emotions and then spew out rants that are based entirely in fiction?
Now that's a good post!
Maybe lose the:
Idiot. You know jack shit about the facts.
And the:
Or do you just think in terms of your emotions and then spew out rants that are based entirely in fiction?
No need for that example.
You can go to a fucking hardware store and buy 2 pieces of pipe, a coupling, a bag of rubber bands and a nail and make a gun in 5 minutes without asking anyone.
You don't want to be caught with that.
Serendipper
Philosopher
Posts: 1114
Joined: Sun Aug 13, 2017 7:30 pm
### Re: Cause of Mass Shootings Revealed:
Olololll
EIHWAZ PERTHO NAUTHIZ
ANSUZ
URUZ
Philosopher
Posts: 2049
Joined: Tue Nov 10, 2015 12:14 am
Location: The topoi
Next | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19073370099067688, "perplexity": 5146.491996628562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157203.39/warc/CC-MAIN-20180921131727-20180921152127-00552.warc.gz"} |
http://www.ms.lt/sodas/Book/MathNotebook?action=print | # Book: MathNotebook
I'm starting to write up the results that I'm getting as I investigate the root of mathematics.
Current research program
• Investigation: Collect and develop ideas about physics
• Investigation: Collect examples and overview the ways of figuring things out in physics.
• Investigation: Relate the ways of figuring things out in physics and math.
• Investigation: Intuit the deBroglie wave equation in terms of a particle moving back and forth a fixed distance.
• Solved: Defining geometry
• How is one dimension embedded in other dimensions?
• What is a line segment? What makes it "straight"?
• What is a circle?
• What does it mean for figures to intersect?
• Can a line intersect with itself?
• Investigation: Map out the main ideas of geometry.
• Challenge: Relate the definition of geometry as "the regularity of choice" with Grothendieck's machinery.
• Investigation: What are the four geometries?
• Study: What is projective geometry all about?
• What does projective geometry say about the existence of infinity?
• Do all lines (through plane) meet at infinity at a common point? Or at a circle? Or do the ends of the lines not meet? Or do they go to a circle of infinite length?
• Study: Understand the basics of symplectic geometry.
• Compare the four geometries.
• Express the four geometries in terms of symmetric functions.
• Consider how infinity, zero and one are defined in the various geometries. How do these concepts fit together?
• How do they involve the viewer and their perspective? How might that relate to Christopher Alexander's principles of life and the plane of the viewer?
• How are they related to the twelve topologies?
• Investigate orientation. What is the notion of orientation for points, lines, etc? grounding different geometries? considering building spaces bottom up (adding lines) and removing spaces top down (removing hyperplanes)?
• Compare building spaces bottom up or by deletion top down with choices left or right, etc.
• Study Norman Wildberger's book and videos as an expression of geometry and try to express it all systematically, for example, using symmetric functions.
• List out the results of universal hyperbolic geometry and state them in terms of symmetric functions.
• Understand algebraic geometry (sheaves, etc.) by analyzing its theorems.
• Understand Bott periodicity.
• Relate Bott periodicity to the eight divisions of everything and to the three operations.
Complex analysis
• Using complex numbers, interpret {$d/dz \: e^z$}
• Study the Catalan numbers and the Mandelbrot set
• Check what happens if I plug in different values into the Catalan power series.
• What is a combinatorial interpretation of P - P(n), the generators of the Mandelbrot set, in terms of the Catalan numbers? Get help to generate the difference. What is the best software for that?
Analysis
• Survey the kinds of differential equations, especially in the sciences, and consider them in terms of their symmetry.
Linear algebra
Logic
Computability theory
Parsiųstas iš http://www.ms.lt/sodas/Book/MathNotebook
Puslapis paskutinį kartą pakeistas 2019 birželio 17 d., 19:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5306978225708008, "perplexity": 1981.3039062782757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999163.73/warc/CC-MAIN-20190620065141-20190620091141-00232.warc.gz"} |
https://patrickjuli.us/tag/producer-surplus/ | # Tax incidence revisited, part 5: Who really pays the tax?
JDN 2457359
I think all the pieces are now in place to really talk about tax incidence.
In earlier posts I discussed how taxes have important downsides, then talked about how taxes can distort prices, then explained that taxes are actually what gives money its value. In the most recent post in the series, I used supply and demand curves to show precisely how taxes create deadweight loss.
Now at last I can get to the fundamental question: Who really pays the tax?
The common-sense answer would be that whoever writes the check to the government pays the tax, but this is almost completely wrong. It is right about one aspect, a sort of political economy notion, which is that if there is any trouble collecting the tax, it’s generally that person who is on the hook to pay it. But especially in First World countries, most taxes are collected successfully almost all the time. Tax avoidance—using loopholes to reduce your tax burden—is all over the place, but tax evasion—illegally refusing to pay the tax you owe—is quite rare. And for this political economy argument to hold, you really need significant amounts of tax evasion and enforcement against it.
The real economic answer is that the person who pays the tax is the person who bears the loss in surplus. In essence, the person who bears the tax is the person who is most unhappy about it.
In the previous post in this series, I explained what surplus is, but it bears a brief repetition. Surplus is the value you get from purchases you make, in excess of the price you paid to get them. It’s measured in dollars, because that way we can read it right off the supply and demand curve. We should actually be adjusting for marginal utility of wealth and measuring in QALY, but that’s a lot harder so it rarely gets done.
In the graphs I drew in part 4, I already talked about how the deadweight loss is much greater if supply and demand are elastic than if they are inelastic. But in those graphs I intentionally set it up so that the elasticities of supply and demand were about the same. What if they aren’t?
Consider what happens if supply is very inelastic, but demand is very elastic. In fact, to keep it simple, lets suppose that supply is perfectly inelastic, but demand is perfectly elastic. This means that supply elasticity is 0, but demand elasticity is infinite.
The zero supply elasticity means that the worker would actually be willing to work up to their maximum hours for nothing, but is unwilling to go above that regardless of the wage. They have a specific amount of hours they want to work, regardless of what they are paid.
The infinite demand elasticity means that each hour of work is worth exactly the same amount the employer, with no diminishing returns. They have a specific wage they are willing to pay, regardless of how many hours it buys.
Both of these are quite extreme; it’s unlikely that in real life we would ever have an elasticity that is literally zero or infinity. But we do actually see elasticities that get very low or very high, and qualitatively they act the same way.
So let’s suppose as before that the wage is $20 and the number of hours worked is 40. The supply and demand graph actually looks a little weird: There is no consumer surplus whatsoever. Each hour is worth$20 to the employer, and that is what they shall pay. The whole graph is full of producer surplus; the worker would have been willing to work for free, but instead gets $20 per hour for 40 hours, so they gain a whopping$800 in surplus.
Now let’s implement a tax, say 50% to make it easy. (That’s actually a huge payroll tax, and if anybody ever suggested implementing that I’d be among the people pulling out a Laffer curve to show them why it’s a bad idea.)
Normally a tax would push the demand wage higher, but in this case $20 is exactly what they can afford, so they continue to pay exactly the same as if nothing had happened. This is the extreme example in which your “pre-tax” wage is actually your pre-tax wage, what you’d get if there hadn’t been a tax. This is the only such example—if demand elasticity is anything less than infinity, the wage you see listed as “pre-tax” will in fact be higher than what you’d have gotten in the absence of the tax. The tax revenue is therefore borne entirely by the worker; they used to take home$20 per hour, but now they only get $10. Their new surplus is only$400, precisely 40% lower. The extra $400 goes directly to the government, which makes this example unusual in another way: There is no deadweight loss. The employer is completely unaffected; their surplus goes from zero to zero. No surplus is destroyed, only moved. Surplus is simply redistributed from the worker to the government, so the worker bears the entirety of the tax. Note that this is true regardless of who actually writes the check; I didn’t even have to include that in the model. Once we know that there was a tax imposed on each hour of work, the market prices decided who would bear the burden of that tax. By Jove, we’ve actually found an example in which it’s fair to say “the government is taking my hard-earned money!” (I’m fairly certain if you replied to such people with “So you think your supply elasticity is zero but your employer’s demand elasticity is infinite?” you would be met with blank stares or worse.) This is however quite an extreme case. Let’s try a more realistic example, where supply elasticity is very small, but not zero, and demand elasticity is very high, but not infinite. I’ve made the demand elasticity -10 and the supply elasticity 0.5 for this example. Before the tax, the wage was$20 for 40 hours of work. The worker received a producer surplus of $700. The employer received a consumer surplus of only$80. The reason their demand is so elastic is that they are only barely getting more from each hour of work than they have to pay.
Total surplus is $780. After the tax, the number of hours worked has dropped to 35. The “pre-tax” (demand) wage has only risen to$20.25. The after-tax (supply) wage the worker actually receives has dropped all the way to $10. The employer’s surplus has only fallen to$65.63, a decrease of $14.37 or 18%. Meanwhile the worker’s surplus has fallen all the way to$325, a decrease of $275 or 46%. The employer does feel the tax, but in both absolute and relative terms, the worker feels the tax much more than the employer does. The tax revenue is$358.75, which means that the total surplus has been reduced to $749.38. There is now$30.62 of deadweight loss. Where both elasticities are finite and nonzero, deadweight loss is basically inevitable.
In this more realistic example, the burden was shared somewhat, but it still mostly fell on the worker, because the worker had a much lower elasticity. Let’s try turning the tables and making demand elasticity low while supply elasticity is high—in fact, once again let’s illustrate by using the extreme case of zero versus infinity.
In order to do this, I need to also set a maximum wage the employer is willing to pay. With nonzero elasticity, that maximum sort of came out automatically when the demand curve hits zero; but when elasticity is zero, the line is parallel so it never crosses. Let’s say in this case that the maximum is $50 per hour. (Think about why we didn’t need to set a minimum wage for the worker when supply was perfectly inelastic—there already was a minimum, zero.) This graph looks deceptively similar to the previous; basically all that has happened is the supply and demand curves have switched places, but that makes all the difference. Now instead of the worker getting all the surplus, it’s the employer who gets all the surplus. At their maximum wage of$50, they are getting $1200 in surplus. Now let’s impose that same 50% tax again. The worker will not accept any wage less than$20, so the demand wage must rise all the way to $40. The government will then receive$800 in revenue, while the employer will only get $400 in surplus. Notice again that the deadweight loss is zero. The employer will now bear the entire burden of the tax. In this case the “pre-tax” wage is basically meaningless; regardless of the value of the tax the worker would receive the same amount, and the “pre-tax” wage is really just an accounting mechanism the government uses to say how large the tax is. They could just as well have said, “Hey employer, give us$800!” and the outcome would be the same. This is called a lump-sum tax, and they don’t work in the real world but are sometimes used for comparison. The thing about a lump-sum tax is that it doesn’t distort prices in any way, so in principle you could use it to redistribute wealth however you want. But in practice, there’s no way to implement a lump-sum tax that would be large enough to raise sufficient revenue but small enough to be affordable by the entire population. Also, a lump-sum tax is extremely regressive, hurting the poor tremendously while the rich feel nothing. (Actually the closest I can think of to a realistic lump-sum tax would be a basic income, which is essentially a negative lump-sum tax.)
I could keep going with more examples, but the basic argument is the same.
In general what you will find is that the person who bears a tax is the person who has the most to lose if less of that good is sold. This will mean their supply or demand is very inelastic and their surplus is very large.
Inversely, the person who doesn’t feel the tax is the person who has the least to lose if the good stops being sold. That will mean their supply or demand is very elastic and their surplus is very small.
Once again, it really does not matter how the tax is collected. It could be taken entirely from the employer, or entirely from the worker, or shared 50-50, or 60-40, or whatever. As long as it actually does get paid, the person who will actually feel the tax depends upon the structure of the market, not the method of tax collection. Raising “employer contributions” to payroll taxes won’t actually make workers take any more home; their “pre-tax” wages will simply be adjusted downward to compensate. Likewise, raising the “employee contribution” won’t actually put more money in the pockets of the corporation, it will just force them to raise wages to avoid losing employees. The actual amount that each party must contribute to the tax isn’t based on how the checks are written; it’s based on the elasticities of the supply and demand curves.
And that’s why I actually can’t get that strongly behind corporate taxes; even though they are formally collected from the corporation, they could simply be hurting customers or employees. We don’t actually know; we really don’t understand the incidence of corporate taxes. I’d much rather use income taxes or even sales taxes, because we understand the incidence of those.
# Tax incidence revisited, part 4: Surplus and deadweight loss
JDN 2457355
I’ve already mentioned the fact that taxation creates deadweight loss, but in order to understand tax incidence it’s important to appreciate exactly how this works.
Deadweight loss is usually measured in terms of total economic surplus, which is a strange and deeply-flawed measure of value but relatively easy to calculate.
Surplus is based upon the concept of willingness-to-pay; the value of something is determined by the maximum amount of money you would be willing to pay for it.
This is bizarre for a number of reasons, and I think the most important one is that people differ in how much wealth they have, and therefore in their marginal utility of wealth. $1 is worth more to a starving child in Ghana than it is to me, and worth more to me than it is to a hedge fund manager, and worth more to a hedge fund manager than it is to Bill Gates. So when you try to set what something is worth based on how much someone will pay for it, which someone are you using? People also vary, of course, in how much real value a good has to them: Some people like dark chocolate, some don’t. Some people love spicy foods and others despise them. Some people enjoy watching sports, others would rather read a book. A meal is worth a lot more to you if you haven’t eaten in days than if you just ate half an hour ago. That’s not actually a problem; part of the point of a market economy is to distribute goods to those who value them most. But willingness-to-pay is really the product of two different effects: The real effect, how much utility the good provides you; and the wealth effect, how your level of wealth affects how much you’d pay to get the same amount of utility. By itself, willingness-to-pay has no means of distinguishing these two effects, and actually I think one of the deepest problems with capitalism is that ultimately capitalism has no means of distinguishing these two effects. Products will be sold to the highest bidder, not the person who needs it the most—and that’s why Americans throw away enough food to end world hunger. But for today, let’s set that aside. Let’s pretend that willingness-to-pay is really a good measure of value. One thing that is really nice about it is that you can read it right off the supply and demand curves. When you buy something, your consumer surplus is the difference between your willingness-to-pay and how much you actually did pay. If a sandwich is worth$10 to you and you pay $5 to get it, you have received$5 of consumer surplus.
When you sell something, your producer surplus is the difference between how much you were paid and your willingness-to-accept, which is the minimum amount of money you would accept to part with it. If making that sandwich cost you $2 to buy ingredients and$1 worth of your time, your willingness-to-accept would be $3; if you then sell it for$5, you have received $2 of producer surplus. Total economic surplus is simply the sum of consumer surplus and producer surplus. One of the goals of an efficient market is to maximize total economic surplus. Let’s return to our previous example, where a 20% tax raised the original wage from$22.50 and thus resulted in an after-tax wage of $18. Before the tax, the supply and demand curves looked like this: Consumer surplus is the area below the demand curve, above the price, up to the total number of goods sold. The basic reasoning behind this is that the demand curve gives the willingness-to-pay for each good, which decreases as more goods are sold because of diminishing marginal utility. So what this curve is saying is that the first hour of work was worth$40 to the employer, but each following hour was worth a bit less, until the 10th hour of work was only worth $35. Thus the first hour gave$40-$20 =$20 of surplus, while the 10th hour only gave $35-$20 = $15 of surplus. Producer surplus is the area above the supply curve, below the price, again up to the total number of goods sold. The reasoning is the same: If the first hour of work cost$5 worth of time but the 10th hour cost $10 worth of time, the first hour provided$20-$5 =$15 in producer surplus, but the 10th hour only provided $20-$10 = $10 in producer surplus. Imagine drawing a little 1-pixel-wide line straight down from the demand curve to the price for each hour and then adding up all those little lines into the total area under the curve, and similarly drawing little 1-pixel-wide lines straight up from the supply curve. The employer was paying$20 * 40 = $800 for an amount of work that they actually valued at$1200 (the total area under the demand curve up to 40 hours), so they benefit by $400. The worker was being paid$800 for an amount of work that they would have been willing to accept $480 to do (the total area under the supply curve up to 40 hours), so they benefit$320. The sum of these is the total surplus $720. After the tax, the employer is paying$22.50 * 35 = $787.50, but for an amount of work that they only value at$1093.75, so their new surplus is only $306.25. The worker is receiving$18 * 35 = $630, for an amount of work they’d have been willing to accept$385 to do, so their new surplus is $245. Even when you add back in the government revenue of$4.50 * 35 = $157.50, the total surplus is still only$708.75. What happened to that extra $11.25 of value? It simply disappeared. It’s gone. That’s what we mean by “deadweight loss”. That’s why there is a downside to taxation. How large the deadweight loss is depends on the precise shape of the supply and demand curves, specifically on how elastic they are. Remember that elasticity is the proportional change in the quantity sold relative to the change in price. If increasing the price 1% makes you want to buy 2% less, you have a demand elasticity of -2. (Some would just say “2”, but then how do we say it if raising the price makes you want to buy more? The Law of Demand is more like what you’d call a guideline.) If increasing the price 1% makes you want to sell 0.5% more, you have a supply elasticity of 0.5. If supply and demand are highly elastic, deadweight loss will be large, because even a small tax causes people to stop buying and selling a large amount of goods. If either supply or demand is inelastic, deadweight loss will be small, because people will more or less buy and sell as they always did regardless of the tax. I’ve filled in the deadweight loss with brown in each of these graphs. They are designed to have the same tax rate, and the same price and quantity sold before the tax. When supply and demand are elastic, the deadweight loss is large: But when supply and demand are inelastic, the deadweight loss is small: Notice that despite the original price and the tax rate being the same, the tax revenue is also larger in the case of inelastic supply and demand. (The total surplus is also larger, but it’s generally thought that we don’t have much control over the real value and cost of goods, so we can’t generally make something more inelastic in order to increase total surplus.) Thus, all other things equal, it is better to tax goods that are inelastic, because this will raise more tax revenue while producing less deadweight loss. But that’s not all that elasticity does! At last, the end of our journey approaches: In the next post in this series, I will explain how elasticity affects who actually ends up bearing the burden of the tax. # What you need to know about tax incidence JDN 2457152 EDT 14:54. I said in my previous post that I consider tax incidence to be one of the top ten things you should know about economics. If I actually try to make a top ten list, I think it goes something like this: 1. Supply and demand 2. Monopoly and oligopoly 3. Externalities 4. Tax incidence 5. Utility, especially marginal utility of wealth 6. Pareto-efficiency 7. Risk and loss aversion 8. Biases and heuristics, including sunk-cost fallacy, scope neglect, herd behavior, anchoring and representative heuristic 9. Asymmetric information 10. Winner-takes-all effect So really tax incidence is in my top five things you should know about economics, and yet I still haven’t talked about it very much. Well, today I will. The basic principles of supply and demand I’m basically assuming you know, but I really should spend some more time on monopoly and externalities at some point. Why is tax incidence so important? Because of one central fact: The person who pays the tax is not the person who writes the check. It doesn’t matter whether a tax is paid by the buyer or the seller; it matters what the buyer and seller can do to avoid the tax. If you can change your behavior in order to avoid paying the tax—buy less stuff, or buy somewhere else, or deduct something—you will not bear the tax as much as someone else who can’t do anything to avoid the tax, even if you are the one who writes the check. If you can avoid it and they can’t, other parties in the transaction will adjust their prices in order to eat the tax on your behalf. Thus, if you have a good that you absolutely must buy no matter what—like, say, table saltand then we make everyone who sells that good pay an extra$5 per kilogram, I can guarantee you that you will pay an extra $5 per kilogram, and the suppliers will make just as much money as they did before. (A salt tax would be an excellent way to redistribute wealth from ordinary people to corporations, if you’re into that sort of thing. Not that we have any trouble doing that in America.) On the other hand, if you have a good that you’ll only buy at a very specific price—like, say, fast food—then we can make you write the check for a tax of an extra$5 per kilogram you use, and in real terms you’ll pay hardly any tax at all, because the sellers will either eat the cost themselves by lowering the prices or stop selling the product entirely. (A fast food tax might actually be a good idea as a public health measure, because it would reduce production and consumption of fast food—remember, heart disease is one of the leading causes of death in the United States, making cheeseburgers a good deal more dangerous than terrorists—but it’s a bad idea as a revenue measure, because rather than pay it, people are just going to buy and sell less.)
In the limit in which supply and demand are both completely fixed (perfectly inelastic), you can tax however you want and it’s just free redistribution of wealth however you like. In the limit in which supply and demand are both locked into a single price (perfectly elastic), you literally cannot tax that good—you’ll just eliminate production entirely. There aren’t a lot of perfectly elastic goods in the real world, but the closest I can think of is cash. If you instituted a 2% tax on all cash withdrawn, most people would stop using cash basically overnight. If you want a simple way to make all transactions digital, find a way to enforce a cash tax. When you have a perfect substitute available, taxation eliminates production entirely.
To really make sense out of tax incidence, I’m going to need a lot of a neoclassical economists’ favorite thing: Supply and demand curves. These things pop up everywhere in economics; and they’re quite useful. I’m not so sure about their application to things like aggregate demand and the business cycle, for example, but today I’m going to use them for the sort of microeconomic small-market stuff that they were originally designed for; and what I say here is going to be basically completely orthodox, right out of what you’d find in an ECON 301 textbook.
Let’s assume that things are linear, just to make the math easier. You’d get basically the same answers with nonlinear demand and supply functions, but it would be a lot more work. Likewise, I’m going to assume a unit tax on goods—like $2890 per hectare—as opposed to a proportional tax on sales—like 6% property tax—again, for mathematical simplicity. The next concept I’m going to have to talk about is elasticitywhich is the proportional amount that quantity sold changes relative to price. If price increases 2% and you buy 4% less, you have a demand elasticity of -2. If price increases 2% and you buy 1% less, you have a demand elasticity of -1/2. If price increases 3% and you sell 6% more, you have a supply elasticity of 2. If price decreases 5% and you sell 1% less, you have a supply elasticity of 1/5. Elasticity doesn’t have any units of measurement, it’s just a number—which is part of why we like to use it. It also has some very nice mathematical properties involving logarithms, but we won’t be needing those today. The price that renters are willing and able to pay, the demand price PD will start at their maximum price, the reserve price PR, and then it will decrease linearly according to the quantity of land rented Q, according to a linear function (simply because we assumed that) which will vary according to a parameter e that represents the elasticity of demand (it isn’t strictly equal to it, but it’s sort of a linearization). We’re interested in what is called the consumer surplus; it is equal to the total amount of value that buyers get from their purchases, converted into dollars, minus the amount they had to pay for those purchases. This we add to the producer surplus, which is the amount paid for those purchases minus the cost of producing themwhich is basically just the same thing as profit. Togerther the consumer surplus and producer surplus make the total economic surplus, which economists generally try to maximize. Because different people have different marginal utility of wealth, this is actually a really terrible idea for deep and fundamental reasons—taking a house from Mitt Romney and giving it to a homeless person would most definitely reduce economic surplus, even though it would obviously make the world a better place. Indeed, I think that many of the problems in the world, particularly those related to inequality, can be traced to the fact that markets maximize economic surplus rather than actual utility. But for now I’m going to ignore all that, and pretend that maximizing economic surplus is what we want to do. You can read off the economic surplus straight from the supply and demand curves; it’s the area between the lines. (Mathematically, it’s an integral; but that’s equivalent to the area under a curve, and with straight lines they’re just triangles.) I’m going to call the consumer surplus just “surplus”, and producer surplus I’ll call “profit”. Below the demand curve and above the price is the surplus, and below the price and above the supply curve is the profit: I’m going to be bold here and actually use equations! Hopefully this won’t turn off too many readers. I will give each equation in both a simple text format and in proper LaTeX. Remember, you can render LaTeX here. PD = PR – 1/e * Q P_D = P_R – \frac{1}{e} Q \\ The marginal cost that landlords have to pay, the supply price PS, is a bit weirder, as I’ll talk about more in a moment. For now let’s say that it is a linear function, starting at zero cost for some quantity Q0 and then increases linearly according to a parameter n that similarly represents the elasticity of supply. PS = 1/n * (Q – Q0) P_S = \frac{1}{n} \left( Q – Q_0 \right) \\ Now, if you introduce a tax, there will be a difference between the price that renters pay and the price that landlords receive—namely, the tax, which we’ll call T. I’m going to assume that, on paper, the landlord pays the whole tax. As I said above, this literally does not matter. I could assume that on paper the renter pays the whole tax, and the real effect on the distribution of wealth would be identical. All we’d have to do is set PD = P and PS = P – T; the consumer and producer surplus would end up exactly the same. Or we could do something in between, with P’D = P + rT and P’S = P – (1 – r) T. Then, if the market is competitive, we just set the prices equal, taking the tax into account: P = PD – T = PR – 1/e * Q – T = PS = 1/n * (Q – Q0) P= P_D – T = P_R – \frac{1}{e} Q – T= P_S = \frac{1}{n} \left(Q – Q_0 \right) \\ P_R – 1/e * Q – T = 1/n * (Q – Q0) P_R – \frac{1}{e} Q – T = \frac{1}{n} \left(Q – Q_0 \right) \\ Notice the equivalency here; if we set P’D = P + rT and P’S = P – (1 – r) T, so that the consumer now pays a fraction of the tax r. P = P’D – rT = P_r – 1/e*Q = P’S + (1 – r) T + 1/n * (Q – Q0) + (1 – r) T P^\prime_D – r T = P = P_R – \frac{1}{e} Q = P^\prime_S = \frac{1}{n} \left(Q – Q_0 \right) + (1 – r) T\\ The result is exactly the same: P_R – 1/e * Q – T = 1/n * (Q – Q0) P_R – \frac{1}{e} Q – T = \frac{1}{n} \left(Q – Q_0 \right) \\ I’ll spare you the algebra, but this comes out to: Q = (PR – T)/(1/n + 1/e) + (Q0)/(1 + n/e) Q = \frac{P_R – T}{\frac{1}{n} + \frac{1}{e}} + \frac{Q_0}{1 + \frac{n}{e}} P = (PR – T)/(1+ n/e) – (Q0)/(e + n) P = \frac{P_R – T}}{1 + \frac{n}{e}} – \frac{Q_0}{e+n} \\ That’s if the market is competitive. If the market is a monopoly, instead of setting the prices equal, we set the price the landlord receives equal to the marginal revenue—which takes into account the fact that increasing the amount they sell forces them to reduce the price they charge everyone else. Thus, the marginal revenue drops faster than the price as the quantity sold increases. After a bunch of algebra (and just a dash of calculus), that comes out to these very similar, but not quite identical, equations: Q = (PR – T)/(1/n + 2/e) + (Q0)/(1+ 2n/e) Q = \frac{P_R – T}{\frac{1}{n} + \frac{2}{e}} + \frac{Q_0}{1 + \frac{2n}{e}} \\ P = (PR – T)*((1/n + 1/e)/(1/n + 2/e) – (Q0)/(e + 2n) P = \left( P_R – T\right)\frac{\frac{1}{n} + \frac{1}{e}}{\frac{1}{n} + \frac{2}{e}} – \frac{Q_0}{e+2n} \\ Yes, it changes some 1s into 2s. That by itself accounts for the full effect of monopoly. That’s why I think it’s worthwhile to use the equations; they are deeply elegant and express in a compact form all of the different cases. They look really intimidating right now, but for most of the cases we’ll consider these general equations simplify quite dramatically. There are several cases to consider. Land has an extremely high cost to create—for practical purposes, we can consider its supply fixed, that is, perfectly inelastic. If the market is competitive, so that landlords have no market power, then they will simply rent out all the land they have at whatever price the market will bear: This is like setting n = 0 and T = 0 in the above equations, the competitive ones. Q = Q0 Q = Q_0 \\ P = PR – Q0/e P = P_R – \frac{Q_0}{e} \\ If we now introduce a tax, it will fall completely on the landlords, because they have little choice but to rent out all the land they have, and they can only rent it at a price—including tax—that the market will bear. Now we still have n = 0 but not T = 0. Q = Q0 Q = Q_0 \\ P = PR – T – Q0/e P = P_R – T – \frac{Q_0}{e} \\ The consumer surplus will be: ½ (Q)(PR – P – T) = 1/(2e)* Q02 \frac{1}{2}Q(P_R – P – T) = \frac{1}{2e}Q_0^2 \\ Notice how T isn’t in the result. The consumer surplus is unaffected by the tax. The producer surplus, on the other hand, will be reduced by the tax: (Q)(P) = (PR – T – Q0/e) Q0 = PR Q0 – 1/e Q02 – TQ0 (Q)(P) = (P_R – T – \frac{Q_0}{e})Q_0 = P_R Q_0 – \frac{1}{e} Q_0^2 – T Q_0 \\ T appears linearly as TQ0, which is the same as the tax revenue. All the money goes directly from the landlord to the government, as we want if our goal is to redistribute wealth without raising rent. But now suppose that the market is not competitive, and by tacit collusion or regulatory capture the landlords can exert some market power; this is quite likely the case in reality. Actually in reality we’re probably somewhere in between monopoly and competition, either oligopoly or monopolistic competitionwhich I will talk about a good deal more in a later post, I promise. It could be that demand is still sufficiently high that even with their market power, landlords have an incentive to rent out all their available land, in which case the result will be the same as in the competitive market. A tax will then fall completely on the landlords as before: Indeed, in this case it doesn’t really matter that the market is monopolistic; everything is the same as it would be under a competitive market. Notice how if you set n = 0, the monopolistic equations and the competitive equations come out exactly the same. The good news is, this is quite likely our actual situation! So even in the presence of significant market power the land tax can redistribute wealth in just the way we want. But there are a few other possibilities. One is that demand is not sufficiently high, so that the landlords’ market power causes them to actually hold back some land in order to raise the price: This will create some of what we call deadweight loss, in which some economic value is wasted. By restricting the land they rent out, the landlords make more profit, but the harm they cause to tenant is created than the profit they gain, so there is value wasted. Now instead of setting n = 0, we actually set n = infinity. Why? Because the reason that the landlords restrict the land they sell is that their marginal revenue is actually negative beyond that point—they would actually get less money in total if they sold more land. Instead of being bounded by their cost of production (because they have none, the land is there whether they sell it or not), they are bounded by zero. (Once again we’ve hit upon a fundamental concept in economics, particularly macroeconomics, that I don’t have time to talk about today: the zero lower bound.) Thus, they can change quantity all they want (within a certain range) without changing the price, which is equivalent to a supply elasticity of infinity. Introducing a tax will then exacerbate this deadweight loss (adding DWL2 to the original DWL1), because it provides even more incentive for the landlords to restrict the supply of land: Q = e/2*(PR – T) Q = \frac{e}{2} \left(P_R – T\right)\\ P = 1/2*(PR – T) P = \frac{1}{2} \left(P_R – T\right) \\ The quantity Q0 completely drops out, because it doesn’t matter how much land is available (as long as it’s enough); it only matters how much land it is profitable to rent out. We can then find the consumer and producer surplus, and see that they are both reduced by the tax. The consumer surplus is as follows: ½ (Q)(PR – 1/2(PR – T)) = e/4*(PR2 – T2) \frac{1}{2}Q \left( P_R – \frac{1}{2}left( P – T \right) \right) = \frac{e}{4}\left( P_R^2 – T^2 \right) \\ This time, the tax does have an effect on reducing the consumer surplus. The producer surplus, on the other hand, will be: (Q)(P) = 1/2*(PR – T)*e/2*(PR – T) = e/4*(PR – T)2 (Q)(P) = \frac{1}{2}\left(P_R – T \right) \frac{e}{2} \left(P_R – T\right) = \frac{e}{4} \left(P_R – T)^2 \\ Notice how it is also reduced by the tax—and no longer in a simple linear way. The tax revenue is now a function of the demand: TQ = e/2*T(PR – T) T Q = \frac{e}{2} T (P_R – T) \\ If you add all these up, you’ll find that the sum is this: e/2 * (PR^2 – T^2) \frac{e}{2} \left(P_R^2 – T^2 \right) \\ The sum is actually reduced by an amount equal to e/2*T^2, which is the deadweight loss. Finally there is an even worse scenario, in which the tax is so large that it actually creates an incentive to restrict land where none previously existed: Notice, however, that because the supply of land is inelastic the deadweight loss is still relatively small compared to the huge amount of tax revenue. But actually this isn’t the whole story, because a land tax provides an incentive to get rid of land that you’re not profiting from. If this incentive is strong enough, the monopolistic power of landlords will disappear, as the unused land gets sold to more landholders or to the government. This is a way of avoiding the tax, but it’s one that actually benefits society, so we don’t mind incentivizing it. Now, let’s compare this to our current system of property taxes, which include the value of buildings. Buildings are expensive to create, but we build them all the time; the supply of buildings is strongly dependent upon the price at which those buildings will sell. This makes for a supply curve that is somewhat elastic. If the market were competitive and we had no taxes, it would be optimally efficient: Property taxes create an incentive to produce fewer buildings, and this creates deadweight loss. Notice that this happens even if the market is perfectly competitive: Since both n and e are finite and nonzero, we’d need to use the whole equations: Since the algebra is such a mess, I don’t see any reason to subject you to it; but suffice it to say, the T does not drop out. Tenants do see their consumer surplus reduced, and the larger the tax the more this is so. Now, suppose that the market for buildings is monopolistic, as it most likely is. This would create deadweight loss even in the absence of a tax: But a tax will add even more deadweight loss: Once again, we’d need the full equations, and once again it’s a mess; but the result is, as before, that the tax gets passed on to the tenants in the form of more restricted sales and therefore higher rents. Because of the finite supply elasticity, there’s no way that the tax can avoid raising the rent. As long as landlords have to pay more taxes when they build more or better buildings, they are going to raise the rent in those buildings accordingly—whether the market is competitive or not. If the market is indeed monopolistic, there may be ways to bring the rent down: suppose we know what the competitive market price of rent should be, and we can establish rent control to that effect. If we are truly correct about the price to set, this rent control can not only reduce rent, it can actually reduce the deadweight loss: But if we set the rent control too low, or don’t properly account for the varying cost of different buildings, we can instead introduce a new kind of deadweight loss, by making it too expensive to make new buildings. In fact, what actually seems to happen is more complicated than that—because otherwise the number of buildings is obviously far too small, rent control is usually set to affect some buildings and not others. So what seems to happen is that the rent market fragments into two markets: One, which is too small, but very good for those few who get the chance to use it; and the other, which is unaffected by the rent control but is more monopolistic and therefore raises prices even further. This is why almost all economists are opposed to rent control (PDF); it doesn’t solve the problem of high rent and simply causes a whole new set of problems. A land tax with a basic income, on the other hand, would help poor people at least as much as rent control presently does—probably a good deal more—without discouraging the production and maintenance of new apartment buildings. But now we come to a key point: The land tax must be uniform per hectare. If it is instead based on the value of the land, then this acts like a finite elasticity of supply; it provides an incentive to reduce the value of your own land in order to avoid the tax. As I showed above, this is particularly pernicious if the market is monopolistic, but even if it is competitive the effect is still there. One exception I can see is if there are different tiers based on broad classes of land that it’s difficult to switch between, such as “land in Manhattan” versus “land in Brooklyn” or “desert land” versus “forest land”. But even this policy would have to be done very carefully, because any opportunity to substitute can create an opportunity to pass on the tax to someone else—for instance if land taxes are lower in Brooklyn developers are going to move to Brooklyn. Maybe we want that, in which case that is a good policy; but we should be aware of these sorts of additional consequences. The simplest way to avoid all these problems is to simply make the land tax uniform. And given the quantities we’re talking about—less than$3000 per hectare per year—it should be affordable for anyone except the very large landholders we’re trying to distribute wealth from in the first place.
The good news is, most economists would probably be on board with this proposal. After all, the neoclassical models themselves say it would be more efficient than our current system of rent control and property taxes—and the idea is at least as old as Adam Smith. Perhaps we can finally change the fact that the rent is too damn high. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.381691038608551, "perplexity": 957.1075511925275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521876.48/warc/CC-MAIN-20200404103932-20200404133932-00046.warc.gz"} |
http://mathhelpforum.com/differential-geometry/126707-prove-fundamental-theorem-calculus.html | # Thread: Prove the Fundamental Theorem of Calculus
1. ## Prove the Fundamental Theorem of Calculus
Hi,
I would like to know how to prove the following generalization of the Fundamental Theorem of Calculus using the method below.
Suppose there is a finite set E in [a,b] and functions f,φ : [a,b]—> R (reals) such that
(a) φ is continuous on [a,b]
(b) φ'(x) = f(x) for all x ∈[a,b]\E
(c) f ∈R[a,b] (i.e. f is Riemann integrable)
Then
∫f (from a to b) = φ(b) - φ(a)
Method :Start by assuming E = {a,b} and remember that the Mean Value Theorem for derivatives applied to a (closed) interval [c,d], say, requires continuity on all of [c,d] but only requires differentiability on (c,d).
Once you have done it for this special case, explain why it is still true if you add another point to E. The general case then follows.
I am a bit confuse about the method specified by my teacher.
It would be helpful if someone can guide me through this.
Thank you very much.
2. Anyone would have an idea of how my teacher would want the proof to be? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970441997051239, "perplexity": 476.72648508923595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109470.15/warc/CC-MAIN-20170821172333-20170821192333-00113.warc.gz"} |
http://aas.org/archives/BAAS/v30n2/aas192/abs/S067022.html | Session 67 - Stars: Evolution, Atmospheres, Intrinsic.
Display session, Thursday, June 11
Atlas Ballroom,
## [67.22] [Fe/H] Calibration of \delta Scuti Variables Using Caby Photometry
M. L. Hintz, M. D. Joner, E. G. Hintz, C. G. Christensen (Brigham Young University)
The hk index has been used as a metallicity indicator for RR Lyrae variable stars (Baird 1996, AJ, 112, 2132). Baird found a relationship between [Fe/H] and hk at a given (b-y). Evidence will be given that for (b-y) < 0.200, the \log g affects the [Fe/H]-hk relationship. Employing spectroscopic abundances of stars with published hk values and Kurucz models, a 3-D interpolation is used to determine [Fe/H] from (b-y), c_1 and hk values. The resulting [Fe/H], \log g and T_eff values for 11 \delta Scuti stars will be presented. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8073530793190002, "perplexity": 25327.26992663057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663612.31/warc/CC-MAIN-20140930004103-00225-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHCJ_2013_v28n4_649 | AN EXPLICIT FORMULA FOR THE NUMBER OF SUBGROUPS OF A FINITE ABELIAN p-GROUP UP TO RANK 3
Title & Authors
AN EXPLICIT FORMULA FOR THE NUMBER OF SUBGROUPS OF A FINITE ABELIAN p-GROUP UP TO RANK 3
Oh, Ju-Mok;
Abstract
In this paper we give an explicit formula for the total number of subgroups of a finite abelian $\small{p}$-group up to rank three.
Keywords
enumeration;subgroup;abelian p-group;
Language
English
Cited by
References
1.
G. Calugareanu, The total number of subgroups of a finite abelian group, Sci. Math. Jpn. 60 (2004), no. 1, 157-167.
2.
I. J. Davies, Enumeration of certain subgroups of abelian p-groups, Proc. Edinburgh Math. Soc. 13 (1962), no. 2, 1-4.
3.
S. Delsarte, Fonctions de Mobius sur les groupes abeliens finis, Ann. of Math. 49 (1948), no. 2, 600-609.
4.
P. Dyubyuk, On the number of subgroups of a finite abelian group, Soviet Math. 2 (1961), 298-300.
5.
J. Petrillo, Counting subgroups in a direct product of finite cyclic groups, College Math. J. 42 (2011), no. 3, 215-222.
6.
Y. Yeh, On prime power abelian groups, Bull. Amer. Math. Soc. 54 (1948), 323-327. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.767047643661499, "perplexity": 259.22927447029684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500550975184.95/warc/CC-MAIN-20170728163715-20170728183715-00324.warc.gz"} |
https://math.stackexchange.com/questions/3741839/hoeffdings-inequality-for-sum-of-bernoulli-random-variables | # Hoeffding's Inequality for sum of Bernoulli random variables
In the book High-Dimensional Probability, by Roman Vershynin, the Hoeffding's Inequality is stated as the following:
Let $$X_1,...,X_N$$ be independent symmetric Bernoulli random variables (e.i $$P(X=-1)=P(X=1)=1/2$$), and let $$a = (a_1,...,a_N) \in \mathbb R^N$$. Then, for any $$t \geq 0$$, we have $$P\left(\sum^N_{i=1}a_i X_i \geq t \right) \leq e^{\frac{-t^2}{2||a||_2^2}}$$
The author then claims that for a fair coin, one can transform the symmetric Bernoulli into a regular Bernoulli (e.g $$Y = 2X - 1$$) and use Hoeffding's Inequality to show that the probability of getting at least $$3N/4$$ heads in $$N$$ coin tosses has an exponential decay, hence:
$$P\left(\sum^N_{i=1}Y_i \geq\frac{3N}{4} \right) \leq e^{-\frac{N}{8}}$$
I've tried to arrive at such bound, but my calculations are yielding a differnt result. Here is what I've tried:
Since $$Y_i = 2X_i -1$$, therefore $$P\left(\sum^N_{i=1}Y_i \geq\frac{3N}{4} \right) = P\left(2\left(\sum^N_{i=1}X_i\right) - N \geq\frac{3N}{4} \right) = P\left(\sum^N_{i=1}X_i \geq\frac{7N}{8} \right) \leq e^{-\frac{7^2 N^2}{2\cdot 8^2 N}}$$
Can someone help me understand what I'm doing wrong and perhaps show how to properly do this?
• It's Hoeffding Jul 1 '20 at 23:52
• unless i'm missing something, it looks like you have proved an ever stronger inequality, considering that -7^2/8^2<-1/8 Jul 2 '20 at 0:29
• Hey Mike. There was an error in the calculation. Ive fixed it. But still, as you say, the bound is stronger. I wonder if it's correct though, since it differs from what the book shows. Cause I don't see why the book would give a weaker bound. Jul 2 '20 at 10:55
After quite sometime I realized what is wrong with the calculations in the question and how to get the correct result. First, the error in the above calculation.
The Bernoulli variable is actually $$X_i$$ not $$Y_i$$, so the correct probability for a fair coin to give more that $$\frac{3N}{4}$$ is $$P\left( \sum^N_{i=1}X_i \geq \frac{3N}{4} \right) \leq exp(-N/8)$$
Now, here is the proper solution:
$$P\left( \sum^N_{i=1}X_i \geq \frac{3N}{4} \right)= P\left( \sum^N_{i=1}\frac{(Y_i+1)}{2} \geq \frac{3N}{4} \right)=$$ $$= P\left( \sum^N_{i=1}Y_i \geq \frac{3N}{2}-N \right)\leq exp\left( \frac{-(\frac{3N}{2}-N)^2}{N} \right) = exp(-N/8)$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9462971091270447, "perplexity": 194.86407579765293}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00627.warc.gz"} |
http://christianity.stackexchange.com/users/1654/mpiktas | # mpiktas
less info
reputation
1
bio website vzemlys.wordpress.com location Vilnius, Lithuania age 33 member for 2 years, 2 months seen May 18 '12 at 11:07 profile views 2
* denotes convolution, $\cdot$ denotes multiplication
I am the developer of the midasr R package:
and you cand find me on
This user has not answered any questions
# 0 Questions
This user has not asked any questions
# 0 Tags
This user has not participated in any tags
# 30 Accounts
Cross Validated 17,913 rep 32878 Stack Overflow 1,845 rep 1926 Mathematics 981 rep 721 Quantitative Finance 281 rep 35 Sports 221 rep 16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6564726233482361, "perplexity": 17602.33395110776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272256.16/warc/CC-MAIN-20140728011752-00480-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=2020_AIME_I_Problems/Problem_2&diff=119240&oldid=119239 | # Difference between revisions of "2020 AIME I Problems/Problem 2"
## Problem
There is a unique positive real number such that the three numbers , , and , in that order, form a geometric progression with positive common ratio. The number can be written as , where and are relatively prime positive integers. Find .
## Solution
Since these form a geometric series, is the common ratio. Rewriting this, we get by base change formula. Therefore, the common ratio is 2. Now
. Therefore, .
~ JHawk0224
## Solution 2
If we set , we can obtain three terms of a geometric sequence through logarithm properties. The three terms are In a three-term geometric sequence, the middle term squared is equal to the product of the other two terms, so we obtain the following: which can be solved to reveal . Therefore, , so our answer is .
-molocyxu
## Solution 3
Let be the common ratio. We have
Hence we obtain Ideally we change everything to base and we can get: Now divide to get: By change-of-base we obtain: Hence and we have as desired.
~skyscraper | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9771999716758728, "perplexity": 490.29829213906004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154432.2/warc/CC-MAIN-20210803061431-20210803091431-00238.warc.gz"} |
https://infoscience.epfl.ch/record/135395 | Infoscience
Journal article
# SRC-1 and TIF2 control energy balance between white and brown adipose tissues
We have explored the effects of two members of the p160 coregulator family on energy homeostasis. TIF2-/- mice are protected against obesity and display enhanced adaptive thermogenesis, whereas SRC-1-/- mice are prone to obesity due to reduced energy expenditure. In white adipose tissue, lack of TIF2 decreases PPARgamma activity and reduces fat accumulation, whereas in brown adipose tissue it facilitates the interaction between SRC-1 and PGC-1alpha, which induces PGC-1alpha's thermogenic activity. Interestingly, a high-fat diet increases the TIF2/SRC-1 expression ratio, which may contribute to weight gain. These results reveal that the relative level of TIF2/SRC-1 can modulate energy metabolism. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606307506561279, "perplexity": 16454.414594299247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00497-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/316316/image-of-morphism-of-quasi-categories | # Image of morphism of quasi-categories
I have two questions about images of morphisms of quasi-categories.
Suppose that $$f\colon X \to Y$$ is a morphism of quasi-categories.
1. Suppose that we calculate the image of $$f$$ in the category $$\mathsf{QCat}$$, using the universal property of image. Will this be a Joyal-fibrant replacement of the image as calculated in $$\mathsf{SSet}$$?
2. If $$X = N(C)$$ and $$Y = N(D)$$ are (nerves of) categories, will the image of $$f$$ be the nerve of the essential image of the functor $$C \to D$$?
In case the answer to one of these questions is no, can the statements be adjusted to become true?
• (I'm going to assume that with $\mathrm{QCat}$ you mean the 1-category of quasicategories) The answer to question 2 is no, for the same reason for which the image of a functor of categories is not the essential image: you are missing the objects that are equivalent to objects in the image but not in the image. I don't understand why you are taking the image in $\mathrm{QCat}$ though, it's not typically a useful thing to do. Maybe you can explain a bit your motivation? – Denis Nardin Nov 27 '18 at 10:36
• Does a 1-categorical image in $QCat$ even exist? – Mike Shulman Nov 27 '18 at 18:32
• I guess if you identify two $2$-simplices along their boundary (and fibrant replace), then there is no least sub-quasicategory containing the 1-skeleton, since the image of each $2$-simplex gives a sub-quasicategory. There is in fact no sub-quasicategory Joyal equivalent to the image, since every triangle commutes. – Kevin Arlin Nov 27 '18 at 19:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238890409469604, "perplexity": 331.23729291799293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00791.warc.gz"} |
http://mathoverflow.net/questions/108581/number-of-normal-subgroups-in-a-p-group?sort=votes | # Number of Normal subgroups In a p-Group
Dear all,
Does someone know of any paper/method that enables us counting/estimating the number of normal subgroups of some p-group of order $p ^n$ ($n$ is some natural number ? ) .
Is there anyway we can count the maximal subgroups it has (i.e.- the groups of order $p^{n-1}$ ? ) ?
-
en.wikipedia.org/wiki/Hall_algebra In abelian groups count of subgroups with fixed factor is related to Hall-Littlewood polynoms. What happens for non-abelian - I asked MO: mathoverflow.net/questions/107537/… with no reply – Alexander Chervov Oct 2 '12 at 5:18
Same question simultaneously asked at m.se, math.stackexchange.com/questions/205681/… --- coincidence? – Gerry Myerson Oct 2 '12 at 5:51
@Alexander Chevov: Thanks ! I had no idea about the Hall Algebra notion... But I'm still skeptic about it... Have you got any paper the gives some more details about it? Thanks again! – Jason Mraz Oct 2 '12 at 12:38
I think MacDonald's book "Symmetric functions ... " discuss this... I am not sure - I can send you file of the book, if you need. There have been modern developments about Hall algebras, which go very very far from p-groups - they are surveyed in arxiv.org/abs/math/0611617 – Alexander Chervov Oct 3 '12 at 11:00
That's excatly the thing... I only need kind of "simple" estimates and bounds on the number of subgroups... I'll try to go over the lecture notes you sent and I might find something useful in them... Thanks a lot ! (I'll try to look for the book you mentioned) – Jason Mraz Oct 3 '12 at 19:03
For a $p$-group $P$, the number of maximal subgroups is $\sum_{k=0}^r p^k$ where $r$ is the minimum size of a generating set for $P$. You can see this from looking at the maximal subgroups of $P/\Phi(P)$, which is elementary abelian of order $p^r$.
What I can tell you is that there is at least one normal subgroup for every power of $p$ up to the order of the group. Sylow theory style orbit counting gives us that the number of normal subgroups of each order $p^k$ is going to be congruent to $1 \mod{p}$, so the total number of normal subgroups in a $p$-group of order $p^n$ will then be congruent to $n+1 \mod{p}$.
EDIT: I thought of a bound.
$n+1$ is the lower bound, attained by the cyclic group of order $p^n$. There must be at least one normal subgroup for every prime power divisor, so this is the lowest it can go.
On the other hand, I claim that elementary abelian groups $E_{p^n}$ contain the largest number of normal subgroups. This is because it has the maximum rank of all groups of order $p^n$. Thinking of $E_{p^n}$ as an $\mathbb{F_p}$-vector space, we obtain the number of subspaces by $$\mathcal{N}(E_{p^n})=\sum_{m=0}^{n}\prod_{k=0}^{m-1}\frac{p^n-p^k}{p^m-p^k}.$$ Here we count the number of ordered combinations of $m$ linearly independent vectors in $\mathbb{F_p}^n$, then divide by the number of possible bases of an $m$-dimensional subspace. Summing over $m$ we have the total number of normal subgroups in $E_{p^n}$.
-
Forgive my ignorance: how do Sylow theorems give you information about subgroups of a p-group? – Nick Gill Oct 2 '12 at 8:37
Indeed, in a cyclic group G of order p, there are two normal subgroups: G and {1}. And 2 is not 1 mod p... Do you mean the number of normal subgroups OF A GIVEN ORDER will be 1 mod p? I could maybe believe that but I'd have to think about how I'd prove it. – Nick Gill Oct 2 '12 at 8:41
From Sylow Theorem, we see that the number of subgroups of a given order in a finite $p$-group is congruent to 1 mod $p$. Maybe Alexander means this. – Wei Zhou Oct 2 '12 at 9:48
@Wei Zhou, I've never seen Sylow theorems applied to subgroups of a $p$-group. So, while I can believe the result you state, I'm not sure how you use Sylow to prove it. Maybe it's just that the method by which we prove the $1\mod p$ part of the Sylow theorems can be applied here (?). – Nick Gill Oct 2 '12 at 13:35
@Nick: Let $G$ be a $p$-group of order $p^n$, and S the set of all subgroups of order $p^m$. Let $P \in S$. Then $P$ can acts on $S$ by conjugation. By counting the orbits of this action, we see $|S|$ congruent to 1. This trick is use to prove Sylow theorem by someone. So in some book I can not find, I think this is also called Sylow theorem. – Wei Zhou Oct 2 '12 at 15:50
The wikipedia article on p-groups reminded me that
Every normal subgroup of a finite p-group intersects the center nontrivially.
This implies immediately that minimal normal subgroups of a p-group $G$ will be central. This fact can be used to prove the statement that Wei Zhou made:
A $p$-group of maximal class and size $p^n$ has the least number of normal subgroups of all groups of order $p^n$.
(If I'm thinking straight this number is $n+1$ and the bound is also achieved by the cyclic group of order $p^n$.)
It seems to me that one might be able to prove something a little stronger using an inductive argument: counting the minimal normal subgroups in the center $Z$, and then counting the normal subgroups in $G/Z$, and then putting these two numbers together... It's that last bit that's going to be tricky though. If the center is cyclic, then everything is fine but when it's not cyclic, eek...
-
Dear @Nick: Thanks a lot ! I 'll be glad if you'll be able to tell me what do you mean by a "group of maximal class" ... After verifying this little detail, I'll reread your answer in order to check again that I understand it... Thanks again! – Jason Mraz Oct 2 '12 at 12:43
By class', I mean nilpotency class' i.e. the length of the upper (or lower) central series. A group $G$ of order $p^n$ (with $n>2$ )is of maximal class if this length is $n-1$; in this case $G$ has center $Z(G)$ cyclic of order $p$, then $G/Z(G)$ has center cyclic of order $p$. This pattern continues until you get to a normal group of index $p^2$ in $G$ at which point one has an abelian quotient. Note that, a priori, there may be more than one isomorphism class of group of order $p^n$ of class $n-1$ - not all of them will necessarily have the minimal number of normal subgroups. – Nick Gill Oct 2 '12 at 13:16
I think that the $p$-groups of maximal class have the least number of normal subgroups except for the cyclic groups. If I'm not mistaken $p$-groups of maximal class and order $p^n$ will have one normal subgroup for each $p$th power up to $p^{n-2}$, then a few normal subgroups of order $p^{n-1}$, as opposed to cyclic groups which of course have unique normal subgroups for every power of $p$. – Alexander Gruber Oct 2 '12 at 22:58
Dear @Nick Gill and @Alexander: Where can I find proofs for the facts you mention? I can't see this straight away... Can you give me some reference for the proof of these facts? Thanks ! – Jason Mraz Oct 3 '12 at 9:16
@Alexander, if $G/ G_1$ is cyclic of order $p^2$ (where $G_1$ is the first term in the lower central series), then I think one gets $n+1$ normal subgroups. HOWEVER I do not know enough about $p$-groups of maximal class to be sure that this can happen! If all $p$-groups of maximal class have $G/G_1$ elementary abelian, then I agree with you. – Nick Gill Oct 3 '12 at 13:09
As I know, for the p-group of maximal class, the number of normal subgroups are known. And the number of normal subgroups in p-group of maximal class the the smallest.
-
Reference please – Alexander Chervov Oct 2 '12 at 15:38
About the minimality of normal subgroup, you can get this fact mentioned in the comment of Alexander Gruber following the answer of Nick Gill, from the paper by N. Blackburn, On a special class of p-groups. By the way, the above paper is an important paper for the theory of p-group – Wei Zhou Oct 3 '12 at 1:19
Thank you ! PS as a remark may say that adding more details to the answers would be more valuable in general and more easy convince the readers to press on +1 button:) – Alexander Chervov Oct 3 '12 at 10:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8016961216926575, "perplexity": 236.98035783118007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157075.54/warc/CC-MAIN-20160205193917-00038-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://explore.openaire.eu/search/publication?articleId=od______1106::0f206d63223c216541c27c5fae90d687 | Surface dispersive energy determined with IGC-ID in anti-graffiti-coated building materials
Article English OPEN
Carmona-Quiroga, Paula María ; Rubio, J. ; Sánchez, M. Jesús ; Martínez-Ramírez, S. ; Blanco-Varela, María Teresa (2011)
• Publisher: Elsevier
• Related identifiers:
• Subject: Inverse gas chromatography | Construction materials | Surface energy | Anti-graffiti coatings | Ormosil | Contact angle
Coating building materials with anti-graffiti treatments hinders or prevents spray paint adherence by generating low energy surfaces. This paper describes the effect of coating cement paste, lime mortar, granite, limestone and brick with two anti-graffiti agents (a water-base fluoroalkylsiloxane, “Protectosil Antigraffiti®”, and a Zr ormosil) on the dispersive component of the surface energy of these five construction materials. The agents were rediluted in their respective solvents at concentrations of 5 and 75% and the values were determined with inverse gas chromatography at infinite dilution (IGC-ID). The dispersive energy of the five materials prior to coating, ranked from highest to lowest, was as follows: limestone > granite > cement paste > brick > lime mortar. After application of the two anti-graffiti compounds, CF3 terminals (Protectosil) were found to reduce the surface energy of both basic (limestone and lime mortar) and acidic (granite) substrates more effectively than CH3 (ormosil) terminals.
Share - Bookmark | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8627129197120667, "perplexity": 24819.433817201178}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745761.75/warc/CC-MAIN-20181119105556-20181119131556-00194.warc.gz"} |
https://stacks.math.columbia.edu/tag/0594 | ## 10.86 Mittag-Leffler systems
The purpose of this section is to define Mittag-Leffler systems and why this is a useful notion.
In the following, $I$ will be a directed set, see Categories, Definition 4.21.1. Let $(A_ i, \varphi _{ji}: A_ j \to A_ i)$ be an inverse system of sets or of modules indexed by $I$, see Categories, Definition 4.21.4. This is a directed inverse system as we assumed $I$ directed (Categories, Definition 4.21.4). For each $i \in I$, the images $\varphi _{ji}(A_ j) \subset A_ i$ for $j \geq i$ form a decreasing directed family of subsets (or submodules) of $A_ i$. Let $A'_ i = \bigcap _{j \geq i} \varphi _{ji}(A_ j)$. Then $\varphi _{ji}(A'_ j) \subset A'_ i$ for $j \geq i$, hence by restricting we get a directed inverse system $(A'_ i, \varphi _{ji}|_{A'_ j})$. From the construction of the limit of an inverse system in the category of sets or modules, we have $\mathop{\mathrm{lim}}\nolimits A_ i = \mathop{\mathrm{lim}}\nolimits A'_ i$. The Mittag-Leffler condition on $(A_ i, \varphi _{ji})$ is that $A'_ i$ equals $\varphi _{ji}(A_ j)$ for some $j \geq i$ (and hence equals $\varphi _{ki}(A_ k)$ for all $k \geq j$):
Definition 10.86.1. Let $(A_ i, \varphi _{ji})$ be a directed inverse system of sets over $I$. Then we say $(A_ i, \varphi _{ji})$ is Mittag-Leffler if for each $i \in I$, the family $\varphi _{ji}(A_ j) \subset A_ i$ for $j \geq i$ stabilizes. Explicitly, this means that for each $i \in I$, there exists $j \geq i$ such that for $k \geq j$ we have $\varphi _{ki}(A_ k) = \varphi _{ji}( A_ j)$. If $(A_ i, \varphi _{ji})$ is a directed inverse system of modules over a ring $R$, we say that it is Mittag-Leffler if the underlying inverse system of sets is Mittag-Leffler.
Example 10.86.2. If $(A_ i, \varphi _{ji})$ is a directed inverse system of sets or of modules and the maps $\varphi _{ji}$ are surjective, then clearly the system is Mittag-Leffler. Conversely, suppose $(A_ i, \varphi _{ji})$ is Mittag-Leffler. Let $A'_ i \subset A_ i$ be the stable image of $\varphi _{ji}(A_ j)$ for $j \geq i$. Then $\varphi _{ji}|_{A'_ j}: A'_ j \to A'_ i$ is surjective for $j \geq i$ and $\mathop{\mathrm{lim}}\nolimits A_ i = \mathop{\mathrm{lim}}\nolimits A'_ i$. Hence the limit of the Mittag-Leffler system $(A_ i, \varphi _{ji})$ can also be written as the limit of a directed inverse system over $I$ with surjective maps.
Lemma 10.86.3. Let $(A_ i, \varphi _{ji})$ be a directed inverse system over $I$. Suppose $I$ is countable. If $(A_ i, \varphi _{ji})$ is Mittag-Leffler and the $A_ i$ are nonempty, then $\mathop{\mathrm{lim}}\nolimits A_ i$ is nonempty.
Proof. Let $i_1, i_2, i_3, \ldots$ be an enumeration of the elements of $I$. Define inductively a sequence of elements $j_ n \in I$ for $n = 1, 2, 3, \ldots$ by the conditions: $j_1 = i_1$, and $j_ n \geq i_ n$ and $j_ n \geq j_ m$ for $m < n$. Then the sequence $j_ n$ is increasing and forms a cofinal subset of $I$. Hence we may assume $I =\{ 1, 2, 3, \ldots \}$. So by Example 10.86.2 we are reduced to showing that the limit of an inverse system of nonempty sets with surjective maps indexed by the positive integers is nonempty. This is obvious. $\square$
The Mittag-Leffler condition will be important for us because of the following exactness property.
$0 \to A_ i \xrightarrow {f_ i} B_ i \xrightarrow {g_ i} C_ i \to 0$
be an exact sequence of directed inverse systems of abelian groups over $I$. Suppose $I$ is countable. If $(A_ i)$ is Mittag-Leffler, then
$0 \to \mathop{\mathrm{lim}}\nolimits A_ i \to \mathop{\mathrm{lim}}\nolimits B_ i \to \mathop{\mathrm{lim}}\nolimits C_ i\to 0$
is exact.
Proof. Taking limits of directed inverse systems is left exact, hence we only need to prove surjectivity of $\mathop{\mathrm{lim}}\nolimits B_ i \to \mathop{\mathrm{lim}}\nolimits C_ i$. So let $(c_ i) \in \mathop{\mathrm{lim}}\nolimits C_ i$. For each $i \in I$, let $E_ i = g_ i^{-1}(c_ i)$, which is nonempty since $g_ i: B_ i \to C_ i$ is surjective. The system of maps $\varphi _{ji}: B_ j \to B_ i$ for $(B_ i)$ restrict to maps $E_ j \to E_ i$ which make $(E_ i)$ into an inverse system of nonempty sets. It is enough to show that $(E_ i)$ is Mittag-Leffler. For then Lemma 10.86.3 would show $\mathop{\mathrm{lim}}\nolimits E_ i$ is nonempty, and taking any element of $\mathop{\mathrm{lim}}\nolimits E_ i$ would give an element of $\mathop{\mathrm{lim}}\nolimits B_ i$ mapping to $(c_ i)$.
By the injection $f_ i: A_ i \to B_ i$ we will regard $A_ i$ as a subset of $B_ i$. Since $(A_ i)$ is Mittag-Leffler, if $i \in I$ then there exists $j \geq i$ such that $\varphi _{ki}(A_ k) = \varphi _{ji}(A_ j)$ for $k \geq j$. We claim that also $\varphi _{ki}(E_ k) = \varphi _{ji}(E_ j)$ for $k \geq j$. Always $\varphi _{ki}(E_ k) \subset \varphi _{ji}(E_ j)$ for $k \geq j$. For the reverse inclusion let $e_ j \in E_ j$, and we need to find $x_ k \in E_ k$ such that $\varphi _{ki}(x_ k) = \varphi _{ji}(e_ j)$. Let $e'_ k \in E_ k$ be any element, and set $e'_ j = \varphi _{kj}(e'_ k)$. Then $g_ j(e_ j - e'_ j) = c_ j - c_ j = 0$, hence $e_ j - e'_ j = a_ j \in A_ j$. Since $\varphi _{ki}(A_ k) = \varphi _{ji}(A_ j)$, there exists $a_ k \in A_ k$ such that $\varphi _{ki}(a_ k) = \varphi _{ji}(a_ j)$. Hence
$\varphi _{ki}(e'_ k + a_ k) = \varphi _{ji}(e'_ j) + \varphi _{ji}(a_ j) = \varphi _{ji}(e_ j),$
so we can take $x_ k = e'_ k + a_ k$. $\square$
Comment #2969 by on
(1) The second part of the first sentence should be changed to something like "... and to explain why this is a useful notion." (2) The images $\varphi_{ji}(A_j)$ do not form a decreasing family in general, but a "decreasingly directed family" of subsets (or submodules) of $A_i$.
Comment #3827 by Andy on
For lemma 0597, shouldn't it be A' nonempty instead?
Comment #3927 by on
@#3827: Don't understand what you are saying. I think it is correct as it is now.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9969785809516907, "perplexity": 122.7526799351527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00855.warc.gz"} |
http://nrich.maths.org/414/solution | #### You may also like
A new problem posed by Lyndon Baker who has devised many NRICH problems over the years.
How would you design the tiering of seats in a stadium so that all spectators have a good view?
# Double Angle Triples
##### Stage: 5 Challenge Level:
Consider the triangle $ABC$ as shown in the diagram. Show that if $\angle B = 2 \angle A$ then $b^2=a^2+ac$. Find integer solutions of this equation (for example, $a=4$, $b=6$ and $c=5$) and hence find examples of triangles with sides of integer lengths and one angle twice another.
This problem was solved by Yatir Halevi, of Maccabim-Reut High School, Israel. He found a general parametric formula for triangles where one angle is twice another that gives the lengths of the sides $a$, $b$ and $c$ (all integers) of such triangles. The angle opposite side $b$ is double the angle opposite side $a$. Choosing different values of the parameters $u$ and $v$ you get triangles given by:
$$a=u^2, \quad b=uv, \quad c=v^2-u^2.$$
First you have to prove the identity $b^2=a^2+ac$ for such triangles. It can be proved using the similar triangles in the diagram or alternatively from the Sine and Cosine Rules.
Method 1
By construction $\Delta ABX$ is an isosceles triangle. The angle of this triangle at $B$ is $180^\circ - 2\alpha$, and hence the angles at $A$ and $X$ are $\alpha$. Therefore $\Delta XAC$ and $\Delta ABC$ are similar; hence
$${a\over b} = {BC\over AC} = {AC\over XC}={b\over a+c},$$
and thus $b^2=a^2+ac$.
Method 2
By the Cosine Rule:
\eqalign{ b^2 &= a^2 + c^2 -2ac \cos 2\alpha \cr a^2 &= b^2 + c^2 -2bc\cos \alpha }
Subtracting the equations, using the double angle formula and rearranging them a bit we get :
$$2b^2 - 2a^2 = 2c(b\cos \alpha - a\cos ^2\alpha +a).\quad (1)$$
By the Sine Rule:
\eqalign{ {a\over \sin \alpha }&= {b \over \sin 2\alpha } \cr {a \over \sin \alpha } &= {b \over 2\sin \alpha \cos \alpha } \cr \cos \alpha &= {b\over 2a}. \quad (2)}
Combining (1) and (2):
\eqalign{ 2b^2 - 2a^2 &= 2c({b^2\over 2a} - {2ab^2 \over 4a^2} + a) \cr &= 2c({b^2\over 2a} - {b^2\over 2a} + a) \cr b^2 - a^2 &= ca \cr b^2 &= a^2 + ac.}
To find integer solutions, we may assume that $a$, $b$ and $c$ have no common factors. Then $a$ and $a+c$ have no common factors (for if $p$ divides $a$ and $a+c$ then it divides $c$, and hence also $b$). As $a(a+c)=b^2$, and as $a$ and $a+c$ are coprime, $a$ and $a+c$ must be perfect squares. Let $a=u^2$, $a+c=v^2$ so that
$$a=u^2, \quad b=uv, \quad c=v^2-u^2.$$
Note that not every solution gives rise to a triangle (for all three triangle inequalities must be satisfied); for example, $u=1$ and $v=4$ gives $(a,b,c) = (1,4,15)$ and this does not give a triangle. The following are a few examples of values which do give triangles:
u v a b c 2 3 4 6 5 3 4 9 12 7 3 5 9 15 16 4 5 16 20 9
Note: it can be proved that $a$, $b$ and $c$ give a triangle if and only if $v > u > v/2$; see Math. Gazette , Vol. 412, June 1976, p.130. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987304210662842, "perplexity": 318.5181331722143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152987.97/warc/CC-MAIN-20160205193912-00043-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/288080/why-the-wu-sprung-model-is-not-accepted-as-a-solution-to-riemann-hypothesis | # Why the Wu-sprung model is not accepted as a solution to Riemann HYpothesis ??
in the Wu-sprung model, given a Hamiltonian in one dimension
$$-y''(x)+f(x)y(x)=E_{n}y(x) \qquad y(0)=0=y(\infty)$$
we can define the function $f(x)$ implicitly as
$$f^{-1}(x)= 2\sqrt{\pi} \frac{d^{1/2}}{dx^{1/2}}n(x)$$
here $n(x)$ is the function counting the eigenvalues $n(x)= \sum_{E_{n}\le x} 1$
for the case of Riemann function this $n(x)= \frac{1}{\pi}arg\xi(1/2+i \sqrt{x})$
so the Riemann Hypotheis is the solution to an inverse problem
literature: http://arxiv.org/pdf/math/0510341v1.pdf introduction to wu sprung model
a survey on inverse problems in physics
for the Riemann zet function the 'potential ' $f(X)$ is defined as
$$f^{-1} (x)=\frac{4}{\sqrt{4x+1} } +\frac{1}{2\pi } \int\nolimits_{-\sqrt{x} }^{\sqrt{x}}\frac{dr}{\sqrt{x-r^2} } \left( \frac{\Gamma '}{\Gamma } \left( \frac{1}{4} +\frac{ir}{2} \right) -\ln \pi \right) -\sum\limits_{n=1}^\infty \frac{\Lambda (n)}{\sqrt{n} } J_0 \left( \sqrt{x} \ln n\right)$$
-
Also posted to (and quickly closed at) MO, mathoverflow.net/questions/120017/… – Gerry Myerson Jan 28 '13 at 5:59
It is not accepted because it causes too many typos. HYpothesis, Hypotheis, imverse, RIemann, zet. – Gerry Myerson Jan 28 '13 at 6:01
i meant the equations, which are the ones that are really important :D – Jose Garcia Jan 28 '13 at 9:14
You corrected three of the six that I pointed out. – Gerry Myerson Jan 28 '13 at 11:39
HERE is a survey made by me about this problem and how the Riemann Weil and gutzwiller trace are analogue :) vixra.org/pdf/1301.0078v2.pdf see the analogy between the Guzwiller trace and riemann weil summation formulae in QM – Jose Garcia Jan 28 '13 at 12:48
vixra.org/pdf/1301.0078v2.pdf the operator is of the form $H=-\frac{d^{2}}{dx^{2}}+f(x)$ and the function $f(x)$ is given implicitly inside (1.10) as an smooth part plus corrections due to the primes and prime powers $\sum_{n=1}^{\infty}\frac{\Lambda (n)}{\sqrt{n}}J_{0}( \sqrt{x}logp)$ – Jose Garcia Feb 7 '13 at 11:24
is a problem of QM so the space is the same as the QM ... acting over integrable eigenfunctions $\int_{-\infty}^{\infty}dx |\Psi(x)|$ – Jose Garcia Feb 7 '13 at 17:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9810786843299866, "perplexity": 1463.831209310658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00016-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/how-much-can-someone-with-a-phd-in-physics-make.424664/ | # Programs How much can someone with a phd in physics make?
1. Aug 27, 2010
### agent_509
I know that you shouldn't get a phd for the money, and you should get one because you want one. I am just wondering how much someone with a phd in physics can make depending on what job they choose.
2. Aug 27, 2010
Same range as somebody with a high school diploma can make. Anywhere from $0 to hundreds of millions of dollars a year. Like you said, it depends on what job they choose. What job did you have in mind specifically? 3. Aug 27, 2010 ### agent_509 anything involving research in physics. Whether it be being a professor at a research university, working for the government, most anything involving research really. 4. Aug 27, 2010 ### jtbell ### Staff: Mentor 5. Aug 27, 2010 ### Troponin Brian May is worth about$80 million, but his PhD is in Astrophysics. I don't know if you're considering the various sub-fields or not.
6. Aug 27, 2010
### lisab
Staff Emeritus
Ah, but he took that rare and difficult rock-star-to-scientist route .
7. Aug 27, 2010
### agent_509
okay, thanks for the answers everyone, I think I got what I was looking for from jtbell's link.
8. Aug 28, 2010
### twofish-quant
Starting salary for Ph.D. level quants on Wall Street is roughly $100K +$50K bonus. With three years of experience, you make VP level, and that gets you about $150K salary +$100K bonus. Most people stay at VP level for the rest of their careers, but I know physics Ph.D.'s that have gotten into managing director level, and total comp there can be \$500K+.
Similar Discussions: How much can someone with a phd in physics make? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46933403611183167, "perplexity": 2448.574121686857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806586.6/warc/CC-MAIN-20171122122605-20171122142605-00022.warc.gz"} |
https://scholars.ncu.edu.tw/en/publications/semi-analytical-model-for-coupled-multispecies-advective-dispersi | # Semi-analytical model for coupled multispecies advective-dispersive transport subject to rate-limited sorption
Jui Sheng Chen, Yo Chieh Ho, Ching Ping Liang, Sheng Wei Wang, Chen Wuing Liu
Research output: Contribution to journalArticlepeer-review
10 Scopus citations
## Abstract
Most analytical or semi-analytical models currently used to simulate multispecies transport assume instantaneous equilibrium between the dissolved and sorbed phases of the contaminant. However, research has demonstrated that rate-limited sorption process can have a profound effect upon solute transport in the subsurface environment. This study presents a novel semi-analytical model for simulating the migrations of plumes of degradable contaminants subject to rate-limited sorption. The derived semi-analytical model is then applied to investigate the effects of the rate-limited (nonequilibrium-controlled) sorption on the plume migration of degradable contaminants. Results show that the kinetic sorption rate constant has significant impacts on the plume migration of degradable contaminants. Increasing the kinetic sorption rate constant results in a reduction of predicted concentration for all species in the degradable contaminants while the equilibrium-controlled sorption model lead to significant underestimation of the concentrations of degradable contaminants under conditions with low sorption Damköler number. The equilibrium-controlled sorption model agrees well with the rate-limited sorption model when the ratio of Damköler number to the product of distribution coefficient and bulk density is greater than 2 or 3 order of magnitude.
Original language English 124164 Journal of Hydrology 579 https://doi.org/10.1016/j.jhydrol.2019.124164 Published - Dec 2019
## Keywords
• Damköler number
• Multispecies transport
• Nonequilibrium-controlled sorption
• Semi-analytical model
• Sorption reaction rate constant
## Fingerprint
Dive into the research topics of 'Semi-analytical model for coupled multispecies advective-dispersive transport subject to rate-limited sorption'. Together they form a unique fingerprint.
• ### Development of analytical model for groundwater contaminat fate and transport( I )
Chen, J.
1/06/1731/05/18
Project: Research | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529177904129028, "perplexity": 12492.481864328996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301063.81/warc/CC-MAIN-20220118213028-20220119003028-00025.warc.gz"} |
http://www.transtutors.com/questions/tts-confidence-intervals-and-more-166521.htm | # Confidence intervals and more
Calculate a 95% confidence interval on the average weight of packaged mustard seed. Explain very carefully to the packaging workers what the 95% confidence interval numbers mean. Include your Excel output. Is there anything you would want to note for management?
Use the same dataset you used for the ...
Answer: 1. A. Descriptive Statistics Column 1 Sample Size, n: 36 Mean: 1.733056 Variance, s^2: 0.0225875 St Dev, s: 0.1502915 z_c = 1.96 at 95% confidence. margin of...
Related Questions in Theory of probability
• 112 used care interval (Solved) June 17, 2015
112. In a recent sample of 84 used car sales costs, the sample mean was $6,425 with a standard deviation of$3,156. Assume the underlying distribution is approximately normal. a....
Solution Preview :
The solution is based on confidence interval. A confidence interval (CI) is a type of interval estimate of a population parameter. It is an observed interval (i.e., it is calculated from the...
• 95% cONFIDENCE INTERVAL MEANS (Solved) November 10, 2011
Assume you are the manager of a mustard seed factory in Colombia. Your company has received complaints that there isn't enough mustard ...
• Statistics (Solved) December 08, 2012
Need help with this assignment. Attach is the homework. You will need access to my online class-www.devryu.net. Login (D03418170), password (Darian1978). Click on decision making,...
Solution Preview :
Statistics – Lab #6 Name:_______________________ Statistical Concepts: Data Simulation Discrete Probability Distribution Confidence Intervals Calculations for a set of variables Open the...
• Derivatives-The makers of a soft drink want to identify the average age of... (Solved) November 27, 2013
. Discuss why the 98% and 99% confidence intervals are different. d. What would you expect to happen to the confidence...
Solution Preview :
a. 83.41 to 96.59 b. 82.65 to 97.35 c. As the level of confidence increases, the confidence
• Statistics using Minitab, excel, (Solved) January 29, 2012
Here is a homework I need done. It uses excel and minitab | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5032720565795898, "perplexity": 3491.55090801272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660887.60/warc/CC-MAIN-20160924173740-00207-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://brilliant.org/problems/does-light-have-momentum/ | # Does light have momentum?
Find the momentum of a photon with wavelength 1000 nm submit your answer in terms of kg meters/second.
× | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8754655718803406, "perplexity": 1084.7170516657288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00116.warc.gz"} |
https://motls.blogspot.com/2012/05/czech-socialist-politician-350000-in.html?m=1 | ## Tuesday, May 15, 2012
### Czech socialist politician: $350,000 in a shoe box On Wednesday, they found extra$1.5 million (CZK 30 million) in a special hideout in the floor of his house (and probably submachine gun model 58)
It's been a stereotype that Mr David Rath, the current governor of Central Bohemia (a disk around Prague without Prague), a former socialist healthcare minister, and one of the most aggressive social democratic politicians in the Czech Republic is one of the most immoral and corruptible politicians in Czechia.
It turned out that it hasn't been a stereotype. It's been a fact since the very beginning. I've heard many stories about his previous methods to get lots of money (he was very poor right after the fall of communism) but what we got yesterday and publicly today sounds much more specific. Details will be investigated but the conclusion that he is a criminal without any moral restrictions to speak of seems pretty much unassailable at this point once he was taken into custody by police. The police president who informed the interior minister last night claims that they have worked hard – 100 investigators were involved – and they feel very certain about the case.
Someone who has seen into Dr Rath's cards has spoken (Mr Paroubek speculates it was Mr Filip Bušina, an entrepreneur who had similar legal problems in the past) and for about six months, police have investigated accusations of bribery, negotiating advantages in public procurement (i.e. manipulating public tenders), and misappropriation of EU funds that is related to Dr Rath, a female director of a Central Bohemian hospital, and about 6 other people (5 men, 3 women). Contracts linked to the hospital in Kladno and/or the reconstruction of the Buštěhrad chateau may be involved. Today, the cops finally decided to catch the rat on the street, near a sewer in front of his house in Hostivice, Greater Prague, a mile from the Prague Havel Airport (calm video from the event, rap). What did they find?
By a "complete coincidence", they found a David Rath with a shoe box and what was in the shoe box? Yes, exactly what you expect in a shoe box that someone is carrying on the street: the answer is either $350,000 or$600,000, depending on the sources (USD $1 is at CZK 20 Czech crowns again, due to the anxiety caused by the ongoing putrefaction of Greece as a nation). The actual damages to the country are higher by more than an order of magnitude. That's a pretty good observation. I guess that Mr Rath was just going to buy some ice cream. More seriously, the planning by the police was probably so perfectionist and used so much overwhelming information that they were probably sure in advance that he would have the money with him, too. The cops actually needed to catch him red-handed; see the explanation at the end. It seems that the money was brought to the place by Rath's important friend, Ms Kateřina Pancová, the director of the Kladno hospital (another social democrat and Rath's #3 in the list of sexual partners after his wife and his mistress: he only has kids with #1 and #2). The money transfer could have taken place in her house in Rudná near Prague. Pancová and Rath are among the 8 arrested people, much like Mr Petr Kott, an ex-center-right lawmaker who left conservative politics because he was drunk all the time (the social democrats hungrily devoured him, a super-drunk ex-right-wing lawmaker was destined to become a top social democrat), and Mr Pavel Drážďanský, the director of developer Konstruktiva Branko who won the tender to reconstruct the chateau for$10 million from EU funds.
Dr Rath's P.R. department was asked to offer an explanation. They said: "It is not in interest of Dr Rath to humiliate himself by an explanation [which could be interpreted as excuses by the media]." LOL! :-) These last words of his political career may become another quote that will be often repeated. (Later, he humiliated himself in this way, anyway. He said that the shoe box was supposed to contain bottled wine: he was "surprised" to see any banknotes in it. I guess that he postponed this explanation in order to invent the most credible one and the wine was his winner. If he learned some kindergarten physics, he would be able to distinguish bottles from banknotes.)
It's a good luck that they managed to find him. When it comes to corruption, I am a realist. I have no doubts that some people in the public sector – in Czechia as well as almost all countries in the world – enjoy some advantages because of their power. Potential for corruption is one of the largely unavoidable taxes that we pay for the intrinsically sick part of our lives that we call the public sector. But punishment is only meaningful and morally justified if one has a sufficient amount of evidence that a given person has done something worse than an average politician is doing or that he or she has received more money than an average politician or official is receiving.
Mr Pavel Bém, the former mayor of Prague who is also a physician and who has been to the peak of Mt Everest, has been harassed due to some telephone calls with his friend, an entrepreneur, and these conversations could have been interpreted as a suggestion that the entrepreneur may have helped to buy shoes for Dr Bém or something like that, and maybe these shoes were really bought, and if they were bought, it could have been corruption, and so on and so on.
I would always think: are you joking? You don't have evidence for any wrongdoing, certainly not something that would be really dangerous and non-negligible, and what we should really worry about here is that the ex-mayor was eavesdropped and someone controls P.R. machineries that may use these conversations against the ex-mayor – or anyone else. I don't give a damn whether he received shoes because of a billion-of-crowns decision.
This was how a center-right politician, a former candidate or "prince" to lead the conservative Civic Democratic Party, was treated. Today, however, a left-wing politician and a former candidate or "prince" who could have led the left-wing Social Democratic Party, wasn't vaguely accused of receiving shoes as a bribe. He was found with a shoe box containing about half a million dollars that seem pretty clearly linked to a specific case of corruption and/or misappropriation of the EU money.
A new product of IKEA that will flood the shelves soon
Now, don't you really see a material difference between the shoes and the shoe box? I can't believe that some people would be able to suggest that these two cases are similar. But in reality, some people would love to spread the meme that the shoe case is worse than the shoe box case! At some level, it becomes clear that the accusation of corruption may become a cure that is worse than the disease. It seems pretty clear that politicians from the political parties who were most loudly screaming that they would fight against corruption belong among the most corrupt politicians in a country. Accusations of corruption became a cheap tool to win cheap votes and these things may be more harmful to our country – and others – than the corruption itself.
I think that the voters who still believe someone who verbally wants to fight corruption (Dr Rath has been an elite in this discipline of accusations!) is just being naive. The recent corruption stories surrounding the "Public Affairs", a party that was the loudest one concerning corruption, should be another revelation for everyone who is willing to learn from the experience. The right attitude is to accept that this is happening to some extent and calmly introduce mechanisms and punishments that will reduce corruption. These mechanisms will surely cost something as well and when the costs exceed the benefits, it becomes counterproductive to increase the fight against corruption! At the same moment, it's important to respect the presumption of innocent for generic officials. The government simply cannot work if everyone is automatically assumed to be a criminal.
It's interesting that most of the politicians, especially the regional ones, who are involved in these debates are physicians. Dr David Rath, the guy with the shoe box, is a physician. Dr Pavel Bém, the ex-mayor of Prague from Mt Everest with the eavesdropped telephone conversation, is a physician, too. Some TRF readers may remember Dr David Rath from the following nice exchange with Dr Miroslav Macek at a stomalogical conference:
Dr Rath had publicly described Dr Macek's marriage as one that was motivated by the thirst for money (suggesting that Mr Macek's wife had no other virtues) – the same kind of slanderous talk that you expect from Shmoits and Shmolins and similar human crap.
In the video above, the host, Dr Macek – who was already decided to defend the dignity of his wife and himself – calmly says: Before I start to moderate our conference, please let me deal with one issue that is of purely personal nature: [SLAP, applause.] Mr minister [at that time] Rath was preemptively warned. I have warned him in the press. It is purely my personal matter. He deserves it. [Applause, Dr Rath is leaving.] Dr Rath: Mr Doctor, we won't be solving it over here. You have attacked me cowardly from the back side. Why didn't you face me like a man, face to face? [...] You are a coward! [SLAP, COUNTER-SLAP]
Of course, this exchange has become legendary and has been included in the Guardian's TOP TEN of similar events. Dr Macek had to pay \$5,000 to Dr Rath as a compensation but the happiness that the smack brought to the nation clearly had a much higher value if it weren't priceless.
Ms Lenka Bradáčová, the Czech Corrado Cattani, a boss of the anti-corruption police unit that has shown much more skills than many others, and not only in this scandal. Let's hope that she will end up differently than her fictitious Italian counterpart.
I sincerely hope that they will be able to find out and prove the wrongdoings that has allowed Mr Rath to carry half a million in a shoe box exactly when he was caught by the police so that he will be allowed to spend many, many years in a prison (estimates talk about 12 years, let's see). My hope for a thorough derathization is connected not only with the fact that Mr Rath is not only a socialist (who has worked hard on his power under many other colors in the past, however, but this kind of immoral folks is what Mr Paroubek wanted to attract intohis party) and a guy who sometimes lives with his wife and sometimes with his official mistress (one child with each) but more generally, he is an unquestionable, egotist, immoral rat.
Another reason behind my hope is that I want the ordinary people's obsession with the corruption and the conspiracy theory that being bribed poses no risks to gradually go away. In particular, the lawmakers' immunity hasn't helped Dr Rath at all: the receipt that police may pick Dr Rath was a pure formality for the chairwoman of the Parliament (who was the only right-wing politician who knew about the arrest in advance). ;-) According to our laws, only the Parliamentary Spokesperson's agreement is needed when the police catches a criminal-lawmaker during a crime i.e. red-handed (in order to make immunity disappear): so the cops really needed to make this tour de force but they succeeded.
Of course, I would love to hope that the incident will also reduce the scary, high amount of votes that the social democrats may receive in the next elections but I am not so sure whether my hope is also a realistic scenario.
The leaders of the Social Democracy have already apologized and indicated that due to the presumption of guilt in their party rules, it's a matter of days when Dr Rath will be stripped of his membership.
The Czechs are already making fun of a classic Czech movie, Jáchym, throw him to the machine. A psychiatrist in the asylum says: At this place, I've been telling all of them: don't accept bribes, don't accept bribes, don't accept bribes, they will make you insane. But these warnings are futile, futile, futile. Dr Macek who had slapped Dr Rath in the past has already explained that Dr Rath – like some other politicians who rise too quickly – lose their mind and start to consider themselves to be omnipotent demigods. Indeed, one has to be intrinsically stupid or insane – despite the rumors about Dr Rath's immense intelligence – to be a governor and to accept half a million dollars of bribes in cash.
Republic Rath, previously known as Central Bohemia (before all the towns were renamed as well)
Fortuna, a bookmaker, allows you to make a bet how much more money (and where) will be found in Rath's house beyond the current CZK 30 million.
The amount of parodies and jokes about Rath that were created within a day or two has been enormous; see e.g. an unmodified song from a TV fairy tale, We have caught a little thief. Among many other things, I liked this Czech poem (all "rath" below has been improved from "rat", of course):
Náš kamaráth demokrath
udělal si doktoráth.
Kolikráth si musel přáth
o národ se postarath,
začal lidi nasírath.
V hlavě měl zkrath, státu krath,
i EU chtěl odírath,
ovčany by mileráth
poslal na dlažbu žebrath.
Když šel prachy provětrath,
troufli si ho vyšťourath.
Zbývá už jen zapírath,
nevědomost předstírath,
kňourath, kárath, vydírath,
šance vidět umírath,
a dřív, než přijde slunovrath,
mříže budou zavírath.
Hlavně to však tentokráth
zase celý neposrath." :D | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24430100619792938, "perplexity": 3831.6361395360614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738552.17/warc/CC-MAIN-20200809102845-20200809132845-00557.warc.gz"} |
http://dash.harvard.edu/handle/1/4239019 | # Liquids on Topologically Nanopatterned Surfaces
Title: Liquids on Topologically Nanopatterned Surfaces Author: Gang, Oleg; Alvine, Kyle J.; Fukumo, Masafumi; Pershan, Peter S.; Black, Charles T.; Ocko, Benjamin M. Note: Order does not necessarily reflect citation order of authors. Citation: Gang, Oleg, Kyle J. Alvine, Masafumi Fukuto, Peter S. Pershan, Charles T. Black, and Benjamin M. Ocko. 2005. Liquids on topologically nanopatterned surfaces. Physical Review Letters 95(21): 217801. Full Text & Related Files: Gang_Liquids.pdf (515.2Kb; PDF) Abstract: We report here surface x-ray scattering studies of the adsorption of simple hydrocarbon liquid films on nanostructured surfaces—silicon patterned by an array of nanocavities. Two different regimes, filling and growing, are observed for the wetting film evolution as a function of the chemical potential offset $$\Delta \mu$$ from the bulk liquid-vapor coexistence. The strong influence of geometrical effects is manifested by a $$\Delta \mu$$ dependence of liquid adsorption $$\Gamma$$ in the nanocavities that is stronger than the van der Waals behavior $$\Gamma ~\Delta \mu -1/3$$ for flat surfaces. The observed $$\Delta \mu$$ dependence is, however, much weaker than predicted for the infinitely deep parabolic cavities, suggesting that the finite-size effects contribute significantly to the observed adsorption behavior. Published Version: doi:10.1103/PhysRevLett.95.217801 Other Sources: http://liquids.seas.harvard.edu/peter/2005.pdf/PRL95_217801_05.pdf Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:4239019
### This item appears in the following Collection(s)
• FAS Scholarly Articles [6463]
Peer reviewed scholarly articles from the Faculty of Arts and Sciences of Harvard University | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8092176914215088, "perplexity": 6801.298595562928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/257995/prove-that-ax4-bx3-x2-1-0-always-has-imaginary-solution | # Prove that $ax^4 + bx^3 + x^2 + 1 = 0$ always has imaginary solution
Is there a simple way to prove that solution for $ax^4 + bx^3 + x^2 + 1 = 0$ always has at least one imaginary root?
-
What do you know about $a$ and $b$? If they are both $0$, the solution is real. – TMM Dec 13 '12 at 16:02
Sorry, the equation was wrong, updated it now – mathkid Dec 13 '12 at 16:04
$-10x^4+x^2+1=0$ has 2 real and 2 complex solutions? Did you mean "there is always at least 1 imaginary/complex solution"? – CBenni Dec 13 '12 at 16:08
@CBenni Yup, you are correct - I will update the question – mathkid Dec 13 '12 at 16:10
Are, $a,b\in\mathbb{C}$? Are you looking for Complex solutions or Imaginary solutions? Remember imaginary numbers have no real part. – cderwin Dec 13 '12 at 16:18
Hint: Assuming $a$ and $b$ are real, it is quite easy to identify the $x$-values at the turning points of this function (left hand side of equation), and also to identify which order they come in and the general shape of the function. It is also very easy to compute the value of the function at one of the turning points. See how far this takes you. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7512366771697998, "perplexity": 269.3320883387765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051299749.12/warc/CC-MAIN-20160524005459-00123-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://www.jecst.org/journal/view.php?doi=10.33961/jecst.2021.00052 | • Home
• E-submission
• Sitemap
J. Electrochem. Sci. Technol Search
CLOSE
J. Electrochem. Sci. Technol > Volume 12(4); 2021 > Article
Kim and Park: Synergy Effect of K Doping and Nb Oxide Coating on Li1.2Ni0.13 Co0.13Mn0.54O2 Cathodes
### Abstract
The Li-rich oxides are promising cathode materials due to their high energy density. However, characteristics such as low rate capability, unstable cyclic performance, and rapid capacity fading during cycling prevent their commercialization. These characteristics are mainly attributed to the phase instability of the host structure and undesirable side reactions at the cathode/electrolyte interface. To suppress the phase transition during cycling and interfacial side reactions with the reactive electrolyte, K (potassium) doping and Nb oxide coating were simultaneously introduced to a Li-rich oxide (Li1.2Ni0.13Co0.13Mn0.54O2). The capacity and rate capability of the Li-rich oxide were significantly enhanced by K doping. Considering the X-ray diffraction (XRD) analysis, the interslab thickness of LiO2 increased and cation mixing decreased due to K doping, which facilitated Li migration during cycling and resulted in enhanced capacity and rate capability. The K-doped Li-rich oxide also exhibited considerably improved cyclic performance, probably because the large K+ ions disturb the migration of the transition metals causing the phase transition and act as a pillar stabilizing the host structure during cycling. The Nb oxide coating also considerably enhanced the capacity and rate capability of the samples, indicating that the undesirable interfacial layer formed from the side reaction was a major resistance factor that reduced the capacity of the cathode. This result confirms that the introduction of K doping and Nb oxide coating is an effective approach to enhance the electrochemical performance of Li-rich oxides.
### 1. Introduction
Li-rich layered oxides, composed of an integrated structure between Li2MnO3 and LiMO2 (M=Mn, Co, Ni, Fe, etc.), have attracted considerable attention owing to their promising properties for application as cathodes in Li battery systems [14]. Many studies have indicated that Li-rich layered oxides have a higher energy density than any other commercial cathode material, which is very useful for Li-ion batteries that require high capacity [19]. However, they exhibit poor rate capability, unstable cycle life, and fast voltage decay during cycling [19], which is mainly attributed to the structural instability of Li-rich oxides. In particular, some of the layered host structures of Li-rich oxides changes to spinel structures during cycling, which is responsible for the capacity fading and voltage decay during cycling [1014]. The origin of the spinel transition is attributed to the migration of transition metal ions to the Li layer [15,16]. Therefore, the suppression of the rearrangement of transition metal ions is a key factor for enhancing the structural stability of Li-rich oxides.
Doping with foreign ions into the lattice structure of cathodes could be a suitable approach for this problem, because doped foreign ions can block the rearrangement of transition metal ions [1719]. Alkali ions such as Na (sodium) and K (potassium) have been successfully used as doping elements [18,20,21]. Na+ and K+ ions are easily exchanged with Li+ ions in the Li layer, and their larger ionic radius than that of Li+ can inhibit the migration of transition metal ions, which enhances the phase stability of the cathodes during cycling. Furthermore, the doping of large-sized ions (Na+ and K+) in the Li layer can increase the Li layer spacing (interslab thickness of LiO2), which facilitates the intercalation/deintercalation of Li ions during cycling [17,20,21].
The surface instability attributed to the undesirable reaction with the electrolyte also seriously affects the degradation of the electrochemical performance of cathode materials. The reactive electrolyte containing HF from to the decomposition of salt (LiPF6) attacks the surface of the cathode, which leads to capacity fading and reduced rate capability during cycling [2224]. Surface coating is the most common and effective method for enhancing the surface stability between the cathode and the electrolyte. The coating layer can act as a protection layer from undesirable surface reactions and improve the electrochemical performance of the cathode [2529].
Based on a previous work, it is clear that doping and coating can positively influence the bulk and surface properties of cathode materials. Thus, the simultaneous use of the two methods can provide optimized results. Therefore, a Li-rich oxide (Li1.2Ni0.13Co0.13Mn0.54O2) was doped with K and coated with Nb oxide (NbOx). K doping is expected to stabilize the structure of the Lirich oxide. Nb oxide is a stable material that suppresses undesirable surface reactions at the cathode/electrolyte interface [26]. Both doped and coated Lirich oxides may show enhanced intrinsic and surface stability, which contributes to the electrochemical performance of the cathode.
### 2. Experimental
Pristine Li1.2Ni0.13Co0.13Mn0.54O2 and K-doped Li1.2Ni0.13Co0.13Mn0.54O2 powders were prepared using a simple combustion method [30, 31]. Manganese acetate tetrahydrate [Mn(CH3CO2)2·4H2O (Aldrich, +99%)], nickel (II) nitrate hexahydrate [Ni(NO3)2·6H2O (Aldrich, 99.99%)], cobalt (II) nitrate hexahydrate [Co(NO3)2·6H2O (Aldrich, 98%)], Li acetate dihydrate [CH3CO2Li·2H2O (Aldrich, 98%)], Li nitrate [LiNO3 (Aldrich, 99%)], and K acetate (CH3COOK (Aldrich, +99%)] were used as source materials. The source materials in stoichiometric ratios were dissolved in a solvent composed of distilled water and acetic acid to form a solution. The amount of K doping was 0.02 mol% of the pristine powder, thus the expected chemical composition of K-doped Li1.2Ni0.13Co0.13Mn0.54O2 was Li0.98K0.02[Li0.2Ni0.13Co0.13Mn0.54]O2. The solutions were continuously stirred on a hot plate at 80–90°C. As the solvent evaporated, the mixed solution turned into a viscous gel. The gel was fired at 400°C for 1 h, and a vigorous decomposition process occurred, resulting in an ash-like powder. The decomposed powder was ground and sintered in air at 500°C for 4 h and then at 900°C for 7 h. In sequence, it was quenched to room temperature.
An Nb oxide coating was introduced to the K-doped Li1.2Ni0.13Co0.13Mn0.54O2. Since it is difficult to accurately confirm the composition of the coating material, that will be called as ‘Nb oxide’ in this work. To prepare the Nb oxide coating solution, Nb pentaethoxide [Nb(OC2H5)5 (Kojundo, 99.99%)] was dissolved in a solution of ethanol, stirred in an Ar-filled glove box at 40°C for 15 min, and the K-doped Li1.2Ni0.13Co0.13Mn0.54O2 powders were added. The solvent was then evaporated at 70°C under stirring. The resulting precursor was dried under vacuum at 90°C overnight and sintered in air at 450°C for 5 h to form the Nb oxide coating layer. The amount of coating was adjusted to 0.3 and 1.0 wt.% of pristine powder.
The surface morphologies of the samples were observed using field-emission scanning electron microscopy (FE-SEM, JSM-7610F PLUS) and transmission electron microscopy (TEM, JEM-2100F, Cs corrector). X-ray diffraction (XRD) patterns of the powders were obtained using XRD (Empyrean) over a 2θ range of 10–90°. Highscore Plus software was used to refine the lattice parameters for Rietveld analysis. The K 2p and Nb 3d binding energies of the samples were analyzed by X-ray photoelectron spectroscopy (XPS, K-Alpha+). For electrochemical tests, a slurry was prepared by mixing with carbon black (Super P) and polyvinylidene fluoride (PVDF) in a weight ratio of 80 (cathode powder): 12 (Super P): 8 (PVDF). A coin-type cell (2032) composed of a cathode, a Li-metal anode, a Celgard 2400 separator, and an electrolyte (1 M LiPF6 in EC/DMC (1:1 vol%)) was used. The cells were cycled in a potential range of 2.0–4.8 V using a Won A Tech voltammetry system. Impedance measurements of the cells were performed using an electrochemical workstation (Ametek, VersaSTAT 3) by applying an alternating current voltage with an amplitude of 5 mV over a frequency range of 0.1 Hz to 100 kHz.
### 3. Results and Discussion
The morphology and surface coating layer of the samples were observed using SEM and TEM analyses. Fig. 1 shows the SEM images of the pristine Li1.2Ni0.13Co0.13Mn0.54O2, K-doped Li1.2Ni0.13Co0.13 Mn0.54O2 (Li[Li0.18K0.02Ni0.13Co0.13Mn0.54]O2), K-doped and 0.3 wt.% Nb oxide coated Li1.2Ni0.13 Co0.13Mn0.54O2, and K-doped and 1.0 wt.% Nb oxide coated Li1.2Ni0.13Co0.13Mn0.54O2 powders. For convenience, they were named as pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples, respectively. As shown in Fig. 1, the sample powders consisted of aggregated nano-sized granules. The size of the powders was several micrometers; however, they were porous and composed of weakly connected nanoparticles. This cluster-type porous shape provides a wide surface area contactable with the electrolyte, which facilitates Li movement and improves the rate capability of the cells. However, the wide surface area of the cathode also activates the undesirable reaction with the electrolyte, which may deteriorate the electrochemical performance during cycling. Considering this fact, the surface coating is expected to influence the electrochemical performance of our samples because of the interfacial stabilizing effect of the coating materials.
The surface morphology of the samples did not significantly change depending on the doping and coating treatments. Particularly, a special coating layer was not observed in the Nb oxide coated samples. Many surface-coated cathodes were covered with foreign nanoparticles composed of coating materials [3234]. It is anticipated that the surface coating layer was not formed as particle type but as a thin and homogeneous film, thus it was difficult to distinguish in the SEM image. To observe the coating layer in detail, TEM images of the pristine and Nb coated samples were obtained. Compared with the TEM image of the pristine sample (Fig. 2a), the Nb coated samples (Fig. 2b and 2c) presented several nanometer-sized surface layers, which was expected to be the Nb oxide coating layer. This thin and homogeneous coating layer efficiently prevented direct contact between the cathode and electrolyte and protected the cathode surface from the reactive electrolyte. However, there is also a possibility that the surface layer was formed from Li residues such as Li2CO3 and LiOH, attributed to unreacted Li sources.
To verify the Nb oxide coating layer more clearly and confirm the K doping, XPS measurement was performed. Fig. 3 illustrates the XPS spectra of K 2p and Nb 3d of the pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples. As shown in XPS spectra for the pristine sample (Fig. 3a and 3b), meaningful peaks were not found in the binding energy region at 291~298 eV and 204.5~212.5 eV. However, XPS spectra for K-doped sample (Fig. 3c and 3d) presented a major peak at ~292.9 eV and a corresponding satellite peak at ~295.7, which are well assigned to K 2P3/2 and K 2P1/2, respectively. These two peaks provide direct evidence of the existence of K in the structure of the K-doped sample. The 0.3 Nb-coated samples exhibited peaks associated with K 2P3/2 and K 2P1/2, and also new peaks located at ~207.0 and ~209.9 eV (Fig. 3e and 3f). The new peaks are attributed to the binding energy of Nb 3d5/2 and Nb 3d3/2, respectively, indicating that the Nb oxide layer was formed on the surface of the 0.3 Nb-coated sample. In the XPS spectra of the 1.0 Nb-coated sample (Fig. 3g and 3h), the intensity of peaks at ~207.0 and ~209.9 eV was increased compared with those for the 0.3 Nb-coated sample. This may indicate that the surface coating layer for the 1.0 Nb-coated sample is thicker than that for the 0.3 Nb-coated sample. The intensities of the peaks for K 2P3/2 and K 2P1/2 were not significantly changed by the coating treatment.
From the TEM and XPS analyses, the K doping and Nb oxide coating were confirmed. Doping and coating may affect the crystal structure of the cathode (Li1.2Ni0.13Co0.13Mn0.54O2). To probe these effects on the structure, XRD measurements were performed. Fig. 4a shows the XRD patterns of the pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples. The diffraction peaks indicate that the samples have an isostructure with typical α-NaFeO2 (space group R−3m). The position and intensity of the diffraction peaks for the four samples were similar, indicating that the doping and coating treatments did not critically change the crystal structure of the sample. However, the doped and coated samples may have slightly different lattice parameters, unit cell volumes, and interslab thicknesses. Therefore, to characterize the crystal structure in more detail, Rietveld refinement was employed, as shown in Fig. 4b and 4c. The cell parameters derived from the Rietveld refinement and interslab thickness calculated according to reference 35 are summarized in Table 1.
The lattice parameters a and c of the pristine sample were measured as 2.8514 Å and 14.2312 Å, respectively. Although the ionic radius of K+ ions (1.38 Å) is larger than that of Li+ ions (0.76 Å), the lattice parameters of the K-doped sample were slightly reduced to 2.8513 Å (a) and 14.227Å (c). This constriction of the unit cell is attributed to the lattice distortion and strain generated in the local structure due to substitution of the K ions [21]. However, the interslab thickness of LiO2 (ILiO2) increased from 2.4396 Å to 2.5023 Å upon K doping. ILiO2 means the distance between the oxygen layer located on both sides of the original Li sites; thus, an enlarged ILiO2 can reduce the activation barrier of Li hopping, which facilitates Li migration during cycling. This is attributed to the substitution of K+ ions with smaller Li+ ions in the Li layer of the host structure. However, it is also possible that the K+ ions in the Li diffusion channel can inhibit the migration of Li+ ions during cycling. Instead, the large K+ ions can act as pillars to stabilize the layered structure of Li-rich oxide because they suppress the oxygen atom repulsion during the migration of Li+ ions. The K-doped sample also showed an increased intensity ratio between (003) and (104) peaks (I(003)/I(104)=1.3369) compared to the pristine sample (I(003)/I(104)=1.2028), which means that the K doping reduced the degree of cation mixing of the sample [36]. Some Li ions in the Li layer are replaced by Ni ions (cation mixing), which disturbs Li diffusion and deteriorates the rate capability of the cathode during the charging-discharging process. Thus, the reduced degree of cation mixing leads to the expectation that the rate capability of the sample can be improved by K doping. The Nb coating affects the lattice parameter, interslab thickness, and I(003)/I(104). Some of the Nb ions may diffuse into the structure and act as dopants and form a coating layer. However, the change in these values by coating was not significant compared with the effect of K doping, which is attributed to the fact that large Nb ions may not sufficiently diffuse into the structure during the coating process.
The electrochemical properties of the samples were measured to investigate the effects of doping and coating treatments. Fig. 5a presents initial charge-discharge curves of the samples at a rate of 20 mA·g−1 in the voltage range of 4.8–2.0 V. All profiles showed a smooth voltage slope below 4.5 V along with a plateau region in the range of ~ 4.5 V, whose shape is the typical initial voltage profile of Li-rich oxides. The sloping region below 4.5 V is ascribed to the cationic redox reaction associated with Li+ deintercalation from the host structure by oxidation of transition metal (Ni and Co) ions [6,37]. The plateau region at approximately 4.5 V is associated with the oxygen redox reaction. Early works proposed that the plateau region presents the removal of Li ions from the Li2MnO3 lattice accompanied by the irreversible generation of oxygen gas [6,37]. However, recent studies have suggested that oxygen redox reactions reversibly occur without oxygen loss or major structural changes during cycling, although a portion of oxygen at the surface irreversibly escapes from the lattice [3840]. Thus, the cationic redox reaction, related to transition metal ions, and the oxygen redox reaction contribute to the extremely large energy density of Li-rich oxide cathodes.
Interestingly, the initial discharge capacity was distinctly increased by the K doping. The discharge capacity of the pristine sample was ~ 240 mAh·g−1, whereas that of the K-doped sample reached ~ 270 mAh·g−1. The existence of K+ ions in the original Li site increases the interslab thickness of LiO2, which can facilitate intercalation/deintercalation of Li ions during the charging and discharging processes. Therefore, the increased discharge capacity is related to the enlarged interslab thickness of LiO2 owing to the effect of K doping. The discharge capacities of the 0.3 Nb-coated and 1.0 Nb-coated samples increased to ~292 and ~294 mAh·g−1, respectively, which means that the introduction of the Nb oxide coating is considerably effective in improving the discharge capacity. This result indicates that the undesirable interfacial layer formed from side reactions between the cathode and electrolyte acts as a strong resistance factor. Therefore, the surface coating with Nb oxide suppressing the side reactions significantly influences the discharge capacity of the samples. The surface coating effect can be more prominent in our samples because the powders prepared by the simple combustion method have a small size and porous structure, which provides a wide interfacial region in contact with the reactive electrolyte.
The discharge capacities of the samples at various current densities (20, 40, 100, 200, and 600 mA·g−1) were observed to compare the rate capability of the samples, as shown in Fig. 5b. The discharge capacities and capacity retentions of the samples are summarized in Table 2. Capacity retention is the percentage of the retained capacity at each current density compared to that at 20 mA·g−1. As the current density increased, the discharge capacity of all the samples decreased. The values of the pristine sample at ~100, ~200, and ~600 mA·g−1 were ~ 157, ~124, and ~69 mAh·g−1, respectively (the values are the 13th, 18th, and 23rd cycles in Fig. 5, respectively). The capacity retentions were ~67% (at 100 mA·g−1), ~53% (at 200 mA·g−1), and ~30% (at 600 mA·g−1). The discharge capacities of the K-doped sample were higher than those of the pristine sample at all current densities. Moreover, the capacity retentions of the K-doped sample were ~77% (at 100 mA·g−1), ~67% (at 200 mA·g−1), and ~42% (at 600 mA·g−1), which are superior to those of the pristine sample, showing an enhanced rate capability by K doping. This is attributed to the enlarged interslab thickness of LiO2 and reduced cation mixing. It is possible that the large K+ ions in the Li sites can disturb the migration of Li+ ions during cycling; however, its influence seems to be lower than the positive effect of K doping.
The 0.3 Nb-coated sample also showed a significantly enhanced rate capability compared to the pristine sample. The capacity retention reached ~77% (at 100 mA·g−1), ~68 (at 200 mA·g−1), and ~51% (at 600 mA·g−1). In particular, the capacity retention at 600 mA·g−1 was superior to that of the K-doped sample, which is related to the surface protection effect suppressing the undesirable side reactions at the cathode/electrolyte interface. However, the retained capacity of the 1.0 Nb-coated sample was lower than that of the 0.3 Nb-coated sample. The Nb oxide layer may act as a new resistance factor when the thickness of the coating layer exceeds a certain value. The 1.0 wt.% of Nb oxide coating seems to be excessively thick for smooth movement of Li ions and electrons at the surface of cathodes.
The cyclic properties of the samples are shown in Fig 6. The cells containing the samples were cycled at 20 mA·g−1 for the initial three cycles, and tested at 100 mA·g−1 for 50 cycles. An initial low current density may induce sufficient initial irreversible reactions, which activate the oxygen redox reaction. At 100 mA·g−1, capacity retention until 50th cycle compared to 4th cycle was ~69% (pristine sample), ~89% (K-doped sample), ~87% (0.3 Nb-coated sample), and ~ 81% (1.0 Nb-coated sample). The K-doped samples clearly showed a considerably improved cyclic performance. Fig. 7 presents the 4th, 15th, and 50th charge-discharge curves of the samples in Fig. 6. The curves for samples containing K doping (K-doped and Nb-coated samples) showed relatively low capacity fading during cycling, as shown in Fig. 7b–d, compared to that of the pristine sample (Fig. 7a). Considering that the capacity fading of Li-rich oxides is mainly attributed to structural transformation from a layered structure to a spinel variant [1014], it is clear that K doping efficiently alleviated the occurrence of the spinel structure. The existence of K+ ions in the Li layer disturbs the migration of transition metals, which can reduce the phase transition during cycling. Furthermore, large K+ ions acting as pillars increase the stability of the layered structure and suppress the collapse of the lattice structure of the Li-rich oxide. These effects may improve the cycling performance of K-doped samples.
To further investigate the influence of K doping and Nb coating, the impedance of the cells containing the samples was measured. Fig. 8 shows the Nyquist curves of the pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples before the electrochemical test, after 1 and 50 cycles. The curves are composed of a semicircle in the high-frequency range and a line in the low-frequency range. The size of the semicircle represents the impedance value of the charge transfer and solid electrolyte interface [25,41]. As shown in Fig. 8a, the semicircles of the cells were similar, regardless of the samples they contained. However, after 1 and 50 cycles (Fig. 8b and 8c, respectively), the K-doped samples presented a considerably smaller semicircle compared to that of the pristine sample. This means that the impedance value of the cells was significantly decreased by the K doping. The enlarged interslab thickness of LiO2 and reduced cation mixing may reduce the impedance of the cell after cycling. The 0.3 wt.% Nb oxide coating further decreased the impedance value of the cells, which is attributed to the stabilized interface between cathode and electrolyte by coating. Although the 1.0 Nb-coated sample showed a slightly higher impedance value than the 0.3 Nb-coated sample, it was still smaller than the impedance value for the pristine sample. This reduced impedance value explains the higher capacity and improved rate capability of the K doping and Nb oxide coating. It is clear that the simultaneous introduction of two methods is an effective approach to obtain enhanced electrochemical properties of Li-rich oxide owing to the synergistic effect of coating and doping.
### 4. Conclusions
K doping and Nb oxide coating were simultaneously applied to a Li-rich oxide (Li1.2Ni0.13Co0.13Mn0.54O2) to improve bulk and surface properties. The coating layer formed a thin, homogeneous film-type shape on the surface of the porous powder. Large K+ ions substituted smaller Li+ ions in the Li layer and enlarged the interslab thickness of LiO2, which facilitated Li migration during cycling. Cation mixing, which disturbs Li diffusion, was also reduced by K doping. Owing to these effects, the discharge capacity and rate capability of the Li-rich oxide were improved by K doping. Moreover, the cyclic performance was distinctly enhanced because large K+ ions in the Li layer disturb the migration of transition metals, which reduces the phase transition during cycling. The K+ ions are also expected to act as pillars that stabilize the host structure of the Li-rich oxide.
The Nb oxide coating further increased the capacity of the K-doped sample. The sample prepared by the simple combustion method has a wide interfacial region between the cathode and electrolyte, thus undesirable interfacial side reactions seem to critically influence. Therefore, the suppression of side reactions by the Nb oxide coating significantly improved the discharge capacity of the sample. Owing to the synergistic effect of the K doping and Nb oxide coating, the 0.3 wt.% Nb-coated K-doped sample exhibited the best capacity, rate capability, and cyclic performance in our work. Impedance analysis also confirmed that K doping and Nb oxide coating are effective approaches for reducing the impedance value during cycling, which can explain the enhanced capacity and rate capability by doping and coating.
### Acknowledgment
This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korean government (MSIT) (No. 2020R1A2C1008370) and supported by the Technology Innovation Program (20007034) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). This work was also supported by Kyonggi University’s Graduate Research Assistantship 2020.
##### Fig. 1
SEM images of Li1.2Ni0.13Co0.13Mn0.54O2 (Li-rich oxide) powders. (a) Pristine, (b) K-doped, (c) 0.3 Nb-coated, and (d) 1.0 Nb-coated samples.
##### Fig. 2
TEM images of Li1.2Ni0.13Co0.13Mn0.54O2 (Li-rich oxide) powders. (a) Pristine, (b) 0.3 Nb-coated, and (c) 1.0 Nb-coated samples.
##### Fig. 3
XPS spectra of Li1.2Ni0.13Co0.13Mn0.54O2 (Li-rich oxide) powders. (a) K 2p and (b) Nb 3d for pristine sample, (c) K 2p and (d) Nb 3d for K-doped sample, (e) K 2p and (f) Nb 3d for 0.3-Nb coated sample, (g) K 2p and (h) Nb 3d for 1.0 Nb-coated sample.
##### Fig. 4
(a) XRD patterns of Li1.2Ni0.13Co0.13Mn0.54O2 (Li-rich oxide) powders, Rietveld refinement of (b) pristine sample and (c) 0.3 Nb-coated sample.
##### Fig. 5
Electrochemical performance of pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples. (a) Initial charge-discharge curves of samples at 20 mA·g−1. (b) Discharge capacities of samples at 20, 40, 100, 200, 600 mA·g−1.
##### Fig. 6
Cyclic performance of pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples (cells cycled at 20 mA·g−1 for initial three cycles, and tested at 100 mA·g−1 until 50 cycles).
##### Fig. 7
4th, 15th, and 50th charge–discharge curves of the samples measured at 100 mA·g−1. (a) Pristine, (b) K-doped, (c) 0.3 Nb-coated, and (d) 1.0 Nb-coated samples.
##### Fig. 8
Nyquist plots of pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples: (a) before electrochemical test, (b) after 1 cycle, and (c) after 50 cycles.
##### Table 1
Crystal structural parameters and interslab thickness of pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples.
Sample a (Å) c (Å) V (Å3) c/a I(003)/I(104) SMO2 (Å) ILiO2 (Å) GoF Rwp (%)
Pristine 2.8514 14.2312 115.704 4.9910 1.2028 2.3042 2.4396 1.18 1.78
K-doped 2.8513 14.227 115.665 4.9897 1.3369 2.2401 2.5023 1.35 1.96
0.3 Nb-coated 2.8504 14.2207 115.536 4.9891 1.3659 2.2341 2.5062 1.31 1.91
1.0 Nb-coated 2.8518 14.2281 115.712 4.9892 1.3594 2.2361 2.5066 1.22 1.79
##### Table 2
Discharge capacity and capacity retention of pristine, K-doped, 0.3 Nb-coated, and 1.0 Nb-coated samples (capacity retention refers to the percentage of the retained capacity at each current density compared to that at 20 mA·g−1).
Current density (mA·g−1) Pristine K-doped 0.3 Nb-coated 1.0 Nb-coated
Discharge capacity (mAh·g−1) Capacity retention (%) Discharge capacity (mAh·g−1) Capacity retention (%) Discharge capacity (mAh·g−1) Capacity retention (%) Discharge capacity (mAh·g−1) Capacity retention (%)
20 (3rd cycle) 233.93 100 270.43 100 288.63 100 280.96 100
100 (13th cycle) 156.82 67.04 208.73 77.18 223.58 77.46 197.00 70.12
200 (18th cycle) 124.16 53.08 179.93 66.53 196.17 67.97 160.99 57.30
600 (23rd cycle) 69.24 29.60 114.41 42.31 146.85 50.88 95.88 34.13
### References
[1] MM. Thackeray, SH. Kang, CS. Johnson, JT. Vaughey, R. Benedek and SA. Hackney, J Mater Chem., 2007, 17(30), 3112–3125.
[2] KA. Jarvis, Z. Deng, LF. Allard, A. Manthiram and PJ. Ferreira, Chem Mater., 2011, 23(16), 3614–3621.
[3] H. Koga, L. Croguennec, M. Ménétrier, P. Mannessiez, F. Weill and C. Delmas, J Power Sources., 2013, 236, 250–258.
[4] F. Fu, Y-P. Deng, C-H. Shen, G-L. Xu, X-X. Peng, Q. Wang, Y-F. Xu, J-C. Fang, L. Huang and S-G. Sun, Electrochem commun., 2014, 44, 54–58.
[5] H. Yu and H. Zhou, J Phys Chem Lett., 2013, 4(8), 1268–1280.
[6] D. Luo, G. Li, X. Guan, C. Yu, J. Zheng, X. Zhang and L. Li, J Mater Chem A., 2013, 1(4), 1220–1227.
[7] B. Song, H. Liu, Z. Liu, P. Xiao, MO. Lai and L. Lu, Sci Rep., 2013, 3(1), 1–12.
[8] SJ. Shi, JP. Tu, YY. Tang, YX. Yu, YQ. Zhang, XL. Wang and CD. Gu, J Power Sources., 2013, 228, 14–23.
[9] H. Lee, SB. Lim, JY. Kim, M. Jeong, YJ. Park and WS. Yoon, ACS Appl Mater Interfaces., 2018, 10(13), 10804–10818.
[10] ES. Lee and A. Manthiram, J Mater Chem A., 2014, 2(11), 3932–3939.
[11] M. Gu, A. Genc, I. Belharouak, D. Wang, K. Amine, S. Thevuthasan, DR. Baer, J-G. Zhang, ND. Browning, J. Liu and C. Wang, Chem Mater., 2013, 25(11), 2319–2326.
[12] AR. Armstrong, M. Holzapfel, P. Novak, CS. Johnson, SH. Kang, MM. Thackeray and PG. Bruce, J Am Chem Soc., 2006, 128(26), 8694–8698.
[13] DYW. Yu and K. Yanagida, J Electrochem Soc., 2011, 158(9), A1015.
[14] D. Mohanty, AS. Sefat, S. Kalnaus, J. Li, RA. Meisner, EA. Payzant, DP. Abraham, DL. Wood and C. Daniel, J Mater Chem A., 2013, 1(20), 6249–6261.
[15] B. Xu, CR. Fell, M. Chi and YS. Meng, Energy Environ Sci., 2011, 4(6), 2223–2233.
[16] J. Reed, G. Ceder and A. Van Der Ven, Electrochem Solid-State Lett., 2001, 4(6), A78.
[17] W. He, D. Yuan, J. Qian, X. Ai, H. Yang and Y. Cao, J Mater Chem A., 2013, 1(37), 11397–11403.
[18] MN. Ates, Q. Jia, A. Shah, A. Busnaina, S. Mukerjee and KM. Abraham, J Electrochem Soc., 2014, 161(3), A290.
[19] J-H. Park, J. Lim, J. Yoon, K-S. Park, J. Gim, J. Song, H. Park, D. Im, M. Park, D. Ahn, Y. Paik and J. Kim, Dalt Trans., 2012, 41(10), 3053–3059.
[20] Z. Zheng, XD. Guo, YJ. Zhong, WB. Hua, CH. Shen, SL. Chou and XS. Yang, Electrochim Acta., 2016, 188, 336–343.
[21] Q. Li, G. Li, C. Fu, D. Luo, J. Fan and L. Li, ACS Appl Mater Interfaces., 2014, 6(13), 10330–10341.
[22] SY. Lee and YJ. Park, ACS Omega., 2020, 5(7), 3579–3587.
[23] BG. Lee and YJ. Park, Sci Rep., 2020, 10(1), 1–11.
[24] X. Wang, Z. Hu, A. Adeosun, B. Liu, R. Ruan, S. Li and H. Tan, J Energy Inst., 2018, 91(6), 835–844.
[25] JS. Park and YJ. Park, J Electrochem Sci Technol., 2017, 8(2), 101–106.
[26] F. Xin, H. Zhou, X. Chen, M. Zuba, N. Chernova, G. Zhou and MS. Whittingham, ACS Appl Mater Interfaces., 2019, 11(38), 34889–34894.
[27] Y. Seok Jung, AS. Cavanagh, Y. Yan, SM. George and A. Manthiram, J Electrochem Soc., 2011, 158(12), A1298-.
[28] YK. Sun, MJ. Lee, CS. Yoon, J. Hassoun, K. Amine and B. Scrosati, Adv Mater., 2012, 24(9), 1192–1196.
[29] H. Kim, D. Byun, W. Chang, HG. Jung and W. Choi, Chem A., 2017, 5(47), 25077–25089.
[30] HJ. Lee and YJ. Park, J Power Sources., 2013, 244, 222–233.
[31] JH. Ryu, BG. Park, SB. Kim and YJ. Park, J Appl Electrochem., 2009, 39(7), 1059–1066.
[32] HW. Kwak and YJ. Park, Sci Rep., 2019, 9(1), 1–9.
[33] HW. Kwak and YJ. Park, Thin Solid Films., 2018, 660, 625–630.
[34] JW. Lee and YJ. Park, J Electrochem Sci Technol., 2018, 9(3), 176–183.
[35] A. Rougier, P. Gravereau and C. Delmas, J Electrochem Soc., 1996, 143(4), 1168–1175.
[36] CB. Lim and YJ. Park, Sci Rep., 2020, 10(1), 1–12.
[37] C. Fu, G. Li, D. Luo, J. Zheng and L. Li, J Mater Chem A., 2014, 2(5), 1471–1483.
[38] M. Okubo and A. Yamada, ACS Appl Mater Interfaces., 2017, 9(42), 36463–36472.
[39] K. Luo, MR. Roberts, R. Hao, N. Guerrini, DM. pickup, YS. Liu, K. Edstrom, J. Guo, AV. Chadwick, LC. Duda and PG. Bruce, Nat Chem., 2016, 8(7), 684–691.
[40] H. Koga, L. Croguennec, M. Menetrier, P. Mannessiez, F. Weill, C. Delmas and S. Belin, J Phys Chem C., 2014, 118(11), 5700–5709.
[41] GTK. Fey, P. Muralidharan, CZ. Lu and Y. Da Cho, Solid State Ion., 2005, 176(37–28), 2759–2767. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8176710605621338, "perplexity": 8874.163675893042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304810.95/warc/CC-MAIN-20220125100035-20220125130035-00603.warc.gz"} |
https://www.physicsforums.com/threads/black-hole-formed-from-lhc.678725/ | # Black Hole Formed From LHC?
Tags:
1. Mar 15, 2013
### PhysicsWanabe
Hey I heard from a couple differnt places on the internet that there's a possibilty that a black hole hypothetically could be formed by the LHC. I thought that black holes were formed by the implosion of a huge star that keeps going in on itself due to gravity. Is it true that this could happen at the LHC?
2. Mar 15, 2013
### fzero
It is indeed true that the typical example of black hole formation is in the collapse of a sufficiently massive star. However, we believe that a small black hole can be formed whenever mass is compressed beyond a certain size. For example, if you had a Planck mass ($\sim 10^{-8}~\mathrm{kg}$) compressed to within a Planck length ($\sim 10^{-35}~\mathrm{m}$), you would create a black hole. In particular, in the very early universe, when all of the mass in the universe was much closer together, the formation of so-called primordial black holes would have been possible.
In certain hypothetical extensions of the Standard Model (mainly models with "large" extra dimensions of spacetime), the true Planck length could be longer than the $\sim 10^{-35}~\mathrm{m}$ distance we derive from measuring gravity at long scales. In these models, at some high energy, the strength of gravity becomes sharply stronger than what we measure at the normal energy scales of the solar system or table-top experiments. In principle, above this characteristic energy, the formation of black holes would become feasible. Such small black holes wouldn't be expected to be very dangerous, because they would decay quickly due to Hawking radiation.
It is unlikely that black holes would be produced at the LHC. The mechanism relies on highly speculative ideas and no other signatures of this type of new physics has been seen so far. Wikipedia has extensive discussions at http://en.wikipedia.org/wiki/Micro_black_hole and http://en.wikipedia.org/wiki/Safety_of_high_energy_particle_collision_experiments
3. Mar 15, 2013
### ZapperZ
Staff Emeritus
You are a bit late to the party. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7247105836868286, "perplexity": 403.5702345799112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863119.34/warc/CC-MAIN-20180619193031-20180619213031-00285.warc.gz"} |
https://worldwidescience.org/topicpages/a/application+reference+efsa-gmo-uk-2005-21.html | #### Sample records for application reference efsa-gmo-uk-2005-21
1. Microfabricated Reference Electrodes and their Biosensing Applications
Directory of Open Access Journals (Sweden)
M. Jamal Deen
2010-03-01
Full Text Available Over the past two decades, there has been an increasing trend towards miniaturization of both biological and chemical sensors and their integration with miniaturized sample pre-processing and analysis systems. These miniaturized lab-on-chip devices have several functional advantages including low cost, their ability to analyze smaller samples, faster analysis time, suitability for automation, and increased reliability and repeatability. Electrical based sensing methods that transduce biological or chemical signals into the electrical domain are a dominant part of the lab-on-chip devices. A vital part of any electrochemical sensing system is the reference electrode, which is a probe that is capable of measuring the potential on the solution side of an electrochemical interface. Research on miniaturization of this crucial component and analysis of the parameters that affect its performance, stability and lifetime, is sparse. In this paper, we present the basic electrochemistry and thermodynamics of these reference electrodes and illustrate the uses of reference electrodes in electrochemical and biological measurements. Different electrochemical systems that are used as reference electrodes will be presented, and an overview of some contemporary advances in electrode miniaturization and their performance will be provided.
2. Old italian reference systems and their applications
Science.gov (United States)
Baiocchi, Valerio; Lelo, Keti
2010-05-01
The history of geodetic systems used in Italy from the end of the XIX century to the beginning of the XXth century is complex and, in the past, this has led some researcher to misinterpretations. For this reason an explanation of geodetic systems used in Italy in this period is reported in this paper. Towards the end of the XIXth century, the " Ufficio Tecnico del Corpo di Stato Maggiore" (first nucleus of the future IGM) was entrusted to unify the geodetic reference systems of the Italian pre-union states to produce a unique Italian Datum for the whole national territory. At the same time, the "Ufficio del Catasto" (National Cadastre Office), for its purposes, began the production of a cartography in projection Cassini-Soldner representing only the thematic layer of its interest: the delimitations of properties. Although officially the Datums used in those years are the same both for cadastre and IGM (Genoa, Monte Mario, Castanea delle Furie), in many cases temporary orientations were used on cadastral maps and the values of first, second and third order vertexes do not coincide with those definitive ones used by the IGM. This ambiguity led frequently to misinterpretation and errors to georeferenciation of present and historic Italian cartography
3. Top-down enterprise application integration with reference models
OpenAIRE
Willem-Jan van den Heuvel; Wilhelm Hasselbring; Mike Papazoglou
2000-01-01
For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference ...
4. Top-Down Enterprise Application Integration with Reference Models
Directory of Open Access Journals (Sweden)
Willem-Jan van den Heuvel
2000-11-01
Full Text Available For Enterprise Resource Planning (ERP systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model's specification. The actual linking of reference and legacy models is done with a methodology for connecting (new business objects with (old legacy systems.
5. LCCP Desktop Application v1.0 Engineering Reference
Energy Technology Data Exchange (ETDEWEB)
Beshr, Mohamed [University of Maryland, College Park; Aute, Vikrant [University of Maryland, College Park
2014-04-01
This Life Cycle Climate Performance (LCCP) Desktop Application Engineering Reference is divided into three parts. The first part of the guide, consisting of the LCCP objective, literature review, and mathematical background, is presented in Sections 2-4. The second part of the guide (given in Sections 5-10) provides a description of the input data required by the LCCP desktop application, including each of the input pages (Application Information, Load Information, and Simulation Information) and details for interfacing the LCCP Desktop Application with the VapCyc and EnergyPlus simulation programs. The third part of the guide (given in Section 11) describes the various interfaces of the LCCP code.
6. Opportunities for ISRU Applications in the Mars Reference Mission
Science.gov (United States)
Duke, Michael B.
1998-01-01
The NASA Mars Exploration Reference Mission envisions sending three crews of six astronauts to Mars, each for 500-day stays on the surface. In situ Resourse Unitlization (ISRU) has been baselined for the production of propellant for crews leaving the surface, as well as to create reservoirs of water and life-support consumables These applications improve performance (by reducing the mass of hardware and supplies that must be brought to Mars for the propulsion system) and reduce risk (by creating consumables as backups to stores brought from Earth). Similar applications of other types of ISRU-derived materials should be sought and selected if they similarly improve performance or reduce risk. Some possible concepts for consideration, based on a review of the components included in the Reference Mission, include (1) emplacement of a hardened landing pad; (2) construction of a roadway for transporting the nuclear power system to a safe distance from the habitat; (3) radiation shielding for inflatable structures; (4) tanks and plumbing for bioregenerative life-support system; (5) drilling rig; (6) additional access structures for equipment and personnel and unpressurized structures for vehicle storage; (7) utilitarian manufactured products (e.g., stools and benches) for habitat and laboratory; (8) thermal radiators; (9) photovoltaic devices and support structures; and ( 10) external structures for storage and preservation of Mars samples. These may be viewed principally as mission- enhancing concepts for the Reference Mission. Selection would require a clear rationale for performance improvement or risk reduction and a demonstration that the cost of developing and transporting the needed equipment would be recovered within the budget for the program. Additional work is also necessary to ascertain whether early applications of ISRU for these types of purposes could lead to the modification of later missions, allowing the replacement of infrastructure payloads currently
7. Application of diagnostic reference levels in medical practice
Energy Technology Data Exchange (ETDEWEB)
Bourguignon, Michel [Faculty of Medicine of Paris, Deputy Director General, Nuclear Safety Authority (ASN), Paris (France)
2006-07-01
Diagnosis reference levels (D.R.L.) are defined in the Council Directive 97/43 EURATOM as 'Dose levels in medical radio diagnosis practices or in the case of radiopharmaceuticals, levels of activity, for typical examinations for groups of standards-sized patients or standards phantoms for broadly defined types of equipment. These levels are expected not to be exceeded for standard procedures when good and normal practice regarding diagnostic and technical performance is applied'. Thus D.R.L. apply only to diagnostic procedures and does not apply to radiotherapy. Radiation protection of patients is based on the application of 2 major radiation protection principles, justification and optimization. The justification principle must be respected first because the best way to protect the patient is not to carry a useless test. Radiation protection of the patient is a continuous process and local dose indicator values in the good range should not prevent the radiologist or nuclear medicine physician to continue to optimize their practice. (N.C.)
8. A Reliable Reference Electrode in Molten Carbonate and Its Applications
Institute of Scientific and Technical Information of China (English)
2000-01-01
A Ag|AgCl reference electrode which can be used in molten carbonate media has been described in this paper.It consists of a silver wire immersed in a solution of AgCl(1mol%) in (Li0.62,K0.38)2CO3,with a zirconia junction.The main properties of reference electrode,such as reproducibility ,stability and reversibility, were checked.The results have demonstrated that the reference electrode is reliable.With such reference electrode catalysis of various electrode materials to oxygen reduction in molten alkali carbonate media was investigated.It is found that as catalysts for oxygen reduction oxidized nickel-niobium alloy is superior to nickel oxide.
9. CMOS bandgap references and temperature sensors and their applications
OpenAIRE
Wang, G.
2005-01-01
Two main parts have been presented in this thesis: device characterization and circuit. In integrated bandgap references and temperature sensors, the IC(VBE, characteristics of bipolar transistors are used to generate the basic signals with high accuracy. To investigate the possibilities to fabricate high-precision bandgap references and temperature sensors in low-cost CMOS technology, the electrical characteristics of substrate bipolar pnp transistors have been investigated over a wide tempe...
10. CMOS bandgap references and temperature sensors and their applications
NARCIS (Netherlands)
Wang, G.
2005-01-01
Two main parts have been presented in this thesis: device characterization and circuit. In integrated bandgap references and temperature sensors, the IC(VBE, characteristics of bipolar transistors are used to generate the basic signals with high accuracy. To investigate the possibilities to fabrica
11. Reference clock parameters for digital communications systems applications
Science.gov (United States)
Kartaschoff, P.
1981-01-01
The basic parameters relevant to the design of network timing systems describe the random and systematic time departures of the system elements, i.e., master (or reference) clocks, transmission links, and other clocks controlled over the links. The quantitative relations between these parameters were established and illustrated by means of numerical examples based on available measured data. The examples were limited to a simple PLL control system but the analysis can eventually be applied to more sophisticated systems at the cost of increased computational effort.
12. Neutron imaging and applications a reference for the imaging community
CERN Document Server
McGreevy, Robert L; Bilheux, Hassina Z
2009-01-01
Offers an introduction to the basics of neutron beam production in addition to the wide scope of techniques that enhance imaging application capabilities. This title features a section that describes imaging single grains in polycrystalline materials, neutron imaging of geological materials and other materials science and engineering areas.
13. Flouescence reference materials used for optical and biophotonic applications
Science.gov (United States)
Engel, A.; Otterman, C.; Klahn, J.; Enseling, D.; Korb, T.; Resch-Genger, U.; Hoffmann, K.; Schweizer, S.; Selling, J.; Kynast, U.; Koberling, F.; Rupertus, V.
2007-07-01
Fluorescence techniques are known for their high sensitivity and are widely used as analytical tools and detection methods for product and process control, material sciences, environmental and bio-technical analysis, molecular genetics, cell biology, medical diagnostics, and drug screening. According to DIN/ISO 17025 certified standards are used for fluorescence diagnostics having the drawback of giving relative values for fluorescence intensities only. Therefore reference materials for a quantitative characterization have to be related directly to the materials under investigation. In order to evaluate these figures it is necessary to calculate absolute numbers like absorption/excitation cross sections and quantum yield. This can be done for different types of dopands in different materials like glass, glass ceramics, crystals or nano crystalline material embedded in polymer matrices. Based on the optical spectroscopy data we will discuss options for characteristic doped glasses and glass ceramics with respect to scattering and absorption regime. It has shown recently for YAG:Ce glass ceramics that for a proper determination of the quantum efficiency in these highly scattering media a reference material with similar scattering and fluorescent properties is required. This may be performed using the emission decay measurement diagnostics, where the decay time is below 100 ns. In this paper we present first results of these aspects using well performing LUMOGEN RED organic pigments for a comparison of mainly transparent glass with glass ceramics doped with various amounts of dopands e.g. ions of raw earth elements and transition metals. The LUMOGEN red is embedded in silica and polyurethane matrices. Characterisations on wavelength accuracy and lifetime for different environmental conditions (temperature, UV irradiation) have been performed. Moreover intensity patterns and results for homogeneity, isotropy, photo and thermal stability will be discussed. In a next
14. Application of Artificial Neural Network Approach for Estimating Reference Evapotranspiration
Directory of Open Access Journals (Sweden)
Khyati N. Vyas
2016-08-01
Full Text Available The process of evapotranspiration (ET is a vital part of the water cycle. Exact estimation of the value of ET is necessary for designing irrigation systems and water resources management. Accurate estimation of ET is essential in agriculture, its over-estimation leads to cause the waste of valuable water resources and its underestimation leads to the plant moisture stress and decrease in the crop yield. The well known Penman-Monteith (PM equation always performs the highest accuracy results of estimating reference Evapotranspiration (ET0 among the existing methods is without any discussion. However, the equation requires climatic data that are not always available particularly for a developing country. ET0 is a complex process which is depending on a number of interacting meteorological factors, such as temperature, humidity, wind speed, and radiation. The lack of physical understanding of ET0 process and unavailability of all appropriate data results in imprecise estimation of ET0. Over the past two decades, artificial neural networks (ANNs have been increasingly applied in modeling of hydrological processes because of their ability in mapping the input–output relationship without any understanding of physical process. This paper investigates for the first time in the semiarid environment of Junagadh, the potential of an artificial neural network (ANN for estimating ET0 with limited climatic data set.
15. Comprehensive NASA Cis-Lunar Earth Moon Libration Orbit Reference and Web Application Project
Data.gov (United States)
National Aeronautics and Space Administration — To finalize a comprehensive NASA Cis-Lunar / Earth-Moon Libration Orbit Reference and Web Application begun using FY13 IRAD funding approved in May 2013. This GSFC...
16. Comparative assessment of thematic accuracy of GLC maps for specific applications using existing reference data
Science.gov (United States)
Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Schouten, L.; Herold, M.
2016-02-01
Inputs to various applications and models, current global land cover (GLC) maps are based on different data sources and methods. Therefore, comparing GLC maps is challenging. Statistical comparison of GLC maps is further complicated by the lack of a reference dataset that is suitable for validating multiple maps. This study utilizes the existing Globcover-2005 reference dataset to compare thematic accuracies of three GLC maps for the year 2005 (Globcover, LC-CCI and MODIS). We translated and reinterpreted the LCCS (land cover classification system) classifier information of the reference dataset into the different map legends. The three maps were evaluated for a variety of applications, i.e., general circulation models, dynamic global vegetation models, agriculture assessments, carbon estimation and biodiversity assessments, using weighted accuracy assessment. Based on the impact of land cover confusions on the overall weighted accuracy of the GLC maps, we identified map improvement priorities. Overall accuracies were 70.8 ± 1.4%, 71.4 ± 1.3%, and 61.3 ± 1.5% for LC-CCI, MODIS, and Globcover, respectively. Weighted accuracy assessments produced increased overall accuracies (80-93%) since not all class confusion errors are important for specific applications. As a common denominator for all applications, the classes mixed trees, shrubs, grasses, and cropland were identified as improvement priorities. The results demonstrate the necessity of accounting for dissimilarities in the importance of map classification errors for different user application. To determine the fitness of use of GLC maps, accuracy of GLC maps should be assessed per application; there is no single-figure accuracy estimate expressing map fitness for all purposes.
17. Utility and applicability of the sharable content object reference model (SCORM) within Navy higher education
OpenAIRE
Zacharopoulos, Ilias Z.; Kohistany, Mohammad B.
2004-01-01
Approved for public release, distribution is unlimited This thesis critically analyzes the Sharable Content Object Reference Model (SCORM) within higher education and examines SCORM's limitations within a realistic application environment versus within a theoretical/conceptual platform. The thesis also examines environments better suited for implementation of SCORM technology. In addressing the research questions, it was discovered that from the current standards set forth by Advanced Dist...
18. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV
Science.gov (United States)
Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.
2011-04-01
When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between
19. Normalization references for Europe and North America for application with USEtox™ characterization factors
DEFF Research Database (Denmark)
Laurent, Alexis; Lautier, Anne; Rosenbaum, Ralph K.;
2011-01-01
Purpose: In life cycle impact assessment, normalization can be a very effective tool for the life cycle assessment practitioner to interpret results and put them into perspective. The paper presents normalization references for the recently developed USEtox™ model, which aims at calculating...... globally applicable characterization factors. Normalization references for Europe and North America are determined, and guidance for expansions to other geographical regions is provided. Materials and methods The base years of the European and North American inventories are 2004 and 2002/2008, respectively......, a similar coverage of substances was obtained for both regions with relatively high representation of metals and a number of organic compounds, mainly consisting of non-methane volatile organic compounds and pesticides. The two inventory sets were eventually characterized with the characterization factors...
20. Application of Model Reference Adaptive Control System to Instrument Pointing System /IPS/
Science.gov (United States)
Waites, H. B.
1979-01-01
A Model Reference Adaptive Controller (MRAC) is derived for a Shuttle payload called the Instrument Pointing System (IPS). The unique features of this MRAC design are that total state feedback is not required, that the internal structure of the model is independent of the internal structure of the IPS, and that the model input is of bounded variation and not required a priori. An application of Liapunov's stability theorems is used to synthesize a control signal which assures MRAC asymptotic stability. Exponential observers are used to obtain the necessary state information to implement the control synthesis. Results are presented which show how effectively the MRAC can maneuver the IPS.
1. Design and Development of Hybrid Multilevel Inverter employing Dual Reference Modulation Technique for Fuel Cell Applications
Directory of Open Access Journals (Sweden)
R. Seyezhai
2011-10-01
Full Text Available MultiLevel Inverter (MLI has been recognized as an attractive topology for high voltage DC-AC conversion. This paper focuses on a new dual reference modulation technique for a hybrid multilevel inverter employing Silicon carbide (SiC switches for fuel cell applications. The proposed modulation technique employs two reference waveforms and a single inverted sine wave as the carrier waveform. This technique is compared with the conventional dual carrier waveform in terms of output voltage spectral quality and switching losses. An experimental five-level hybrid inverter test rig has been built using SiC switches to implement the proposed algorithm. Gating signals are generated using PIC microcontroller. The performance of the inverter has been analyzed and compared with the result obtained from theory and simulation. Simulation study of Proportional Integral (PI controller for the inverter employing the proposed modulation strategy has been done in MATLAB/SIMULINK. Keywords: Multilevel inverter, SiC , dual reference modulation, switching losses, PI
2. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV
Energy Technology Data Exchange (ETDEWEB)
Endres, Christopher J; Pomper, Martin G [Russell H Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, MD 21231 (United States); Hammoud, Dima A, E-mail: [email protected] [Radiology and Imaging Sciences, National Institutes of Health/Clinical Center, Bethesda, MD (United States)
2011-04-21
When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [{sup 11}C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (k{sup r}{sub 2}) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BP{sub ND}). Compared with standard SRTM, either coupling of k{sup r}{sub 2} across regions or constraining k{sup r}{sub 2} to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BP{sub ND} between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining k{sup r}{sub 2} to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the
3. A High-voltage Reference Testbed for the Evaluation of High-voltage Dividers for Pulsed Applications
CERN Document Server
Bastos, M Cerqueira; Bergman, A; 10.1109/CPEM.2010.5543408
2010-01-01
The design, evaluation and commissioning of a high voltage reference testbed for pulsed applications to be used in the precision testing of high voltage dividers is described. The testbed is composed of a pulsed power supply, a reference divider based on compressed gas capacitor technology and an acquisition system which makes use of the fast measurement capabilities of the HP 3458 DVM. Results of the evaluation of the reference system are presented.
4. State-specific Multi-reference Perturbation Theories with Relaxed Coefficients: Molecular Applications
Directory of Open Access Journals (Sweden)
Debashis Mukherjee
2002-06-01
Full Text Available Abstract: We present in this paper two new versions of Rayleigh-Schr¨odinger (RS and the Brillouin-Wigner (BW state-specific multi-reference perturbative theories (SSMRPT which stem from our state-specific multi-reference coupled-cluster formalism (SS-MRCC, developed with a complete active space (CAS. They are manifestly sizeextensive and are designed to avoid intruders. The combining coefficients cμ for the model functions Æμ are completely relaxed and are obtained by diagonalizing an effective operator in the model space, one root of which is the target eigenvalue of interest. By invoking suitable partitioning of the hamiltonian, very convenient perturbative versions of the formalism in both the RS and the BW forms are developed for the second order energy. The unperturbed hamiltonians for these theories can be chosen to be of both MÆller-Plesset (MP and Epstein-Nesbet (EN type. However, we choose the corresponding Fock operator fμ for each model function Æμ, whose diagonal elements are used to define the unperturbed hamiltonian in the MP partition. In the EN partition, we additionally include all the diagonal direct and exchange ladders. Our SS-MRPT thus utilizes a multi-partitioning strategy. Illustrative numerical applications are presented for potential energy surfaces (PES of the ground (1Σ+ and the first delta (1Δ states of CH+ which possess pronounced multi-reference character. Comparison of the results with the corresponding full CI values indicates the efficacy of our formalisms.
5. Novel application of high pressure processing for the production of shellfish toxin matrix reference materials.
Science.gov (United States)
Turner, Andrew D; Powell, Andy L; Burrell, Stephen
2014-11-01
The production of homogeneous and stable matrix reference materials for marine biotoxins is important for the validation and implementation of instrumental methods of analysis. High pressure processing was investigated to ascertain potential advantages this technique may have in stabilising paralytic shellfish poisoning toxins in shellfish tissues compared to untreated materials. Oyster tissues were subjected to a range of different temperatures and pressures, with results showing a significant reduction in biological activity in comparison to control samples, without significantly altering toxin profiles. Tissue subjected to pressures >600 MPa at 50 °C was assessed for homogeneity and stability. The sample homogeneity was determined using a pre-column oxidation LC-FLD method and shown to be within accepted levels of within batch repeatability. Short and long-term stability studies were conducted over a range of temperatures, with analysis by pre and post column oxidation LC-FLD demonstrating improved stability of toxins compared to the untreated materials and with epimerisation of toxins also notably reduced in treated materials. This study confirmed the technique of high pressure processing to improve the stability of PSP toxins compared to untreated wet tissues and highlighted its applicability in reference material preparation where removal of biological activity is of importance.
6. An Application of Fictitious Reference Iterative Tuning to State Feedback Control
Science.gov (United States)
Matsui, Yoshihiro; Akamatsu, Shunichi; Kimura, Tomohiko; Nakano, Kazushi; Sakurama, Kazunori
In this paper, an application method of Fictitious Reference Iterative Tuning (FRIT), which has been developed for controller gain tuning for single-input single-output systems, to state feedback gain tuning for single-input multivariable systems is proposed. Transient response data of a single-input multivariable plant obtained under closed-loop operation is used for model matching by the FRIT in time domain. The data is also used in frequency domain to estimate the stability and to improve the control performance of the closed-loop system with the state feedback gain tuned by the method. The method is applied to a state feedback control system for an inverted pendulum with an inertia rotor and its usefulness is illustrated through experiments.
7. Applicability of the "Frame of Reference" approach for environmental monitoring of offshore renewable energy projects.
Science.gov (United States)
Garel, Erwan; Rey, Cibran Camba; Ferreira, Oscar; van Koningsveld, Mark
2014-08-01
This paper assesses the applicability of the Frame of Reference (FoR) approach for the environmental monitoring of large-scale offshore Marine Renewable Energy (MRE) projects. The focus is on projects harvesting energy from winds, waves and currents. Environmental concerns induced by MRE projects are reported based on a classification scheme identifying stressors, receptors, effects and impacts. Although the potential effects of stressors on most receptors are identified, there are large knowledge gaps regarding the corresponding (positive and negative) impacts. In that context, the development of offshore MRE requires the implementation of fit-for-purpose monitoring activities aimed at environmental protection and knowledge development. Taking European legislation as an example, it is suggested to adopt standardized monitoring protocols for the enhanced usage and utility of environmental indicators. Towards this objective, the use of the FoR approach is advocated since it provides guidance for the definition and use of coherent set of environmental state indicators. After a description of this framework, various examples of applications are provided considering a virtual MRE project located in European waters. Finally, some conclusions and recommendations are provided for the successful implementation of the FoR approach and for future studies.
8. Bayesian methods for uncertainty factor application for derivation of reference values.
Science.gov (United States)
Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B
2016-10-01
In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process that considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw Bayesian methods would be potentially useful for combining toxicity data from disparate sources-high throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that considers each individual UF as a distribution. From an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology.
9. A conditional bivariate reference curve with an application to human growth
DEFF Research Database (Denmark)
Petersen, Jørgen Holm
conditional bivariate distribution; reference curves; percentile; non-parametric; quantile regression; non-parametric estimation......conditional bivariate distribution; reference curves; percentile; non-parametric; quantile regression; non-parametric estimation...
10. The Command and Control Reference Model for modeling, simulations, and technology applications
Science.gov (United States)
Mayk, Israel
1994-01-01
The C2RM provides a framework for the evolution of a coordinated and detailed definition of a command and control (C2) discipline. The C2RM embodies an integrated multidisciplinary approach. It is intended to be complete and self-consistent for the main levels of abstractions encountered in models, simulations, operational applications, functional descriptions, paradigms and metaphors of C2. The scope of the C2RM embraces C2 using all key physical and logical interactions associated with C2 systems. It is concerned with interactions, involving not only communications (e.g., radios), but transportations (e.g., vehicles), identifications (e.g., sensors), and inflictions (e.g., weapons), which take place between resources of the same, friendly, hostile or neutral C2 units. High levels of abstractions of user requirements for C2 across the broad spectrum of military and civil domains have led to the development of the C2RM. It applies to all phases of system acquisition from the laboratory to the field and from conceptualization to realization. The C2RM is based upon generic and analog extensions to the International Standards Organization (ISO) open system interconnection (OSI) reference model (RM) which go far beyond the scope of the ISO OSI RM. The major theme, however, of layering services is preserved to facilitate understanding, reuse of design, implementation, and interoperability to the maximum degree possible with available C2 technology.
11. Applicability of Two International Risk Scores in Cardiac Surgery in a Reference Center in Brazil
Energy Technology Data Exchange (ETDEWEB)
Garofallo, Silvia Bueno; Machado, Daniel Pinheiro; Rodrigues, Clarissa Garcia; Bordim, Odemir Jr.; Kalil, Renato A. K.; Portal, Vera Lúcia, E-mail: [email protected] [Post-Graduation Program in Health Sciences: Cardiology, Instituto de Cardiologia/Fundação Universitária de Cardiologia, Porto Alegre, RS (Brazil)
2014-06-15
The applicability of international risk scores in heart surgery (HS) is not well defined in centers outside of North America and Europe. To evaluate the capacity of the Parsonnet Bernstein 2000 (BP) and EuroSCORE (ES) in predicting in-hospital mortality (IHM) in patients undergoing HS at a reference hospital in Brazil and to identify risk predictors (RP). Retrospective cohort study of 1,065 patients, with 60.3% patients underwent coronary artery bypass grafting (CABG), 32.7%, valve surgery and 7.0%, CABG combined with valve surgery. Additive and logistic scores models, the area under the ROC (Receiver Operating Characteristic) curve (AUC) and the standardized mortality ratio (SMR) were calculated. Multivariate logistic regression was performed to identify the RP. Overall mortality was 7.8%. The baseline characteristics of the patients were significantly different in relation to BP and ES. AUCs of the logistic and additive BP were 0.72 (95% CI, from 0.66 to 0.78 p = 0.74), and of ES they were 0.73 (95% CI; 0.67 to 0.79 p = 0.80). The calculation of the SMR in BP was 1.59 (95% CI; 1.27 to 1.99) and in ES, 1.43 (95% CI; 1.14 to 1.79). Seven RP of IHM were identified: age, serum creatinine > 2.26 mg/dL, active endocarditis, systolic pulmonary arterial pressure > 60 mmHg, one or more previous HS, CABG combined with valve surgery and diabetes mellitus. Local scores, based on the real situation of local populations, must be developed for better assessment of risk in cardiac surgery.
12. Applicability of Two International Risk Scores in Cardiac Surgery in a Reference Center in Brazil
International Nuclear Information System (INIS)
The applicability of international risk scores in heart surgery (HS) is not well defined in centers outside of North America and Europe. To evaluate the capacity of the Parsonnet Bernstein 2000 (BP) and EuroSCORE (ES) in predicting in-hospital mortality (IHM) in patients undergoing HS at a reference hospital in Brazil and to identify risk predictors (RP). Retrospective cohort study of 1,065 patients, with 60.3% patients underwent coronary artery bypass grafting (CABG), 32.7%, valve surgery and 7.0%, CABG combined with valve surgery. Additive and logistic scores models, the area under the ROC (Receiver Operating Characteristic) curve (AUC) and the standardized mortality ratio (SMR) were calculated. Multivariate logistic regression was performed to identify the RP. Overall mortality was 7.8%. The baseline characteristics of the patients were significantly different in relation to BP and ES. AUCs of the logistic and additive BP were 0.72 (95% CI, from 0.66 to 0.78 p = 0.74), and of ES they were 0.73 (95% CI; 0.67 to 0.79 p = 0.80). The calculation of the SMR in BP was 1.59 (95% CI; 1.27 to 1.99) and in ES, 1.43 (95% CI; 1.14 to 1.79). Seven RP of IHM were identified: age, serum creatinine > 2.26 mg/dL, active endocarditis, systolic pulmonary arterial pressure > 60 mmHg, one or more previous HS, CABG combined with valve surgery and diabetes mellitus. Local scores, based on the real situation of local populations, must be developed for better assessment of risk in cardiac surgery
13. Applicability of Two International Risk Scores in Cardiac Surgery in a Reference Center in Brazil
Directory of Open Access Journals (Sweden)
Silvia Bueno Garofallo
2014-06-01
Full Text Available Background:The applicability of international risk scores in heart surgery (HS is not well defined in centers outside of North America and Europe.Objective:To evaluate the capacity of the Parsonnet Bernstein 2000 (BP and EuroSCORE (ES in predicting in-hospital mortality (IHM in patients undergoing HS at a reference hospital in Brazil and to identify risk predictors (RP.Methods:Retrospective cohort study of 1,065 patients, with 60.3% patients underwent coronary artery bypass grafting (CABG, 32.7%, valve surgery and 7.0%, CABG combined with valve surgery. Additive and logistic scores models, the area under the ROC (Receiver Operating Characteristic curve (AUC and the standardized mortality ratio (SMR were calculated. Multivariate logistic regression was performed to identify the RP.Results:Overall mortality was 7.8%. The baseline characteristics of the patients were significantly different in relation to BP and ES. AUCs of the logistic and additive BP were 0.72 (95% CI, from 0.66 to 0.78 p = 0.74, and of ES they were 0.73 (95% CI; 0.67 to 0.79 p = 0.80. The calculation of the SMR in BP was 1.59 (95% CI; 1.27 to 1.99 and in ES, 1.43 (95% CI; 1.14 to 1.79. Seven RP of IHM were identified: age, serum creatinine > 2.26 mg/dL, active endocarditis, systolic pulmonary arterial pressure > 60 mmHg, one or more previous HS, CABG combined with valve surgery and diabetes mellitus.Conclusion:Local scores, based on the real situation of local populations, must be developed for better assessment of risk in cardiac surgery.
14. Bushland Reference ET Calculator with QA/QC capabilities and iPhone/iPad application
Science.gov (United States)
Accurate daily reference evapotranspiration (ET) values are needed to estimate crop water demand for irrigation management and hydrologic modeling purposes. The USDA-ARS Conservation and Production Research Laboratory at Bushland, Texas developed the Bushland Reference ET (BET) Calculator for calcul...
15. CMOS compatible fabrication process of MEMS resonator for timing reference and sensing application
Science.gov (United States)
Huynh, Duc H.; Nguyen, Phuong D.; Nguyen, Thanh C.; Skafidas, Stan; Evans, Robin
2015-12-01
Frequency reference and timing control devices are ubiquitous in electronic applications. There is at least one resonator required for each of this device. Currently electromechanical resonators such as crystal resonator, ceramic resonator are the ultimate choices. This tendency will probably keep going for many more years. However, current market demands for small size, low power consumption, cheap and reliable products, has divulged many limitations of this type of resonators. They cannot be integrated into standard CMOS (Complement metaloxide- semiconductor) IC (Integrated Circuit) due to material and fabrication process incompatibility. Currently, these devices are off-chip and they require external circuitries to interface with the ICs. This configuration significantly increases the overall size and cost of the entire electronic system. In addition, extra external connection, especially at high frequency, will potentially create negative impacts on the performance of the entire system due to signal degradation and parasitic effects. Furthermore, due to off-chip packaging nature, these devices are quite expensive, particularly for high frequency and high quality factor devices. To address these issues, researchers have been intensively studying on an alternative for type of resonator by utilizing the new emerging MEMS (Micro-electro-mechanical systems) technology. Recent progress in this field has demonstrated a MEMS resonator with resonant frequency of 2.97 GHz and quality factor (measured in vacuum) of 42900. Despite this great achievement, this prototype is still far from being fully integrated into CMOS system due to incompatibility in fabrication process and its high series motional impedance. On the other hand, fully integrated MEMS resonator had been demonstrated but at lower frequency and quality factor. We propose a design and fabrication process for a low cost, high frequency and a high quality MEMS resonator, which can be integrated into a standard
16. Assessing uncertainty in reference intervals via tolerance intervals: application to a mixed model describing HIV infection.
Science.gov (United States)
Katki, Hormuzd A; Engels, Eric A; Rosenberg, Philip S
2005-10-30
We define the reference interval as the range between the 2.5th and 97.5th percentiles of a random variable. We use reference intervals to compare characteristics of a marker of disease progression between affected populations. We use a tolerance interval to assess uncertainty in the reference interval. Unlike the tolerance interval, the estimated reference interval does not contains the true reference interval with specified confidence (or credibility). The tolerance interval is easy to understand, communicate and visualize. We derive estimates of the reference interval and its tolerance interval for markers defined by features of a linear mixed model. Examples considered are reference intervals for time trends in HIV viral load, and CD4 per cent, in HIV-infected haemophiliac children and homosexual men. We estimate the intervals with likelihood methods and also develop a Bayesian model in which the parameters are estimated via Markov-chain Monte Carlo. The Bayesian formulation naturally overcomes some important limitations of the likelihood model. PMID:16189804
17. Application of the WHO Growth Reference (2007) to Assess the Nutritional Status of Children in China
Institute of Scientific and Technical Information of China (English)
YAN-PING LI; XIAO-QI HU; JING-ZHAO; XIAO-GUANG YANG; GUAN-SHENG MA
2009-01-01
Objective To assess the nutrition status of children and adolescents in China using the WHO growth reference (2007) in comparison with that defined by the International Obesity Task Force (IOTF) and the Working Group on Obesity in China (WGOC). Methods Overweight and obesity were defined by age-, sex-, specific BMI reference developed by WHO (2007), IOTF (2000), and WGOC (2004), respectively. Stunting and thinness were defined as height and BMI less than two standard deviations (SD) of the WHO growth reference (2007), respectively. Data of children and adolescents aged 5 to 19 years (n=54 857, 28 273 boys, 26 584 girls) from the 2002 China National Nutrition and Health Survey (CNNHS) were used in the study. Results The prevalence of overweight, obesity, stunting and thinness among Chinese children and adolescents aged 5-19 years was 5.0%, 1.2%, 13.8%, and 7.4%, respectively when the WHO growth reference (2007) was used, whereas the estimated absolute total number affected by these 4 conditions were 14.6, 3.7, 40.6, and 21.8 million, respectively. The prevalence of overweight and obesity was 18.1% in large cities, while the stunting prevalence was 25.1% in rural 4. Obesity prevalence assessed by the WHO growth reference was higher than that as assessed by the IOTF reference, and obesity prevalence assessed by the WGOC reference was lower than that as assessed by the IOTF reference. Conclusion The nutritional status of children and adolescents is not equal in different areas of China. Stunting is still the main health problem of the poor, while overweight and obesity are the main health problems in large cities.
18. A reference system for animal biometrics: application to the northern leopard frog
Science.gov (United States)
Petrovska-Delacretaz, D.; Edwards, A.; Chiasson, J.; Chollet, G.; Pilliod, D.S.
2014-01-01
Reference systems and public databases are available for human biometrics, but to our knowledge nothing is available for animal biometrics. This is surprising because animals are not required to give their agreement to be in a database. This paper proposes a reference system and database for the northern leopard frog (Lithobates pipiens). Both are available for reproducible experiments. Results of both open set and closed set experiments are given.
19. Development and application of a multi-targeting reference plasmid as calibrator for analysis of five genetically modified soybean events.
Science.gov (United States)
Pi, Liqun; Li, Xiang; Cao, Yiwei; Wang, Canhua; Pan, Liangwen; Yang, Litao
2015-04-01
Reference materials are important in accurate analysis of genetically modified organism (GMO) contents in food/feeds, and development of novel reference plasmid is a new trend in the research of GMO reference materials. Herein, we constructed a novel multi-targeting plasmid, pSOY, which contained seven event-specific sequences of five GM soybeans (MON89788-5', A2704-12-3', A5547-127-3', DP356043-5', DP305423-3', A2704-12-5', and A5547-127-5') and sequence of soybean endogenous reference gene Lectin. We evaluated the specificity, limit of detection and quantification, and applicability of pSOY in both qualitative and quantitative PCR analyses. The limit of detection (LOD) was as low as 20 copies in qualitative PCR, and the limit of quantification (LOQ) in quantitative PCR was 10 copies. In quantitative real-time PCR analysis, the PCR efficiencies of all event-specific and Lectin assays were higher than 90%, and the squared regression coefficients (R(2)) were more than 0.999. The quantification bias varied from 0.21% to 19.29%, and the relative standard deviations were from 1.08% to 9.84% in simulated samples analysis. All the results demonstrated that the developed multi-targeting plasmid, pSOY, was a credible substitute of matrix reference materials, and could be used as a reliable reference calibrator in the identification and quantification of multiple GM soybean events.
20. Standard digital reference images for investment steel castings for aerospace applications
CERN Document Server
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 The digital reference images provided in the adjunct to this standard illustrate various types and degrees of discontinuities occurring in thin-wall steel investment castings. Use of this standard for the specification or grading of castings requires procurement of the adjunct digital reference images which illustrate the discontinuity types and severity levels. They are intended to provide the following: 1.1.1 A guide enabling recognition of thin-wall steel casting discontinuities and their differentiation both as to type and degree through digital radiographic examination. 1.1.2 Example digital radiographic illustrations of discontinuities and a nomenclature for reference in acceptance standards, specifications and drawings. 1.2 Two illustration categories are covered as follows: 1.2.1 Graded—Six common discontinuity types each illustrated in eight degrees of progressively increasing severity. 1.2.2 Ungraded—Twelve single illustrations of additional discontinuity types and of patterns and imper...
1. A Contrastive Study of Personal Reference between Chinese and English and Its Application
Institute of Scientific and Technical Information of China (English)
张茜
2008-01-01
This paper aims to make a contrast of personal reference between Chinese and English and apply the contrastive study to Second Language(L2)teaching and learning and English-Chinese translation.Both Chinese and English have certain expressions serving as personal reference so as to achieve unity and coherence of a text.In spite of the similarity,there are differences in grammatical devices of personal reference between the two languages.Generally speaking,compared with English,there are two main differences in Chinese:A:Native Chinese speakers are prone to omit the subject and possessive determiners.B:Native Chinese speakers usually employ the device of lexical repetition to achieve the unity and coherence of a text.Both the differences will cause difficulties in L2 learning and translation.
2. Composite iterative learning controller design for gradually varying references with applications in an AFM system
Institute of Scientific and Technical Information of China (English)
方勇纯; 张玉东; 董晓坤
2014-01-01
Learning control for gradually varying references in iteration domain was considered in this research, and a composite iterative learning control strategy was proposed to enable a plant to track unknown iteration-dependent trajectories. Specifically, by decoupling the current reference into the desired trajectory of the last trial and a disturbance signal with small magnitude, the learning and feedback parts were designed respectively to ensure fine tracking performance. After some theoretical analysis, the judging condition on whether the composite iterative learning control approach achieves better control results than pure feedback control was obtained for varying references. The convergence property of the closed-loop system was rigorously studied and the saturation problem was also addressed in the controller. The designed composite iterative learning control strategy is successfully employed in an atomic force microscope system, with both simulation and experimental results clearly demonstrating its superior performance.
3. Environmental Guide Value (VGE) and specific reference values (QS) for uranium. Synthesis and elements for application to French fresh waters
International Nuclear Information System (INIS)
This report proposes a synthesis of works performed to determine criteria of protection of continental aquatic ecosystems with respect to uranium. These works resulted in the determination of an environmental guide value (VGE) for the assessment of the ecological and chemical condition of waters. Other specific reference values have been determined to be used in risk assessment: average annual concentration, maximum admissible concentration. After a recall of the methodology adopted for the determination of VGE in the case of uranium, the report discusses the specific reference values in the case of uranium for different organisms, for predators, for the protection of human health against a risk of exposure by consumption of fished products or drinkable water. The determination of VGE and its application are reported, and its consistency with the criterion of radiation protection of the environment applied to water and sediments is discussed. The determination of specific reference values is then discussed
4. Reference values and clinical application of magnetic peripheral nerve stimulation in cats
NARCIS (Netherlands)
Van Soens, Iris; Struys, Michel M. R. F.; Bhatti, Sofie F. M.; Van Ham, Luc M. L.
2012-01-01
Magnetic stimulation of radial (RN) and sciatic (SN) nerves was performed bilaterally in 40 healthy cats. Reference values for onset latency and peak-to-peak amplitude of magnetic motor evoked potentials (MMEPs) were obtained and compared with values of electric motor evoked potentials (EMEPs) in 10
5. Breast Reference Set Application: GeorgeTuszynski-Temple (2012) — EDRN Public Portal
Science.gov (United States)
As recommended by the review committee, we will analyze 30 invasive cancer cases and 30 benign controls from the EDRN breast reference set. Cases and controls should be selected that are Caucasian and post-menopausal. It is suggested that the Caucasian cases and controls match as closely as possible in regards to age and body mass index.
6. VizBin - an application for reference-independent visualization and human-augmented binning of metagenomic data
OpenAIRE
Laczny, C.C.; Sternal, T.; Plugaru, V.; Gawron, P.; Atashpendar, A.; Margossian, H.H.; Coronado, S.; Van der Maaten, L.J.M.; Vlassis, N.; Wilmes, P.
2015-01-01
Background Metagenomics is limited in its ability to link distinct microbial populations to genetic potential due to a current lack of representative isolate genome sequences. Reference-independent approaches, which exploit for example inherent genomic signatures for the clustering of metagenomic fragments (binning), offer the prospect to resolve and reconstruct population-level genomic complements without the need for prior knowledge. Results We present VizBin, a Java™-based application whic...
7. CCF analysis of high redundancy systems safety/relief valve data analysis and reference BWR application. Main report
Energy Technology Data Exchange (ETDEWEB)
Mankamo, T. [Avaplan Oy (Finland); Bjoere, S.; Olsson, Lena [ABB Atom AB, Vaesteraas (Sweden)
1992-12-01
Dependent failure analysis and modeling were developed for high redundancy systems. The study included a comprehensive data analysis of safety and relief valves at the Finnish and Swedish BWR plants, resulting in improved understanding of Common Cause Failure mechanisms in these components. The reference application on the Forsmark 1/2 reactor relief system, constituting of twelve safety/relief lines and two regulating relief lines, covered different safety criteria cases of reactor depressurization and overpressure protection function, and failure to re close sequences. For the quantification of dependencies, the Alpha Factor Model, the Binomial Probability Model and the Common Load Model were compared for applicability in high redundancy systems.
8. 40 CFR Table A-1 to Subpart A of... - Summary of Applicable Requirements for Reference and Equivalent Methods for Air Monitoring of...
Science.gov (United States)
2010-07-01
... Reference and Equivalent Methods for Air Monitoring of Criteria Pollutants A Table A-1 to Subpart A of Part...) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions Pt. 53, Subpt. A, Table A-1 Table A-1 to Subpart A of Part 53—Summary of Applicable Requirements for Reference and...
9. Analytically continued Fock space multi-reference coupled-cluster theory: Application to the shape resonance
Science.gov (United States)
Pal, Sourav; Sajeev, Y.; Vaval, Nayana
2006-10-01
The Fock space multi-reference coupled-cluster (FSMRCC) method is used for the study of the shape resonance energy and width in an electron-atom/molecule collision. The procedure is based upon combining a complex absorbing potential (CAP) with FSMRCC theory. Accurate resonance parameters are obtained by solving a small non-Hermitian eigen-value problem. We study the shape resonances in e --C 2H 4 and e --Mg.
10. Salivary cortisol monitoring: determination of reference values in healthy children and application in asthmatic children.
Science.gov (United States)
Nagakura, Toshikazu; Tanaka, Toshiaki; Arita, Masahiko; Nishikawa, Kiyoshi; Shigeta, Makoto; Wada, Noriyuki; Matsumoto, Tsutomu; Hiraba, Kazumi; Fukuda, Norimasa
2012-01-01
Venipuncture testing of adrenocortical function in asthmatic infants and young children receiving inhaled corticosteroids can raise cortisol levels and mask physiological responses. This study aimed to establish reference ranges for salivary cortisol levels and evaluate the safety and effects of jet-nebulized budesonide inhalation suspension (BIS) on salivary cortisol levels and patient outcomes in infants and young children with mild or persistent asthma. Reference salivary cortisol levels were determined in healthy children aged 6 months to 4 years old. A 12-week multicenter, randomized, parallel-group, open-label study was performed involving 53 age-matched asthmatic children who received either 0.5 mg/day of BIS or 40-60 mg/day of cromolyn sodium inhalation suspension (CIS) via compressor nebulizer. The effective measuring range of salivary cortisol concentration in asthmatic children was 0.12-3.00 micrograms/dL. The upper and lower limits of the reference range were 0.827 and 0.076 micrograms/dL, respectively. No significant difference was seen from baseline through week 12 in the CIS and BIS groups. BIS was safe in these patients, with no inhibitory effects on adrenocortical function. Salivary cortisol measurement offers a useful and accurate tool for testing adrenocortical function in infants and young children. Longer-term studies that incorporate testing of the hypothalamic-pituitary-adrenal axis are warranted to confirm our findings.
11. Proposal for Reference Soil Concentrations of Radiocesium Applicable to Accidentally Contaminated Rice and Soybean Fields
Energy Technology Data Exchange (ETDEWEB)
Choi, Yong-Ho; Lim, Kwang-Muk; Jun, In; Kim, Byung-Ho; Keum, Dong-Kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
Radionuclides in arable soil can be transferred to food plants via root uptake. If radionuclide concentrations in food plants to be grown in contaminated soil are estimated to be higher than the authorized food standards, their culture needs to be cancelled or ameliorating practices need to be taken. Therefore, it is necessary to establish soil concentration limits or reference soil concentrations of radiocesium standing with the food standards in preparation for potential severe NPP accidents in this and adjacent countries. In the present study, reference soil concentrations of radiocesium for rice and soybean, two of the most important food plants in Korea, were provisionally established using all relevant domestic data of soil-to-plant transfer factor (TF). The reference soil concentrations of radiocesium for rice and soybean were calculated using available domestic TF data, and were proposed for provisional use at the time of a severe NPP accident. The present RSCs are based on limited numbers of {sup 137}Cs TF values. More amounts of relevant TF data should be produced to have more reliable RSCs. For other staple-food plants such as Chinese cabbage and radish, RSCs of radiocesium should also be established. However, only a couple of relevant domestic TF values are available for these vegetables.
12. Reference Materials
Science.gov (United States)
Merkus, Henk G.
Reference materials for measurement of particle size and porosity may be used for calibration or qualification of instruments or for validation of operating procedures or operators. They cover a broad range of materials. On the one hand there are the certified reference materials, for which governmental institutes have certified one or more typical size or porosity values. Then, there is a large group of reference materials from commercial companies. And on the other hand there are typical products in a given line of industry, where size or porosity values come from the analysis laboratory itself or from some round-robin test in a group of industrial laboratories. Their regular application is essential for adequate quality control of particle size and porosity measurement, as required in e.g., ISO 17025 on quality management. In relation to this, some quality requirements for certification are presented.
13. Reference Standardization for Mass Spectrometry and High-resolution Metabolomics Applications to Exposome Research.
Science.gov (United States)
Go, Young-Mi; Walker, Douglas I; Liang, Yongliang; Uppal, Karan; Soltow, Quinlyn A; Tran, ViLinh; Strobel, Frederick; Quyyumi, Arshed A; Ziegler, Thomas R; Pennell, Kurt D; Miller, Gary W; Jones, Dean P
2015-12-01
The exposome is the cumulative measure of environmental influences and associated biological responses throughout the lifespan, including exposures from the environment, diet, behavior, and endogenous processes. A major challenge for exposome research lies in the development of robust and affordable analytic procedures to measure the broad range of exposures and associated biologic impacts occurring over a lifetime. Biomonitoring is an established approach to evaluate internal body burden of environmental exposures, but use of biomonitoring for exposome research is often limited by the high costs associated with quantification of individual chemicals. High-resolution metabolomics (HRM) uses ultra-high resolution mass spectrometry with minimal sample preparation to support high-throughput relative quantification of thousands of environmental, dietary, and microbial chemicals. HRM also measures metabolites in most endogenous metabolic pathways, thereby providing simultaneous measurement of biologic responses to environmental exposures. The present research examined quantification strategies to enhance the usefulness of HRM data for cumulative exposome research. The results provide a simple reference standardization protocol in which individual chemical concentrations in unknown samples are estimated by comparison to a concurrently analyzed, pooled reference sample with known chemical concentrations. The approach was tested using blinded analyses of amino acids in human samples and was found to be comparable to independent laboratory results based on surrogate standardization or internal standardization. Quantification was reproducible over a 13-month period and extrapolated to thousands of chemicals. The results show that reference standardization protocol provides an effective strategy that will enhance data collection for cumulative exposome research. In principle, the approach can be extended to other types of mass spectrometry and other analytical methods. PMID
14. [Good medical practice for drugs. Definition, guidelines, references, field of action and applications].
Science.gov (United States)
2008-01-01
Proper use of drugs can be defined as the use of the right product, in a correct dosage, during an adequate length of time, for a given patient and provided he has no serious side effects.It is virtually impossible, with such a number of drugs, such a number of clinical situations to prescribe adequately without using references or guidelines. References may lead to a unique choice, when the diagnosis is certain and the drug to be given is unique. With a good initial and continuous medical education, doctors can take easily this type of decision. The Summary of Products Characteristics (SPC) helps them; by sticking to this fundamental reference, prescription might be more precise and safe. In a lot of clinical situations the choice between a large numbers of therapeutic strategies necessitates use of a guideline based on scientific knowledge. Finally, a given therapeutic strategy can be as effective as and considerably less expensive than another. In such cases, payers can drive doctors to the prescription of the less expensive strategy.Some difficulties are common to all references and guidelines: 1. A lot of clinical situations are not covered by guidelines. 2. Guidelines should be updated each time there is a modification of knowledge: it is extremely difficult to do. 3. A great number of guidelines exist, issued by scientific community, health authorities or the payers. Sometime you can find a proposition in a guideline and the reverse in another guideline. It could be confusing. 4. Guidelines should be evaluated rigorously to know if they fulfil their goals. 5. Some of those guidelines simply cannot help doctors. They are too complex or do not take into account practical situations.We have made an inventory of those various guidelines and their weaknesses and we propose some solutions to increase their utility. We propose an analysis of the situation and some solutions to improve the quality and the relevance of the guidelines: to create groups of coordination
15. Characterization of bandgap reference circuits designed for high energy physics applications
Science.gov (United States)
Traversi, G.; De Canio, F.; Gaioni, L.; Manghisoni, M.; Mattiazzo, S.; Ratti, L.; Re, V.; Riceputi, E.
2016-07-01
The objective of this work is to design a high performance bandgap voltage reference circuit in a standard commercial 65 nm CMOS technology capable of operating in harsh radiation environments. A prototype circuit based on three different devices (diode, bipolar transistor and MOSFET) was fabricated and tested. Measurement results show a temperature variation as low as ±3.4 mV over a temperature range of 170 ° C (-30 °C to 140 °C) and a line regulation at room temperature of 5.2%/V. Measured VREF is 690 mV±15 mV (3σ) for 26 samples on the same wafer. Circuits correctly operate with supply voltages in the range from 1.32 V down to 0.78 V. A reference voltage shift of only 7.6 mV (around 1.1%) was measured after irradiation with 10 keV X-rays up to an integrated dose of 225 Mrad (SiO2).
16. Applications of dendrochronology and sediment geochronology to establish reference episodes for evaluations of environmental radioactivity
Energy Technology Data Exchange (ETDEWEB)
Waugh, W.J.; Carroll, J.; Abraham, J.D.; Landeen, D.S. [Roy F. Weston, Inc., U.S. Department of Energy, Grand Junction Office, 2597 B 3/4 Road, Grand Junction, CO 81503 (United States)
1998-12-01
Dendrochronology and sediment geochronology have been used to demonstrate retrospective monitoring of environmental radioactivity at United States Department of Energy (DOE) sites. {sup 14}C in annual growth rings of sagebrush preserved the temporal and spatial patterns of {sup 14}C resulting from dispersion downwind of a nuclear fuel processing facility at the Hanford Site in Washington State. As far as 10 km downwind of the facility, {sup 14}C concentrations were significantly higher in growth rings formed during a fuel processing episode than in rings produced during preoperational or postoperational episodes. An episode of uranium mill tailings deposition in pond sediments at the Grand Junction Office in Colorado was reconstructed using {sup 210}Pb geochronology constrained by a marker of peak {sup 137}Cs fallout. Uranium concentrations in ponds sediments deposited after the processing episode provide a reasonable cleanup standard. These reference episodes of environmental radioactivity reconstructed from measurements taken within contaminated environments can improve or replace reference area data as baseline information for dose reconstructions, risk assessments, and the establishment of cleanup standards. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
17. Application of complete space multireference many-body perturbation theory to N2: Dependence on reference space and H0
Science.gov (United States)
Finley, James P.; Freed, Karl F.
1995-01-01
and, thus, appear to be beyond the range of applicability of the forced degeneracy Heff method. Novel techniques are employed for properly treating some of these cases, including the use of orbitals which optimize the quasidegeneracy of the reference space and minimize energy denominator problems. By considering reference spaces of varying sizes, we describe the tradeoff between employing large reference spaces, which provide excellent first order descriptions, and the difficulties imposed by the fact that larger reference spaces severely violate the quasidegeneracy constraints of the Heff method. The same tradeoff exists when the optimal first order CASSCF orbitals are compared with orbitals generated by a VN-1 potential. The VN-1 potential orbitals, which produce relatively quasidegenerate reference spaces, are equivalent to the sequential SCF orbitals used in previous Heff computations, but are more simply obtained by a unitary transformation. The forced degenerate valence orbital energy ɛv¯ is computed from an averaging scheme for the valence orbital energies. The ground state N2 computations contrast two averaging schemes—populational and democratic. Democratic averaging weighs all valence orbitals equally, while populational averaging weighs valence orbitals in proportion to their ground state populations. Populational averaging is determined to be useful only in situations where core-core and core-valence correlation are unimportant. A Fock-type operator used by Roos and co-workers is employed to uniquely define CASSCF orbitals within their invariant subspaces. This operator is found to be more compatible with populational than democratic averaging, especially when the reference space contains high lying orbitals.
18. Toxicity reference values for chlorophacinone and their application for assessing anticoagulant rodenticide risk to raptors
Science.gov (United States)
Rattner, Barnett A.; Horak, Katherine E.; Lazarus, Rebecca; Schultz, Sandra; Knowles, Susan N.; Abbo, B.G.; Volker, Steven F.
2015-01-01
Despite widespread use and benefit, there are growing concerns regarding hazards of second-generation anticoagulant rodenticides to non-target wildlife which may result in expanded use of first-generation compounds, including chlorophacinone (CPN). The toxicity of CPN over a 7-day exposure period was investigated in American kestrels (Falco sparverius) fed either rat tissue mechanically-amended with CPN, tissue from rats fed Rozol® bait (biologically-incorporated CPN), or control diets (tissue from untreated rats or commercial bird of prey diet) ad libitum. Nominal CPN concentrations in the formulated diets were 0.15, 0.75 and 1.5 µg/g food wet weight, and measured concentrations averaged 94 % of target values. Kestrel food consumption was similar among groups and body weight varied by less than 6 %. Overt signs of intoxication, liver CPN residues, and changes in prothrombin time (PT), Russell’s viper venom time (RVVT) and hematocrit, were generally dose-dependent. Histological evidence of hemorrhage was present at all CPN dose levels, and most frequently observed in pectoral muscle and heart. There were no apparent differences in toxicity between mechanically-amended and biologically-incorporated CPN diet formulations. Dietary-based toxicity reference values at which clotting times were prolonged in 50 % of the kestrels were 79.2 µg CPN consumed/kg body weight-day for PT and 39.1 µg/kg body weight-day for RVVT. Based upon daily food consumption of kestrels and previously reported CPN concentrations found in small mammals following field baiting trials, these toxicity reference values might be exceeded by free-ranging raptors consuming such exposed prey. Tissue-based toxicity reference values for coagulopathy in 50 % of exposed birds were 0.107 µg CPN/g liver wet weight for PT and 0.076 µg/g liver for RVVT, and are below the range of residue levels reported in raptor mortality incidents attributed to CPN exposure. Sublethal responses associated
19. The characterization of uranium and plutonium reference materials by the (100-X) route; basic principle and applications
International Nuclear Information System (INIS)
Uranium and plutonium reference materials are produced to calibrate reagents used in titrimetric methods or to check instrumental techniques like electrochemical methods. For this purpose, the most commonly prepared reference materials are high-purity uranium metal pieces and uranium dioxide pellets, respectively plutonium metal pieces and plutonium dioxide powders. Two different routes can be followed to certify the element content: a direct or an indirect route. With the direct route the main element is assayed using appropriate redox titration methods. The overall uncertainty on the main element is limited by both the precision of the titration methods used and the uncertainty of the reference materials. The indirect route is applicable only for high purity base material. It requires the measurement of the sum of impurities (X) in the material and the substraction of this sum from hundred per cent (100-X). Since the (100-X) route requires the measurement of a large number of elements, a variety of analytical methods has to be used, and a contribution by different specialized laboratories is requested. The uncertainty of the total of impurities is calculated from the respective uncertainties of the determination of each specific element. If the total of impurities is sufficiently small and even if it is determined with a rather large uncertainty, the (100-X) route leads nevertheless to a low uncertainty on the main element content. The application of the (100-X) route is illustrated for the case of uranium dioxide pellets certified as an European Community Nuclear Reference Material. The (100-X) route is recommended as a very accurate one for high purity material characterization and certification purposes
20. A RISK-AWARE BUSINESS PROCESS MANAGEMENT REFERENCE MODEL AND ITS APPLICATION IN AN EGYPTIAN UNIVERSITY
Directory of Open Access Journals (Sweden)
Mohamed H. Haggag
2015-05-01
Full Text Available Due to the environmental pressures on organizations, the demand on Business Process Management (BPM automation suites has increased. This led to the arising need for managing process-related risks. Therefore the management of risks in business processes has been the subject of many researches during the past few years. However, most of these researches focused mainly on one or two stages of the BPM life cycle and introduced a support for it. This paper aims to provide a reference model for Risk-Aware BPM which addresses the whole stages of the BPM life cycle, as well as some current techniques are listed for the implementation of this model. Additionally, a case study for a business process in an Egyptian university is introduced, in order to apply this model in real-world environment. The results will be analyzed and concluded.
1. Application of the limit analysis and of the reference stress method to stiffened welded structures
International Nuclear Information System (INIS)
A global method for the analysis of the failure of non elementary structures is examined. The type of failure are: instantaneous or delayed deformation, plastic instability of the material and delayed rupture. The steel structure studied is the lower support of a reactor core composed of two concentric rings, a bottom plate, a top plate and 15 stiffening radial plates. For calculation an annular sector representing 1/30 of this structure is modelized. The behaviour of the structure is examined for the following conditions: an instantaneous temperature rise from 4000C to 6500C, then the temperature is maintained for 400 hours and decreased from 6500 to 4000 at a rate of 0.2 degree/hour. The results obtained by elastic analysis, limit analysis and viscoplastic analysis are compared. The reference stress methods is reliable for the determination of structure dimensioning and is less expensive than the viscoplastic analysis. 35 drawings and 19 tables are given
2. Determination of reference data of REB diodes by using a numerical method for different applications
International Nuclear Information System (INIS)
In this study, some reference data of a REB diode are presented functionally. These given characteristics are consisted of the computational results. Generally the numerical scheme depends upon the essential parameters of the charged transmission line and Child-Langmuir's diode model. By this system, further the correlation functions, some other definite functions such as the voltage of transmission line Vsub(L)(t), the diode voltage Vsub(d)(t), the diode current Isub(d)(t), the diode impedance Rsub(d)(t), the diode input power Wsub(d)(t), the dissipated energy Usub(d)(t), the efficiency phi, the beam density nsub(b)(t), the relativistic beam energy Usub(b)(t), and the intrinsic impedance Zsub(int)(t) have also been investigated. (author)
3. Liver Rapid Reference Set Application: Hemken - Abbott (2015) — EDRN Public Portal
Science.gov (United States)
The aim for this testing is to find a small panel of biomarkers (n=2-5) that can be tested on the Abbott ARCHITECT automated immunoassay platform for the early detection of hepatocellular carcinoma (HCC). This panel of biomarkers should perform significantly better than alpha-fetoprotein (AFP) alone based on multivariate statistical analysis. This testing of the EDRN reference set will help expedite the selection of a small panel of ARCHITECT biomarkers for the early detection of HCC. The panel of ARCHITECT biomarkers Abbott plans to test include: AFP, protein induced by vitamin K absence or antagonist-II (PIVKA-II), golgi protein 73 (GP73), hepatocellular growth factor (HGF), dipeptidyl peptidase 4 (DPP4) and DPP4/seprase (surface expressed protease) heterodimer hybrid. PIVKA-II is abnormal des-carboxylated prothrombin (DCP) present in vitamin K deficiency.
4. Liver Full Reference Set Application :Timothy Block - Drexel Univ (2010) — EDRN Public Portal
Science.gov (United States)
The goal of this application is to determine if the levels of serum GP73 and fucosylated kininogen/acute phase proteins can be used to detect hepatocellular carcinoma (HCC) in the background of liver cirrhosis. The use of the validation set would allow us to directly compare GP73 and fucosylated markers against AFP, AFP-L3 and DCP as well as test them in combination with these markers
5. Bumpy Application of Utility Code for Genomic Inventions: With Special Reference to Express Sequence Tags
Directory of Open Access Journals (Sweden)
M R Sreenivasa Murthy
2013-12-01
Full Text Available Genomics, a new bough of biotechnology responsible for gene mapping has acquired a rapid significance in the field of patents. Brisk growth of patent filing in genomic subject matter is raising serious concerns about their utility from the perspective of societal benefit. Though the genomic related patent application qualifies the criterion of invention and non-obviousness in major instances, the inventors are unable to satisfy the utility criterion. Some instances such as patent application for ESTs have no utility at all. The patent regulators constructed various tests to deal with the situation such as specificity, substantiality (real world credibility tests etc. Hoverer, it is noteworthy that an attempt to uniform the standard of utility test for genomic inventions especially in the field of ESTs, cloning and creation of chimeras, has been made by America and Europe through specific regulations. Thus, the objective of this paper is firstly, to explain the importance of biotechnology and genomic inventions for mankind and significance of ESTs for future research. Secondly, to analyze the application of Utility code prior to the emergence of Utility code in America and Europe. Thirdly to scrutinize the Utility code in both countries and their implication on aftermath cases, and. fourthly and finally, to critically evaluate the both countries utility pathways in the light of societal benefit.
6. Reference database of hypervariable genetic markers of Argentina: application for molecular anthropology and forensic casework.
Science.gov (United States)
Sala, A; Penacino, G; Carnese, R; Corach, D
1999-06-01
The population of Argentina is mostly composed of people of European ancestry. Aboriginal communities are at present very reduced in number and restricted to small geographically isolated patches. Three aboriginal communities, the Mapuche, Tehuelche and Wichi, were selected for short tandem repeat (STR) investigation. The metropolitan population of the city of Buenos Aires was analyzed, with both micro- and minisatellites. The minisatellite loci D1S7, D2S44, D4S139, D5S110, D8S358, D10S28, and D17S26 were typed on HaeIII-digested DNA obtained from unrelated individuals. D1S80 was typed by polymerase chain reaction (PCR). The autosomal STRs THO1, FABP, D6S366, CSF1PO, TPOX, F13A1, FES/FPS, vWA, MBPA/B, D16S539, D7S820, D13S317, and RENA4 and the sex chromosome STRs HPRTB, DYS385, DYS3891, DYS38911, DYS19, DYS390, DYS391, DYS392, DYS393 and YCAII were also investigated. As a by-product of our investigations, a reference database was created that is routinely used in forensic casework and paternity testing. STR allele frequency distributions are characterized by significant differences within and also between different populations. In contrast, the minisatellite bin distribution of the metropolitan population is not significantly different from other Caucasian populations.
7. MSA Bladder Reference Set Application: Charles Rosser-Hawaii (2014) — EDRN Public Portal
Science.gov (United States)
The goal of this proposal is straightforward. We wish to assay in a discovery set, reference set from EDRN, both PAI-1 and ANG promoters and genes for mutations. Then the results will be confirmed in a test cohort comprised of DNA extracted from fresh frozen tissue (n = 80 BCa patients). DNA from matching buffy coat from these 80 patients will serve as control. Extracted RNA can be assessed for difference in transcription. Furthermore, matched voided urine samples from these 80 patients are available to assess protein levels of PAI-1 and ANG by ELISA in addition to assessing activity of PAI-1 and ANG. At the end, we will link any genetic alteration with changes in RNA, protein and protein activity level as well as clinical features (e.g., age, race, tobacco history, grade, stage and outcomes). This comprehensive study will allow us with certainty to state if there are mutations in the promoters and genes of PAI-1 and ANG that are functional and thus may lead to the growth advantage that we previously demonstrated in our experiments.
8. Liver Full Reference Set Application: Hiro Yamada - Wako (2011) — EDRN Public Portal
Science.gov (United States)
Wako has received new 510(k) clearance for Lens culinaris agglutinin-reactive fraction of alpha-fetoprotein (AFP-L3) and Des-gamma-carboxy prothrombin (DCP) tests on an innovative μTASWako i30 analyzer from FDA. The AFP-L3 and DCP assayed on an older platform LiBASys have been cleared with indication of use for risk assessment of hepatocellular carcinoma (HCC) in patient at risk for the liver malignancy. Wako believes that early detection of HCC is critical for improving HCC patient outcome. Therefore, Wako is currently seeking collaborative opportunities to retrospectively measure clinical samples using the AFP-L3 and DCP for further determining of effectiveness of the HCC biomarkers in early detection which are collected prospectively during HCC surveillance. The Reference Sample Set in the EDRN biorepository are well characterized and studied. Access to these samples would allow Wako to quickly determine the clinical effectiveness of AFP-L3 and DCP in detecting early HCC
9. TELEPERM XS: I and C systems for safety application in NPP's - features, developments, references and feedback
International Nuclear Information System (INIS)
In the field of digital I and C AREVA NP is focused on concepts that on the one hand make allowance for development cycles getting shorter in the technology competition and on the other hand assure a long-term system support with the ability to deliver spare parts in the long run. The system platform TELEPERM XS, which was developed especially for safety I and C application of nuclear power plants, meets requirements effectively and thus provides a great benefit for the customer. The typical applications of TELEPERM XS are in the field of Reactor Protection and ESFAS functions (Engineered Safety Features Actuation System). High demands are defined for system reliability and availability, as well as for failure prevention and tolerance. The requirements of corresponding international codes and standards of nuclear installations are also implemented in the development and engineering processes of TELEPERM XS. The system platform is integrated into a sustainable program for service life management of electronic systems and equipment. Its ongoing future-oriented development ensures the long-term availability of hardware and software components for installed TELEPERM XS applications already installed in the plants. The further development of platform and components continues to be based on the robust, service-proven TELEPERM XS architecture, with the aim of minimizing the risks associated with equipment qualification and project licensing. A further development feature is the completion and extension of TELEPERM XS applications. This continuous innovation process, combined with maximized compatibility, makes TELEPERM XS unique, and provides the basis for a sustainable system with a service life guaranteed for the long term. Within the past 10 years, the majority of all comprehensive modernization projects worldwide were implemented or are contracted using TELEPERM XS. TELEPERM XS has been implemented in two new nuclear power plants and there are orders for four more
10. Teledetection passive et processus decisionnel a reference spatiale: Application a l'aquaculture en milieu marin
Science.gov (United States)
Habbane, Mohamed
L'objectif de cette etude est d'elaborer un processus decisionnel a reference spatiale (PDRS) pour la mariculture. Le PDRS est applique aux eaux cotieres de la baie des Chaleurs, dans le golfe du Saint-Laurent (Canada). Une carte preliminaire regionale d'indices du potentiel maricole, d'une limite de resolution spatiale de 1 kmsp2, est produite avec des parametres du niveau 1. Ces parametres englobent la temperature de l'eau de surface, extraite des images AVHRR, la salinite, les courants ainsi que les pigments chlorophylliens, quantifies a l'aide de mesures in situ. Les images AVHRR, prises en 1994, ont ete utiliees comme reference primaire pour selectionner des aires pouvant supporter une activite maricole sur la cote nord de la baie des Chaleurs. La temperature de surface extraite de ces images permet une analyse mesoechelle a la fois qualitative et quantitative des processus cotiers observes pendant la periode d'acquisition des donnees. Les autres donnees, soit la salinite, les courants et les concentrations en pigments chlorophylliens, sont analysees de facon a identifier la variabilite spatio-temporelle des caracteristiques des eaux de surface. L'ensemble des informations permet de produire une carte preliminaire regionale d'indices du potentiel maricole de la partie centrale de la baie des Chaleurs. Selon cet indice (defini entre 0 et 1), le secteur de potentiel aquicole de 0,5 a 0,75 s'etend sur une superficie d'environ 300 kmsp2. La localisation de cette aire potentielle est en accord avec les fortes concentrations en pigments chlrophylliens, presentant des conditions environnementales ideales a une haute productivite biologique. Par la suite la carte preliminaire est modifiee en tenant compte des parametres du niveau 2. Ces parametres sont la geomorphologie littorale, la bathymetrie, les sediments en suspension, les vents, les vagues, le debit d'eau douce, la glace marine, le carbone organique dissous, les aires de peche et les sources de pollution. Ces
11. A first reference dataset for the evaluation of geometric correction methods under the scope of remote sensing applications
Science.gov (United States)
Gonçalves, Hernâni; Teodoro, Ana C.; Gonçalves, José A.; Corte Real, Luís
2011-11-01
The geometric correction of images under the scope of remote sensing applications is still mostly a manual work. This is a time and effort consuming task associated with an intra- and inter-operator subjectivity. One of the main reasons may be the lack of a proper evaluation of the different available automatic image registration (AIR) methods, since some of them are only adequate for certain types of applications/data. In order to fulfill a gap in this context, a first reference dataset of pairs of images comprising some types of geometric distortions was created, different spatial and spectral resolutions, and divided according to the Level 1 of CORINE Land Cover nomenclature (European Environment Agency). This dataset will allow for gaining perception of the abilities and limitations of some AIR methods. Some AIR methods were evaluated in this work, including the traditional correlation-based method and the SIFT approach, for which a set of measures for an objective evaluation of the geometric correction process quality was computed for every combination of pair of images/AIR method. The reference dataset is available from an internet address, being expected that it becomes a channel of interaction among the remote sensing community interested in this field.
12. Superconductors, analysis and applications, with special reference to the utilisation of bulk (Re)BCO materials
Science.gov (United States)
Coombs, T. A.
2010-11-01
The Electrical Power and Energy Conversion (EPEC) superconductivity group at Cambridge University has been working on the application of superconductivity to large scale devices. This work is taking place over a range of areas which cover FCLs, motors and generators, SMES, accelerator magnets and MRI. The research is underpinned by advanced modelling techniques using both pure Critical State models and E- J models to analyse the behaviour of the superconductors. As part of the device design we are concentrating on the analysis of AC losses in complicated geometries such as are found in motor windings and the magnetisation of bulk superconductors to enable their full potential to be realised. We are interested in the full range of high-temperature superconductors and have measured and predicted the performance of YBCO, MgB 2 and BSCCO at a range of temperatures and in wire, tape and bulk forms. This paper concentrates on recent work which includes: modelling of coils using formulations based on H and A. A critical state model for the analysis of coils in SMES; crossed field effects in bulk superconductors; a magnetic model together with experimental results which explain and describe the method of flux pumping whereby a bulk superconductor can be magnetised to a high flux density using a repeatedly applied field of low flux density and finally a new configuration for MRI magnets
13. Superconductors, analysis and applications, with special reference to the utilisation of bulk (Re)BCO materials
Energy Technology Data Exchange (ETDEWEB)
Coombs, T.A., E-mail: [email protected] [University of Cambridge, Department of Engineering, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)
2010-11-01
The Electrical Power and Energy Conversion (EPEC) superconductivity group at Cambridge University has been working on the application of superconductivity to large scale devices. This work is taking place over a range of areas which cover FCLs, motors and generators, SMES, accelerator magnets and MRI. The research is underpinned by advanced modelling techniques using both pure Critical State models and E-J models to analyse the behaviour of the superconductors. As part of the device design we are concentrating on the analysis of AC losses in complicated geometries such as are found in motor windings and the magnetisation of bulk superconductors to enable their full potential to be realised. We are interested in the full range of high-temperature superconductors and have measured and predicted the performance of YBCO, MgB{sub 2} and BSCCO at a range of temperatures and in wire, tape and bulk forms. This paper concentrates on recent work which includes: modelling of coils using formulations based on H and A. A critical state model for the analysis of coils in SMES; crossed field effects in bulk superconductors; a magnetic model together with experimental results which explain and describe the method of flux pumping whereby a bulk superconductor can be magnetised to a high flux density using a repeatedly applied field of low flux density and finally a new configuration for MRI magnets
14. Application of fundamental aquatic chemistry to the safety case and the role of thermodynamic reference data bases
Energy Technology Data Exchange (ETDEWEB)
Altmaier, Marcus; Gaona, Xavier; Fellhauer, David; Geckeis, Horst [Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen (Germany). Inst. for Nuclear Waste Disposal
2015-07-01
All national and international programs developing a Nuclear Waste Disposal Safety Case have recognized the essential requirement of assessing aqueous (radionuclide) chemistry and establishing reliable thermodynamic databases. Long-term disposal of nuclear waste in deep underground repositories is the safest option to separate highly hazardous radionuclides from the environment. In order to predict the long-term performance of a repository for different evolution scenarios, the potentially relevant specific (geo)chemical systems are analyzed. This requires a detailed understanding of solubility, speciation and thermodynamics for all relevant components including radionuclides, and the availability of reliable thermodynamic data and databases as fundamental input for integral geochemical model calculations and hence PA. Radionuclide solubility and speciation strongly depend on chemical conditions (pH, E{sub h}, matrix electrolyte system and ionic strength) with additional factors like the presence of complexing ligands or temperature further impacting solution chemistry. As the fundamental chemical key processes are known and convincingly described by general laws of nature (→ solution thermodynamics), the long-term behavior of a repository system can be analyzed over geological timescales using geochemical tools. A key application of fundamental aquatic chemistry in the Safety Case is the determination of solubility limits (radionuclide source terms). Based upon fundamental chemical information (on solid phases, complexation reactions, activity coefficients, etc.), the maximum amount of radionuclides potentially dissolved in a given volume of solution and transported away from the repository, are quantified. A detailed understanding of radionuclide chemistry is also crucial for neighboring fields. For example, advanced mechanistic understanding and modeling of sorption processes at the solid liquid interphase, waste dissolution processes, secondary phase and
15. Application of fundamental aquatic chemistry to the safety case and the role of thermodynamic reference data bases
International Nuclear Information System (INIS)
All national and international programs developing a Nuclear Waste Disposal Safety Case have recognized the essential requirement of assessing aqueous (radionuclide) chemistry and establishing reliable thermodynamic databases. Long-term disposal of nuclear waste in deep underground repositories is the safest option to separate highly hazardous radionuclides from the environment. In order to predict the long-term performance of a repository for different evolution scenarios, the potentially relevant specific (geo)chemical systems are analyzed. This requires a detailed understanding of solubility, speciation and thermodynamics for all relevant components including radionuclides, and the availability of reliable thermodynamic data and databases as fundamental input for integral geochemical model calculations and hence PA. Radionuclide solubility and speciation strongly depend on chemical conditions (pH, Eh, matrix electrolyte system and ionic strength) with additional factors like the presence of complexing ligands or temperature further impacting solution chemistry. As the fundamental chemical key processes are known and convincingly described by general laws of nature (→ solution thermodynamics), the long-term behavior of a repository system can be analyzed over geological timescales using geochemical tools. A key application of fundamental aquatic chemistry in the Safety Case is the determination of solubility limits (radionuclide source terms). Based upon fundamental chemical information (on solid phases, complexation reactions, activity coefficients, etc.), the maximum amount of radionuclides potentially dissolved in a given volume of solution and transported away from the repository, are quantified. A detailed understanding of radionuclide chemistry is also crucial for neighboring fields. For example, advanced mechanistic understanding and modeling of sorption processes at the solid liquid interphase, waste dissolution processes, secondary phase and solid
16. Applications of Mars Global Reference Atmospheric Model (Mars-GRAM 2005) Supporting Mission Site Selection for Mars Science Laboratory
Science.gov (United States)
Justh, Hilary L.; Justus, Carl G.
2008-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. One new feature of Mars-GRAM 2005 is the 'auxiliary profile' option. In this option, an input file of temperature and density versus altitude is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5)model) and a global Thermal Emission Spectrometer(TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This data base contains averages and standard deviations of temperature, density, and thermal wind components,averaged over 5-by-5 degree latitude-longitude bins and 15 degree L(s) bins, for each of three Mars years of TES nadir data. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate Mars Science Laboratory (MSL) landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.
17. APPLICABILITY OF kDNA-PCR FOR ROUTINE DIAGNOSIS OF AMERICAN TEGUMENTARY LEISHMANIASIS IN A TERTIARY REFERENCE HOSPITAL
Directory of Open Access Journals (Sweden)
Marcela M. Satow
2013-12-01
Full Text Available SUMMARY This study evaluated the applicability of kDNA-PCR as a prospective routine diagnosis method for American tegumentary leishmaniasis (ATL in patients from the Instituto de Infectologia Emílio Ribas (IIER, a reference center for infectious diseases in São Paulo - SP, Brazil. The kDNA-PCR method detected Leishmania DNA in 87.5% (112/128 of the clinically suspected ATL patients, while the traditional methods demonstrated the following percentages of positivity: 62.8% (49/78 for the Montenegro skin test, 61.8% (47/76 for direct investigation, and 19.3% (22/114 for in vitro culture. The molecular method was able to confirm the disease in samples considered negative or inconclusive by traditional laboratory methods, contributing to the final clinical diagnosis and therapy of ATL in this hospital. Thus, we strongly recommend the inclusion of kDNA-PCR amplification as an alternative diagnostic method for ATL, suggesting a new algorithm routine to be followed to help the diagnosis and treatment of ATL in IIER.
18. Applicability of kDNA-PCR for routine diagnosis of American tegumentary leishmaniasis in a tertiary reference hospital.
Science.gov (United States)
Satow, Marcela M; Yamashiro-Kanashiro, Edite H; Rocha, Mussya C; Oyafuso, Luiza K; Soler, Rita C; Cotrim, Paulo C; Lindoso, José Angelo L
2013-01-01
This study evaluated the applicability of kDNA-PCR as a prospective routine diagnosis method for American tegumentary leishmaniasis (ATL) in patients from the Instituto de Infectologia Emílio Ribas (IIER), a reference center for infectious diseases in São Paulo - SP, Brazil. The kDNA-PCR method detected Leishmania DNA in 87.5% (112/128) of the clinically suspected ATL patients, while the traditional methods demonstrated the following percentages of positivity: 62.8% (49/78) for the Montenegro skin test, 61.8% (47/76) for direct investigation, and 19.3% (22/114) for in vitro culture. The molecular method was able to confirm the disease in samples considered negative or inconclusive by traditional laboratory methods, contributing to the final clinical diagnosis and therapy of ATL in this hospital. Thus, we strongly recommend the inclusion of kDNA-PCR amplification as an alternative diagnostic method for ATL, suggesting a new algorithm routine to be followed to help the diagnosis and treatment of ATL in IIER.
19. Application of reference point indentation for micro-mechanical surface characterization of calcium silicate based dental materials.
Science.gov (United States)
Antonijević, Djordje; Milovanović, Petar; Riedel, Christoph; Hahn, Michael; Amling, Michael; Busse, Björn; Djurić, Marija
2016-04-01
The objective of this study was to elucidate micromechanical properties of Biodentine and two experimental calcium silicate cements (CSCs) using Reference Point Indentation (RPI). Biomechanical characteristics of the cement type and the effects of a radiopacifier, liquid components, acid etching treatment and bioactivation in simulated body fluid (SBF) were investigated by measuring the microhardness, average unloading slope (Avg US) and indentation distance increase (IDI). Biodentine had a greater microhardness than the experimental CSCs, while the Avg US and IDI values were not significantly different among investigated materials. There was a statistically significant difference in microhardness and IDI values between pure CSCs and radiopacified cements (p < 0.05). Micromechanical properties were not affected by different liquid components used. Acid-etching treatment reduced Biodentine's microhardness while cements' immersion in SBF resulted in greater microhardness and higher IDI values compared to the control group. Clearly, the physiological environment and the cements' composition affect their surface micromechanical properties. The addition of calcium chloride and CSCs' immersion in SBF are beneficial for CSCs' micromechanical performance, while the addition of radiopacifiers and acid etching treatment weaken the CSCs' surface. Application of RPI aids with the characterization of micromechanical properties of synthetic materials' surfaces. PMID:26888441
20. Development of high temperature reference electrodes for in-pile application: Part I. Feasibility study of the external pressure balanced Ag/AgCl reference electrode (EPBRE) and the cathodically charged Palladium hydrogen electrode
Energy Technology Data Exchange (ETDEWEB)
Bosch, R.W.; Van Nieuwenhove, R
1998-10-01
The main problems connected with corrosion potential measurements at elevated temperatures and pressures are related to the stability and lifetime of the reference electrode and the correct estimation of the potential related to the Standard Hydrogen Scale (SHE). Under Pressurised Water Reactor (PWR) conditions of 300 degrees Celsius and 150 bar, the choice of materials is also a limiting factor due to the influence of radiation. Investigations on two reference electrodes that can be used under PWR conditions are reported: the cathodically charged palladium hydrogen electrode, and the external pressure balanced silver/silver chloride electrode. Preliminary investigations with the Pd-electrode were focused on the calculation of the required charging time and the influence of dissolved oxygen. High temperature applications are discussed on the basis of results reported in the literature. Investigations with the silver/silver chloride reference electrode mainly dealt with the salt bridge which is necessary to connect the reference electrode with the testing solution. It is shown that the thermal junction potential is independent of the length of the salt bridge. In addition, the high temperature contributes to an increase of the conductivity of the solution, which is beneficial for the salt bridge connection.
1. Development of high temperature reference electrodes for in-pile application: Part I. Feasibility study of the external pressure balanced Ag/AgCl reference electrode (EPBRE) and the cathodically charged Palladium hydrogen electrode
International Nuclear Information System (INIS)
The main problems connected with corrosion potential measurements at elevated temperatures and pressures are related to the stability and lifetime of the reference electrode and the correct estimation of the potential related to the Standard Hydrogen Scale (SHE). Under Pressurised Water Reactor (PWR) conditions of 300 degrees Celsius and 150 bar, the choice of materials is also a limiting factor due to the influence of radiation. Investigations on two reference electrodes that can be used under PWR conditions are reported: the cathodically charged palladium hydrogen electrode, and the external pressure balanced silver/silver chloride electrode. Preliminary investigations with the Pd-electrode were focused on the calculation of the required charging time and the influence of dissolved oxygen. High temperature applications are discussed on the basis of results reported in the literature. Investigations with the silver/silver chloride reference electrode mainly dealt with the salt bridge which is necessary to connect the reference electrode with the testing solution. It is shown that the thermal junction potential is independent of the length of the salt bridge. In addition, the high temperature contributes to an increase of the conductivity of the solution, which is beneficial for the salt bridge connection
2. Real-time virtual reference service based on applicable artificial intelligence technologies:The début of the robot Xiaotu at Tsinghua University Library
Institute of Scientific and Technical Information of China (English)
Fei; YAO; Lei; JI; Chengyu; ZHANG; Wu; CHEN
2011-01-01
The adoption of applicable artificial intelligence technologies to library real-time virtual reference services is an innovative experimentation in one of the key areas of library services.Based on the open source software Artificial Linguistic Internet Computer Entity(A.L.I.C.E.)and a combined application of several other relevant supporting technologies for facilitating the use of the current existing library resources,Tsinghua University Library has recently developed a real-time smart talking robot,named Xiaotu,for the enhancement of its various service functions,such as reference services,book searching,Baidu Baike searching,self-directed learning,etc.The operation of Xiaotu is programmed into Renren website(a social networking website),which adds significantly an innovative feature to the modus operandi of the real-time virtual reference service at Tsinghua University Library.
3. Implicit reference-based group-wise image registration and its application to structural and functional MRI.
Science.gov (United States)
Geng, Xiujuan; Christensen, Gary E; Gu, Hong; Ross, Thomas J; Yang, Yihong
2009-10-01
In this study, an implicit reference group-wise (IRG) registration with a small deformation, linear elastic model was used to jointly estimate correspondences between a set of MRI images. The performance of pair-wise and group-wise registration algorithms was evaluated for spatial normalization of structural and functional MRI data. Traditional spatial normalization is accomplished by group-to-reference (G2R) registration in which a group of images are registered pair-wise to a reference image. G2R registration is limited due to bias associated with selecting a reference image. In contrast, implicit reference group-wise (IRG) registration estimates correspondences between a group of images by jointly registering the images to an implicit reference corresponding to the group average. The implicit reference is estimated during IRG registration eliminating the bias associated with selecting a specific reference image. Registration performance was evaluated using segmented T1-weighted magnetic resonance images from the Nonrigid Image Registration Evaluation Project (NIREP), DTI and fMRI images. Implicit reference pair-wise (IRP) registration-a special case of IRG registration for two images-is shown to produce better relative overlap than IRG for pair-wise registration using the same small deformation, linear elastic registration model. However, IRP-G2R registration is shown to have significant transitivity error, i.e., significant inconsistencies between correspondences defined by different pair-wise transformations. In contrast, IRG registration produces consistent correspondence between images in a group at the cost of slightly reduced pair-wise RO accuracy compared to IRP-G2R. IRG spatial normalization of the fractional anisotropy (FA) maps of DTI is shown to have smaller FA variance compared with G2R methods using the same elastic registration model. Analyses of fMRI data sets with sensorimotor and visual tasks show that IRG registration, on average, increases the
4. 37 CFR 1.78 - Claiming benefit of earlier filing date and cross-references to other applications.
Science.gov (United States)
2010-07-01
...-filed application must name as an inventor at least one inventor named in the later-filed application and disclose the named inventor's invention claimed in at least one claim of the later-filed... provisional applications, each prior-filed provisional application must name as an inventor at least...
5. Reference Revolutions.
Science.gov (United States)
Mason, Marilyn Gell
1998-01-01
Describes developments in Online Computer Library Center (OCLC) electronic reference services. Presents a background on networked cataloging and the initial implementation of reference services by OCLC. Discusses the introduction of OCLC FirstSearch service, which today offers access to over 65 databases, future developments in integrated…
6. Construction and application of a Korean reference panel for imputing classical alleles and amino acids of human leukocyte antigen genes.
Directory of Open Access Journals (Sweden)
Kwangwoo Kim
Full Text Available Genetic variations of human leukocyte antigen (HLA genes within the major histocompatibility complex (MHC locus are strongly associated with disease susceptibility and prognosis for many diseases, including many autoimmune diseases. In this study, we developed a Korean HLA reference panel for imputing classical alleles and amino acid residues of several HLA genes. An HLA reference panel has potential for use in identifying and fine-mapping disease associations with the MHC locus in East Asian populations, including Koreans. A total of 413 unrelated Korean subjects were analyzed for single nucleotide polymorphisms (SNPs at the MHC locus and six HLA genes, including HLA-A, -B, -C, -DRB1, -DPB1, and -DQB1. The HLA reference panel was constructed by phasing the 5,858 MHC SNPs, 233 classical HLA alleles, and 1,387 amino acid residue markers from 1,025 amino acid positions as binary variables. The imputation accuracy of the HLA reference panel was assessed by measuring concordance rates between imputed and genotyped alleles of the HLA genes from a subset of the study subjects and East Asian HapMap individuals. Average concordance rates were 95.6% and 91.1% at 2-digit and 4-digit allele resolutions, respectively. The imputation accuracy was minimally affected by SNP density of a test dataset for imputation. In conclusion, the Korean HLA reference panel we developed was highly suitable for imputing HLA alleles and amino acids from MHC SNPs in East Asians, including Koreans.
7. VizBin: an application for reference-independent visualization and human-augmented binning of metagenomic data
NARCIS (Netherlands)
Laczny, C.C.; Sternal, T.; Plugaru, V.; Gawron, P.; Atashpendar, A.; Margossian, H.H.; Coronado, S.; Van der Maaten, L.J.M.; Vlassis, N.; Wilmes, P.
2015-01-01
Background: Metagenomics is limited in its ability to link distinct microbial populations to genetic potential due to a current lack of representative isolate genome sequences. Reference-independent approaches, which exploit for example inherent genomic signatures for the clustering of metagenomic f
8. Selecting the optimal method to calculate daily global reference potential evaporation from CFSR reanalysis data for application in a hydrological model study
OpenAIRE
F. C. Sperna Weiland; C. Tisseuil; Dürr, H. H.; Vrac, M.; van Beek, L. P. H.
2012-01-01
Potential evaporation (PET) is one of the main inputs of hydrological models. Yet, there is limited consensus on which PET equation is most applicable in hydrological climate impact assessments. In this study six different methods to derive global scale reference PET daily time series from Climate Forecast System Reanalysis (CFSR) data are compared: Penman-Monteith, Priestley-Taylor and original and re-calibrated versions of the Hargreaves and Blaney-Criddle method. The calculated PET time se...
9. ICRP Publication 116—the first ICRP/ICRU application of the male and female adult reference computational phantoms
CERN Document Server
Petoussi-Henss, Nina; Eckerman, Keith F; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria
2014-01-01
ICRP Publication 116 on Conversion coefficients for radiological protection quantities for external radiation exposures', provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantit...
10. Reference signal extraction from corrupted ECG using wavelet decomposition for MRI sequence triggering: application to small animals
Directory of Open Access Journals (Sweden)
Bataillard Alain
2006-02-01
Full Text Available Abstract Background Present developments in Nuclear Magnetic Resonance (NMR imaging techniques strive for improved spatial and temporal resolution performances. However, trying to achieve the shortest gradient rising time with high intensity gradients has its drawbacks: It generates high amplitude noises that get superimposed on the simultaneously recorded electrophysiological signals, needed to synchronize moving organ images. Consequently, new strategies have to be developed for processing these collected signals during Magnetic Resonance Imaging (MRI examinations. The aim of this work is to extract an efficient reference signal, from an electrocardiogram (ECG that was contaminated by the NMR artefacts. This may be used for image triggering and/or cardiac rhythm monitoring. Methods Our method, based on sub-band decomposition using wavelet filters, is tested on various ECG signals recorded during three imaging sequences: Gradient Echo (GE, Fast Spin Echo (FSE and Inversion Recovery with Spin Echo (IRSE. In order to define the most adapted wavelet functions to use according to the excitation protocols, noise generated by each imaging sequence is recorded and analysed. After exploring noise models along with information found in the literature, a group of 14 wavelets, members of three families (Daubechies, Coiflets, Symlets, is selected for the study. The extraction process is carried out by decomposing the contaminated ECG signals into 8 scales using a given wavelet function, then combining the sub-bands necessary for cardiac synchronization, i.e. those containing the essential part of the QRS energy, to construct a reference signal. Results The efficiency of the presented method has been tested on a group of quite representative signals containing: highly contaminated (mean SNR Conclusion Sub-band decomposition proved to be very suitable for extracting a reference signal from a corrupted ECG for MRI triggering. An appropriate choice of the
11. Development of Chinese reference man deformable surface phantom and its application to the influence of physique on electromagnetic dosimetry
International Nuclear Information System (INIS)
A reference man is a theoretical individual that represents the average anatomical structure and physiological and metabolic features of a specific group of people and has been widely used in radiation safety research. With the help of an advantage in deformation, the present work proposed a Chinese reference man adult-male polygon-mesh surface phantom based on the Visible Chinese Human segment image dataset by surface rendering and deforming. To investigate the influence of physique on electromagnetic dosimetry in humans, a series of human phantoms with 10th, 50th and 90th body mass index and body circumference percentile physiques for Chinese adult males were further constructed by deforming the Chinese reference man surface phantom. All the surface phantoms were then voxelized to perform electromagnetic field simulation in a frequency range of 20 MHz to 3 GHz using the finite-difference time-domain method and evaluate the whole-body average and organ average specific absorption rate and the ratios of absorbed energy in skin, fat and muscle to the whole body. The results indicate thinner physique leads to higher WBSAR and the volume of subcutaneous fat, the penetration depth of the electromagnetic field in tissues and standing-wave occurrence may be the influence factors of physique on electromagnetic dosimetry. (paper)
12. Development of Chinese reference man deformable surface phantom and its application to the influence of physique on electromagnetic dosimetry
Science.gov (United States)
Yu, D.; Wang, M.; Liu, Q.
2015-09-01
A reference man is a theoretical individual that represents the average anatomical structure and physiological and metabolic features of a specific group of people and has been widely used in radiation safety research. With the help of an advantage in deformation, the present work proposed a Chinese reference man adult-male polygon-mesh surface phantom based on the Visible Chinese Human segment image dataset by surface rendering and deforming. To investigate the influence of physique on electromagnetic dosimetry in humans, a series of human phantoms with 10th, 50th and 90th body mass index and body circumference percentile physiques for Chinese adult males were further constructed by deforming the Chinese reference man surface phantom. All the surface phantoms were then voxelized to perform electromagnetic field simulation in a frequency range of 20 MHz to 3 GHz using the finite-difference time-domain method and evaluate the whole-body average and organ average specific absorption rate and the ratios of absorbed energy in skin, fat and muscle to the whole body. The results indicate thinner physique leads to higher WBSAR and the volume of subcutaneous fat, the penetration depth of the electromagnetic field in tissues and standing-wave occurrence may be the influence factors of physique on electromagnetic dosimetry.
13. Application of radioisotope tracer techniques in studies on host-parasite relationships, with special reference to larval trematodes. A review
International Nuclear Information System (INIS)
The application of radioisotope tracer techniques in studies on various host-parasite relationships between larval trematodes and their intermediate and definite hosts is reviewed. Such studies comprise, for example, the reproduction and nutrition of various developmental stages of trematodes in relation to host and environment. The preparation and application of radiolabelled larvae are also discussed with special emphasis on their use in studies on free-living ecology and migration in hosts. (author)
14. A Survey of Noncooperative Game Theory with Reference to Agricultural Markets: Part 2. Potential Applications in Agriculture
OpenAIRE
Sexton, Richard J.
1994-01-01
This paper is the second of a two-part survey on noncooperative game theory relevant to agricultural markets. Part 1 of the survey focused on important game theory concepts, while this paper illustrates applications of the theory to agricultural markets. Game theory is relevant when markets are imperfectly competitive, and this paper argues that this condition is commonly met in agriculture. Specific topics of application include principal-agent models, auctions, and bargaining.
15. Application of the Böhm chamber for reference beta dose measurements and the calibration of personal dosimeters
Directory of Open Access Journals (Sweden)
Skubacz Krystian
2016-03-01
Full Text Available Thermoluminescent dosimeters (TLDs currently used in personal and area dosimetry are often utilized to measure doses of ionizing radiation in fields with a more complex structure and therefore they should be calibrated in relation to different radiation types. The results of such calibration presented for UD-813 TLDs allowed for evaluation of their capability in relation to different radiation types like the beta and photon radiation of different energies and neutron radiation generated by the 241Am-Be source. The detector response for 60 keV photons was 10% higher than for the 662 keV gamma radiation of 137Cs. There were also response differences in relation to photon and beta radiation between detectors with an enhanced concentration of lithium 6Li and boron 10B and detectors containing a natural level of these isotopes. Measurements of the reference beta doses were performed with the help of the Böhm chamber. This method is relatively more complicated compared to determining the reference photon and neutron doses and is described thoroughly in this paper. The corrected current measured by the Böhm chamber for the chosen parameters was a linear function for an entire available range of the chamber depths. The percentage of errors related to the evaluated reference beta doses were below 2% despite a rather large number of corrections that should be taken into account. The calibration distances varied from 11 cm to 50 cm. For this range and beta particle energy, the absorption of radiation in the air was negligible and their attenuation had a predominantly geometric character.
16. Effects of re-application of nitrogen fertilizer on forest soil-water chemistry, with special reference to cadmium
International Nuclear Information System (INIS)
A greatly increased concentration of cadmium was found in soil water following the application of nitrogen fertilizer. Our study was conducted at an experimental site in the western part of central Sweden. Prior to this, the area had been used to study the effects of the repeated application of fertilizer, under different regimes, on forest production. In this experiment, we examined the residual effects of previous nitrogen fertilizer application regimes on soil-water chemistry, following a final, additional fertilizer application. Soil water was sampled using suction lysimeters installed at a depth of 50 cm. However, due to the failure of the lysimeters at two of the study plots, the differences between fertilizer regimes could not be evaluated. Instead, we focused on changes in the solubility of cadmium and aluminium caused by soil-water acidification due to the re-application of nitrogen fertilizer. Every fourth or eighth year, between 1981 and 1997, the study plots received 150 kg N ha-1, in the form of ammonium nitrate (AN) and calcium ammonium nitrate (CAN). The effects of the final fertilizer application (CAN) were studied. Application of nitrogen fertilizer resulted in a rapid increase in NO3- concentration in soil-water, and a decrease in pH. The increased soil-water acidity resulted in some metals becoming more soluble and occurring in higher concentrations within the soil water. The increase in concentration of some toxic heavy metals, such as cadmium, was of concern. The highest measured cadmium concentration was 2.7 μg l-1, compared to the government health limit of 5 μg l-1 for drinking water. The cadmium detected must originate from the soil since it was not present in the nitrogen fertilizer. Cadmium is highly toxic to both animals and plants, and knowledge of its occurrence, in relation to various silvicultural operations, is of great importance
17. Electronics engineer's reference book
CERN Document Server
Turner, L W
1976-01-01
Electronics Engineer's Reference Book, 4th Edition is a reference book for electronic engineers that reviews the knowledge and techniques in electronics engineering and covers topics ranging from basics to materials and components, devices, circuits, measurements, and applications. This edition is comprised of 27 chapters; the first of which presents general information on electronics engineering, including terminology, mathematical equations, mathematical signs and symbols, and Greek alphabet and symbols. Attention then turns to the history of electronics; electromagnetic and nuclear radiatio
18. MEMS based reference oscillator
OpenAIRE
Hedestig, Joel
2005-01-01
The interest in tiny wireless applications raises the demand for an integrated reference oscillator with the same performance as the macroscopic quartz crystal reference oscillators. The main challenge of the thesis is to prove that it is possible to build a MEMS based oscillator that approaches the accuracy level of existing quartz crystal oscillators. The MEMS resonator samples which Philips provides are measured and an equivalent electrical model is designed for them. This model is used in...
19. ICRP Publication 116—the first ICRP/ICRU application of the male and female adult reference computational phantoms
Science.gov (United States)
Petoussi-Henss, Nina; Bolch, Wesley E.; Eckerman, Keith F.; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G.; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria
2014-09-01
ICRP Publication 116 on ‘Conversion coefficients for radiological protection quantities for external radiation exposures’, provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the ‘conventional’ energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116.
20. Regular Expression Pocket Reference
CERN Document Server
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
1. Handbook of reference electrodes
CERN Document Server
Inzelt, György; Scholz, Fritz
2013-01-01
Reference Electrodes are a crucial part of any electrochemical system, yet an up-to-date and comprehensive handbook is long overdue. Here, an experienced team of electrochemists provides an in-depth source of information and data for the proper choice and construction of reference electrodes. This includes all kinds of applications such as aqueous and non-aqueous solutions, ionic liquids, glass melts, solid electrolyte systems, and membrane electrodes. Advanced technologies such as miniaturized, conducting-polymer-based, screen-printed or disposable reference electrodes are also covered. Essen
2. Laser Frequency Stabilization for Coherent Lidar Applications using Novel All-Fiber Gas Reference Cell Fabrication Technique
Science.gov (United States)
Meras, Patrick, Jr.; Poberezhskiy, Ilya Y.; Chang, Daniel H.; Levin, Jason; Spiers, Gary D.
2008-01-01
Compact hollow-core photonic crystal fiber (HC-PCF)gas frequency reference cell was constructed using a novel packaging technique that relies on torch-sealing a quartz filling tube connected to a mechanical splice between regular and hollow-core fibers. The use of this gas cell for laser frequency stabilization was demonstrated by locking a tunable diode laser to the center of the P9 line from the (nu)1+(nu)3 band of acetylene with RMS frequency error of 2.06 MHz over 2 hours. This effort was performed in support of a task to miniaturize the laser frequency stabilization subsystem of JPL/LMCT Laser Absorption Spectrometer (LAS) instrument.
3. Analysis of laser shock experiments on precompressed samples using a quartz reference and application to warm dense hydrogen and helium
CERN Document Server
Brygoo, Stephanie; Loubeyre, Paul; Lazicki, Amy E; Hamel, Sebastien; Qi, Tingting; Celliers, Peter M; Coppari, Federica; Eggert, Jon H; Fratanduono, Dayne E; Hicks, Damien G; Rygg, J Ryan; Smith, Raymond F; Swift, Damian C; Collins, Gilbert W; Jeanloz, Raymond
2015-01-01
Megabar (1 Mbar = 100 GPa) laser shocks on precompressed samples allow reaching unprecedented high densities and moderately high 10000-100000K temperatures. We describe here a complete analysis framework for the velocimetry (VISAR) and pyrometry (SOP) data produced in these experiments. Since the precompression increases the initial density of both the sample of interest and the quartz reference for pressure-density, reflectivity and temperature measurements, we describe analytical corrections based on available experimental data on warm dense silica and density-functional-theory based molecular dynamics computer simulations. Using our improved analysis framework we report a re-analysis of previously published data on warm dense hydrogen and helium, compare the newly inferred pressure, density and temperature data with most advanced equation of state models and provide updated reflectivity values.
4. JDBC Pocket Reference
CERN Document Server
Bales, Donald
2003-01-01
JDBC--the Java Database Connectivity specification--is a complex set of application programming interfaces (APIs) that developers need to understand if they want their Java applications to work with databases. JDBC is so complex that even the most experienced developers need to refresh their memories from time to time on specific methods and details. But, practically speaking, who wants to stop and thumb through a weighty tutorial volume each time a question arises? The answer is the JDBC Pocket Reference, a data-packed quick reference that is both a time-saver and a lifesaver. The JDBC P
5. Arsenic fractionation by sequential extractions in standard reference materials and industrially contaminated soil samples: Applicability and drawbacks
Energy Technology Data Exchange (ETDEWEB)
Herreweghe, S. van; Swennen, R. [Fysico-Chemische Geologie, Heverlee (Belgium)
2003-07-01
Availability mobility (phyto)toxycity and potential risk of contaminants is strongly affected by the manner of appearance of elements, the so-called speciation. Operational fractionation methods like sequential extractions have been applied for a long time to determine the solid phase speciation of heavy metals since direct determination of specific chemical compounds can not always be easily achieved. The aim of this research was to assess the applicability of sequential extractions on highly contaminated soils where arsenic is also present as discrete As-bearing minerals. Sequential extractions are mostly developed to fractionate heavy metals occurring in trace amounts and their applicability on highly contaminated samples remains insufficiently studied. There was, furthermore, a need to evaluate sequential extraction schemes specifically focussing on metalloid element extraction such as arsenic. (orig.)
6. Relativistic equation-of-motion coupled-cluster method using open-shell reference wavefunction: Application to ionization potential.
Science.gov (United States)
Pathak, Himadri; Sasmal, Sudip; Nayak, Malaya K; Vaval, Nayana; Pal, Sourav
2016-08-21
The open-shell reference relativistic equation-of-motion coupled-cluster method within its four-component description is successfully implemented with the consideration of single- and double- excitation approximations using the Dirac-Coulomb Hamiltonian. At the first attempt, the implemented method is employed to calculate ionization potential value of heavy atomic (Ag, Cs, Au, Fr, and Lr) and molecular (HgH and PbF) systems, where the effect of relativity does really matter to obtain highly accurate results. Not only the relativistic effect but also the effect of electron correlation is crucial in these heavy atomic and molecular systems. To justify the fact, we have taken two further approximations in the four-component relativistic equation-of-motion framework to quantify how the effect of electron correlation plays a role in the calculated values at different levels of theory. All these calculated results are compared with the available experimental data as well as with other theoretically calculated values to judge the extent of accuracy obtained in our calculations. PMID:27544090
7. Measurements of the total-body potassium contents. Application of reference value with the whole-body counter
Energy Technology Data Exchange (ETDEWEB)
Yamamoto, Tetsuo [Chiba Univ. (Japan). Inst. for Training Radiological Technicians; Saegusa, Kenji; Arimizu, Noboru; Kuniyasu, Yoshio; Itoh, Hisao
2001-08-01
The total-body potassium contents were measured in 405 healthy volunteers and 186 patients with whole body counter in Chiba University Hospital. The total-body potassium contents was expressed by the reference value (R value). The R value was calculated as measured potassium contents (g) divided by the body surface area (m{sup 2}) and adjusted by age and sex of healthy persons. The R value was 100.65{+-}9.22% in 405 healthy volunteers. Those of each disease were as follows: liver cirrhosis; 94.24{+-}11.22%, chronic hepatitis; 95.74{+-}11.24%, hyperthyroidism; 99.37{+-}10.8%, periodic paralysis; 82.0{+-}9.01%, Barter's syndrome; 93.99{+-}9.86%, myasthenia gravis; 97.34{+-}6.42% and hypo-potassemia; 90.64{+-}11.76%, respectively. The R values of other diseases such as uterine cancer, breast cancer, anemia, hypertension were 97.78{+-}11.5%, 99.22{+-}8.88%, 96.64{+-}12.73%, 98.5{+-}9.63% respectively. Fourteen patients showed especially lower R values under 75%. These were 1 liver cirrhosis, 3 hypertension, 1 diabetes mellitus, 3 hypo-potassemia, 1 periodic paralysis, 2 Barter's syndrome, 2 chemical poisoning, and 1 breast cancer. Follow-up study was performed in some patients with the lower R values. The result of follow-up study showed that there was a relationship between improvement of symptoms and increase of total body potassium contents. (author)
8. Surface and bulk characterization of an ultrafine South African coal fly ash with reference to polymer applications
Science.gov (United States)
van der Merwe, E. M.; Prinsloo, L. C.; Mathebula, C. L.; Swart, H. C.; Coetsee, E.; Doucet, F. J.
2014-10-01
South African coal-fired power stations produce about 25 million tons of fly ash per annum, of which only approximately 5% is currently reused. A growing concern about pollution and increasing landfill costs stimulates research into new ways to utilize coal fly ash for economically beneficial applications. Fly ash particles may be used as inorganic filler in polymers, an application which generally requires the modification of their surface properties. In order to design experiments that will result in controlled changes in surface chemistry and morphology, a detailed knowledge of the bulk chemical and mineralogical compositions of untreated fly ash particles, as well as their morphology and surface properties, is needed. In this paper, a combination of complementary bulk and surface techniques was explored to assess the physicochemical properties of a classified, ultrafine coal fly ash sample, and the findings were discussed in the context of polymer application as fillers. The sample was categorized as a Class F fly ash (XRF). Sixty-two percent of the sample was an amorphous glass phase, with mullite and quartz being the main identified crystalline phases (XRD, FTIR). Quantitative carbon and sulfur analysis reported a total bulk carbon and sulfur content of 0.37% and 0.16% respectively. The spatial distribution of the phases was determined by 2D mapping of Raman spectra, while TGA showed a very low weight loss for temperatures ranging between 25 and 1000 °C. Individual fly ash particles were characterized by a monomodal size distribution (PSD) of spherical particles with smooth surfaces (SEM, TEM, AFM), and a mean particle size of 4.6 μm (PSD). The BET active surface area of this sample was 1.52 m2/g and the chemical composition of the fly ash surface (AES, XPS) was significantly different from the bulk composition and varied considerably between spheres. Many properties of the sample (e.g. spherical morphology, small particle size, thermal stability) appeared
9. Summary report of second research coordination meeting on parameters for calculation of nuclear reactions of relevance to non-energy nuclear application (Reference Input Parameter Library: Phase III)
International Nuclear Information System (INIS)
A summary is given of the Second Research Coordination Meeting on Parameters for Calculation of Nuclear Reactions of Relevance to Non-Energy Nuclear Applications (Reference Input Parameter Library: Phase III), including a review of the various work undertaken by participants. The new RIPL-3 library should serve as input for theoretical calculations of nuclear reaction data at incident energies up to 200 MeV, as needed for energy and non-energy modern applications of nuclear data. Significant progress was achieved in defining the contents of the RIPL-3 library. Technical discussions and the resulting work plan of the Coordinated Research Programme are summarized, along with actions and deadlines. Participants' summary reports at the RCM are also included in this report. (author)
10. Python library reference
NARCIS (Netherlands)
Rossum, G. van
1995-01-01
Python is an extensible, interpreted, object-oriented programming language. It supports a wide range of applications, from simple text processing scripts to interactive WWW browsers. While the Python Reference Manual describes the exact syntax and semantics of the language, it does not describe the
11. Simultaneous determination of Si, Al and Na concentrations by particle induced gamma-ray emission and applications to reference materials and ceramic archaeological artifacts
Energy Technology Data Exchange (ETDEWEB)
Dasari, K.B. [Radiochemistry Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); GITAM Institute of Science, GITAM University, Visakhapatnam 530045 (India); Chhillar, S. [Radiochemistry Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); Acharya, R., E-mail: [email protected] [Radiochemistry Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); Ray, D.K.; Behera, A. [Ion Beam Laboratory, Institute of Physics, Bhubaneswar 751005 (India); Lakshmana Das, N. [GITAM Institute of Science, GITAM University, Visakhapatnam 530045 (India); Pujari, P.K. [Radiochemistry Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India)
2014-11-15
A particle induced gamma ray emission (PIGE) method using 4 MeV proton beam was standardized for simultaneous determination of Si, Al and Na concentrations and has been applied for non-destructive analysis of several reference materials and archaeological clay pottery samples. Current normalized count rates of gamma-rays for the three elements listed above were obtained by an in situ method using Li as internal standard. The paper presents application of the in situ current normalized PIGE method for grouping study of 39 clay potteries, obtained from Rajasthan and Andhra Pradesh states of India. Grouping of artifacts was carried out using the ratios of SiO{sub 2} to Al{sub 2}O{sub 3} concentrations, due to their non volatile nature. Powder samples and elemental standards in pellet forms (cellulose matrix) were irradiated using the 4 MeV proton beam (∼10 nA) from the 3 MV tandem accelerator at IOP Bhubaneswar, and assay of prompt gamma rays was carried out using a 60% relative efficiency HPGe detector coupled to MCA. The concentration ratio values of SiO{sub 2}/Al{sub 2}O{sub 3} indicated that pottery samples fell into two major groups, which are in good agreement with their collection areas. Reference materials from IAEA and NIST were analyzed for quantification of Si, Al and Na concentrations as a part of validation as well as application of PIGE method.
12. Electrical engineer's reference book
CERN Document Server
Jones, G R
2013-01-01
A long established reference book: radical revision for the fifteenth edition includes complete rearrangement to take in chapters on new topics and regroup the subjects covered for easy access to information.The Electrical Engineer's Reference Book, first published in 1945, maintains its original aims: to reflect the state of the art in electrical science and technology and cater for the needs of practising engineers. Most chapters have been revised and many augmented so as to deal properly with both fundamental developments and new technology and applications that have come to the fore since
13. Application of genotyping-by-sequencing on semiconductor sequencing platforms: a comparison of genetic and reference-based marker ordering in barley.
Directory of Open Access Journals (Sweden)
Martin Mascher
Full Text Available The rapid development of next-generation sequencing platforms has enabled the use of sequencing for routine genotyping across a range of genetics studies and breeding applications. Genotyping-by-sequencing (GBS, a low-cost, reduced representation sequencing method, is becoming a common approach for whole-genome marker profiling in many species. With quickly developing sequencing technologies, adapting current GBS methodologies to new platforms will leverage these advancements for future studies. To test new semiconductor sequencing platforms for GBS, we genotyped a barley recombinant inbred line (RIL population. Based on a previous GBS approach, we designed bar code and adapter sets for the Ion Torrent platforms. Four sets of 24-plex libraries were constructed consisting of 94 RILs and the two parents and sequenced on two Ion platforms. In parallel, a 96-plex library of the same RILs was sequenced on the Illumina HiSeq 2000. We applied two different computational pipelines to analyze sequencing data; the reference-independent TASSEL pipeline and a reference-based pipeline using SAMtools. Sequence contigs positioned on the integrated physical and genetic map were used for read mapping and variant calling. We found high agreement in genotype calls between the different platforms and high concordance between genetic and reference-based marker order. There was, however, paucity in the number of SNP that were jointly discovered by the different pipelines indicating a strong effect of alignment and filtering parameters on SNP discovery. We show the utility of the current barley genome assembly as a framework for developing very low-cost genetic maps, facilitating high resolution genetic mapping and negating the need for developing de novo genetic maps for future studies in barley. Through demonstration of GBS on semiconductor sequencing platforms, we conclude that the GBS approach is amenable to a range of platforms and can easily be modified as new
14. Development and field application of a nonlinear ultrasonic modulation technique for fatigue crack detection without reference data from an intact condition
Science.gov (United States)
Lim, Hyung Jin; Kim, Yongtak; Koo, Gunhee; Yang, Suyoung; Sohn, Hoon; Bae, In-hwan; Jang, Jeong-Hwan
2016-09-01
In this study, a fatigue crack detection technique, which detects a fatigue crack without relying on any reference data obtained from the intact condition of a target structure, is developed using nonlinear ultrasonic modulation and applied to a real bridge structure. Using two wafer-type lead zirconate titanate (PZT) transducers, ultrasonic excitations at two distinctive frequencies are applied to a target inspection spot and the corresponding ultrasonic response is measured by another PZT transducer. Then, the nonlinear modulation components produced by a breathing-crack are extracted from the measured ultrasonic response, and a statistical classifier, which can determine if the nonlinear modulation components are statistically significant in comparison with the background noise level, is proposed. The effectiveness of the proposed fatigue crack detection technique is experimentally validated using the data obtained from aluminum plates and aircraft fitting-lug specimens under varying temperature and loading conditions, and through a field testing of Yeongjong Grand Bridge in South Korea. The uniqueness of this study lies in that (1) detection of a micro fatigue crack with less than 1 μm width and fatigue cracks in the range of 10-20 μm in width using nonlinear ultrasonic modulation, (2) automated detection of fatigue crack formation without using reference data obtained from an intact condition, (3) reliable and robust diagnosis under varying temperature and loading conditions, (4) application of a local fatigue crack detection technique to online monitoring of a real bridge.
15. Aplicação das Dietary Reference Intakes na avaliação da ingestão de nutrientes para indivíduos Application of Dietary Reference Intakes for assessment of individuals
Directory of Open Access Journals (Sweden)
Dirce Maria Lobo Marchioni
2004-06-01
Full Text Available A avaliação do estado nutricional é uma das práticas clínicas fundamentais para tomar-se a decisão quanto ao diagnóstico nutricional de um indivíduo e à conduta dietética a ser-lhe prescrita. A adequação da ingestão de nutrientes é um dos componentes da avaliação nutricional e é feita a partir de valores de referência que se constituem em estimativas das necessidades fisiológicas desses nutrientes e metas de ingestão dos mesmos. Colocam-se hoje à disposição dos profissionais um novo conjunto de valores de referência que constituem um avanço importante no modo de interpretar a adequação dietética: as Dietary Reference Intakes. Este artigo aborda os métodos propostos para avaliação da adequação da ingestão de nutriente às necessidades do indivíduo, utilizando os novos valores de referências.The nutritional status assessment is one of the fundamental clinical approaches in making a decision about nutritional diagnosis and dietetic behavior, in order to prescribe an adequate diet therapy. The evaluation of nutrient intakes is a component of the nutritional assessment and it is made from estimates of nutrient physiological needs and goals for good nutrition, known as reference values. A new group of reference values is recently available for health professionals: the Dietary Reference Intakes, which represent an important progress in the field of dietary assessment interpretation. This paper discusses the proposed methods for the individual nutrient intake assessment, using the Dietary Reference Intakes.
16. Enterprise Reference Library
Science.gov (United States)
Bickham, Grandin; Saile, Lynn; Havelka, Jacque; Fitts, Mary
2011-01-01
Introduction: Johnson Space Center (JSC) offers two extensive libraries that contain journals, research literature and electronic resources. Searching capabilities are available to those individuals residing onsite or through a librarian s search. Many individuals have rich collections of references, but no mechanisms to share reference libraries across researchers, projects, or directorates exist. Likewise, information regarding which references are provided to which individuals is not available, resulting in duplicate requests, redundant labor costs and associated copying fees. In addition, this tends to limit collaboration between colleagues and promotes the establishment of individual, unshared silos of information The Integrated Medical Model (IMM) team has utilized a centralized reference management tool during the development, test, and operational phases of this project. The Enterprise Reference Library project expands the capabilities developed for IMM to address the above issues and enhance collaboration across JSC. Method: After significant market analysis for a multi-user reference management tool, no available commercial tool was found to meet this need, so a software program was built around a commercial tool, Reference Manager 12 by The Thomson Corporation. A use case approach guided the requirements development phase. The premise of the design is that individuals use their own reference management software and export to SharePoint when their library is incorporated into the Enterprise Reference Library. This results in a searchable user-specific library application. An accompanying share folder will warehouse the electronic full-text articles, which allows the global user community to access full -text articles. Discussion: An enterprise reference library solution can provide a multidisciplinary collection of full text articles. This approach improves efficiency in obtaining and storing reference material while greatly reducing labor, purchasing and
17. Nuclear Science References Database
OpenAIRE
PRITYCHENKO B.; Běták, E.; B. Singh; Totans, J.
2013-01-01
The Nuclear Science References (NSR) database together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance...
18. Python library reference
OpenAIRE
Rossum, van, M.A.J.
1995-01-01
Python is an extensible, interpreted, object-oriented programming language. It supports a wide range of applications, from simple text processing scripts to interactive WWW browsers. While the Python Reference Manual describes the exact syntax and semantics of the language, it does not describe the standard library that is distributed with the language, and which greatly enhances its immediate usability. This library contains built-in modules (written in C) that provide access to system funct...
19. Reference to Galery
Directory of Open Access Journals (Sweden)
Yuri F. Katorin
2016-06-01
Full Text Available In the article it is told about some aspects of the application of military rowing vessels, composition and rules the formation of the command of rowers, special features of the organization of their work on bringing of ship into the motion, is analyzed the influence of the practice of use reference to gallery of the condemned criminals to entire legislation of this country.
20. VBScript pocket reference
CERN Document Server
Lomax, Paul; Petrusha, Ron
2008-01-01
Microsoft's Visual Basic Scripting Edition (VBScript), a subset of Visual Basic for Applications, is a powerful language for Internet application development, where it can serve as a scripting language for server-side, client-side, and system scripting. Whether you're developing code for Active Server Pages, client-side scripts for Internet Explorer, code for Outlook forms, or scripts for Windows Script Host, VBScript Pocket Reference will be your constant companion. Don't let the pocket-friendly format fool you. Based on the bestsellingVBScript in a Nutshell, this small book details every V
1. Instrumentation reference book
CERN Document Server
Boyes, Walt
2002-01-01
Instrumentation is not a clearly defined subject, having a 'fuzzy' boundary with a number of other disciplines. Often categorized as either 'techniques' or 'applications' this book addresses the various applications that may be needed with reference to the practical techniques that are available for the instrumentation or measurement of a specific physical quantity or quality. This makes it of direct interest to anyone working in the process, control and instrumentation fields where these measurements are essential.* Comprehensive and authoritative collection of technical information* Writte
2. Selecting the optimal method to calculate daily global reference potential evaporation from CFSR reanalysis data for application in a hydrological model study
Science.gov (United States)
Sperna Weiland, F. C.; Tisseuil, C.; Dürr, H. H.; Vrac, M.; van Beek, L. P. H.
2012-03-01
Potential evaporation (PET) is one of the main inputs of hydrological models. Yet, there is limited consensus on which PET equation is most applicable in hydrological climate impact assessments. In this study six different methods to derive global scale reference PET daily time series from Climate Forecast System Reanalysis (CFSR) data are compared: Penman-Monteith, Priestley-Taylor and original and re-calibrated versions of the Hargreaves and Blaney-Criddle method. The calculated PET time series are (1) evaluated against global monthly Penman-Monteith PET time series calculated from CRU data and (2) tested on their usability for modeling of global discharge cycles. A major finding is that for part of the investigated basins the selection of a PET method may have only a minor influence on the resulting river flow. Within the hydrological model used in this study the bias related to the PET method tends to decrease while going from PET, AET and runoff to discharge calculations. However, the performance of individual PET methods appears to be spatially variable, which stresses the necessity to select the most accurate and spatially stable PET method. The lowest root mean squared differences and the least significant deviations (95% significance level) between monthly CFSR derived PET time series and CRU derived PET were obtained for a cell-specific re-calibrated Blaney-Criddle equation. However, results show that this re-calibrated form is likely to be unstable under changing climate conditions and less reliable for the calculation of daily time series. Although often recommended, the Penman-Monteith equation applied to the CFSR data did not outperform the other methods in a evaluation against PET derived with the Penman-Monteith equation from CRU data. In arid regions (e.g. Sahara, central Australia, US deserts), the equation resulted in relatively low PET values and, consequently, led to relatively high discharge values for dry basins (e.g. Orange, Murray and
3. Library Reference Services.
Science.gov (United States)
Miller, Constance; And Others
1985-01-01
Seven articles on library reference services highlight reference obsolescence in academic libraries, major studies of unobtrusive reference tests, methods for evaluating reference desk performance, reference interview evaluation, problems of reference desk control, online searching by end users, and reference collection development in…
4. Development of molecular closures for the reference interaction site model theory with application to square-well and Lennard-Jones homonuclear diatomics
Science.gov (United States)
Munaò, Gianmarco; Costa, Dino; Caccamo, Carlo
2016-10-01
Inspired by significant improvements obtained for the performances of the polymer reference interaction site model (PRISM) theory of the fluid phase when coupled with ‘molecular closures’ (Schweizer and Yethiraj 1993 J. Chem. Phys. 98 9053), we exploit a matrix generalization of this concept, suitable for the more general RISM framework. We report a preliminary test of the formalism, as applied to prototype square-well homonuclear diatomics. As for the structure, comparison with Monte Carlo shows that molecular closures are slightly more predictive than their ‘atomic’ counterparts, and thermodynamic properties are equally accurate. We also devise an application of molecular closures to models interacting via continuous, soft-core potentials, by using well established prescriptions in liquid state perturbation theories. In the case of Lennard-Jones dimers, our scheme definitely improves over the atomic one, providing semi-quantitative structural results, and quite good estimates of internal energy, pressure and phase coexistence. Our finding paves the way to a systematic employment of molecular closures within the RISM framework to be applied to more complex systems, such as molecules constituted by several non-equivalent interaction sites.
5. Development of molecular closures for the reference interaction site model theory with application to square-well and Lennard-Jones homonuclear diatomics.
Science.gov (United States)
Munaò, Gianmarco; Costa, Dino; Caccamo, Carlo
2016-10-19
Inspired by significant improvements obtained for the performances of the polymer reference interaction site model (PRISM) theory of the fluid phase when coupled with 'molecular closures' (Schweizer and Yethiraj 1993 J. Chem. Phys. 98 9053), we exploit a matrix generalization of this concept, suitable for the more general RISM framework. We report a preliminary test of the formalism, as applied to prototype square-well homonuclear diatomics. As for the structure, comparison with Monte Carlo shows that molecular closures are slightly more predictive than their 'atomic' counterparts, and thermodynamic properties are equally accurate. We also devise an application of molecular closures to models interacting via continuous, soft-core potentials, by using well established prescriptions in liquid state perturbation theories. In the case of Lennard-Jones dimers, our scheme definitely improves over the atomic one, providing semi-quantitative structural results, and quite good estimates of internal energy, pressure and phase coexistence. Our finding paves the way to a systematic employment of molecular closures within the RISM framework to be applied to more complex systems, such as molecules constituted by several non-equivalent interaction sites. PMID:27548461
6. Xcode 5 developer reference
CERN Document Server
Wentk, Richard
2014-01-01
Design, code, and build amazing apps with Xcode 5 Thanks to Apple's awesome Xcode development environment, you can create the next big app for Macs, iPhones, iPads, or iPod touches. Xcode 5 contains gigabytes of great stuff to help you develop for both OS X and iOS devices - things like sample code, utilities, companion applications, documentation, and more. And with Xcode 5 Developer Reference, you now have the ultimate step-by-step guide to it all. Immerse yourself in the heady and lucrative world of Apple app development, see how to tame the latest features and functions, and find loads of
7. Electronics engineer's reference book
CERN Document Server
Mazda, F F
1989-01-01
Electronics Engineer's Reference Book, Sixth Edition is a five-part book that begins with a synopsis of mathematical and electrical techniques used in the analysis of electronic systems. Part II covers physical phenomena, such as electricity, light, and radiation, often met with in electronic systems. Part III contains chapters on basic electronic components and materials, the building blocks of any electronic design. Part IV highlights electronic circuit design and instrumentation. The last part shows the application areas of electronics such as radar and computers.
8. Selecting the optimal method to calculate daily global reference potential evaporation from CFSR reanalysis data for application in a hydrological model study
Directory of Open Access Journals (Sweden)
F. C. Sperna Weiland
2012-03-01
Full Text Available Potential evaporation (PET is one of the main inputs of hydrological models. Yet, there is limited consensus on which PET equation is most applicable in hydrological climate impact assessments. In this study six different methods to derive global scale reference PET daily time series from Climate Forecast System Reanalysis (CFSR data are compared: Penman-Monteith, Priestley-Taylor and original and re-calibrated versions of the Hargreaves and Blaney-Criddle method. The calculated PET time series are (1 evaluated against global monthly Penman-Monteith PET time series calculated from CRU data and (2 tested on their usability for modeling of global discharge cycles.
A major finding is that for part of the investigated basins the selection of a PET method may have only a minor influence on the resulting river flow. Within the hydrological model used in this study the bias related to the PET method tends to decrease while going from PET, AET and runoff to discharge calculations. However, the performance of individual PET methods appears to be spatially variable, which stresses the necessity to select the most accurate and spatially stable PET method. The lowest root mean squared differences and the least significant deviations (95% significance level between monthly CFSR derived PET time series and CRU derived PET were obtained for a cell-specific re-calibrated Blaney-Criddle equation. However, results show that this re-calibrated form is likely to be unstable under changing climate conditions and less reliable for the calculation of daily time series. Although often recommended, the Penman-Monteith equation applied to the CFSR data did not outperform the other methods in a evaluation against PET derived with the Penman-Monteith equation from CRU data. In arid regions (e.g. Sahara, central Australia, US deserts, the equation resulted in relatively low PET values and, consequently, led to relatively high discharge values for dry basins (e
9. Transuranium reference measurements and reference materials
International Nuclear Information System (INIS)
During the 30 years of its existence, the Central Bureau for Nuclear Measurements has been involved in numerous high-accuracy investigations of mainly nuclear properties of elements important for the nuclear fuel cycle and for application of relevant radionuclides. This paper reports that these studies were made possible by the availability of particle accelerators (150-MeV electron linear accelerator; 3.7- and 7-MV Van de Graaff accelerators) as neutron sources and the use of specialized laboratories for the preparation, characterization, and manipulation of radioactive samples. Examples of investigations with plutonium and neptunium are capture and fission cross sections of 239Pu at very low energies, spontaneous fission fragment mass and energy distributions of 242Pu at very low energies, spontaneous fission fragment mass and energy distributions of 242Pu, alpha-particle and gamma-ray spectra of 237Np, half-life of 241Pu, use of 237Np in reactor neutron dosimetry, preparation and characterization of plutonium oxide reference materials for nondestructive analysis, and demonstration of the potential of synthetic isotope mixtures for analytical control of the nuclear fuel cycle
10. Virtual reference services
OpenAIRE
Márdero Arellano, Miguel Ángel
2001-01-01
Analysis of virtual reference services, their standandars and new technologies that have changed the tradicional practice at the library’s reference desk. Major American virtual reference services initiatives and their characteristics are described.
11. 标准物质在科研设备质量控制体系中的使用和管理%Application and Management of Reference Materials in Quality Assurance System for the Equipment of Scientific Research
Institute of Scientific and Technical Information of China (English)
高艳艳; 何春泽; 张春霞; 程环
2012-01-01
The importance of the application and management of reference materials in the whole quality assurance system for the equipment of scientific research was described. The problems at the present stage of the reference materials for the equipment of scientific research were pointed out, and the corresponding solutions were proposed.%阐述了标准物质的应用和管理在科研设备质量控制体系中的重要性,指出了现阶段科研设备用标准物质存在的问题,并提出了对策.
12. National Nutrition Education Clearing House Reference List, General Teacher References.
Science.gov (United States)
National Nutrition Education Clearing House, Berkeley, CA.
References applicable to both elementary and secondary levels, as well as background information of importance to teachers in the field of nutrition and nutrition education, are included in this bibliography. Although not a comprehensive list, resources include books, pamphlets, curriculum guides, bibliographies, newsletters, article reprints, and…
13. Application of new reference materials for the quality control of water; Aplicaciones de nuevos materiales de referencia microbiologicos al control de la calidad del agua
Energy Technology Data Exchange (ETDEWEB)
Yanez, A.; Soria, E.; Murtula, R.; Catalan, V.
2007-07-01
In laboratories dedicated to the quality control of water, as it occurs in the rest of assay laboratories, it is important to guarantee de validity and traceability of the generated results, and for this reason there is an increasing interest in introducing standards such as UNE-EN ISO/ICI 17025. Therefore, it is important and necessary the use of reference materials for different purposes, such as methods validation or the performance of actions based on the systematic contrast of all the activities involves in the quality control. Nevertheless, in microbiology the present offer of reference materials in very low mainly because they are very difficult to be prepared, and for this reason in this work we show our experience in the development of a new microbiological quantitative reference material, that will improve the use of microbial strains in the assay laboratories. (Author)
14. Rational Application and Reflection of References in Thesis Writing%参考文献在论文写作中的合理应用及思考
Institute of Scientific and Technical Information of China (English)
陆遐
2014-01-01
The quotations should be identified correctly by standard, scientific and reasonable method in the use of the references. This paper analyzes how to use references rationally in thesis writing, elaborates the meaning and inspection methods of the references, and provides help for thesis writing.%在参考文献的使用上,需以规范、科学、合理的方式,正确地进行引文标识。本文就在论文的写作中如何合理运用参考文献的问题进行分析,对参考文献的意义、查阅方法等问题进行阐述,为论文的写作提供一定帮助。
15. Magnetohydrodynamic inertial reference system
Science.gov (United States)
Eckelkamp-Baker, Dan; Sebesta, Henry R.; Burkhard, Kevin
2000-07-01
Optical platforms increasingly require attitude knowledge and optical instrument pointing at sub-microradian accuracy. No low-cost commercial system exists to provide this level of accuracy for guidance, navigation, and control. The need for small, inexpensive inertial sensors, which may be employed in pointing control systems that are required to satisfy angular line-of-sight stabilization jitter error budgets to levels of 1-3 microradian rms and less, has existed for at least two decades. Innovations and evolutions in small, low-noise inertial angular motion sensor technology and advances in the applications of the global positioning system have converged to allow improvement in acquisition, tracking and pointing solutions for a wide variety of payloads. We are developing a small, inexpensive, and high-performance inertial attitude reference system that uses our innovative magnetohydrodynamic angular rate sensor technology.
16. Live, Digital Reference.
Science.gov (United States)
Kenney, Brian
2002-01-01
Discusses digital reference services, also known as virtual reference, chat reference, or online reference, based on a round table discussion at the 2002 American Library Association annual conference in Atlanta. Topics include numbers and marketing; sustainability; competition and models; evaluation methods; outsourcing; staffing and training;…
17. Fundamentals of Reference
Science.gov (United States)
Mulac, Carolyn M.
2012-01-01
The all-in-one "Reference reference" you've been waiting for, this invaluable book offers a concise introduction to reference sources and services for a variety of readers, from library staff members who are asked to work in the reference department to managers and others who wish to familiarize themselves with this important area of…
18. Reference Models for Virtual Enterprises
DEFF Research Database (Denmark)
Tølle, Martin; Bernus, Peter; Vesterager, Johan
2002-01-01
This paper analyses different types of Reference Models (RMs) applicable to support the set up and (re)configuration of virtual enterprises (VEs). RMs are models capturing concepts common to VEs aiming to convert the task of setting up of VE towards a configuration task, and hence reduce the time...... needed for VE creation. The RMs are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) created in the IMS GLOBEMEN project based upon GERAM.......This paper analyses different types of Reference Models (RMs) applicable to support the set up and (re)configuration of virtual enterprises (VEs). RMs are models capturing concepts common to VEs aiming to convert the task of setting up of VE towards a configuration task, and hence reduce the time...
19. Numerical investigation of Marine Hydrokinetic Turbines: methodology development for single turbine and small array simulation, and application to flume and full-scale reference models
Science.gov (United States)
Javaherchi Mozafari, Amir Teymour
A hierarchy of numerical models, Single Rotating Reference Frame (SRF) and Blade Element Model (BEM), were used for numerical investigation of horizontal axis Marine Hydrokinetic (MHK) Turbines. In the initial stage the SRF and BEM were used to simulate the performance and turbulent wake of a flume- and a full-scale MHK turbine reference model. A significant level of understanding and confidence was developed in the implementation of numerical models for simulation of a MHK turbine. This was achieved by simulation of the flume-scale turbine experiments and comparison between numerical and experimental results. Then the developed numerical methodology was applied to simulate the performance and wake of the full-scale MHK reference model (DOE Reference Model 1). In the second stage the BEM was used to simulate the experimental study of two different MHK turbine array configurations (i.e. two and three coaxial turbines). After developing a numerical methodology using the experimental comparison to simulate the flow field of a turbine array, this methodology was applied toward array optimization study of a full-scale model with the goal of proposing an optimized MHK turbine configuration with minimal computational cost and time. In the last stage the BEM was used to investigate one of the potential environmental effects of MHK turbine. A general methodological approach was developed and experimentally validated to investigate the effect of MHK turbine wake on the sedimentation process of suspended particles in a tidal channel.
20. 2002 reference document; Document de reference 2002
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-07-01
This 2002 reference document of the group Areva, provides information on the society. Organized in seven chapters, it presents the persons responsible for the reference document and for auditing the financial statements, information pertaining to the transaction, general information on the company and share capital, information on company operation, changes and future prospects, assets, financial position, financial performance, information on company management and executive board and supervisory board, recent developments and future prospects. (A.L.B.)
1. Application of Isotope Dilution Mass Spectrometry for Reference Measurements of Cadmium. Copper, Mercury, Lead, Zinc and Methyl Mercury in Marine Sediment Sample
Directory of Open Access Journals (Sweden)
Vasileva E.
2013-04-01
Full Text Available Marine sediment was selected as a test sample for the laboratory inter-comparison studies organized by the Environment Laboratoryes of the International Atomic Energy. The analytical procedure to establish the reference values for the Cd, Cu, Hg, Methyl Hg, Pb and Zn amount contents was based on Isotope Dilution Inductively Coupled Plasma-Mass Spectrometry (ID ICP-MS applied as a primary method of measurement..The Hg and Methyl Hg determination will be detailed more specifically because of the problems encountered with this element, including sample homogeneity issues, memory effects and possible matrix effects during the ICP- MS measurement stage. Reference values, traceable to the SI, with total uncertainties of less than 2% relative expanded uncertainty (k=2 were obtained for Cd, Cu, Zn and Pb and around 5% for Hg and CH3Hg.
2. Development of real-time PCR method for the detection and the quantification of a new endogenous reference gene in sugar beet "Beta vulgaris L.": GMO application.
Science.gov (United States)
Chaouachi, Maher; Alaya, Akram; Ali, Imen Ben Haj; Hafsa, Ahmed Ben; Nabi, Nesrine; Bérard, Aurélie; Romaniuk, Marcel; Skhiri, Fethia; Saïd, Khaled
2013-01-01
KEY MESSAGE : Here, we describe a new developed quantitative real-time PCR method for the detection and quantification of a new specific endogenous reference gene used in GMO analysis. The key requirement of this study was the identification of a new reference gene used for the differentiation of the four genomic sections of the sugar beet (Beta vulgaris L.) (Beta, Corrollinae, Nanae and Procumbentes) suitable for quantification of genetically modified sugar beet. A specific qualitative polymerase chain reaction (PCR) assay was designed to detect the sugar beet amplifying a region of the adenylate transporter (ant) gene only from the species of the genomic section I of the genus Beta (cultivated and wild relatives) and showing negative PCR results for 7 species of the 3 other sections, 8 related species and 20 non-sugar beet plants. The sensitivity of the assay was 15 haploid genome copies (HGC). A quantitative real-time polymerase chain reaction (QRT-PCR) assay was also performed, having high linearity (R (2) > 0.994) over sugar beet standard concentrations ranging from 20,000 to 10 HGC of the sugar beet DNA per PCR. The QRT-PCR assay described in this study was specific and more sensitive for sugar beet quantification compared to the validated test previously reported in the European Reference Laboratory. This assay is suitable for GMO quantification in routine analysis from a wide variety of matrices.
3. Development of real-time PCR method for the detection and the quantification of a new endogenous reference gene in sugar beet "Beta vulgaris L.": GMO application.
Science.gov (United States)
Chaouachi, Maher; Alaya, Akram; Ali, Imen Ben Haj; Hafsa, Ahmed Ben; Nabi, Nesrine; Bérard, Aurélie; Romaniuk, Marcel; Skhiri, Fethia; Saïd, Khaled
2013-01-01
KEY MESSAGE : Here, we describe a new developed quantitative real-time PCR method for the detection and quantification of a new specific endogenous reference gene used in GMO analysis. The key requirement of this study was the identification of a new reference gene used for the differentiation of the four genomic sections of the sugar beet (Beta vulgaris L.) (Beta, Corrollinae, Nanae and Procumbentes) suitable for quantification of genetically modified sugar beet. A specific qualitative polymerase chain reaction (PCR) assay was designed to detect the sugar beet amplifying a region of the adenylate transporter (ant) gene only from the species of the genomic section I of the genus Beta (cultivated and wild relatives) and showing negative PCR results for 7 species of the 3 other sections, 8 related species and 20 non-sugar beet plants. The sensitivity of the assay was 15 haploid genome copies (HGC). A quantitative real-time polymerase chain reaction (QRT-PCR) assay was also performed, having high linearity (R (2) > 0.994) over sugar beet standard concentrations ranging from 20,000 to 10 HGC of the sugar beet DNA per PCR. The QRT-PCR assay described in this study was specific and more sensitive for sugar beet quantification compared to the validated test previously reported in the European Reference Laboratory. This assay is suitable for GMO quantification in routine analysis from a wide variety of matrices. PMID:23052591
4. Development of an orange juice in-house reference material and its application to guarantee the quality of vitamin C determination in fruits, juices and fruit pulps.
Science.gov (United States)
Valente, A; Sanches-Silva, A; Albuquerque, T G; Costa, H S
2014-07-01
Reference materials are useful for the quality control of analytical procedures and to evaluate the performance of laboratories. There are few and expensive certified reference materials commercially available for vitamin C or ascorbic acid analysis in food matrices. In this study, the preparation and the suitability assessment of an orange juice in-house reference material (RM) for vitamin C analysis in fruits, juices and in fruit pulps is described. This RM was used for the development and full validation of an HPLC method. The results showed excellent linearity (r(2)=0.9995), good accuracy (96.6-97.3%) and precision, as relative standard deviation, ranged from 0.70% to 3.67%. The in-house RM was homogenous and stable at storage conditions (-80°C) during 12 months. According to our results, this in-house RM is an excellent tool to use in quality control and method verification purposes for vitamin C analysis of fruits, juices and fruit pulps matrices. Furthermore, a stabilization solution with perchloric and metaphosphoric acids was developed which prevents degradation of ascorbic acid for a period of 12 months at -80°C. PMID:24518317
5. Empirical Bayes accomodation of batch-effects in microarray data using identical replicate reference samples: application to RNA expression profiling of blood from Duchenne muscular dystrophy patients
Directory of Open Access Journals (Sweden)
McCulloch Charles E
2008-10-01
Full Text Available Abstract Background Non-biological experimental error routinely occurs in microarray data collected in different batches. It is often impossible to compare groups of samples from independent experiments because batch effects confound true gene expression differences. Existing methods can correct for batch effects only when samples from all biological groups are represented in every batch. Results In this report we describe a generalized empirical Bayes approach to correct for cross-experimental batch effects, allowing direct comparisons of gene expression between biological groups from independent experiments. The proposed experimental design uses identical reference samples in each batch in every experiment. These reference samples are from the same tissue as the experimental samples. This design with tissue matched reference samples allows a gene-by-gene correction to be performed using fewer arrays than currently available methods. We examine the effects of non-biological variation within a single experiment and between experiments. Conclusion Batch correction has a significant impact on which genes are identified as differentially regulated. Using this method, gene expression in the blood of patients with Duchenne Muscular Dystrophy is shown to differ for hundreds of genes when compared to controls. The numbers of specific genes differ depending upon whether between experiment and/or between batch corrections are performed.
6. Development of an orange juice in-house reference material and its application to guarantee the quality of vitamin C determination in fruits, juices and fruit pulps.
Science.gov (United States)
Valente, A; Sanches-Silva, A; Albuquerque, T G; Costa, H S
2014-07-01
Reference materials are useful for the quality control of analytical procedures and to evaluate the performance of laboratories. There are few and expensive certified reference materials commercially available for vitamin C or ascorbic acid analysis in food matrices. In this study, the preparation and the suitability assessment of an orange juice in-house reference material (RM) for vitamin C analysis in fruits, juices and in fruit pulps is described. This RM was used for the development and full validation of an HPLC method. The results showed excellent linearity (r(2)=0.9995), good accuracy (96.6-97.3%) and precision, as relative standard deviation, ranged from 0.70% to 3.67%. The in-house RM was homogenous and stable at storage conditions (-80°C) during 12 months. According to our results, this in-house RM is an excellent tool to use in quality control and method verification purposes for vitamin C analysis of fruits, juices and fruit pulps matrices. Furthermore, a stabilization solution with perchloric and metaphosphoric acids was developed which prevents degradation of ascorbic acid for a period of 12 months at -80°C.
7. Areva reference document 2007
International Nuclear Information System (INIS)
This reference document contains information on the AREVA group's objectives, prospects and development strategies, particularly in Chapters 4 and 7. It contains also information on the markets, market shares and competitive position of the AREVA group. Content: 1 - Person responsible for the reference document and persons responsible for auditing the financial statements; 2 - Information pertaining to the transaction (not applicable); 3 - General information on the company and its share capital: Information on Areva, Information on share capital and voting rights, Investment certificate trading, Dividends, Organization chart of AREVA group companies, Equity interests, Shareholders' agreements; 4 - Information on company operations, new developments and future prospects: Overview and strategy of the AREVA group, The Nuclear Power and Transmission and Distribution markets, The energy businesses of the AREVA group, Front End division, Reactors and Services division, Back End division, Transmission and Distribution division, Major contracts 140 Principal sites of the AREVA group, AREVA's customers and suppliers, Sustainable Development and Continuous Improvement, Capital spending programs, Research and Development programs, Intellectual Property and Trademarks, Risk and insurance; 5 - Assets financial position financial performance: Analysis of and comments on the group's financial position and performance, Human Resources report, Environmental report, Consolidated financial statements 2007, Notes to the consolidated financial statements, Annual financial statements 2007, Notes to the corporate financial statements; 6 - Corporate governance: Composition and functioning of corporate bodies, Executive compensation, Profit-sharing plans, AREVA Values Charter, Annual Ordinary General Meeting of Shareholders of April 17, 2008; 7 - Recent developments and future prospects: Events subsequent to year-end closing for 2007, Outlook; Glossary; table of concordance
8. Areva, reference document 2006
International Nuclear Information System (INIS)
This reference document contains information on the AREVA group's objectives, prospects and development strategies, particularly in Chapters 4 and 7. It contains information on the markets, market shares and competitive position of the AREVA group. Content: - 1 Person responsible for the reference document and persons responsible for auditing the financial statements; - 2 Information pertaining to the transaction (Not applicable); - 3 General information on the company and its share capital: Information on AREVA, on share capital and voting rights, Investment certificate trading, Dividends, Organization chart of AREVA group companies, Equity interests, Shareholders' agreements; - 4 Information on company operations, new developments and future prospects: Overview and strategy of the AREVA group, The Nuclear Power and Transmission and Distribution markets, The energy businesses of the AREVA group, Front End division, Reactors and Services division, Back End division, Transmission and Distribution division, Major contracts, The principal sites of the AREVA group, AREVA's customers and suppliers, Sustainable Development and Continuous Improvement, Capital spending programs, Research and development programs, intellectual property and trademarks, Risk and insurance; - 5 Assets - Financial position - Financial performance: Analysis of and comments on the group's financial position and performance, 2006 Human Resources Report, Environmental Report, Consolidated financial statements, Notes to the consolidated financial statements, AREVA SA financial statements, Notes to the corporate financial statements; 6 - Corporate Governance: Composition and functioning of corporate bodies, Executive compensation, Profit-sharing plans, AREVA Values Charter, Annual Combined General Meeting of Shareholders of May 3, 2007; 7 - Recent developments and future prospects: Events subsequent to year-end closing for 2006, Outlook; 8 - Glossary; 9 - Table of concordance
9. CMS Statistics Reference Booklet
Data.gov (United States)
U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...
10. VBE reference framework
NARCIS (Netherlands)
H. Afsarmanesh; L.M. Camarinha-Matos; E. Ermilova
2008-01-01
Defining a comprehensive and generic "reference framework" for Virtual organizations Breeding Environments (VBEs), addressing all their features and characteristics, is challenging. While the definition and modeling of VBEs has become more formalized during the last five years, "reference models" fo
11. Genetics Home Reference
Science.gov (United States)
Skip Navigation Bar Home Current Issue Past Issues Genetics Home Reference Past Issues / Spring 2007 Table of ... of this page please turn Javascript on. The Genetics Home Reference (GHR) Web site — ghr.nlm.nih. ...
12. Electrical engineering a pocket reference
CERN Document Server
Schmidt-Walter, Heinz
2007-01-01
This essential reference offers you a well-organized resource for accessing the basic electrical engineering knowledge you need for your work. Whether you're an experienced engineer who appreciates an occasional refresher in key areas, or a student preparing to enter the field, Electrical Engineering: A Pocket Reference provides quick and easy access to fundamental principles and their applications. You also find an extensive collection of time-saving equations that help simplify your daily projects.Supported with more than 500 diagrams and figures, 60 tables, and an extensive index, this uniq
13. Reference frames and refbits
CERN Document Server
Van Enk, S J
2004-01-01
We define a new quantity called refbit, which allows one to quantify the resource of sharing a reference frame in quantum communication protocols. By considering various protocols we find relations between refbits and other resources such as cbits, ebits, cobits, and qubits. We also consider the same resources in encoded, reference-frame independent, form. This allows one to rephrase and unify previous work on phase references, reference frames, and superselection rules.
14. A reference-modified density functional theory: An application to solvation free-energy calculations for a Lennard-Jones solution
Science.gov (United States)
Sumi, Tomonari; Maruyama, Yutaka; Mitsutake, Ayori; Koga, Kenichiro
2016-06-01
In the conventional classical density functional theory (DFT) for simple fluids, an ideal gas is usually chosen as the reference system because there is a one-to-one correspondence between the external field and the density distribution function, and the exact intrinsic free-energy functional is available for the ideal gas. In this case, the second-order density functional Taylor series expansion of the excess intrinsic free-energy functional provides the hypernetted-chain (HNC) approximation. Recently, it has been shown that the HNC approximation significantly overestimates the solvation free energy (SFE) for an infinitely dilute Lennard-Jones (LJ) solution, especially when the solute particles are several times larger than the solvent particles [T. Miyata and J. Thapa, Chem. Phys. Lett. 604, 122 (2014)]. In the present study, we propose a reference-modified density functional theory as a systematic approach to improve the SFE functional as well as the pair distribution functions. The second-order density functional Taylor series expansion for the excess part of the intrinsic free-energy functional in which a hard-sphere fluid is introduced as the reference system instead of an ideal gas is applied to the LJ pure and infinitely dilute solution systems and is proved to remarkably improve the drawbacks of the HNC approximation. Furthermore, the third-order density functional expansion approximation in which a factorization approximation is applied to the triplet direct correlation function is examined for the LJ systems. We also show that the third-order contribution can yield further refinements for both the pair distribution function and the excess chemical potential for the pure LJ liquids.
15. Study on the Preparation and Application of Cannabis Fructus TLC Reference Extract%火麻仁薄层色谱鉴别用对照提取物的研制与应用
Institute of Scientific and Technical Information of China (English)
邓仕任; 蔡明宸; 夏林波; 王鑫; 朱夏敏
2016-01-01
Cannabis fructus TLC reference extract was prepared and its application in the quality control of Cannabis fructus and preparations as a reference drug was explored.Cannabis fructus TLC reference extract was prepared by opting processing route and investigating stability influencing factors.The application to TLC identification items of Cannabis fructus was performed according to the Chinese Pharmacopoeia 2010 edition.Results showed that Cannabis fructus TLC reference extract can play the same role as Cannabis fructus reference drug with the same quantity and quality.Cannabis fructus TLC reference extract for TLC identification can be used for quality control analysis of Cannabis fructus and preparations.The research provided a foundation for further development of TCM reference extracts.%研制被薄层色谱鉴别用火麻仁对照提取物,以替代对照药材用于火麻仁的薄层鉴别。从制备工艺考察、提取物稳定性影响因素考察以及提取物性状等方面制订薄层色谱鉴别用火麻仁对照提取物的制备和保存方法,并比较对照药材与对照提取物在薄层鉴别中的应用结果。结果表明:制备方法为:火麻仁药材粉末(过二号筛)加25倍乙醚脱脂后,药渣加10倍甲醇加热回流提取1 h,滤过,减压蒸去甲醇,所得浸膏用10倍甲醇复溶,加入与浸膏等重量的柱层析硅胶(200~300目),混匀,减压蒸干甲醇,过九号筛即得。该对照提取物与火麻仁药材的薄层鉴别效果一致。本研究制备的火麻仁薄层鉴别用对照提取物稳定可靠,可以替代火麻仁对照药材用于火麻仁及其成方制剂的质量控制分析。
16. Ecological Evaluation Index continuous formula (EEI-c application: a step forward for functional groups, the formula and reference condition values
Directory of Open Access Journals (Sweden)
S. ORFANIDIS
2012-12-01
Full Text Available The Ecological Evaluation Index continuous formula (EEI-c was designed to estimate the habitat- based ecological status of rocky coastal and sedimentary transitional waters using shallow benthic macrophyte communities as bioindicators. This study aimed to remedy the weaknesses of the currently used EEI methodology in: (1 ecological status groups (ESG, (2 the formula, and (3 reference condition values. A cluster analysis of twelve species traits was used to delineate ESGs. Two main clusters (ESG I, late-successional; ESG II, opportunistic were identified that were hierarchically divided into three and two sub-clusters, respectively: ESG I comprised thick perennial (IA, thick plastic (IB and shade-adapted plastic (IC coastal water species, and angiosperm plastic (IA, thick plastic (IB and shade-adapted plastic (IC transitional water species. ESG II comprised fleshy opportunistic (IIB and filamentous sheet-like opportunistic (IIA species both in coastal and transitional waters. To avoid discrete jumps at the boundaries between predefined ecological categories, a hyperbolic model that approximates the index values and expresses the ecosystem status in continuous numbers was developed. Seventy-four quantitative and destructive samples of the upper infralittoralCystoseira crinita and coastal lagoonRuppia cirrhosa communities from tentative pristine to less impacted sites in Greece verified 10 as an ‘ideal’ EEI-c reference condition value.
17. CONSTRUCTION OF THE REFERENCE DATABASE IN GBDB AND ITS APPLICATION%GBDB的在线文献数据库及其应用
Institute of Scientific and Technical Information of China (English)
樊隽轩; 侯旭东; 陈清; 张琳娜; 陈中阳
2012-01-01
The Geobiodiversity Database (GBDB, ht-tp://www. geobiodiversity. com) is a unique, section-based online database system for palaeontolo-gical and stratigraphic research. Various kinds of data sources can be integrated into it, including geographic, lithostratigraphic, biostratigraphic, chronostratigraphic, taxonomic and related bibliographic reference data. Data sources are mostly taken from the published literature. The reference database is therefore an important data source in GBDB. Through the reference database users can track the source of the data, check and verify its quality, and even attach their own opinions, as an important comparison with the initiator' s thoughts. To achieve a comprehensive reference database is a long-term and time-consuming job. As more people become involved, the less time will be consumed per person and the more rapidly data quality will increase. After four years' development, the present reference database is well integrated in GBDB, and eight major reference types are recognized. Both DOI (Digital Object unique Identifier) and URL (Uniform / Universal Resource Locator) automatic conversion to full-text database are supported. Users can upload their Endnote files to the GBDB reference database through the database administrator (fanjunxuan@ gmail. com). A new export function to 12 common journals such as Palaeontology , Journal of Paleontology, Palaeo3, and Acta Palaeontologica Sinica , has recently become available online. By Feb. 15, 2012, 44 310 literature records related to palaeontology and stratigraphy had been compiled into the GBDB. Amongst them, 4 321 were based on material from China, which, according to our preliminary estimation, comprises about one third of the total literature on Chinese material. Simple counting of literature published in each decade since 1900 indicates a slow increase from 1910-1919 to 1950-1959, except for a slight decrease probably resulting from the 2nd World War. Then there was a rapid
18. Application of genotyping-by-sequencing on semiconductor sequencing platforms: A comparison of genetic and reference-based marker ordering in barley
Science.gov (United States)
The rapid development of next generation sequencing platforms has enabled the use of sequencing for routine genotyping across a range of genetics studies and breeding applications. Genotyping-by-sequencing (GBS), a low-cost, reduced representation sequencing method, is becoming a common approach fo...
19. [Application and case analysis on the problem-based teaching of Jingluo Shuxue Xue (Science of Meridian and Acupoint) in reference to the team oriented learning method].
Science.gov (United States)
Ma, Ruijie; Lin, Xianming
2015-12-01
The problem based teaching (PBT) has been the main approach to the training in the universities o the world. Combined with the team oriented learning method, PBT will become the method available to the education in medical universities. In the paper, based on the common questions in teaching Jingluo Shuxue Xue (Science of Meridian and Acupoint), the concepts and characters of PBT and the team oriented learning method were analyzed. The implementation steps of PBT were set up in reference to the team oriented learning method. By quoting the original text in Beiji Qianjin Yaofang (Essential recipes for emergent use worth a thousand gold), the case analysis on "the thirteen devil points" was established with PBT.
20. Dynamic modeling of breast tissue with application of model reference adaptive system identification technique based on clinical robot-assisted palpation.
Science.gov (United States)
Keshavarz, M; Mojra, A
2015-11-01
Accurate identification of breast tissue's dynamic behavior in physical examination is critical to successful diagnosis and treatment. In this study a model reference adaptive system identification (MRAS) algorithm is utilized to estimate the dynamic behavior of breast tissue from mechanical stress-strain datasets. A robot-assisted device (Robo-Tac-BMI) is going to mimic physical palpation on a 45 year old woman having a benign mass in the left breast. Stress-strain datasets will be collected over 14 regions of both breasts in a specific period of time. Then, a 2nd order linear model is adapted to the experimental datasets. It was confirmed that a unique dynamic model with maximum error about 0.89% is descriptive of the breast tissue behavior meanwhile mass detection may be achieved by 56.1% difference from the normal tissue.
1. Visual Basic 2012 programmer's reference
CERN Document Server
Stephens, Rod
2012-01-01
The comprehensive guide to Visual Basic 2012 Microsoft Visual Basic (VB) is the most popular programming language in the world, with millions of lines of code used in businesses and applications of all types and sizes. In this edition of the bestselling Wrox guide, Visual Basic expert Rod Stephens offers novice and experienced developers a comprehensive tutorial and reference to Visual Basic 2012. This latest edition introduces major changes to the Visual Studio development platform, including support for developing mobile applications that can take advantage of the Windows 8 operating system
2. Dynamic HTML The Definitive Reference
CERN Document Server
Goodman, Danny
2007-01-01
Packed with information on the latest web specifications and browser features, this new edition is your ultimate one-stop resource for HTML, XHTML, CSS, Document Object Model (DOM), and JavaScript development. Here is the comprehensive reference for designers of Rich Internet Applications who need to operate in all modern browsers, including Internet Explorer 7, Firefox 2, Safari, and Opera. With this book, you can instantly see browser support for the latest standards-based technologies, including CSS Level 3, DOM Level 3, Web Forms 2.0, XMLHttpRequest for AJAX applications, JavaScript 1.7
3. JavaScript Pocket Reference
CERN Document Server
Flanagan, David
1998-01-01
JavaScript is a powerful, object-based scripting language that can be embedded directly in HTML pages. It allows you to create dynamic, interactive Web-based applications that run completely within a Web browser -- JavaScript is the language of choice for developing Dynamic HTML (DHTML) content. JavaScript can be integrated effectively with CGI and Java to produce sophisticated Web applications, although, in many cases, JavaScript eliminates the need for complex CGI scripts and Java applets altogether. The JavaScript Pocket Reference is a companion volume to JavaScript: The Definitive Guide
4. The first customer reference
OpenAIRE
Ruokolainen, Jari
2008-01-01
Marketing and sales have generally been recognized as typical bottlenecks for start-up technology companies which produce complex products for corporate customers. Start-up technology companies often need a customer reference to support their efforts in entering the market. Without a real-world assessment that the first customer reference represents, it is difficult to convince the next potential customer to buy. The first customer reference, the topic of this study, has not been widely ...
5. Android quick APIs reference
CERN Document Server
Cinar, Onur
2015-01-01
The Android Quick APIs Reference is a condensed code and APIs reference for the new Google Android 5.0 SDK. It presents the essential Android APIs in a well-organized format that can be used as a handy reference. You won't find any technical jargon, bloated samples, drawn out history lessons, or witty stories in this book. What you will find is a software development kit and APIs reference that is concise, to the point and highly accessible. The book is packed with useful information and is a must-have for any mobile or Android app developer or programmer. In the Android Quick APIs Refe
6. Application of OSI Reference Model in New Rural Cooperative Medical Care%OSI参考模型在新型农村合作医疗中的应用
Institute of Scientific and Technical Information of China (English)
高亚玲
2012-01-01
The article presents the concept of new rural cooperative medical care, introduces OSI reference model, and elaborates the advantages and disadvantages of OSI in application of the new rural cooperative medical care.%本文在介绍新型农村合作医疗概念的基础上,引入了OSI参考模型,同时阐述了OSI参考模型在合作医疗中的优缺点,以及OSI在新型农村合作医疗中的应用.
7. Application of the ICRP/ICRU reference computational phantoms to internal dosimetry: calculation of specific absorbed fractions of energy for photons and electrons
Energy Technology Data Exchange (ETDEWEB)
Hadid, L; Desbree, A; Franck, D; Blanchardon, E [IRSN, Institute for Radiological Protection and Nuclear Safety, Internal Dosimetry Department, IRSN/DRPH/SDI, BP 17, F-92262 Fontenay-aux-Roses Cedex (France); Schlattl, H; Zankl, M, E-mail: [email protected] [Institute of Radiation Protection, Helmholtz Zentrum Muenchen-German Research Center for Environmental Health, Neuherberg (Germany)
2010-07-07
The emission of radiation from a contaminated body region is connected with the dose received by radiosensitive tissue through the specific absorbed fractions (SAFs) of emitted energy, which is therefore an essential quantity for internal dose assessment. A set of SAFs were calculated using the new adult reference computational phantoms, released by the International Commission on Radiological Protection (ICRP) together with the International Commission on Radiation Units and Measurements (ICRU). Part of these results has been recently published in ICRP Publication 110 (2009 Adult reference computational phantoms (Oxford: Elsevier)). In this paper, we mainly discuss the results and also present them in numeric form. The emission of monoenergetic photons and electrons with energies ranging from 10 keV to 10 MeV was simulated for three source organs: lungs, thyroid and liver. SAFs were calculated for four target regions in the body: lungs, colon wall, breasts and stomach wall. For quality assurance purposes, the simulations were performed simultaneously at the Helmholtz Zentrum Muenchen (HMGU, Germany) and at the Institute for Radiological Protection and Nuclear Safety (IRSN, France), using the Monte Carlo transport codes EGSnrc and MCNPX, respectively. The comparison of results shows overall agreement for photons and high-energy electrons with differences lower than 8%. Nevertheless, significant differences were found for electrons at lower energy for distant source/target organ pairs. Finally, the results for photons were compared to the SAF values derived using mathematical phantoms. Significant variations that can amount to 200% were found. The main reason for these differences is the change of geometry in the more realistic voxel body models. For electrons, no SAFs have been computed with the mathematical phantoms; instead, approximate formulae have been used by both the Medical Internal Radiation Dose committee (MIRD) and the ICRP due to the limitations imposed
8. Application of the ICRP/ICRU reference computational phantoms to internal dosimetry: calculation of specific absorbed fractions of energy for photons and electrons
Science.gov (United States)
Hadid, L.; Desbrée, A.; Schlattl, H.; Franck, D.; Blanchardon, E.; Zankl, M.
2010-07-01
The emission of radiation from a contaminated body region is connected with the dose received by radiosensitive tissue through the specific absorbed fractions (SAFs) of emitted energy, which is therefore an essential quantity for internal dose assessment. A set of SAFs were calculated using the new adult reference computational phantoms, released by the International Commission on Radiological Protection (ICRP) together with the International Commission on Radiation Units and Measurements (ICRU). Part of these results has been recently published in ICRP Publication 110 (2009 Adult reference computational phantoms (Oxford: Elsevier)). In this paper, we mainly discuss the results and also present them in numeric form. The emission of monoenergetic photons and electrons with energies ranging from 10 keV to 10 MeV was simulated for three source organs: lungs, thyroid and liver. SAFs were calculated for four target regions in the body: lungs, colon wall, breasts and stomach wall. For quality assurance purposes, the simulations were performed simultaneously at the Helmholtz Zentrum München (HMGU, Germany) and at the Institute for Radiological Protection and Nuclear Safety (IRSN, France), using the Monte Carlo transport codes EGSnrc and MCNPX, respectively. The comparison of results shows overall agreement for photons and high-energy electrons with differences lower than 8%. Nevertheless, significant differences were found for electrons at lower energy for distant source/target organ pairs. Finally, the results for photons were compared to the SAF values derived using mathematical phantoms. Significant variations that can amount to 200% were found. The main reason for these differences is the change of geometry in the more realistic voxel body models. For electrons, no SAFs have been computed with the mathematical phantoms; instead, approximate formulae have been used by both the Medical Internal Radiation Dose committee (MIRD) and the ICRP due to the limitations imposed
9. 32 CFR 516.2 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 516.2 Section 516.2 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION General § 516.2 References. Applicable publications and forms are listed in appendix A to...
10. Application of USCRN Station Density Strategy to China Climate Reference Network%USCRN气候基准站网布局理论在我国的应用
Institute of Scientific and Technical Information of China (English)
胡婷; 周江兴; 代刊
2012-01-01
基于美国气候基准站网(USCRN,US Climate Reference Network)的均匀正三角网格布局模式,以站网对我国平均降水量和气温的方差解释量分别能够达到95%和98%为标准,利用我国2416个观测站的降水量和气温数据研究了我国气候观测站网的布局设计,得到能反映我国整体气候特征的站网最少台站数.结果表明:为显著减少全国尺度上的气候不确定性,需要新建边长为3°纬距的正三角均匀网格(103个站点)站网,并在全国范围内开展站址考察、总结评估和运行检验工作,才能得到新的气候基准站网;在已有气候观测系统基础上改进完善而不新建系统,则改进的气候基准站网为最接近边长为2°纬距的正三角网格分布(229个站点),其中199个预期位置或其附近已建有观测站点,没有对应实际站点的30个预期位置主要分布在青藏高原西南大部,这些地区将是未来建站的重点.%The US Climate Reference Network (USCRN) consists of 114 stations developed, deployed, managed, and maintained by the National Oceanic and Atmospheric Administration (NOAA) in the continental United States for the express purpose of detecting the national signal of climate change, focusing solely on precipitation and temperature. The vision of the USCRN program is to reduce uncertainty and error range envelopes in producing the most precise in situ precipitation and temperature records possible, and to do it with the fewest possible stations located in areas of minimal human disturbance and with the least likelihood of human development over the coming 50-100 years. And the key goal of USCRN is to reduce cli-mate uncertainty at the national level to a statistically insignificant level. That is, for precipitation climate uncertainty should be reduced by 95% and for temperature climate uncertainty at the national level should be reduced by 98%.China is in great need of a sustainable high-quality and long
11. Application of the optimized decoupling methodology for the construction of a skeletal primary reference fuel (PRF mechanism focusing on engine-relevant conditions
Directory of Open Access Journals (Sweden)
Yachao eChang
2015-09-01
Full Text Available For the multi-dimensional simulation of the engines with advanced compression-ignition combustion strategies, a practical and robust chemical kinetic mechanism is highly demanded. Decoupling methodology is effective for the construction of skeletal mechanisms for long-chain alkanes. To improve the performance of the decoupling methodology, further improvements are introduced based on recent theoretical and experimental works. The improvements include: (1 updating the H2/O2 sub-mechanism; (2 refining the rate constants in the HCO/CH3/CH2O sub-mechanism; (3 building a new reduced C2 sub-mechanism; and (4 improving the large-molecule sub-mechanism. With the improved decoupling methodology, a skeletal primary reference fuel (PRF mechanism is developed. The mechanism is validated against the experimental data in shock tubes, jet-stirred reactors, premixed and counterflow flames for various PRF fuels covering the temperature range of 500–1450 K, the pressure range of 1–55 atm, and the equivalence ratio range of 0.25¬–1.0. Finally, the skeletal mechanism is coupled with a multi-dimensional computational fluid dynamics model to simulate the combustion and emission characteristics of homogeneous charge compression ignition (HCCI engines fueled with iso-octane and PRF. Overall, the agreements between the experiment and prediction are satisfactory.
12. Submergence Vulnerability Index development and application to Coastwide Reference Monitoring System Sites and Coastal Wetlands Planning, Protection and Restoration Act projects
Science.gov (United States)
Stagg, Camille L.; Sharp, Leigh Anne; McGinnis, Thomas E.; Snedden, Gregg A.
2013-01-01
Since its implementation in 2003, the Coastwide Reference Monitoring System (CRMS) in Louisiana has facilitated the creation of a comprehensive dataset that includes, but is not limited to, vegetation, hydrologic, and soil metrics on a coastwide scale. The primary impetus for this data collection is to assess land management activities, including restoration efforts, across the coast. The aim of the CRMS analytical team is to provide a method to synthesize this data to enable multiscaled evaluations of activities in Louisiana’s coastal wetlands. Several indices have been developed to facilitate data synthesis and interpretation, including a Floristic Quality Index, a Hydrologic Index, and a Landscape Index. This document details the development of the Submergence Vulnerability Index, which incorporates sediment-elevation data as well as hydrologic data to determine the vulnerability of a wetland based on its ability to keep pace with sea-level rise. The objective of this document is to provide Federal and State sponsors, project managers, planners, landowners, data users, and the rest of the coastal restoration community with the following: (1) data collection and model development methods for the sediment-elevation response variables, and (2) a description of how these response variables will be used to evaluate CWPPRA project and program effectiveness.
13. Evaluation of the Penman-Monteith (FAO 56 PM Method for Calculating Reference Evapotranspiration Using Limited Data: Application to the Wet Páramo of Southern Ecuador
Directory of Open Access Journals (Sweden)
Mario Córdova
2015-08-01
Full Text Available Reference evapotranspiration (ETo is often calculated using the Penman-Monteith (FAO 56 PM; Allen et al 1998 method, which requires data on temperature, relative humidity, wind speed, and solar radiation. But in high-mountain environments, such as the Andean páramo, meteorological monitoring is limited and high-quality data are scarce. Therefore, the FAO 56 PM equation can be applied only through the use of an alternative method suggested by the same authors that substitutes estimates for missing data. This study evaluated whether the FAO 56 PM method for estimating missing data can be effectively used for páramo landscapes in the high Andes of southern Ecuador. Our investigation was based on data from 2 automatic weather stations at elevations of 3780 m and 3979 m. We found that using estimated wind speed data has no major effect on calculated ETo but that if solar radiation data are estimated, ETo calculations may be erroneous by as much as 24%; if relative humidity data are estimated, the error may be as high as 14%; and if all data except temperature are estimated, errors higher than 30% may result. Our study demonstrates the importance of using high-quality meteorological data for calculating ETo in the wet páramo landscapes of southern Ecuador.
14. Application of radcal gamma thermometer assemblies for core coolant monitoring in ASEA ATOM reactors with particular reference to the Barsebaeck plants
International Nuclear Information System (INIS)
In this study reference designs for instrument assemblies containing RGT rods to monitor the core coolant conditions in the Barsebaeck reactors have been worked out. Four such strings would be required to satisfy the Reg. Guide 1.97 reqiurements. The signal transmission to the control room and the presentation of information to the operators have been addressed. Downcomer water level measurement is considered important in order to get an early warning about leakages. Possible ways of diversifying the existing measurement method using RGTs are mentioned, and the design of a downcomer RGT rod has been suggested. To fully comply with Reg. Guide 1.97, water level measurements above core would be required. In a conceptual way it has been shown how an RGT rod could be extended up into this region, if so required. The possibility of making an ideal core coolant monitoring system by replacing one of the structural rods (water rods) in the fuel bundle by an RGT rod is pointed out. There are foreseen, however, several practical obstacles in pursuing the idea. The present state of RGT development and further work required to get the intrument licensed as a coolant monitoring device, has been defined. (Author)
15. Organomercury determination in biological reference materials: application to a study on mercury speciation in marine mammals off the Faröe Islands.
Science.gov (United States)
Schintu, M; Jean-Caurant, F; Amiard, J C
1992-08-01
The potential use of graphite furnace atomic absorption spectrometry (GF-AAS) for the organic mercury determination in marine biological tissues was evaluated. Following its isolation by acid extraction in toluene, organic mercury was recovered in aqueous thiosulfate and measured by GF-AAS. The detection limit was 0.01 microgram Hg/g (as methyl mercury). Analyses were conducted on three reference standard materials certified for their methyl mercury content, DOLT-1, DORM-1, and TORT-1, provided by the National Research Council of Canada. The method resulted in very good recovery and reproducibility, indicating that GF-AAS can provide results comparable to those obtained by using more expensive and time consuming analytical techniques. The method was applied to the analysis of liver tissues of pilot whale specimens (Globicephala melas) from the drive fishery of the Faröe Islands (northeast Atlantic). The results provided useful information on the proportion of different mercury forms in the liver of these marine mammals.
16. Library Reference Service.
Science.gov (United States)
Schippleck, Suzanne
The Inglewood, California, public library provides a manual on reference service. The theory, purpose, and objectives of reference are noted, and goals and activities are described in terms of budget, personnel, resources, and services. A chapter on organization covers service structure, information services, relationships with other library…
17. China Connections Reference Book.
Science.gov (United States)
Kalat, Marie B.; Hoermann, Elizabeth F.
This reference book focuses on six aspects of the geography of the People's Republic of China. They are: territory, governing units, population and land use, waterways, land forms, and climates. Designed as a primary reference, the book explains how the Chinese people and their lifestyles are affected by China's geography. Special components…
18. Marketing Reference Services.
Science.gov (United States)
Norman, O. Gene
1995-01-01
Relates the marketing concept to library reference services. Highlights include a review of the literature and an overview of marketing, including research, the marketing mix, strategic plan, marketing plan, and marketing audit. Marketing principles are applied to reference services through the marketing mix elements of product, price, place, and…
19. Radar rainfall estimation for the post-event analysis of a Slovenian flash-flood case: application of the mountain reference technique at C-band frequency
Directory of Open Access Journals (Sweden)
L. Bouilloud
2009-01-01
20. Revisions of the Fish Invasiveness Screening Kit (FISK) for its application in warmer climatic zones, with particular reference to peninsular Florida.
Science.gov (United States)
Lawson, Larry L; Hill, Jeffrey E; Vilizzi, Lorenzo; Hardin, Scott; Copp, Gordon H
2013-08-01
The initial version (v1) of the Fish Invasiveness Scoring Kit (FISK) was adapted from the Weed Risk Assessment of Pheloung, Williams, and Halloy to assess the potential invasiveness of nonnative freshwater fishes in the United Kingdom. Published applications of FISK v1 have been primarily in temperate-zone countries (Belgium, Belarus, and Japan), so the specificity of this screening tool to that climatic zone was not noted until attempts were made to apply it in peninsular Florida. To remedy this shortcoming, the questions and guidance notes of FISK v1 were reviewed and revised to improve clarity and extend its applicability to broader climatic regions, resulting in changes to 36 of the 49 questions. In addition, upgrades were made to the software architecture of FISK to improve overall computational speed as well as graphical user interface flexibility and friendliness. We demonstrate the process of screening a fish species using FISK v2 in a realistic management scenario by assessing the Barcoo grunter Scortum barcoo (Terapontidae), a species whose management concerns are related to its potential use for aquaponics in Florida. The FISK v2 screening of Barcoo grunter placed the species into the lower range of medium risk (score = 5), suggesting it is a permissible species for use in Florida under current nonnative species regulations. Screening of the Barcoo grunter illustrates the usefulness of FISK v2 as a proactive tool serving to inform risk management decisions, but the low level of confidence associated with the assessment highlighted a dearth of critical information on this species.
1. Revisions of the Fish Invasiveness Screening Kit (FISK) for its application in warmer climatic zones, with particular reference to peninsular Florida.
Science.gov (United States)
Lawson, Larry L; Hill, Jeffrey E; Vilizzi, Lorenzo; Hardin, Scott; Copp, Gordon H
2013-08-01
The initial version (v1) of the Fish Invasiveness Scoring Kit (FISK) was adapted from the Weed Risk Assessment of Pheloung, Williams, and Halloy to assess the potential invasiveness of nonnative freshwater fishes in the United Kingdom. Published applications of FISK v1 have been primarily in temperate-zone countries (Belgium, Belarus, and Japan), so the specificity of this screening tool to that climatic zone was not noted until attempts were made to apply it in peninsular Florida. To remedy this shortcoming, the questions and guidance notes of FISK v1 were reviewed and revised to improve clarity and extend its applicability to broader climatic regions, resulting in changes to 36 of the 49 questions. In addition, upgrades were made to the software architecture of FISK to improve overall computational speed as well as graphical user interface flexibility and friendliness. We demonstrate the process of screening a fish species using FISK v2 in a realistic management scenario by assessing the Barcoo grunter Scortum barcoo (Terapontidae), a species whose management concerns are related to its potential use for aquaponics in Florida. The FISK v2 screening of Barcoo grunter placed the species into the lower range of medium risk (score = 5), suggesting it is a permissible species for use in Florida under current nonnative species regulations. Screening of the Barcoo grunter illustrates the usefulness of FISK v2 as a proactive tool serving to inform risk management decisions, but the low level of confidence associated with the assessment highlighted a dearth of critical information on this species. PMID:23035930
2. Selection of a marker gene to construct a reference library for wetland plants, and the application of metabarcoding to analyze the diet of wintering herbivorous waterbirds.
Science.gov (United States)
Yang, Yuzhan; Zhan, Aibin; Cao, Lei; Meng, Fanjuan; Xu, Wenbin
2016-01-01
Food availability and diet selection are important factors influencing the abundance and distribution of wild waterbirds. In order to better understand changes in waterbird population, it is essential to figure out what they feed on. However, analyzing their diet could be difficult and inefficient using traditional methods such as microhistologic observation. Here, we addressed this gap of knowledge by investigating the diet of greater white-fronted goose Anser albifrons and bean goose Anser fabalis, which are obligate herbivores wintering in China, mostly in the Middle and Lower Yangtze River floodplain. First, we selected a suitable and high-resolution marker gene for wetland plants that these geese would consume during the wintering period. Eight candidate genes were included: rbcL, rpoC1, rpoB, matK, trnH-psbA, trnL (UAA), atpF-atpH, and psbK-psbI. The selection was performed via analysis of representative sequences from NCBI and comparison of amplification efficiency and resolution power of plant samples collected from the wintering area. The trnL gene was chosen at last with c/h primers, and a local plant reference library was constructed with this gene. Then, utilizing DNA metabarcoding, we discovered 15 food items in total from the feces of these birds. Of the 15 unique dietary sequences, 10 could be identified at specie level. As for greater white-fronted goose, 73% of sequences belonged to Poaceae spp., and 26% belonged to Carex spp. In contrast, almost all sequences of bean goose belonged to Carex spp. (99%). Using the same samples, microhistology provided consistent food composition with metabarcoding results for greater white-fronted goose, while 13% of Poaceae was recovered for bean goose. In addition, two other taxa were discovered only through microhistologic analysis. Although most of the identified taxa matched relatively well between the two methods, DNA metabarcoding gave taxonomically more detailed information. Discrepancies were likely due to
3. Collaborating on Referring Expressions
CERN Document Server
Heeman, P A; Heeman, Peter A.; Hirst, Graeme
1995-01-01
This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goal-directed behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using meta-actions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of g...
4. CSS Pocket Reference
CERN Document Server
Meyer, Eric
2011-01-01
When you're working with CSS and need a quick answer, CSS Pocket Reference delivers. This handy, concise book provides all of the essential information you need to implement CSS on the fly. Ideal for intermediate to advanced web designers and developers, the 4th edition is revised and updated for CSS3, the latest version of the Cascading Style Sheet specification. Along with a complete alphabetical reference to CSS3 selectors and properties, you'll also find a short introduction to the key concepts of CSS. Based on Cascading Style Sheets: The Definitive Guide, this reference is an easy-to-us
5. R quick syntax reference
CERN Document Server
Tollefson, Margot
2014-01-01
The R Quick Syntax Reference is a handy reference book detailing the intricacies of the R language. Not only is R a free, open-source tool, R is powerful, flexible, and has state of the art statistical techniques available. With the many details which must be correct when using any language, however, the R Quick Syntax Reference makes using R easier.Starting with the basic structure of R, the book takes you on a journey through the terminology used in R and the syntax required to make R work. You will find looking up the correct form for an expression quick and easy. With a copy of the R Quick
6. STL pocket reference
CERN Document Server
Lischner, Ray
2003-01-01
The STL Pocket Reference describes the functions, classes, and templates in that part of the C++ standard library often referred to as the Standard Template Library (STL). The STL encompasses containers, iterators, algorithms, and function objects, which collectively represent one of the most important and widely used subsets of standard library functionality. The C++ standard library, even the subset known as the STL, is vast. It's next to impossible to work with the STL without some sort of reference at your side to remind you of template parameters, function invocations, return types--ind
7. Biomedical Engineering Desk Reference
CERN Document Server
Ratner, Buddy D; Schoen, Frederick J; Lemons, Jack E; Dyro, Joseph; Martinsen, Orjan G; Kyle, Richard; Preim, Bernhard; Bartz, Dirk; Grimnes, Sverre; Vallero, Daniel; Semmlow, John; Murray, W Bosseau; Perez, Reinaldo; Bankman, Isaac; Dunn, Stanley; Ikada, Yoshito; Moghe, Prabhas V; Constantinides, Alkis
2009-01-01
A one-stop Desk Reference, for Biomedical Engineers involved in the ever expanding and very fast moving area; this is a book that will not gather dust on the shelf. It brings together the essential professional reference content from leading international contributors in the biomedical engineering field. Material covers a broad range of topics including: Biomechanics and Biomaterials; Tissue Engineering; and Biosignal Processing* A hard-working desk reference providing all the essential material needed by biomedical and clinical engineers on a day-to-day basis * Fundamentals, key techniques,
8. LINQ Pocket Reference
CERN Document Server
Albahari, Joseph
2008-01-01
9. First-person reference.
OpenAIRE
Taylor, J. E. V.
2007-01-01
It is argued that reference in first-person thought is distinct from reference in other thoughts about objects. This difference is located in the lack of acquaintance required for first-person thought. In order to be in the position to think about and refer to other objects, a subject must be acquainted with them. It is this acquaintance relation which enables him to think about a particular object. In contrast, a subject can think about himself without being acquainted with himself because h...
10. Review: Satellite-based remote sensing and geographic information systems and their application in the assessment of groundwater potential, with particular reference to India
Science.gov (United States)
Jasmin, Ismail; Mallikarjuna, P.
2011-06-01
Various hydrological, geological and geomorphological factors play a major role in the occurrence and movement of groundwater in different terrains. With advances in space technology and the advent of powerful personal computers, techniques for the assessment of groundwater potential have evolved, of which remote sensing (RS) and geographic information systems (GIS) are of great significance. The application of these methods is comprehensively reviewed with respect to the exploration and assessment of groundwater potential in consolidated and unconsolidated formations in semi-arid regions, and specifically in India. The process of such assessment includes the collection of remotely sensed data from suitable sensors and the selection of thematic maps on rainfall, geology, lithology, geomorphology, soil, land use/land cover, drainage patterns, slope and lineaments. The data are handled according to their significance with the assignment of appropriate weights and integrated into a sophisticated GIS environment. The requisite remote sensing and GIS data, in conjunction with necessary field investigations, help to identify the groundwater potential zones effectively.
11. The effect of 10 : 1 compression and soft copy interpretation on the chest radiographs of premature neonates with reference to their possible application in teleradiology
International Nuclear Information System (INIS)
The aim of the study was to assess the potential application of teleradiology in the neonatal intensive care unit (NICU) by ascertaining whether any decrease in conspicuity of anatomic detail or interventional devices in the chest radiographs of premature infants is caused by picture archiving and communication system (PACS)-based soft copy interpretation of 10 : 1 compressed images. One hundred digital chest radiographs of low-birthweight infants were obtained in the NICU using a storage phosphor system. Laser-printed images were interpreted and the data set for each radiograph was then irreversibly compressed by a 10 : 1 ratio. Four radiologists with extensive PACS experience used a five-point grading system to score laser-printed hard copy images for the visibility of six parameters of anatomic landmarks and interventional devices in the chest. Compressed soft copy images displayed on 2K PACS workstation were subsequently scored using the same approach. Statistical manipulation demonstrated no loss of anatomic detail in five of the six parameters scored, with minimal difference in one landmark, the retrocardiac lung assessment. While further study is required to assess the clinical impact of the variance noted when evaluating lung parameters, the preservation or improvement of information in the remaining parameters following irreversible compression and soft copy interpretation is promising for the potential use of teleradiology in this population. (orig.)
12. Reference Climatological Stations
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The Reference Climatological Stations (RCS) network represents the first effort by NOAA to create and maintain a nationwide network of stations located only in...
13. EPA QUICK REFERENCE GUIDES
Science.gov (United States)
EPA Quick Reference Guides are compilations of information on chemical and biological terrorist agents. The information is presented in consistent format and includes agent characteristics, release scenarios, health and safety data, real-time field detection, effect levels, samp...
14. Collaborative networks: Reference modeling
NARCIS (Netherlands)
L.M. Camarinha-Matos; H. Afsarmanesh
2008-01-01
Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of
15. Ozone Standard Reference Photometer
Data.gov (United States)
Federal Laboratory Consortium — The Standard Reference Photometer (SRP) Program began in the early 1980s as collaboration between NIST and the U.S. Environmental Protection Agency (EPA) to design,...
16. Underwater Sound Reference Division
Data.gov (United States)
Federal Laboratory Consortium — The Underwater Sound Reference Division (USRD) serves as the U.S. standardizing activity in the area of underwater acoustic measurements, as the National Institute...
17. Genetics Home Reference: cystinuria
Science.gov (United States)
... for This Page Claes DJ, Jackson E. Cystinuria: mechanisms and management. Pediatr Nephrol. 2012 Nov;27(11): ... with a qualified healthcare professional . About Genetics Home Reference Site Map Contact Us Selection Criteria for Links ...
18. Dissolution processes. [224 references
Energy Technology Data Exchange (ETDEWEB)
Silver, G.L.
1976-10-22
This review contains more than 100 observations and 224 references on the dissolution phenomenon. The dissolution processes are grouped into three categories: methods of aqueous attack, fusion methods, and miscellaneous observations on phenomena related to dissolution problems. (DLC)
19. Toxicity Reference Database
Data.gov (United States)
U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and 2 billion worth of animal studies. ToxRefDB allows scientists and the interested... 20. The Calibration Reference Data System Science.gov (United States) Greenfield, P.; Miller, T. 2016-07-01 We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories. 1. 2002 reference document International Nuclear Information System (INIS) This 2002 reference document of the group Areva, provides information on the society. Organized in seven chapters, it presents the persons responsible for the reference document and for auditing the financial statements, information pertaining to the transaction, general information on the company and share capital, information on company operation, changes and future prospects, assets, financial position, financial performance, information on company management and executive board and supervisory board, recent developments and future prospects. (A.L.B.) 2. X Python reference manual OpenAIRE Mullender, Sjoerd 1995-01-01 This document describes the built-in types, exceptions, and functions of the X windows extension to Python. It assumes basic knowledge about the Python language and access to the X windows documentation. For an informal introduction to the language, see the Python Tutorial. The Python Reference Manual gives a more formal definition of the language. The Python Library Reference describes the built-in and standard modules of Python. This document can be seen as en extension to that document. 3. Reference Citation Format Institute of Scientific and Technical Information of China (English) 2014-01-01 <正>The format for citations in text and for bibliographic references follows GB/T 7714—2005.The citation should be ordered in number as it appears in the text of the submitted article.For journal article Sun,Y.,Li,B.,&Qu,J.F.Design and implementation of library intelligent IM reference robot.New Technology of Library and Information Service(in Chinese), 4. Reference Citation Format Institute of Scientific and Technical Information of China (English) 2012-01-01 <正>The format for citations in text and for bibliographic references follows GB/T 7714—2005.The citation should be ordered in number as it appears in the text of the submitted article.For journal article Sun,Y.,Li,B.,&Qu,J.F.Design and implementation of library intelligent IM reference robot.New Technology of Library and Information Service(in Chinese),2011,205:88–92. 5. Reference Man anatomical model Energy Technology Data Exchange (ETDEWEB) Cristy, M. 1994-10-01 The 70-kg Standard Man or Reference Man has been used in physiological models since at least the 1920s to represent adult males. It came into use in radiation protection in the late 1940s and was developed extensively during the 1950s and used by the International Commission on Radiological Protection (ICRP) in its Publication 2 in 1959. The current Reference Man for Purposes of Radiation Protection is a monumental book published in 1975 by the ICRP as ICRP Publication 23. It has a wealth of information useful for radiation dosimetry, including anatomical and physiological data, gross and elemental composition of the body and organs and tissues of the body. The anatomical data includes specified reference values for an adult male and an adult female. Other reference values are primarily for the adult male. The anatomical data include much data on fetuses and children, although reference values are not established. There is an ICRP task group currently working on revising selected parts of the Reference Man document. 6. Progress report on research project [Parameters for calculation of nuclear reactions of relevance to non-energy nuclear applications (Reference Input Parameter Library: Phase III) International Nuclear Information System (INIS) Full text: Uncertainties of the KD03 global optical model parameters. An estimate of the uncertainties of the parameters of the KD03 global optical model potential has been given. A Monte Carlo method for generating uncertainties of the final cross sections and angular distributions is used. The approach is pragmatic: The parameter uncertainties are adjusted such that the resulting calculated uncertainties account for the difference between the global prediction and the experimental data. At this stage, no OMP parameter correlations have been taken into account. We think however that the present results (summarized in a table), allow for adjustment of OMP parameters for data evaluation purposes. The presented uncertainties give a measure of the allowed deviation from the average parameters. Phenomenological level density parameters A computational set up for a consistent parameterization of three level density models has been built. This includes the Back-shifted Fermi gas Model, the Constant Temperature Model and the Generalized Superfluid Model, each without and with explicit collective enhancement. The resulting level densities should be applicable over a large energy range, taking into account experimental information from both discrete levels and mean resonance spacing. For each of the three models, we have produced local level density parameters, i.e. parameters that are adjusted per nucleus, which give the best average description of all observables (discrete levels, mean resonance spacing) for that nucleus. We have also produced a global level density parameterization for all models, i.e. formulae for the global expressions that enter the level density formula, to be used for any nucleus. A few remaining deficiencies in the procedure need to be removed before the parameter collection can be delivered to RIPL-3) 7. Establishing ecological reference conditions and tracking post-application effectiveness of lanthanum-saturated bentonite clay (Phoslock®) for reducing phosphorus in aquatic systems: an applied paleolimnological approach. Science.gov (United States) Moos, M T; Taffs, K H; Longstaff, B J; Ginn, B K 2014-08-01 Innovative management strategies for nutrient enrichment of freshwater are important in the face of this increasing global problem, however many strategies are not assessed over long enough time periods to establish effectiveness. Paleolimnological techniques using diatoms as biological indicators were utilized to establish ecological reference conditions, environmental variation, and the effectiveness of lanthanum-saturated bentonite clay (brand name: Phoslock(®)) applied to reduce water column phosphorus (P) concentrations in four waterbodies in Ontario, Canada, and eastern Australia. In sediment cores from the two Canadian sites, there were short-lived changes to diatom assemblages, relative to inferred background conditions, and a temporary reduction in both measured and diatom-inferred total phosphorus (TP) before returning to pre-application conditions (particularly in the urban stormwater management pond which has a high flushing rate and responds rapidly to precipitation and surface run-off). The two Australian sites (a sewage treatment pond and a shallow recreational lake), recorded no reduction in diatom-inferred TP. Based on our pre-application environmental reconstruction, changes to the diatom assemblages and diatom-inferred TP appeared to be driven by larger, climatic factors. While laboratory tests involving this product showed sharp reductions in water column TP, management strategies require detailed information on pre-application environmental conditions and variations in order to accurately assess the effectiveness of new technologies for lake management. 8. PVWatts Version 1 Technical Reference Energy Technology Data Exchange (ETDEWEB) Dobos, A. P. 2013-10-01 The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation. 9. Setting reference targets International Nuclear Information System (INIS) Reference Targets are used to represent virtual quantities like the magnetic axis of a magnet or the definition of a coordinate system. To explain the function of reference targets in the sequence of the alignment process, this paper will first briefly discuss the geometry of the trajectory design space and of the surveying space, then continue with an overview of a typical alignment process. This is followed by a discussion on magnet fiducialization. While the magnetic measurement methods to determine the magnetic centerline are only listed (they will be discussed in detail in a subsequent talk), emphasis is given to the optical/mechanical methods and to the task of transferring the centerline position to reference targets 10. Python pocket reference CERN Document Server Lutz, Mark 2010-01-01 This is the book to reach for when you're coding on the fly and need an answer now. It's an easy-to-use reference to the core language, with descriptions of commonly used modules and toolkits, and a guide to recent changes, new features, and upgraded built-ins -- all updated to cover Python 3.X as well as version 2.6. You'll also quickly find exactly what you need with the handy index. Written by Mark Lutz -- widely recognized as the world's leading Python trainer -- Python Pocket Reference, Fourth Edition, is the perfect companion to O'Reilly's classic Python tutorials, also written by Mark 11. HTML & XHTML Pocket Reference CERN Document Server Robbins, Jennifer 2010-01-01 After years of using spacer GIFs, layers of nested tables, and other improvised solutions for building your web sites, getting used to the more stringent standards-compliant design can be intimidating. HTML and XHTML Pocket Reference is the perfect little book when you need answers immediately. Jennifer Niederst-Robbins, author Web Design in a Nutshell, has revised and updated the fourth edition of this pocket guide by taking the top 20% of vital reference information from her Nutshell book, augmenting it judiciously, cross-referencing everything, and organizing it according to the most com 12. Language Reference Book Directory of Open Access Journals (Sweden) Miljenko Lapaine 2012-06-01 Full Text Available The Coca-Cola HBC Hrvatska Language Reference Book (Jezični priručnik Coca-Cole HBC Hrvatska initially conceived as a practical reference book intended for use within the company is nowadays available to everyone at http://www.prirucnik.hr. The book was prepared by Lana Hudeček and Maja Matković in collaboration with Igor Čutuk and was printed in May 2011 (2nd edition in February 2012 with 274 pages. 13. bash Quick Reference CERN Document Server Robbins, Arnold 2008-01-01 In this quick reference, you'll find everything you need to know about the bash shell. Whether you print it out or read it on the screen, this PDF gives you the answers to the annoying questions that always come up when you're writing shell scripts: What characters do you need to quote? How do you get variable substitution to do exactly what you want? How do you use arrays? It's also helpful for interactive use. If you're a Unix user or programmer, or if you're using bash on Windows, you'll find this quick reference indispensable. 14. CSS Pocket Reference CERN Document Server Meyer, Eric A 2007-01-01 They say that good things come in small packages, and it's certainly true for this edition of CSS Pocket Reference. Completely revised and updated to reflect the latest Cascading Style Sheet specifications in CSS 2.1, this indispensable little book covers the most essential information that web designers and developers need to implement CSS effectively across all browsers. Inside, you'll find: A short introduction to the key concepts of CSS A complete alphabetical reference to all CSS 2.1 selectors and properties A chart displaying detailed information about CSS support for every style ele 15. SNAP operating system reference manual International Nuclear Information System (INIS) The SNAP Operating System (SOS) is a FORTRAN 77 program which provides assistance to the safeguards analyst who uses the Safeguards Automated Facility Evaluation (SAFE) and the Safeguards Network Analysis Procedure (SNAP) techniques. Features offered by SOS are a data base system for storing a library of SNAP applications, computer graphics representation of SNAP models, a computer graphics editor to develop and modify SNAP models, a SAFE-to-SNAP interface, automatic generation of SNAP input data, and a computer graphic post-processor for SNAP. The SOS Reference Manual provides detailed application information concerning SOS as well as a detailed discussion of all SOS components and their associated command input formats. SOS was developed for the US Nuclear Regulatory Commission's Office of Nuclear Regulatory Research and the US Naval Surface Weapons Center by Pritsker and Associates, Inc., under contract to Sandia National Laboratories 16. A STUDY ON THE APPLICATION OF MUTUAL REFERENCE TECHNIQUE IN MAGNETOTELLURIC SOUNDING DATA%大地电磁测深数据互参考处理的应用研究 Institute of Scientific and Technical Information of China (English) 戴前伟; 陈勇雄; 侯智超 2013-01-01 Magnetotelluric sounding exploration vulnerable to the interference of radio stations, power grids, industrial free current and other human factors, resulting data features large deviation, distorted curve shape and low reliability. In this paper, based on the estimation formula of magnetotelluric impedance tensor, the degree of deviation of impedance tensor due to cultural noises is discussed. When there are uncorrelated noises in the electromagnetic components, a higher precise estimation of impedance tensor is obtained. As we known, the magnetic component of MT will not change too much in a certain distance, and if this distance is longer enough, the cultural noise at one site will not related to the other. This is the reason why we use the magnetic mutual reference method to improve the quality of field data. In this paper, a compromise scenario of the remote reference technique, the so-called mutual reference technique is expounded. The effect of application of this method is analyzed. The results show that this method is effective, and some problems in application are summarized.%大地电磁测深勘探方法易受电台、电网、工业游离电流等人为因素干扰,致使资料数据离差大、曲线形态畸变、可信度差.根据噪声对大地电磁测深张量阻抗估算值的影响规律和大地电磁测深远参考技术的应用效果,阐述了远参考技术的一种折中方案,即互参考道方法.选取不同参考距离、不同质量的大地电磁实测数据作为互参考点,通过计算经互参考处理后的大地电磁测深数据得出相干度、信噪比以及卡尼亚电阻率~相位曲线,分析了经互参考道处理后MT资料的效果.结果表明,该方法是有效的,并总结了应用此方法的一些问题. 17. Reference-Dependent Sympathy Science.gov (United States) Small, Deborah A. 2010-01-01 Natural disasters and other traumatic events often draw a greater charitable response than do ongoing misfortunes, even those that may cause even more widespread misery, such as famine or malaria. Why is the response disproportionate to need? The notion of reference dependence critical to Prospect Theory (Kahneman & Tversky, 1979) maintains that… 18. Reference Model Development Energy Technology Data Exchange (ETDEWEB) Jepsen, Richard [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States) 2011-11-02 Presentation from the 2011 Water Peer Review in which principal investigator discusses project progress to develop a representative set of Reference Models (RM) for the MHK industry to develop baseline cost of energy (COE) and evaluate key cost component/system reduction pathways. 19. Extending reference assembly models DEFF Research Database (Denmark) Church, Deanna M.; Schneider, Valerie A.; Steinberg, Karyn Meltz; 2015-01-01 The human genome reference assembly is crucial for aligning and analyzing sequence data, and for genome annotation, among other roles. However, the models and analysis assumptions that underlie the current assembly need revising to fully represent human sequence diversity. Improved analysis tools... 20. Role and Reference Grammar. Science.gov (United States) Van Valin, Robert D., Jr. This paper discusses Role and Reference Grammar (RRG), which is a structuralist-formalist theory of grammar. RRG grew out of an attempt to answer two fundamental questions: (1) what would linguistic theory look like if it were based on the analysis of Lakhota, Tagalog, and Dyirbal, rather than on the analysis of English?; and (2) how can the… 1. Reference class forecasting DEFF Research Database (Denmark) Flyvbjerg, Bent Underbudgettering og budgetoverskridelser forekommer i et flertal af større bygge- og anlægsprojekter. Problemet skyldes optimisme og/eller strategisk misinformation i budgetteringsprocessen. Reference class forecasting (RCF) er en prognosemetode, som er udviklet for at reducere eller eliminere... 2. Generating Multimodal References NARCIS (Netherlands) van der Sluis, Ielka; Krahmer, E. 2007-01-01 This paper presents a new computational model for the generation of multimodal referring expressions, based on observations in human communication. The algorithm is an extension of the graph-based algorithm proposed by Krahmer et al. (2003) and makes use of a so-called Flashlight Model for pointing. 3. Reference Citation Format Institute of Scientific and Technical Information of China (English) 2008-01-01 <正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted 4. Reference Sources for Nursing Science.gov (United States) Nursing Outlook, 1976 1976-01-01 The ninth revision (including a Canadian supplement) of a list of nursing reference works lists items in the following sections: abstract journals, audiovisuals, bibliographies, dictionaries, directories, drug lists and pharmacologies, educational programs, histories, indexes, legal guides, library administration and organization, research grants,… 5. X Python reference manual NARCIS (Netherlands) Mullender, K.S. 1995-01-01 This document describes the built-in types, exceptions, and functions of the X windows extension to Python. It assumes basic knowledge about the Python language and access to the X windows documentation. For an informal introduction to the language, see the Python Tutorial. The Python Reference Manu 6. The Reference Encounter Model. Science.gov (United States) White, Marilyn Domas 1983-01-01 Develops model of the reference interview which explicitly incorporates human information processing, particularly schema ideas presented by Marvin Minsky and other theorists in cognitive processing and artificial intelligence. Questions are raised concerning use of content analysis of transcribed verbal protocols as methodology for studying… 7. Treasury Reference Model OpenAIRE Hashim, Ali; Allan, Bill 2001-01-01 The Treasury Reference Model (TRM) gives guidelines for the design of automated treasury systems for government aiming at a) authorities within government and their advisors who are engaged in planning and implementing such systems; and b) software designers and suppliers from the private sector - or even in-house developers of treasury software. The paper starts in Part I with a discussion of ... 8. The Unreliability of References Science.gov (United States) Barden, Dennis M. 2008-01-01 When search consultants, like the author, are invited to propose their services in support of a college or university seeking new leadership, they are generally asked a fairly standard set of questions. But there is one question that they find among the most difficult to answer: How do they check a candidate's references to ensure that they know… 9. International Geomagnetic Reference Field DEFF Research Database (Denmark) Finlay, Chris; Maus, S.; Beggan, C. D.; 2010-01-01 The eleventh generation of the International Geomagnetic Reference Field (IGRF) was adopted in December 2009 by the International Association of Geomagnetism and Aeronomy Working Group V‐MOD. It updates the previous IGRF generation with a definitive main field model for epoch 2005.0, a main field... 10. Calling SNPs without a reference sequence OpenAIRE Schuster Stephan C; Hayes Vanessa M; Zhang Yu; Ratan Aakrosh; Miller Webb 2010-01-01 Abstract Background The most common application for the next-generation sequencing technologies is resequencing, where short reads from the genome of an individual are aligned to a reference genome sequence for the same species. These mappings can then be used to identify genetic differences among individuals in a population, and perhaps ultimately to explain phenotypic variation. Many algorithms capable of aligning short reads to the reference, and determining differences between them have b... 11. Calling SNPs without a reference sequence OpenAIRE Ratan, Aakrosh; Zhang, Yu; Hayes, Vanessa M.; Stephan C Schuster; Miller, Webb 2010-01-01 Background The most common application for the next-generation sequencing technologies is resequencing, where short reads from the genome of an individual are aligned to a reference genome sequence for the same species. These mappings can then be used to identify genetic differences among individuals in a population, and perhaps ultimately to explain phenotypic variation. Many algorithms capable of aligning short reads to the reference, and determining differences between them have been repor... 12. OSH technical reference manual Energy Technology Data Exchange (ETDEWEB) 1993-11-01 In an evaluation of the Department of Energy (DOE) Occupational Safety and Health programs for government-owned contractor-operated (GOCO) activities, the Department of Labors Occupational Safety and Health Administration (OSHA) recommended a technical information exchange program. The intent was to share written safety and health programs, plans, training manuals, and materials within the entire DOE community. The OSH Technical Reference (OTR) helps support the secretarys response to the OSHA finding by providing a one-stop resource and referral for technical information that relates to safe operations and practice. It also serves as a technical information exchange tool to reference DOE-wide materials pertinent to specific safety topics and, with some modification, as a training aid. The OTR bridges the gap between general safety documents and very specific requirements documents. It is tailored to the DOE community and incorporates DOE field experience. 13. Open SHMEM Reference Implementation Energy Technology Data Exchange (ETDEWEB) 2016-05-12 OpenSHMEM is an effort to create a specification for a standardized API for parallel programming in the Partitioned Global Address Space. Along with the specification the project is also creating a reference implementation of the API. This implementation attempts to be portable, to allow it to be deployed in multiple environments, and to be a starting point for implementations targeted to particular hardware platforms. It will also serve as a springboard for future development of the API. 14. Reference Citation Format Institute of Scientific and Technical Information of China (English) 2009-01-01 <正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet 15. Reference Citation Format Institute of Scientific and Technical Information of China (English) 2008-01-01 <正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age: 16. Reference Citation Format Institute of Scientific and Technical Information of China (English) 2010-01-01 <正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet 17. Reference Citation Format Institute of Scientific and Technical Information of China (English) 2009-01-01 <正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age: 18. MR pelvimetric reference values International Nuclear Information System (INIS) Complete pelvimetry can be easily performed via MR. As shown by comparing the measurements of 4 observers its reproducibility is acceptable. The reference diameters obtained on 53 patients by MR are in 7 out of 9 dimensions larger than those of anatomical and obstetrical textbooks, which have not changed in the last 100 years. Hence, with the general increase in population size an increase in pelvic diameters has taken place. (orig.) 19. GDB Pocket Reference CERN Document Server Robbins, Arnold 2009-01-01 The GNU debugger is valuable for testing, fixing, and retesting software because it allows you to see exactly what's going on inside of a program as it's executing. This new pocket reference shows you how to specify a target for debugging, perform a careful examination to find the cause of program failure, and make quick changes for further testing. The guide covers several popular programming languages. 20. Alignment reference device Science.gov (United States) Patton, Gail Y.; Torgerson, Darrel D. 1987-01-01 An alignment reference device provides a collimated laser beam that minimizes angular deviations therein. A laser beam source outputs the beam into a single mode optical fiber. The output end of the optical fiber acts as a source of radiant energy and is positioned at the focal point of a lens system where the focal point is positioned within the lens. The output beam reflects off a mirror back to the lens that produces a collimated beam. 1. Electrical engineer's reference book CERN Document Server Laughton, M A 1985-01-01 Electrical Engineer's Reference Book, Fourteenth Edition focuses on electrical engineering. The book first discusses units, mathematics, and physical quantities, including the international unit system, physical properties, and electricity. The text also looks at network and control systems analysis. The book examines materials used in electrical engineering. Topics include conducting materials, superconductors, silicon, insulating materials, electrical steels, and soft irons and relay steels. The text underscores electrical metrology and instrumentation, steam-generating plants, turbines 2. Reference structure tomography Science.gov (United States) Brady, David J.; Pitsianis, Nikos P.; Sun, Xiaobai 2004-07-01 Reference structure tomography (RST) uses multidimensional modulations to encode mappings between radiating objects and measurements. RST may be used to image source-density distributions, estimate source parameters, or classify sources. The RST paradigm permits scan-free multidimensional imaging, data-efficient and computation-efficient source analysis, and direct abstraction of physical features. We introduce the basic concepts of RST and illustrate the use of RST for multidimensional imaging based on a geometric radiation model. 3. Electroacoustical reference data CERN Document Server Eargle, John M 2002-01-01 The need for a general collection of electroacoustical reference and design data in graphical form has been felt by acousticians and engineers for some time. This type of data can otherwise only be found in a collection of handbooks. Therefore, it is the author's intention that this book serve as a single source for many electroacoustical reference and system design requirements. In form, the volume closely resembles Frank Massa's Acoustic Design Charts, a handy book dating from 1942 that has long been out of print. The basic format of Massa's book has been followed here: For each entry, graphical data are presented on the right page, while text, examples, and refer ences appear on the left page. In this manner, the user can solve a given problem without thumbing from one page to the next. All graphs and charts have been scaled for ease in data entry and reading. The book is divided into the following sections: A. General Acoustical Relationships. This section covers the behavior of sound transmis sion in... 4. Is anaphoric reference cooperative? Science.gov (United States) Kantola, Leila; van Gompel, Roger P G 2016-01-01 Two experiments investigated whether the choice of anaphoric expression is affected by the presence of an addressee. Following a context sentence and visual scene, participants described a target scene that required anaphoric reference. They described the scene either to an addressee (Experiment 1) or without an addressee (Experiment 2). When an addressee was present in the task, participants used more pronouns and fewer repeated noun phrases when the referent was the grammatical subject in the context sentence than when it was the grammatical object and they used more pronouns when there was no competitor than when there was. They used fewer pronouns and more repeated noun phrases when a visual competitor was present in the scene than when there was no visual competitor. In the absence of an addressee, linguistic context effects were the same as those when an addressee was present, but the visual effect of the competitor disappeared. We conclude that visual salience effects are due to adjustments that speakers make when they produce reference for an addressee, whereas linguistic salience effects appear whether or not speakers have addressees. PMID:26165163 5. Reference Inflow Characterization for River Resource Reference Model (RM2) Energy Technology Data Exchange (ETDEWEB) Neary, Vincent S [ORNL 2011-12-01 Sandia National Laboratory (SNL) is leading an effort to develop reference models for marine and hydrokinetic technologies and wave and current energy resources. This effort will allow the refinement of technology design tools, accurate estimates of a baseline levelized cost of energy (LCoE), and the identification of the main cost drivers that need to be addressed to achieve a competitive LCoE. As part of this effort, Oak Ridge National Laboratory was charged with examining and reporting reference river inflow characteristics for reference model 2 (RM2). Published turbulent flow data from large rivers, a water supply canal and laboratory flumes, are reviewed to determine the range of velocities, turbulence intensities and turbulent stresses acting on hydrokinetic technologies, and also to evaluate the validity of classical models that describe the depth variation of the time-mean velocity and turbulent normal Reynolds stresses. The classical models are found to generally perform well in describing river inflow characteristics. A potential challenge in river inflow characterization, however, is the high variability of depth and flow over the design life of a hydrokinetic device. This variation can have significant effects on the inflow mean velocity and turbulence intensity experienced by stationary and bottom mounted hydrokinetic energy conversion devices, which requires further investigation, but are expected to have minimal effects on surface mounted devices like the vertical axis turbine device designed for RM2. A simple methodology for obtaining an approximate inflow characterization for surface deployed devices is developed using the relation umax=(7/6)V where V is the bulk velocity and umax is assumed to be the near-surface velocity. The application of this expression is recommended for deriving the local inflow velocity acting on the energy extraction planes of the RM2 vertical axis rotors, where V=Q/A can be calculated given a USGS gage flow time 6. Celestial Reference Frames Science.gov (United States) Jacobs, Christopher S. 2013-03-01 Concepts and Background: This paper gives an overview of modern celestial reference frames as realized at radio frequencies using the Very Long baseline Interferometry (VLBI) technique. We discuss basic celestial reference frame concepts, desired properties, and uses. We review the networks of antennas used for this work. We briefly discuss the history of the science of astrometry touching upon the discovery of precession, proper motion, nutation, and parallax, and the field of radio astronomy. Building Celestial Frames: Next, we discuss the multi-step process of building a celestial frame: First candidate sources are identified based on point-like properties from single dish radio telescopes surveys. Second, positions are refined using connected element interferometers such as the Very Large Array, and the ATCA. Third, positions of approximately milli-arcsecond (mas) accuracy are determined using intercontinental VLBI surveys. Fourth, sub-mas positions are determined by multiyear programs using intercontinental VLBI. These sub-mas sets of positions are then verified by multiple teams in preparation for release to non-specialists in the form of an official IAU International Celestial Reference Frame (ICRF). The process described above has until recently been largely restricted to work at S/X-band (2.3/8.4 GHz). However, in the last decade sub-mas work has expanded to include celestial frames at K-band (24 GHz), Ka-band (32 GHz), and Q-band (43 GHz). While these frames currently have the disadvantage of far smaller data sets, the astrophysical quality of the sources themselves improves at these higher frequencies and thus make these frequencies attractive for realizations of celestial reference frames. Accordingly, we review progress at these higher frequency bands. Path to the Future: We discuss prospects for celestial reference frames over the next decade. We present an example of an error budget for astrometric VLBI and discuss the budget's use as a tool for 7. Celestial Reference Frame Science.gov (United States) Jacobs, Christopher S. 2013-09-01 Concepts and Background: This paper gives an overview of modern celestial reference frames as realized at radio frequencies using the Very Long baseline Interferometry (VLBI) technique. We discuss basic celestial reference frame concepts, desired properties, and uses. We review the networks of antennas used for this work. We briefly discuss the history of the science of astrometry touching upon the discovery of precession, proper motion, nutation, and parallax, and the field of radio astronomy. Building Celestial Frames: Next, we discuss the multi-step process of building a celestial frame: First candidate sources are identified based on point-like properties from single dish radio telescopes surveys. Second, positions are refined using connected element interferometers such as the Very Large Array, and the ATCA. Third, positions of approximately milli-arcsecond (mas) accuracy are determined using intercontinental VLBI surveys. Fourth, sub-mas positions are determined by multiyear programs using intercontinental VLBI. These sub-mas sets of positions are then verified by multiple teams in preparation for release to non-specialists in the form of an official IAU International Celestial Reference Frame (ICRF). The process described above has until recently been largely restricted to work at S/X-band (2.3/8.4 GHz). However, in the last decade sub-mas work has expanded to include celestial frames at K-band (24 GHz), Ka-band (32 GHz), and Q-band (43 GHz). While these frames currently have the disadvantage of far smaller data sets, the astrophysical quality of the sources themselves improves at these higher frequencies and thus make these frequencies attractive for realizations of celestial reference frames. Accordingly, we review progress at these higher frequency bands. Path to the Future: We discuss prospects for celestial reference frames over the next decade. We present an example of an error budget for astrometric VLBI and discuss the budget's use as a tool for 8. 40 CFR 53.16 - Supersession of reference methods. Science.gov (United States) 2010-07-01 ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Supersession of reference methods. 53... (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.16 Supersession of reference methods. (a) This section prescribes procedures and criteria applicable to requests... 9. Mechanical engineer's reference book CERN Document Server Parrish, A 1973-01-01 Mechanical Engineer's Reference Book: 11th Edition presents a comprehensive examination of the use of Systéme International d' Unités (SI) metrication. It discusses the effectiveness of such a system when used in the field of engineering. It addresses the basic concepts involved in thermodynamics and heat transfer. Some of the topics covered in the book are the metallurgy of iron and steel; screw threads and fasteners; hole basis and shaft basis fits; an introduction to geometrical tolerancing; mechanical working of steel; high strength alloy steels; advantages of making components as castings 10. Optomechanical reference accelerometer CERN Document Server Gerberding, Oliver; Melcher, John; Pratt, Jon; Taylor, Jacob 2015-01-01 We present an optomechanical accelerometer with high dynamic range, high bandwidth and read-out noise levels below 8 {\\mu}g/\\sqrt{\\mathrm{Hz}}\$. The straightforward assembly and low cost of our device make it a prime candidate for on-site reference calibrations and autonomous navigation. We present experimental data taken with a vacuum sealed, portable prototype and deduce the achieved bias stability and scale factor accuracy. Additionally, we present a comprehensive model of the device physics that we use to analyze the fundamental noise sources and accuracy limitations of such devices.
11. Reference Citation Format
Institute of Scientific and Technical Information of China (English)
2010-01-01
<正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age:A citation analysis of computer science literature.Information Processing and Management,2001,37:661-675.2 Fernandez,M.,Kadiyska,Y.,&Suciu,D.,et al.SilkRoute:A framework for publishing
12. Reference Citation Format
Institute of Scientific and Technical Information of China (English)
2011-01-01
<正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age:A citation analysis of computer science literature.Information Processing and Manage-
13. Reference Citation Format
Institute of Scientific and Technical Information of China (English)
2008-01-01
<正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5th Ed.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age:A citation analysis of computer science literature.Information Processing and Management,2001,37:661-675.
14. Reference Citation Format
Institute of Scientific and Technical Information of China (English)
2011-01-01
<正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age:A citation analysis of computer science literature.Information Processing and Management,2001,37:661-675.2 Fernandez.M
15. Coal Data: A reference
International Nuclear Information System (INIS)
The purpose of Coal Data: A Reference is to provide basic information on the mining and use of coal, an important source of energy in the United States. The report is written for a general audience. The goal is to cover basic material and strike a reasonable compromise between overly generalized statements and detailed analyses. The section ''Coal Terminology and Related Information'' provides additional information about terms mentioned in the text and introduces new terms. Topics covered are US coal deposits, resources and reserves, mining, production, employment and productivity, health and safety, preparation, transportation, supply and stocks, use, coal, the environment, and more. (VC)
16. XSLT 10 Pocket Reference
CERN Document Server
Lenz, Evan
2008-01-01
XSLT is an essential tool for converting XML into other kinds of documents: HTML, PDF file, and many others. It's a critical technology for XML-based platforms such as Microsoft .NET, Sun Microsystems' Sun One, as well as for most web browsers and authoring tools. As useful as XSLT is, however, most people have a difficult time getting used to its peculiar characteristics. The ability to use advanced techniques depends on a clear and exact understanding of how XSLT templates work and interact. The XSLT 1.0 Pocket Reference from O'Reilly wants to make sure you achieve that level of understan
17. Rails Pocket Reference
CERN Document Server
Berry, Eric
2008-01-01
Rails 2.1 brings a new level of stability and power to this acclaimed web development framework, but keeping track of its numerous moving parts is still a chore. Rails Pocket Reference offers you a painless alternative to hunting for resources online, with brief yet thorough explanations of the most frequently used methods and structures supported by Rails 2.1, along with key concepts you need to work through the framework's most tangled corners. Organized to help you quickly find what you need, this book will not only get you up to speed on how Rails works, it also provides a handy referenc
18. Inertial pseudo star reference unit
Science.gov (United States)
Luniewicz, Michael F.; Woodbury, Dale T.; Gilmore, Jerold P.; Chien, Tze T.
1994-05-01
Advanced space systems for earth observation sensing and defense applications share a common objective: high-resolution monitoring. They require subsystems that accurately provide precise line-of-sight (LOS) pointing of the monitoring sensor with extreme jitter suppression and a precision attitude control system. To address this objective, Draper has developed a pointing system, the Inertial Pseudo Star Reference Unit (IPSRU). The IPSRU effort is a DARPA and SDI sponsored program at Draper under contract with the USAF Phillips Laboratory. The IPSRU implements a collimated light source mounted on a wide-band, extremely low-noise inertially stabilized platform. The collimated light beam becomes, in effect, a jitter-stabilized pseudo star. In addition, its direction in inertial space can be pointed at a precise rate by commands applied to the platform.
19. Roaming Reference: Reinvigorating Reference through Point of Need Service
Directory of Open Access Journals (Sweden)
Kealin M. McCabe
2011-11-01
Full Text Available Roaming reference service was pursued as a way to address declining reference statistics. The service was staffed by librarians armed with iPads over a period of six months during the 2010-2011 academic year. Transactional statistics were collected in relation to query type (Research, Facilitative or Technology, location and approach (librarian to patron, patron to librarian or via chat widget. Overall, roaming reference resulted in an additional 228 reference questions, 67% (n=153 of which were research related. Two iterations of the service were implemented, roaming reference as a standalone service (Fall 2010 and roaming reference integrated with traditional reference desk duties (Winter 2011. The results demonstrate that although the Weller Library’s reference transactions are declining annually, they are not disappearing. For a roaming reference service to succeed, it must be a standalone service provided in addition to traditional reference services. The integration of the two reference models (roaming reference and reference desk resulted in a 56% decline in the total number of roaming reference questions from the previous term. The simple act of roaming has the potential to reinvigorate reference services as a whole, forcing librarians outside their comfort zones, allowing them to reach patrons at their point of need.
20. Generic Crystalline Disposal Reference Case
Energy Technology Data Exchange (ETDEWEB)
Painter, Scott Leroy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chu, Shaoping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Harp, Dylan Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Frank Vinton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wang, Yifeng [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-20
A generic reference case for disposal of spent nuclear fuel and high-level radioactive waste in crystalline rock is outlined. The generic cases are intended to support development of disposal system modeling capability by establishing relevant baseline conditions and parameters. Establishment of a generic reference case requires that the emplacement concept, waste inventory, waste form, waste package, backfill/buffer properties, EBS failure scenarios, host rock properties, and biosphere be specified. The focus in this report is on those elements that are unique to crystalline disposal, especially the geosphere representation. Three emplacement concepts are suggested for further analyses: a waste packages containing 4 PWR assemblies emplaced in boreholes in the floors of tunnels (KBS-3 concept), a 12-assembly waste package emplaced in tunnels, and a 32-assembly dual purpose canister emplaced in tunnels. In addition, three failure scenarios were suggested for future use: a nominal scenario involving corrosion of the waste package in the tunnel emplacement concepts, a manufacturing defect scenario applicable to the KBS-3 concept, and a disruptive glaciation scenario applicable to both emplacement concepts. The computational approaches required to analyze EBS failure and transport processes in a crystalline rock repository are similar to those of argillite/shale, with the most significant difference being that the EBS in a crystalline rock repository will likely experience highly heterogeneous flow rates, which should be represented in the model. The computational approaches required to analyze radionuclide transport in the natural system are very different because of the highly channelized nature of fracture flow. Computational workflows tailored to crystalline rock based on discrete transport pathways extracted from discrete fracture network models are recommended.
1. Antares Reference Telescope System
Energy Technology Data Exchange (ETDEWEB)
Viswanathan, V.K.; Kaprelian, E.; Swann, T.; Parker, J.; Wolfe, P.; Woodfin, G.; Knight, D.
1983-01-01
Antares is a 24-beam, 40-TW carbon-dioxide laser-fusion system currently nearing completion at the Los Alamos National Laboratory. The 24 beams will be focused onto a tiny target (typically 300 to 1000 ..mu..m in diameter) located approximately at the center of a 7.3-m-diameter by 9.3-m-long vacuum (10/sup -6/ torr) chamber. The design goal is to position the targets to within 10 ..mu..m of a selected nominal position, which may be anywhere within a fixed spherical region 1 cm in diameter. The Antares Reference Telescope System is intended to help achieve this goal for alignment and viewing of the various targets used in the laser system. The Antares Reference Telescope System consists of two similar electro-optical systems positioned in a near orthogonal manner in the target chamber area of the laser. Each of these consists of four subsystems: (1) a fixed 9X optical imaging subsystem which produces an image of the target at the vidicon; (2) a reticle projection subsystem which superimposes an image of the reticle pattern at the vidicon; (3) an adjustable front-lighting subsystem which illuminates the target; and (4) an adjustable back-lighting subsystem which also can be used to illuminate the target. The various optical, mechanical, and vidicon design considerations and trade-offs are discussed. The final system chosen (which is being built) and its current status are described in detail.
2. AREVA - 2013 Reference document
International Nuclear Information System (INIS)
This Reference Document contains information on the AREVA group's objectives, prospects and development strategies, as well as estimates of the markets, market shares and competitive position of the AREVA group. Content: 1 - Person responsible for the Reference Document; 2 - Statutory auditors; 3 - Selected financial information; 4 - Description of major risks confronting the company; 5 - Information about the issuer; 6 - Business overview; 7 - Organizational structure; 8 - Property, plant and equipment; 9 - Situation and activities of the company and its subsidiaries; 10 - Capital resources; 11 - Research and development programs, patents and licenses; 12 - Trend information; 13 - Profit forecasts or estimates; 14 - Management and supervisory bodies; 15 - Compensation and benefits; 16 - Functioning of the management and supervisory bodies; 17 - Human resources information; 18 - Principal shareholders; 19 - Transactions with related parties; 20 - Financial information concerning assets, financial positions and financial performance; 21 - Additional information; 22 - Major contracts; 23 - Third party information, statements by experts and declarations of interest; 24 - Documents on display; 25 - Information on holdings; Appendix 1: report of the supervisory board chairman on the preparation and organization of the board's activities and internal control procedures; Appendix 2: statutory auditors' reports; Appendix 3: environmental report; Appendix 4: non-financial reporting methodology and independent third-party report on social, environmental and societal data; Appendix 5: ordinary and extraordinary general shareholders' meeting; Appendix 6: values charter; Appendix 7: table of concordance of the management report; glossaries
3. Coal data: A reference
Energy Technology Data Exchange (ETDEWEB)
1995-02-01
This report, Coal Data: A Reference, summarizes basic information on the mining and use of coal, an important source of energy in the US. This report is written for a general audience. The goal is to cover basic material and strike a reasonable compromise between overly generalized statements and detailed analyses. The section Supplemental Figures and Tables contains statistics, graphs, maps, and other illustrations that show trends, patterns, geographic locations, and similar coal-related information. The section Coal Terminology and Related Information provides additional information about terms mentioned in the text and introduces some new terms. The last edition of Coal Data: A Reference was published in 1991. The present edition contains updated data as well as expanded reviews and additional information. Added to the text are discussions of coal quality, coal prices, unions, and strikes. The appendix has been expanded to provide statistics on a variety of additional topics, such as: trends in coal production and royalties from Federal and Indian coal leases, hours worked and earnings for coal mine employment, railroad coal shipments and revenues, waterborne coal traffic, coal export loading terminals, utility coal combustion byproducts, and trace elements in coal. The information in this report has been gleaned mainly from the sources in the bibliography. The reader interested in going beyond the scope of this report should consult these sources. The statistics are largely from reports published by the Energy Information Administration.
4. Antares Reference Telescope System
International Nuclear Information System (INIS)
Antares is a 24-beam, 40-TW carbon-dioxide laser-fusion system currently nearing completion at the Los Alamos National Laboratory. The 24 beams will be focused onto a tiny target (typically 300 to 1000 μm in diameter) located approximately at the center of a 7.3-m-diameter by 9.3-m-long vacuum (10-6 torr) chamber. The design goal is to position the targets to within 10 μm of a selected nominal position, which may be anywhere within a fixed spherical region 1 cm in diameter. The Antares Reference Telescope System is intended to help achieve this goal for alignment and viewing of the various targets used in the laser system. The Antares Reference Telescope System consists of two similar electro-optical systems positioned in a near orthogonal manner in the target chamber area of the laser. Each of these consists of four subsystems: (1) a fixed 9X optical imaging subsystem which produces an image of the target at the vidicon; (2) a reticle projection subsystem which superimposes an image of the reticle pattern at the vidicon; (3) an adjustable front-lighting subsystem which illuminates the target; and (4) an adjustable back-lighting subsystem which also can be used to illuminate the target. The various optical, mechanical, and vidicon design considerations and trade-offs are discussed. The final system chosen (which is being built) and its current status are described in detail
5. Sensor employing internal reference electrode
DEFF Research Database (Denmark)
2013-01-01
The present invention concerns a novel internal reference electrode as well as a novel sensing electrode for an improved internal reference oxygen sensor and the sensor employing same.......The present invention concerns a novel internal reference electrode as well as a novel sensing electrode for an improved internal reference oxygen sensor and the sensor employing same....
6. Reference dosimetry and measurement quality assurance
International Nuclear Information System (INIS)
Measurements of absorbed dose made by a reference dosimetry system, such as alanine, have been suggested for achieving quality assurance through traceability to primary standards. Such traceability can assist users of radiation worldwide in enhancing quality control in medicine, agriculture, and industry. International and national standards of absorbed dose are still needed for applications of γ-ray and electron dosimetry at high doses (e.g. radiation therapy, food irradiation and industrial radiation processing). Reference systems, such as ferrous sulfate dosimeters measured by spectrophotometry and alanine measured by electron spin resonance spectrometry are already well established. Another useful reference system for high doses is supplied as dichromate solutions measured by spectrophotometry. Reference dosimetry, particularly for electron beams, can be accomplished with thin alanine or radiochromic dye film dosemeters. (author)
7. AREVA 2009 reference document
International Nuclear Information System (INIS)
This Reference Document contains information on the AREVA group's objectives, prospects and development strategies. It contains information on the markets, market shares and competitive position of the AREVA group. This information provides an adequate picture of the size of these markets and of the AREVA group's competitive position. Content: 1 - Person responsible for the Reference Document and Attestation by the person responsible for the Reference Document; 2 - Statutory and Deputy Auditors; 3 - Selected financial information; 4 - Risks: Risk management and coverage, Legal risk, Industrial and environmental risk, Operating risk, Risk related to major projects, Liquidity and market risk, Other risk; 5 - Information about the issuer: History and development, Investments; 6 - Business overview: Markets for nuclear power and renewable energies, AREVA customers and suppliers, Overview and strategy of the group, Business divisions, Discontinued operations: AREVA Transmission and Distribution; 7 - Organizational structure; 8 - Property, plant and equipment: Principal sites of the AREVA group, Environmental issues that may affect the issuer's; 9 - Analysis of and comments on the group's financial position and performance: Overview, Financial position, Cash flow, Statement of financial position, Events subsequent to year-end closing for 2009; 10 - Capital Resources; 11 - Research and development programs, patents and licenses; 12 -trend information: Current situation, Financial objectives; 13 - Profit forecasts or estimates; 14 - Administrative, management and supervisory bodies and senior management; 15 - Compensation and benefits; 16 - Functioning of corporate bodies; 17 - Employees; 18 - Principal shareholders; 19 - Transactions with related parties: French state, CEA, EDF group; 20 - Financial information concerning assets, financial positions and financial performance; 21 - Additional information: Share capital, Certificate of incorporation and by-laws; 22 - Major
8. Tank characterization reference guide
International Nuclear Information System (INIS)
Characterization of the Hanford Site high-level waste storage tanks supports safety issue resolution; operations and maintenance requirements; and retrieval, pretreatment, vitrification, and disposal technology development. Technical, historical, and programmatic information about the waste tanks is often scattered among many sources, if it is documented at all. This Tank Characterization Reference Guide, therefore, serves as a common location for much of the generic tank information that is otherwise contained in many documents. The report is intended to be an introduction to the issues and history surrounding the generation, storage, and management of the liquid process wastes, and a presentation of the sampling, analysis, and modeling activities that support the current waste characterization. This report should provide a basis upon which those unfamiliar with the Hanford Site tank farms can start their research
9. Reference Citation Format
Institute of Scientific and Technical Information of China (English)
2011-01-01
<正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age:A citation analysis of computer science literature.Information Processing and Management,2001,37:661-675.2 Fernandez,M.,Kadiyska,Y.,&Suciu,D.,et al.SilkRoute:A framework for publishing relational data in XML.ACM Transactions on Database Systems,2002,27(4):438-493.
10. Reference Citation Format
Institute of Scientific and Technical Information of China (English)
2010-01-01
<正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age:A citation analysis of computer science literature.Information Processing and Management,2001,37:661-675.2 Fernandez,M.,Kadiyska,Y.,&Suciu,D.,et al.SilkRoute:A framework for publishing relational data in XML.ACM Transactions on Database Systems,2002,27(4):438-493.
11. Reference Citation Format
Institute of Scientific and Technical Information of China (English)
2009-01-01
<正>The format for citations in text and for bibliographic references follows the Publication Manual of the American Psychological Association(5thEd.,2001)and GB/T 7714-2005.The citation of printed word should be ordered in number as it appears in the text of the submitted article.For journal article1 Goodrum,A.A.,McCain,K.W.,&Lawrence,S.,et al.Scholarly publishing in the Internet age:A citation analysis of computer science literature.Information Processing and Management,2001,37:661-675.2 Fernandez,M.,Kadiyska,Y.,&Suciu,D.,et al.SilkRoute:A framework for publishing relational data in XML.ACM Transactions on Database Systems,2002,27(4):438-493.
12. Tank characterization reference guide
Energy Technology Data Exchange (ETDEWEB)
De Lorenzo, D.S.; DiCenso, A.T.; Hiller, D.B.; Johnson, K.W.; Rutherford, J.H.; Smith, D.J. [Los Alamos Technical Associates, Kennewick, WA (United States); Simpson, B.C. [Westinghouse Hanford Co., Richland, WA (United States)
1994-09-01
Characterization of the Hanford Site high-level waste storage tanks supports safety issue resolution; operations and maintenance requirements; and retrieval, pretreatment, vitrification, and disposal technology development. Technical, historical, and programmatic information about the waste tanks is often scattered among many sources, if it is documented at all. This Tank Characterization Reference Guide, therefore, serves as a common location for much of the generic tank information that is otherwise contained in many documents. The report is intended to be an introduction to the issues and history surrounding the generation, storage, and management of the liquid process wastes, and a presentation of the sampling, analysis, and modeling activities that support the current waste characterization. This report should provide a basis upon which those unfamiliar with the Hanford Site tank farms can start their research.
13. Areva reference document 2007; Areva document de reference 2007
Energy Technology Data Exchange (ETDEWEB)
NONE
2008-07-01
This reference document contains information on the AREVA group's objectives, prospects and development strategies, particularly in Chapters 4 and 7. It contains also information on the markets, market shares and competitive position of the AREVA group. Content: 1 - Person responsible for the reference document and persons responsible for auditing the financial statements; 2 - Information pertaining to the transaction (not applicable); 3 - General information on the company and its share capital: Information on Areva, Information on share capital and voting rights, Investment certificate trading, Dividends, Organization chart of AREVA group companies, Equity interests, Shareholders' agreements; 4 - Information on company operations, new developments and future prospects: Overview and strategy of the AREVA group, The Nuclear Power and Transmission and Distribution markets, The energy businesses of the AREVA group, Front End division, Reactors and Services division, Back End division, Transmission and Distribution division, Major contracts 140 Principal sites of the AREVA group, AREVA's customers and suppliers, Sustainable Development and Continuous Improvement, Capital spending programs, Research and Development programs, Intellectual Property and Trademarks, Risk and insurance; 5 - Assets financial position financial performance: Analysis of and comments on the group's financial position and performance, Human Resources report, Environmental report, Consolidated financial statements 2007, Notes to the consolidated financial statements, Annual financial statements 2007, Notes to the corporate financial statements; 6 - Corporate governance: Composition and functioning of corporate bodies, Executive compensation, Profit-sharing plans, AREVA Values Charter, Annual Ordinary General Meeting of Shareholders of April 17, 2008; 7 - Recent developments and future prospects: Events subsequent to year-end closing for 2007, Outlook; Glossary; table of
14. Areva, reference document 2006; Areva, document de reference 2006
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-07-01
This reference document contains information on the AREVA group's objectives, prospects and development strategies, particularly in Chapters 4 and 7. It contains information on the markets, market shares and competitive position of the AREVA group. Content: - 1 Person responsible for the reference document and persons responsible for auditing the financial statements; - 2 Information pertaining to the transaction (Not applicable); - 3 General information on the company and its share capital: Information on AREVA, on share capital and voting rights, Investment certificate trading, Dividends, Organization chart of AREVA group companies, Equity interests, Shareholders' agreements; - 4 Information on company operations, new developments and future prospects: Overview and strategy of the AREVA group, The Nuclear Power and Transmission and Distribution markets, The energy businesses of the AREVA group, Front End division, Reactors and Services division, Back End division, Transmission and Distribution division, Major contracts, The principal sites of the AREVA group, AREVA's customers and suppliers, Sustainable Development and Continuous Improvement, Capital spending programs, Research and development programs, intellectual property and trademarks, Risk and insurance; - 5 Assets - Financial position - Financial performance: Analysis of and comments on the group's financial position and performance, 2006 Human Resources Report, Environmental Report, Consolidated financial statements, Notes to the consolidated financial statements, AREVA SA financial statements, Notes to the corporate financial statements; 6 - Corporate Governance: Composition and functioning of corporate bodies, Executive compensation, Profit-sharing plans, AREVA Values Charter, Annual Combined General Meeting of Shareholders of May 3, 2007; 7 - Recent developments and future prospects: Events subsequent to year-end closing for 2006, Outlook; 8 - Glossary; 9 - Table of concordance.
15. Roaming Reference: Reinvigorating Reference through Point of Need Service
OpenAIRE
Kealin M. McCabe; James R.W. MacDonald
2011-01-01
Roaming reference service was pursued as a way to address declining reference statistics. The service was staffed by librarians armed with iPads over a period of six months during the 2010-2011 academic year. Transactional statistics were collected in relation to query type (Research, Facilitative or Technology), location and approach (librarian to patron, patron to librarian or via chat widget). Overall, roaming reference resulted in an additional 228 reference questions, 67% (n=153) of whic...
16. Synthetic growth reference charts.
Science.gov (United States)
Hermanussen, M; Burmeister, J
1999-08-01
to generate distance standards for height (synthetic growth reference charts). Synthetic growth reference charts can help to actualize current growth charts without much additional effort, and they may also be used for populations for which autochthonous growth standards are not available. PMID:10503677
17. AREVA - 2012 Reference document
International Nuclear Information System (INIS)
After a presentation of the person responsible for this Reference Document, of statutory auditors, and of a summary of financial information, this report address the different risk factors: risk management and coverage, legal risk, industrial and environmental risk, operational risk, risk related to major projects, liquidity and market risk, and other risks (related to political and economic conditions, to Group's structure, and to human resources). The next parts propose information about the issuer, a business overview (markets for nuclear power and renewable energies, customers and suppliers, group's strategy, operations), a brief presentation of the organizational structure, a presentation of properties, plants and equipment (principal sites, environmental issues which may affect these items), analysis and comments on the group's financial position and performance, a presentation of capital resources, a presentation of research and development activities (programs, patents and licenses), a brief description of financial objectives and profit forecasts or estimates, a presentation of administration, management and supervision bodies, a description of the operation of corporate bodies, an overview of personnel, of principal shareholders, and of transactions with related parties, a more detailed presentation of financial information concerning assets, financial positions and financial performance. Addition information regarding share capital is given, as well as an indication of major contracts, third party information, available documents, and information on holdings
18. Sensor Characteristics Reference Guide
Energy Technology Data Exchange (ETDEWEB)
Cree, Johnathan V.; Dansu, A.; Fuhr, P.; Lanzisera, Steven M.; McIntyre, T.; Muehleisen, Ralph T.; Starke, M.; Banerjee, Pranab; Kuruganti, T.; Castello, C.
2013-04-01
The Buildings Technologies Office (BTO), within the U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), is initiating a new program in Sensor and Controls. The vision of this program is: • Buildings operating automatically and continuously at peak energy efficiency over their lifetimes and interoperating effectively with the electric power grid. • Buildings that are self-configuring, self-commissioning, self-learning, self-diagnosing, self-healing, and self-transacting to enable continuous peak performance. • Lower overall building operating costs and higher asset valuation. The overarching goal is to capture 30% energy savings by enhanced management of energy consuming assets and systems through development of cost-effective sensors and controls. One step in achieving this vision is the publication of this Sensor Characteristics Reference Guide. The purpose of the guide is to inform building owners and operators of the current status, capabilities, and limitations of sensor technologies. It is hoped that this guide will aid in the design and procurement process and result in successful implementation of building sensor and control systems. DOE will also use this guide to identify research priorities, develop future specifications for potential market adoption, and provide market clarity through unbiased information
19. The Herschel Reference Survey
CERN Document Server
Boselli, A; Cortese, L; Bendo, G; Chanial, P; Buat, V; Davies, J; Auld, R; Rigby, E; Baes, M; Barlow, M; Bock, J; Bradford, M; Castro-Rodriguez, N; Charlot, S; Clements, D; Cormier, D; Dwek, E; Elbaz, D; Galametz, M; Galliano, F; Gear, W; Glenn, J; Gomez, H; Griffin, M; Hony, S; Isaak, K; Levenson, L; Lu, N; Madden, S; O'Halloran, B; Okumura, K; Oliver, S; Page, M; Panuzzo, P; Papageorgiou, A; Parkin, T; Perez-Fournon, I; Pohlen, M; Rangwala, N; Roussel, H; Rykala, A; Sacchi, N; Sauvage, M; Schulz, B; Schirm, M; Smith, M W L; Spinoglio, L; Stevens, J; Symeonidis, M; Vaccari, M; Vigroux, L; Wilson, C; Wozniak, H; Wright, G; Zeilinger, W
2010-01-01
The Herschel Reference Survey is a guaranteed time Herschel key project and will be a benchmark study of dust in the nearby universe. The survey will complement a number of other Herschel key projects including large cosmological surveys that trace dust in the distant universe. We will use Herschel to produce images of a statistically-complete sample of 323 galaxies at 250, 350 and 500 micron. The sample is volume-limited, containing sources with distances between 15 and 25 Mpc and flux limits in the K-band to minimize the selection effects associated with dust and with young high-mass stars and to introduce a selection in stellar mass. The sample spans the whole range of morphological types (ellipticals to late-type spirals) and environments (from the field to the centre of the Virgo Cluster) and as such will be useful for other purposes than our own. We plan to use the survey to investigate (i) the dust content of galaxies as a function of Hubble type, stellar mass and environment, (ii) the connection betwe...
20. Jakarta Struts Pocket Reference
CERN Document Server
Cavaness, Chuck
2003-01-01
Web tier frameworks have soared in popularity over the past year or so due to the increasing complexity of Java itself, and the need to get more work done with fewer resources. Developers who used to spend hours and hours writing low-level features can use a well-written framework to build the presentation tier so they start coding the "good stuff" sooner--the business logic at the core of the program. The Jakarta Struts Framework is one of the most popular presentation frameworks for building web applications with Java Servlet and JavaServer Pages (JSP) technologies. If you work with the St
1. Reference blindness: the influence of references on trust in Wikipedia
OpenAIRE
Lucassen, Teun; Noordzij, Matthijs L.; Schraagen, Jan Maarten
2011-01-01
In this study we show the influence of references on trust in information. We changed the contents of reference lists of Wikipedia articles in such a way that the new references were no longer in any sense related to the topic of the article. Furthermore, the length of the reference list was varied. College students were asked to evaluate the credibility of these articles. Only 6 out of 23 students noticed the manipulation of the references; 9 out of 23 students noticed the variations in leng...
2. Global Reference Tables Services Architecture
Data.gov (United States)
Social Security Administration — This database stores the reference and transactional data used to provide a data-driven service access method to certain Global Reference Table (GRT) service tables.
3. Fingerprint Reference-Point Detection
OpenAIRE
Liu Manhua; Jiang Xudong; Kot Alex Chichung
2005-01-01
A robust fingerprint recognition algorithm should tolerate the rotation and translation of the fingerprint image. One popular solution is to consistently detect a unique reference point and compute a unique reference orientation for translational and rotational alignment. This paper develops an effective algorithm to locate a reference point and compute the corresponding reference orientation consistently and accurately for all types of fingerprints. To compute the reliable orientation field...
4. Gestural Viewpoint Signals Referent Accessibility
Science.gov (United States)
Debreslioska, Sandra; Özyürek, Asli; Gullberg, Marianne; Perniss, Pamela
2013-01-01
The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on…
5. Reference neutron activation library
International Nuclear Information System (INIS)
Many scientific endeavors require accurate nuclear data. Examples include studies of environmental protection connected with the running of a nuclear installation, the conceptual designs of fusion energy producing devices, astrophysics and the production of medical isotopes. In response to this need, many national and international data libraries have evolved over the years. Initially nuclear data work concentrated on materials relevant to the commercial power industry which is based on the fission of actinides, but recently the topic of activation has become of increasing importance. Activation of materials occurs in fission devices, but is generally overshadowed by the primary fission process. In fusion devices, high energy (14 MeV) neutrons produced in the D-T fusion reaction cause activation of the structure, and (with the exception of the tritium fuel) is the dominant source of activity. Astrophysics requires cross-sections (generally describing neutron capture) or its studies of nucleosynthesis. Many analytical techniques require activation analysis. For example, borehole logging uses the detection of gamma rays from irradiated materials to determine the various components of rocks. To provide data for these applications, various specialized data libraries have been produced. The most comprehensive of these have been developed for fusion studies, since it has been appreciated that impurities are of the greatest importance in determining the overall activity, and thus data on all elements are required. These libraries contain information on a wide range of reactions: (n,γ), (n,2n), (n,α), (n,p), (n,d), (n,t), (n,3He)and (n,n')over the energy range from 10-5 eV to 15 or 20 MeV. It should be noted that the production of various isomeric states have to be treated in detail in these libraries,and that the range of targets must include long-lived radioactive nuclides in addition to stable nuclides. These comprehensive libraries thus contain almost all the
6. Co-reference and reasoning.
Science.gov (United States)
Walsh, Clare R; Johnson-Laird, P N
2004-01-01
Co-reference occurs when two or more noun phrases refer to the same individual, as in the following inferential problem: Mark is kneeling by the fire or he is looking at the TV but not both. / Mark is kneeling by the fire. / Is he looking at the TV? In three experiments, we compared co-referential reasoning problems with problems referring to different individuals. Experiment 1 showed that co-reference improves accuracy. In Experiment 2, we replicated that finding and showed that co-reference speeds up both reading and inference. Experiment 3 showed that the effects of co-reference are greatest when the premises and the conclusion share co-referents. These effects led the participants to make illusory inferences--that is, to draw systematically invalid conclusions. The results are discussed in terms of the mental model theory of reasoning.
7. A reference model for database security proxy
Institute of Scientific and Technical Information of China (English)
蔡亮; 杨小虎; 董金祥
2002-01-01
How to protect the database, the kernel resources of information warfare, is becoming more and more important since the rapid development of computer and communication technology. As an application-level firewall, database security proxy can successfully repulse attacks originated from outside the network, reduce to zerolevel damage from foreign DBMS products. We enhanced the capability of the COAST' s firewall reference model by adding a transmission unit modification function and an attribute value mapping function,describes the schematic and semantic layer reference model, and finally forms a reference model for DBMS security proxy which greatly helps in the design and implementation of database security proxies. This modeling process can clearly separate the system functionality into three layers, define the possible security functions for each layer, and estimate the computational cost for each layer.
8. A reference model for database security proxy
Institute of Scientific and Technical Information of China (English)
2002-01-01
How to protect the database, the kernel resources of information warfare, is becoming more and more important since the rapid development of computer and communication technology. As an application-level firewall, database security proxy can successfully repulse attacks originated from outside the network, reduce to zerolevel damage from foreign DBMS products. We enhanced the capability of the COAST's firewall reference model by adding a transmission unit modification function and an attribute value mapping function, describes the schematic and semantic layer reference model, and finally forms a reference model for DBMS security proxy which greatly helps in the design and implementation of database security proxies. This modeling process can clearly separate the system functionality into three layers, define the possible security functions for each layer, and estimate the computational cost for each layer.
9. Development and application of reference extract used for TLC identifi-cation of Morinda officinalis How%巴戟天薄层色谱鉴别用对照提取物的研制与应用
Institute of Scientific and Technical Information of China (English)
林锦锋; 杨志业; 李曼莎; 宋力飞; 魏锋
2015-01-01
目的:研制薄层色谱鉴别用巴戟天对照提取物,以解决对照药材的局限性。方法从原料选定、制备工艺研究、稳定性影响因素考察等方面探讨薄层色谱鉴别用巴戟天对照提取物的研究思路,并比较巴戟天对照药材与对照提取物在薄层鉴别中的应用结果。结果巴戟天薄层鉴别用对照提取物的提取条件:巴戟天粗粉加20倍70%乙醇,加热回流提取,提取2次,每次1 h。该对照提取物替代对照药材应用于巴戟天及其成方制剂中的薄层鉴别效果一致。结论本研究制备的巴戟天薄层鉴别用对照提取物,可以替代巴戟天对照药材用于巴戟天及其成方制剂的质量控制分析。%Objective To develop Morinda offic inalis How reference extract for TLC to solve the limitation of reference crude herb. Methods The research strategy of Morinda officinalis How reference extract for TLC was discussed from raw material selected, preparation technology and stability influence factors. Practical results between Morinda offici-nalis How reference crude drug and reference extract in TLC were compared. Results The extract condition of refer-ence extract used for TLC of Morinda offic inalis How was as follows: Morindae offic inalis How thick powder was heat-ing reflux extracted with 20 flods of 70% ethanol for two times, 1 hour each time. The reference extract instead of ref-erence crude drug used for TLC of Morindae offic inalis How and its historical preparations gained the same effect. Conclusion The reference extract of Morindae offic inalis How for TLC prepared by this study can replace Morindae offic inalis How reference crude herb to be used for quality control analysis of Morindae offic inalis How and its histori-cal preparation.
10. Relativistic physics in arbitrary reference frames
CERN Document Server
Mitskievich, N V
1996-01-01
In this paper we give a review of the most general approach to description of reference frames, the monad formalism. This approach is explicitly general covariant at each step, permitting to use abstract representation of tensor quantities; it is applicable also to special relativity when non-inertial effects are considered in its context; moreover, it involves no hypotheses whatsoever thus being a completely natural one. For the sake of the reader's convenience, a synopsis of tensor calculus in pseudo-Riemannian space-time precedes discussion of the subject, containing expressions rarely encountered in literature but essentially facilitating the consideration. We give also a comparison of the monad formalism with the other approaches to description of reference frames in general relativity. In three chapters we consider applications of the monad formalism to general relativistic mechanics, electromagnetic and gravitational fields theory. Alongside of the general theory, which includes the monad representatio...
11. Reference models supporting enterprise networks and virtual enterprises
DEFF Research Database (Denmark)
Tølle, Martin; Bernus, Peter
2003-01-01
This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up of VE into a configuration task, and hence reducing ...... the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) based upon GERAM and created in the IMS GLOBEMEN project.......This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up of VE into a configuration task, and hence reducing...
12. Relativistic Physics in Arbitrary Reference Frames
OpenAIRE
Mitskievich, Nikolai V.
1996-01-01
In this paper we give a review of the most general approach to description of reference frames, the monad formalism. This approach is explicitly general covariant at each step, permitting to use abstract representation of tensor quantities; it is applicable also to special relativity when non-inertial effects are considered in its context; moreover, it involves no hypotheses whatsoever thus being a completely natural one. For the sake of the reader's convenience, a synopsis of tensor calculus...
13. Nitrogen-15 reference book: medicine and biosciences
International Nuclear Information System (INIS)
A comprehensive bibliography on the application of the stable nitrogen isotope 15N in medicine, animal nutrition and physiology, biosciences, and related disciplines is presented. The literature pertaining to this paper covers the period from 1977 to 1981. The references are completed by an index of all authors and a subject index with special emphasis to the used organisms, labelled compounds, and tracer techniques, respectively. (author)
14. Pseudo-Reference-Based Assembly of Vertebrate Transcriptomes
OpenAIRE
Kyoungwoo Nam; Heesu Jeong; Jin-Wu Nam
2016-01-01
High-throughput RNA sequencing (RNA-seq) provides a comprehensive picture of the transcriptome, including the identity, structure, quantity, and variability of expressed transcripts in cells, through the assembly of sequenced short RNA-seq reads. Although the reference-based approach guarantees the high quality of the resulting transcriptome, this approach is only applicable when the relevant reference genome is present. Here, we developed a pseudo-reference-based assembly (PRA) that reconstr...
15. User Preferences in Reference Services: Virtual Reference and Academic Libraries
Science.gov (United States)
Cummings, Joel; Cummings, Lara; Frederiksen, Linda
2007-01-01
This study examines the use of chat in an academic library's user population and where virtual reference services might fit within the spectrum of public services offered by academic libraries. Using questionnaires, this research demonstrates that many within the academic community are open to the idea of chat-based reference or using chat for…
16. Java for dummies quick reference
CERN Document Server
Lowe, Doug
2012-01-01
A reference that answers your questions as you move through your coding The demand for Android programming and web apps continues to grow at an unprecedented pace and Java is the preferred language for both. Java For Dummies Quick Reference keeps you moving through your coding while you solve a problem, look up a command or syntax, or search for a programming tip. Whether you're a Java newbie or a seasoned user, this fast reference offers you quick access to solutions without requiring that you wade through pages of tutorial material. Leverages the true reference format that is organized with
17. Validation of reference transcripts in strawberry (Fragaria spp.).
Science.gov (United States)
Clancy, Maureen A; Rosli, Hernan G; Chamala, Srikar; Barbazuk, W Brad; Civello, P Marcos; Folta, Kevin M
2013-12-01
Contemporary methods to assay gene expression depend on a stable set of reference transcripts for accurate quantitation. A lack of well-tested reference genes slows progress in characterizing gene expression in high-value specialty crops. In this study, a set of strawberry (Fragaria spp.) constitutively expressed reference genes has been identified by merging digital gene expression data with expression profiling. Constitutive reference candidates were validated using quantitative PCR and hybridization. Several transcripts have been identified that show improved stability across tissues relative to traditional reference transcripts. Results are similar between commercial octoploid strawberry and the diploid model. Our findings also show that while some never-before-used references are appropriate for most applications, even the most stable reference transcripts require careful assessment across the diverse tissues and fruit developmental states before being adopted as controls.
18. Elektronik Danışma Hizmeti / Digital Reference Services
Directory of Open Access Journals (Sweden)
Nazan Uçak
2003-10-01
Full Text Available This paper is mainly related to digital reference services. Evaluation of these services, their difference from the traditional reference services, factors which cause their emergence, related technical and quantitative standards are presented. Additionaly, projects which are developed in this area, the importance of cooperation activities, application problems and how librarians approach to these issues are examined.
19. Reference vectors in economic choice
Directory of Open Access Journals (Sweden)
Teycir Abdelghani GOUCHA
2013-07-01
Full Text Available In this paper the introduction of notion of reference vector paves the way for a combination of classical and social approaches in the framework of referential preferences given by matrix groups. It is shown that individual demand issue from rational decision does not depend on that reference.
20. Reference counting for reversible languages
DEFF Research Database (Denmark)
Mogensen, Torben Ægidius
2014-01-01
deallocation. This requires the language to be linear: A pointer can not be copied and it can only be eliminated by deallocating the node to which it points. We overcome this limitation by adding reference counts to nodes: Copying a pointer to a node increases the reference count of the node and eliminating...
1. Queuing Theory and Reference Transactions.
Science.gov (United States)
Terbille, Charles
1995-01-01
Examines the implications of applying the queuing theory to three different reference situations: (1) random patron arrivals; (2) random durations of transactions; and (3) use of two librarians. Tables and figures represent results from spreadsheet calculations of queues for each reference situation. (JMV)
2. Expert Systems for Reference Work.
Science.gov (United States)
Parrot, James R.
1986-01-01
Discussion of library reference work that may be suitable for use of expert systems focuses on (1) information and literature searches, and (2) requests to interpret bibliographic references and locate items listed. Systems and computer-assisted instruction modules designed for information retrieval at the University of Waterloo Library are…
3. Calling SNPs without a reference sequence
Directory of Open Access Journals (Sweden)
Schuster Stephan C
2010-03-01
Full Text Available Abstract Background The most common application for the next-generation sequencing technologies is resequencing, where short reads from the genome of an individual are aligned to a reference genome sequence for the same species. These mappings can then be used to identify genetic differences among individuals in a population, and perhaps ultimately to explain phenotypic variation. Many algorithms capable of aligning short reads to the reference, and determining differences between them have been reported. Much less has been reported on how to use these technologies to determine genetic differences among individuals of a species for which a reference sequence is not available, which drastically limits the number of species that can easily benefit from these new technologies. Results We describe a computational pipeline, called DIAL (De novo Identification of Alleles, for identifying single-base substitutions between two closely related genomes without the help of a reference genome. The method works even when the depth of coverage is insufficient for de novo assembly, and it can be extended to determine small insertions/deletions. We evaluate the software's effectiveness using published Roche/454 sequence data from the genome of Dr. James Watson (to detect heterozygous positions and recent Illumina data from orangutan, in each case comparing our results to those from computational analysis that uses a reference genome assembly. We also illustrate the use of DIAL to identify nucleotide differences among transcriptome sequences. Conclusions DIAL can be used for identification of nucleotide differences in species for which no reference sequence is available. Our main motivation is to use this tool to survey the genetic diversity of endangered species as the identified sequence differences can be used to design genotyping arrays to assist in the species' management. The DIAL source code is freely available at http://www.bx.psu.edu/miller_lab/.
4. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior
Science.gov (United States)
2014-10-01
The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as signal'' and background'' and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameters space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce extremely well the reference prior for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior for a very large number of times.
5. Avaliação da ingestão de nutrientes de crianças de uma creche filantrópica: aplicação do Consumo Dietético de Referência Assessment of nutrients intake of children in a charity daycare center: application of Dietary Reference Intake
Directory of Open Access Journals (Sweden)
Roseane Moreira Sampaio Barbosa
2007-04-01
6. References
OpenAIRE
2013-01-01
Aarseth, Espen J. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press. Andrén, Anders. 1998. Between Artifacts and Texts: Historical Archaeology in Global Perspective. Contributions To Global Historical Archaeology. Trans. Alan Crozier. New York: Plenum Press. Anon. 1911. ’The New York Public Library: How the Readers and the Books Are Distributed in the New Building’. Scientific American 104.21 (27 May): 527. Anon. 1971. ’Keepers of Rules Versus Play...
7. References
OpenAIRE
2012-01-01
Appiah, Kwame Anthony (2005) The Ethics of Identity, Princeton University Press, Princeton, NJ. ______ (2006). Cosmopolitanism: Ethics in a World of Strangers, Norton, New York. Clarke, Charles (2006) ‛Global Citizens and Quality International Education: Enlarging the Role of the Commonwealth’. Speech delivered to the Royal Commonwealth Society, 15 November, 2006, London. Estlund, Cynthia (2003) Working Together: How Workplace Bonds Strengthen a Diverse Democracy, Oxford University Press, New...
8. Nuclear measurements and reference materials
International Nuclear Information System (INIS)
This report summarizes the progress of the JRC programs on nuclear data, nuclear metrology, nuclear reference materials and non-nuclear reference materials. Budget restrictions and personnel difficulties were encountered during 1987. Fission properties of 235U as a function of neutron energy and of the resonances can be successfully described on the basis of a three exit channel fission model. Double differential neutron emission cross-sections were accomplished on 7Li and were started for the tritium production cross-section of 9Be. Reference materials of uranium minerals and ores were prepared. Special nuclear targets were prepared. A batch of 250 g of Pu02 was characterized in view of certification as reference material for the elemental assay of plutonium
9. Selected Reference Books of 1999.
Science.gov (United States)
McIlvaine, Eileen
2000-01-01
Presents annotated bibliographies of a selection of recent scholarly and general reference works under the subject headings of publishing, periodical indexes, philosophy and religion, literature, music, art, photography, social sciences, business, history, and new editions. (LRW)
10. Genetics Home Reference: genitopatellar syndrome
Science.gov (United States)
... syndrome have distinct clinical features reflecting distinct molecular mechanisms. Hum Mutat. 2012 Nov;33(11):1520-5. ... with a qualified healthcare professional . About Genetics Home Reference Site Map Contact Us Selection Criteria for Links ...
11. Genetics Home Reference: hereditary angioedema
Science.gov (United States)
... Cicardi M. C1-inhibitor deficiency and angioedema: molecular mechanisms and clinical progress. Trends Mol Med. 2009 Feb; ... with a qualified healthcare professional . About Genetics Home Reference Site Map Contact Us Selection Criteria for Links ...
12. Genetics Home Reference: Huntington disease
Science.gov (United States)
... Citation on PubMed Jones L, Hughes A. Pathogenic mechanisms in Huntington's disease. Int Rev Neurobiol. 2011;98: ... with a qualified healthcare professional . About Genetics Home Reference Site Map Contact Us Selection Criteria for Links ...
13. Reference values for nematode communities
Energy Technology Data Exchange (ETDEWEB)
Waarde, J. van der; Wagelmans, M. [Bioclear bv, Groningen (Netherlands); Keidel, H. [Blgg bv (Netherlands); Knoben, R. [Royal Haskoning (Netherlands); Schouten, T.; Bogte, J. [RIVM (Netherlands); Goede, R. de; Bongers, T. [Wageningen Univerity and Research Centre (Netherlands); Didden, W.; Doelman, P. [Advies (Netherlands); Kerkum, F.; Jonge, J. de [RIZA, Lelystad (Netherlands)
2003-07-01
The TRIAD approach is increasingly used for the assessment of ecological risks of soil and sediment contamination. The interpretation of the results form this TRIAD are however hampered by a lack of reference values. These reference values ideally reflect the ecological situation in non-contaminated but comparable ecosystems. As a part of the TRIAD approach nematodes are routinely used as indicators of soil quality but clear reference values are not available. The aim of the project was to develop a reference system for nematode fauna to facilitate assessment of ecological risks. All available data on nematodes in Dutch soil and sediments were collected and put together in one database: the first complete Dutch nematode database. After a quality check, a total of approximately 1600 samples was selected for further analysis. (orig.)
14. Reference frame for Product Configuration
DEFF Research Database (Denmark)
Ladeby, Klaes Rohde; Oddsson, Gudmundur Valur
2011-01-01
This paper presents a reference frame for configuration. The reference frame is established by review of existing literature, and consequently it is a theoretical frame of reference. The review of literature shows a deterioration of the understanding of configuration. Most recent literature reports...... on configuration systems in the shape of anecdotal reporting on the development of information systems that perhaps support the configuration task – perhaps not. Consequently, the definition of configuration has become ambiguous as different research groups defines configuration differently. This paper propose...... a reference frame for configuration that permits 1) a more precise understanding of a configuration system, 2) a understanding of how the configuration system relate to other systems, and 3) a definition of the basic concepts in configuration. The total configuration system, together with the definition...
15. Haemostatic reference intervals in pregnancy
DEFF Research Database (Denmark)
Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna;
2010-01-01
Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific refe...... and total protein S was stable. Gestational age-specific reference values are essential for the accurate interpretation of a subset of haemostatic tests during pregnancy, delivery, and puerperium....
16. Space Station reference configuration description
Science.gov (United States)
1984-01-01
The data generated by the Space Station Program Skunk Works over a period of 4 months which supports the definition of a Space Station reference configuration is documented. The data were generated to meet these objectives: (1) provide a focal point for the definition and assessment of program requirements; (2) establish a basis for estimating program cost; and (3) define a reference configuration in sufficient detail to allow its inclusion in the definition phase Request for Proposal (RFP).
17. Virtual reference: chat with us!
Science.gov (United States)
Lapidus, Mariana; Bond, Irena
2009-01-01
Virtual chat services represent an exciting way to provide patrons of medical libraries with instant reference help in an academic environment. The purpose of this article is to examine the implementation, marketing process, use, and development of a virtual reference service initiated at the Massachusetts College of Pharmacy and Health Sciences and its three-campus libraries. In addition, this paper will discuss practical recommendations for the future improvement of the service. PMID:19384714
18. TclTk Pocket Reference
CERN Document Server
Raines, Paul
1998-01-01
The Tcl/Tk combination is increasingly popular because it lets you produce sophisticated graphical interfaces with a few easy commands, develop and change scripts quickly, and conveniently tie together existing utilities or programming libraries. The Tcl/Tk Pocket Reference,a handy reference guide to the basic Tcl language elements, Tcl and Tk commands, and Tk widgets, is a companion volume to Tcl/Tk in a Nutshell.
19. Referring Expressions: A Unified Approach
Institute of Scientific and Technical Information of China (English)
K.M. Jaszczolt
2001-01-01
@@ 0. Introduction Expressions used by speakers to refer are commonly divided into two categories: that of directly referring expressions and that of expressions whose referring function is secured by the context of utterance. Directly referring expressions are normally said to include proper names, some pronouns including demonstrative, and demonstrative phrases. The other category comprises mainly definite and indefinite descriptions, the first being their most acclaimed representative. Definite descriptions are widely acknowledged to have referential uses. They are not referring expressions, so to speak, by default,but rather as a result of a contextually determined interpretation. As a category, they are frequently said to belong with quantifiers (Neale 1990; Recanati 1993). However, the arguments for classifying them with referring expressions are ample (Bach 1987a; Larson & Segal 1995; Brown 1995; Jaszczolt 1997a,1997b, 1999b). It is argued in Part I of this paper that although definite descriptions exhibit an ambiguity of use between the referential and the attributive reading, they also exhibit the property of having an unmarked, salient interpretation which makes them akin to directly referential terms. This salient reading is the referential interpretation, arrived at with the help of the hearer′s presumption of the presence of strong referential intention that supports the speaker′s utterance.
20. Computational phantoms of the ICRP reference male and reference female
International Nuclear Information System (INIS)
Computational models of the human body - together with radiation transport codes - have been used for the evaluation of organ dose conversion coefficients in occupational, medical and environmental radiation protection. During the last two decades, it has become common practice to use voxel models that are derived mostly from (whole body) medical image data of real persons instead of the older mathematical MIRD-type body models. It was shown that the schematic organ shapes of the MIRD-type phantoms presented an over- simplification, having an influence on the resulting dose coefficients, which may deviate systematically from those calculated for voxel models. In its recent recommendations, the ICRP adopted a couple of voxel phantoms for future calculations of organ dose coefficients. The phantoms are based on medical image data of real persons and are consistent with the information given in ICRP Publication 89 on the reference anatomical and physiological parameters for both male and female subjects. The reference voxel models were constructed by modifying the voxel models 'Golem' and 'Laura' developed in our working group of two individuals whose body height and weight resembled the reference data. The organ masses of both models were adjusted to the ICRP data on the Reference Male and Reference Female, without spoiling their realistic anatomy. This paper describes the methods used for this process and the characteristics of the resulting voxel models. Furthermore, to illustrate the uses of these phantoms, conversion coefficients for some external exposures are also presented. (author)
1. Swahili Learners' Reference Grammar. African Language Learners' Reference Grammar Series.
Science.gov (United States)
Thompson, Katrina Daly; Schleicher, Antonia Folarin
This reference grammar is written for speakers of English who are learning Swahili. Because many language learners are not familiar with the grammatical terminology, this book explains the basic terminology and concepts of English grammar that are necessary for understanding the grammar of Swahili. It assumes no formal knowledge of English grammar…
2. Virtual Reference, Real Money: Modeling Costs in Virtual Reference Services
Science.gov (United States)
Eakin, Lori; Pomerantz, Jeffrey
2009-01-01
Libraries nationwide are in yet another phase of belt tightening. Without an understanding of the economic factors that influence library operations, however, controlling costs and performing cost-benefit analyses on services is difficult. This paper describes a project to develop a cost model for collaborative virtual reference services. This…
3. Fast Reference-Based MRI
CERN Document Server
Weizman, Lior; Ben-Basaht, Dafna
2015-01-01
In many clinical MRI scenarios, existing imaging information can be used to significantly shorten acquisition time or to improve Signal to Noise Ratio (SNR). In some cases, a previously acquired image can serve as a reference image, that may exhibit similarity to the image being acquired. Examples include similarity between adjacent slices in high resolution MRI, similarity between various contrasts in the same scan and similarity between different scans of the same patient. In this paper we present a general framework for utilizing reference images for fast MRI. We take into account that the reference image may exhibit low similarity with the acquired image and develop an iterative weighted approach for reconstruction, which tunes the weights according to the degree of similarity. Experiments demonstrate the performance of the method in three different clinical MRI scenarios: SNR improvement in high resolution brain MRI, utilizing similarity between T2-weighted and fluid-attenuated inversion recovery (FLAIR)...
4. JavaScript programmer's reference
CERN Document Server
Valentine, Thomas
2013-01-01
JavaScript Programmer's Reference is an invaluable resource that won't stray far from your desktop (or your tablet!). It contains detailed information on every JavaScript object and command, and combines that reference with practical examples showcasing how you can use those commands in the real world. Whether you're just checking the syntax of a method or you're starting out on the road to JavaScript mastery, the JavaScript Programmer's Reference will be an essential aid. With a detailed and informative tutorial section giving you the ins and outs of programming with JavaScript and the DOM f
5. Scientific Opinion of the Panel on Genetically Modified Organisms on an application (Reference EFSA-GMO-CZ-2006-33) for the placing on the market of the insect-resistant and glyphosate-tolerant genetically modified maize MON 88017 x MON 810, for food and feed uses, import and processing under
DEFF Research Database (Denmark)
Sørensen, Ilona Kryspin
or survival of feral maize plants in case of accidental release into the environment of maize MON 88017 x MON 810 viable grains during transportation and processing. The scope of the post-market environmental monitoring plan provided by the applicant is in line with the intended uses of maize MON 88017 x MON....... Further information from applications for placing the single insert lines MON 88017 and MON 810 on the market under EU regulatory procedures was taken into account where appropriate. The scope of application EFSA-GMO-CZ-2006-33 is for food and feed uses, import and processing of genetically modified maize...... MON 88017 x MON 810 and all derived products, but excluding cultivation in the EU. The EFSA GMO Panel assessed maize MON 88017 x MON 810 with reference to the intended uses and the appropriate principles described in the Guidance Document of the Scientific Panel on Genetically Modified Organisms...
6. Calibration of {sup 90}Sr+{sup 90}Y chemical applicators using a mini extrapolation chamber as reference system;Calibracao de aplicadores clinicos de {sup 90}Sr+{sup 90}Y utilizando uma mini-camera de extrapolacao como sistema de referencia
Energy Technology Data Exchange (ETDEWEB)
Antonio, Patricia L.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Oliveira, Mercia L. [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2009-07-01
{sup 90}Sr + {sup 90}Y clinical applicators are beta radiation sources utilized in several radiotherapy Brazilian clinics, although don't be more manufactured. These sources are employed in brachytherapy procedures for the treatment of superficial lesions of skin and eyes. International recommendations and previous works determine that dermatological and ophthalmic applicators shall be calibrated periodically, and one of the methods for their calibration consists of the use of an extrapolation chamber. In this work, a method of calibration of {sup 90}Sr + {sup 90}Y clinical applicators was applied using a mini-extrapolation chamber of plane window, developed at the Calibration Laboratory at IPEN, as a reference system. The results obtained were considered satisfactory, when compared with the results given in the calibration certificates of the sources. (author)
7. Mixed quantum/classical theory for inelastic scattering of asymmetric-top-rotor + atom in the body-fixed reference frame and application to the H2O + He system
International Nuclear Information System (INIS)
The mixed quantum/classical theory (MQCT) for inelastic molecule-atom scattering developed recently [A. Semenov and D. Babikov, J. Chem. Phys. 139, 174108 (2013)] is extended to treat a general case of an asymmetric-top-rotor molecule in the body-fixed reference frame. This complements a similar theory formulated in the space-fixed reference-frame [M. Ivanov, M.-L. Dubernet, and D. Babikov, J. Chem. Phys. 140, 134301 (2014)]. Here, the goal was to develop an approximate computationally affordable treatment of the rotationally inelastic scattering and apply it to H2O + He. We found that MQCT is somewhat less accurate at lower scattering energies. For example, below E = 1000 cm−1 the typical errors in the values of inelastic scattering cross sections are on the order of 10%. However, at higher scattering energies MQCT method appears to be rather accurate. Thus, at scattering energies above 2000 cm−1 the errors are consistently in the range of 1%–2%, which is basically our convergence criterion with respect to the number of trajectories. At these conditions our MQCT method remains computationally affordable. We found that computational cost of the fully-coupled MQCT calculations scales as n2, where n is the number of channels. This is more favorable than the full-quantum inelastic scattering calculations that scale as n3. Our conclusion is that for complex systems (heavy collision partners with many internal states) and at higher scattering energies MQCT may offer significant computational advantages
8. Mixed quantum/classical theory for inelastic scattering of asymmetric-top-rotor + atom in the body-fixed reference frame and application to the H2O + He system
Science.gov (United States)
Semenov, Alexander; Dubernet, Marie-Lise; Babikov, Dmitri
2014-09-01
The mixed quantum/classical theory (MQCT) for inelastic molecule-atom scattering developed recently [A. Semenov and D. Babikov, J. Chem. Phys. 139, 174108 (2013)] is extended to treat a general case of an asymmetric-top-rotor molecule in the body-fixed reference frame. This complements a similar theory formulated in the space-fixed reference-frame [M. Ivanov, M.-L. Dubernet, and D. Babikov, J. Chem. Phys. 140, 134301 (2014)]. Here, the goal was to develop an approximate computationally affordable treatment of the rotationally inelastic scattering and apply it to H2O + He. We found that MQCT is somewhat less accurate at lower scattering energies. For example, below E = 1000 cm-1 the typical errors in the values of inelastic scattering cross sections are on the order of 10%. However, at higher scattering energies MQCT method appears to be rather accurate. Thus, at scattering energies above 2000 cm-1 the errors are consistently in the range of 1%-2%, which is basically our convergence criterion with respect to the number of trajectories. At these conditions our MQCT method remains computationally affordable. We found that computational cost of the fully-coupled MQCT calculations scales as n2, where n is the number of channels. This is more favorable than the full-quantum inelastic scattering calculations that scale as n3. Our conclusion is that for complex systems (heavy collision partners with many internal states) and at higher scattering energies MQCT may offer significant computational advantages.
9. Sizewell 'B' PWR reference design
International Nuclear Information System (INIS)
The reference design for a PWR power station to be constructed as Sizewell 'B' is presented in 3 volumes containing 14 chapters and in a volume of drawings. The report describes the proposed design and provides the basis upon which the safety case and the Pre-Construction Safety Report have been prepared. The station is based on a 3425MWt Westinghouse PWR providing steam to two turbine generators each of 600 MW. The layout and many of the systems are based on the SNUPPS design for Callaway which has been chosen as the US reference plant for the project. (U.K.)
10. A deflationary theory of reference
OpenAIRE
Båve, Arvid
2009-01-01
The article first rehearses three deflationary theories of reference, (1) disquotationalism, (2) propositionalism (Horwich), and (3) the anaphoric theory (Brandom), and raises a number of objections against them. It turns out that each corresponds to a closely related theory of truth, and that these are subject to analogous criticisms to a surprisingly high extent. I then present a theory of my own, according to which the schema “That S(t) is about t” and the biconditional “S refers to x iff ...
11. Social reference: Toward a unifying theory
OpenAIRE
Shachaf, Pnina
2010-01-01
This article addresses the need for a theoretical approach to reference research and specifically concentrates on a lacuna in conceptual research on social reference. Social reference refers to online question answering services that are provided by communities of volunteers on question and answer (Q&A) sites. Social reference is similar to library reference, but at the same time, it differs significantly from the traditional (and digital) dyadic reference encounter; it involves a collaborati...
12. Application of improved particle-swarm-optimization in stabilized platform based on multiple reference frame model%改进粒子群优化在稳定平台多空间分析模型的应用
Institute of Scientific and Technical Information of China (English)
范新明; 曹剑中; 杨洪涛; 王华伟; 杨磊; 廖加文; 王华; 雷杨杰
2015-01-01
常规伺服系统根据电机轴系转动进行模型分析,以轴系所在的基座空间作为参照系。稳定平台的被控量以惯性空间作为参照系,因此不适合用常规伺服系统模型来建模。针对稳定平台的多参照系问题,文章采用以惯性空间作为电机轴系转动参照系的多空间分析模型,并将改进粒子群算法应用于该模型。粒子群算法作为一种群智能算法,广泛应用于参数优化。文中通过惯性权重改进和越界改进,利用改进后的粒子群算法进行稳定平台PID参数的优化和整定。通过仿真和硬件实验平台验证,结果表明:在稳定平台多空间分析模型基础之上,采用改进粒子群算法优化后的PID控制器可以使稳定平台有更高的稳定精度、更好的鲁棒性,有效地隔离了外部的震动和干扰。%In the conventional servo system, model analysis according to motor axis and base space is used as a reference. However, when analyzing the stabilized platform, it is not compatible due to in which exists inertial space and base space. In order to solve this problem, the multiple reference frame model was proposed, where direct-current motor model was based on inertial space. On the basis of the multiple reference frame model, an improved Particle Swarm Optimization (PSO) algorithm was also proposed. As a kind of swarm intelligence algorithm, PSO was widely used in parameters optimization. The traditional PSO on inertial weight and slopping-over borders were improved, and then, it was adopted in tuning and optimization of PID parameters. The simulation and experiments results indicate that the improved PSO (IPSO) PID controller can obviously enhance the static precision and effectively isolate the vibration and disturbance of carrier.
13. Development of a dual-internal-reference technique to improve accuracy when determining bacterial 16S rRNA:16S rRNA gene ratio with application to Escherichia coli liquid and aerosol samples.
Science.gov (United States)
Zhen, Huajun; Krumins, Valdis; Fennell, Donna E; Mainelis, Gediminas
2015-10-01
Accurate enumeration of rRNA content in microbial cells, e.g. by using the 16S rRNA:16S rRNA gene ratio, is critical to properly understand its relationship to microbial activities. However, few studies have considered possible methodological artifacts that may contribute to the variability of rRNA analysis results. In this study, a technique utilizing genomic DNA and 16S rRNA from an exogenous species (Pseudomonas fluorescens) as dual internal references was developed to improve accuracy when determining the 16S rRNA:16S rRNA gene ratio of a target organism, Escherichia coli. This technique was able to adequately control the variability in sample processing and analysis procedures due to nucleic acid (DNA and RNA) losses, inefficient reverse transcription of RNA, and inefficient PCR amplification. The measured 16S rRNA:16S rRNA gene ratio of E. coli increased by 2-3 fold when E. coli 16S rRNA gene and 16S rRNA quantities were normalized to the sample-specific fractional recoveries of reference (P. fluorescens) 16S rRNA gene and 16S rRNA, respectively. In addition, the intra-sample variation of this ratio, represented by coefficients of variation from replicate samples, decreased significantly after normalization. This technique was applied to investigate the temporal variation of 16S rRNA:16S rRNA gene ratio of E. coli during its non-steady-state growth in a complex liquid medium, and to E. coli aerosols when exposed to particle-free air after their collection on a filter. The 16S rRNA:16S rRNA gene ratio of E. coli increased significantly during its early exponential phase of growth; when E. coli aerosols were exposed to extended filtration stress after sample collection, the ratio also increased. In contrast, no significant temporal trend in E. coli 16S rRNA:16S rRNA gene ratio was observed when the determined ratios were not normalized based on the recoveries of dual references. The developed technique could be widely applied in studies of relationship between
14. Guam and Micronesia Reference Sources.
Science.gov (United States)
Goetzfridt, Nicholas J.; Goniwiecha, Mark C.
1993-01-01
This article lists reference sources for studying Guam and Micronesia. The entries are arranged alphabetically by main entry within each section in the categories of: (1) bibliographical works; (2) travel and guide books; (3) handbooks and surveys; (4) dictionaries; (5) yearbooks; (6) periodical and newspaper publications; and (7) audiovisual…
15. [Developmental Placement.] Collected Research References.
Science.gov (United States)
Bjorklund, Gail
Drawing on information and references in the ERIC system, this literature review describes research related to a child's developmental placement. The issues examined include school entrance age; predictive validity, reliability, and features of Gesell School Readiness Assessment; retention; and the effectiveness of developmental placement. A…
16. Space Station reference configuration update
Science.gov (United States)
Bonner, Tom F., Jr.
1985-01-01
The reference configuration of the NASA Space Station as of November 1985 is presented in a series of diagrams, drawings, graphs, and tables. The configurations for components to be contributed by ESA, Canada, and Japan are included. Brief captions are provided, along with answers to questions raised at the conference.
17. Space-Time Reference Systems
CERN Document Server
Soffel, Michael
2013-01-01
The high accuracy of modern astronomical spatial-temporal reference systems has made them considerably complex. This book offers a comprehensive overview of such systems. It begins with a discussion of ‘The Problem of Time’, including recent developments in the art of clock making (e.g., optical clocks) and various time scales. The authors address the definitions and realization of spatial coordinates by reference to remote celestial objects such as quasars. After an extensive treatment of classical equinox-based coordinates, new paradigms for setting up a celestial reference system are introduced that no longer refer to the translational and rotational motion of the Earth. The role of relativity in the definition and realization of such systems is clarified. The topics presented in this book are complemented by exercises (with solutions). The authors offer a series of files, written in Maple, a standard computer algebra system, to help readers get a feel for the various models and orders of magnitude. ...
18. Reference Values of Skin Autofluorescence
NARCIS (Netherlands)
Koetsier, M.; Lutgers, H. L.; de Jonge, C.; Links, T. P.; Smit, A. J.; Graaff, R.
2010-01-01
Background: Skin autofluorescence (AF) as measured with the AGE Reader (DiagnOptics Technologies, Groningen, The Netherlands) is a noninvasive prognostic marker in diabetes mellitus and other diseases with increased cardiovascular risk. This study provides reference values of healthy Caucasian contr
19. Tractor Transmissions. A Teaching Reference.
Science.gov (United States)
American Association for Agricultural Engineering and Vocational Agriculture, Athens, GA.
The manual was developed as a reference for teaching students about transmissions in farm tractors. The manual is divided into five sections: (1) transmission history, (2) gears and bearings in transmission, (3) sliding-gear transmissions, (4) planetary gearing, and (5) glossary. The working principles of the sliding-gear transmission, the most…
20. Mobile Technologies and Roving Reference
Science.gov (United States)
Penner, Katherine
2011-01-01
As 21st century librarians, we have made apt adjustments for reaching out into the digital world, but we need to consider the students who still use library services within our walls. We can use available handheld, mobile technologies to help patrons too shy to approach the desk and free library staff to bring reference service directly to patrons.
1. Haemostatic reference intervals in pregnancy
DEFF Research Database (Denmark)
Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna;
2010-01-01
Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific refe......Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age......-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13......-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...
2. Childhood Obesity. Special Reference Briefs.
Science.gov (United States)
Winick, Myron
This reference brief deals with the problem of childhood obesity and how it can lead to obesity in the adult. Eighty-four abstracts are presented of studies on the identification, prevention, and treatment of obesity in children, focusing on diet and psychological attitudes. Subjects of the studies were children ranging in age from infancy through…
3. Selected Reference Books of 1992.
Science.gov (United States)
McIlvaine, Eileen
1993-01-01
Presents an annotated bibliography of 40 recent scholarly and general works of interest to reference workers in university libraries. Topics areas covered include philosophy, religion, language, literature, architecture, economics, law, area studies, Russia and the Soviet Union, women's studies, and Christopher Columbus. New editions and…
4. The Lyman alpha reference sample
DEFF Research Database (Denmark)
Hayes, M.; Östlin, G.; Schaerer, D.;
2013-01-01
We report on new imaging observations of the Lyman alpha emission line (Lyα), performed with the Hubble Space Telescope, that comprise the backbone of the Lyman alpha Reference Sample. We present images of 14 starburst galaxies at redshifts 0.028
5. Crowdsourcing Application of Virtual Reference Service in University Libraries%众包在高校图书馆虚拟参考咨询服务中的运用
Institute of Scientific and Technical Information of China (English)
薛红
2012-01-01
The paper introduced crowdsourcing and its advantages. Human resource shortage is one of the bottlenecks for the development of virtual reference service in university libraries, crowdsourcing model will contribute to further enhancing its service level and service quality.%众包借助互联网的优势,低成本汇聚各地的人力资源,解决原本需要高昂费用才能解决甚至不能解决的问题。人力资源不足是影响高校虚拟参考咨询服务健康发展的瓶颈之一,学习、借鉴和引进众包模式,将有助于进一步提高其服务水平和服务质量。
6. Application of reference method in the standardization for the determination of alanine aminotransferase%参考方法在丙氨酸氨基转移酶测定标准化中的应用
Institute of Scientific and Technical Information of China (English)
郑松柏; 王建兵; 黄宪章; 马艳; 庄俊华; 徐宁; 周华友; 陈茶
2012-01-01
Objective To investigate the accuracy and comparability of alanine aminotransferase(ALT) measurement results in human serum samples and commercial materials before and after calibration with frozen human serum calibrator assigned by reference method. Methods Five frozen human-pooled serum samples were assigned values by the reference method without pyridoxal 5-phosphate for ALT in four candidate reference laboratories, which were used to evaluate the results of ALT catalytic activity detected by ten testing systems in Guangzhou. One of the serum sample was used as the common calibrator. The results of serum samples and commercial materials from different systems before and after calibration were analyzed for biases and intersystem variations. Results After calibration,the variance of the systems for the results of serum samples decreased from between 11. 90% and 8. 60% to between 6. 78% and 2. 30% ,and the bias decreased dramatically from between - 12. 52% and - 8. 44 % to between - 3. 36% and -0. 08%. Slopes of the regression lines of ALT results of serum samples between reference systems and routine systems after calibration were closer to 1 and intercepts closer to 0 than those obtained before calibration. Conclusion Accuracy and comparability of ALT measurements could be improved by using a common human serum calibrator. But commercial materials might not be commutable for human serum in ALT measurements.%目的 调查不同检测系统使用经参考方法赋值的冰冻人血清样本作为校准品进行校准后,不同来源样本丙氨酸氨基转移酶(ALT)测定结果的可比性与准确性的改变程度.方法 5份经4家候选参考实验室应用不含磷酸吡哆醛的ALT参考方法定值的冰冻人混合血清样本用于评价广州地区10个不同检测系统ALT催化活性结果的可比性与准确性.其中一个样本用作校准品校准各系统.比较校准前后各系统间新鲜血清样本与商品制备物测定
7. OpenGIS参考模型ORM及地理信息服务应用模式%OpenGIS Reference Model(ORM)and Application Schema for Geographic Information Services
Institute of Scientific and Technical Information of China (English)
于海龙; 邬伦
2004-01-01
地理信息语义互操作是地理信息共享应用的基础,为解决OpenGIS抽象规范与实施规范在地理信息语义描述上的不足,OGC建立了OpenGIS参考模型(OpenGIS Reference Model,ORM),旨在通过ORM实现地理信息共享与互操作.该文从空间信息应用政策、空间信息语义描述、空间信息服务定义与分类、多网络服务配置、共享开发标准五个层面介绍ORM,并分析基于ORM的地理信息服务应用模式.
8. References for Haplotype Imputation in the Big Data Era
Science.gov (United States)
Li, Wenzhi; Xu, Wei; Li, Qiling; Ma, Li; Song, Qing
2016-01-01
Imputation is a powerful in silico approach to fill in those missing values in the big datasets. This process requires a reference panel, which is a collection of big data from which the missing information can be extracted and imputed. Haplotype imputation requires ethnicity-matched references; a mismatched reference panel will significantly reduce the quality of imputation. However, currently existing big datasets cover only a small number of ethnicities, there is a lack of ethnicity-matched references for many ethnic populations in the world, which has hampered the data imputation of haplotypes and its downstream applications. To solve this issue, several approaches have been proposed and explored, including the mixed reference panel, the internal reference panel and genotype-converted reference panel. This review article provides the information and comparison between these approaches. Increasing evidence showed that not just one or two genetic elements dictate the gene activity and functions; instead, cis-interactions of multiple elements dictate gene activity. Cis-interactions require the interacting elements to be on the same chromosome molecule, therefore, haplotype analysis is essential for the investigation of cis-interactions among multiple genetic variants at different loci, and appears to be especially important for studying the common diseases. It will be valuable in a wide spectrum of applications from academic research, to clinical diagnosis, prevention, treatment, and pharmaceutical industry. PMID:27274952
9. Project X: Accelerator Reference Design
CERN Document Server
Holmes, S D; Chase, B; Gollwitzer, K; Johnson, D; Kaducak, M; Klebaner, A; Kourbanis, I; Lebedev, V; Leveling, A; Li, D; Nagaitsev, S; Ostroumov, P; Pasquinelli, R; Patrick, J; Prost, L; Scarpine, V; Shemyakin, A; Solyak, N; Steimel, J; Yakovlev, V; Zwaska, R
2013-01-01
Part 1 of "Project X: Accelerator Reference Design, Physics Opportunities, Broader Impacts". Part 1 contains the volume Preface and a description of the conceptual design for a high-intensity proton accelerator facility being developed to support a world-leading program of Intensity Frontier physics over the next two decades at Fermilab. Subjects covered include performance goals, the accelerator physics design, and the technological basis for such a facility.
10. Reference Electrodes in Metal Corrosion
OpenAIRE
S. Szabó; Bakos, I.
2010-01-01
With especial regard to hydrogen electrode, the theoretical fundamentals of electrode potential, the most important reference electrodes and the electrode potential measurement have been discussed. In the case of the hydrogen electrode, it have been emphasised that there is no equilibrium between the hydrogen molecule (H2) and the hydrogen (H+), hydronium (H3O+) ion in the absence of a suitable catalyst. Taking into account the practical aspects as well, the theorectical basis of working of h...
11. National Software Reference Library (NSRL)
Science.gov (United States)
National Software Reference Library (NSRL) (PC database for purchase) A collaboration of the National Institute of Standards and Technology (NIST), the National Institute of Justice (NIJ), the Federal Bureau of Investigation (FBI), the Defense Computer Forensics Laboratory (DCFL),the U.S. Customs Service, software vendors, and state and local law enforement organizations, the NSRL is a tool to assist in fighting crime involving computers.
12. The International Geomagnetic Reference Field
OpenAIRE
Macmillan, Susan; Finlay, Christopher
2011-01-01
The International Geomagnetic Reference Field (IGRF) is an internationally agreed and widely used mathematical model of the Earth’s magnetic field of internal origin. We describe its inception in the 1960s and how it has developed since. We also describe the current generation of the IGRF and potential future developments. Maps of the geomagnetic field derived from the IGRF and valid for 2010-2015 are also included.
13. Hanford Waste Mineralogy Reference Report
International Nuclear Information System (INIS)
This report lists the observed mineral phases present in the Hanford tanks. This task was accomplished by performing a review of numerous reports that used experimental techniques including, but not limited to: x-ray diffraction, polarized light microscopy, scanning electron microscopy, transmission electron microscopy, energy dispersive spectroscopy, electron energy loss spectroscopy, and particle size distribution analyses. This report contains tables that can be used as a quick reference to identify the crystal phases observed in Hanford waste.
14. HANFORD WASTE MINEROLOGY REFERENCE REPORT
International Nuclear Information System (INIS)
This report lists the observed mineral phases present in the Hanford tanks. This task was accomplished by performing a review of numerous reports using experimental techniques including, but not limited to: x-ray diffraction, polarized light microscopy, scanning electron microscopy, transmission electron microscopy, energy dispersive spectroscopy, electron energy loss spectroscopy, and particle size distribution analyses. This report contains tables that can be used as a quick reference to identify the crystal phases present observed in Hanford waste.
15. Tv & video engineer's reference book
CERN Document Server
Jackson, K G
1991-01-01
TV & Video Engineer's Reference Book presents an extensive examination of the basic television standards and broadcasting spectrum. It discusses the fundamental concepts in analogue and digital circuit theory. It addresses studies in the engineering mathematics, formulas, and calculations. Some of the topics covered in the book are the conductors and insulators, passive components, alternating current circuits; broadcast transmission; radio frequency propagation; electron optics in cathode ray tube; color encoding and decoding systems; television transmitters; and remote supervision of unatten
16. HANFORD WASTE MINEROLOGY REFERENCE REPORT
Energy Technology Data Exchange (ETDEWEB)
DISSELKAMP RS
2010-06-18
This report lists the observed mineral phase phases present in the Hanford tanks. This task was accomplished by performing a review of numerous reports using experimental techniques including, but not limited to: x-ray diffraction, polarized light microscopy, scanning electron microscopy, transmission electron microscopy, energy dispersive spectroscopy, electron energy loss spectroscopy, and particle size distribution analyses. This report contains tables that can be used as a quick reference to identify the crystal phases present observed in Hanford waste.
17. Microgrid cyber security reference architecture.
Energy Technology Data Exchange (ETDEWEB)
Veitch, Cynthia K.; Henry, Jordan M.; Richardson, Bryan T.; Hart, Derek H.
2013-07-01
This document describes a microgrid cyber security reference architecture. First, we present a high-level concept of operations for a microgrid, including operational modes, necessary power actors, and the communication protocols typically employed. We then describe our motivation for designing a secure microgrid; in particular, we provide general network and industrial control system (ICS)-speci c vulnerabilities, a threat model, information assurance compliance concerns, and design criteria for a microgrid control system network. Our design approach addresses these concerns by segmenting the microgrid control system network into enclaves, grouping enclaves into functional domains, and describing actor communication using data exchange attributes. We describe cyber actors that can help mitigate potential vulnerabilities, in addition to performance bene ts and vulnerability mitigation that may be realized using this reference architecture. To illustrate our design approach, we present a notional a microgrid control system network implementation, including types of communica- tion occurring on that network, example data exchange attributes for actors in the network, an example of how the network can be segmented to create enclaves and functional domains, and how cyber actors can be used to enforce network segmentation and provide the neces- sary level of security. Finally, we describe areas of focus for the further development of the reference architecture.
18. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan
Energy Technology Data Exchange (ETDEWEB)
Suwazono, Yasushi, E-mail: [email protected] [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nogawa, Kazuhiro; Uetani, Mirei [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nakada, Satoru [Safety and Health Organization, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522 (Japan); Kido, Teruhiko [Department of Community Health Nursing, Kanazawa University School of Health Sciences, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Nakagawa, Hideaki [Department of Epidemiology and Public Health, Kanazawa Medical University, 1-1 Daigaku, Uchnada, Ishikawa 920-0293 (Japan)
2011-02-15
Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and {beta}2-microglobulin ({beta}2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for {beta}2-MG was 3.5 {mu}g/g creatinine in men and 3.7 {mu}g/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
19. Research and Application of Reference Information Model in Digital Medical Service Pattern%RIM模型在数字医疗服务模式中的研究应用
Institute of Scientific and Technical Information of China (English)
崔欣; 谢桦; 陈春妍; 孟群; 胡建平
2016-01-01
近年来,以物联网、移动互联网、云计算、大数据为代表的数字化技术为医疗服务创新提供了强大的技术保障,促使医疗服务从传统医疗业向医疗服务创新的新兴医疗服务转型。本文旨在应用参考信息模型(RIM)的框架,深入分析数字化技术对医疗服务模式的影响,并对传统模式改为存在的问题提出解决技术建议。%In recent years, the digital technology(such as the Internet, mobile Internet, cloud computing, big data) provides powerful technical guarantee for medical service innovation, and promotes the traditional medical service industry transformation. The purpose of this paper is to use the framework of reference information model (RIM), for analyzing the influence of digital technology on the medical service pattern in-depth, and puts forward suggestions on the solution of the problems for traditional pattern in technology.
20. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan
International Nuclear Information System (INIS)
Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
1. Future National Reference Frames for the United States
Science.gov (United States)
Stone, W. A.
2015-12-01
The mission of the National Oceanic and Atmospheric Administration's National Geodetic Survey (NGS) is "to define, maintain and provide access to the National Spatial Reference System (NSRS) to meet our nation's economic, social, and environmental needs." NSRS is the nation's system of latitude, longitude, elevation, and related geophysical and geodetic models and tools, which provides a consistent spatial reference framework for the broad spectrum of geoscientific applications and other positioning-related requirements. Technological developments - notably Global Navigation Satellite Systems (GNSS) - and user accuracy requirements necessitate that NGS endeavor to modernize the NSRS. Preparations are underway by NGS for a comprehensive NSRS makeover, to be completed in 2022 and delivered through a new generation of horizontal and vertical datums (reference frames), featuring unprecedented accuracy, repeatability, and efficiency of access. This evolution is outlined in the "National Geodetic Survey Ten-Year Strategic Plan, 2013-2023." This presentation will outline the motivation for this effort and the history, current status and planned evolution of NSRS. Fundamental to the delivery of the future reference frame paradigm are new geometric and geopotential (elevation) frameworks. The new geometric reference frame, realized through GNSS Continuously Operating Reference Stations (CORS), will replace the North American Datum of 1983 (NAD83) and will provide the nationwide framework for determination of latitude, longitude, and ellipsoid height. Designed to complement the new geometric reference frame, a corresponding geopotential reference frame - based on a national gravimetric geoid and replacing the North American Vertical Datum of 1988 (NAVD88) - will be developed and co-released. The gravimetric geoid - or definitional reference surface (zero elevation) - for the future geopotential reference frame will be built in part from airborne gravimetric data collected in
2. An Estimator for Attitude and Heading Reference Systems Based on Virtual Horizontal Reference
DEFF Research Database (Denmark)
Wang, Yunlong; Soltani, Mohsen; Hussain, Dil muhammed Akbar
2016-01-01
The output of the attitude determination systems suffers from large errors in case of accelerometer malfunctions. In this paper, an attitude estimator, based on Virtual Horizontal Reference (VHR), is designed for an Attitude Heading and Reference System (AHRS) to cope with this problem. The VHR...... makes it possible to correct the output of roll and pitch of the attitude estimator in the situations without accelerometer measurements, which cannot be achieved by the conventional nonlinear attitude estimator. The performance of VHR is tested both in simulation and hardware environment to validate...... their estimation performance. Moreover, the hardware test results are compared with that of a high-precision commercial AHRS to verify the estimation results. The implemented algorithm has shown high accuracy of attitude estimation that makes the system suitable for many applications....
3. Programming Windows® Embedded CE 60 Developer Reference
CERN Document Server
Boling, Douglas
2010-01-01
Get the popular, practical reference to developing small footprint applications-now updated for the Windows Embedded CE 6.0 kernel. Written by an authority on embedded application development, this book focuses in on core operating concepts and the Win32 API. It delivers extensive code samples and sample projects-helping you build proficiency creating innovative Windows applications for a new generation of devices. Discover how to: Create complex applications designed for the unique requirements of embedded devicesManage virtual memory, heaps, and the stack to minimize your memory footprintC
4. VERA: Virtual Enterprise Reference Architecture
DEFF Research Database (Denmark)
Vesterager, Johan; Tølle, Martin; Bernus, Peter
2003-01-01
. To prepare for this is a complex task, in fact all business, management and planning views, and related subject areas and activities, may be involved. In order to deal with this complexity in a systematic way and secure global understanding Globemen has developed a Virtual Enterprise Reference Architecture......Globalisation, outsourcing and customisation are main challenges of today not least for one-of-a-kind producers. A crucial competitive factor will be the ability rapidly to form customer focused virtual enterprises comprised of competencies from different partners by taking full advantage of ICT...
5. Oracle Data Dictionary Pocket Reference
CERN Document Server
Kreines, David
2003-01-01
If you work with Oracle, then you don't need to be told that the data dictionary is large and complex, and grows larger with each new Oracle release. It's one of the basic elements of the Oracle database you interact with regularly, but the sheer number of tables and views makes it difficult to remember which view you need, much less the name of the specific column. Want to make it simpler? The Oracle Data Dictionary Pocket Reference puts all the information you need right at your fingertips. Its handy and compact format lets you locate the table and view you need effortlessly without stoppin
6. Nuclear power a reference handbook
CERN Document Server
Henderson, Harry R
2014-01-01
In the 21st century, nuclear power has been identified as a viable alternative to traditional energy sources to stem global climate change, and condemned as risky to human health and environmentally irresponsible. Do the advantages of nuclear energy outweigh the risks, especially in light of the meltdown at the Fukushima plant in 2011? This guide provides both a comprehensive overview of this critical and controversial technology, presenting reference tools that include important facts and statistics, biographical profiles, a chronology, and a glossary. It covers major controversies and proposed solutions in detail and contains contributions by experts and important stakeholders that provide invaluable perspective on the topic.
7. Energy reference forecast for 2014
International Nuclear Information System (INIS)
The German Federal Ministry for Economic Affairs and Energy has commissioned three reputed institutions to prepare an energy reference forecast as well as a target scenario up to the year 2050. The results of this survey evidence a substantial need for political action if the goals of the Federal Government's energy concept are to be achieved as planned. In view of the wide range of interests among the players involved as well as the complexity of the demands facing the political leadership from diverse areas of life it appears unlikely that the targets laid down in the energy concept can be realised.
8. Reference atmospheres: VIRA II -Venus International Reference Atmosphere update.
Science.gov (United States)
Zasova, Ludmila
2012-07-01
VIRA I was started in 1982 (30 years ago) and published in1985 (ASR,v5,n11, 1985) by G. Keating, A. Kliore, and V. Moroz. The purpose was to produce a concise, descriptive model summarizing the physical properties of the atmosphere of Venus, which by then had been extensively observed by instruments on board the Venera and Pioneer space probes. VIRA was used by many scientists and engineers in their studies as referent standard of atmospheric data. Afterwards several missions have obtained new data. In particular the experiments on late Veneras and Venus Express. Experiments on board of VEX, working on the orbit for 6 years, provide new high quality data on atmospheric structure, clouds properties, dynamics, composition of the atmosphere, thermal balance, ionosphere. These new data will be used for VIRA update. Original data consists of 7 Chapters.(1 ) Models of the structure of the atmosphere of Venus from the surface to 100 km altitude, (2) Circulation of the atmosphere from surface to 100 km, (3) Particulate matter in the Venus atmosphere, (4) Models of Venus neutral upper atmosphere: structure and composition, (5) Composition of the atmosphere below 100 km altitude, (6) Solar and thermal radiation in the Venus atmosphere, (7) The Venus ionosphere. By 2002 Gerry Keating collected materials to update VIRA. But only two chapter were published: (1 ) Models of the structure of the atmosphere of Venus from the surface to 100 km altitude (Zasova et al, 2006, Cosmic Research, 44, N4), (5) Composition of the atmosphere below 100 km altitude (De Bergh et al. 2006, PSS). Both these chapters were based on the data, obtained before VEX. At the moment the structure of the original VIRA looks acceptable for VIRA II also, however, new Chapters may be added. At COSPAR 2014 in Moscow the session on Reference atmospheres (RAPS), may be proposed to continue discussion on VIRA, and start working on MIRA, and complete VIRA and publish (including CD) after COSPAR 2016 (or may be even
9. Portals Reference Implementation v. 1.0
Energy Technology Data Exchange (ETDEWEB)
2016-04-15
The Portals reference implementation is based on the Portals 4.X API, published by Sandia National Laboratories as a freely available public document. It is designed to be an implementation of the Portals Networking Application Programming Interface and is used by several other upper layer protocols like SHMEM, GASNet and MPI. It is implemented over existing networks, specifically Ethernet and InfiniBand networks. This implementation provides Portals networks functionality and serves as a software emulation of Portals compliant networking hardware. It can be used to develop software using the Portals API prior to the debut of Portals networking hardware, such as Bull’s BXI interconnect, as well as a substitute for portals hardware on development platforms that do not have Portals compliant hardware. The reference implementation provides new capabilities beyond that of a typical network, namely the ability to have messages matched in hardware in a way compatible with upper layer software such as MPI or SHMEM. It also offers methods of offloading network operations via triggered operations, which can be used to create offloaded collective operations. Specific details on the Portals API can be found at http://portals4.org.
10. User satisfaction with referrals at a collaborative virtual reference service Virtual reference services, Reference services, Referrals, User satisfaction
OpenAIRE
Nahyun Kwon
2006-01-01
Introduction. This study investigated unmonitored referrals in a nationwide, collaborative chat reference service. Specifically, it examined the extent to which questions are referred, the types of questions that are more likely to be referred than others, and the level of user satisfaction with the referrals in the collaborative chat reference service. Method. The data analysed for this study were 420 chat reference transaction transcripts along with corresponding online survey questionnai...
11. Application of reference gene for real-time RT-PCR%内参照基因在实时荧光定量RT-PCR检测中的应用
Institute of Scientific and Technical Information of China (English)
陈瑾歆; 陈建业; 李云祥
2011-01-01
目的 建立测定内参照β-actin表达量的实时荧光定量RT-PCR两步法检测方法.方法 根据Genbank 中人β-actin保守区域序列设计荧光PCR适用的引物和探针,构建质粒标准品建立标准曲线用于荧光PCR相对定量,检测荧光PCR方法的特异性和重复性.结果 建立了人β-actin实时荧光RT-PCR检测方法.结论 本实验建立的人β-actin表达实时荧光RT-PCR两步法检测方法特异性和重复性较好,为β-actin作为定量RT-PCR中内参照基因进行人其他功能基因和病原基因表达的定量分析奠定了基础.%Objective To establish a Taqman real-time RT-PCR 2-step assay for quantitative detection of the expression of human β-actin. Methods Specific primers and probes were designed for real-time RT-PCR according to β-actin cDNA sequence. Plasmid standard preparations were constructed by T-A clone, extracted and prepared for establishing standard curve used for relative quantification real-time RT-PCR. The expression levels of β-actin were measured by real-time RT-PCR in HepG2 cells to detect the specificity and reproducibility of real-time RT-PCR assay. Results An effective real-time RT-PCR assay was established for detecting β-actin mRNA expression levels.Conclusions The real-time PCR assay for the expression of β-actin is a sensitive, specific tool for quantitative assay of mRNA expression levels of other gene when using β-actin as reference genes.
12. Semantic Features for Classifying Referring Search Terms
Energy Technology Data Exchange (ETDEWEB)
May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.
2012-05-11
When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests countries of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods which are susceptible to the obfuscation of dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.
13. ACAA fly ash basics: quick reference card
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-07-01
Fly ash is a fine powdery material created when coal is burned to generate electricity. Before escaping into the environment via the utility stacks, the ash is collected and may be stored for beneficial uses or disposed of, if necessary. The use of fly ash provides environmental benefits, such as the conservation of natural resources, the reduction of greenhouse gas emissions and eliminating the needed for ash disposal in landfills. It is also a valuable mineral resource that is used in construction and manufacturing. Fly ash is used in the production of Portland cement, concrete, mortars and stuccos, manufactured aggregates along with various agricultural applications. As mineral filler, fly ash can be used for paints, shingles, carpet backing, plastics, metal castings and other purposes. This quick reference card is intended to provide the reader basic source, identification and composition, information specifically related to fly ash.
14. Shock wave science and technology reference library
CERN Document Server
2009-01-01
This book, as a volume of the Shock Wave Science and Technology Reference Library, is primarily concerned with detonation waves or compression shock waves in reactive heterogeneous media, including mixtures of solid, liquid and gas phases. The topics involve a variety of energy release and control processes in such media - a contemporary research field that has found wide applications in propulsion and power, hazard prevention as well as military engineering. The six extensive chapters contained in this volume are: - Spray Detonation (SB Murray and PA Thibault) - Detonation of Gas-Particle Flow (F Zhang) - Slurry Detonation (DL Frost and F Zhang) - Detonation of Metalized Composite Explosives (MF Gogulya and MA Brazhnikov) - Shock-Induced Solid-Solid Reactions and Detonations (YA Gordopolov, SS Batsanov, and VS Trofimov) - Shock Ignition of Particles (SM Frolov and AV Fedorov) Each chapter is self-contained and can be read independently of the others, though, they are thematically interrelated. They offer a t...
15. On combining reference data to improve imputation accuracy.
Directory of Open Access Journals (Sweden)
Jun Chen
Full Text Available Genotype imputation is an important tool in human genetics studies, which uses reference sets with known genotypes and prior knowledge on linkage disequilibrium and recombination rates to infer un-typed alleles for human genetic variations at a low cost. The reference sets used by current imputation approaches are based on HapMap data, and/or based on recently available next-generation sequencing (NGS data such as data generated by the 1000 Genomes Project. However, with different coverage and call rates for different NGS data sets, how to integrate NGS data sets of different accuracy as well as previously available reference data as references in imputation is not an easy task and has not been systematically investigated. In this study, we performed a comprehensive assessment of three strategies on using NGS data and previously available reference data in genotype imputation for both simulated data and empirical data, in order to obtain guidelines for optimal reference set construction. Briefly, we considered three strategies: strategy 1 uses one NGS data as a reference; strategy 2 imputes samples by using multiple individual data sets of different accuracy as independent references and then combines the imputed samples with samples based on the high accuracy reference selected when overlapping occurs; and strategy 3 combines multiple available data sets as a single reference after imputing each other. We used three software (MACH, IMPUTE2 and BEAGLE for assessing the performances of these three strategies. Our results show that strategy 2 and strategy 3 have higher imputation accuracy than strategy 1. Particularly, strategy 2 is the best strategy across all the conditions that we have investigated, producing the best accuracy of imputation for rare variant. Our study is helpful in guiding application of imputation methods in next generation association analyses.
16. New SCIAMACHY Solar Reference Spectrum
Science.gov (United States)
Hilbig, Tina; Bramstedt, Klaus; Weber, Mark; Burrows, John P.
2016-04-01
The Scanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) aboard ESA's ENVISAT satellite platform was operating from 2002 until 2012. It was designed to measure the radiance backscattered from the Earth and hence determine total columns and vertical profiles of atmospheric trace gas species. Furthermore SCIAMACHY performed daily sun observations via a diffuser. Solar spectra in the wavelength range from 212 nm to 1760 nm and two narrow bands from 1930 to 2040 nm and 2260 to 2380 nm are measured with a spectral resolution of 0,2 to 1,5 nm in the different channels. Recent developments in the SCIAMACHY calibration (e.g. a physical model of the scanner unit including degradation effects, and an on-ground to in-flight correction using the on-board white light source (WLS)) are used for the generation of a new SCIAMACHY solar reference spectrum as a first step towards a 10 years time series of solar spectral irradiance (SSI) data. For validation comparisons with other solar reference spectra are performed.
17. The soil reference shrinkage curve
CERN Document Server
Chertkov, V Y
2014-01-01
A recently proposed model showed how a clay shrinkage curve is transformed to the soil shrinkage curve at the soil clay content higher than a critical one. The objective of the present work was to generalize this model to the soil clay content lower a critical one. I investigated (i) the reference shrinkage curve, that is, one without cracks; (ii) the superficial layer of aggregates, with changed pore structure compared with the intraaggregate matrix; and (iii) soils with sufficiently low clay content where there are large pores inside the intraaggregate clay (so-called lacunar pores). The methodology is based on detail accounting for different contributions to the soil volume and water content during shrinkage. The key point is the calculation of the lacunar pore volume variance at shrinkage. The reference shrinkage curve is determined by eight physical soil parameters: (1) oven-dried specific volume; (2) maximum swelling water content; (3) mean solid density; (4) soil clay content; (5) oven-dried structural...
18. Reference ballistic imaging database performance.
Science.gov (United States)
De Kinder, Jan; Tulleners, Frederic; Thiebaut, Hugues
2004-03-10
Ballistic imaging databases allow law enforcement to link recovered cartridge cases to other crime scenes and to firearms. The success of these databases has led many to propose that all firearms in circulation be entered into a reference ballistic image database (RBID). To assess the performance of an RBID, we fired 4200 cartridge cases from 600 9mm Para Sig Sauer model P226 series pistols. Each pistol fired two Remington cartridges, one of which was imaged in the RBID, and five additional cartridges, consisting of Federal, Speer, Winchester, Wolf, and CCI brands. Randomly selected samples from the second series of Remington cartridge cases and from the five additional brands were then correlated against the RBID. Of the 32 cartridges of the same make correlated against the RBID, 72% ranked in the top 10 positions. Likewise, of the 160 cartridges of the five different brands correlated against the database, 21% ranked in the top 10 positions. Generally, the ranking position increased as the size of the RBID increased. We obtained similar results when we expanded the RBID to include firearms with the same class characteristics for breech face marks, firing pin impressions, and extractor marks. The results of our six queries against the RBID indicate that a reference ballistics image database of new guns is currently fraught with too many difficulties to be an effective and efficient law enforcement tool.
19. Nuclear science references coding manual
Energy Technology Data Exchange (ETDEWEB)
Ramavataram, S.; Dunford, C.L.
1996-08-01
This manual is intended as a guide to Nuclear Science References (NSR) compilers. The basic conventions followed at the National Nuclear Data Center (NNDC), which are compatible with the maintenance and updating of and retrieval from the Nuclear Science References (NSR) file, are outlined. In Section H, the structure of the NSR file such as the valid record identifiers, record contents, text fields as well as the major TOPICS for which are prepared are enumerated. Relevant comments regarding a new entry into the NSR file, assignment of , generation of and linkage characteristics are also given in Section II. In Section III, a brief definition of the Keyword abstract is given followed by specific examples; for each TOPIC, the criteria for inclusion of an article as an entry into the NSR file as well as coding procedures are described. Authors preparing Keyword abstracts either to be published in a Journal (e.g., Nucl. Phys. A) or to be sent directly to NNDC (e.g., Phys. Rev. C) should follow the illustrations in Section III. The scope of the literature covered at the NNDC, the categorization into Primary and Secondary sources, etc., is discussed in Section IV. Useful information regarding permitted character sets, recommended abbreviations, etc., is given under Section V as Appendices.
20. Event boundaries and anaphoric reference.
Science.gov (United States)
Thompson, Alexis N; Radvansky, Gabriel A
2016-06-01
The current study explored the finding that parsing a narrative into separate events impairs anaphor resolution. According to the Event Horizon Model, when a narrative event boundary is encountered, a new event model is created. Information associated with the prior event model is removed from working memory. So long as the event model containing the anaphor referent is currently being processed, this information should still be available when there is no narrative event boundary, even if reading has been disrupted by a working-memory-clearing distractor task. In those cases, readers may reactivate their prior event model, and anaphor resolution would not be affected. Alternatively, comprehension may not be as event oriented as this account suggests. Instead, any disruption of the contents of working memory during comprehension, event related or not, may be sufficient to disrupt anaphor resolution. In this case, reading comprehension would be more strongly guided by other, more basic language processing mechanisms and the event structure of the described events would play a more minor role. In the current experiments, participants were given stories to read in which we included, between the anaphor and its referent, either the presence of a narrative event boundary (Experiment 1) or a narrative event boundary along with a working-memory-clearing distractor task (Experiment 2). The results showed that anaphor resolution was affected by narrative event boundaries but not by a working-memory-clearing distractor task. This is interpreted as being consistent with the Event Horizon Model of event cognition. PMID:26452376
1. Instant Messaging Reference: How Does It Compare?
Science.gov (United States)
Desai, Christina M.
2003-01-01
Compares a digital reference service that uses instant messaging with traditional, face-to-face reference based on experiences at the Southern Illinois University library. Addresses differences in reference questions asked, changes in the reference transaction, student expectations, bibliographic instruction, and librarian attitudes and procedures…
2. Bibliographic databases: help in preparing reference lists.
Science.gov (United States)
Biancuzzo, M
1995-01-01
Typing bibliography references is time consuming. It is also frustrating to have to retype references when you submit a manuscript to journals using different reference styles. Now you don't need to retype them. Computer programs have been developed which help you reorganize your references to many different styles. This experienced nurse author compares several of these programs for you. PMID:7613563
3. 40 CFR 1042.910 - Reference materials.
Science.gov (United States)
2010-07-01
... reference as prescribed in 5 U.S.C. 552(a) and 1 CFR part 51. Anyone may inspect copies at the U.S. EPA, Air... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Reference materials. 1042.910 Section... Other Reference Information § 1042.910 Reference materials. Documents listed in this section have...
4. The Digital Reference Collection in Academic Libraries
OpenAIRE
Osorio, Nestor L.
2012-01-01
Reference services and reference collections in academic libraries are going through significant changes. In this paper, some of the issues prevalent today in building and maintaining digital reference collections will be discussed, such as: presentation and organization, marketing, use, and selection of digital reference resources.
5. 33 CFR 241.3 - References.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 241.3 Section 241.3... CONTROL COST-SHARING REQUIREMENTS UNDER THE ABILITY TO PAY PROVISION § 241.3 References. References cited..., Hyattsville, MD 20781-1102. References cited in paragraphs (d) and (e) may be obtained from the...
6. Superior cross-species reference genes: a blueberry case study.
Science.gov (United States)
Die, Jose V; Rowland, Lisa J
2013-01-01
The advent of affordable Next Generation Sequencing technologies has had major impact on studies of many crop species, where access to genomic technologies and genome-scale data sets has been extremely limited until now. The recent development of genomic resources in blueberry will enable the application of high throughput gene expression approaches that should relatively quickly increase our understanding of blueberry physiology. These studies, however, require a highly accurate and robust workflow and make necessary the identification of reference genes with high expression stability for correct target gene normalization. To create a set of superior reference genes for blueberry expression analyses, we mined a publicly available transcriptome data set from blueberry for orthologs to a set of Arabidopsis genes that showed the most stable expression in a developmental series. In total, the expression stability of 13 putative reference genes was evaluated by qPCR and a set of new references with high stability values across a developmental series in fruits and floral buds of blueberry were identified. We also demonstrated the need to use at least two, preferably three, reference genes to avoid inconsistencies in results, even when superior reference genes are used. The new references identified here provide a valuable resource for accurate normalization of gene expression in Vaccinium spp. and may be useful for other members of the Ericaceae family as well.
7. Management Models and Considerations for Virtual Reference
OpenAIRE
Murphy, Joe
2008-01-01
This column (from the ongoing column “Better Practices From the Field” in Science & Technology Libraries), explores the major management considerations for developing and maintaining virtual reference services. Topics include staffing models, choosing technologies, training staff, and workflows for diffuse reference services including text messaging reference, instant messaging reference, and reference via emerging library 2.0 and 3.0 venues. Lessons learned and suggested best practices are s...
8. An overview of digital reference services
OpenAIRE
Hemnani, Anita
2009-01-01
Digital reference service is an emerging trend of traditional reference service. Easily accessible digital reference service has become one of the hallmark of the library and information services. The paper highlights how new visage of traditional reference service is developing as a natural solution to keep pace with comprehensive technological environment. It discusses about the basic concepts, elements of digital reference service and give in detail modes, the advantages, limitations, and...
9. 46 CFR 27.102 - Incorporation by reference.
Science.gov (United States)
2010-10-01
... the Director of the Federal Register—in accordance with 5 U.S.C. 552(a) and 1 CFR part 51. To enforce... reference in this part and the sections affected are: American Boat and Yacht Council (ABYC), 613 Third... Commonwealth Drive, Warrendale, PA 15096-0001 SAE J1475-1984—Hydraulic Hose Fitting for Marine Applications...
10. CrocoPat 2.1 Introduction and Reference Manual
OpenAIRE
Beyer, Dirk; Noack, Andreas
2004-01-01
CrocoPat is an efficient, powerful and easy-to-use tool for manipulating relations of arbitrary arity, including directed graphs. This manual provides an introduction to and a reference for CrocoPat and its programming language RML. It includes several application examples, in particular from the analysis of structural models of software systems.
11. 29 CFR 507.1 - Cross-reference.
Science.gov (United States)
2010-07-01
... OCCUPATIONS AND AS FASHION MODELS § 507.1 Cross-reference. Regulations governing labor condition applications requirements for employers using nonimmigrants on H-1B specialty visas in specialty occupations and as fashion models are found at 20 CFR part 655, subparts H and I....
12. The Micropaleontological Reference Centers Network
Directory of Open Access Journals (Sweden)
David Lazarus
2006-09-01
Full Text Available The Micropaleontological Reference Centers (MRCscomprise large microfossil slide collections prepared from core samples obtained through the Deep Sea Drilling Project(DSDP and Ocean Drilling Program (ODP. The MRCs have been maintained for three decades, largely as a volunteer effort by a global network of curators at more than a dozen institutions (Fig.1, Table 1. They were originallyintended to provide a permanent micropaleontological archive for the DSDP; however, as their geographic and stratigraphic coverage has increased they have become increasingly valuable for research and teaching. This article describes the MRCs and their current usage, identifi es the need to maintain and improve the accuracy of the microfossil taxonomy upon which most DSDP and ODP geochronologyis based, and cites the potential for the future use of the MRCs by the Integrated Ocean Drilling Program (IODP.
13. New Concepts in Digital Reference
CERN Document Server
Lankes, R David
2009-01-01
Let us start with a simple scenario: a man asks a woman 'how high is Mount Everest?' The woman replies '29,029 feet'. Nothing could be simpler. Now let us suppose that rather than standing in a room, or sitting on a bus, the man is at his desk and the woman is 300 miles away with the conversation taking place using e-mail. Still simple? Certainly - it happens every day. So why all the bother about digital (virtual, electronic, chat, etc.) reference? If the man is a pilot flying over Mount Everest, the answer matters. If you are a lawyer going to court, the identity of the woman is very importa
14. Gender agreement and multiple referents
Science.gov (United States)
Finocchiaro, Chiara; Mahon, Bradford Z.; Caramazza, Alfonso
2010-01-01
We report a new pattern of usage in current, spoken Italian that has implications for both psycholinguistic models of language production and linguistic theories of language change. In Italian, gender agreement is mandatory for both singular and plural nouns. However, when two or more nouns of different grammatical gender appear in a conjoined noun phrase (NP), masculine plural agreement is required. In this study, we combined on-line and off-line methodologies in order to assess the mechanisms involved in gender marking in the context of multiple referents. The results of two pronoun production tasks showed that plural feminine agreement was significantly more difficult than plural masculine agreement. In a separate study using offline judgements of acceptability, we found that agreement violations in Italian are tolerated more readily in the case of feminine conjoined noun phrases (e.g., la mela e la banana ‘the:fem apple:fem and the: fem banana: fem’) than masculine conjoined noun phrases (e.g., il fiore e il libro ‘the:mas flower: mas and the:mas book:mas’). Implications of these results are discussed both at the level of functional architecture within the language production system and at the level of changes in language use.* PMID:21037930
15. Gender agreement and multiple referents.
Science.gov (United States)
Finocchiaro, Chiara; Mahon, Bradford Z; Caramazza, Alfonso
2008-01-01
We report a new pattern of usage in current, spoken Italian that has implications for both psycholinguistic models of language production and linguistic theories of language change. In Italian, gender agreement is mandatory for both singular and plural nouns. However, when two or more nouns of different grammatical gender appear in a conjoined noun phrase (NP), masculine plural agreement is required. In this study, we combined on-line and off-line methodologies in order to assess the mechanisms involved in gender marking in the context of multiple referents. The results of two pronoun production tasks showed that plural feminine agreement was significantly more difficult than plural masculine agreement. In a separate study using offline judgements of acceptability, we found that agreement violations in Italian are tolerated more readily in the case of feminine conjoined noun phrases (e.g., la mela e la banana 'the:fem apple:fem and the: fem banana: fem') than masculine conjoined noun phrases (e.g., il fiore e il libro 'the:mas flower: mas and the:mas book:mas'). Implications of these results are discussed both at the level of functional architecture within the language production system and at the level of changes in language use.
16. Generic Argillite/Shale Disposal Reference Case
Energy Technology Data Exchange (ETDEWEB)
Zheng, Liange; Colon, Carlos Jové; Bianchi, Marco; Birkholzer, Jens
2014-08-08
properties (parameters) used in these models are different, which not only make inter-model comparisons difficult, but also compromise the applicability of the lessons learned from one model to another model. The establishment of a reference case would therefore be helpful to set up a baseline for model development. A generic salt repository reference case was developed in Freeze et al. (2013) and the generic argillite repository reference case is presented in this report. The definition of a reference case requires the characterization of the waste inventory, waste form, waste package, repository layout, EBS backfill, host rock, and biosphere. This report mainly documents the processes in EBS bentonite and host rock that are potentially important for performance assessment and properties that are needed to describe these processes, with brief description other components such as waste inventory, waste form, waste package, repository layout, aquifer, and biosphere. A thorough description of the generic argillite repository reference case will be given in Jové Colon et al. (2014).
17. Reference materials and representative test materials: the nanotechnology case
Energy Technology Data Exchange (ETDEWEB)
Roebben, G., E-mail: [email protected] [Joint Research Centre of the European Commission, Institute for Reference Materials and Measurements (Belgium); Rasmussen, K. [Joint Research Centre of the European Commission, Institute for Health and Consumer Protection (Italy); Kestens, V.; Linsinger, T. P. J. [Joint Research Centre of the European Commission, Institute for Reference Materials and Measurements (Belgium); Rauscher, H. [Joint Research Centre of the European Commission, Institute for Health and Consumer Protection (Italy); Emons, H. [Joint Research Centre of the European Commission, Institute for Reference Materials and Measurements (Belgium); Stamm, H. [Joint Research Centre of the European Commission, Institute for Health and Consumer Protection (Italy)
2013-03-15
An increasing number of chemical, physical and biological tests are performed on manufactured nanomaterials for scientific and regulatory purposes. Existing test guidelines and measurement methods are not always directly applicable to or relevant for nanomaterials. Therefore, it is necessary to verify the use of the existing methods with nanomaterials, thereby identifying where modifications are needed, and where new methods need to be developed and validated. Efforts for verification, development and validation of methods as well as quality assurance of (routine) test results significantly benefit from the availability of suitable test and reference materials. This paper provides an overview of the existing types of reference materials and introduces a new class of test materials for which the term 'representative test material' is proposed. The three generic concepts of certified reference material, reference material(non-certified) and representative test material constitute a comprehensive system of benchmarks that can be used by all measurement and testing communities, regardless of their specific discipline. This paper illustrates this system with examples from the field of nanomaterials, including reference materials and representative test materials developed at the European Commission's Joint Research Centre, in particular at the Institute for Reference Materials and Measurements (IRMM), and at the Institute for Health and Consumer Protection (IHCP).
18. Determination of Reference Catalogs for Meridian Observations Using Statistical Method
Science.gov (United States)
Li, Z. Y.
2014-09-01
The meridian observational data are useful for developing high-precision planetary ephemerides of the solar system. These historical data are provided by the jet propulsion laboratory (JPL) or the Institut De Mecanique Celeste Et De Calcul Des Ephemerides (IMCCE). However, we find that the reference systems (realized by the fundamental catalogs FK3 (Third Fundamental Catalogue), FK4 (Fourth Fundamental Catalogue), and FK5 (Fifth Fundamental Catalogue), or Hipparcos), to which the observations are referred, are not given explicitly for some sets of data. The incompleteness of information prevents us from eliminating the systematic effects due to the different fundamental catalogs. The purpose of this paper is to specify clearly the reference catalogs of these observations with the problems in their records by using the JPL DE421 ephemeris. The data for the corresponding planets in the geocentric celestial reference system (GCRS) obtained from the DE421 are transformed to the apparent places with different hypothesis regarding the reference catalogs. Then the validations of the hypothesis are tested by two kinds of statistical quantities which are used to indicate the significance of difference between the original and transformed data series. As a result, this method is proved to be effective for specifying the reference catalogs, and the missed information is determined unambiguously. Finally these meridian data are transformed to the GCRS for further applications in the development of planetary ephemerides.
19. Reference materials and representative test materials: the nanotechnology case
International Nuclear Information System (INIS)
An increasing number of chemical, physical and biological tests are performed on manufactured nanomaterials for scientific and regulatory purposes. Existing test guidelines and measurement methods are not always directly applicable to or relevant for nanomaterials. Therefore, it is necessary to verify the use of the existing methods with nanomaterials, thereby identifying where modifications are needed, and where new methods need to be developed and validated. Efforts for verification, development and validation of methods as well as quality assurance of (routine) test results significantly benefit from the availability of suitable test and reference materials. This paper provides an overview of the existing types of reference materials and introduces a new class of test materials for which the term ‘representative test material’ is proposed. The three generic concepts of certified reference material, reference material(non-certified) and representative test material constitute a comprehensive system of benchmarks that can be used by all measurement and testing communities, regardless of their specific discipline. This paper illustrates this system with examples from the field of nanomaterials, including reference materials and representative test materials developed at the European Commission’s Joint Research Centre, in particular at the Institute for Reference Materials and Measurements (IRMM), and at the Institute for Health and Consumer Protection (IHCP).
20. Chinese-Mandarin Basic Course: References.
Science.gov (United States)
Defense Language Inst., Monterey, CA.
This is a collection of reference materials to be used with the Chinese-Mandarin Basic Course textbooks. This collection consists of information on romanization systems, indexes for reading and writing characters, and other tables for quick reference. (NCR)
1. CA Wellness Plan Data Reference Guide
Data.gov (United States)
U.S. Department of Health & Human Services — The purpose of the California Wellness Plan (CWP) Data Reference Guide (Reference Guide) is to provide access to the lowest-level data for each CWP Objective;...
2. Medical reference dosimetry using EPR measurements of alanine
DEFF Research Database (Denmark)
Helt-Hansen, Jakob; Rosendal, F.; Kofoed, I.M.;
2009-01-01
Background. Electron spin resonance (EPR) is used to determine the absorbed dose of alanine dosimeters exposed to clinical photon beams in a solid-water phantom. Alanine is potentially suitable for medical reference dosimetry, because of its near water equivalence over a wide energy spectrum, low...... methods the proposed algorithm can be applied without normalisation of phase shifts caused by changes in the g-value of the cavity. The study shows that alanine dosimetry is a suitable candidate for medical reference dosimetry especially for quality control applications....
3. Java Foundation Classes in a Nutshell Desktop Quick Reference
CERN Document Server
Flanagan, David
1999-01-01
Java Foundation Classes in a Nutshell is an indispensable quick reference for Java programmers who are writing applications that use graphics or graphical user interfaces. The author of the bestsellingJava in a Nutshell has written fast-paced introductions to the Java APIs that comprise the Java Foundation Classes (JFC), such as the Swing GUI components and Java 2D, so that you can start using these exciting new technologies right away. This book also includes O'Reilly's classic-style, quick-reference material for all of the classes in the javax.swing and java.awt packages and their numerous
4. New trends of library reference services
OpenAIRE
Ranasinghe, W.M.T.D.
2012-01-01
Reference service is considered as the heart of the library services. It is a service, facilitated by a reference librarian, which meets the information needs of users with desired information. Like many other library services, library reference service also has changed with the impact of emerging technologies and in par with changing social needs. The aim of this paper is to discuss some of these new trends of library reference services. These new trends are divided into four main areas name...
5. 40 CFR 1043.100 - Reference materials.
Science.gov (United States)
2010-07-01
... in 5 U.S.C. 552(a) and 1 CFR part 51. Anyone may inspect copies at the U.S. EPA, Air and Radiation... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Reference materials. 1043.100 Section... § 1043.100 Reference materials. Documents listed in this section have been incorporated by reference...
6. 33 CFR 242.3 - References.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 242.3 Section 242.3 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE FLOOD PLAIN MANAGEMENT SERVICES PROGRAM ESTABLISHMENT OF FEES FOR COST RECOVERY § 242.3 References. The references...
7. 44 CFR 59.4 - References.
Science.gov (United States)
2010-10-01
... 1954, as amended by the Housing and Community Development Act of 1974 (24 CFR 600.72). (12) Executive... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false References. 59.4 Section 59.4... References. (a) The following are statutory references for the National Flood Insurance Program, under...
8. 33 CFR 277.3 - References.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 277.3 Section 277.3... References. (a) Section 6, Pub. L. 647, 67th Congress, 21 June 1940, as amended (33 U.S.C. 516). (Appendix A...) Coast Guard reference: COMDT (G-OPT-3), Exemplification-Principles of Apportionment of Cost...
9. 5 CFR 294.401 - References.
Science.gov (United States)
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false References. 294.401 Section 294.401 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS AVAILABILITY OF OFFICIAL INFORMATION Cross References § 294.401 References. The table below provides assistance in locating other...
10. 40 CFR 312.11 - References.
Science.gov (United States)
2010-07-01
... 40 Protection of Environment 27 2010-07-01 2010-07-01 false References. 312.11 Section 312.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND... Definitions and References § 312.11 References. The following industry standards may be used to comply...
11. 32 CFR 861.1 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false References. 861.1 Section 861.1 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE AIRCRAFT DEPARTMENT OF DEFENSE COMMERCIAL AIR TRANSPORTATION QUALITY AND SAFETY REVIEW PROGRAM § 861.1 References. The following references apply to this...
12. Inertial reference frames and gravitational forces
International Nuclear Information System (INIS)
The connection between different definitions of inertial, i.e. fundamental, reference frames and the corresponding characterisation of gravitational fields by gravitational forces are considered from the point of view of their possible interpretation in university introductory courses. The introduction of a special class of reference frames, denoted 'mixed reference frames' is proposed and discussed. (author)
13. jQuery Pocket Reference
CERN Document Server
Flanagan, David
2010-01-01
"As someone who uses jQuery on a regular basis, it was surprising to discover how much of the library I'm not using. This book is indispensable for anyone who is serious about using jQuery for non-trivial applications."-- Raffaele Cecco, longtime developer of video games, including Cybernoid, Exolon, and Stormlord jQuery is the "write less, do more" JavaScript library. Its powerful features and ease of use have made it the most popular client-side JavaScript framework for the Web. This book is jQuery's trusty companion: the definitive "read less, learn more" guide to the library. jQuery P
14. User satisfaction with referrals at a collaborative virtual reference service Virtual reference services, Reference services, Referrals, User satisfaction
Directory of Open Access Journals (Sweden)
Nahyun Kwon
2006-01-01
Full Text Available Introduction. This study investigated unmonitored referrals in a nationwide, collaborative chat reference service. Specifically, it examined the extent to which questions are referred, the types of questions that are more likely to be referred than others, and the level of user satisfaction with the referrals in the collaborative chat reference service. Method. The data analysed for this study were 420 chat reference transaction transcripts along with corresponding online survey questionnaires submitted by the service users. Both sets of data were collected from an electronic archive of a southeastern state public library system that has participated in 24/7 Reference of the Metropolitan Cooperative Library System (MCLS. Results. Referrals in the collaborative chat reference service comprised approximately 30% of the total transactions. Circulation-related questions were the most often referred among all question types, possibly because of the inability of 'outside' librarians to access patron accounts. Most importantly, user satisfaction with referrals was found to be significantly lower than that of completed answers. Conclusion. The findings of this study addressed the importance of distinguishing two types of referrals: the expert research referrals conducive to collaborative virtual reference services; and the re-directional local referrals that increase unnecessary question traffic, thereby being detrimental to effective use of collaborative reference. Continuing efforts to conceptualize referrals in multiple dimensions are anticipated to fully grasp complex phenomena underlying referrals.
15. FENDL: International reference nuclear data library for fusion applications
International Nuclear Information System (INIS)
The IAEA nuclear data section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available fusion evaluated nuclear data library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides, FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format, FENDL/DS-1.0: neutron activation data for dosimetry by foil activation, FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p) extracted from ENDF/B-6 and processed, FENDL/E-1.0:data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1 as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well tested and validated nuclear data libraries in processed form of the FENDL-2 are expected to be ready by mid 1996 for use by the ITER team in the final phase of ITER EDA after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA nuclear data section online system through INTERNET. A grand total of 54 (sub)directories with 845 files with total size of about 2 million blocks or about 1 Gigabyte (1 block=512 bytes) of numerical data is currently available on-line. (orig.)
16. Reference data sets for testing metrology software
Science.gov (United States)
Kok, G. J. P.; Harris, P. M.; Smith, I. M.; Forbes, A. B.
2016-08-01
Many fields of metrology rely on calculations that are implemented in software. When such software is used to provide a measurement result, which is required to be traceable, it is necessary to recognise explicitly the software and show it to be operating correctly. An approach to testing the performance of calculation software is based on using reference pairs each of which comprises reference input data applied as input to the software and corresponding reference output data against which the output data of the software is compared. However, to make the reference pair useful for verifying and validating calculation software, information is needed about the numerical accuracy of the reference pair, the numerical sensitivity of the reference output data to perturbations in the reference input data, and the measurement uncertainty associated with the reference output data arising from simulated measurement uncertainty associated with the reference input data. Such information is important as a means to express quantitatively the quality of the reference pair as a numerical artefact to test calculation software, and as a basis for performance metrics to express quantitatively the numerical performance of software. In this paper these additional components of a reference data set are described, and various approaches to calculating them are discussed. An example, concerned with the calculation of the Gaussian (least-squares) best-fit plane to measured data, which is typical of calculations undertaken in coordinate metrology, is used to illustrate the ideas presented.
17. Fabricating defensible reference standards for the NDA lab
International Nuclear Information System (INIS)
Nondestructive analysis (NDA) is performed at the Oak Ridge Y-12 Plant in support of the enriched uranium operations. Process materials are analyzed using gamma ray- and neutron-based instruments including segmented gamma scanners, solution assay systems, and an active well coincidence counter. Process wastes are also discarded based on results of these measurements. Good analytical practice, as well as applicable regulations, mandates that these analytical methods be calibrated using reference materials traceable to the national standards base. Reference standards for NDA instruments are not commercially available owing to the large quantities of special nuclear materials involved. Instead, representative materials are selected from each process stream, then thoroughly characterized by methods that are traceable to the national standards base. This paper discusses the process materials to be analyzed, reference materials selected for calibrating each NDA instrument, and details of their characterization and fabrication into working calibrations standards. Example calibration curves are also presented. 4 figs
18. Fabricating defensible reference standards for the NDA lab
Energy Technology Data Exchange (ETDEWEB)
Ceo, R.N.; May, P.K. [Oak Ridge Y-12 Plant, TN (United States)
1997-11-01
Nondestructive analysis (NDA) is performed at the Oak Ridge Y-12 Plant in support of the enriched uranium operations. Process materials are analyzed using gamma ray- and neutron-based instruments including segmented gamma scanners, solution assay systems, and an active well coincidence counter. Process wastes are also discarded based on results of these measurements. Good analytical practice, as well as applicable regulations, mandates that these analytical methods be calibrated using reference materials traceable to the national standards base. Reference standards for NDA instruments are not commercially available owing to the large quantities of special nuclear materials involved. Instead, representative materials are selected from each process stream, then thoroughly characterized by methods that are traceable to the national standards base. This paper discusses the process materials to be analyzed, reference materials selected for calibrating each NDA instrument, and details of their characterization and fabrication into working calibrations standards. Example calibration curves are also presented. 4 figs.
19. Design Reference Missions for Deep-Space Optical Communication
Science.gov (United States)
Breidenthal, J.; Abraham, D.
2016-05-01
We examined the potential, but uncertain, NASA mission portfolio out to a time horizon of 20 years, to identify mission concepts that potentially could benefit from optical communication, considering their communications needs, the environments in which they would operate, and their notional size, weight, and power constraints. A set of 12 design reference missions was selected to represent the full range of potential missions. These design reference missions span the space of potential customer requirements, and encompass the wide range of applications that an optical ground segment might eventually be called upon to serve. The design reference missions encompass a range of orbit types, terminal sizes, and positions in the solar system that reveal the chief system performance variables of an optical ground segment, and may be used to enable assessments of the ability of alternative systems to meet various types of customer needs.
20. Emmetropic eyes: objective performance and clinical reference
Science.gov (United States)
Tepichín-Rodríguez, Eduardo; Cruz Felix, Angel S.; López-Olazagasti, Estela; Balderas-Mata, Sandra
2013-11-01
The application of the wavefront sensors to measuring the monochromatic aberrations of the normal human eyes has given a new insight in the objective understanding of its performance. The resultant wavefront aberration function can be applied to evaluate the image quality on the retina, which includes the analysis of the higher-order aberrations. Among others, and due to their well-known mathematical properties for circular apertures, the wavefront aberration function is most commonly represented in terms of the Zernike polynomials. The main idea is to have a clinical reference of the objective performance of a set of normal human eyes. However, the high-order aberrations in normal human eyes are different for each persoņ that can be interpreted as that there are many possible solutions for the objective performance of emmetropic eyes. When dealing with the Zernike coefficients and excluding the spherical aberration, higher-order aberrations have a tendency to have a zero mean value. Different proposals have been suggested in the literature to deal with this feature. Moreover, it has been also shown that there is an ethnic dependency in the magnitude of the aberrations. We present in this work the objective performance of a set of uncorrected Mexican eyes, and compare them with other ethnic results published in the literature.
1. WECC Variable Generation Planning Reference Book: Appendices
Energy Technology Data Exchange (ETDEWEB)
Makarov, Yuri V.; Du, Pengwei; Etingov, Pavel V.; Ma, Jian; Vyakaranam, Bharat
2013-05-13
The document titled “WECC Variable Generation Planning Reference Book”. This book is divided into two volumes; one is the main document (volume 1)and the other is appendices (volume 2). The main document is a collection of the best practices and the information regarding the application and impact of variables generation on power system planning. This volume (appendices) has additional information on the following topics: Probabilistic load flow problems. 2. Additional useful indices. 3. high-impact low-frequency (HILF) events. 4. Examples of wide-area nomograms. 5. Transmission line ratings, types of dynamic rating methods. 6. Relative costs per MW-km of different electric power transmission technologies. 7. Ultra-high voltage (UHV) transmission. 8.High voltage direct current (VSC-HVDC). 9. HVDC. 10. Rewiring of existing transmission lines. 11. High-temperature low sag (HTLS) conductors. 12. The direct method and energy functions for transient stability analysis in power systems. 13.Blackouts caused by voltage instability. 14. Algorithm for parameter continuation predictor-corrector methods. 15. Approximation techniques available for security regions. 16. Impacts of wind power on power system small signals stability. 17. FIDVR. 18. FACTS. 19. European planning standard and practices. 20. International experience in wind and solar energy sources. 21. Western Renewable Energy Zones (WREZ). 22. various energy storage technologies. 23. demand response. 24. BA consolidation and cooperation options. 25. generator power management requirements and 26. European planning guidelines.
2. Nuclear forensics support. Reference manual
International Nuclear Information System (INIS)
or Illicit Trafficking of Radioactive Material (IAEA-TECDOC-1313). It was quickly recognized that much can be learned from the analysis of reported cases of illicit trafficking. For example, what specifically could the material have been used for? Where was the material obtained: in stock, scrap or waste? Was the amount seized only a sample of a much more significant quantity? These and many other questions can be answered through detailed technical characterization of seized material samples. The combination of scientific methods used for this purpose is normally referred to as 'nuclear forensics', which has become an indispensable tool for use in law enforcement investigations of nuclear trafficking. This publication is based on a document entitled Model Action Plan for Nuclear Forensics and Nuclear Attribution (UCLR-TR-202675). The document is unique in that it brings together, for the first time, a concise but comprehensive description of the various tools and procedures of nuclear forensic investigations that was earlier available only in different areas of the scientific literature. It also has the merit of incorporating experience accumulated over the past decade by law enforcement agencies and nuclear forensics laboratories confronted with cases of illicit events involving nuclear or other radioactive material
3. Reference costs for power generation
International Nuclear Information System (INIS)
The first part of the 2003 study of reference costs for power generation has been completed. It was carried out by the General Directorate for Energy and Raw Materials (DGEMP) of the French Ministry of the Economy, Finance and Industry, with the collaboration of power-plant operators, construction firms and many other experts. A Review Committee of experts including economists (Forecasting Department, French Planning Office), qualified public figures, representatives of power-plant construction firms and operators, and non-governmental organization (NGO) experts, was consulted in the final phase. The study examines the costs of power generated by different methods (i.e. nuclear and fossil-fuel [gas-, coal-, and oil-fired] power plants) in the context of an industrial operation beginning in the year 2015. - The second part of the study relating to decentralized production methods (wind, photovoltaic, combined heat and power) is still in progress and will be presented at the beginning of next year. - 1. Study approach: The study is undertaken mainly from an investor's perspective and uses an 8% discount rate to evaluate the expenses and receipts from different years. In addition, the investment costs are considered explicitly in terms of interest during construction. - 2. Plant operating on a full-time basis (year-round): The following graph illustrates the main conclusions of the study for an effective operating period of 8000 hours. It can be seen that nuclear is more competitive than the other production methods for a year-round operation with an 8% discount rate applied to expenses. This competitiveness is even better if the costs related to greenhouse-gas (CO2) emission are taken into account in estimating the MWh cost price. Integrating the costs resulting from CO2 emissions by non-nuclear fuels (gas, coal), which will be compulsory as of 2004 with the transposition of European directives, increases the total cost per MWh of these power generation methods. Two
4. Charging circuit for a reference capacitor
Energy Technology Data Exchange (ETDEWEB)
Thurber, C.R.
1987-04-14
In a circuit adapted for use with a capacitor for storing a reference voltage supplied by a reference source, the improvement is described comprising: comparison means for comparing the voltage across the capacitor and the voltage of the reference source, and providing an output when the difference in the capacitor voltage and the voltage of the reference source exceeds a predetermined maximum; charge means responsive to the comparison means, for charging the capacitor when the difference in the capacitor voltage and the voltage of the reference source exceeds the predetermined maximum so as to reduce the difference, and switch means responsive to the comparison means output, for coupling the reference source to the capacitor to enable the reference source to directly charge the capacitor to a voltage equal to the reference voltage. The switch means is also for uncoupling the reference source from the capacitor while the capacitor comparison means compares the reference source and capacitor voltages and while the charge means is charging the capacitor.
5. Reference Signal Reconstruction and Its Impact on Detection Performance of WiFi-based Passive Radar
Directory of Open Access Journals (Sweden)
Rao Yunhua
2016-06-01
Full Text Available While Wireless Fidelity (WiFi-based passive radar can achieve high detection resolution in both the range and Doppler domain, it is difficult to extract the reference signal because of the complexities of its signal format and application scenarios. In this study, we analyze a typical application of WiFi-based passive radar and discuss different methods for reference signal extraction. Based on the format and features of WiFi signals, we propose a method for reference signal reconstruction, and analyze the influence of the reconstructed reference signal’s performance on detection. The results show that higher reference SNRs generate lower decoding bit rate errors and better clutter suppression with the reconstructed reference signal. Moreover, we propose a method for removing irrelevant signals to avoid the impact on target detection of a non-direct path signal in the receiving signal. The experimental results validate the efficacy of the proposed signal processing method.
6. Blood plasma reference material: a global resource for proteomic research.
Science.gov (United States)
Malm, Johan; Danmyr, Pia; Nilsson, Rolf; Appelqvist, Roger; Végvári, Akos; Marko-Varga, György
2013-07-01
There is an ever-increasing awareness and interest within the clinical research field, creating a large demand for blood fraction samples as well as other clinical samples. The translational research area is another field that is demanding for blood samples, used widely in proteomics, genomics, as well as metabolomics. Blood samples are globally the most common biological samples that are used in a broad variety of applications in life science. We hereby introduce a new reference blood plasma standard (heparin) that is aimed as a global resource for the proteomics community. We have developed these reference plasma standards by defining the Control group as those with C-reactive protein levels 30 mg/L. In these references we have used both newborn children 1-2 weeks, as well as youngsters 15-30 years, and middle aged 30-50 years, and elderly patients at the ages of 65+. In total, there were 80 patients in each group in the reference plasma pools. We provide data on the developments and characteristics of the reference blood plasma standards, as well as what is used by the team members at the respective laboratories. The standards have been evaluated by pilot sample processing in biobanking operations and are currently a resource that allows the Proteomic society to perform quantitative proteomic studies. By the use of high quality reference plasma samples, global initiatives, such as the Chromosome Human Proteome Project (C-HPP), will benefit as one scientific program when the entire human proteome is mapped and linked to human diseases. The plasma reference standards are a global resource and can be accessed upon request. PMID:23701512
7. Reference View Selection in DIBR-Based Multiview Coding.
Science.gov (United States)
Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice
2016-04-01
Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view, positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras. PMID
8. Reference frames in virtual spatial navigation are viewpoint dependent
Directory of Open Access Journals (Sweden)
Ágoston eTörök
2014-09-01
Full Text Available Spatial navigation in the mammalian brain relies on a cognitive map of the environment. Such cognitive maps enable us, for example, to take the optimal route from a given location to a known target. The formation of these maps is naturally influenced by our perception of the environment, meaning it is dependent on factors such as our viewpoint and choice of reference frame. Yet, it is unknown how these factors influence the construction of cognitive maps. Here, we evaluated how various combinations of viewpoints and reference frames affect subjects’ performance when they navigated in a bounded virtual environment without landmarks. We measured both their path length and time efficiency and found that (i ground perspective was associated with egocentric frame of reference, (ii aerial perspective was associated with allocentric frame of reference, (iii there was no appreciable performance difference between first and third person egocentric viewing positions and (iv while none of these effects were dependent on gender, males tended to perform better in general. Our study provides evidence that there are inherent associations between visual perspectives and cognitive reference frames. This result has implications about the mechanisms of path integration in the human brain and may also inspire designs of virtual reality applications. Lastly, we demonstrated the effective use of a tablet PC and spatial navigation tasks for studying spatial and cognitive aspects of human memory.
9. REFERENCE CASES FOR USE IN THE CEMENTITOUS PARTNERSHIP PROJECT
Energy Technology Data Exchange (ETDEWEB)
Langton, C.; Kosson, D.; Garrabrants, A.
2010-08-31
The Cementitious Barriers Partnership Project (CBP) is a multi-disciplinary, multi-institution cross cutting collaborative effort supported by the US Department of Energy (DOE) to develop a reasonable and credible set of tools to improve understanding and prediction of the structural, hydraulic and chemical performance of cementitious barriers used in nuclear applications. The period of performance is >100 years for operating facilities and > 1000 years for waste management. The CBP has defined a set of reference cases to provide the following functions: (i) a common set of system configurations to illustrate the methods and tools developed by the CBP, (ii) a common basis for evaluating methodology for uncertainty characterization, (iii) a common set of cases to develop a complete set of parameter and changes in parameters as a function of time and changing conditions, (iv) a basis for experiments and model validation, and (v) a basis for improving conceptual models and reducing model uncertainties. These reference cases include the following two reference disposal units and a reference storage unit: (i) a cementitious low activity waste form in a reinforced concrete disposal vault, (ii) a concrete vault containing a steel high-level waste tank filled with grout (closed high-level waste tank), and (iii) a spent nuclear fuel basin during operation. Each case provides a different set of desired performance characteristics and interfaces between materials and with the environment. Examples of concretes, grout fills and a cementitious waste form are identified for the relevant reference case configurations.
10. REFERENCE CASES FOR USE IN THE CEMENTITIOUS BARRIERS PARTNERSHIP
Energy Technology Data Exchange (ETDEWEB)
Langton, C
2009-01-06
The Cementitious Barriers Project (CBP) is a multidisciplinary cross cutting project initiated by the US Department of Energy (DOE) to develop a reasonable and credible set of tools to improve understanding and prediction of the structural, hydraulic and chemical performance of cementitious barriers used in nuclear applications. The period of performance is >100 years for operating facilities and > 1000 years for waste management. The CBP has defined a set of reference cases to provide the following functions: (1) a common set of system configurations to illustrate the methods and tools developed by the CBP, (2) a common basis for evaluating methodology for uncertainty characterization, (3) a common set of cases to develop a complete set of parameter and changes in parameters as a function of time and changing conditions, and (4) a basis for experiments and model validation, and (5) a basis for improving conceptual models and reducing model uncertainties. These reference cases include the following two reference disposal units and a reference storage unit: (1) a cementitious low activity waste form in a reinforced concrete disposal vault, (2) a concrete vault containing a steel high-level waste tank filled with grout (closed high-level waste tank), and (3) a spent nuclear fuel basin during operation. Each case provides a different set of desired performance characteristics and interfaces between materials and with the environment. Examples of concretes, grout fills and a cementitious waste form are identified for the relevant reference case configurations.
11. Pseudo-Reference-Based Assembly of Vertebrate Transcriptomes
Directory of Open Access Journals (Sweden)
Kyoungwoo Nam
2016-02-01
Full Text Available High-throughput RNA sequencing (RNA-seq provides a comprehensive picture of the transcriptome, including the identity, structure, quantity, and variability of expressed transcripts in cells, through the assembly of sequenced short RNA-seq reads. Although the reference-based approach guarantees the high quality of the resulting transcriptome, this approach is only applicable when the relevant reference genome is present. Here, we developed a pseudo-reference-based assembly (PRA that reconstructs a transcriptome based on a linear regression function of the optimized mapping parameters and genetic distances of the closest species. Using the linear model, we reconstructed transcriptomes of four different aves, the white leg horn, turkey, duck, and zebra finch, with the Gallus gallus genome as a pseudo-reference, and of three primates, the chimpanzee, gorilla, and macaque, with the human genome as a pseudo-reference. The resulting transcriptomes show that the PRAs outperformed the de novo approach for species with within about 10% mutation rate among orthologous transcriptomes, enough to cover distantly related species as far as chicken and duck. Taken together, we suggest that the PRA method can be used as a tool for reconstructing transcriptome maps of vertebrates whose genomes have not yet been sequenced.
12. Pseudo-Reference-Based Assembly of Vertebrate Transcriptomes.
Science.gov (United States)
Nam, Kyoungwoo; Jeong, Heesu; Nam, Jin-Wu
2016-01-01
High-throughput RNA sequencing (RNA-seq) provides a comprehensive picture of the transcriptome, including the identity, structure, quantity, and variability of expressed transcripts in cells, through the assembly of sequenced short RNA-seq reads. Although the reference-based approach guarantees the high quality of the resulting transcriptome, this approach is only applicable when the relevant reference genome is present. Here, we developed a pseudo-reference-based assembly (PRA) that reconstructs a transcriptome based on a linear regression function of the optimized mapping parameters and genetic distances of the closest species. Using the linear model, we reconstructed transcriptomes of four different aves, the white leg horn, turkey, duck, and zebra finch, with the Gallus gallus genome as a pseudo-reference, and of three primates, the chimpanzee, gorilla, and macaque, with the human genome as a pseudo-reference. The resulting transcriptomes show that the PRAs outperformed the de novo approach for species with within about 10% mutation rate among orthologous transcriptomes, enough to cover distantly related species as far as chicken and duck. Taken together, we suggest that the PRA method can be used as a tool for reconstructing transcriptome maps of vertebrates whose genomes have not yet been sequenced. PMID:26927182
13. REFERENCE CASES FOR USE IN THE CEMENTITIOUS BARRIERS PARTNERSHIP
International Nuclear Information System (INIS)
The Cementitious Barriers Project (CBP) is a multidisciplinary cross cutting project initiated by the US Department of Energy (DOE) to develop a reasonable and credible set of tools to improve understanding and prediction of the structural, hydraulic and chemical performance of cementitious barriers used in nuclear applications. The period of performance is >100 years for operating facilities and > 1000 years for waste management. The CBP has defined a set of reference cases to provide the following functions: (1) a common set of system configurations to illustrate the methods and tools developed by the CBP, (2) a common basis for evaluating methodology for uncertainty characterization, (3) a common set of cases to develop a complete set of parameter and changes in parameters as a function of time and changing conditions, and (4) a basis for experiments and model validation, and (5) a basis for improving conceptual models and reducing model uncertainties. These reference cases include the following two reference disposal units and a reference storage unit: (1) a cementitious low activity waste form in a reinforced concrete disposal vault, (2) a concrete vault containing a steel high-level waste tank filled with grout (closed high-level waste tank), and (3) a spent nuclear fuel basin during operation. Each case provides a different set of desired performance characteristics and interfaces between materials and with the environment. Examples of concretes, grout fills and a cementitious waste form are identified for the relevant reference case configurations
14. Survey of reference materials. V. 2: Environmentally related reference materials for trace elements, nuclides and microcontaminants
International Nuclear Information System (INIS)
The present report presently contains over 250 reference materials with trace element and organic contaminant information on fuel, geological and mineral, anthropogenic disposal, soil reference and miscellaneous reference materials. Not included in the current report is information on most biological and environmental reference materials with trace element, stable isotope, radioisotope and organic contaminant information. 8 refs, tabs
15. A transcriptional reference map of defence hormone responses in potato
OpenAIRE
Lea Wiesel; Davis, Jayne L.; Linda Milne; Vanesa Redondo Fernandez; Herold, Miriam B.; Jill Middlefell Williams; Jenny Morris; Hedley, Pete E; Brian Harrower; Newton, Adrian C.; Birch, Paul R. J.; Gilroy, Eleanor M.; Ingo Hein
2015-01-01
Phytohormones are involved in diverse aspects of plant life including the regulation of plant growth, development and reproduction, as well as governing biotic and abiotic stress responses. We have generated a comprehensive transcriptional reference map of the early potato responses to exogenous application of the defence hormones abscisic acid, brassinolides (applied as epibrassinolide), ethylene (applied as the ethylene precursor aminocyclopropanecarboxylic acid), salicylic acid and jasmoni...
16. MPEG2 video parameter and no reference PSNR estimation
DEFF Research Database (Denmark)
Li, Huiying; Forchhammer, Søren
2009-01-01
to the MPEG stream. This may be used in systems and applications where the coded stream is not accessible. Detection of MPEG I-frames and DCT (discrete cosine transform) block size is presented. For the I-frames, the quantization parameters are estimated. Combining these with statistics of the reconstructed...... DCT coefficients, the PSNR is estimated from the decoded video without reference images. Tests on decoded fixed rate MPEG2 sequences demonstrate perfect detection rates and good performance of the PSNR estimation....
17. The Nuclear Science References (NSR) Database and Web Retrieval System
OpenAIRE
PRITYCHENKO B.; Betak, E.; Kellett, M. A.; B. Singh; Totans, J.
2011-01-01
The Nuclear Science References (NSR) database together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance...
18. Bisphenol A polycarbonate as a reference material
Science.gov (United States)
Hilado, C. J.; Cumming, H. J.; Williams, J. B.
1977-01-01
Test methods require reference materials to standardize and maintain quality control. Various materials have been evaluated as possible reference materials, including a sample of bisphenol A polycarbonate without additives. Screening tests for relative toxicity under various experimental conditions were performed using male mice exposed to pyrolysis effluents over a 200-800 C temperature range. It was found that the bisphenol A polycarbonate served as a suitable reference material as it is available in large quantities, and does not significantly change with time.
19. A periodic pricing model considering reference effect
OpenAIRE
Yang Hui; Zhang Chen
2016-01-01
The purpose of this paper is to investigate the optimal pricing strategies with reference effects in revenue management settings. We firstly propose a static pricing model with the properties of stochastic demand, finite horizon and fixed capacity, and prove the existence and uniqueness of the solution. Secondly, we extend the fixed pricing model to a periodic pricing model and incorporate a memory-based reference price in the demand function to investigate how the reference effect impacts on...
20. Reference group influence on digital advertising effectiveness
OpenAIRE
Taube, Valtteri
2015-01-01
Reference groups have been researched solely based on laboratory experiments. Also there has been multiple demands in the field of marketing research for replication studies. This thesis will be answering the need and simultaneously making use of new advertisement tech-nologies. Purpose of the thesis is to increase external validity of reference group research by making conceptual replication of White & Dahl's (2006) study in field setting. Reference groups are groups that consumers use t...
1. Reference-Dependent Preferences : Models and Experiments
OpenAIRE
Tenberge, Maximilian
2010-01-01
In economic situations people form expectations prior to their decisions. These expectations represent a reference point and exert a strong influence on decision making and preferences. This paper surveys the theory of reference-dependent preferences. I will summarize the theoretical framework and present experimental evidence in support of preferences being reference-dependent. Additionally, I will address the still open and fundamental questions of how expectations are formed. That is: How ...
2. Phase-sensitive multiple reference optical coherence tomography (Conference Presentation)
Science.gov (United States)
Dsouza, Roshan I.; Subhash, Hrebesh; Neuhaus, Kai; Hogan, Josh; Wilson, Carol; Leahy, Martin
2016-03-01
Multiple reference OCT (MR-OCT) is a recently developed novel time-domain OCT platform based on a miniature reference arm optical delay, which utilizes a single miniature actuator and a partial mirror to generate recirculating optical delay for extended axial-scan range. MR-OCT technology promises to fit into a robust and cost-effective design, compatible with integration into consumer-level devices for addressing wide applications in mobile healthcare and biometry applications. Using conventional intensity based OCT processing techniques, the high-resolution structural imaging capability of MR-OCT has been recently demonstrated for various applications including in vivo human samples. In this study, we demonstrate the feasibility of implementing phase based processing with MR-OCT for various functional applications such as Doppler imaging and sensing of blood vessels, and for tissue vibrography applications. The MR-OCT system operates at 1310nm with a spatial resolution of ~26 µm and an axial scan rate of 600Hz. Initial studies show a displacement-sensitivity of ~20 nm to ~120 nm for the first 1 to 9 orders of reflections, respectively with a mirror as test-sample. The corresponding minimum resolvable velocity for these orders are ~2.3 µm/sec and ~15 µm/sec respectively. Data from a chick chorioallantoic membrane (CAM) model will be shown to demonstrate the feasibility of MR-OCT for imaging in-vivo blood flow.
3. Genetics Home Reference: pseudohypoaldosteronism type 2
Science.gov (United States)
... high levels of chloride (hyperchloremia) and acid (metabolic acidosis) in their blood (together, referred to as hyperchloremic metabolic acidosis). People with hyperkalemia, hyperchloremia, and metabolic acidosis can ...
4. 76 FR 53492 - South Carolina Public Service Authority (Also Referred to as Santee Cooper); Combined Licenses...
Science.gov (United States)
2011-08-26
... COMMISSION South Carolina Public Service Authority (Also Referred to as Santee Cooper); Combined Licenses for... as Santee Cooper), for two Title 10 of the Code of Federal Regulations (10 CFR) part 52 combined... Service Authority (Also Referred to as Santee Cooper) Application for the Virgil C. Summer Nuclear...
5. BLOCKAGE 2.5 reference manual
Energy Technology Data Exchange (ETDEWEB)
Shaffer, C.J.; Brideau, J.; Rao, D.V. [Science and Engineering Associates, Inc., Albuquerque, NM (United States); Bernahl, W. [Software Edge, Inc., Riverwoods, IL (United States)
1996-12-01
The BLOCKAGE 2.5 code was developed by the US Nuclear Regulatory Commission (NRC) as a tool to evaluate license compliance regarding the design of suction strainers for emergency core cooling system (ECCS) pumps in boiling water reactors (BWR) as required by NRC Bulletin 96-03, Potential Plugging of Emergency Core Cooling Suction Strainers by Debris in Boiling Water Reactors. Science and Engineering Associates, Inc. (SEA) and Software Edge, Inc. (SE) developed this PC-based code. The instructions to effectively use this code to evaluate the potential of debris to sufficiently block a pump suction strainer such that a pump could lose NPSH margin was documented in a Users Manual (NRC, NUREG/CR-6370). The Reference Manual contains additional information that supports the use of BLOCKAGE 2.5. It contains descriptions of the analytical models contained in the code, programmer guides illustrating the structure of the code, and summaries of coding verification and model validation exercises that were performed to ensure that the analytical models were correctly coded and applicable to the evaluation of BWR pump suction strainers. The BLOCKAGE code was developed by SEA and programmed in FORTRAN as a code that can be executed from the DOS level on a PC. A graphical users interface (GUI) was then developed by SEA to make BLOCKAGE easier to use and to provide graphical output capability. The GUI was programmed in the C language. The user has the option of executing BLOCKAGE 2.5 with the GUI or from the DOS level and the Users Manual provides instruction for both methods of execution.
6. 40 CFR 260.11 - References.
Science.gov (United States)
2010-07-01
... 40 Protection of Environment 25 2010-07-01 2010-07-01 false References. 260.11 Section 260.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Definitions § 260.11 References. (a) When used in parts 260 through...
7. Aspects of Reference in Figurative Language.
Science.gov (United States)
Pankhurst, Anne
1995-01-01
This study considers some problems of reference found in figurative language, particularly in metaphor and metonymy. Analysis is based on the notion that the effects communicated by figurative language depend to a large extent on reference to more than one concept, experience, or entity, and that the presence of multiple potential referents…
8. Empiricist Semantics and Indeterminacies of Reference
NARCIS (Netherlands)
Douven, I.
2008-01-01
In concert with his overall empiricist outlook, Quine urges 'to approach semantical matters in the empirical spirit of natural science' ([21,p.8]). Among many other things this means that theories of reference, or interpretations-i.e.,joint ascriptions of referents to the words a certain speaker or
9. Writing references and using citation management software.
Science.gov (United States)
Sungur, Mukadder Orhan; Seyhan, Tülay Özkan
2013-09-01
The correct citation of references is obligatory to gain scientific credibility, to honor the original ideas of previous authors and to avoid plagiarism. Currently, researchers can easily find, cite and store references using citation management software. In this review, two popular citation management software programs (EndNote and Mendeley) are summarized.
10. Writing references and using citation management software
OpenAIRE
Sungur, Mukadder Orhan; Seyhan, Tülay Özkan
2013-01-01
The correct citation of references is obligatory to gain scientific credibility, to honor the original ideas of previous authors and to avoid plagiarism. Currently, researchers can easily find, cite and store references using citation management software. In this review, two popular citation management software programs (EndNote and Mendeley) are summarized.
11. Health physics research reactor reference dosimetry
International Nuclear Information System (INIS)
Reference neutron dosimetry is developed for the Health Physics Research Reactor (HPRR) in the new operational configuration directly above its storage pit. This operational change was physically made early in CY 1985. The new reference dosimetry considered in this document is referred to as the 1986 HPRR reference dosimetry and it replaces any and all HPRR reference documents or papers issued prior to 1986. Reference dosimetry is developed for the unshielded HPRR as well as for the reactor with each of five different shield types and configurations. The reference dosimetry is presented in terms of three different dose and six different dose equivalent reporting conventions. These reporting conventions cover most of those in current use by dosimetrists worldwide. In addition to the reference neutron dosimetry, this document contains other useful dosimetry-related data for the HPRR in its new configuration. These data include dose-distance measurements and calculations, gamma dose measurements, neutron-to-gamma ratios, ''9-to-3 inch'' ratios, threshold detector unit measurements, 56-group neutron energy spectra, sulfur fluence measurements, and details concerning HPRR shields. 26 refs., 11 figs., 31 tabs
12. Accuracy of References in Five Entomology Journals.
Science.gov (United States)
Kristof, Cynthia
ln this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…
13. 23 CFR 650.317 - Reference manuals.
Science.gov (United States)
2010-04-01
...(a) and 1 CFR part 51. These materials are incorporated as they exist on the date of the approval... 23 Highways 1 2010-04-01 2010-04-01 false Reference manuals. 650.317 Section 650.317 Highways..., STRUCTURES, AND HYDRAULICS National Bridge Inspection Standards § 650.317 Reference manuals. (a)...
14. Lazy reference counting for the Microgrid
NARCIS (Netherlands)
R. Poss; C. Grelck; S. Herhut; S.-B. Scholz
2012-01-01
This papers revisits non-deferred reference counting, a common technique to ensure that potentially shared large heap objects can be reused safely when they are both input and output to computations. Traditionally, thread-safe reference counting exploit implicit memory-based communication of counter
15. 40 CFR 90.7 - Reference materials.
Science.gov (United States)
2010-07-01
... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. Copies may be inspected at U.S. EPA Air and Radiation... 19103. Document number and name 40 CFR part 90 reference ASTM D86-93: Standard Test Method for...., Warrendale, PA 15096-0001. Document number and name 40 CFR part 90 reference SAE J1930 September...
16. 47 CFR 15.605 - Cross reference.
Science.gov (United States)
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Cross reference. 15.605 Section 15.605 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Access Broadband Over Power Line (Access BPL) § 15.605 Cross reference. (a) The provisions of subparts A and B of this part apply to...
17. Spatial reference in multiple object tracking.
Science.gov (United States)
Jahn, Georg; Papenmeier, Frank; Meyerhoff, Hauke S; Huff, Markus
2012-01-01
Spatial reference in multiple object tracking is available from configurations of dynamic objects and static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes, in which white balls moved on a square floor plane. The floor plane was either visible thus providing static spatial reference or it was invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference and static spatial reference was not advantageous. In contrast, with abrupt scene rotations of 20°, static spatial reference supported in relocating targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms as static reference objects provided no additional benefit either, even if targets were centered on these forms at the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from static local background.
18. Bonus payments and reference point violations
OpenAIRE
Ockenfels, Axel; Sliwka, Dirk; Werner, Peter
2010-01-01
We investigate how bonus payments affect satisfaction and performance of managers in a large, multinational company. We find that falling behind a naturally occurring reference point for bonus comparisons reduces satisfaction and subsequent performance. The effects tend to be mitigated if information about one's relative standing towards the reference point is withheld.
19. 16 CFR 1207.11 - References.
Science.gov (United States)
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false References. 1207.11 Section 1207.11 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS SAFETY STANDARD FOR SWIMMING POOL SLIDES § 1207.11 References. (a) “Statistical Abstract of the United States...
20. 33 CFR 279.3 - References.
Science.gov (United States)
2010-07-01
... (33 CFR part 290). ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 279.3 Section 279.3... USE: ESTABLISHMENT OF OBJECTIVES § 279.3 References. (a) Pub. L. 89-72, “Federal Water...
1. 32 CFR 552.107 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 552.107 Section 552.107 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY MILITARY RESERVATIONS AND NATIONAL... References. (a) AR 190-5 (Motor Vehicle Traffic Supervision) (b) AR 190-52 (Countering Terrorism and...
2. 32 CFR 2700.1 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false References. 2700.1 Section 2700.1 National Defense Other Regulations Relating to National Defense OFFICE FOR MICRONESIAN STATUS NEGOTIATIONS SECURITY INFORMATION REGULATIONS Introduction § 2700.1 References. (a) Executive Order 12065, “National...
3. 33 CFR 274.3 - References.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 274.3 Section 274.3 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PEST CONTROL PROGRAM FOR CIVIL WORKS PROJECTS Project Operation § 274.3 References. (a) Pub. L. 92-516,...
4. 32 CFR 634.2 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true References. 634.2 Section 634.2 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Introduction § 634.2 References. Required and...
5. 33 CFR 236.3 - References.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 236.3 Section 236.3 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE WATER... QUALITY § 236.3 References. (a) PL 89-72 (b) ER 1105-2-10 (c) ER 1105-2-200...
6. 32 CFR 552.86 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 552.86 Section 552.86 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY MILITARY RESERVATIONS AND NATIONAL CEMETERIES REGULATIONS AFFECTING MILITARY RESERVATIONS Fort Lewis Land Use Policy § 552.86 References. (a)...
7. 32 CFR 552.182 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 552.182 Section 552.182 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY MILITARY RESERVATIONS AND NATIONAL... Facilities § 552.182 References. Publications referenced in this section may be reviewed in the...
8. 4 CFR 2.2 - References.
Science.gov (United States)
2010-01-01
... 4 Accounts 1 2010-01-01 2010-01-01 false References. 2.2 Section 2.2 Accounts GOVERNMENT ACCOUNTABILITY OFFICE PERSONNEL SYSTEM PURPOSE AND GENERAL PROVISION § 2.2 References. (a) Subchapters III and IV of Chapter 7 of Title 31 U.S.C. (b) Title 5, United States Code....
9. 33 CFR 238.3 - References.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 238.3 Section 238.3 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE WATER RESOURCES POLICIES AND AUTHORITIES: FLOOD DAMAGE REDUCTION MEASURES IN URBAN AREAS § 238.3 References....
10. 32 CFR 552.113 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 552.113 Section 552.113 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY MILITARY RESERVATIONS AND NATIONAL...-Fort Lewis, Washington § 552.113 References. This regulation is to be used in conjunction with...
11. 32 CFR 2103.1 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false References. 2103.1 Section 2103.1 National Defense Other Regulations Relating to National Defense NATIONAL SECURITY COUNCIL REGULATIONS TO IMPLEMENT... § 2103.1 References. (a) Executive Order 12065, “National Security Information,” dated June 28, 1978....
12. 15 CFR 2008.1 - References.
Science.gov (United States)
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false References. 2008.1 Section 2008.1 Commerce and Foreign Trade Regulations Relating to Foreign Trade Agreements OFFICE OF THE UNITED STATES... REPRESENTATIVE General Provisions § 2008.1 References. (a) Executive Order 12065, “National Security...
13. 40 CFR 270.6 - References.
Science.gov (United States)
2010-07-01
... 1 CFR part 51. These materials are incorporated as they exist on the date of approval and a notice... 40 Protection of Environment 26 2010-07-01 2010-07-01 false References. 270.6 Section 270.6... ADMINISTERED PERMIT PROGRAMS: THE HAZARDOUS WASTE PERMIT PROGRAM General Information § 270.6 References....
14. 33 CFR 239.3 - References.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false References. 239.3 Section 239.3 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE WATER... References. (a) Executive Order 11988, Floodplain Management, 24 May 1977. (b) ER 1105-2-200. (c) ER...
15. 32 CFR 552.142 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 552.142 Section 552.142 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY MILITARY RESERVATIONS AND NATIONAL... Fort Benjamin Harrison, Indiana § 552.142 References. Required and related publications are...
16. 32 CFR 518.2 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 518.2 Section 518.2 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS THE FREEDOM OF INFORMATION ACT PROGRAM General Provisions § 518.2 References. Required and...
17. 28 CFR 63.3 - References.
Science.gov (United States)
2010-07-01
....) and NFIP criteria (44 CFR part 59 et seq.). (d) Flood Disaster Protection Act of 1973 (Pub. L. 93-234... 28 Judicial Administration 2 2010-07-01 2010-07-01 false References. 63.3 Section 63.3 Judicial... References. (a) Unified National Program for Floodplain Management, Water Resources Council, which...
18. 32 CFR 1290.1 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false References. 1290.1 Section 1290.1 National Defense Other Regulations Relating to National Defense DEFENSE LOGISTICS AGENCY MISCELLANEOUS PREPARING... References. (a) DLAR 5720.1/AR 190-5/OPNAVINST 11200.5B/AFR 125-14/MCO 5110.1B, Motor Vehicle...
19. 24 CFR 35.1310 - References.
Science.gov (United States)
2010-04-01
... by EPA under 40 CFR 745.324 to administer and enforce lead-based paint programs. ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false References. 35.1310 Section 35.1310... Hazard Evaluation and Hazard Reduction Activities § 35.1310 References. Further guidance...
20. 32 CFR 651.2 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true References. 651.2 Section 651.2 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) ENVIRONMENTAL QUALITY ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) Introduction § 651.2 References. Required and related publications...
1. 32 CFR 625.3 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 625.3 Section 625.3 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY SUPPLIES AND EQUIPMENT SURFACE TRANSPORTATION-ADMINISTRATIVE VEHICLE MANAGEMENT § 625.3 References. (a) Title 31, U.S. Code, section 638. (b)...
2. 36 CFR 328.3 - References.
Science.gov (United States)
2010-07-01
... ENGINEERS § 328.3 References. (a) Title 36 CFR, part 327, Rules and Regulations Governing Public Use of... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false References. 328.3 Section 328.3 Parks, Forests, and Public Property CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY REGULATION...
3. 32 CFR 552.161 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 3 2010-07-01 2010-07-01 true References. 552.161 Section 552.161 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY MILITARY RESERVATIONS AND NATIONAL..., and Camp Bonneville § 552.161 References. See appendix E to this subpart....
4. 32 CFR 865.101 - References.
Science.gov (United States)
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false References. 865.101 Section 865.101 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE ORGANIZATION AND MISSION-GENERAL PERSONNEL REVIEW BOARDS Air Force Discharge Review Board § 865.101 References. (a) Title 10 U.S.C.,...
5. Anaphoric Reference in Creoles and Noncreoles.
Science.gov (United States)
Escure, Genevieve
1993-01-01
Three categories of topic referents (nominal, pronominal, and periphrastic) are identified in 27 Belizean texts and 12 American texts, and the effects of referent choice of two variables (topic number and stylistic/lectal context) are investigated. One finding is that Belizean lects are strikingly similar to spontaneous styles of American English.…
6. Uncertainty in Reference and Information Service
Science.gov (United States)
VanScoy, Amy
2015-01-01
Introduction: Uncertainty is understood as an important component of the information seeking process, but it has not been explored as a component of reference and information service. Method: Interpretative phenomenological analysis was used to examine the practitioner perspective of reference and information service for eight academic research…
7. Grouping in Primary Schools and Reference Processes.
Science.gov (United States)
Meijnen, G. W.; Guldemond, H.
2002-01-01
Studied reference processes in within-class grouping for elementary school students in the Netherlands in homogeneous (n=16) and heterogeneous (n=14) classes. Findings indicate that homogeneous grouping sets strong reference processes in motion, and processes of comparison have considerably greater effects in homogeneous groups, with negative…
Science.gov (United States)
Egan, Katie G; Moreno, Megan A
2011-09-01
Perceived peer alcohol use is a predictor of consumption in college males; frequent references to alcohol on Facebook may encourage alcohol consumption. Content analysis of college males' Facebook profiles identified references to alcohol. The average age of 225 identified profiles was 19.9 years. Alcohol references were present on 85.3% of the profiles; the prevalence of alcohol was similar across each undergraduate grade. The average number of alcohol references per profile was 8.5 but increased with undergraduate year (p = .003; confidence interval = 1.5, 7.5). Students who were of legal drinking age referenced alcohol 4.5 times more than underage students, and an increase in number of Facebook friends was associated with an increase in displayed alcohol references (p Facebook is widely used in the college population; widespread alcohol displays on Facebook may influence social norms and cause increases in male college students' alcohol use.
9. Standard digital reference images for titanium castings
CERN Document Server
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 The digital reference images provided in the adjunct to this standard illustrate various types and degrees of discontinuities occurring in titanium castings. Use of this standard for the specification or grading of castings requires procurement of the adjunct digital reference images, which illustrate the discontinuity types and severity levels. They are intended to provide the following: 1.1.1 A guide enabling recognition of titanium casting discontinuities and their differentiation both as to type and degree through digital radiographic examination. 1.1.2 Example digital radiographic illustrations of discontinuities and a nomenclature for reference in acceptance standards, specifications and drawings. 1.2 The digital reference images consist of seventeen digital files each illustrating eight grades of increasing severity. The files illustrate seven common discontinuity types representing casting sections up to 1-in. (25.4-mm). 1.3 The reference radiographs were developed for casting sections up to 1...
Science.gov (United States)
Egan, Katie G; Moreno, Megan A
2011-09-01
Perceived peer alcohol use is a predictor of consumption in college males; frequent references to alcohol on Facebook may encourage alcohol consumption. Content analysis of college males' Facebook profiles identified references to alcohol. The average age of 225 identified profiles was 19.9 years. Alcohol references were present on 85.3% of the profiles; the prevalence of alcohol was similar across each undergraduate grade. The average number of alcohol references per profile was 8.5 but increased with undergraduate year (p = .003; confidence interval = 1.5, 7.5). Students who were of legal drinking age referenced alcohol 4.5 times more than underage students, and an increase in number of Facebook friends was associated with an increase in displayed alcohol references (p Facebook is widely used in the college population; widespread alcohol displays on Facebook may influence social norms and cause increases in male college students' alcohol use. PMID:21406490
11. MSDS sky reference and preamplifier study
Science.gov (United States)
Larsen, L.; Stewart, S.; Lambeck, P.
1974-01-01
The major goals in re-designing the Multispectral Scanner and Data System (MSDS) sky reference are: (1) to remove the sun-elevation angle and aircraft-attitude angle dependence from the solar-sky illumination measurement, and (2) to obtain data on the optical state of the atmosphere. The present sky reference is dependent on solar elevation and provides essentially no information on important atmospheric parameters. Two sky reference designs were tested. One system is built around a hyperbolic mirror and the reflection approach. A second approach to a sky reference utilizes a fish-eye lens to obtain a 180 deg field of view. A detailed re-design of the present sky reference around the fish-eye approach, even with its limitations, is recommended for the MSDS system. A preamplifier study was undertaken to find ways of improving the noise-equivalent reflectance by reducing the noise level for silicon detector channels on the MSDS.
12. Establishment of reference man in Korea
International Nuclear Information System (INIS)
To determine the physical standards of the Reference Korean, research has been initiated in Korea within the framework of the IAEA Coordinated Research Programme ''Compilation of Anatomical, Physiological and Metabolic Characteristics for a Reference Asian Man''. The physical data of 21,406 Koreans, corresponding to 0.05% of the total Korean population, were compiled. All the data were divided into small groups according to age and sex. Data on the mass of internal organs from 1,921 Koreans (1,344 male and 577 female) were collected. All the data are given in a tabulated form. It was shown that the anatomical parameters of Reference Korean were, as usual, similar to those of Reference Japanese but different from those of ICRP Reference Man from Publication 23. However, the weights of several internal organs (liver, pancreas) were different from those of Japanese. 11 refs, 4 figs, 24 tabs
13. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.
Energy Technology Data Exchange (ETDEWEB)
QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.
2007-08-25
Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) independent slide pitch test by use of not tilted reference beam, (3) non-tilted reference test combined with tilted sample, (4) penta-prism scanning mode without a reference beam correction, (5) non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.
14. Reference design description for a geologic repository: Revision 01
International Nuclear Information System (INIS)
This document describes the current design expectations for a potential geologic repository that could be located at Yucca Mountain in Nevada. This Reference Design Description (RDD) looks at the surface and subsurface repository and disposal container design. Additionally, it reviews the expected long-term performance of the potential repository. In accordance with current legislation, the reference design for the potential repository does not include an interim storage option. The reference design presented allows the disposal of highly radioactive material received from government-owned spent fuel custodian sites; produces high-level waste sites, and commercial spent fuel sites. All design elements meet current federal, state, and local regulations governing the disposal of high-level radioactive waste and protection of the public and the environment. Due to the complex nature of developing a repository, the design will be created in three phases to support Viability Assessment, License Application, and construction. This document presents the current reference design. It will be updated periodically as the design progresses. Some of the details presented here may change significantly as more cost-effective solutions, technical advancements, or changes to requirements are identified
15. Systematic evaluation of reference protein normalization (RPN in proteomic experiments.
Directory of Open Access Journals (Sweden)
Henrik eZauber
2013-02-01
Full Text Available Quantitative comparative analyses of protein abundances and their modifications have become a widely used technique in studying various biological questions. In the past years, several methods for quantitative proteomics were established using stable-isotope labeling and label-free approaches. We systematically evaluated the application of reference protein normalization (RPN for proteomic experiments using a high mass accuracy LC-MS/MS platform. In RPN each peptide intensity is normalized to an average protein intensity of a spiked-in reference protein per sample. The main advantage of this method, compared to other label-free normalization strategies, is to avoid fraction of total based relative analysis of proteomic data, which is often very much dependent on sample complexity. We could show that reference protein ion intensity sums are reproducible enough to ensure reliable normalization. We validated the RPN strategy by analyzing changes in protein abundances induced by nutrient starvation in Arabidopsis. Beyond that, we provide a principle guideline for determining optimal combination of sample protein and reference protein load on individual LC-MS/MS systems.
16. Development and characterisation of a new line width reference material
Science.gov (United States)
Dai, Gaoliang; Zhu, Fan; Heidelmann, Markus; Fritz, Georg; Bayer, Thomas; Kalt, Samuel; Fluegge, Jens
2015-11-01
A new critical dimension (CD, often synonymously used for line width) reference material with improved vertical parallel sidewalls (IVPSs) has been developed and characterised. The sample has a size of 6 mm × 6 mm, consisting of 4 groups of 5 × 5 feature patterns. Each feature pattern has a group of five reference line features with a nominal CD of 50 nm, 70 nm, 90 nm, 110 nm and 130 nm, respectively. Each feature pattern includes a pair of triangular alignment marks, applicable for precisely identifying the target measurement position, e.g. for comparison or calibration between different tools. The geometry of line features has been investigated thoroughly using a high-resolution transmission electron microscope and a CD atomic force microscope (CD-AFM). Their results indicate the high quality of the line features: the top corner radius of strategy for the non-destructive calibration of the developed sample is introduced, which enables the application of the reference material in practice.
17. Reference Desk Consultation Assignment: An Exploratory Study of Students' Perceptions of Reference Service
OpenAIRE
Martin, Pamela N; Park, Lezlie
2010-01-01
This paper describes the experience of three sophomore English composition classes that were required to visit the reference desk for class credit. Student perceptions of reference consultations are analyzed to gain a clearer understanding of the students’ attitudes towards reference services. Findings of this exploratory study indicate that students still suffer from library anxiety and are much more likely to seek out reference help if they are convinced that a consultation will save them ...
18. Geographical origin: meaningful reference or marketing tool?
DEFF Research Database (Denmark)
Hedegaard, Liselotte
2015-01-01
collected in France where geographical origin is perceived as indicator of quality. A possible explanation resides in the double standards rendered possible by the European labels as they refer to provenance as well as geographical origin. Provenance means to issue from a place in the sense that the place...... constitutes a meaningful reference to a link between food and place that represents expectations of taste and quality. In Denmark, this link is not attributed similar meaning and, hence, the difference between meaningful references and images formed through the language of marketing is less discernible...
19. Environmental reference materials methods and case studies
DEFF Research Database (Denmark)
Schramm-Nielsen, Karina Edith
1998-01-01
evaluation of certification data. The plots illustrate consistency between replicate measurements on samples from a batch of reference material, carried out in a number of laboratories according to a staggered nested design. The development of a reference material is illustrated by a series of experiments......This thesis introduces the reader to the concept of chemical environmental reference materials and their role in traceability and chemical analyses of the environment. A number of models and principles from the literature are described. Some suggestions are made as to how stability studies can...
20. Reference and information services an introduction
CERN Document Server
Bopp, Richard E
2011-01-01
Reflecting the dramatic changes shaped by rapidly developing technologies over the past six years, this new fourth edition of Reference and Information Services takes the introduction to reference sources and services significantly beyond the content of the first three editions. In Part I, Concepts and Processes, chapters have been revised and updated to reflect new ideas and methods in the provision of reference service in an era when many users have access to the Web. In Part II, Information Sources and Their Use, discussion of each source type has been updated to encompass key resources in | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5051005482673645, "perplexity": 5410.200209496729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463614620.98/warc/CC-MAIN-20170530085905-20170530105905-00094.warc.gz"} |
https://proofwiki.org/wiki/Definition:Fundamental_Circuit_(Matroid) | # Definition:Fundamental Circuit (Matroid)
## Definition
Let $M = \struct {S, \mathscr I}$ be a matroid.
Let $B$ be a base of $M$.
Let $x \in S \setminus B$.
The fundamental circuit of $x$ in the base B, denoted $\map C {x, B}$, is the unique circuit such that:
$x \in \map C {x, B} \subseteq B \cup \set x$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9973423480987549, "perplexity": 491.8093484048399}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00668.warc.gz"} |
https://gmatclub.com/forum/which-of-the-following-equations-has-only-one-integer-pair-as-solution-244112.html | It is currently 20 Mar 2018, 04:58
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Which of the following equations has only one integer pair as solution
Author Message
TAGS:
### Hide Tags
Intern
Joined: 18 Apr 2013
Posts: 30
Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
06 Jul 2017, 07:26
9
This post was
BOOKMARKED
00:00
Difficulty:
75% (hard)
Question Stats:
39% (00:56) correct 61% (00:49) wrong based on 127 sessions
### HideShow timer Statistics
Which of the following equations has only one integer pair as solution?
A) y = 2x
B) y = x/2
C) $$y =\sqrt{5}*x$$
D) y=x+1
E) y=1/x
[Reveal] Spoiler: OA
Last edited by Bunuel on 06 Jul 2017, 09:01, edited 1 time in total.
Formatted the question.
Intern
Joined: 28 Apr 2017
Posts: 5
Re: Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
06 Jul 2017, 07:54
Is C the answer as only one integer solution possible (0,0) rest all has more than one if you consider negative counter part
Sent from my MotoG3 using GMAT Club Forum mobile app
Current Student
Joined: 22 Sep 2016
Posts: 207
Location: India
GMAT 1: 710 Q50 V35
GPA: 4
Re: Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
14 Jul 2017, 04:33
roastedchips wrote:
Which of the following equations has only one integer pair as solution?
A) y = 2x
B) y = x/2
C) $$y =\sqrt{5}*x$$
D) y=x+1
E) y=1/x
What are the solutions for E? other than (1,1)?
_________________
Desperately need 'KUDOS' !!
Intern
Joined: 28 Apr 2017
Posts: 5
Re: Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
14 Jul 2017, 05:41
(-1,-1) as Negative counterparts
Sent from my MotoG3 using GMAT Club Forum mobile app
Current Student
Joined: 22 Sep 2016
Posts: 207
Location: India
GMAT 1: 710 Q50 V35
GPA: 4
Re: Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
14 Jul 2017, 20:20
1
KUDOS
vivsleo wrote:
(-1,-1) as Negative counterparts
Sent from my MotoG3 using GMAT Club Forum mobile app
_________________
Desperately need 'KUDOS' !!
Director
Joined: 29 Jun 2017
Posts: 518
GMAT 1: 570 Q49 V19
GPA: 4
WE: Engineering (Transportation)
Re: Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
02 Sep 2017, 08:29
A) y = 2x, x=1y=2, x=2y=4
B) y = x/2 x2y1 x4y2
C) y=√5∗x x0y0
D) y=x+1 x0y1 x1y2 x3y4
E) y=1/x x1y1 and x(-1) y(-1)
_________________
Give Kudos for correct answer and/or if you like the solution.
VP
Status: Learning
Joined: 20 Dec 2015
Posts: 1153
Location: India
Concentration: Operations, Marketing
GMAT 1: 670 Q48 V36
GRE 1: 314 Q157 V157
GPA: 3.4
WE: Manufacturing and Production (Manufacturing)
Re: Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
02 Sep 2017, 23:44
We can have multiple integer solutions for every equation except C
y=√5∗x in order to have integer solution we have to have x=√5
_________________
Director
Joined: 29 Jun 2017
Posts: 518
GMAT 1: 570 Q49 V19
GPA: 4
WE: Engineering (Transportation)
Re: Which of the following equations has only one integer pair as solution [#permalink]
### Show Tags
02 Sep 2017, 23:55
arvind910619 wrote:
We can have multiple integer solutions for every equation except C
y=√5∗x in order to have integer solution we have to have x=√5
very sorry for your solution x =√5
which you stated above
x=0 and y= 0 is the only integer solution possible as √5 is not an integer.
Hope u can understand.
_________________
Give Kudos for correct answer and/or if you like the solution.
Re: Which of the following equations has only one integer pair as solution [#permalink] 02 Sep 2017, 23:55
Display posts from previous: Sort by | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8265199661254883, "perplexity": 10427.810927695715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647406.46/warc/CC-MAIN-20180320111412-20180320131412-00184.warc.gz"} |
https://bioinformatics.stackexchange.com/questions/17684/does-the-protein-data-bank-contain-the-estimated-distances-obtained-using-nmr-sp | # Does the Protein Data Bank contain the estimated distances obtained using NMR spectroscopy?
From the Wikipedia entry on the Protein Data Bank (PDB) (emphasis mine):
Most structures are determined by X-ray diffraction, but about 10% of structures are determined by protein NMR. When using X-ray diffraction, approximations of the coordinates of the atoms of the protein are obtained, whereas using NMR, the distance between pairs of atoms of the protein is estimated. The final conformation of the protein is obtained from NMR by solving a distance geometry problem.
I am currently working on an algorithm to solve distance geometry problems in the context of molecular conformations. I would prefer to test this algorithm on real data, so does the PDB contain information about the distances between pairs of atoms that were estimated from NMR spectroscopy measurements?
I checked the PDB file for the 2KB7 protein, but I only found the xyz coordinates of the atoms obtained after solving the distance geometry problem. Is it possible to get access to the estimated distances that were used to generate these conformations?
The data can be found here:
https://bmrb.io/search/instant.php?term=2KB7
There is no link directly in the PBD as it is a one-to-many relationship. Clicking on the first value (a shifts dataset) you get somewhere in the middle a link to the actual dataset, which is a CIF like file with the following table:
vvvvvvv
1 . 1 1 2 2 MET HA H 1 4.070 . . 1 . . . A 2 MET HA . 18256 1
2 . 1 1 2 2 MET C C 13 176.670 . . 1 . . . A 2 MET C . 18256 1
^^^^^^^
Each row has the spectral shift in ppm (chevrons added) of an assigned hydrogen or carbon atom. There are also anisotropic chemical shifts and dipolar couplings datasets etc. etc. Each with a particular piece of data...
Chemical structures are "elucidated" (=solved) in different ways depending on the type of experiment.
Say you wanted only the distanced calculated by Nuclear Overhauser effect (r_NOE), you would need to find a NOE NMR experiment.
• Thanks a lot for your answer. I'm relatively new to the NMR literature. How are spectral/chemical shifts related to the distances between pairs of atoms in a protein? Sep 12, 2021 at 11:07
• Like the solutions to the phase problem in crystallography, the elucidation of the distances depends on the method used and differs between methods. However, the r_NOE mentioned above is a common technique. The elucidation of structures from a simple chemical shifts is not possible in a simple way without a myriad hacks. Sep 15, 2021 at 9:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7664615511894226, "perplexity": 843.7261793434038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00075.warc.gz"} |
http://mathoverflow.net/questions/138885/applications-of-non-separable-hilbert-spaces/138887 | # Applications of non-separable Hilbert spaces
In applications, Hilbert spaces of interest are often assumed to be separable. In addition to being extremely convenient mathematically, this assumption can often be justified on computational or physical grounds.
Are there applications where non-separable Hilbert spaces naturally arise?
-
## 1 Answer
The main example of a non-separable Hilbert space is the Besicovitch space of almost periodic functions. Almost periodic functions play a significant role in analysis, from differential equations to operator algebras, and this space is quite useful.
-
Here's Besicovich book (I was giving the same answer...) plouffe.fr/simon/math/… – Pietro Majer Aug 8 '13 at 7:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8111119866371155, "perplexity": 779.9388823353233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447421.45/warc/CC-MAIN-20141017005727-00023-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://pubmed.ncbi.nlm.nih.gov/21294743/ | Review
. 2011 Feb;69(2):99-106.
doi: 10.1111/j.1753-4887.2010.00365.x.
# Fermentation Potential of the Gut Microbiome: Implications for Energy Homeostasis and Weight Management
Affiliations
• PMID: 21294743
Review
# Fermentation Potential of the Gut Microbiome: Implications for Energy Homeostasis and Weight Management
Tulika Arora et al. Nutr Rev. .
## Abstract
Energy homeostasis is regulated by twin factors, energy intake and energy expenditure. Obesity arises when these two factors are out of balance. Recently, the microflora residing in the human gut has been found to be one of the influential factors disturbing energy balance. Recent interest in this field has led to use of the term "gut microbiome" to describe the genomes of trillions of microbes residing in the gut. Metagenomic studies have shown that the human gut microbiome facilitates fermentation of indigestible carbohydrates to short-chain fatty acids that provide excess energy to the body, thus contributing to the obese phenotype. Alteration in the ratio of Bacteroidetes and Firmicutes drives a change in fermentation patterns that could explain weight gain. Therefore, changes in the gut microbiome (induced by antibiotics or dietary supplements) may be helpful in curbing the obesity pandemic. This review provides information on the expansive role the gut microbiome is believed to play in obesity and other related metabolic disorders. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8026708364486694, "perplexity": 7857.1251705875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413406.70/warc/CC-MAIN-20200531120339-20200531150339-00443.warc.gz"} |
http://mathoverflow.net/questions/126553/is-there-a-deep-reason-for-the-fecundity-of-involutions/126576 | # Is there a deep reason for the fecundity of involutions?
You might have come across the book of involutions in your travels. A colleague of mine asked whether there is a natural global reason (versus ad-hoc trickery) for considering involutions in mathematics. The above book provides many situations that suggest such a global perspective. Having witnessed the extraordinary power of certain involutions in operator algebras (e.g. in Tomita-Takesaki theory), I'd be interested in hearing about such a global perspective in summary from an expert. I'm aware that this question as I've asked it risks being trite...perhaps warranting the answer "it's the simplest nontrivial symmetry" but the existence of the above book might suggest otherwise:
Question: What are some "global" reasons for considering involutions in mathematics?
-
It may be the case that we are way too mathematically primitive to even start making useful use of automorphisms of order three! – Mariano Suárez-Alvarez Apr 4 '13 at 21:21
!@Mariano: From where I stand, that is very reasonable! – Jon Bannon Apr 4 '13 at 21:34
I attended an interesting talk given by Tony O'Farrell where he made the case that products of involutions, or more generally, reversible elements in groups, occurred very naturally across mathematics. Unfortunately I don't have references to hand but he might have more on his webpage – Yemon Choi Apr 4 '13 at 22:41
Conversely, objects that have absolutely no symmetry are extremely common. Perhaps the reason why $\mathbb Z_2$ is so striking is that it's just the first non-trivial example of the involution idea, so we're particularly tuned to seeing and making-use of such symmetry. – Ryan Budney Apr 4 '13 at 22:43
This should probably be CW. – Todd Trimble Apr 4 '13 at 23:28
I don't know the true philosophical reason, but $Z_2$ symmetry is really omnipresent in Matematics and in the nature. For example, most animals, including practically all vertebrate animals (like ourselves) have aproximately $Z_2$ symmetric bodies, and no larger group. This suggests that $Z_2$ was the favorite group of the Creator, at least in that period of his activity when we was creating advanced animals:-)
If you prefer Evolution, this $Z_2$ symmetry must somehow be explained by the survival of the fittest. I don't know exactly how, but this suggests that this is a very important group. Notice that plants, mushrooms, and simplest animals usually do not have it.
As a result of this (2-fold symmetry of animal bodies) we tend to like this kind of symmetry. Look at all our technology: cars, ships, airplanes, etc. They all have 2-fold symmetry, at least from outside (like our bodies, they also have this symmetry only outside). Once my friend, an airspace engineer, told me that there was a project of an airplane which did not have this outside 2-fold symmetry. The project was rejected for the only reason that "no one will want to fly in such an airplane". I am serious: http://en.wikipedia.org/wiki/Oblique_wing
In mathematics, from my personal perspective, it is $z\mapsto\overline{z}$ first of all. (Once I even proposed to my co-author to call one of our papers "Some applications of representation theory of $Z_2$"); the paper was full of different representations of this group, We were working on real algebraic geometry.)
This very same symmetry $z\mapsto\overline{z}$ is also hidden in Hermitian symmetry, $C^*$ algebras, all sorts of "duality" everywhere, etc. Which suggests that the Creator of the Universe always had a strong bias in favor of this particular group.
-
Many plants (think of flowers and fruit) and simple animals (think of starfish, jellyfish and anemones) exhibit 3-, 5- or 6-fold symmetry, and often higher. Flowers have been "making useful use of automorphisms of order three" for ages. Jellyfish, possibly the most successful animal to ever exist on the planet, have such striking and high order radial symmetry that they are the biologist's go-to example for demonstrating this phenomenon in nature. – Zack Wolske Apr 5 '13 at 0:43
Maybe jellyfishy C*-algebras have stars of other orders? – Mariano Suárez-Alvarez Apr 5 '13 at 5:33
I don't like this answer. If we understand well how to use hammers, we are going to notice a lot of nails, but that doesn't mean nails are somehow truly ubiquitous or favoured by the gods, it just means that we recognise them when we see them. Bonus points to anyone who makes good use of Jellyfish Algebras, btw. – Ketil Tveiten Apr 5 '13 at 9:02
In this connection, I might add that the word itself, involution, has a botanical origin: see mathoverflow.net/questions/127332/… – Carlo Beenakker Apr 12 '13 at 10:02
Alexandre: I suppose the thing I object to is the word "omnipresent". Observing lots of examples of $\mathbb{Z}/2$-symmetry does not mean that it is somehow a fundamental thing, it only means that $\mathbb{Z}/2$-symmetry is a thing we are good at recognising. Most objects (or living things) in nature don't have any symmetry at all, and in a similar way, most objects in mathematics have no symmetry, it's just that we tend to work with those objects that are nice enough that we can do something with them, and having some kind of low-order symmetry is an easy way to be nice. – Ketil Tveiten Apr 12 '13 at 16:07
It all boils down to the fact that $\mathbb{R}$ has two ends!
In all (most) mathematical processes there is some notion of direction: counting, moving (along a curve), mapping from one space into another, reading a formula from left to right... the geometry of $\mathbb{R}$ is present always in one way or other, and the flip of the negative and positive ends usually induces some sort of involution.
-
An immense number of instances where involutions show up and play a significant role have absolutely nothing to do with the ends of ℝ. I would be awed to see a concrete connection between the Feit-Thompson theorem that simple finite groups have non-trivial involutions (and its central role in the classification of simple groups) and the ends of ℝ, say. – Mariano Suárez-Alvarez Apr 5 '13 at 5:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5330890417098999, "perplexity": 854.748032585105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165302.57/warc/CC-MAIN-20160205193925-00002-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://dendron.so/notes/773e0b5a-510f-4c21-acf4-2d1ab3ed741e.html | # Style
Code style guidelines. We use prettier to autoformat the code on every commit which helps with most conventional styling conventions. This page lists some additional conventions not covered by prettier.
• when importing modules, unless your working with an all javascript package, we want to use import syntax over require syntax
• unless there's an obvious performance penalty, we prefer using async/await and Promises over callbacks | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.311707466840744, "perplexity": 8643.36927363211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821381.83/warc/CC-MAIN-20210127090152-20210127120152-00797.warc.gz"} |
https://www.physicsforums.com/threads/quick-question-about-continuity-at-a-point.528908/ | # Homework Help: Quick Question about continuity at a point
1. Sep 10, 2011
### tylerc1991
1. The problem statement, all variables and given/known data
I have always been comfortable with proving continuity of a function on an interval, but I have been running into problems proving that a function is continuous at a point in it's domain. For example:
Prove $f(x) = x^2$ is continuous at $x = 7$.
2. Relevant equations
We will be using the delta epsilon definition of continuity here.
3. The attempt at a solution
Let $f(x) = x^2$ and $\varepsilon > 0$.
Choose $\delta$= ________ (usually we choose $\delta$ last, so I am just leaving it blank right now).
Now, if $|x - y| = |7 - y| = |y - 7| < \delta$, then
$|f(x) - f(y)| = |49 - y^2| = |y^2 - 49| = |y + 7||y - 7|.$
This is where it gets a little awkward for me. I know that I may say $|y - 7| < \delta$, but what do I do with the $|y + 7|$? Could I say that $|y + 7| < \delta + 14$? Then I would have to choose a $\delta$ such that $\delta (\delta + 14) = \varepsilon$.
Thank you for your help anyone!
2. Sep 12, 2011
### Stephen Tashi
There is probably a way to write the proof using mostly references to absoulte values. However, it is useful to know how to "grunge it out" when no elegant way comes to mind.
When you have to get down and dirty, it is best to write things like $|y-7| < \delta$ in the equivalent form of:
eq 1. $7 - \delta < y < 7 + \delta$
(For simplicity I'll label them "equations" but the they actually are inequalities.)
To square eq. 1 and keep the inequality marks pointed the same way, we must make sure that all the terms are positive. We can make $7 - \delta > 0$ by chosing $\delta < 7$, so remember this condition. Squaring eq 1., we get:
eq. 2. $49 - 14 \delta + \delta^2 < y^2 < 49 + 14\delta + \delta^2$
To get the functions of $\delta$ to be closer to $y^2$ than $\epsilon$ we need eq. 3 and eq. 4 to hold:
eq 3. $49 - \epsilon < 49 - 14 \delta + \delta^2$
eq. 4. $49 + 14\delta + \delta^2 < 49 + \epsilon$
Thos equations simplify to eq 5. and eq 6. respectively:
eq 5. $-\epsilon < -14 \delta + \delta^2$
eq 6. $14 \delta +\delta^2 < \epsilon$
Mutliplying eq 5. by -1 and reversing the inequality sign gives:
eq 7. $14 \delta - \delta^2 < \epsilon$
If eq. 6 holds then eq 7 would also, so we only worry about eq 6.
Rather than worry about solving quadratic equations, it's simpler to take advantage of the fact that we are dealing with inequalities and trying to make $\delta$ small.
So add the condition $0 < \delta < 1$ so that we can say $\delta^2 < \delta$
This and eq 6. imply that we want:
eq 8. $0 < 14 \delta + \delta^2 < 14\delta + \delta < \epsilon$
eq 9. $15 \delta < \epsilon$
So this imples we want:
eq 10. $\delta < \frac {\epsilon}{15}$
We can satisfy eq 10. by setting $\delta$ equal to various things, for example $\delta = (0.5)\frac{\epsilon}{15}$ or $\delta = \frac{\epsilon}{16}$ etc.
We have to remember the previous assumptions we made on $\delta$.
To incorporate all of them , it is sufficient to say:
eq 11. Let $\delta = min\{ \frac{\epsilon}{16}, 1.0 \}$
To have a real proof you have to go through the reasoning in reverse order. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237013459205627, "perplexity": 399.30148079162956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823183.3/warc/CC-MAIN-20181209210843-20181209232843-00555.warc.gz"} |
https://www.datacamp.com/community/tutorials/demystifying-crucial-statistics-python | Tutorials
python
+2
# Demystifying Crucial Statistics in Python
Learn about the basic statistics required for Data Science and Machine Learning in Python.
If you have little experience in applying machine learning algorithm, you would have discovered that it does not require any knowledge of Statistics as a prerequisite.
However, knowing some statistics can be beneficial to understand machine learning technically as well intuitively. Knowing some statistics will eventually be required when you want to start validating your results and interpreting them. After all, when there is data, there are statistics. Like Mathematics is the language of Science. Statistics is one of a kind language for Data Science and Machine Learning.
Statistics is a field of mathematics with lots of theories and findings. However, there are various concepts, tools, techniques, and notations are taken from this field to make machine learning what it is today. You can use descriptive statistical methods to help transform observations into useful information that you will be able to understand and share with others. You can use inferential statistical techniques to reason from small samples of data to whole domains. Later in this post, you will study descriptive and inferential statistics. So, don't worry.
Before getting started, let's walk through ten examples where statistical methods are used in an applied machine learning project:
• Problem Framing: Requires the use of exploratory data analysis and data mining.
• Data Understanding: Requires the use of summary statistics and data visualization.
• Data Cleaning: Requires the use of outlier detection, imputation and more.
• Data Selection: Requires the use of data sampling and feature selection methods.
• Data Preparation: Requires the use of data transforms, scaling, encoding and much more.
• Model Evaluation: Requires experimental design and resampling methods.
• Model Configuration: Requires the use of statistical hypothesis tests and estimation statistics.
• Model Selection: Requires the use of statistical hypothesis tests and estimation statistics.
• Model Presentation: Requires the use of estimation statistics such as confidence intervals.
• Model Predictions: Requires the use of estimation statistics such as prediction intervals.
Isn't that fascinating?
This post will give you a solid background in the essential but necessary statistics required for becoming a good machine learning practitioner.
In this post, you will study:
• Introduction to Statistics and its types
• Statistics for data preparation
• Statistics for model evaluation
• Gaussian and Descriptive stats
• Variable correlation
• Non-parametric Statistics
You have a lot to cover, and all of the topics are equally important. Let's get started!
## Introduction to Statistics and its types:
Let's briefly study how to define statistics in simple terms.
Statistics is considered a subfield of mathematics. It refers to a multitude of methods for working with data and using that data to answer many types of questions.
When it comes to the statistical tools that are used in practice, it can be helpful to divide the field of statistics into two broad groups of methods: descriptive statistics for summarizing data, and inferential statistics for concluding samples of data (Statistics for Machine Learning (7-Day Mini-Course)).
• Descriptive Statistics: Descriptive statistics are used to describe the essential features of the data in a study. They provide simple summaries about the sample and the measures. Together with simple graphics analysis, they form the basis of virtually every quantitative analysis of data. The below infographic provides a good summary of descriptive statistics:
Source: IntellSpot
• Inferential Statistics: Inferential statistics are methods that help in quantifying properties of the domain or population from a tinier set of obtained observations called a sample. Below is an infographic which beautifully describes inferential statistics:
Source: Analytics Vidhya
In the next section, you will study the use of statistics for data preparation.
## Statistics for data preparation:
Statistical methods are required in the development of train and test data for your machine learning model.
This includes techniques for:
• Outlier detection
• Missing value imputation
• Data sampling
• Data scaling
• Variable encoding
A basic understanding of data distributions, descriptive statistics, and data visualization is required to help you identify the methods to choose when performing these tasks.
Let's analyze each of the above points briefly.
### Outlier detection:
Let's first see what an outlier is.
An outlier is considered an observation that appears to deviate from other observations in the sample. The following figure makes the definition more prominent.
Source: MathWorks
You can spot the outliers in the data as given the above figure.
Many machine learning algorithms are sensitive to the range and distribution of attribute values in the input data. Outliers in input data can skew and mislead the training process of machine learning algorithms resulting in longer training times, less accurate models and ultimately more mediocre results.
Identification of potential outliers is vital for the following reasons:
• An outlier could indicate the data is bad. In example, the data maybe coded incorrectly, or the experiment did not run correctly. If it can be determined that an outlying point is, in fact, erroneous, then the value that is outlying should be removed from the analysis. If it is possible to correct that is another option.
• In a few cases, it may not be possible to determine whether an outlying point is a bad data point. Outliers could be due to random variation or could possibly indicate something scientifically interesting. In any event, you typically do not want to just delete the outlying observation. However, if the data contains significant outliers, you may need to consider the use of robust statistical techniques.
So, outliers are often not good for your predictive models (Although, sometimes, these outliers can be used as an advantage. But that is out of the scope of this post). You need the statistical know-how to handle outliers efficiently.
### Missing value imputation:
Well, most of the datasets now suffer from the problem of missing values. Your machine learning model may not get trained effectively if the data that you are feeding to the model contains missing values. Statistical tools and techniques come here for the rescue.
Many people tend to discard the data instances which contain a missing value. But that is not a good practice because during that course you may lose essential features/representations of the data. Although there are advanced methods for dealing with missing value problems, these are the quick techniques that one would go for: Mean Imputation and Median Imputation.
It is imperative that you understand what mean and median are.
Say, you have a feature X1 which has these values - 13, 18, 13, 14, 13, 16, 14, 21, 13
The mean is the usual average, so I'll add and then divide:
(13 + 18 + 13 + 14 + 13 + 16 + 14 + 21 + 13) / 9 = 15
Note that the mean, in this case, isn't a value from the original list. This is a common result. You should not assume that your mean will be one of your original numbers.
The median is the middle value, so first, you will have to rewrite the list in numerical order:
13, 13, 13, 13, 14, 14, 16, 18, 21
There are nine numbers in the list, so the middle one will be the (9 + 1) / 2 = 10 / 2 = 5th number:
13, 13, 13, 13, 14, 14, 16, 18, 21
So the median is 14.
### Data sampling:
Data is considered the currency of applied machine learning. Therefore, its collection and usage both are equally significant.
Data sampling refers to statistical methods for selecting observations from the domain with the objective of estimating a population parameter. In other words, sampling is an active process of gathering observations with the intent of estimating a population variable.
Each row of a dataset represents an observation that is indicative of a particular population. When working with data, you often do not have access to all possible observations. This could be for many reasons, for example:
• It may be difficult or expensive to make more observations.
• It may be challenging to gather all the observations together.
• More observations are expected to be made in the future.
Many times, you will not have the right proportion of the data samples. So, you will have to under-sample or over-sample based on the type of problem.
You perform under-sampling when the data samples for a particular category are very high compared to other meaning you discard some of the data samples from the category where they are higher. You perform over-sampling when the data samples for a particular type are decidedly lower compared to the other. In this case, you generate data samples.
This applies to multi-class scenarios as well.
Statistical sampling is a large field of study, but in applied machine learning, there may be three types of sampling that you are likely to use: simple random sampling, systematic sampling, and stratified sampling.
• Simple Random Sampling: Samples are drawn with a uniform probability from the domain.
• Systematic Sampling: Samples are drawn using a pre-specified pattern, such as at intervals.
• Stratified Sampling: Samples are drawn within pre-specified categories (i.e., strata).
Although these are the more common types of sampling that you may encounter, there are other techniques (A Gentle Introduction to Statistical Sampling and Resampling).
### Data Scaling:
Often, the features of your dataset may widely vary in ranges. Some features may have a scale of 0 to 100 while the other may have ranges of 0.01 - 0.001, 10000- 20000, etc.
This is very problematic for efficient modeling. Because a small change in the feature which has a lower value range than the other feature may not have a significant impact on those other features. It affects the process of good learning. Dealing with this problem is known as data scaling.
There are different data scaling techniques such as Min-Max scaling, Absolute scaling, Standard scaling, etc.
### Variable encoding:
At times, your datasets contain a mixture of both numeric and non-numeric data. Many machine learning frameworks like scikit-learn expect all the data to be present in all numeric format. This is also helpful to speed up the computation process.
Again, statistics come for saving you.
Techniques like Label encoding, One-Hot encoding, etc. are used to convert non-numeric data to numeric.
## It's time to apply the techniques!
You have covered a lot of theory for now. You will apply some of these to get the real feel.
You will start off by applying some statistical methods to detect Outliers.
You will use the Z-Score index to detect outliers, and for this, you will investigate the Boston House Price dataset. Let's start off by importing the dataset from sklearn's utilities, and as you go along, you will start the necessary concepts.
import pandas as pd
import numpy as np
# Load the Boston dataset into a variable called boston
# Separate the features from the target
x = boston.data
y = boston.target
To view the dataset in a standard tabular format with the all the feature names, you will convert this into a pandas dataframe.
# Take the columns separately in a variable
columns = boston.feature_names
# Create the dataframe
boston_df = pd.DataFrame(boston.data)
boston_df.columns = columns
It is a common practice to start with univariate outlier analysis where you consider just one feature at a time. Often, a simple box-plot of a particular feature can give you good starting point. You will make a box-plot using seaborn and you will use the DIS feature.
import seaborn as sns
sns.boxplot(x=boston_df['DIS'])
import matplotlib.pyplot as plt
plt.show()
<matplotlib.axes._subplots.AxesSubplot at 0x8abded0>
To view the box-plot, you did the second import of matplotlib since seaborn plots are displayed like ordinary matplotlib plots.
The above plot shows three points between 10 to 12, these are outliers as they're are not included in the box of other observations. Here you analyzed univariate outlier, i.e., you used DIS feature only to check for the outliers.
Let's proceed with Z-Score now.
"The Z-score is the signed number of standard deviations by which the value of an observation or data point is above the mean value of what is being observed or measured." - Wikipedia
The idea behind Z-score is to describe any data point regarding their relationship with the Standard Deviation and Mean for the group of data points. Z-score is about finding the distribution of data where the mean is 0, and the standard deviation is 1, i.e., normal distribution.
Wait! How on earth does this help in identifying the outliers?
Well, while calculating the Z-score you re-scale and center the data (mean of 0 and standard deviation of 1) and look for the instances which are too far from zero. These data points that are way too far from zero are treated as the outliers. In most common cases the threshold of 3 or -3 is used. In example, say the Z-score value is greater than or less than 3 or -3 respectively. This data point will then be identified as an outlier.
You will use the Z-score function defined in scipy library to detect the outliers.
from scipy import stats
z = np.abs(stats.zscore(boston_df))
print(z)
[[0.41771335 0.28482986 1.2879095 ... 1.45900038 0.44105193 1.0755623 ]
[0.41526932 0.48772236 0.59338101 ... 0.30309415 0.44105193 0.49243937]
[0.41527165 0.48772236 0.59338101 ... 0.30309415 0.39642699 1.2087274 ]
...
[0.41137448 0.48772236 0.11573841 ... 1.17646583 0.44105193 0.98304761]
[0.40568883 0.48772236 0.11573841 ... 1.17646583 0.4032249 0.86530163]
[0.41292893 0.48772236 0.11573841 ... 1.17646583 0.44105193 0.66905833]]
It is not possible to detect the outliers by just looking at the above output. You are more intelligent! You will define the threshold for yourself, and you will use a simple condition for detecting the outliers that cross your threshold.
threshold = 3
print(np.where(z > 3))
(array([ 55, 56, 57, 102, 141, 142, 152, 154, 155, 160, 162, 163, 199,
200, 201, 202, 203, 204, 208, 209, 210, 211, 212, 216, 218, 219,
220, 221, 222, 225, 234, 236, 256, 257, 262, 269, 273, 274, 276,
277, 282, 283, 283, 284, 347, 351, 352, 353, 353, 354, 355, 356,
357, 358, 363, 364, 364, 365, 367, 369, 370, 372, 373, 374, 374,
380, 398, 404, 405, 406, 410, 410, 411, 412, 412, 414, 414, 415,
416, 418, 418, 419, 423, 424, 425, 426, 427, 427, 429, 431, 436,
437, 438, 445, 450, 454, 455, 456, 457, 466], dtype=int32), array([ 1, 1, 1, 11, 12, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1,
1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 5, 3, 3, 1, 5,
5, 3, 3, 3, 3, 3, 3, 1, 3, 1, 1, 7, 7, 1, 7, 7, 7,
3, 3, 3, 3, 3, 5, 5, 5, 3, 3, 3, 12, 5, 12, 0, 0, 0,
0, 5, 0, 11, 11, 11, 12, 0, 12, 11, 11, 0, 11, 11, 11, 11, 11,
11, 0, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11],
dtype=int32))
Again, a confusing output! The first array contains the list of row numbers and the second array contains their respective column numbers. For example, z[55][1] have a Z-score higher than 3.
print(z[55][1])
3.375038763517309
So, the 55th record on column ZN is an outlier. You can extend things from here.
You saw how you could use Z-Score and set its threshold to detect potential outliers in the data. Next, you will see how to do some missing value imputation.
You will use the famous Pima Indian Diabetes dataset which is known to have missing values. But before proceeding any further, you will have to load the dataset into your workspace.
You will load the dataset into a DataFrame object data.
data = pd.read_csv("https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv",header=None)
print(data.describe())
0 1 2 3 4 5 \
count 768.000000 768.000000 768.000000 768.000000 768.000000 768.000000
mean 3.845052 120.894531 69.105469 20.536458 79.799479 31.992578
std 3.369578 31.972618 19.355807 15.952218 115.244002 7.884160
min 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
25% 1.000000 99.000000 62.000000 0.000000 0.000000 27.300000
50% 3.000000 117.000000 72.000000 23.000000 30.500000 32.000000
75% 6.000000 140.250000 80.000000 32.000000 127.250000 36.600000
max 17.000000 199.000000 122.000000 99.000000 846.000000 67.100000
6 7 8
count 768.000000 768.000000 768.000000
mean 0.471876 33.240885 0.348958
std 0.331329 11.760232 0.476951
min 0.078000 21.000000 0.000000
25% 0.243750 24.000000 0.000000
50% 0.372500 29.000000 0.000000
75% 0.626250 41.000000 1.000000
max 2.420000 81.000000 1.000000
You might have already noticed that the column names are numeric here. This is because you are using an already preprocessed dataset. But don't worry, you will discover the names soon.
Now, this dataset is known to have missing values, but for your first glance at the above statistics, it might appear that the dataset does not contain missing values at all. But if you take a closer look, you will find that there are some columns where a zero value is entirely invalid. These are the values that are missing.
Specifically, the below columns have an invalid zero value as the minimum:
• Plasma glucose concentration
• Diastolic blood pressure
• Triceps skinfold thickness
• 2-Hour serum insulin
• Body mass index
Let's confirm this by looking at the raw data, the example prints the first 20 rows of data.
data.head(20)
0 1 2 3 4 5 6 7 8
0 6 148 72 35 0 33.6 0.627 50 1
1 1 85 66 29 0 26.6 0.351 31 0
2 8 183 64 0 0 23.3 0.672 32 1
3 1 89 66 23 94 28.1 0.167 21 0
4 0 137 40 35 168 43.1 2.288 33 1
5 5 116 74 0 0 25.6 0.201 30 0
6 3 78 50 32 88 31.0 0.248 26 1
7 10 115 0 0 0 35.3 0.134 29 0
8 2 197 70 45 543 30.5 0.158 53 1
9 8 125 96 0 0 0.0 0.232 54 1
10 4 110 92 0 0 37.6 0.191 30 0
11 10 168 74 0 0 38.0 0.537 34 1
12 10 139 80 0 0 27.1 1.441 57 0
13 1 189 60 23 846 30.1 0.398 59 1
14 5 166 72 19 175 25.8 0.587 51 1
15 7 100 0 0 0 30.0 0.484 32 1
16 0 118 84 47 230 45.8 0.551 31 1
17 7 107 74 0 0 29.6 0.254 31 1
18 1 103 30 38 83 43.3 0.183 33 0
19 1 115 70 30 96 34.6 0.529 32 1
Clearly there are 0 values in the columns 2, 3, 4, and 5.
As this dataset has missing values denoted as 0, so it might be tricky to handle it by just using the conventional means. Let's summarize the approach you will follow to combat this:
• Get the count of zeros in each of the columns you saw earlier.
• Determine which columns have the most zero values from the previous step.
• Replace the zero values in those columns with NaN.
• Check if the NaNs are getting appropriately reflected.
• Call the fillna() function with the imputation strategy.
# Step 1: Get the count of zeros in each of the columns
print((data[[1,2,3,4,5]] == 0).sum())
1 5
2 35
3 227
4 374
5 11
dtype: int64
You can see that columns 1,2 and 5 have just a few zero values, whereas columns 3 and 4 show a lot more, nearly half of the rows.
# Step -2: Mark zero values as missing or NaN
data[[1,2,3,4,5]] = data[[1,2,3,4,5]].replace(0, np.NaN)
# Count the number of NaN values in each column
print(data.isnull().sum())
0 0
1 5
2 35
3 227
4 374
5 11
6 0
7 0
8 0
dtype: int64
Let's get sure at this point of time that your NaN replacement was a hit by taking a look at the dataset as a whole:
# Step 4
0 1 2 3 4 5 6 7 8
0 6 148.0 72.0 35.0 NaN 33.6 0.627 50 1
1 1 85.0 66.0 29.0 NaN 26.6 0.351 31 0
2 8 183.0 64.0 NaN NaN 23.3 0.672 32 1
3 1 89.0 66.0 23.0 94.0 28.1 0.167 21 0
4 0 137.0 40.0 35.0 168.0 43.1 2.288 33 1
5 5 116.0 74.0 NaN NaN 25.6 0.201 30 0
6 3 78.0 50.0 32.0 88.0 31.0 0.248 26 1
7 10 115.0 NaN NaN NaN 35.3 0.134 29 0
8 2 197.0 70.0 45.0 543.0 30.5 0.158 53 1
9 8 125.0 96.0 NaN NaN NaN 0.232 54 1
10 4 110.0 92.0 NaN NaN 37.6 0.191 30 0
11 10 168.0 74.0 NaN NaN 38.0 0.537 34 1
12 10 139.0 80.0 NaN NaN 27.1 1.441 57 0
13 1 189.0 60.0 23.0 846.0 30.1 0.398 59 1
14 5 166.0 72.0 19.0 175.0 25.8 0.587 51 1
15 7 100.0 NaN NaN NaN 30.0 0.484 32 1
16 0 118.0 84.0 47.0 230.0 45.8 0.551 31 1
17 7 107.0 74.0 NaN NaN 29.6 0.254 31 1
18 1 103.0 30.0 38.0 83.0 43.3 0.183 33 0
19 1 115.0 70.0 30.0 96.0 34.6 0.529 32 1
You can see that marking the missing values had the intended effect.
Up till now, you analyzed essential trends when data is missing and how you can make use of simple statistical measures to get a hold of it. Now, you will impute the missing values using Mean Imputation which is essentially imputing the mean of the respective column in place of missing values.
# Step 5: Call the fillna() function with the imputation strategy
data.fillna(data.mean(), inplace=True)
# Count the number of NaN values in each column to verify
print(data.isnull().sum())
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
dtype: int64
Excellent!
This DataCamp article effectively guides you in implementing data scaling as a data preprocessing step. Be sure to check it out.
Next, you will do variable encoding.
Before that, you need a dataset which actually contains non-numeric data. You will use the famous Iris dataset for this.
# Load the dataset to a DataFrame object iris
# See first 20 rows of the dataset
0 1 2 3 4
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
5 5.4 3.9 1.7 0.4 Iris-setosa
6 4.6 3.4 1.4 0.3 Iris-setosa
7 5.0 3.4 1.5 0.2 Iris-setosa
8 4.4 2.9 1.4 0.2 Iris-setosa
9 4.9 3.1 1.5 0.1 Iris-setosa
10 5.4 3.7 1.5 0.2 Iris-setosa
11 4.8 3.4 1.6 0.2 Iris-setosa
12 4.8 3.0 1.4 0.1 Iris-setosa
13 4.3 3.0 1.1 0.1 Iris-setosa
14 5.8 4.0 1.2 0.2 Iris-setosa
15 5.7 4.4 1.5 0.4 Iris-setosa
16 5.4 3.9 1.3 0.4 Iris-setosa
17 5.1 3.5 1.4 0.3 Iris-setosa
18 5.7 3.8 1.7 0.3 Iris-setosa
19 5.1 3.8 1.5 0.3 Iris-setosa
You can easily convert the string values to integer values using the LabelEncoder. The three class values (Iris-setosa, Iris-versicolor, Iris-virginica) are mapped to the integer values (0, 1, 2).
In this case, the fourth column/feature of the dataset contains non-numeric values. So you need to separate it out.
# Convert the DataFrame to a NumPy array
iris = iris.values
# Separate
Y = iris[:,4]
# Label Encode string class values as integers
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder = label_encoder.fit(Y)
label_encoded_y = label_encoder.transform(Y)
Now, let's study another area where the need for elementary knowledge of statistics is very crucial.
## Statistics for model evaluation:
You have designed and developed your machine learning model. Now, you want to evaluate the performance of your model on the test data. In this regards, you seek help of various statistical metrics like Precision, Recall, ROC, AUC, RMSE, etc. You also seek help from multiple data resampling techniques such as k-fold Cross-Validation.
Statistics can effectively be used to:
It is important to note that the hypothesis refers to learned models; the results of running a learning algorithm on a dataset. Evaluating and comparing the hypothesis means comparing learned models, which is different from evaluating and comparing machine learning algorithms, which could be trained on different samples from the same problem or various problems.
Let's study Gaussian and Descriptive statistics now.
## Introduction to Gaussian and Descriptive stats:
A sample of data is nothing but a snapshot from a broader population of all the potential observations that could be taken from a domain or generated by a process.
Interestingly, many observations fit a typical pattern or distribution called the normal distribution, or more formally, the Gaussian distribution. This is the bell-shaped distribution that you may be aware of. The following figure denotes a Gaussian distribution:
Source: HyperPhysics
Gaussian processes and Gaussian distributions are whole another sub-fields unto themselves. But, you will now study two of the most essential ingredients that build the entire world of Gaussian distributions in general.
Any sample data taken from a Gaussian distribution can be summarized with two parameters:
• Mean: The central tendency or most likely value in the distribution (the top of the bell).
• Variance: The average difference that observations have from the mean value in the distribution (the spread).
The term variance also gives rise to another critical term, i.e., standard deviation, which is merely the square root of the variance.
The mean, variance, and standard deviation can be directly calculated from data samples using numpy.
You will first generate a sample of 100 random numbers pulled from a Gaussian distribution with a mean of 50 and a standard deviation of 5. You will then calculate the summary statistics.
First, you will import all the dependencies.
# Dependencies
from numpy.random import seed
from numpy.random import randn
from numpy import mean
from numpy import var
from numpy import std
Next, you set the random number generator seed so that your results are reproducible.
seed(1)
# Generate univariate observations
data = 5 * randn(10000) + 50
# Calculate statistics
print('Mean: %.3f' % mean(data))
print('Variance: %.3f' % var(data))
print('Standard Deviation: %.3f' % std(data))
Mean: 50.049
Variance: 24.939
Standard Deviation: 4.994
Close enough, eh?
Let's study the next topic now.
## Variable correlation:
Generally, the features that are contained in a dataset can often be related to each other which is very obvious to happen in practice. In statistical terms, this relationship between the features of your dataset (be it simple or complex) is often termed as correlation.
It is crucial to find out the degree of the correlation of the features in a dataset. This step essentially serves you as feature selection which concerns selecting the most important features from a dataset. This step is one of the most vital steps in a standard machine learning pipeline as it can give you a tremendous accuracy boost that too within a lesser amount of time.
For better understanding and to keep it more practical let's understand why features can be related to each other:
• One feature can be a determinant of another feature
• One feature could be associated with another feature in some degree of composition
• Multiple features can combine and give birth to another feature
Correlation between the features can be of three types: - Positive correlation where both the feature change in the same direction, Neutral correlation when there is no relationship of the change in the two features, Negative correlation where both the features change in opposite directions.
Correlation measurements form the fundamental of filter-based feature selection techniques. Check this article if you want to study more about feature selection.
You can mathematically the relationship between samples of two variables using a statistical method called Pearson’s correlation coefficient, named after the developer of the method, Karl Pearson.
You can calculate the Pearson's correlation score by using the corr() function of pandas with the method parameter as pearson. Let's study the correlation between the features of the Pima Indians Diabetes dataset that you used earlier. You already have the data in good shape.
# Data
0 1 2 3 4 5 6 7 8
0 6 148.0 72.0 35.00000 155.548223 33.6 0.627 50 1
1 1 85.0 66.0 29.00000 155.548223 26.6 0.351 31 0
2 8 183.0 64.0 29.15342 155.548223 23.3 0.672 32 1
3 1 89.0 66.0 23.00000 94.000000 28.1 0.167 21 0
4 0 137.0 40.0 35.00000 168.000000 43.1 2.288 33 1
# Create the matrix of correlation score between the features and the label
scoreTable = data.corr(method='pearson')
# Visulaize the matrix
You can clearly see the Pearson's correlation between all the features and the label of the dataset.
In the next section, you will study non-parametric statistics.
## Non-parametric statistics:
A large portion of the field of statistics and statistical methods is dedicated to data where the distribution is known.
Non-parametric statistics comes in handy when there is no or few information available about the population parameters. Non-parametric tests make no assumptions about the distribution of data.
In the case where you are working with nonparametric data, specialized nonparametric statistical methods can be used that discard all information about the distribution. As such, these methods are often referred to as distribution-free methods.
Bu before a nonparametric statistical method can be applied, the data must be converted into a rank format. Statistical methods that expect data in a rank format are sometimes called rank statistics. Examples of rank statistics can be rank correlation and rank statistical hypothesis tests. Ranking data is exactly as its name suggests.
A widely used nonparametric statistical hypothesis test for checking for a difference between two independent samples is the Mann-Whitney U test, named for Henry Mann and Donald Whitney.
You will implement this test in Python via the mannwhitneyu() which is provided by SciPy.
# The dependencies that you need
from scipy.stats import mannwhitneyu
from numpy.random import rand
# seed the random number generator
seed(1)
# Generate two independent samples
data1 = 50 + (rand(100) * 10)
data2 = 51 + (rand(100) * 10)
# Compare samples
stat, p = mannwhitneyu(data1, data2)
print('Statistics = %.3f, p = %.3f' % (stat, p))
# Interpret
alpha = 0.05
if p > alpha:
print('Same distribution (fail to reject H0)')
else:
print('Different distribution (reject H0)')
Statistics = 4077.000, p = 0.012
Different distribution (reject H0)
alpha is the threshold parameter which is decided by you. The mannwhitneyu() returns two things:
• statistic: The Mann-Whitney U statistic, equal to min(U for x, U for y) if alternative is equal to None (deprecated; exists for backward compatibility), and U for y otherwise.
• pvalue: p-value assuming an asymptotic normal distribution.
If you want to study the other methods of Non-parametric statistics, you can do it from here.
The other two popular non-parametric statistical significance tests that you can use are:
## That calls for a wrap up!
You have finally made it to the end. In this article, you studied a variety of essential statistical concepts that play very crucial role in your machine learning projects. So, understanding them is just important.
From mere an introduction to statistics, you took it to statistical rankings that too with several implementations. That is definitely quite a feat. You studied three different datasets, exploited pandas and numpy functionalities to the fullest and moreover, you used SciPy as well. Next are some links for you if you want to take things further:
Following are the resources I took help from for writing this blog:
Let me know your views/queries in the comments section. Also, check out DataCamp's course on "Statistical Thinking in Python" which is very practically aligned. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47832411527633667, "perplexity": 760.0288466563687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358180.42/warc/CC-MAIN-20211127103444-20211127133444-00129.warc.gz"} |
https://www.physicsforums.com/threads/integration-by-substitution.208186/ | # Integration by substitution?
1. Jan 11, 2008
### cabellos6
1. The problem statement, all variables and given/known data
I want to integrate (1+x)/(1-x)
2. Relevant equations
3. The attempt at a solution
I have looked at many examples of substitution method - this one appears simple but Im not finishing the last step.....
- I know you must first take u=(1-x)
- Then du = -dx
what happens with the numerator (1+x) as this would be the integral of -(1+x)du/u
id be very grateful if you could run me through the steps for this please.
thanks
2. Jan 11, 2008
### HallsofIvy
Staff Emeritus
You need to simplify the fraction first: dividing 1+ x by 1- x gives -1+ 2/(1-x)= -1- 2/(x-1). It's easy to integrate "-1" and to integrate -2/(x-1), let u= x-1.
Similar Discussions: Integration by substitution? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9830060601234436, "perplexity": 2260.960394654039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426161.99/warc/CC-MAIN-20170726122153-20170726142153-00582.warc.gz"} |
https://matheducators.stackexchange.com/questions/11971/requirements-to-learn-calculus/12082 | # Requirements to learn calculus
I always was non math background student and programming is my hobby. I was attempting to program code instruction given here. Since I don't know calculus I'm stuck. I would like to know what are the things I need to learn before I start learning calculus? I can solve some basic algebra problems. Could anyone please guide me? Thanks.
• You need a glossary, and perhaps a book on numerical methods. – Jasper Feb 1 '17 at 0:44
• Maximum = peak. Minimum = bottom. First derivative = dy/dx = slope. Second derivative = d²y/dx² = rate of change of the slope. (In other words, find the slope near one point, find the slope near another point, subtract the two slopes to find a numerator, and subtract the two x values (of the points) to find a denominator.) You can estimate all of these things using a data set and your basic algebra skills. – Jasper Feb 1 '17 at 0:47
• The slope at a peak is zero (or undefined). The slope at a bottom is also zero (or undefined). The slope just after a peak might be very negative. – Jasper Feb 1 '17 at 1:05
When I teach people calculus, the big reasons they don't succeed tend to be problems with arithmetic and very basic algebra. For example, students won't know how to compute 2/(3/4), or they'll try to simplify $1/(x+y)$ to $1/x+1/y$. If you can handle this kind of stuff, then you're better prepared to learn calculus than most of my students.
There is some material often taught in a trig or precalculus course that may also be useful, but it's not critical. For example, it may be helpful to know what a function is; to know how to manipulate exponentials, e.g., $e^{a+b}=e^ae^b$; and to know trigonometry. If you don't know about trigonometric functions and exponential functions, then you won't be prepared to do calculus on them.
• Indeed, most of the marks I take out off of an exam are precalc stuff... – Jean-Sébastien Feb 3 '17 at 2:24
• @Ben Crowell Thanks for the basics. I could calculate basic arithmetic and algebra. So, I think I need to learn the trigonometry then. – DAKSH Feb 16 '17 at 20:39
Traditional calculus study also requires trigonometry.
Nowadays you will also find some "watered down" calculus texts with no trig functions, intended ironically* for business students.
*You would think business students would be interested in cyclical phenomena, wouldn't you?
• Business students are only interested in the exponential function. – Steven Gubkin Dec 18 '17 at 18:24
From Velleman's Calculus: A Rigorous First Course:
As to the content of the first chapter, it includes (but is not limited to) the following:
• decimal notation, integers, rational numbers, irrational numbers
• sets, subsets, elements, unions, intersections, intervals
• expressions, equations, inequalities
• triangle inequality, Pythagorean Theorem (and distance formula)
• functions, domain, independent and dependent variables, composition of functions
• absolute value function, square root function, linear functions (slope-intercept and point-slope forms), quadratic functions, polynomials, rational functions, trigonometric functions (along with how to use radians, the unit circle, etc)
• @Thanks Benjamin I will look into it. – DAKSH Feb 16 '17 at 20:40
Naturally, it's what they call Precalculus
.. where "they" is whoever's calculus materials you'll be using. While there is some variety, I expect it to include algebra to a few steps beyond the most basic problems, some geometry, and at least the basics of trigonometry. Some curricula also introduce limits in precalculus.
In algebra, you should be comfortable using basic identities including those in @BenCrowell's answer, solving quadratic equations, arithmetic on polynomials, raising polynomials to small exponents, dividing and factoring polynomials, and so on.
In geometry you should understand cartesian coordinates, be familiar with graphing functions, and know the equations for basic shapes. It may be helpful to get the gist of parametric equations.
In trigonometry you should be comfortable solving algebra and geometry problems involving trig functions or which require trig functions for their solutions, and using the common trig identities to rewrite expressions. It may be helpful to know something about the relationship with complex exponents and/or about hyperbolic functions.
But that's for learning Calculus overall, and I've been somewhat inclusive to hedge against variety in calculus curriculum. If you're studying on your own you can probably backfill as needed to a large degree. If you're only interested in this one problem and have access to knowledgable people (or can hire a tutor) you can probably focus pretty narrowly and learn a small subset just sufficient for this one problem. Glancing at the problem, it touches on techniques from numerical analysis and statistics, so at some point it might be better to consult a knowledgeable person and target specifics rather than to try to (in effect) take all of the courses involved. (That might come close to getting a math minor - which I wouldn't discourage, but it depends on your goals.)
The answer depends on the level you want to learn calculus at. I'll assume you want to learn it the way most North American students do, with little emphasis on proofs and theory.
In that case, the content of Serge Lang's Basic Mathematics is more than enough preparation.
Alternatively, Marsden and Weinstein's Calculus I has a self-test section at the beginning to tell you if you can start learning calculus directly, if reading their review chapters is enough (and which ones), or if you should go back and learn from a precalculus book.
You should consider (and the answerers) how much you really need/want to learn. Perhaps only some basics, perhaps only a specific problem or two are really what you need for the task(s) you have in front of you. If you want to learn calculus as a real topic than that will be a bit more work and you should make sure you want that enough. Even so, given what you tell us, I strongly urge you to work with "easier" books than hard ones. You will get more out of something you don't give up on. (Can always go back and do it harder if that is a need, later.)
1. Take a look at "Calculus Made Easy" or "Calculus for the Practical Man". Both are written in a nonpompous style and are relatively easy in excluding some harder topics. They are available as free (legit) pdfs; Google search for best download. After you get one of those (I quite like the Thompson one in being almost fun to read), take a look at some of the book, see if you can learn from it, and consider how it pertains to your programming tasks.
2. The suggestions about Precaculus and being up on algebra (you sounded weak) were very good ones. Not only is it hard to work on calculus with weak algebra but these are topics that are useful themselves (and perhaps even more applications than calculus; for example exponentials and rational functions are common in oil EUR programming on Tableau, Spotfire and Excel)
A cheap, good text in this area is Frank Ayres Schaum's Outline First Year College Mathematics which covers everything up to Calculus other than Geometry (which you don't need for Calculus) and even has a little intro to Calculus (which might be all you need or at least help you before you make the jump to a calculus text.)
Here is a link but you can try other booksellers. I recommend the original 1958 version (lots of used versions available on the net).
https://www.amazon.com/Theory-problems-first-college-mathematics/dp/B0007DPVM2/ref=sr_1_1?s=books&ie=UTF8&qid=1513575322&sr=1-1&keywords=Schaum%27s+Outline+of+first+year+College+Mathematics
1. If you have worked through the Ayres, you could look at "normal textbooks" like Thomas Finney, Swokowski, Stewart, etc. But I worry that they are a little too formal (they get sold to professors or committees that select them and are used when a teacher is available to support with lots of lectures). Better off with one of the suggestions from point 1 or perhaps some Dummies brand book or another Schaum's Outline just on calculus.
2. Disagree with the Velleman text. It's not as bad as it sounds, are some good aspects to it. But it's not a good suggestion for someone who self identifies as non mathy, mature, and needing calculus for work. Would suggest it instead for a precocious math student self studying or even for a regular (strong) class. There are also some places where it really emphasizes precision on limits and introduces new notation even. Just not the right thing to worry about with someone who didn't make it to calc when he could have in school.
3. Also look at Khan academy.
There are both video and problem assistance and the training is pretty supportive and clear and gentle. It may also appeal to you since you are into programming and it is a little techie in terms of the interfaces (some video game aspects of the problem solving, how the lectures are done on YT with an etchasketch). Even just watching a quick video here might help you get a little motivated or intrigued to learn more.
The Calculus is all about limit concepts. So, you need to understand some basic computations with all type of functions like polynomials, exponentials, logarithmic, trigonometric, inverse trigonometric, hyperbolic function... and more.
In order to visualize the Calculus concepts, you need to know the geometric shapes in 2D and 3d and its properties.
If you want to optimize something, you have model/function and you need to find the critical points of the function/model. so, to find those critical points or something. You need to solve the equations, sometimes systems. So, you would know Algebra.
so, to start learning Calculus. My suggestion is you would get through Algebra ( Basic, Intermediate), Geometry( 2D and 3D) and Trigonometry.
Also, you must familiar with all types of coordinate systems, rectangular, polar, cylindrical and sphere.
if you know all the concepts well, before start learning Calculus. Then, you enjoy the Calculus. It is great fun.
I wish you all the best, happy learning. :) Thank you.. ~Satya from India.
Explain exactly why you need to learn calculus in order to program an algorithm. If you can identify a specific need, then you can focus on that specific requirement or need.
Calculus usually consists of 3 general topics - differential calc, integration and vector calc?
Which one of these do you require for solving your programming problem?
Regards | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5734540820121765, "perplexity": 761.2326907055195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00391.warc.gz"} |
https://mathoverflow.net/questions/222616/whitney-sum-formula-for-pontryagin-classes-ii | # Whitney sum formula for Pontryagin classes II
I have read in several places that the total Pontryagin classes of real vector bundles satisfy a Whitney sum formula $p(E\oplus F) = p(E)\cdot p(F)$ modulo 2-torsion. I would like to understand the 2-torsion part better.
Is there a reference which describes the difference between $p(E\oplus F)$ and $p(E)\cdot p(F)$, perhaps in terms of Bocksteins of Stiefel-Whitney classes of $E$ and $F$?
This question was previously part of Whitney sum formula for Pontryagin classes I; Qiaochu Yuan's answer to that question might be helpful.
Under Whitney sum, $p_q\mapsto \sum_j r_{2q-j}\otimes r_j$, where $r_{2s} = p_s$ and $r_{2s+1} = (\delta w_{2s})^2+ p_s\delta w_1$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8711992502212524, "perplexity": 85.17083408271728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900335.76/warc/CC-MAIN-20200709131554-20200709161554-00187.warc.gz"} |
https://tex.stackexchange.com/questions/287755/selective-overlay-option-of-textblock-with-the-textpos-package/287902#287902 | # Selective overlay option of textblock with the textpos package
The texpos package has an option called [overlay] that, when loading the package makes all the text boxes textblocks be above (obscuring) other elements of the page.
Is there a way to control whether or not a particular textblock overlays or not?
\documentclass{beamer}
\usepackage[overlay]{texpos}
\begin{document}
\begin{frame}{title}
Other elements
\begin{textblock}{6}(5,7.1) %is there an option to NOT overlay this particular one
Hello % or include a bulky image here.
\end{textblock}
\end{frame}
\end{document}
Since this is an emergency (my presentation is tomorrow) :) I will give one or two 100 point bounties for a solution or a workaround.
You can't do this in general: the [overlay] option works by adjusting the TeX \shipout command so that all of the {textblock} material on a page is output either before (non-overlay) or after (overlay) the non-{textblock} material.
Since this is a presentation, however, you might be able to hack this on a per-page basis. Try setting \makeatletter\TP@overlayfalse before the page you want to hack, and then \TP@overlaytrue after it. That should result in all of the {textblock} environments on the affected page being non-overlay.
You might have to play around with the precise positioning of those commands, but putting them before and after the {frame} environment should work. I haven't tested this – let us know how you get on.
• Hmm: I was fairly confident that would work – boo. I presume the rush is over for you, now, but I'll look at this again. I have a \TPoptions macro implemented in a version 1.8b1 which might be relevant here, and this should prompt me to release that, if only to add a note about how to achieve this sort of thing. Thanks for letting me know. Jan 20, 2016 at 11:23
• I've tried using \TPoptions to overlay just one textbox on the same page as another that is underneath the main text, and haven't got it to work. Am I missing something? Oct 27, 2016 at 12:34
• @hertzsprung The \TPoptions macro allows you to change the in-play options on a per-page basis, but it can't change the effect of those options. On a particular page, the {textblock} material will appear either all before or all after the non-{textblock} material. So no, there's no current way to overlay just one textblock. Doing so would not be impossible, I don't think, but I suspect it would require major surgery to the package. Oct 28, 2016 at 12:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872024655342102, "perplexity": 1601.7114531883974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00265.warc.gz"} |
https://www.semanticscholar.org/paper/Double-Ramification-Cycles-and-Quantum-Integrable-Buryak-Rossi/0ecc55524422a309a13a1b31d545f64e576615e9 | # Double Ramification Cycles and Quantum Integrable Systems
@article{Buryak2015DoubleRC,
title={Double Ramification Cycles and Quantum Integrable Systems},
author={A. Buryak and P. Rossi},
journal={Letters in Mathematical Physics},
year={2015},
volume={106},
pages={289-317}
}
• Published 2015
• Mathematics, Physics
• Letters in Mathematical Physics
In this paper, we define a quantization of the Double Ramification Hierarchies of Buryak (Commun Math Phys 336:1085–1107, 2015) and Buryak and Rossi (Commun Math Phys, 2014), using intersection numbers of the double ramification cycle, the full Chern class of the Hodge bundle and psi-classes with a given cohomological field theory. We provide effective recursion formulae which determine the full quantum hierarchy starting from just one Hamiltonian, the one associated with the first descendant… Expand
Integrable systems of double ramification type
• Mathematics, Physics
• 2016
In this paper we study various aspects of the double ramification (DR) hierarchy, introduced by the first author, and its quantization. We extend the notion of tau-symmetry to quantum integrableExpand
Tau-Structure for the Double Ramification Hierarchies
• Mathematics, Physics
• 2016
In this paper we continue the study of the double ramification hierarchy of Buryak (Commun Math Phys 336(3):1085–1107, 2015). After showing that the DR hierarchy satisfies tau-symmetry we define itsExpand
D ec 2 01 8 TAU-STRUCTURE FOR THE DOUBLE RAMIFICATION HIERARCHIES
• 2018
In this paper we continue the study of the double ramification hierarchy of [Bur15]. After showing that the DR hierarchy satisfies tau-symmetry we define its partition function as the (logarithm ofExpand
Deformation theory of Cohomological Field Theories
• Mathematics, Physics
• 2020
We develop the deformation theory of cohomological field theories (CohFTs), which is done as a special case of a general deformation theory of morphisms of modular operads. This leads us to introduceExpand
Quantum D4 Drinfeld–Sokolov hierarchy and quantum singularity theory
• Mathematics, Physics
• 2019
Abstract In this paper we compute explicitly the double ramification hierarchy and its quantization for the D 4 Dubrovin–Saito cohomological field theory obtained applying the Givental–TelemanExpand
Integrability, Quantization and Moduli Spaces of Curves
This paper has the purpose of presenting in an organic way a new approach to integrable (1+1)-dimensional field systems and their systematic quantization emerging from intersection theory of theExpand
Towards a description of the double ramification hierarchy for Witten's $r$-spin class
• Mathematics, Physics
• 2015
The double ramification hierarchy is a new integrable hierarchy of hamiltonian PDEs introduced recently by the first author. It is associated to an arbitrary given cohomological field theory. In thisExpand
The quantum Witten-Kontsevich series and one-part double Hurwitz numbers
We study the quantum Witten-Kontsevich series introduced by Buryak, Dubrovin, Guere and Rossi in \cite{buryak2016integrable} as the logarithm of a quantum tau function for the quantum KdV hierarchy.Expand
INTEGRABLE SYSTEMS AND MODULI SPACES OF CURVES
This document has the purpose of presenting in an organic way my research on integrable systems originating from the geometry of moduli spaces of curves, with applications to Gromov-Witten theory andExpand
Quantum hydrodynamics from large-n supersymmetric gauge theories
• Physics, Mathematics
• 2015
We study the connection between periodic finite-difference Intermediate Long Wave ($$\Delta \hbox {ILW}$$ΔILW) hydrodynamical systems and integrable many-body models of Calogero and Ruijsenaars-type.Expand
#### References
SHOWING 1-10 OF 17 REFERENCES
Recursion Relations for Double Ramification Hierarchies
• Mathematics, Physics
• 2014
In this paper we study various properties of the double ramification hierarchy, an integrable hierarchy of hamiltonian PDEs introduced in Buryak (CommunMath Phys 336(3):1085–1107, 2015) usingExpand
Integrable systems and holomorphic curves
In this paper we attempt a self-contained approach to infinite dimensional Hamiltonian systems appearing from holomorphic curve counting in Gromov-Witten theory. It consists of two parts. The firstExpand
Normal forms of hierarchies of integrable PDEs, Frobenius manifolds and Gromov - Witten invariants
• Mathematics, Physics
• 2001
We present a project of classification of a certain class of bihamiltonian 1+1 PDEs depending on a small parameter. Our aim is to embed the theory of Gromov - Witten invariants of all genera into theExpand
String, dilaton and divisor equation in Symplectic Field Theory
• Mathematics, Physics
• 2010
Infinite dimensional Hamiltonian systems appear naturally in the rich algebraic structure of Symplectic Field Theory. Carefully defining a generalization of gravitational descendants and adding themExpand
Gromov–Witten invariants of target curves via Symplectic Field Theory
Abstract We compute the Gromov–Witten potential at all genera of target smooth Riemann surfaces using Symplectic Field Theory techniques and establish differential equations for the full descendantExpand
Integrals of psi-classes over double ramification cycles
• Mathematics
• 2012
DR-cycles are certain cycles on the moduli space of curves. Intuitively, they parametrize curves that allow a map to \mathbb{P}^1 with some specified ramification profile over two points. They areExpand
Double Ramification Cycles and Integrable Hierarchies
In this paper we present a new construction of a hamiltonian hierarchy associated to a cohomological field theory. We conjecture that in the semisimple case our hierarchy is related to theExpand
Polynomial families of tautological classes on Mg,nrt
• Mathematics
• 2012
We study classes Pg,T(α;β) on Mg,nrt defined by pushing forward the virtual fundamental classes of spaces of relative stable maps to an unparameterized P1 with prescribed ramification over 0 and ∞. AExpand
Integrals of ψ-classes over double ramification cycles
• Mathematics
• 2015
A double ramification cycle, or DR-cycle, is a codimension $g$ cycle in the moduli space $\overline{\mathcal M}_{g,n}$ of stable curves. Roughly speaking, given a list of integers $(a_1,\ldots,a_n)$,Expand
Dubrovin-Zhang hierarchy for the Hodge integrals
In this paper we prove that the generating series of the Hodge integrals over the moduli space of stable curves is a solution of a certain deformation of the KdV hierarchy. This hierarchy isExpand | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9275442957878113, "perplexity": 1689.0417182714948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053918.46/warc/CC-MAIN-20210916234514-20210917024514-00309.warc.gz"} |
https://socratic.org/questions/the-data-items-in-a-list-are-75-86-87-91-and-93-what-is-the-largest-integer-you- | Algebra
Topics
# The data items in a list are 75,86,87,91, and 93. What is the largest integer you can add to the list so that the mean of six items is less than their median?
Oct 1, 2016
Largest integer is $101$
#### Explanation:
There are 5 numbers in the list, but a sixth one is to be added. (as large as possible)
$75 \text{ "86" "87" "91" "93" } x$
$\textcolor{w h i t e}{\times \times \times \times \times} \uparrow$
The median will be $\frac{87 + 91}{2} = 89$
Mean will be: $\frac{75 + 86 + 87 + 91 + 93 + x}{6} < 89$
$432 + x < 6 \times 89$
$x < 534 - 432$
$x < 102$
The largest integer can be 101.
Check; If $x = 101$
Mean $= \frac{533}{6} = 88.83$
$88.83 < 89$
##### Impact of this question
505 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3764269948005676, "perplexity": 1636.5332438674593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00165.warc.gz"} |
https://chiefsfoundation.org/i-wanted-to-understand-the-best-way-to-uncover-distance-physics-as-well-as-the-answer-to-that-question-is-substantially-easier-than-you-feel-2/ | # Let’s explore the idea. Distance is defined by two definitions.
The very first may be the length along with the second is length/distance. If we define the length as the distance among two points, then we would have the second definition, which can be also called in essence the light cone or angle of incidence. So, how do we come up using a definition of the weight in physics?
For those that are not acquainted with every day term, let me explain. The speed of light is usually a notion that has a number of applications. In Newtonian Physics, this speed is measured in units referred to as meters per second. It describes the price at which an object moves relative to some physical source for instance the earth or possibly a larger light source. It is also known as the time interval more than which a phenomenon occurs or adjustments.
It may be the similar speed of light that we expertise as we move through our each day planet, the speed of sound. It’s also known as the speed of light in space, which means it is traveling faster than the speed of light in the infinite space around us.
In terms of physics, that is the time interval in which an object is in a provided place when its velocity is equal towards the speed of light in the empty space surrounding the earth’s orbit and also the sun. What is the definition of the weight in physics?
Weight is defined because the force that is certainly required to turn an object to accelerate it forward, plus the distinction involving this force along with the force of gravity is known as its weight. To calculate the acceleration of an object, you simply need to multiply the mass times the acceleration. How do we arrive at the definition of weightin physics? As a additional refinement, it turns out that mass is defined as the sum of all of the particles that make up the body.
When an object is added towards the program, it requires on a smaller buy an essay function, that is inversely proportional towards the mass that is definitely utilized in the calculation. So, as the addition for the program goes away, the mass becomes slightly a lot more substantial. The equation might be rewritten in order that the acceleration is defined by the mass of the object divided by the square with the velocity on the object (that is the second definition of your weight in physics).
This is usually a quite smaller piece from the story of how to uncover distance. Now, the following query is what does the direction in the angle of incidence imply? Nicely, this will depend on the direction in the supply of your light (that is the earth), nevertheless it is clear that the location on the supply is exactly where the light is reflected back from.
To illustrate, let’s appear at a straight line passing straight in front on the sun and light entering from above. At this point, the angle of incidence would be good since the light was reflected off the surface of the sun.
Another solution to express the principle of distance is usually to use a graphic representation. The term distance and also the word to define distance are derived from the reality that the distance within a circle must be expressed in meters as well as the distance in an ellipse has to be expressed in meters squared. The geometric point of view of your relationship involving a point along with a line has to be put into a system of equations, known as the metric.
We can visualize this as a technique of equations that has a continuous E, which is the gravitational continuous. In physics, the constant E is referred to as the acceleration, the difference among the force of gravity plus the acceleration.
How to find Distance Physics | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477688670158386, "perplexity": 177.8477657698803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145657.46/warc/CC-MAIN-20200222085018-20200222115018-00305.warc.gz"} |
http://link.springer.com/article/10.1007%2FJHEP03%282013%29108 | Journal of High Energy Physics
, 2013:108
# Competing orders in M-theory: superfluids, stripes and metamagnetism
• Aristomenis Donos
• Jerome P. Gauntlett
• Julian Sonner
• Benjamin Withers
Article
DOI: 10.1007/JHEP03(2013)108
Donos, A., Gauntlett, J.P., Sonner, J. et al. J. High Energ. Phys. (2013) 2013: 108. doi:10.1007/JHEP03(2013)108
## Abstract
We analyse the infinite class of d = 3 CFTs dual to skew-whiffed AdS4 × SE7 solutions of D = 11 supergravity at finite temperature and charge density and in the presence of a magnetic field. We construct black hole solutions corresponding to the unbroken phase, and at zero temperature some of these become dyonic domain walls of an Einstein-Maxwell-pseudo-scalar theory interpolating between AdS4 in the UV and new families of dyonic $$Ad{S_2}\times {{\mathbb{R}}^2}$$ solutions in the IR. The black holes exhibit both diamagnetic and paramagnetic behaviour. We analyse superfluid and striped instabilities and show that for large enough values of the magnetic field the superfluid instability disappears while the striped instability remains. For larger values of the magnetic field there is also a first-order metamagnetic phase transition and at zero temperature these black hole solutions exhibit hyperscaling violation in the IR with dynamical exponent z = 3/2 and θ = −2.
## Authors and Affiliations
• Aristomenis Donos
• 1
• Jerome P. Gauntlett
• 1
• Julian Sonner
• 2
• Benjamin Withers
• 3
1. 1.Blackett LaboratoryImperial CollegeLondonU.K
2. 2.C.T.P., Massachusetts Institute of TechnologyCambridgeU.S.A
3. 3.Centre for Particle Theory and Department of Mathematical SciencesUniversity of DurhamDurhamU.K | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8499062061309814, "perplexity": 2725.374329961503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661962.7/warc/CC-MAIN-20160924173741-00116-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://collegephysicsanswers.com/openstax-solutions/what-lambda-electron-emerging-stanford-linear-accelerator-total-energy-500-gev-b | Question
(a) What is $\lambda$ for an electron emerging from the Stanford Linear Accelerator with a total energy of 50.0 GeV? (b) Find its momentum. (c) What is the electron’s wavelength?
1. $9.78\times 10^{4}$
2. $2.67\times 10^{-17}\textrm{ kg}\cdot\textrm{m/s}$
3. $0.0248 \textrm{ fm}$
Solution Video | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7112839818000793, "perplexity": 1225.421676667438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540500637.40/warc/CC-MAIN-20191207160050-20191207184050-00247.warc.gz"} |
https://www.physicsforums.com/threads/matlab-help-user-defined-function.650318/ | # Matlab help! User defined function!
1. Nov 7, 2012
### qiyan31
I made a user-defined function for Height.
function Ht=Height(t,V,Theta);
Ht=V*t*sin(Theta)-4.9*t.^2;
end
V is initial velocity, and i kept on getting input "V" is undefined.
Can someone help me plz!
2. Nov 7, 2012
### coalquay404
Works fine for me. What is the precise command that you're using to test the function? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9005970358848572, "perplexity": 4603.961824630698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815318.53/warc/CC-MAIN-20180224033332-20180224053332-00410.warc.gz"} |
https://gamedev.stackexchange.com/questions/57645/desire-advice-on-implementing-this-animation-timeline-system | # Desire advice on implementing this animation timeline system
I have read a lot of questions on here, as well as books regarding game architecture. I have a general question about the implementation of a game's animation timeline, on which many isolated animations sit. I implemented one recently that proved rather inefficient and bug-prone, so I'm doing it again but this time seeking some advice. According to one book I've looked at, the implementation of a timeline does not have a well-known standardized technique and everyone seems to roll their own. Perhaps some of you can offer advice.
Here is the general situation, where "Time" is the game's passing time, starting at 0 when game launches:
The task: Finding the best data structures to be used such that I can throw any arbitrary value of t at the timeline entity and retrieve the active animation objects at that point in time, and finding these quickly. (Note: this is not a question about finding keyframes, but rather about finding valid animation objects on a timeline of many scheduled animations.)
I am using STL for simplicity. One idea I thought of was to have a general map of animations as my timeline, where the map's key was the start time, since maps automatically sort based on keys. Then, for any value of t I can stop iterating as soon as the key is a value higher than t since I know the animation has not started yet.
But this seems inefficient:
I would always iterate from the beginning, even for animations that have already completed. I don't want a technique that stores flags or pointers for completed animations, because I want to always be able to jump to any t and get active animations for that point in time.
Another problem, the map can only have one key, in this case the start time. To know if an animation is active, the algorithm must also peer inside the value of the map to see if its duration has been completed yet. This seems wasteful, surely there is a better way.
I think you get the path I'm going down, and I know this is an issue that everyone tackles. Any advice?
The task: Finding the best data structures to be used such that I can throw any arbitrary value of t at the timeline entity and retrieve the active animation objects at that point in time, and finding these quickly. (Note: this is not a question about finding keyframes, but rather about finding valid animation objects on a timeline of many scheduled animations.)
Firstly, I'd recommend doing the simplest thing that could possibly work first. The following assumes that your timeline isn't changing frequently (or at all).
Keep your animations in a vector (avoid lists or maps unless you have a good reason to use them).
To find all the animations at time T, just iterate through the list returning all the animations whose start time is less than T and whose end time is greater than T.
If you have a very large list of animations in your timeline, and the above approach is too slow, notice that your problem is similar to the geometric one of finding intersections between a line and some axis-aligned boxes. Generally, if we have too much geometry to test at once, we use spacial partitioning to avoid too many intersection tests. You could do the same here, except the problem is much easier (it's one dimensional). Adapt a quadtree solution (or just use a fixed vector of buckets) to do "temporal partitioning".
• I very much enjoyed the other answers and discussions as well, but this cut to the heart of the issue which I eventually learned on google as well. – johnbakers Jun 19 '13 at 17:52
A seeking is your primary concern, a skip list might be your best bet. These can be implemented as a literal linked list or a more compact and cache-friendly data structure using indices into a vector instead of list nodes.
To scrub or advance the timelines, keep a list of events (points in time at which any animation starts or ends). Keep these in a sorted sequence of some kind, e.g., a vector of something like:
struct AnimationEvent {
float time;
enum { START, END } type;
int animationID;
};
Keep a point to the "last time event" you encountered and the last time you looked for. When adding time, you can simply grab the leading time events that fall within the new period, no need for extended searching.
For example, in the graphic above, you'd have a list like
| 0 | START | 12 |
| 0 | START | 13 |
| 0 | START | 16 |
| 1 | START | 1 |
| 1 | START | 14 |
| 2 | START | 2 |
You check what happens in the first second. You see that animation nodes 12, 13, and 16 all start. You record that the last node you consumed was at index 2 and at time 1. You advanced another 0.5 seconds. The next event starts at 1s so nothing happens. You now record that the last node you consumed was still index 2 but at time 0.5s, and you advanced the active nodes (12, 13, 16) by 0.5s. You advance another 0.5s. Now your time is 1s so you consume the records indicating nodes 1 and 14 have started. You advanced your previously active nodes (still 12, 13, 16) by the additional 0.5s. You record that the last node you consumed was at index 4 at time 1.0s. Repeat. When you consume a node for animation, start it or end it as appropriate, and update nodes by the delta in time as appropriate (if an animation was due to start in 0.3s but you advanced 0.5s, then you must both start that animation and advance it by 0.2s).
I will note that this latter technique is super useful. It's used in animation, clipping, spatial partitioning, etc. Internalize it.
• I got distracted by your useful website which I am going to peer at first, then revert back to your answer here. thanks man. – johnbakers Jun 18 '13 at 7:33
• Ok, what's not clear here is the purpose of storing the last index you looked at, and at what time, since it appears you are still iterating from the start of this list every time? Additionally, if randomly scrubbing through time, forward or backward, does this list still serve well? – johnbakers Jun 18 '13 at 9:05
• You don't need to constantly reiterate from the start of the list. That's just a waste. Probably won't matter for most games, the wasted time will be nearly immeasurable. Start where you left off, though, and you can more easily tell "new" events from "old" ones. And yes, going backward the algorithm is the exact same, just reverse the direction of the list you iterate in and invert the animation times. – Sean Middleditch Jun 18 '13 at 17:06
• I get it, thanks. I think what I need depends less on the direction of iteration and remembering previous iterations than it does on the ability to quickly and randomly jump between values of t. I am looking into Interval Trees, BSP, etc for this task which seem appropriate as well. – johnbakers Jun 18 '13 at 22:16
ok, im still not quite clear on what problem you're trying to solve. but im going to describe it as "you have some odd game mechanic that means you want to jump around randomly within a set timeline". and that the timeline itself has no logic (which you usually see with cutscene animation tools).
the difficult part here is that usually for cutscene animation tools, you'll trigger an animation at particular frames, and that animation will have been authored in a different tool, and you'd just tell your animation to play. and fix any problems with the seperate authoring tool. thats basically how cutscene playback works ;) just keyframes to start animations (or end looping ones).
the problem we appear to be solving here is one of random access. and im going to assume that this is the desired access pattern over linear playback. which would have a different solution completely. (which i'd be happy to ramble on about if you want).
right. to optimise for random access, im going to go with doing this on a per frame basis, rather than on a node-timeline basis. the general idea is that you're optimising being able to go directly to frame N, over memory footprint, or overall linear playback. to keep things to a sane memory footprint. i'd recommend having a buffer of some arbitrary number of frames that you've pre-processed in the per frame approach. while keeping the actual datastructure the more traditional node-timeline based paradigm.
so lets say that we're going to have a buffer of 1024 frames that we can jump to at any point. each "frame" would contain an array of two 32bit values per node. one an ID for the animation that's playing, and the other would contain the frame (or duration as a float) of the playback position within that animation. the thinking behind having a contiguous sized array is that it may lead to slightly more predictable memory usage and cache usage, but i'd recommend profiling that and working out whether you can get away with only storing the information for the nodes that have animations on that frame, rather than storing all nodes per frame regardless of animation state.
its then up to your animation system to be able to "jump to" a particular animation / time slice when requested by this system, which is a slightly different concern. the system we're talking about here shouldn't really be handling the animations themselves, but slaving a separate system. a similar approach, but unpacking keyframe / node data may work except it'd require a hell of a lot more memory!. it really depends on exactly what you're trying to do, which isnt really clear.
when you have more than 1024 frames, you would need to use a sliding widow approach and "decompresss" from your original space optimised data structure to your memory unfriendly optimised structure. you'll need at least two buffers, one you're reading from, and one you're decompressing too. assuming you're playing back linearly. jumping back to a frame which is no longer "decompressed" would require decompression before you could jump to it. but you could easily keep as many buffers around as you need depending on overall memory requirements.
the major downsides to this is that, linear playback suddenly becomes a hell of a lot of work for absolutely no benefit. you're doing a hell of a lot of preprocessing that you simply dont need to do. if you've got a number of these systems running at the same time, you're going to thrash the hell out of cache. its really not an approach that i'd recommend other for solving the "random access" problem.
doing linear playback is far easier achieved by just storing animation start points, and kicking off animations at particular frames, and stopping them when required. leaving the delta time to keep the animation in sync with the overall timeline. a hell of a lot less effort ;)
whew. ok hopefully that was slightly more useful? and you're kinda getting what im talking about? does this actually solve the problem you're asking?
I you're actually just asking "how do i write a cutscene system". the answer is far far far easier than the above.
• Thanks, but two points: I know how to interpolate keyframes within an animation object, that's not my question. And while I know that premature optimization is bad, this is an important design decision and warrants a thought-out technique before "getting it to work," as I discovered on my previous attempt, which did work but did so very poorly. In such a case, fixing the speed issue is not a matter of tweaking an API as much as it is in designing an efficient architecture implementation – johnbakers Jun 18 '13 at 6:52
• it comes down to your API design. you need to think about your data access patterns, and design around how you will be most likely using the api. for example, you're going to be spending a lot of time saying "im on frame X". and your access paradigm is going to be optimized around tweening things between keyframes at animation points. eg: i'd probably eventually just pre-process the tweening at the cost of memory (eg: every frame gets an array element for every animation with the full tween data). but at first i'd just go for a mathmatical approach. – Matt D Jun 18 '13 at 6:58
• thanks for the comments, but there really is more to this question than interface; I can have an interface say "add this animation to this timeline" and another interface say "retrieve animations for time t", but its how to implement the best underlying architecture for these requests that I'm asking about. – johnbakers Jun 18 '13 at 6:59
• the point of the interface is that the application using this doesnt care about how its implemented. how you implement it can then be variable. which is kind of what im getting at. as for how to implement it, i'll provide a bit more info later tonight. there's a bunch of things that are curious though, such as, are you going for the hierarchical type clip system ala flash? is this a UI thing? or is this in-game animation sequencing? – Matt D Jun 18 '13 at 7:04
• You are right about making APIs ignorant of implementation, but I'm passed that stage and directly interested in efficient implementation. I have never used flash, and my question is specifically about a best practice or wise idea for organizing individual animation objects (perhaps you can call them "clips") such that I can quickly retrieve all animations within the whole set of animations that are active any arbitrary value of t – johnbakers Jun 18 '13 at 7:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2616629898548126, "perplexity": 909.3324052832622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578656640.56/warc/CC-MAIN-20190424194348-20190424220348-00093.warc.gz"} |
https://www.winc.com.au/main-catalogue-productdetail/olympic-manilla-folder-foolscap-grey-box-100/87261476?feature=recommend_blowup&feature_ident=rule_name%3Aauto%7Crange_pop%3A2%7Crec_ord%3A1%7Cweighted%3A75.50%7Cident%3A87261462 | No postcode
# Olympic Manilla Folder Foolscap Grey Box 100
87261476
Product Code: 87261476 Manufacturer Code: 193868
• Product Type: Manilla Folder
• Colour: Grey
• Manufactured in Australia with quality Australian Made file board
• Each folder is pre-creased to accommodate up to 25mm of paper
• Files are pre-slotted for use with standard 80mm prong-fasteners
27 in stock
View similar
\$35.49 / box 100
• Product Info
• These Olympic Manilla folders provide a simple and efficient filing solution for every workplace. Each folder can accommodate up to 25mm of paper and includes 5 pre-cut tabs on the long end so documents can be divided into sections and easily classified.
• Manufactured in Australia with quality Australian Made file board
• Each folder is pre-creased to accommodate up to 25mm of paper
• Files are pre-slotted for use with standard 80mm prong-fasteners
• Each folder is pre-cut with five tabs on the long end
• Main Specs
• Extended Specs | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692777752876282, "perplexity": 20923.561131514256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256546.11/warc/CC-MAIN-20190521182616-20190521204616-00062.warc.gz"} |
http://math.stackexchange.com/questions/104956/why-does-int-00-5-frac1x2-0-1-doest-converge-and-int-00-5-f | Why does $\int_{0}^{0.5}\frac{1}{x^2-0.1}$ does't converge and $\int_{0}^{0.5}\frac{1}{x^2-0.3}$ does?
In order to prove that $\int_{0}^{0.5}\frac{1}{x^2-1}$ converges I compared it to $\int_{0}^{0.5}\frac{1}{x}$ which converges, by checking that $\lim \frac{\frac{1}{x^2-1}}{\frac{1}{x}}=0$ when $x \to 0$. Then I went to Wolfram Alpha and tried to check $\int_{0}^{0.5}\frac{1}{x^2-0.1}$ (It is suppose to converges by the same test) but it said that it diverges, while $\int_{0}^{0.5}\frac{1}{x^2-0.3}$ doesn't. (with $0.2$ it just couldn't compute). What's really going on there between $0.1$ to $0.3$? Is W.A wrong?
Edit: Sorry! $\int_{0}^{0.5}\frac{1}{x}$ obviously doesn't converge. So, in addition to the rest of the question, how can I prove my original integral does converge?
Thank you very much.
-
Short answer to the edited question: Because $x^2$ ranges from $0$ to $0.25$, and $0.1$ is in this range (making the denominator $0$) whereas $0.3$ is outside of this range. – Eric Naslund Feb 2 '12 at 14:23
It should also be pointed out that there are missing $dx$ terms in most of your posts (at least when they originally are written). This is a bad habit to start! – JavaMan Feb 2 '12 at 17:58
Between $x=0$ and $x=0.5$, the function $\frac{1}{x^2-1}$ is perfectly respectable! Note that the denominator is never $0$ in our interval. The largest absolute value is reached at $x=0.5$. So your function has no issues, it is continuous on a closed interval. For the problem you were initially considering, we are finished.
But the question you were led to ask is more interesting, and shows a good effort to understand the situation.
Look first at $\frac{1}{x^2-0.3}$. The denominator is $0$ at $x=\pm\sqrt{0.3}$. The positive root is roughly $0.547722$, outside our interval, though not by much. Thus the function $\frac{1}{x^2-0.3}$ is well-behaved in the interval $[0,0.5]$.
Look now at $\frac{1}{x^2-0.1}$. The denominator is $0$ at $x=\pm\sqrt{0.1}$. The positive root is about $0.3162278$, and this is inside our interval. So our function blows up inside our interval, and there may be a problem. Indeed there is.
You know that a function can blow up, but despite that the integral converges. A standard example is $\int_0^1 \frac{dx}{\sqrt{x}}$. We will show that $\int_0^{0.5}\frac{dx}{x^2-0.1}$ diverges.
As mentioned above, there is potential trouble at $\sqrt{0.1}$. To make typing easier, let $a=\sqrt{0.1}$. Our function is not defined at $x=a$, and blows up near $x=a$. Recall that $a$ is in our interval. When we are dealing with a singularity inside our interval, it is useful to break up the interval into two integrals, in this case from $0$ to $a$ and from $a$ to $0.5$.
We will show that $\int_0^a \frac{dx}{x^2-0.1}$ diverges. (The integral from $a$ to $0.5$ also does, but showing that one of the integrals is bad is enough.)
So we are looking at the integral $\int_0^a\frac{dx}{x^2-a^2}$. For no good reason, except for a preference for the positive, we look instead at $$\int_0^a \frac{dx}{a^2-x^2}.$$ Make the change of variable $w=a-x$. Note that $a^2-x^2=(a-x)(a+x)=w(2a-w)$. Quickly we arrive at $$\int_{w=0}^a \frac{dw}{w(2a-w)}.$$ This integral diverges, by comparison with $\int_0^a\frac{dw}{w}$, which, as pointed out by anonymous, diverges.
-
Thanks a lot! very helpful and clear. – Jozef Feb 2 '12 at 15:28
You have a mistake. The integral $\int_{0}^{0.5} dx/x$ does not converge! in fact, we can easily calculate it, as the antiderivative of $1/x$ is $lnx$, and $\lim_{x\to 0^+} ln(x) = -\infty$.
-
Right! so how can I prove that my original integral does converge? – Jozef Feb 2 '12 at 14:12
Well, your integral is actually a definite integral. The function $\frac{1}{x^2-1}$ is continuous on the interval $[0,1/2]$, so there is no question of convergence at all. – the L Feb 2 '12 at 14:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9728354811668396, "perplexity": 178.3244116075019}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043060830.93/warc/CC-MAIN-20150728002420-00148-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://lightofthefurqan.com/2017/06/ | # Grammatically Understanding Surat al-Tawhid
Photo credits: Faleh Zahrawi
In the previous term, I had the opportunity to spend some time on focused exegetical discussions on sūrat al-Tawḥīd with some colleagues. We covered many different aspects of sūrat al-Tawḥīd, but one aspect that I found to be the most interesting was the grammatical discussion surrounding the first verse.
The following is an attempt to grammatically understand the first verse of this chapter. I have relied heavily on a lot of grammatical jargon and have tried to explain it as best as I can so as to facilitate readers not well versed in Arabic grammar.
The first verse of sūrat al-Tawḥīd is as follows,
ٌقُلْ هُوَ اللهَ أَحَد
(Tentative Translation) Say,” He is Allah, the One… 1
### Defining the Text
Before, attempting to understand the verse grammatically, the actual verse and any other potential variant readings must be defined. Works documenting the 7, 10 or 14 readings of the Qurān indicate that most scholars of the readings of the Qurān were in agreement over the popular recitation of the verse that is present in the Qurān today, that is, “قل هو الله أحد”.
Further evidence of the fact that the text of the verse has been correctly preserved is that some books of history have recorded that this verse was minted in the same form on Syrian coins between the years 42 A.H. and 49 A.H. during the caliphate of Marwān bin al-Ḥakm2.
#### Zamakhsharī and Variant Readings
In light of this, it is interesting to note that Zamakhsharī (d. 538 A.H.) mentions some differences in reports of the recitations of this verse3.
1. It has been reported that Ibn Mas’ūd and Ubay bin Ka’b read the verse without the word “قل”, thus reading it as “هُوَ اللهُ أَحَد”
2. A’mash read the word “أَحَد” as “وَاحِد”. Thus the verse would be, “قُلْ هُوَ اللهُ وَاحِد”
3. It has been reported that the Prophet read the verse without the words, “قُلْ هُو”. Thus the verse would simply be, “اللهُ أَحَد”. This has apparently been recorded in a narration that says, “To read ‘اللهُ أَحَد’, is equitable to reading the whole Qurān”.
1. Al-Tawḥīd 112:1
2. Details about this can often be found in entries about Marwān, refer to Ibn al-Athīr, Asad al-Ghābbah fī Ma’rifat al-Ṣaḥābah, v. 4 pg. 348
3. Zamakhsharī, al-Kashshāf ‘an Ḥaqāiq Ghawāmiḍ al-Tanzīl v. 4 pg. 817 – 818 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8489493131637573, "perplexity": 6302.649167689585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489282.7/warc/CC-MAIN-20190219000551-20190219022551-00362.warc.gz"} |
https://collegemathteaching.wordpress.com/category/science/ | # College Math Teaching
## August 21, 2014
### Calculation of the Fourier Transform of a tent map, with a calculus tip….
I’ve been following these excellent lectures by Professor Brad Osgood of Stanford. As an aside: yes, he is dynamite in the classroom, but there is probably a reason that Stanford is featuring him. 🙂
And yes, his style is good for obtaining a feeling of comradery that is absent in my classroom; at least in the lower division “service” classes.
This lecture takes us from Fourier Series to Fourier Transforms. Of course, he admits that the transition here is really a heuristic trick with symbolism; it isn’t a bad way to initiate an intuitive feel for the subject though.
However, the point of this post is to offer a “algebra of calculus trick” for dealing with the sort of calculations that one might encounter.
By the way, if you say “hey, just use a calculator” you will be BANNED from this blog!!!! (just kidding…sort of. 🙂 )
So here is the deal: let $f(x)$ represent the tent map: the support of $f$ is $[-1,1]$ and it has the following graph:
The formula is: $f(x)=\left\{\begin{array}{c} x+1,x \in [-1,0) \\ 1-x ,x\in [0,1] \\ 0 \text{ elsewhere} \end{array}\right.$
So, the Fourier Transform is $F(f) = \int^{\infty}_{-\infty} e^{-2 \pi i st}f(t)dt = \int^0_{-1} e^{-2 \pi i st}(1+t)dt + \int^1_0e^{-2 \pi i st}(1-t)dt$
Now, this is an easy integral to do, conceptually, but there is the issue of carrying constants around and being tempted to make “on the fly” simplifications along the way, thereby leading to irritating algebraic errors.
So my tip: just let $a = -2 \pi i s$ and do the integrals:
$\int^0_{-1} e^{at}(1+t)dt + \int^1_0e^{at}(1-t)dt$ and substitute and simplify later:
Now the integrals become: $\int^{1}_{-1} e^{at}dt + \int^0_{-1}te^{at}dt - \int^1_0 te^{at} dt.$
These are easy to do; the first is merely $\frac{1}{a}(e^a - e^{-a})$ and the next two have the same anti-derivative which can be obtained by a “integration by parts” calculation: $\frac{t}{a}e^{at} -\frac{1}{a^2}e^{at}$; evaluating the limits yields:
$-\frac{1}{a^2}-(\frac{-1}{a}e^{-a} -\frac{1}{a^2}e^{-a}) - (\frac{1}{a}e^{a} -\frac{1}{a^2}e^a)+ (-\frac{1}{a^2})$
Add the first integral and simplify and we get: $-\frac{1}{a^2}(2 - (e^{-a} -e^{a})$. NOW use $a = -2\pi i s$ and we have the integral is $\frac{1}{4 \pi^2 s^2}(2 -(e^{2 \pi i s} -e^{-2 \pi i s}) = \frac{1}{4 \pi^2 s^2}(2 - cos(2 \pi s))$ by Euler’s formula.
Now we need some trig to get this into a form that is “engineering/scientist” friendly; here we turn to the formula: $sin^2(x) = \frac{1}{2}(1-cos(2x))$ so $2 - cos(2 \pi s) = 4sin^2(\pi s)$ so our answer is $\frac{sin^2( \pi s)}{(\pi s)^2} = (\frac{sin(\pi s)}{\pi s})^2$ which is often denoted as $(sinc(s))^2$ as the “normalized” $sinc(x)$ function is given by $\frac{sinc(\pi x)}{\pi x}$ (as we want the function to have zeros at integers and to “equal” one at $x = 0$ (remember that famous limit!)
So, the point is that using $a$ made the algebra a whole lot easier.
Now, if you are shaking your head and muttering about how this calculation was crude that that one usually uses “convolution” instead: this post is probably too elementary for you. 🙂
## January 20, 2014
### A bit more prior to admin BS
One thing that surprised me about the professor’s job (at a non-research intensive school; we have a modest but real research requirement, but mostly we teach): I never knew how much time I’d spend doing tasks that have nothing to do with teaching and scholarship. Groan….how much of this do I tell our applicants that arrive on campus to interview? 🙂
But there is something mathematical that I want to talk about; it is a follow up to this post. It has to do with what string theorist tell us: $\sum^{\infty}_{k = 1} k = -\frac{1}{12}$. Needless to say, they are using a non-standard definition of “value of a series”.
Where I think the problem is: when we hear “series” we think of something related to the usual process of addition. Clearly, this non-standard assignment doesn’t related to addition in the way we usually think about it.
So, it might make more sense to think of a “generalized series” as a map from the set of sequences of real numbers (or: the infinite dimensional real vector space) to the real numbers; the usual “limit of partial sums” definition has some nice properties with respect to sequence addition, scalar multiplication and with respect to a “shift operation” and addition, provided we restrict ourselves to a suitable collection of sequences (say, those whose traditional sum of components are absolutely convergent).
So, this “non-standard sum” can be thought of as a map $f:V \rightarrow R^1$ where $f(\{1, 2, 3, 4, 5,....\}) \rightarrow -\frac{1}{12}$. That is a bit less offensive than calling it a “sum”. 🙂
## July 23, 2013
### Nate Silver’s Book: The signal and the noise: why so many predictions fail but some don’t
Filed under: books, elementary mathematics, science, statistics — Tags: , — collegemathteaching @ 4:10 pm
Reposted from my personal blog and from my Daily Kos Diary:
Quick Review
Excellent book. There are a few tiny technical errors (e. g., “non-linear” functions include exponential functions, but not all non-linear phenomena are exponential (e. g. power, root, logarithmic, etc.).
Also, experts have some (justified) quibbles with the book; you can read some of these concerning his chapter on climate change here and some on his discussion of hypothesis testing here.
But, aside from these, it is right on. Anyone who follows the news closely will benefit from it; I especially recommend it to those who closely follow science and politics and even sports.
It is well written and is designed for adults; it makes some (but reasonable) demands on the reader. The scientist, mathematician or engineer can read this at the end of the day but the less technically inclined will probably have to be wide awake while reading this.
Details
Silver sets you up by showing examples of failed predictions; perhaps the worst of the lot was the economic collapse in the United States prior to the 2008 general elections. Much of this was due to the collapse of the real estate market and falling house/property values. Real estate was badly overvalued, and financial firms made packages of investments whose soundness was based on many mortgages NOT defaulting at the same time; it was determined that the risk of that happening was astronomically small. That was wrong of course; one reason is that the risk of such an event is NOT described by the “normal” (bell shaped) distribution but rather by one that allows for failure with a higher degree of probability.
There were more things going on, of course; and many of these things were difficult to model accurately just due to complexity. Too many factors makes a model unusable; too few means that the model is worthless.
Silver also talks about models providing probabilistic outcomes: example saying that the GDP will be X in year Y is unrealistic; what we really should say that the probability of the GDP being X plus/minus “E” is Z percent.
Next Silver takes on pundits. In general: they don’t predict well; they are more about entertainment than anything else. Example: look at the outcome of the 2012 election; the nerds were right; the pundits (be they NPR or Fox News pundits) were wrong. NPR called the election “razor tight” (it wasn’t); Fox called it for the wrong guy. The data was clear and the sports books new this, but that doesn’t sell well, does it?
Now Silver looks at baseball. Of course there are a ton of statistics here; I am a bit sorry he didn’t introduce Bayesian analysis in this chapter though he may have been setting you up for it.
Topics include: what does raw data tell you about a player’s prospects? What role does a talent scout’s input have toward making the prediction? How does a baseball players hitting vary with age, and why is this hard to measure from the data?
The next two chapters deal with predictions: earthquakes and weather. Bottom line: we have statistical data on weather and on earthquakes, but in terms of making “tomorrow’s prediction”, we are much, much, much further along in weather than we are on earthquakes. In terms of earthquakes, we can say stuff like “region Y has a X percent chance of an earthquake of magnitude Z within the next 35 years” but that is about it. On the other hand, we are much better about, say, making forecasts of the path of a hurricane, though these are probabilistic:
In terms of weather: we have many more measurements.
But there IS the following: weather is a chaotic system; a small change in initial conditions can mean to a large change in long term outcomes. Example: one can measure a temperature at time t, but only to a certain degree of precision. The same holds for pressure, wind vectors, etc. Small perturbations can lead to very different outcomes. Solutions aren’t stable with respect to initial conditions.
You can see this easily: try to balance a pen on its tip. Physics tells us there is a precise position at which the pen is at equilibrium, even on its tip. But that equilibrium is so unstable that a small vibration of the table or even small movement of air in the room is enough to upset it.
In fact, some gambling depends on this. For example, consider a coin toss. A coin toss is governed by Newton’s laws for classical mechanics, and in principle, if you could get precise initial conditions and environmental conditions, the outcome shouldn’t be random. But it is…for practical purposes. The same holds for rolling dice.
Now what about dispensing with models and just predicting based on data alone (not regarding physical laws and relationships)? One big problem: data is noisy and is prone to be “overfitted” by a curve (or surface) that exactly matches prior data but is of no predictive value. Think of it this way: if you have n data points in the plane, there is a polynomial of degree n-1 that will fit the data EXACTLY, but in most cases have a very “wiggly” graph that provides no predictive value.
Of course that is overfitting in the extreme. Hence, most use the science of the situation to posit the type of curve that “should” provide a rough fit and then use some mathematical procedure (e. g. “least squares”) to find the “best” curve that fits.
The book goes into many more examples: example: the flu epidemic. Here one finds the old tug between models that are too simplistic to be useful for forecasting and too complicated to be used.
There are interesting sections on poker and chess and the role of probability is discussed as well as the role of machines. The poker chapter is interesting; Silver describes his experience as a poker player. He made a lot of money when poker drew lots of rookies who had money to spend; he didn’t do as well when those “bad” players left and only the most dedicated ones remained. One saw that really bad players lost more money than the best players won (not that hard to understand). He also talked about how hard it was to tell if someone was really good or merely lucky; sometimes this wasn’t perfectly clear after a few months.
Later, Silver discusses climate change and why the vast majority of scientists see it as being real and caused (or made substantially worse) by human activity. He also talks about terrorism and enemy sneak attacks; sometimes there IS a signal out there but it isn’t detected because we don’t realize that there IS a signal to detect.
However the best part of the book (and it is all pretty good, IMHO), is his discussion of Bayes law and Bayesian versus frequentist statistics. I’ve talked about this.
I’ll demonstrate Bayesian reasoning in a couple of examples, and then talk about Bayesian versus frequentist statistical testing.
Example one: back in 1999, I went to the doctor with chest pains. The doctor, based on my symptoms and my current activity level (I still swam and ran long distances with no difficulty) said it was reflux and prescribed prescription antacids. He told me this about a possible stress test: “I could stress test you but the probability of any positive being a false positive is so high, we’d learn nothing from the test”.
Example two: suppose you are testing for a drug that is not widely used; say 5 percent of the population uses it. You have a test that is 95 percent accurate in the following sense: if the person is really using the drug, it will show positive 95 percent of the time, and if the person is NOT using the drug, it will show positive only 5 percent of the time (false positive).
So now you test 2000 people for the drug. If Bob tests positive, what is the probability that he is a drug user?
Answer: There are 100 actual drug users in this population, so you’d expect 100*.95 = 95 true positives. There are 1900 non-users and 1900*.05 = 95 false positives. So there are as many false positives as true positives! The odds that someone who tests positive is really a user is 50 percent.
Now how does this apply to “hypothesis testing”?
Consider basketball. You know that a given player took 10 free shots and made 4. You wonder: what is the probability that this player is a competent free throw shooter (given competence is defined to be, say, 70 percent).
If you just go by the numbers that you see (true: n = 10 is a pathetically small sample; in real life you’d never infer anything), well, the test would be: given the probability of making a free shot is 70 percent, what is the probability that you’d see 4 (or fewer) made free shots out of 10?
Using a calculator (binomial probability calculator), we’d say there is a 4.7 percent chance we’d see 4 or fewer free shots made if the person shooting the shots was a 70 percent shooter. That is the “frequentist” way.
But suppose you found out one of the following:
1. The shooter was me (I played one season in junior high and some pick up ball many years ago…infrequently) or
2. The shooter was an NBA player.
If 1 was true, you’d believe the result or POSSIBLY say “maybe he had a good day”.
If 2 was true, then you’d say “unless this player was chosen from one of the all time worst NBA free throw shooters, he probably just had a bad day”.
Bayesian hypothesis testing gives us a way to make and informed guess. We’d ask: what is the probability that the hypothesis is true given the data that we see (asking the reverse of what the frequentist asks). But to do this, we’d have to guess: if this person is an NBA player, what is the probability, PRIOR to this 4 for 10 shooting, that this person was 70 percent or better (NBA average is about 75 percent). For the sake of argument, assume that there is a 60 percent chance that this person came from the 70 percent or better category (one could do this by seeing the percentage of NBA players shooing 70 percent of better). Assign a “bad” percentage as 50 percent (based on the worst NBA free throw shooters): (the probability of 4 or fewer made free throws out of 10 given a 50 percent free throw shooter is .377)
Then we’d use Bayes law: (.0473*.6)/(.0473*.6 + .377*.4) = .158. So it IS possible that we are seeing a decent free throw shooter having a bad day.
This has profound implications in science. For example, if one is trying to study genes versus the propensity for a given disease, there are a LOT of genes. Say one tests 1000 genes of those who had a certain type of cancer and run a study. If we accept p = .05 (5 percent) chance of having a false positive, we are likely to have 50 false positives out of this study. So, given a positive correlation between a given allele and this disease, what is the probability that this is a false positive? That is, how many true positives are we likely to have?
This is a case in which we can use the science of the situation and perhaps limit our study to genes that have some reasonable expectation of actually causing this malady. Then if we can “preassign” a probability, we might get a better feel if a positive is a false one.
Of course, this technique might induce a “user bias” into the situation from the very start.
The good news is that, given enough data, the frequentist and the Bayesian techniques converge to “the truth”.
Summary Nate Silver’s book is well written, informative and fun to read. I can recommend it without reservation.
## July 12, 2013
### An example to apply Bayes’ Theorem and multivariable calculus
I’ve thought a bit about the breast cancer research results and found a nice “application” exercise that might help teach students about Bayes Theorem, two-variable maximizing, critical points, differentials and the like.
I’ve been interested in the mathematics and statistics of the breast cancer screening issue mostly because it provided a real-life application of statistics and Bayes’ Theorem.
So right now, for women between 40-49, traditional mammograms are about 80 percent accurate in the sense that, if a woman who really has breast cancer gets a mammogram, the test will catch it about 80 percent of the time. The false positive rate is about 8 percent in that: if 100 women who do NOT have breast cancer get a mammogram, 8 of the mammograms will register a “positive”.
Since the breast cancer rate for women in this age group is about 1.4 percent, there will be many more false positives than true positives; in fact a woman in this age group who gets a “positive” first mammogram has about a 16 percent chance of actually having breast cancer. I talk about these issues here.
So, suppose you desire a “more accurate test” for breast cancer. The question is this: what do you mean by “more accurate”?
1. If “more accurate” means “giving the right answer more often”, then that is pretty easy to do.
Current testing is going to be wrong: if C means cancer, N means “doesn’t have cancer”, P means “positive test” and M means “negative test”, then the probability of being wrong is:
$P(M|C)P(C) + P(P|N)P(N) = .2(.014) + .08(.986) = .08168$. On the other hand, if you just declared EVERYONE to be “cancer free”, you’d be wrong only 1.4 percent of the time! So clearly that does not work; the “false negative” rate is 100 percent, though the “false positive” rate is 0.
On the other hand if you just told everyone “you have it”, then you’d be wrong 98.6 percent of the time, but you’d have zero “false negatives”.
So being right more often isn’t what you want to maximize, and trying to minimize the false positives or the false negatives doesn’t work either.
2. So what about “detecting more of the cancer that is there”? Well, that is where this article comes in. Switching to digital mammograms does increase detection rate but also increases the number of false positives:
The authors note that for every 10,000 women 40 to 49 who are given digital mammograms, two more cases of cancer will be identified for every 170 additional false-positive examinations.
So, what one sees is that if a woman gets a positive reading, she now has an 11 percent of actually having breast cancer, though a few more cancers would be detected.
Is this progress?
My whole point: saying one test is “more accurate” than another test isn’t well defined, especially in a situation where one is trying to detect something that is relatively rare.
Here is one way to look at it: let the probability of breast cancer be $a$, the probability of detection of a cancer be given by $x$ and the probability of a false positive be given by $y$. Then the probability of a person actually having breast cancer, given a positive test is given by:
$B(x,y) =\frac{ax}{ax + (1-a)y}$; this gives us something to optimize. The partial derivatives are:
$\frac{\partial B}{\partial x}= \frac{(a)(1-a)y}{(ax+ (1-a)y)^2},\frac{\partial B}{\partial y}=\frac{(-a)(1-a)x}{(ax+ (1-a)y)^2}$. Note that $1-a$ is positive since $a$ is less than 1 (in fact, it is small). We also know that the critical point $x = y =0$ is a bit of a “duh”: find a single test that gives no false positives and no false negatives. This also shows us that our predictions will be better if $y$ goes down (fewer false positives) and if $x$ goes up (fewer false negatives). None of that is a surprise.
But of interest is in the amount of change. The denominators of each partial derivative are identical. The coefficients of the numerators are of the same magnitude; there are different signs. So the rate of improvement of the predictive value is dependent on the relative magnitudes of $x$, which is $.8$ for us, and $y$, which is $.08$. Note that $x$ is much larger than $y$ and $x$ occurs in the numerator $\frac{\partial B}{\partial y}$. Hence an increase in the accuracy of the $y$ factor (a decrease in the false positive rate) will have a greater effect on the accuracy of the test than a similar increase in the “false negative” accuracy.
Using the concept of differentials, we expect a change $\Delta x = .01$ leads to an improvement of about .00136 (substitute $x = .8, y = .08$ into the expression for $\frac{\partial B}{\partial x}$ and multiply by $.01$. Similarly an improvement (decrease) of $\Delta y = -.01$ leads to an improvement of .013609.
You can “verify” this by playing with some numbers:
Current ($x = .8, y = .08$) we get $B = .1243$. Now let’s change: $x = .81, y = .08$ leads to $B = .125693$
Now change: $x = .8, y = .07$ we get $B = .139616$
Bottom line: the best way to increase the predictive value of the test is to reduce the number of false positives, while staying the same (or improving) the percentage of “false negatives”. As things sit, the false positive rate is the bigger factor affecting predictive value.
### Hypothesis Testing: Frequentist and Bayesian
Filed under: science, statistics — Tags: , — collegemathteaching @ 4:24 pm
I was working through Nate Silver’s book The Signal and the Noise and got to his chapter about hypothesis testing. It is interesting reading and I thought I would expand on that by posing a couple of problems.
Problem one: suppose you knew that someone attempted some basketball free throws.
If they made 1 of 4 shots, what would the probability be that they were really, say, a 75 percent free throw shooter?
Or, what if they made 5 of 20 shots?
Problem two: Suppose a woman aged 40-49 got a digital mammagram and got a “positive” reading. What is the probability that she indeed has breast cancer, given that the test catches 80 percent of the breast cancers (note: 20 percent is one estimate of the “false negative” rate; and yes, the false positive rate is 7.8 percent. The actual answer, derived from data, might surprise you: it is : 16.3 percent.
I’ll talk about problem two first, as this will limber the mind for problem one.
So, you are a woman between 40-49 years of age and go into the doctor and get a mammogram. The result: positive.
So, what is the probability that you, in fact, have cancer?
Think of it this way: out of 10,000 women in that age bracket, about 143 have breast cancer and 9857 do not.
So, the number of false positives is 9857*.078 = 768.846; we’ll keep the decimal for the sake of calculation;
The number of true positives is: 143*.8 = 114.4.
The total number of positives is therefore 883.246.
The proportion of true positives is $\frac{114.4}{883.246} = .1628$ So the false positive rate is 83.72 percent.
It turns out that, data has shown the 80-90 percent of positives in women in this age bracket are “false positives”, and our calculation is in line with that.
I want to point out that this example is designed to warm the reader up to Bayesian thinking; the “real life” science/medicine issues are a bit more complicated than this. That is why the recommendations for screening include criteria as to age, symptoms vs. asymptomatic, family histories, etc. All of these factors affect the calculations.
For example: using digital mammograms with this population of 10,000 women in this age bracket adds 2 more “true” detections and adds 170 more false positives. So now our calculation would be $\frac{116.4}{1055.25} = .1103$ , so while the true detections go up, the false positives also goes up!
Our calculation, while specific to this case, generalizes. The formula comes from Bayes Theorem which states:
$P(A|B) = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|not(A))P(not(A))}$. Here: $P(A|B)$ is the probability of event A occurring given that B occurs and P(A) is the probability of event A occurring. So in our case, we were answering the question: given a positive mammogram, what is the probability of actually having breast cancer? This is denoted by P(A|B) . We knew: P(B|A) which is the probability of having a positive reading given that one has breast cancer and P(B|not(A)) is the probability of getting a positive reading given that one does NOT have cancer. So for us:$P(B|A) = .8, P(B|not(A)) = .078$ and $P(A) = .0143, P(not(A)) = .9857$ .
The bottom line: If you are testing for a condition that is known to be rare, even a reasonably accurate test will deliver a LOT of false positives.
Here is a warm up (hypothetical) example. Suppose a drug test is 99 percent accurate in that it will detect that a certain drug is there 99 percent of the time (if it is really there) and only yield a false positive 1 percent of the time (gives a positive result even if the person being tested is free of this drug). Suppose the drug use in this population is “known” to be, say 5 percent.
Given a positive test, what is the probability that the person is actually a user of this drug?
Answer: $\frac{.99*.05}{.99*.05+.01*.95} = .839$ . So, in this population, about 16.1 percent of the positives will be “false positives”, even though the test is 99 percent accurate!
Now that you are warmed up, let’s proceed to the basketball question:
Question: suppose someone (that you don’t actually see) shoots free throws.
Case a) the player makes 1 of 4 shots.
Case b) the player makes 2 of 8 shots.
Case c) the player makes 5 of 20 shots.
Now you’d like to know: what is the probability that the player in question is really a 75 percent free throw shooter? (I picked 75 percent as the NBA average for last season is 75.3 percent).
Now suppose you knew NOTHING else about this situation; you know only that someone attempted free throws and you got the following data.
The traditional “hypothesis test” uses the “frequentist” model: you would say: if the hypothesis that the person really is a 75 percent free throw shooter is true, what is the probability that we’d see this data?
So one would use the formula for the binomial distribution and use n = 4 for case A, n = 8 for case B and n = 20 for case C and use p = .75 for all cases.
In case A, we’d calculate the probability that the number of “successes” (made free throws) is less than or equal to 1; 2 for case B and 5 for case C.
For you experts: the null hypothesis would be, say for the various cases would be $P(Y \le 1 | p = .75), P(Y \le 2 | p = .75), P(Y \le 5 | = .75)$ respectively, where the probability mass function is adjusted for the different values of n .
We could do the calculations by hand, or rely on this handy calculator.
Case A: .0508
Case B: .0042
Case C: .0000 ($3.81 \times 10^{-6}$)
By traditional standards: Case A: we would be on the verge of “rejecting the null hypothesis that p = .75 and we’d easily reject the null hypothesis in cases B and C. The usual standard (for life science and political science) is p = .05).
(for a refresher, go here)
So that is that, right?
Well, what if I told you more of the story?
Suppose now, that in each case, the shooter was me? I am not a good athlete and I played one season in junior high, and rarely, some pickup basketball. I am a terrible player. Most anyone would happily reject the null hypothesis without a second thought.
But now: suppose I tell you that I took these performances from NBA box scores? (the first one was taken from one of the Spurs-Heat finals games; the other two are made up for demonstration).
Now, you might not be so quick to reject the null hypothesis. You might reason: “well, he is an NBA player and were he always as bad as the cases show, he wouldn’t be an NBA player. This is probably just a bad game.” In other words, you’d be more open to the possibility that this is a false positive.
Now you don’t know this for sure; this could be an exceptionally bad free throw shooter (Ben Wallace shot 41.5 percent, Shaquille O’Neal shot 52.7 percent) but unless you knew that, you’d be at least reasonably sure that this person, being an NBA player, is probably a 70-75 shooter, at worst.
So “how” sure might you be? You might look at NBA statistics and surmise that, say (I am just making this up), 68 percent of NBA players shoot between 72-78 percent from the line. So, you might say that, prior to this guy shooting at all, the probability of the hypothesis being true is about 70 percent (say). Yes, this is a prior judgement but it is a reasonable one. Now you’d use Bayes law:
$P(A|B) = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|not(A))P(not(A))}$
Here: A represents the “75 percent shooter” being actually true, and B is the is the probability that we actually get the data. Note the difference in outlook: in the first case (the “frequentist” method), we wondered “if the hypothesis is true, how likely is it that we’d see data like this”. In this case, called the Bayesian method, we are wondering: “if we have this data, what is the probability that the null hypothesis is true”. It is a reverse statement, of sorts.
Of course, we have $P(A) = .7, P(not(A)) = .3$ and we’ve already calculated P(B|A) for the various cases. We need to make a SECOND assumption: what does event not(A) mean? Given what I’ve said, one might say not(A) is someone who shoots, say, 40 percent (to make him among the worst possible in the NBA). Then for the various cases, we calculate $P(B|not(A)) = .4752, .3154, .1256$ respectively.
So, we now calculate using the Bayesian method:
Case A, the shooter made 1 of 4: .1996. The frequentist p-value was .0508
Case B, the shooter made 2 of 8: .0301. The frequentist p-value was .0042
Case C, the shooter made 5 of 20: 7.08 x 10^-5 The frequentist p-value was 3.81 x 10^-6
We see the following:
1. The Bayesian method is less likely to produce a “false positive”.
2. As n, the number of data points, grows, the Bayesian conclusion and the frequentist conclusions tend toward “the truth”; that is, if the shooter shoots enough foul shots and continues to make 25 percent of them, then the shooter really becomes a 25 percent free throw shooter.
So to sum it up:
1. The frequentist approach relies on fewer prior assumptions and is computationally simpler. But it doesn’t include extra information that might make it easier to distinguish false positives from genuine positives.
2. The Bayesian approach takes in more available information. But it is a bit more prone to the user’s preconceived notions and is harder to calculate.
How does this apply to science?
Well, suppose you wanted to do an experiment that tried to find out which human gene alleles correspond so a certain human ailment. So a brute force experiment in which every human gene is examined and is statistically tested for correlation with the given ailment with null hypothesis of “no correlation” would be a LOT of statistical tests; tens of thousands, at least. And at a p-value threshold of .05 (we are willing to risk a false positive rate of 5 percent), we will get a LOT of false positives. On the other hand, if we applied bit of science prior to the experiment and were able to assign higher prior probabilities (called “posterior probability”) to the genes “more likely” to be influential and lower posterior probability to those unlikely to have much influence, our false positive rates will go down.
Of course, none of this eliminates the need for replication, but Bayesian techniques might cut down the number of experiments we need to replicate.
## March 5, 2013
### Math in the News (or: here is a nice source of exercises)
I am writing a paper and am through with the mathematics part. Now I have to organize, put in figures and, in general, make it readable. Or, in other words, the “fun” part is over. 🙂
So, I’ll go ahead and post some media articles which demonstrate mathematical or statistical concepts:
Topology (knot theory)
As far as what is going on:
After a century of studying their tangled mathematics, physicists can tie almost anything into knots, including their own shoelaces and invisible underwater whirlpools. At least, they can now thanks to a little help from a 3D printer and some inspiration from the animal kingdom.
Physicists had long believed that a vortex could be twisted into a knot, even though they’d never seen one in nature or the even in the lab. Determined to finally create a knotted vortex loop of their very own, physicists at the University of Chicago designed a wing that resembles a delicately twisted ribbon and brought it to life using a 3D printer.
After submerging their masterpiece in water and using electricity to create tiny bubbles around it, the researchers yanked the wing forward, leaving a similarly shaped vortex in its wake. Centripetal force drew the bubbles into the center of the vortex, revealing its otherwise invisible, knotted structure and allowing the scientists to see how it moved through the fluid—an idea they hit on while watching YouTube videos of dolphins playing with bubble rings.
By sweeping a sheet of laser light across the bubble-illuminated vortex and snapping pictures with a high-speed camera, they were able to create the first 3D animations of how these elusive knots behave, they report today in Nature Physics. It turns out that most of them elegantly unravel within a few hundred milliseconds, like the trefoil-knotted vortex in the video above. […]
Note: the trefoil is the simplest of all of the non-trivial (really knotted) knots in that its projection has the fewest number of crossings, or in that it can be made with the fewest number of straight sticks.
I do have one quibble though: shoelaces are NOT knotted…unless the tips are glued together to make the lace a complete “circuit”. There ARE arcs in space that are knotted:
This arc can never be “straightened out” into a nice simple arc because of its bad behavior near the end points. Note: some arcs which have an “infinite number of stitches” CAN be straightened out. For example if you take an arc and tie an infinite number of shrinking trefoil knots in it and let those trefoil knots shrink toward an endpoint, the resulting arc can be straightened out into a straight one. Seeing this is kind of fun; it involves the use of the “lamp cord trick”
(this is from R. H. Bing’s book The Geometric Topology of 3-Manifolds; the book is chock full of gems like this.)
Social Issues
It is my intent to stay a-political here. But there are such things as numbers and statistics and ways of interpreting such things. So, here are some examples:
Welfare
From here:
My testimony will amplify and support the following points:
A complete picture of time on welfare requires an understanding of two seemingly contradictory facts: the majority of families who ever use welfare do so for relatively short periods of time, but the majority of the current caseload will eventually receive welfare for relatively long periods of time.
It is a good mental exercise to see how this statement could be true (and it is); I invite you to try to figure this out BEFORE clicking on the link. It is a fun exercise though the “answer” will be obvious to some readers.
Speaking of Welfare: there is a debate on whether drug testing welfare recipients is a good idea or not. It turns out that, at least in terms of money saved/spent: it was a money losing proposition for the State of Florida, even when one factors in those who walked away prior to the drug tests. This data might make a good example. Also, there is the idea of a false positive: assuming that the statistic of, say, 3 percent of those on welfare use illegal drugs, how accurate (in terms of false positives) does a test have to be in order to have, say, a 90 percent predictive value? That is, how low does the probability of a false positive have to be for one to be 90 percent sure that someone has used drugs, given that they got a positive drug test?
Lastly: Social Security You sometimes hear: life expectancy was 62 when Social Security started. Well, given that working people pay into it, what are the key data points we need in order to determine what changes should be made? Note: what caused a shorter life expectancy and how does that effect: the percent of workers paying into it and the time that a worker draws from it? Think about these questions and then read what the Social Security office says. There are some interesting “conditional expectation” problems to be generated here.
## March 3, 2013
### Mathematics, Statistics, Physics
Filed under: applications of calculus, media, news, physics, probability, science, statistics — collegemathteaching @ 11:00 pm
This is a fun little post about the interplay between physics, mathematics and statistics (Brownian Motion)
Here is a teaser video:
The article itself has a nice animation showing the effects of a Poisson process: one will get some statistical clumping in areas rather than uniform spreading.
Treat yourself to the whole article; it is entertaining.
## June 5, 2012
### Quantum Mechanics, Hermitian Operators and Square Integrable Functions
In one dimensional quantum mechanics, the state vectors are taken from the Hilbert space of complex valued “square integrable” functions, and the observables correspond to the so-called “Hermitian operators”. That is, if we let the state vectors be represented by $\psi(x) = f(x) + ig(x)$ and we say $\psi \cdot \phi = \int^{\infty}_{-\infty} \overline{\psi} \phi dx$ where the overline decoration denotes complex conjugation.
The state vectors are said to be “square integrable” which means, strictly speaking, that $\int^{\infty}_{-\infty} \overline{\psi}\psi dx$ is finite.
However, there is another hidden assumption beyond the integral existing and being defined and finite. See if you can spot the assumption in the following remarks:
Suppose we wish to show that the operator $\frac{d^2}{dx^2}$ is Hermitian. To do that we’d have to show that:
$\int^{\infty}_{-\infty} \overline{\frac{d^2}{dx^2}\phi} \psi dx = \int^{\infty}_{-\infty} \overline{\phi}\frac{d^2}{dx^2}\psi dx$. This doesn’t seem too hard to do at first, if we use integration by parts:
$\int^{\infty}_{-\infty} \overline{\frac{d^2}{dx^2}\phi} \psi dx = [\overline{\frac{d}{dx}\phi} \psi]^{\infty}_{-\infty} - \int^{\infty}_{-\infty}\overline{\frac{d}{dx}\phi} \frac{d}{dx}\psi dx$. Now because the functions are square integrable, the $[\overline{\frac{d}{dx}\phi} \psi]^{\infty}_{-\infty}$ term is zero (the functions must go to zero as $x$ tends to infinity) and so we have: $\int^{\infty}_{-\infty} \overline{\frac{d^2}{dx^2}\phi} \psi dx = - \int^{\infty}_{-\infty}\overline{\frac{d}{dx}\phi} \frac{d}{dx}\psi dx$. Now we use integration by parts again:
$- \int^{\infty}_{-\infty}\overline{\frac{d}{dx}\phi} \frac{d}{dx}\psi dx = -[\overline{\phi} \frac{d}{dx}\psi]^{\infty}_{-\infty} + \int^{\infty}_{-\infty} \overline{\phi}\frac{d^2}{dx^2} \psi dx$ which is what we wanted to show.
Now did you catch the “hidden assumption”?
Here it is: it is possible for a function $\psi$ to be square integrable but to be unbounded!
If you wish to work this out for yourself, here is a hint: imagine a rectangle with height $2^{k}$ and base of width $\frac{1}{2^{3k}}$. Let $f$ be a function whose graph is a constant function of height $2^{k}$ for $x \in [k - \frac{1}{2^{3k+1}}, k + \frac{1}{2^{3k+1}}]$ for all positive integers $k$ and zero elsewhere. Then $f^2$ has height $2^{2k}$ over all of those intervals which means that the area enclosed by each rectangle (tall, but thin rectangles) is $\frac{1}{2^k}$. Hence $\int^{\infty}_{-\infty} f^2 dx = \frac{1}{2} + \frac{1}{4} + ...\frac{1}{2^k} +.... = \frac{1}{1-\frac{1}{2}} - 1 = 1$. $f$ is certainly square integrable but is unbounded!
It is easy to make $f$ into a continuous function; merely smooth by a bump function whose graph stays in the tall, thin rectangles. Hence $f$ can be made to be as smooth as desired.
So, mathematically speaking, to make these sorts of results work, we must make the assumption that $lim_{x \rightarrow \infty} \psi(x) = 0$ and add that to the “square integrable” assumption.
## August 17, 2011
### Quantum Mechanics and Undergraduate Mathematics XIV: bras, kets and all that (Dirac notation)
Filed under: advanced mathematics, applied mathematics, linear albegra, physics, quantum mechanics, science — collegemathteaching @ 11:29 pm
Up to now, I’ve used mathematical notation for state vectors, inner products and operators. However, physicists use something called “Dirac” notation (“bras” and “kets”) which we will now discuss.
Recall: our vectors are integrable functions $\psi: R^1 \rightarrow C^1$ where $\int^{-\infty}_{\infty} \overline{\psi} \psi dx$ converges.
Our inner product is: $\langle \phi, \psi \rangle = \int^{-\infty}_{\infty} \overline{\phi} \psi dx$
Here is the Dirac notation version of this:
A “ket” can be thought of as the vector $\langle , \psi \rangle$. Of course, there is an easy vector space isomorphism (Hilbert space isomorphism really) between the vector space of state vectors and kets given by $\Theta_k \psi = \langle,\psi \rangle$. The kets are denoted by $|\psi \rangle$.
Similarly there are the “bra” vectors which are “dual” to the “kets”; these are denoted by $\langle \phi |$ and the vector space isomorphism is given by $\Theta_b \psi = \langle,\overline{\psi} |$. I chose this isomorphism because in the bra vector space, $a \langle\alpha,| = \langle \overline{a} \alpha,|$. Then there is a vector space isomorphism between the bras and the kets given by $\langle \psi | \rightarrow |\overline{\psi} \rangle$.
Now $\langle \psi | \phi \rangle$ is the inner product; that is $\langle \psi | \phi \rangle = \int^{\infty}_{-\infty} \overline{\psi}\phi dx$
By convention: if $A$ is a linear operator, $\langle \psi,|A = \langle A(\psi)|$ and $A |\psi \rangle = |A(\psi) \rangle$ Now if $A$ is a Hermitian operator (the ones that correspond to observables are), then there is no ambiguity in writing $\langle \psi | A | \phi \rangle$.
This leads to the following: let $A$ be an operator corresponding to an observable with eigenvectors $\alpha_i$ and eigenvalues $a_i$. Let $\psi$ be a state vector.
Then $\psi = \sum_i \langle \alpha_i|\psi \rangle \alpha_i$ and if $Y$ is a random variable corresponding to the observed value of $A$, then $P(Y = a_k) = |\langle \alpha_k | \psi \rangle |^2$ and the expectation $E(A) = \langle \psi | A | \psi \rangle$.
## August 13, 2011
### Beware of Randomness…
Filed under: mathematics education, news, probability, science, statistics — collegemathteaching @ 10:18 pm
We teach about p-values in statistics. But rejecting a null hypothesis at a small p-value does not give us immunity from type I error: (via Scientific American)
The p-value puts a number on the effects of randomness. It is the probability of seeing a positive experimental outcome even if your hypothesis is wrong. A long-standing convention in many scientific fields is that any result with a p-value below 0.05 is deemed statistically significant. An arbitrary convention, it is often the wrong one. When you make a comparison of an ineffective drug to a placebo, you will typically get a statistically significant result one time out of 20. And if you make 20 such comparisons in a scientific paper, on average, you will get one significant result with a p-value less than 0.05—even when the drug does not work.
Many scientific papers make 20 or 40 or even hundreds of comparisons. In such cases, researchers who do not adjust the standard p-value threshold of 0.05 are virtually guaranteed to find statistical significance in results that are meaningless statistical flukes. A study that ran in the February issue of the American Journal
of Clinical Nutrition tested dozens of compounds and concluded that those found in blueberries lower the risk of high blood pressure, with a p-value of 0.03. But the researchers looked at so many compounds and made so many comparisons (more than 50), that it was almost a sure thing that some of the p-values in the paper would be less than 0.05 just by chance.
The same applies to a well-publicized study that a team of neuroscientists once conducted on a salmon. When they presented the fish with pictures of people expressing emotions, regions of the salmon’s brain lit up. The result was statistically significant with a p-value of less than 0.001; however, as the researchers argued, there are so many possible patterns that a statistically significant result was virtually guaranteed, so the result was totally worthless. p-value notwithstanding, there was no way that the fish could have reacted to human emotions. The salmon in the fMRI happened to be dead.
Emphasis mine.
Moral: one can run an experiment honestly and competently and analyze the results competently and honestly…and still get a false result. Damn that randomness!
Older Posts » | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 119, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7195568084716797, "perplexity": 779.1206575287338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320865.14/warc/CC-MAIN-20170626203042-20170626223042-00375.warc.gz"} |
http://mathhelpforum.com/calculus/194677-calculate-volume-tetrahedron-print.html | # Calculate the volume of the tetrahedron
• December 26th 2011, 05:09 AM
kotsos
Calculate the volume of the tetrahedron
for this problem :
Calculate the volume of the tetrahedron whose vertices are y=0, z=0, x=0 και y-x+z=1
this integral is it the corect one ?
$\int_{x=0}^{1}\int_{y=0}^{-y+1}\int_{z=0}^{z=x-y+1}dG=\int_{0}^{1}\int_{y=0}^{-y+1}(x-y+1) dydx$
• December 26th 2011, 05:15 AM
Prove It
Re: Calculate the volume of the tetrahedron
They both look fine to me...
• December 27th 2011, 12:51 AM
matheagle
Re: Calculate the volume of the tetrahedron
Are you sure of that plane?
Because when you let z equal zero it makes a odd slice in the xy plane
It's y=1+x which isn't going to give you a closed region with the x and y axis. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587384462356567, "perplexity": 1773.024514905741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983553659.85/warc/CC-MAIN-20160823201913-00038-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://cartesianproduct.wordpress.com/tag/physics/ | ## The maths and physics of walking in the sand
I love this – which I picked up from Ian Stewart’s now slightly out-of-date (e.g., pre-proof of Fermat’s Last Theorem) and out-of-print The Problems of Mathematics (but a good read and on sale very cheaply at Amazon) – because it demonstrates the harmony of physics with maths, is based on a common experience and is also quite counter-
intuitive.
Most of us are familiar with the experience – if you walk on damp sand two things happen: firstly the area around our foot becomes suddenly dry and secondly, as we lift our foot off, the footprint fills with water. What is happening here?
Well, it turns out that the sand, before we stand on it, is in a locally optimised packing state – in other words, although the grains of sand are essentially randomly distributed they are packed together in a way that minimises (locally) the space between the grains. If they weren’t then even the smallest disturbance would force them into a better packed state and release the potential energy they store in their less efficiently packed state.
This doesn’t mean, of course, that they are packed in the most efficient way possible – just as they are randomly thrown together they fall into the locally available lowest energy state (this is the physics) which is the locally available best packing (this is the maths).
But this also means that when we stand on the sand we cannot actually be compressing it – because that would actually imply a form of perpetual motion as we created an ever lower energy state/even more efficient packing out of nothing. In fact we make the sand less efficiently compressed – the energy of our foot strike allowing the grains to reach a less compressed packing – and, as a result, create more space for the water in the surrounding sand to rush into: hence the sand surrounding our foot becomes drier as the water drains out of it and into where we are standing.
Then, as we lift our foot, we take away the energy that was sustaining the less efficient packing and the grains of sand rearrange themselves into a more efficient packing (or – to look at it in the physical sense – release the energy stored when we stand on the sand). This more efficient packing mean less room for the water in the sand and so the space left by our foot fills with water expelled from the sand.
## A probably entirely naïve question about the principle of relativity
Surely I can quite easily design an experiment that shows the relativity principle is false.
If turn around on the spot the principle, as I understand it, asserts that I cannot build an experiment that proves it was me that moved as opposed to everything else that moved while I stayed still.
But the rest of the universe is very massive – possibly of infinite mass – and so to move it through $2\pi$ radians takes a hell of a lot more energy than moving me.
## Perhaps “you” will live forever after all
This is inspired by Max Tegmark‘s Our Mathematical Universe: My Quest for the Ultimate Nature of Reality: I have been thinking about this since I finished the book and I cannot find a convincing argument against the thesis (certainly the ones Tegmark uses in the book didn’t impress me – but perhaps I misunderstood them.)
So, let us conduct a thought experiment that might suggest “you” can live forever.
In this world we assume that you don’t do anything dangerous – such as commute to work. The only factors that could kill you are the normal processes of human ageing (and related factors such as cancer): your fate is completely determined by chemical processes in your body.
And we accept the “many worlds” view of quantum mechanics – in other words all the possible quantum states exist and so “the universe” is constantly multiplying as more and more of these worlds are created.
Now, if we accept that the chemical processes are, in the end, driven by what appears to us as stochastic (random) quantum effects – in other words chemicals react because atoms/electrons/molecules are in a particular range of energies governed by the quantum wave equation – then it must surely be the case that in one of the many worlds the nasty (to our health) reactions never happen because “randomly” it transpires that the would-be reactants are never in the right energy state at the right time.
To us in the everyday world our experience is that chemical reactions “just happen”, but in the end that is a statistically driven thing: there are billions of carbon atoms in the piece of wood we set fire to and their state is changing all the time so eventually they have the energy needed to “catch fire”. But what if, in just one quantum world of many trillions, the wood refuses to light?
So, too for us humans: in one world, the bad genetic mutations that cause ageing or cancer just don’t happen and so “you” (one of many trillions of “you”s) stays young for ever.
The obvious counter argument is: where are these forever-young people? The 300 year olds, the 3000 year olds? Leaving aside Biblical literalism, there is no evidence that such people have ever lived.
But that is surely just because this is so very, very rare that you could not possible expect to meet such a person. After all, around 70 – 100 billion humans have ever been born and each of them has around 37 trillion cells, which live for an average of a few days (probably) – so in a year perhaps 37 billion trillion cell division events – each of which could spawn a new quantum universe – take place. That means the chances of you being in the same universe as one of the immortals is pretty slim.
Yet, on the other hand, we all know someone who seems to never age as quickly as we do…
…I’d be really interested in hearing arguments against the hypothesis from within the many worlds view of quantum physics.
## Deconstructing Max Tegmark’s argument against a simulated universe
In the end Max Tegmark‘s Our Mathematical Universe: My Quest for the Ultimate Nature of Reality has proved to be something of a disappointment – somewhere along the way the science got lost and was replaced by a lot of metaphysical speculation.
I haven’t quite finished it yet – so I’ll do a fuller review when I do (and there were good bits too), but what I want to take issue with here is his case (or, perhaps more accurately, the cases he quotes with approval) against the idea that we live in some sort of great computer simulation.
I am not arguing in favour of such a view of our universe – but it certainly has its strengths – if you think computer power will keep on growing then it is pretty difficult, if we apply the basic “Copernican” principal that we are nothing special, to avoid the conclusion that we are in such a universe.
Tegmark uses two major arguments against this idea that I want to take issue with.
The first, I think, is not an argument against it at all – namely that we are more likely to be a simulation within a simulation if we accept this basic thought. Well, maybe – but this is completely untestable/falsifiable and so beyond science. (In contrast the basic idea that we are in a simulated universe is testable – for instance if we find that our universe has a “precision limit” that would be a strong pointer.)
The second is the degree of complexity of simulating a many worlds quantum multiverse. But, of course, the simulator does not need to actually “run” all those other quantum worlds at all – because it’s not a physical reality, merely a simulation. All it has to do is leave the signs (eg the traces of superposition we can detect) in our environment that such alternate universes exist, but once “decoherence” takes place those alternate universes go straight to your garbage collection routines. So too for more anything much beyond the solar system – all the simulation has to do is provide us with the signals – it doesn’t have to actually, for instance, “run” a supernova explosion in a distant galaxy.
## Why we’ll never meet aliens
Well, the answer is pretty plain: Einstein‘s theory of general relativity – which even in the last month has added to it’s already impressive list of predictive successes – tells us that to travel at the speed of light a massive body would require an infinite amount of propulsive energy. In other words, things are too far away and travel too slow for us to ever hope to meet aliens.
But what if – and it’s a very big if – we could communicate with them, instantaneously? GR tells us massive bodies cannot travel fast, or rather along a null time line – which is what really matters if you want to be alive when you arrive at your destination – but information has no mass as such.
Intriguingly, an article in the current edition of the New Scientist looks at ways in which quantum entanglement could be used to pass information – instantaneously – across any distance at all. Quantum entanglement is one of the stranger things we can see and measure today – Einstein dismissed it as “spooky interaction at a distance” – and essentially means that we can take two similar paired particles and by measuring the state of one can instantaneously see the other part of the pair fall into a particular state (e.g., if the paired particles are electrons and we measure one’s quantum spin, the other instantly is seen to have the other spin – no matter how far away it is at the time).
Entanglement does not allow us to transmit information though, because of what the cosmologist Antony Valentini calls, in an analogy with thermodynamic “heat death”, the “quantum death” of the universe – in essence, he says that in the instants following the Big Bang physical particles dropped into a state in which – say – all electron spins were completely evenly distributed, meaning that we cannot find electrons with which to send information – just random noise.
But – he also suggests – inflation – the super-rapid expansion of the very early universe may also have left us with a very small proportion of particles that escaped “quantum death” – just as inflation meant that the universe is not completely smooth because it pushed things apart at such a rate that random quantum fluctuations were left as a permanent imprint.
If we could find such particles we could use them to send messages across the universe at infinite speed.
Perhaps we are already surrounded by such “messages”: those who theorise about intelligent life elsewhere in the universe are puzzled that we have not yet detected any signs of it, despite now knowing that planets are extremely common. That might suggest either intelligent life is very rare, or very short-lived or that – by looking at the electromagnetic spectrum – we are simply barking up the wrong tree.
Before we get too excited I have to add a few caveats:
• While Valentini is a serious and credible scientist and has published papers which show, he says, the predictive power of his theory (NB he’s not the one speculating about alien communication – that’s just me) – such as the observed characteristics of the cosmic microwave background (an “echo” of the big bang) – his views are far from the scientific consensus.
• To test the theories we would have to either be incredibly lucky or detect the decay products of a particle – the gravitino – we have little evidence for beyond a pleasing theoretical symmetry between what we know about “standard” particle physics and theories of quantum gravity.
• Even if we did detect and capture such particles they alone would not allow us to escape the confines of general relativity – as they are massive and so while they could allow two parties to theoretically communicate instantly, the parties themselves would still be confined by GR’s spacetime – communicating with aliens would require us and them in someway to use such particles that were already out there, and perhaps have been whizzing about since the big bang itself.
But we can dream!
Update; You may want to read Andy Lutomirski’s comment which, I think it’s fair to say, is a one paragraph statement of the consensus physics. I am not qualified to say he’s wrong and I’m not trying to – merely looking at an interesting theory. And I have tracked down Anthony Valentini’s 2001 paper on this too.
## In what sense do photons exist?
This is a genuine question on my part – and I would be grateful for any answers!
The inspiration for asking the question comes from Genius: Richard Feynman and Modern Physics – my current “listen while running” book – along with Feynman’s own description of radiation in QED – The Strange Theory of Light and Matter.
Feynman argues that there is no radiation without absorption: in other words a tree that falls in an empty forest does indeed make no sound (if we imagine the sound is transmitted by photons, that is).
This sounds like a gross violation of all common sense – how could a photon know when it leaves a radiating body that it is to be absorbed?
But then, general relativity comes to our rescue – because in the photon’s inertial frame the journey from radiator to absorber is instantaneous.
But how can a body that exists for no time at all, exist at all?
Then again my assumption in asking this question is that time is in some sense privileged as a dimension of spacetime. This is a pretty deep controversy in theoretical physics these days and I am not qualified to shed much light on it – but let us assume that a body can exist with a zero dimension in time but real dimensions in space, can we then have bodies which have zero dimensions in space but a real dimension in time? If so, what are they?
## The Summa Metaphysica and all that
Stumbled across a fascinating article in this weekend’s Guardian magazine about David Birnbaum and his “Summa Metaphysica” (this – Q4P2 – Principia Metaphysica (The Birnbaum Summa Metaphysics) – is listed in Amazon and appears to be the work, or a version of it).
Normally I more or less ignore the magazine supplements that come with the weekend’s papers but I was attracted this one by a graphic that mentioned the “Bohr radius” – as I had just been listening to Genius: Richard Feynman and Modern Physics while running in the gym.
To be honest, the article doesn’t do a lot to illuminate what Birnbaum a New York jeweller who has spent a lot of his own money to promote his ideas – is about. But then, maybe that is because he’s not about very much at all: metaphysics is, after all, literally beyond science and testing.
It does tell us a lot about how Birnbaum has upset quite a few genuine scientists about how he has promoted his claims to have found an ultimate theory to explain existence. He used an imprint – Harvard Matrix – on his self-published books that seems to have left a few people at the university feeling their good name has been misappropriated, while others feel that they were used to give a veneer of credibility to a conference Birnbaum funded at Bard college this past May (though I can only admire – seriously – the Oxford chemist, Peter Atkins, who says he attended because it gave him a chance to get an expenses-paid trip to New York and because he likes a good argument).
Such as they are, Birnbaum’s ideas seem to centre on the concept of “potential” (energy? it’s not clear). It’s not mentioned in the article but, of course, the concept of a multitude of inflationary universes is also, in a sense, related to potential energy (or at least the energy that is freed when the ‘inflation’ changes state). But those ideas are, at least to some degree, testable and potentially falsifiable. By all accounts Birnbaum’s are not.
In any case I doubt there is much relation between them and inflationary cosmology either, but as cosmology/particle physics becomes more complex, then the scope for the naive (a category into which, being kind, I will place Birnbaum) as well as the exploitative – wait for the next batch of psuedo-science in the Daily Mail – grows larger.
Many scientists in these advanced fields are unhappy about the core of the “standard model” – in that it posits a very large number of supposedly “fundamental” particles. There has been disappointment as well as joy over how well the model has stood up to the LHCs explorations – a triumph of the scientific method as great as Le Verrier’s prediction of Neptune, but also a confirmation of a model that looks less than fundamental after all. If, and until, we solve some of those seeming contradictions then we are just going to have to live with the interstices of physics being filled with ideas from strange people – especially rich ones who want to be taken seriously.
(Incidentally, the Bohr radius graphic was actually a reference to an idea promoted by Jim Carter who denies the truth of quantum mechanics.)
## My one problem with Feynman’s QED
Well, as predicted, I finished off Richard Feynman‘s QED – The Strange Theory of Light and Matter in short order this morning – and it is a truly marvellous book. I just wish I had read it as an undergraduate.
My one problem with it was its explanation of “stimulated emission“. Now, as an undergraduate, I remember I understood this quite well – it came up in a discussion of MASERs (intense microwave sources in deep space) as opposed to the more familiar LASERs ifI remember correctly. But that’s a long time ago.
Perhaps I should look it all up again.
## What a brilliant book
Just over three hours ago I started reading Richard Feynman‘s QED – The Strange Theory of Light and Matter (Penguin Press Science): and now, 110 pages later, I am stunned at its brilliance.
If you are any sort of physics undergraduate you must read it. Similarly, if I was teaching ‘A’ level physics I would be handing it out to my students.
There is little maths in it and not much physics either – but as a way of explaining a high concept of physics – without cutting corners with bad analogies – it is just fantastic.
I’ve taken a break now because reading that much of a science book at more or less one sitting is not conducive to grasping all its points – but I am sure it will be finished either this evening or tomorrow morning.
## Schrödinger’s cat: for real
Quantum Mechanics is, along with General Relativity, the foundation stone of modern physics and few explanations of its importance are more famous than the “Schrödinger’s cat” thought experiment.
This seeks to explain the way “uncertainty” operates at the heart of the theory. Imagine a cat in a box with a poison gas capsule. The capsule is set off if a radioactive decay takes place. But radioactivity is governed by quantum mechanics – we can posit statistical theories about how likely the radioactive decay is to take place but we cannot be certain – unless we observe. Therefore the best way we can state of the physical state of the cat – so long as it remains unobserved – is to say it is both alive and dead.
Now physicists still argue about what happens next – the act of observing the cat. In the classical, or Copenhagen, view of quantum mechanics the “wave equation collapses” and observing forces the cat into a dead or alive state. In increasingly influential “many worlds” interpretations anything that can happen does and an infinite number of yous sees an infinite number of dead or alive cats. But that particular mind bender is not what we are about here. (NB we should also note that in the real world the cat is “observed” almost instantaneously by the molecules in the box – this is a mind experiment not a real one, except… well read on for that).
The idea of the cat being alive and dead at once is what is known as “quantum superposition” – in other words both states exist at once, with the prevalence of one state over another being determined by statistics and nothing else.
Quantum superposition is very real and detectible. You may have heard of the famous interferometer experiments where a single particle is sent through some sort of diffraction grating and yet the pattern detected is one of interference – as though the particle interfered with itself – in fact this indicates that superposed states exist.
In fact the quantum theories suggest that superposition should apply not just to single particles but to everything and every collection of things in the universe. In other words cats could and should be alive and dead at the same time. If we can find a sufficiently large object where superimposition does not work then we would actually have to rethink the quantum theories and equations which have stood us in such good stead (for instance making the computer you are reading this on possible).
And Stefan Nimmrichter of the Vienna Centre for Quantum Science Technology and Klaus Hornberger of the University of Duisberg-Essen have proposed we use this measurement – how far up the scale of superposition we have got as a way of determining just how successful quantum mechanics’s laws are (you can read their paper here).
They propose a logarithmic scale (see graph) based on the size of the object showing superposition – so the advance from the early 60s score of about 5 to today’s of about 12 might mean we can be one million times more confident in quantum theory’s universal application. (A SQUID is a very sensitive magnetometer which relies on superconductivity.)
And they say that having a 4kg ‘house cat’ be superposed in two states 10cm apart (which might be taken for a good example of lying dead versus prowling around) would require a score of about 57 – in other words about $10^{45}$ more experimental power than currently available.
That probably means no one reading this is ever likely to see a successful demonstration that Schrödinger’s cat is rather more than a thought experiment, but it does give us a target to aim at! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4691857099533081, "perplexity": 701.6152865857805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823220.45/warc/CC-MAIN-20171019031425-20171019051425-00599.warc.gz"} |
http://math.stackexchange.com/questions/16284/algebra-problem | # Algebra Problem
The expression $x^2-4x+5$ is a factor of $ax^3+bx^2+25$. Express the sum a+b as an integer.
Please give an explanation of how the answer
-
$$\frac{a x^3+b x^2+25}{x^2-4x+5}=4a+b+a x+\frac{(11a+4b)x-5(4a+b)+25}{x^2-4x+5}$$ – non-expert Jan 4 '11 at 3:12
Please don't use the imperative mode; you aren't assigning homework to the group. If this is homework (seems likely, given the phrasing), then please use the [homework] tag. This is not linear algebra, so I've changed the tag. Finally, you should say what you have tried and why or how you are stuck. – Arturo Magidin Jan 4 '11 at 3:13
Arturo, the sentence " Please don't use the imperative mode" is a nice example of logical antinomy :-) (But I agree with you, of course) – Georges Elencwajg Jan 4 '11 at 8:41
$\rm\ 0 = a\ x^3 + b\ x^2 + 25 - (x^2 - 4\ x + 5)\ (a\ x + 5) = (4\ a + b - 5)\ x^2 + (20 - 5\ a)\ x\ \Rightarrow \ a,\: b = \ldots$
Note: $\rm\ a\ x + 5\$ comes from comparing leading and constant coefficients.
If $x^2-4x+5$ is a factor of $ax^3+bx^2+25$, then $ax^3+bx^2+25=(x^2-4x+5)(\text{something})$. Since $ax^3+\cdots$ is a polynomial of degree 3 and $x^2-\cdots$ is a polynomial of degree 2, the "something" must be a polynomial of degree 1: $$ax^3+bx^2+0x+25=(x^2-4x+5)(\underline{\;\;\;\;\;\;}x+\underline{\;\;\;\;\;\;})$$ Try to fill in the two blanks based the terms on the left side that don't have $a$ and $b$ in them (for example, how will $+25$ end up in the product?), then finish the multiplication to find the values of $a$ and $b$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7244922518730164, "perplexity": 439.5520761134773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115862141.23/warc/CC-MAIN-20150124161102-00090-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://noobstarter.com/brain-food-richard-cornish-nootropics-tianeptine.html | Past noon, I began to feel better, but since I would be driving to errands around 4 PM, I decided to not risk it and take an hour-long nap, which went well, as did the driving. The evening was normal enough that I forgot I had stayed up the previous night, and indeed, I didn’t much feel like going to bed until past midnight. I then slept well, the Zeo giving me a 108 ZQ (not an all-time record, but still unusual).
The first night I was eating some coconut oil, I did my n-backing past 11 PM; normally that damages my scores, but instead I got 66/66/75/88/77% (▁▁▂▇▃) on D4B and did not feel mentally exhausted by the end. The next day, I performed well on the Cambridge mental rotations test. An anecdote, of course, and it may be due to the vitamin D I simultaneously started. Or another day, I was slumped under apathy after a promising start to the day; a dose of fish & coconut oil, and 1 last vitamin D, and I was back to feeling chipper and optimist. Unfortunately I haven’t been testing out coconut oil & vitamin D separately, so who knows which is to thank. But still interesting.
Ginkgo Biloba, Bacopa Monnieri, and Lion’s Mane: This particular unique blend boosts mental focus, memory, learning, and cognitive performance while reducing anxiety and depression, and I’ve found that it can significantly boost mental alertness for around six hours at a time without any jitteriness or irritability – or any significant amounts of caffeine. It’s important to allow for a grace period of about 12 weeks before you feel the stack’s full potential, so don’t expect immediate results with this combination.
Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn’t entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage.
Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I’m doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: \frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit. And many people swear by them. Neal Thakkar, for example, is an entrepreneur from Marlboro, New Jersey, who claims nootropics improved his life so profoundly that he can’t imagine living without them. His first breakthrough came about five years ago, when he tried a piracetam/choline combination, or “stack,” and was amazed by his increased verbal fluency. (Piracetam is a cognitive-enhancement drug permitted for sale in the U. S. as a dietary supplement; choline is a natural substance.) NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of$1210/100μg/$470/500μg/$750/1000μg/$1000/1000μg/$1030/1000μg/$235/20μg. (Levi-Montalcini was presumably able to divert some of her lab’s production.) A year’s supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one’s eyes?), it could cost anywhere up to$10,000.
As you are no doubt well aware, coffee and cigarettes have long been a popular combination. Ah, nostalgia. Just think back to the 1950’s and the man in the suit perfectly pairing his black brew with a cigarette hanging out the corner of his mouth as he enjoyed the Sunday paper or rocked on a lazy afternoon out on the family patio. Heck, there’s even a movie called “Coffee and Cigarettes” and a song called “Cigarettes & Coffee” (in the former, you can see Bill Murray, Tom Waits, Steve Buscemi and Cate Blanchett partaking in their fair share of smoking and sipping).
Difficulty remembering. As discussed previously, challenges with episodic memory may start as early as middle age, even if your brain is healthy. As you get older, problems with memory tend to become more and more frequent. Once you reach your mid 30s, you will most likely begin to notice an increased frequency of forgetfulness. At this point, it may become common for you to lose your belongings and misplace your possessions, like your car keys or smartphones. This can truly be frustrating at best. At worst, it can be downright scary. You might also start misplacing names and having more “tip of the tongue” moments.
I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn’t, or they have some other difference in behavior, the procedure can be expanded to something like and then half an hour later, take a shower to remove all visible traces of the gel. Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I’m not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road.
Still, the scientific backing and ingredient sourcing of nootropics on the market varies widely, and even those based in some research won't necessarily immediately, always or ever translate to better grades or an ability to finally crank out that novel. Nor are supplements of any kind risk-free, says Jocelyn Kerl, a pharmacist in Madison, Wisconsin.
An unusual intervention is infrared/near-infrared light of particular wavelengths (LLLT), theorized to assist mitochondrial respiration and yielding a variety of therapeutic benefits. Some have suggested it may have cognitive benefits. LLLT sounds strange but it’s simple, easy, cheap, and just plausible enough it might work. I tried out LLLT treatment on a sporadic basis 2013-2014, and statistically, usage correlated strongly & statistically-significantly with increases in my daily self-ratings, and not with any sleep disturbances. Excited by that result, I did a randomized self-experiment 2014-2015 with the same procedure, only to find that the causal effect was weak or non-existent. I have stopped using LLLT as likely not worth the inconvenience.
Whether you want to optimise your nutrition during exam season or simply want to stay sharp in your next work meeting, paying attention to your diet can really pay off. Although there is no single 'brain food' that can protect against age-related disorders such as Alzheimers' or dementia, and there are many other medical conditions that can affect the brain, paying attention to what you eat gives you the best chance of getting all the nutrients you need for cognitive health.
A young man I'll call Alex recently graduated from Harvard. As a history major, Alex wrote about a dozen papers a term. He also ran a student organisation, for which he often worked more than 40 hours a week; when he wasn't working, he had classes. Weeknights were devoted to all the schoolwork he couldn't finish during the day, and weekend nights were spent drinking with friends and going to parties. "Trite as it sounds," he told me, it seemed important to "maybe appreciate my own youth". Since, in essence, this life was impossible, Alex began taking Adderall to make it possible.
So with these 8 results in hand, what do I think? Roughly, I was right 5 of the days and wrong 3 of them. If not for the sleep effect on #4, which is - in a way - cheating (one hopes to detect modafinil due to good effects), the ratio would be 5:4 which is awfully close to a coin-flip. Indeed, a scoring rule ranks my performance at almost identical to a coin flip: -5.49 vs -5.5420. (The bright side is that I didn’t do worse than a coin flip: I was at least calibrated.)
Regardless, while in the absence of piracetam, I did notice some stimulant effects (somewhat negative - more aggressive than usual while driving) and similar effects to piracetam, I did not notice any mental performance beyond piracetam when using them both. The most I can say is that on some nights, I seemed to be less easily tired when writing or editing or n-backing (and I felt less tired than ICON 2011 than ICON 2010), but those were also often nights I was also trying out all the other things I had gotten in that order from Smart Powders, and I am still dis-entangling what was responsible. (Probably the l-theanine or sulbutiamine.)
Surgeries – Here's another unpleasant surprise. You're probably thinking we're referring to a brain surgery, but that's not the only surgery that can influence the blood flow to your brain the bad way. For example, a heart surgery can cause hypoperfusion. How? Fat globules, which are released during these kinds of procedures, can find their way to your brain and disrupt the optimal blood flow.
Our top recommendation for cognitive energy enhancement is Brainol. This product is formulated from all natural ingredients. Brainol is a product that works internally. This herbal blend contains 19 key ingredients such as Huperzine A, L-Tyrosine, L-Theanine, St. John’s Wort, Phosphatidylserine, Bacopa Monnieri and Guarana, to name but a few. There are no unwanted side effects from these all natural ingredients.
Adaptogens are also known to participate in regulating homeostasis through helping to beneficially regulate the mechanisms of action associated with the HPA-axis (think back to the importance of proper HPA-axis function which you learned about in my last article on breathwork), including cortisol regulation and nitric oxide regulation. Through these mechanisms, they can protect against chronic inflammation, atherosclerosis, neurodegenerative cognitive impairment, metabolic disorders, cancer and other aging-related diseases. There are plenty of adaptogens with potent benefits, but the ones you learn about in this article are an excellent start to begin building or expanding your stress-adaptation toolbox.
The greatly increased variance, but only somewhat increased mean, is consistent with nicotine operating on me with an inverted U-curve for dosage/performance (or the Yerkes-Dodson law): on good days, 1mg nicotine is too much and degrades performance (perhaps I am overstimulated and find it hard to focus on something as boring as n-back) while on bad days, nicotine is just right and improves n-back performance.
The evidence? Found helpful in reducing bodily twitching in myoclonus epilepsy, a rare disorder, but otherwise little studied. Mixed evidence from a study published in 1991 suggests it may improve memory in subjects with cognitive impairment. A meta-analysis published in 2010 that reviewed studies of piracetam and other racetam drugs found that piracetam was somewhat helpful in improving cognition in people who had suffered a stroke or brain injury; the drugs’ effectiveness in treating depression and reducing anxiety was more significant.
When Giurgea coined the word nootropic (combining the Greek words for mind and bending) in the 1970s, he was focused on a drug he had synthesized called piracetam. Although it is approved in many countries, it isn’t categorized as a prescription drug in the United States. That means it can be purchased online, along with a number of newer formulations in the same drug family (including aniracetam, phenylpiracetam, and oxiracetam). Some studies have shown beneficial effects, including one in the 1990s that indicated possible improvement in the hippocampal membranes in Alzheimer’s patients. But long-term studies haven’t yet borne out the hype.
“Over the years, I have learned so much from the work of Dr. Mosconi, whose accomplished credentials spanning both neuroscience and nutrition are wholly unique. This book represents the first time her studies on the interaction between food and long-term cognitive function reach a general audience. Dr. Mosconi always makes the point that we would eat differently and treat our brains better if only we could see what we are doing to them. From the lab to the kitchen, this is extremely valuable and urgent advice, complete with recommendations that any one of us can take.”
This is not 100% clear from the data and just blindly using a plausible amount carries the risk of the negative effects, so I intend to run another large experiment. I will reuse the NOW Foods Magnesium Citrate Powder, but this time, I will use longer blocks (to make cumulative overdosing more evident) and try to avoid any doses >150mg of elemental magnesium.
Lost confidence. If you can’t find your keys, much less get through your workday in a timely fashion without a slew of mistakes, you are going to lose confidence in both your brain and yourself. When you cannot remember where you put things and it takes an absurd amount of effort just to do a simple task, you might question your very sanity. As your confidence continues to nose-dive, you just end up making more and more mistakes. It turns into a vicious cycle. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29709744453430176, "perplexity": 2767.813324149161}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742978.60/warc/CC-MAIN-20181116045735-20181116071735-00130.warc.gz"} |
http://swmath.org/?term=fractional%20derivatives | • # FODE
• Referenced in 217 articles [sw08377]
• paper, firstly the time fractional, the sense of Riemann-Liouville derivative, Fokker-Planck equation ... time fractional ordinary differential equation (FODE) in the sense of Caputo derivative by discretizing ... properties of Riemann-Liouville derivative and Caputo derivative. Then combining the predictor-corrector approach with ... Planck equation, some numerical results for time fractional Fokker-Planck equation with several different fractional...
• # SubIval
• Referenced in 7 articles [sw22654]
• numerical method for computations of the fractional derivative in IVPs (initial value problems ... backward differentiation formula) for a first order derivative. The formula resulting from SubIval is: t0Dαtx ... Liouville and Caputo definitions of the fractional derivative...
• # differint
• Referenced in 1 article [sw31175]
• methods for the numerical computation of fractional derivatives and integrals have been defined. However, these ... numerical algorithms for the computation of fractional derivatives and integrals. This package is coded ... Letnikov, and Riemann-Liouville algorithms from the fractional calculus are included in this package...
• Referenced in 1 article [sw17727]
• with the Riemann-Liouville and Caputo fractional derivatives. As an additional information about the anomalous ... identification of two required parameters of the fractional diffusion equations by approximately known initial data ... fractional diffusivity, the order of fractional differentiation and the Laplace variable. Estimations of the upper ... error bound for this parameter are derived. A technique of optimal Laplace variable determination based...
• # SymbMath
• Referenced in 1 article [sw00936]
• graphic computation, e.g. any order of derivative, fractional calculus, solve equation, plot data and user ... piecewise, recursive, multi-value functions and procedures, derivatives, integrals and rules...
• # FOMNE
• Referenced in 2 articles [sw22544]
• analyzed. The fractional order memristor no equilibrium system is then derived from the integer order ... mode control algorithm is derived to globally synchronize the identical fractional order memristor systems...
• # Algorithm 885
• Referenced in 8 articles [sw09118]
• distribution. The first is a new algorithm derived from Algorithm 304’s calculation ... normal distribution via a series or continued fraction approximation, and it is good...
• # COULCC
• Referenced in 12 articles [sw11843]
• COULCC: A continued-fraction algorithm for Coulomb functions of complex order with complex arguments ... varying Coulomb wave functions, and their radial derivatives, for complex η (Sommerfeld parameter), complex energies...
• # FCC
• Referenced in 1 article [sw24629]
• methods. To accomplish this, a fractional differentiation matrix is derived at the Chebyshev Gauss-Lobatto ... order FDEs and a system of linear fractional-order delay-differential equations... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202792048454285, "perplexity": 2697.933192475745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00204.warc.gz"} |
https://www.arxiv-vanity.com/papers/hep-ph/0505200/ | arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.
# Neutrino Masses and Mixings in a Minimal SO(10) Model
K.S. Babu Department of Physics, Oklahoma Center for High Energy Physics, Oklahoma State University, Stillwater, OK 74078, USA C. Macesanu Department of Physics, Oklahoma Center for High Energy Physics, Oklahoma State University, Stillwater, OK 74078, USA Department of Physics, Syracuse University, Syracuse, NY 13244-1130, USA
###### Abstract
We consider a minimal formulation of Grand Unified Theory wherein all the fermion masses arise from Yukawa couplings involving one and one 10 of Higgs multiplets. It has recently been recognized that such theories can explain, via the type–II seesaw mechanism, the large mixing as a consequence of unification at the GUT scale. In this picture, however, the CKM phase lies preferentially in the second quadrant, in contradiction with experimental measurements. We revisit this minimal model and show that the conventional type–I seesaw mechanism generates phenomenologically viable neutrino masses and mixings, while being consistent with CKM CP violation. We also present improved fits in the type–II seesaw scenario and suggest fully consistent fits in a mixed scenario.
OSU-HEP-05-7
SU-4252-806
## I Introduction
Grand Unified Theories (GUT) provide a natural framework to understand the properties of fundamental particles such as their charges and masses. GUT models based on gauge symmetry have a number of particularly appealing features. All the fermions in a family fit in a single 16–dimensional spinor multiplet of . In order to complete this multiplet, a right–handed neutrino field is required, which would pave the way for the seesaw mechanism which explains the smallness of left–handed neutrino masses. contains and the left–right symmetric Pati–Salam symmetry group as subgroups, both with very interesting properties from a phenomenological perspective. With low energy supersymmetry, and models also lead remarkably to the unification of the three Standard Model gauge couplings at a scale GeV.
In grand unified theories, the gauge sector and the fermionic matter sector are generally quite simple. However, the same is not true of the Higgs sector. Since the larger symmetry needs to be broken down to the Standard Model, generally one needs to introduce a large number of Higgs multiplets, with different symmetry properties under gauge transformations. If all of these Higgs fields couple to the fermion sector, one would lose much of the predictive power of the theory in the masses and mixings of quarks and leptons, and so also one of the attractive aspects of GUTs.
Of interest then are the so–called minimal unification theories, in which only a small number of Higgs multiplets couple to the fermionic sector. One such realization is the minimal GUT babu in which only one 10 and one of Higgs fields couple to the fermions. These two Higgs fields are responsible for giving masses to all the fermions of the theory, including large Majorana masses to the right–handed neutrinos. This model is minimal in the following sense. The fermions belong to the of , and the fermion bilinears are given by . Thus 10, 120 and Higgs fields can have renormalizable Yukawa couplings. If only one of these Higgs fields is employed, there would be no family mixings, so two is the minimal set. The has certain advantages. It contains a Standard Model singlet field and so can break down to , changing the rank of the group. Its Yukawa couplings to the fermions also provide large Majorana masses to the right–handed neutrinos leading to the seesaw mechanism. It was noted in Ref. babu that due to the cross couplings between the and the 10 Higgs fields, the Standard Model doublet fields contained in the will acquire vacuum expectation values (VEVs) along with the VEVs of the Higgs doublets from the . The Yukawa coupling matrix will then contribute both to the Dirac masses of quarks and leptons, as well as to the Majorana masses of the right–handed neutrinos.
It is not difficult to realize that this minimal model is highly constrained in explaining the fermion masses and mixings. There are two complex symmetric Yukawa coupling matrices, one of which can be taken to be real and diagonal without loss of generality. These matrices have 9 real parameters and six phases. The mass matrices also depend on two ratios of VEVs, leading to 11 magnitudes and six phases in total in the quark and lepton sector, to be compared with the 13 observables (9 masses, 3 CKM mixings and one CP phase). Since the phases are constrained to be between and , this system does provide restrictions. More importantly, once a fit is found for the charged fermions, the neutrino sector is fixed in this model. It is not obvious at all that including the neutrino sector the model can be phenomenologically viable.
Early analyses babu ; lavoura found that just fitting the lepton-quark sector is highly constraining. Also, this fitting has been found to be highly nontrivial (in terms of complexity); therefore these analyses were done in the limit when the phases involved are either zero or . In such a framework, one finds that the parameters of the models are more or less determined by the fit to the lepton-quark sector (the quark masses themselves are not known with great precision, so there is still some room for small variations of the parameters). As a consequence, one could more or less predict the neutrino masses and mixings; however, since neutrino data was rather scarce at the time, one could not impose meaningful constraints on the minimal model from these predictions.
In view of the new information on the neutrino sector gathered in the past few years solar ; atm ; chooz , one should ask if this model is still consistent with experimental data. Interest in the study of this model has also been reawakened by the observation that unification at GUT scale implies large (even close to maximal) mixing in the 2-3 sector of the neutrino mass matrix bajc , provided that the dominant contribution to the neutrino mass is from type–II seesaw. There has been a number of recent papers studying the minimal using varying approaches: some analytical, concentrating on the 23 neutrino sector bajc ; Bajc2 , some numerical, either in the approximation that the phases involved in reconstructing the lepton sector are zero fukuyama01 ; fukuyama02 ; fukuyama ; moha1 , or taking these phases into account moha2 ; Dutta1 . The conclusions of these analyses seem to be that the minimal cannot account by itself for the observed neutrino sector (although it comes pretty close). However, one might restore agreement with the neutrino data if one slightly modifies the minimal ; for example, one can set the quark sector CKM phase to lie in the second quadrant, and rely on new contributions from the SUSY breaking sector in order to explain data on quark CP violation moha2 ; or one might add higher dimensional operators to the theory moha2 ; Dutta1 , or even another Higgs multiplet (a 120) which will serve as small perturbation to fermion masses Dutta2 ; Bertolini ; Bertolini:2005 .
In this paper we propose to revisit the analysis for the minimal model, with no extra fields added. The argument for this endeavor is that our approach is different in two significant ways from previous analyses. First, we use a different method than moha2 ; Dutta1 in fitting for the lepton–quark sector. Since this fit is technically rather difficult, and moreover, since the results of this fit define the parameter space in which one can search for an acceptable prediction for the neutrino sector, we think that it is important to have an alternative approach. Second, rather than relying on precomputed values of quark sector parameters at GUT scale, we use as inputs scale values, and run them up to unification scale. This allows for more flexibility and we think more reliable predictions for the parameter values at GUT scale. With these modifications in our approach, we find that we agree with some results obtained in moha2 ; Dutta1 (in particular, the fact that type–II seesaw does not work well when the CKM phase is in the first quadrant), but not with others. Most interesting, we find that it is possible to fit the neutrino sector in the minimal model, in the case when type–I seesaw contribution to neutrino mass dominates. We also present a mixed scenario which gives excellent agreement with the neutrino data.
The paper is organized as follows. In the next section we give a quick overview of the features of the minimal model relevant for our purpose. In section III we address the problem of fitting the lepton–quark sector in this framework. We also define the experimentally allowed range in which the input parameters (quark and lepton masses at scale) are allowed to vary. We start section IV with a quick overview of the phenomenological constraints on the neutrino sector. There we provide a very good fit to all the fermion masses and mixings using type–I seesaw. We follow by analyzing the predictions of the minimal model in the case when type–II seesaw is the dominant contribution to neutrino masses. We then analyze the predictions in a type–I seesaw dominance scenario, and in a scenario when both contributions (type–I and type–II) have roughly the same magnitude. We end with our conclusions in Sec. V.
## Ii The minimal SO(10) model
The model we consider in this paper is an supersymmetric model where the masses of the fermions are given by coupling with only two Higgs multiplets: a 10 and a babu . Both the and contain Higgs multiplets which are (2,2) under the subgroup. Most of these (2,2) Higgses acquire mass at the GUT scale. However, one pair of Higgs doublets and (which generally are linear combinations of the original ones) will stay light. (Details about the Higgs multiplet decomposition and breaking can be found, for example, in Bajc_sb ; Fukuyama_sb ; Aulakh_sb ; nasri1 ). Upon breaking of the symmetry of the Standard Model, the vacuum expectation value of the doublet will give mass to the up-type quarks and will generate a Dirac mass term for the neutrinos, while the vacuum expectation value of the doublet will give mass to the down-type quarks and the charged leptons.
The mass matrices for quarks and leptons wil then have the following form:
Mu = κuY10+κ′uY126 Md = κdY10+κ′dY126 MDν = κuY10−3κ′uY126 Ml = κdY10−3κ′dY126 (1)
where are the Yukawa coefficients for the coupling of the fermions to the and multiplets respectively. Note that in the above equations the parameters as well as the Yukawa matrices are in general complex, thus insuring that the fermion mass matrices will contain CP violating phases.
The multiplet also contains a and a Pati-Salam multiplets. The Higgs fields which are color singlets and / triplets (denoted by and ) may provide Majorana mass term for the right–handed and the left–handed neutrinos. One then has:
MνR = ⟨ΔR⟩Y126 MνL = ⟨ΔL⟩Y126 . (2)
If the vacuum expectation of the triplet is around GeV then the Majorana mass term for the right–handed neutrinos will give rise, through the seesaw mechanism, to left–handed neutrino masses of order eV. On the other hand, the VEV of contributes directly to the left–handed neutrino mass matrix (this contribution is called type–II seesaw), so this requires that the is either zero or at most of order eV. This requirement is satisfied naturally in such models, since generally aquires a VEV of order seesaw .
## Iii Lepton and quark masses and mixings
Our first task is to account for the observed lepton and quark masses, and for the measured values of the CKM matrix elements. By expressing the Yukawa matrices and in Eqs. (II) in favor of and , we get a linear relation between the lepton and quark mass matrices; at GUT scale:
Ml = a Mu+b Md , (3)
where and are a combinations of the parameters in Eq. (II). For simplicity let’s work in a basis where is diagonal (this can be done without loss of generality). Then, with the diagonal up–quark mass matrix. If we allow the entries in the diagonal quark mass matrices to be complex: , , then the CKM matrix can be written in its standard form as a function of three real angles and a phase:
VCKM = ⎛⎜⎝c12c13s12c13s13e−iδ−s12c23−c12s23s13eiδc12c23−s12s23s13eiδs23c13s12s23−c12c23s13eiδ−c12s23−s12c23s13eiδc23c13⎞⎟⎠ . (4)
Since their phases can be absorbed in the definitions of the parameters, we will take the coefficients to be real, too. One of the quark mass phases can be set to zero without loss of generality, we set . It should be noted that a common phase of , which we denote as , will appear in the Dirac and Majorana mass matrices of the neutrinos, and will be relevant to the study of neutrino oscillations.
The relation (3) will generally impose some constraints on the masses of the quarks and leptons. For example, if we take all the phases to be zero (or ), then on the right-hand side of the equation there are just two unknowns, the coefficients . On the other hand, the eigenvalues of the lepton mass matrix are known, which will give us 3 equations. It is not obvious, then, that this system can be solved; however, early analysis lavoura ; babu shows that solutions exist, in the range of experimentally allowed values, provided that the quark masses satisfy some constraints.
Newer studies fukuyama ; moha1 ; moha2 allow for (some) phases to be non-zero, and thus relaxes somewhat the constraints on quark masses. However, it is interesting to note that these solutions are not very different from the purely real case. That is, most of the phases involved have to be close to zero (or ), and the values of the parameters do not change by much. We shall explain this in the following.
The algebraic problem of solving for the lepton masses in the case when the elements of the matrices are complex is quite difficult. This would involve solving a system of 3 polynomial equations of degree six in unknown quantities . Most of the analysis so far is done by numerical simulations (some analytical results are obtained for the case of 2nd and 3rd families only bajc ; Bajc2 ). In this section we attempt to solve the full problem (with all the phases nonzero) in a semi-analytical manner, that is, by identifying the dominant terms in the equations and obtaining an approximate solution in the first step, which can then be made more accurate by successive iterations.
Due to the hierarchy between the eigenvalues of the lepton mass matrix one can suspect that the mass matrix itself has a hierarchical form. This assumption is supported by the observation that the off-diagonal elements of are indeed hierarchical; for example . ( is a short–hand notation for , are the elements of .) Then, the three equations for the invariants of the real matrix (the trace, the determinant and the sum of its determinants) become:
|L33|2+2|L23|2 ≃ m2τ |L22L33−L223|2 ≃ m2μm2τ Det[LL†] = m2em2μm2τ . (5)
We find it is convenient to work in terms of the dimensionless parameters , and the ratios . Explicitly from the equations above in terms of these parameters, we obtain:
~L33=eiα1 = ~aV233+~b e−iz3 ~Δ23=~mμeiα2 = (~b rs+~a rceiz2)~L33+~a ~b V232ei(bb−bs) ~Δ=~me~mμeiα3 = ~b rd ~Δ23−~a2 ~b ei(at−bd)(rs(V31V33)2 (6) +2rcV31V32V21V22ei(z2−z3)+rcV231V22V33eiz2)
Here we have kept only the leading terms, using . Moreover, note that only phase differences like can be determinated from Eq. (3); therefore, by multiplying with overall phases, we have written Eqs. (III) in terms of these differences (with the notation ).
The key to solving this system is to recognize that there is some tuning involved. Analyzing the first two equations leads to the conclusion that . Then the phase in the first equation should be close to so that the two terms almost cancel each other. Similar cancellations happen in the second and the third equations, which require respectively that and 111Note that taking these phase differences to or zero results in exactly the mass signs which the analysis in fukuyama found to work for the real masses case.. Also, in the third equation, neglecting the small electron mass on the left hand side results in:
~a2≃rd ~mμrs|V31V33|2 . (7)
For values of the parameters in the experimentally feasible region, this is consistent with the above estimate .
Analytically solving Eqs. (III) with the approximations discussed will provide solutions for the phases and parameters accurate to the 10% level. Using these first order results, one can compute and put the neglected terms back in Eqs. (III,III), which can be solved again, thus defining an iterative procedure which can be implemented numerically, and brings us arbitrarily close to the exact solution. We find that 5 to 10 iterations are usually sufficient to recover the and masses with better than 0.1% accuracy ( can be brought to a fixed value by multiplying with an overall coefficient).
We end this section with some comments on the range of input parameters (masses and phases) which allow for a solution to Eq. (3). As we discussed above, the phases are either close to or to zero. This is required by the necessity to almost cancel two large terms in the right-hand side of Eqs. (III). One can see that the larger the absolute magnitude of these terms (for example and in the first equation), the more stringent are the constraints on the phases. The opposite is also true; the smaller the and parameters, the more the phases can deviate from , and generally the easier it is to solve the system. This means that lower values of , are preferred; from Eq. (7), this implies a preference for low values of the ratio 222This also means higher values for are preferred. Since , this implies a preference for values of the CKM phase close to (as noted in moha2 ). (there is not much scope to vary ). It turns out that low values of and large values of can also help, since they lower the absolute magnitude of the larger term on the right-hand side of the equation for in (III). Previous analysis found indeed that fitting for the lepton masses require a low value for Dutta1 .
### iii.1 Low scale values and RGE running
As was discussed in the above section, the relation (3) implies some constraints on the quark masses (the lepton masses being taken as input). That is, not all values of quark masses consistent with the experimental results are also consistent with the model we use. Our purpose first is to identify these points in the parameter space defined by the experimentally alowed values for quark masses,
Let us then define what this parameter space is. Altought the relations in the previous section hold at GUT scale, one must necessarily start with the low energy values for our parameters. We choose to use as input the values of the quark masses and the CKM angles at the scale. Estimates of these quantities can be found for example in koide . However, we consider some of their numbers rather too precise (for example, their error in estimating the masses of the and quarks are only 25%, respectively 15%, while the corresponding errors in PDG pdg are much larger). Therefore, in the interest of making the parameter space as large as possible, we use the following values:
• for the second family: 70 MeV 95 MeV 333Note that the lower limit for is rather low compared with koide ; however, the value at 2 GeV scale is well within the limits cited in pdg . Lattice results also seem to favor smaller values of (2 GeV) hashimoto .; 650 MeV 850 MeV. With a running factor from to 2 GeV of around 1.7, these limits would translate to values at 2 GeV scale of: 120 MeV 160 MeV ; 1.1 GeV 1.44 GeV. Lattice estimations Gupta would indicate a value in the lower part of the range for , and a upper part for .
• for the light quarks: here generally the ratio of quark masses are more trustworthy than limits on the masses themselves; we therefore use use (as noted in the previous section, high values of this ratio are preferred), and . We note here that is a parameter which does not affect the results much.
• for the heavy quarks: 2.9 GeV 3.11 GeV, (or 4.23 GeV 4.54 GeV) and for the pole top mass 171 GeV 181 GeV (the corresponding mass is evaluated using the three loops relation, and comes out about 10 GeV smaller).
• the CKM angles at scale:
s12=0.222±0.003 , s23=0.04±0.004 , s13=0.0035±0.0015.
For the gauge coupling constants we take the following values at scale: . With these values at low scale one can get unification of coupling constants at the scale . The exact value of , as well as the values of the fermions Yukawas at the unification scale, will depend also on the supersymmetry breaking scale () and , the ratio between the up-type and down-type SUSY Higgs VEVs. We generally consider values of between 200 GeV and 1 TeV, and between 5 and 60.
Having chosen specific values of the parameters described above, we then run the fermion Yukawa coupling and the quark sector mixing angles, first from to , using two-loop Standard Model renormalization group equations; then we run from SUSY scale to the GUT scale using two loop444More precisely, we use the two-loop RGEs for the running of the gauge coupling constants and the third family fermions ( and ). To evaluate the light fermion masses, we use the one-loop equations for the ratios and . This approximation is justified, since the leading two-loop effect on the fermion masses comes from the change in the values of gauge coupling constants at two-loop; however, the contributions due to the gauge terms are family-independent and will not affect these ratios. SUSY RGEs barger . After computing the neutrino mass matrix at GUT scale, we run its elements back to scale babuleung ; Chankowski and evaluate the resulting masses and mixing angles.
## Iv Neutrino masses and mixings
In the present framework, there are two contributions to neutrino masses. First one has the canonical seesaw term:
(Mν)seesawI=MDνM−1RMDν (8)
with and given by (II). However, the existence in this model of the (,3,1) Higgs multiplet implies the possibility of a direct left-handed neutrino mass term when the Higgs triplet from this acquires a VEV (as it generally can be expected to happen). The neutrino mass contribution of such a term would be
(Mν)seesawII=vLY126=λMR (9)
where and is a factor depending on the specific form of the Higgs potential seesaw .
The scale of the canonical seesaw contribution Eq. (8) (which we call type–I seesaw in the following) to the left handed neutrino mass matrix is given by . The contribution of the type–II seesaw factor (Eq. (9)) is of order . One cannot know apriori how the factor compares with unity, therefore one cannot say which type of seesaw dominates (or if they are of the same order of magnitude). Therefore, in the following each case will be analyzed separately.
However, let us first review the current experimental data on the neutrino mixing angles and mass splittings. Latest analysis Maltoni sets the following bounds:
• from oscillations:
1.4×10−3 eV2≤Δm223≤3.3×10−3 eV2 ; 0.34≤sin2θ23≤0.66 ;
with the best fit for and (from atmospheric and K2K data).
• from oscillations:
7.3×10−5 eV2≤Δm212≤9.1×10−5 eV2 ; 0.23≤sin2θ12≤0.37 ;
with the best fit for and (from solar and KamLAND data). Note also that a previously acceptable region with a somewhat higher mass splitting (the LMA II solution fogli ) is excluded now at about by the latest KamLAND data.
• finally, by using direct constraints from the CHOOZ reactor experiment as well as combined three-neutrino fitting of the atmospheric and solar oscillations, one can set the following upper limit on the mixing angle:
sin2θ13≤0.022 .
The procedure we use in searching for a fit to neutrino sector parameters is as follows. First the low scale values of the quark and lepton masses and the CKM matrix angles and phase are chosen. (Generally we take a fixed value for and , while the other parameters are chosen randomly from a predefined range; however, and can also be chosen randomly). Next we pick a value for and , and compute the quark-lepton sector quantities at GUT scale. Here we determine the relation between the lepton Yukawa couplings and quark Yukawa couplings, which amounts to determining the parameters and phases in Eq. (3). The phases combinations are chosen as input (that is, they are picked randomly), while , and the remaining two phases are obtained by the procedure of fitting the lepton eigenvalues described in Section III. Finally, we scan over the parameters which appear in the neutrino sector (if the neutrino mass matrix is either of type–I seesaw or of type–II seesaw, there is only one phase ; if both types appear, there will be two extra parameters, the relative magnitude and phase of the two contributions).
The rest of this section is devoted to a detailed analysis of the predictions of the minimal model for the neutrino sector, in type–I, type–II and mixed scenarios. (Due to its relative simplicity, we will start with the type–II case). However, let us first summarize our results. We find that in the type–II scenario, there is no good fit to the neutrino sector if the CKM phase is consistent with experimental measurements (around 60 deg). This is in agreement with previous analysis moha2 ; Dutta1 ; however, our results are a bit more encouraging, in that that for we find reasonably good fits, which improve significantly with not very large increases in the CKM phase. We can obtain marginal fits for as low as 80 deg. More interesting are the results for the type–I case; here we can find good fits to the neutrino sector for values of as low as , certainly consistent with experimental limits. As such fits have been not found before, one might consider this to be the main result of our paper. Also, we find that in the mixed case, there is possible to obtain a good neutrino sector fit in the case when the contributions coming from type–I and type–II are roughly equal in magnitude and of opposite phase.
### iv.1 Example of Type–I Seesaw Fit
We give here a representative example of a fit obtained in a type–I dominant case. This is obtained for GeV, GeV, GeV, 174 GeV, and GeV. The values of the quark and lepton masses at GUT scale (in GeV) and the CKM angles are:
mu=0.0006745mc=0.3308mt=97.335md=0.0009726ms=0.02167mb=1.1475me=0.000344mμ=0.0726mτ=1.350s12=0.2248s23=0.03278s13=0.00216δCKM=1.193 . (10)
Here the masses are defined as , where are the corresponding Yukawa couplings, and GeV is the SM Higgs vacuum expectation value555 One can write Eqs. (3), (IV.2) in terms of either the Yukawa couplings of the leptons and quarks, or their masses (that is, Yukawa couplings times running Higgs VEVs). In this paper we use the Yukawa couplings, but we multiply by the Higgs VEVs at the SUSY scale for simplicity of presentation. One can easily check then when going from one convention to the other, just the parameter rescales, while does not change.. The values of GUT scale phases (in radians) and parameters are given by:
au=0.881ac=0.32678at=3.0382bd=3.63235bs=3.23784bb=0.a=0.08136b=5.9797σ=3.244 . (11)
With these inputs, one can evaluate all mass matrices at GUT scale. In order to compute the neutrino mass matrix at scale, we use the running factors , where
r22=(MνijMν33)MZ/(MνijMν33)MGUT , r23=(Mνi3Mν33)MZ/(Mνi3Mν33)MGUT ,
with . The elements of the neutrino matrix above are evaluated in a basis where the lepton mass matrix is diagonal.
One then obtains for the neutrino parameters at low scale:
Δm223/Δm212≃24 , sin2θ12≃0.27 , sin22θ23≃0.90 , sin22θ13≃0.08 .
Note here that only the atmospheric angle is close to the experimental limit, the solar angle and the mass spliting ratio being close to the preferred values. The elements of the diagonal neutrino mass matrix are
mνi ≃ {0.0021exp(0.11i) , 0.0098exp(−3.06i) , 0.048}
in eV, with a normalization eV. The phases of the first two masses are the Majorana phases (in radians). Moreover, the Dirac phase appearing in the MNS matrix is rad, and one evaluates the effective neutrino mass for the neutrinoless double beta decay process to be
|∑U2eimνi| ≃ 0.009 eV .
### iv.2 Type–II seesaw
Much of the recent work on the neutrino sector in the minimal has concentrated on the scenario when the type–II seesaw contribution to neutrino masses is dominant. The reason for the interest in this case is that, with:
Mν ∼ MR ∼ Ml−Md .
unification at the GUT scale, , naturally leads to a small value of and hence large mixing in the 2-3 sector bajc . However, while the general argument holds, it has been difficult (or impossible) to fit both large and the hierarchy between the solar and atmospheric mass splittings at once. In this section we will try to show why this is so, and under which conditions this might be achievable.
We will use the same conventions as in section III (that is, we work in a basis where is diagonal, and the parameters and are real and positive). However, in the construction of the neutrino mass matrices there will be an extra phase besides those which were relevant for the quark-lepton mass matrices. This phase can be though as an overall phase of . One then has:
MR = y(eiσMl−Md) aMDν = −(beiσ+2)Mleiσ+3Md . (12)
Following the analysis in Sec. III one can write:
(Ml)22 ≃ |b| ms eib2 (Ml)23 ≃ a mt eia3V32V33 (Ml)33 = a mt eia3+b mb ≃ mτeiα , (13)
with close to and close to zero. Then, the neutrino mass matrix will be proportional to:
(Mν)[2,3]∼Ml−Mdde−iσ∼(msei(ϵ−α)(b−e−iσ)m23m23mτeiα−mbe−iσ) (14)
Note also that is almost real positive, and due to the fact that and , the 22 and 23 elements in the neutrino mass matrix are roughly of the same order of magnitude (in practice, one get somewhat larger than ). One then sees that if the phase is chosen such that the two terms in the 33 mass matrix element cancel each other (that is ) then there will be large mixing in the 2-3 sector, with:
tan(θν)23≃∣∣∣m23m33∣∣∣.
However, this is not the whole story. One needs also some hierarchy between the atmospheric and solar neutrino mass splittings:
Δm2solΔm2atm = r ≲ 120
(based on the experimental measurements of neutrino parameters reviewed in the previous section). In terms of the eigenvalues of the mass matrix (14), one then has:
m22m23 ≃(|m22m33−m223||m222|+|m233|+2|m23|2)2 ≲ 120 . (15)
In order for this to hold, one needs a cancellation between the and terms in the numerator of the above fraction. This in turn imposes a constraint on the phases involved:
ϕ=Arg(mτ−mbe−i(σ−α))≃0 . (16)
More detailed analysis shows that it is not possible (or very difficult) to get larger than 1 while satisfying the relation (15) between eigenvalues. However, this will create problems with the atmospheric mixing angle. The PMNS matrix is
UPMNS=U†lUν
where are the matrices which diagonalize the lepton and neutrino mass matrices, respectively. Since the lepton mass matrix has a hierarchical form, the matrix is close to unity, with , where . The atmospheric mixing angle will then be:
tanθatm ≃ ∣∣ ∣ ∣∣(m23m33)∗ν−(m23m33)∗l1+(m23m33)∗ν(m23m33)l∣∣ ∣ ∣∣
where the and lower indices make clear that we are discussing elements of the lepton and neutrino mass matrices. Note, however, that Eq. (16) implies that and have the same phase of ; then, since , the net effect of the rotation coming from the lepton sector is to reduce the 2-3 mixing angle. Practically, since , even if one has a value of close to one from the neutrino mass matrix, will become of order 0.7 after rotation in the lepton sector is taken into account.
This situation is represented graphically in Fig. 1. As discussed above, corresponds to the case most favorable for getting the right solar-atmospheric mass splitting ratio, while corresponds to the case of maximal mixing angle. In practice this means that most reasonable fits are actually obtained when the angle is close to around (otherwise generally either the angle or the mass ratio are too small) 666Contributions from the phases in the lepton mass matrix can also improve the goodness of the fit (for example, if is significantly different from zero, or different from ). However, this generally requires that the parameters have low values (as explained in section III). Hence we see that neutrino sector also prefers in the second quadrant and low .. Note however that (or any value greater than about ) would require that at GUT scale . We may infer that in order to obtain a large mixing in the 2-3 sector one needs that at GUT scale should be close to .
This in turn can be insured by requiring large and/or . For example, in Fig. 2 we present the results obtained for values GeV, GeV, GeV and (with these values, the ratio ). Also, here we set the CKM phase , and let the other quark sector parameters vary between the limits discussed in section IIIA. The left panel shows the maximum atmospheric/solar mass splitting ratio as a function of the atmosperic mixing angle . The three different lines correspond to different cuts on the solar mixing angle: (dotted), (dashed) and no cut (solid). One can observe here the correlation betewee large atmospheric mixing and small atmospheric-solar mass ratio.
Conversely, the right panel shows the maximum of the ratio as a function of the solar mixing angle . The three different lines correspond to cuts on the atmospheric mixing angle: (dotted), (dashed) and (solid line). We note here that the correlation between the solar angle and the mass ratio has the form of a step function (abrupt decrease in once goes over a certain threshold), while there seems to be a close to linear correlation between the maximal solar and atmospheric angles.
It is interesting to consider how these results change if the paramenters and/or are modified. One finds out that the neutrino sector results have a strong dependence on the parameter . For example, if one keeps the parameters used in Fig. 2 fixed but increases , one finds that the fit for the atmospheric angle - atmospheric/solar mass ratio improves to a certain amount. That can be traced to the fact that the ratio increases with . However, one also finds that the solar angle generally gets smaller. This happens because there is a correlation between the solar angle and the value of the quark mass at GUT scale; namely increases with . On the other hand, the ratio decreases with increasing .
Fig. 3 exemplifies this behaviour. The three lines correspond to the maximum value for the mass splitting ratio , at values of (dotted), (solid) and (dashed line). A cut on the solar angle is also imposed in the left panel, and a cut on the atmospheric angle in the right panel. One can see that at larger values for one might potentially get better fits for atmospheric angle and the atmospheric/solar mass ratio; however, the constraint on the solar angle becomes more restrictive.
Smaller variations of the neutrino sector results will follow modification of the parameters and . However, these variations follow the same pattern as above: that is, an improvement in the fit for the atmospheric angle due to the increase of the ratio (which can be due to an increase in , or a decrease in ) coincide with a worsening of the fit for the solar angle. As a consequence, the results presented in Figs. 2, 3 can be improved only marginally. Scanning over a range of parameter space and 777In practice we find that the best results are obtained for large , large and large (such that is between 0.96 and 1)., we find the best fit to the neutrino sector to be , and the atmospheric/solar mass splitting ratio . We note that although these numbers provide a somewhat marginal fit to the experimental results (the mixing angles are close to the exclusion limit, while the value for mass ratio is central) they are still allowed.
However, the results discussed above were obtained for a value of the CKM phase which is too large compared with the measured value ( from PDG pdg ). As argued in section III (and indeed noted by previous analysis) there is a strong dependence on the goodness of the fit on the value of , with larger values giving better fits. We show this dependence in Fig. 4. The parameters are the same as in Fig 2 ( GeV, GeV, GeV and ), but the three lines correspond to different values for : (dotted line), (solid) and (dashed line). One can notice a rapid deterioration in the goodness of the fit with decreased . Thus, for , the best fit to neutrino sector we find (after scanning over the SUSY parameter space) is , and .
For purposes of illustration, we give a fit obtained for a type–II dominant case for , GeV, 181 GeV, and TeV. The quark masses at low scale are GeV, GeV. Then the values of the quark and lepton masses at GUT scale are (in GeV):
mu=0.0008185mc=0.3772mt=139.876md=0.0015588ms=0.03554mb=2.3547me=0.000525mμ=0.1107mτ=2.420s12=0.225s23=0.0297s13=0.00384δCKM=1.4 . (17)
The values of GUT scale phases (in radians) and parameters are given by:
au=−0.4689ac=−1.0869at=3.0928bd=2.6063bs=2.2916bb=0.a=0.09093b=4.423σ=3.577 . (18)
The running factors for the neutrino mass matrix are .
One then obtains for the neutrino parameters at low scale:
Δm223/Δm212≃18 , sin22θ12≃0.7 , sin22θ23≃0.88 , sin22θ13≃0.094 .
The elements of the diagonal neutrino mass matrix (masses and Majorana phases) are
mνi ≃ {0.0016exp(0.27i) , 0.011exp(−2.86i) , 0.048}
in eV.
The Dirac phase appearing in the MNS matrix is rad, and one evaluates the effective neutrino mass for the neutrinoless double beta decay process to be
|∑U2eimνi| ≃ 0.01 eV .
### iv.3 Type–I seesaw
The fact that in type–II seesaw one can obtain large mixing in the 23 sector is due to a lucky coincidence: the type–II neutrino mass matrix being written as a sum of two hierarchical matrices ( and ), the most natural form for the neutrino mass matrix is also hierarchical. However, since 33 elements of both matrices and are roughly of the same magnitude, by choosing the relative phase between the two to be close to , one can get a neutrino mass matrix of the form suited to explain large mixing in the 2-3 sector.
The question arises then if such a coincidence happens for the type–I seesaw neutrino mass matrix. To see that, let’s write the Dirac neutrino mass matrix in the following form:
MDν = beiσ+2a[~MR+b−e−iσbeiσ+2Md]∼~MR+~Md
where is the scaled right-handed neutrino mass matrix and is a rescaled down-type quark diagonal mass matrix (the scaling factor in this later case is close to unity, since is roughly of order 10). Then the type–I seesaw neutrino mass matrix would be:
MνI = MDνM−1RMDν ∼ ~MR+2~Md+~Md~MR−1~Md . (19)
Now, for most values of the phase , is hierarchical, therefore so is , therefore the type–I neutrino mass matrix is the sum of three hierarchical matrices ( being diagonal). So it is not surprising that for most values of the phase is also hierarchical. What is remarkable is that there are some values of for which the type–I seesaw mass matrix has a large mixing in the 2-3 sector, and moreover, this happens for the same values of as in the case when the type–II mass matrix is non-hierarchical (that is, close to ).
In order to see this let us consider the magnitude and the phase of the 33 elements (the largest ones) in the three terms on the right-hand side of Eq. (19). If is not close to , the magnitude of is of order , with varying phase ( in Fig. 1 in the first quadrant); the magnitude of is also of order , and the phase . For the last term, we make use of the fact that being hierarchical, ; then , with a phase close to . We see then that for most values of is of order , while the off-diagonal elements are small. However, for , the cancellation in the 33 element of is matched by a cancellation between the 33 elements of the and terms from Eq. (19) (since the relative phase between these is also ), thus leading to a non-hierarchical form for the type–I seesaw neutrino mass matrix.
The fine-tuning between different contributions to the neutrino mass matrix is thus a little bit more involved in the type–I seesaw case compared to the type–II seesaw, but it can still lead to large mixing in the 2-3 sector. Moreover, since the correlations between the input parameters and the neutrino mass matrix elements are not so strong, most of the constraints discussed in the above section do not hold (for example, does not have to be necessarily very close to ). This may lead one to believe that it is possible to obtain a better fit for the neutrino sector in type–I models, and we found that in fact this is the case.
For example, we show in Fig 5(left) the maximum atmospheric/solar mass splitting ratio as a function of the atmosperic mixing angle - with cuts on the solar mixing angle: (dotted), (dashed) and no cut (solid). In the left panel we show maximum as a function of the solar angle for (dotted), (dashed) and (solid line). This figure is obtained for values GeV, GeV, GeV and , while the CKM phase is allowed to vary between 60 and 70 deg. We see that it is possible to obtain a large atmosperic/solar mass splitting ratio for values of the atmospheric and solar mixings consisted with experimental constraints.
How do these results change if we modify the SUSY parameters and | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9820085763931274, "perplexity": 480.182238385711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400193391.9/warc/CC-MAIN-20200920031425-20200920061425-00429.warc.gz"} |
http://slideplayer.com/slide/4894834/ | # Concentrations of Solutions
## Presentation on theme: "Concentrations of Solutions"— Presentation transcript:
Concentrations of Solutions
Prentice-Hall Chapter 16.2 Dr. Yager
Objectives Solve problems involving the molarity of a solution
Describe the effect of dilution on the total moles of solute in solution Define percent by volume and percent by mass
Molarity
To make a 0. 5 molar (0. 5M) solution, first add 0
To make a 0.5 molar (0.5M) solution, first add 0.5 mol of solute to a 1-L volumetric flask half filled with distilled water.
Swirl the flask carefully to dissolve the solute.
Fill the flask with water exactly to the 1-Liter mark.
A solution has a volume of 2. 0 L and contains 36
A solution has a volume of 2.0 L and contains 36.0 g of glucose (C6H12O6). If the molar mass of glucose is 180 g/mol, what is the molarity of the solution?
Household laundry bleach is a dilute aqueous solution of sodium hypochlorite (NaClO). How many moles of solute are present in 1.5 L of 0.07 M NaClO?
How many moles of solute are in 250 ml of 2. 0M CaCl2
How many moles of solute are in 250 ml of 2.0M CaCl2? How many grams of CaCl2 is this?
The concentration of a solution is a measure of the amount of solute that is dissolved in a given quantity of solvent. A dilute solution is one that contains a small amount of solute. A concentrated solution contains a large amount of solute.
Making Dilutions Key Idea
Diluting a solution reduces the number of moles of solute per unit volume, but the total number of moles of solute in solution does not change.
The total number of moles of solute remains unchanged upon dilution, so you can write this equation.
M1 and V1 are the molarity and volume of the initial solution, and M2 and V2 are the molarity and volume of the diluted solution.
Making a Dilute Solution
To prepare 100 ml of 0. 40M MgSO4 from a stock solution of 2
To prepare 100 ml of 0.40M MgSO4 from a stock solution of 2.0M MgSO4, a student first measures 20 mL of the stock solution with a 20- mL pipet.
She then transfers the 20 mL to a 100-mL volumetric flask.
Finally she carefully adds water to the mark to make 100 mL of solution.
Volume-Measuring Devices
How many milliliters of a solution of 4
How many milliliters of a solution of 4.00 M KI are needed to prepare ml of M KI? M1 x V1 = M2 x V2 4.00 M x V1 = M x ml V1 = 47.5 ml
Percent Solutions The concentration of a solution in percent can be expressed in two ways: as the ratio of the volume of the solute to the volume of the solution or as the ratio of the mass of the solute to the mass of the solution.
Concentration in Percent (Volume/Volume)
Isopropyl alcohol (2-propanol) is sold as a 91% solution
Isopropyl alcohol (2-propanol) is sold as a 91% solution. This solution consist of 91 mL of isopropyl alcohol mixed with enough water to make 100 mL of solution.
A bottle of the antiseptic hydrogen peroxide (H2O2) is labeled 3
A bottle of the antiseptic hydrogen peroxide (H2O2) is labeled 3.0% (v/v). How many milliliters of H2O2 are in a ml bottle of this solution?
Concentration in Percent (Mass/Mass)
A bottle of glucose is labeled 2. 8% (m/m)
A bottle of glucose is labeled 2.8% (m/m). How many grams of glucose are in g of solution?
1. To make a 1. 00M aqueous solution of NaCl, 58
1. To make a 1.00M aqueous solution of NaCl, 58.4 g of NaCl are dissolved in 1.00 liter of water. enough water to make 1.00 liter solution. 1.00 kg of water. 100 mL of water.
1. To make a 1. 00M aqueous solution of NaCl, 58
1. To make a 1.00M aqueous solution of NaCl, 58.4 g of NaCl are dissolved in 1.00 liter of water. enough water to make 1.00 liter solution. 1.00 kg of water. 100 mL of water.
2. What mass of sodium iodide (NaI) is contained in 250 mL of a 0
2. What mass of sodium iodide (NaI) is contained in 250 mL of a 0.500M solution? 150 g 75.0 g 18.7 g 0.50 g
2. What mass of sodium iodide (NaI) is contained in 250 mL of a 0
2. What mass of sodium iodide (NaI) is contained in 250 mL of a 0.500M solution? 150 g 75.0 g 18.7 g 0.50 g
3. Diluting a solution does NOT change which of the following?
concentration volume milliliters of solvent moles of solute
3. Diluting a solution does NOT change which of the following?
concentration volume milliliters of solvent moles of solute
4. In a 2000 g solution of glucose that is labeled 5
4. In a 2000 g solution of glucose that is labeled 5.0% (m/m), the mass of water is 2000 g. 100 g. 1995 g. 1900 g.
4. In a 2000 g solution of glucose that is labeled 5
4. In a 2000 g solution of glucose that is labeled 5.0% (m/m), the mass of water is 2000 g. 100 g. 1995 g. 1900 g. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8204020857810974, "perplexity": 2862.56972995371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00398.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-5-polynomials-and-factoring-5-3-factoring-trinomials-of-the-type-ax2-bx-c-5-3-exercise-set-page-326/3 | ## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
The given expression, $5x^2-14x-3 ,$ will have binomial factors where the first terms of the binomials can only be $5x$ and $x$. Hence, $\text{ Choice B .}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6156867742538452, "perplexity": 761.7455921905544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217901.91/warc/CC-MAIN-20180820234831-20180821014831-00651.warc.gz"} |
https://samacheerkalvi.guru/category/class-7/ | # Class 7
## Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 3 Algebra Ex 3.3
Students can Download Maths Chapter 3 Algebra Ex 3.3 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 3 Algebra Ex 3.3
Question 1.
Fill in the blanks.
(i) The degree of the term a3b2c4d2 is _______
(ii) Degree of the constant term is _______
(iii) The coefficient of leading term of the expression 3z2y + 2x – 3 is _______
(i) 11
(ii) 0
(iii) 3
Identifying the Degree and Leading Coefficient Calculator of Polynomials.
Question 2.
Say True or False.
(i) The degree of m2 n and mn2 are equal.
(ii) 7a2b and -7ab2 are like terms.
(iii) The degree of the expression -4x2 yz is -4
(iv) Any integer can be the degree of the expression.
(i) True
(ii) False
(iii) False
(iv) True
Question 3.
Find the degree of the following terms.
(i) 5x2
(ii) -7 ab
(iii) 12pq2 r2
(iv) -125
(v) 3z
Solution:
(i) 5x2
In 5x2, the exponent is 2. Thus the degree of the expression is 2.
(ii) -7ab
In -7ab, the sum of powers of a and b is 2. (That is 1 + 1 = 2).
Thus the degree of the expression is 2.
(iii) 12pq2 r2
In 12pq2 r2, the sum of powers of p, q and r is 5. (That is 1 +2 + 2 = 5).
Thus the degree of the expression is 5.
(iv) -125
Here – 125 is the constant term. Degree of constant term is 0.
∴ Degree of -125 is 0.
(v) 3z
The exponent is 3z is 1.
Thus the degree of the expression is 1.
Question 4.
Find the degree of the following expressions.
(i) x3 – 1
(ii) 3x2 + 2x + 1
(iii) 3t4 – 5st2 + 7s2t2
(iv) 5 – 9y + 15y2 – 6y3
(v) u5 + u4v + u3v2 + u2v3 + uv4
Solution:
(i) x3 – 1
The terms of the given expression are x3, -1
Degree of each of the terms: 3,0
Terms with highest degree: x3.
Therefore, degree of the expression is 3.
(ii) 3x2 + 2x + 1
The terms of the given expression are 3x2, 2x, 1
Degree of each of the terms: 2, 1, 0
Terms with highest degree: 3x2
Therefore, degree of the expression is 2.
(iii) 3t4 – 5st2 + 7s2t2
The terms of the given expression are 3t4, – 5st2, 7s3t2
Degree of each of the terms: 4, 3, 5
Terms with highest degree: 7s2t2
Therefore, degree of the expression is 5.
(iv) 5 – 9y + 15y2 – 6y3
The terms of the given expression are 5, – 9y , 15y2, – 6y3
Degree of each of the terms: 0, 1, 2, 3
Terms with highest degree: – 6y3
Therefore, degree of the expression is 3.
(v) u5 + u4v + u3v2 + u2v3 + uv4
The terms of the given expression are u5, u4v , u3v2, u2v3, uv4
Degree of each of the terms: 5, 5, 5, 5, 5
Terms with highest degree: u5, u4v , u3v2, u2v3, uv4
Therefore, degree of the expression is 5.
Question 5.
Identify the like terms : 12x3y2z, – y3x2z, 4z3y2x, 6x3z2y, -5y3x2z
Solution:
-y3 x2z and -5y3x2z are like terms.
Question 6.
Add and find the degree of the following expressions.
(i) (9x + 3y) and (10x – 9y)
(ii) (k2 – 25k + 46) and (23 – 2k2 + 21 k)
(iii) (3m2n + 4pq2) and (5nm2 – 2q2p)
Solution:
(i) (9x + 3y) and (10x – 9y)
This can be written as (9x + 3y) + (10x – 9y)
Grouping the like terms, we get
(9x + 10x) + (3y – 9y) = x(9 + 10) + y(3 – 0) = 19x + y(-6) = 19x – 6y
Thus degree of the expression is 1.
(ii) (k2 – 25k + 46) and (23 – 2k2 + 21k)
This can be written as (k2 – 25k + 46) + (23 – 2k2 + 21k)
Grouping the like terms, we get
(k2 – 2k2) + (-25 k + 21 k) + (46 + 23)
= k2 (1 – 2) + k(-25 + 21) + 69 = – 1k2 – 4k + 69
Thus degree of the expression is 2.
(iii) (3m2n + 4pq2) and (5nm2 – 2q2p)
This can be written as (3m2n + 4pq2) + (5nm2 – 2q2p)
Grouping the like terms, we get
(3m2n + 5m2n) + (4pq2 – 2pq2)
= m2n(3 + 5) + pq2(4 – 2) = 8m2n + 2pq2
Thus degree of the expression is 3.
Question 7.
Simplify and find the degree of the following expressions.
(i) 10x2 – 3xy + 9y2 – (3x2 – 6xy – 3y2)
(ii) 9a4 – 6a3 – 6a4 – 3a2 + 7a3 + 5a2
(iii) 4x2 – 3x – [8x – (5x2 – 8)]
Solution:
(i) 10x2 – 3xy + 9y2 – (3x2 – 6xy – 3y2)
= 10x2 – 3xy + 9y2 + (-3x2 + 6xy + 3y2)
= 10x2 – 3xy + 9y2 – 3x2 + 6xy + 3y2
= (10x2 – 3x2) + (- 3xy + 6xy) + (9y2 + 3y2)
= x2(10 – 3) + xy(-3 + 6) + y2(9 + 3)
= x2(7) + xy(3) + y2(12)
Hence, the degree of the expression is 2.
(ii) 9a4 – 6a3 – 6a4 – 3a2 + 7a3 + 5a2
= (9a4 – 6a4) + (- 6a3 + 7a3) + (- 3a2 + 5a2)
= a4(9-6) + a3 (- 6 + 7) + a2(-3 + 5)
= 3a4 + a3 + 2a2
Hence, the degree of the expression is 4.
(iii) 4x2 – 3x – [8x – (5x2 – 8)]
= 4x2 – 3x – [8x + -5x2 + 8)]
= 4x2 – 3x – [8x – 5x2 – 8]
= 4x2 – 3x – 8x + 5x2 – 8
(4x2 + 5x2) + (- 3x – 8x) – 8
= x2(4+ 5) + x(-3-8) – 8
= x2(9) + x(- 11) – 8
= 9x2 – 11x – 8
Hence, the degree of the expression is 2.
Objective Type Question
Question 8.
3p2 – 5pq + 2q2 + 6pq – q2 +pq is a
(i) Monomial
(ii) Binomial
(iii) Trinomial
(iii) Trinomial
Question 9.
The degree of 6x7 – 7x3 + 4 is
(i) 7
(ii) 3
(iii) 6
(iv) 4
(i) 7
Question 10.
If p(x) and q(x) are two expressions of degree 3, then the degree of p(x) + q(x) is
(i) 6
(ii) 0
(iii) 3
(iv) Undefined
(iii) 3
## Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 1 Number System Ex 1.2
Students can Download Maths Chapter 1 Number System Ex 1.2 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 1 Number System Ex 1.2
Question 1.
Fill in the following place value table.
1/2 as a decimal is 0.5.
Question 2.
Write the decimal numbers from the following place value table.
Question 3.
Write the following decimal numbers in the place value table.
(i) 25.178
(ii) 0.025
(iii) 428.001
(iv) 173.178
(v) 19.54
Solution:
(i) 25.178
(ii) 0.025
(iii) 428.001
(iv) 173.178
(v) 19.54
Question 4.
Write each of the following as decimal numbers.
(i) 20 + 1 + $$\frac { 2 }{ 10 }$$ + $$\frac { 3 }{ 100 }$$ + $$\frac { 7 }{ 1000 }$$
(ii) 3 + $$\frac { 8 }{ 10 }$$ + $$\frac { 4 }{ 100 }$$ + $$\frac { 5 }{ 1000 }$$
(iii) 6 + $$\frac { 0 }{ 10 }$$ + $$\frac { 0 }{ 100 }$$ + $$\frac { 9 }{ 1000 }$$
(iv) 900 + 50 + 6 + $$\frac { 3 }{ 100 }$$
(v) $$\frac { 6 }{ 10 }$$ + $$\frac { 3 }{ 100 }$$ + $$\frac { 1 }{ 1000 }$$
Solution:
(i) 20 + 1 + $$\frac { 2 }{ 10 }$$ + $$\frac { 3 }{ 100 }$$ + $$\frac { 7 }{ 1000 }$$ = 21 + 2 × $$\frac { 1 }{ 10 }$$ + 3 × $$\frac { 1 }{ 100 }$$ + 7 × $$\frac { 1 }{ 1000 }$$ = 21.237
(ii) 3 + $$\frac { 8 }{ 10 }$$ + $$\frac { 4 }{ 100 }$$ + $$\frac { 5 }{ 1000 }$$ = 3 + 8 × $$\frac { 1 }{ 10 }$$ + 4 × $$\frac { 1 }{ 100 }$$ + 5 × $$\frac { 1 }{ 1000 }$$ = 3.845
(iii) 6 + $$\frac { 0 }{ 10 }$$ + $$\frac { 0 }{ 100 }$$ + $$\frac { 9 }{ 1000 }$$ = 6 + 0 × $$\frac { 1 }{ 10 }$$ + 0 × $$\frac { 1 }{ 100 }$$ + 9 × $$\frac { 1 }{ 1000 }$$ = 6.009
(iv) 900 + 50 + 6 + $$\frac { 3 }{ 100 }$$ = 956 + 0 × $$\frac { 1 }{ 10 }$$ + 3 × $$\frac { 1 }{ 100 }$$ = 956.03
(v) $$\frac { 6 }{ 10 }$$ + $$\frac { 3 }{ 100 }$$ + $$\frac { 1 }{ 1000 }$$ = 6 × $$\frac { 1 }{ 10 }$$ + 3 × $$\frac { 1 }{ 100 }$$ = 0.631
Question 5.
Convert the following fractions into decimal numbers.
(i) $$\frac { 3 }{ 10 }$$
(ii) 3 $$\frac { 1 }{ 2 }$$
(iii) 3 $$\frac { 3 }{ 5 }$$
(iv) $$\frac { 3 }{ 2 }$$
(v) $$\frac { 4 }{ 5 }$$
(vi) $$\frac { 99 }{ 100 }$$
(vii) 3 $$\frac { 19 }{ 25 }$$
Solution:
(i) $$\frac { 3 }{ 10 }$$ = 0.3
(ii) 3 $$\frac { 1 }{ 2 }$$ = $$\frac { 7 }{ 2 }$$ = $$\frac{7 \times 5}{2 \times 5}$$ = $$\frac { 35 }{ 10 }$$ = 3.5
(iii) 3 $$\frac { 3 }{ 5 }$$ = $$\frac { 18 }{ 5 }$$ = $$\frac{18 \times 2}{5 \times 2}$$ = $$\frac { 36 }{ 10 }$$ = 3.6
(iv) $$\frac { 3 }{ 2 }$$ = $$\frac{3 \times 5}{2 \times 5}$$ = $$\frac { 15 }{ 10 }$$ = 1.5
(v) $$\frac { 4 }{ 5 }$$ = $$\frac{4 \times 2}{5 \times 2}$$ = $$\frac { 8 }{ 10 }$$ = 0.8
(vi) $$\frac { 99 }{ 100 }$$ = 0.99
(vii) 3 $$\frac { 19 }{ 25 }$$ = $$\frac { 94 }{ 25 }$$ = $$\frac{94 \times 4}{25 \times 4}$$ = $$\frac { 376 }{ 100 }$$ = 3.76
Question 6.
Write the following decimals as fractions.
(i) 2.5
(ii) 6.4
(iii) 0.75
Solution:
(i) 2.5 = 2 + $$\frac { 5 }{ 10 }$$ = $$\frac { 25 }{ 10 }$$
(ii) 6.4 = 6 + $$\frac { 4 }{ 10 }$$ = $$\frac { 64 }{ 10 }$$
(iii) 0.75 = 0 + $$\frac { 7 }{ 10 }$$ + $$\frac { 5 }{ 100 }$$ = $$\frac { 70+5 }{ 100 }$$ = $$\frac { 75 }{ 100 }$$
Question 7.
Express the following decimals as fractions in lowest form.
(i) 2.34
(ii) 0.18
(iii) 3.56
Solution:
(i) 2.34 = 2 + $$\frac { 34 }{ 100 }$$ = 2 + $$\frac{34 \div 2}{100 \div 2}$$ = 2 + $$\frac { 17 }{ 50 }$$ = 2$$\frac { 17 }{ 50 }$$ = $$\frac { 117 }{ 50 }$$
(ii) 0.18 = 0 + $$\frac { 18 }{ 100 }$$ = $$\frac{18 \div 2}{100 \div 2}$$ = $$\frac { 9 }{ 50 }$$
(iii) 3.56 = 3 + $$\frac { 56 }{ 100 }$$ = 3 + $$\frac{56 \div 4}{100 \div 4}$$ = 3 + $$\frac { 14 }{ 25 }$$ = 3 $$\frac { 14 }{ 25 }$$ = $$\frac { 89 }{ 25 }$$
Objective Questions
Question 8.
3 + $$\frac { 4 }{ 100 }$$ + $$\frac { 9 }{ 1000 }$$ = ?
(i) 30.49
(ii) 3049 9
(iii) 3.0049
(iv) 3.049
(iv) 3.049
Hint: = 3 × 1 + $$\frac { 0 }{ 10 }$$ + $$\frac { 4 }{ 100 }$$ + $$\frac { 9 }{ 1000 }$$ = 3.049
Question 9.
$$\frac { 3 }{ 5 }$$ = _______
(i) 0.06
(ii) 0.006
(iii) 6
(iv) 0.6
(iv) 0.6
Hint: $$\frac { 3 }{ 5 }$$ = $$\frac{3 \times 2}{5 \times 2}$$ = $$\frac { 6 }{ 10 }$$ = 0.06
Question 10.
The simplest form of 0.35 is
(i) $$\frac { 35 }{ 1000 }$$
(ii) $$\frac { 35 }{ 10 }$$
(iii) $$\frac { 7 }{ 20 }$$
(iv) $$\frac { 7 }{ 100 }$$
(iii) $$\frac { 7 }{ 20 }$$
Hint: 0.35 = $$\frac { 35 }{ 100 }$$ = $$\frac{35 \div 5}{100 \div 5}$$ = $$\frac { 7 }{ 20 }$$
## Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 1 Number System Ex 1.5
Students can Download Maths Chapter 1 Number System Ex 1.5 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 1 Number System Ex 1.5
5/8 as a decimal is 0.625.
Question 1.
Write the following decimal numbers in the place value table.
(i) 247.36
(ii) 132.105
Solution:
(i) 247.36
(ii) 132.105
11/16 as a decimal is 0.6875.
Question 2.
Write each of the following as decimal number.
(i) 300 + 5 + $$\frac { 7 }{ 10 }$$ + $$\frac { 9 }{ 100 }$$ + $$\frac { 2 }{ 100 }$$
(ii) 1000 + 400 + 30 + 2 + $$\frac { 6 }{ 10 }$$ + $$\frac { 7 }{ 100 }$$
Solution:
(i) 300 + 5 + $$\frac { 7 }{ 10 }$$ + $$\frac { 9 }{ 100 }$$ + $$\frac { 2 }{ 100 }$$ = 305.792
(ii) 1000 + 400 + 30 + 2 + $$\frac { 6 }{ 10 }$$ + $$\frac { 7 }{ 100 }$$ = 1432.67
Question 3.
Which is greater?
(i) 0.888 (or) 0.28
(ii) 23.914 (or) 23.915
Solution:
(i) 0.888 (or) 0.28
The whole number parts is equal for both the numbers.
Comparing the digits in the tenths place we get, 8 > 2.
0.888 > 0.28 ∴ 0.888 is greater.
(ii) 23.914 or 23.915
The whole number part is equal in both the numbers.
Also the tenth place and hundredths place are also equal.
∴ Comparing the thousandths place, we get 5 > 4.
23.915 > 23.914 ∴ 23.915 is greater.
Question 4.
In a 25 m swimming competition, the time taken by 5 swimmers A, B, C, D and E are 15.7 seconds, 15.68 seconds, 15.6 seconds, 15.74 seconds and 15.67 seconds respectively. Identify the winner.
Solution:
The winner is one who took less time for swimming 25 m.
Comparing the time taken by A, B, C, D, E the whole number part is equal for all participants.
Comparing digit in tenths place we get 6 < 7.
∴ Comparing 15.68, 15.6, 15.67, that is comparing the digits in hundredths place we get 15.60 < 15.67 < 15.68
One who took 15.6 seconds is the winner. ∴ C is the winner.
Question 5.
Convert the following decimal numbers into fractions
(i) 23.4
(ii) 46.301
Solution:
(i) 23.4 = $$\frac { 234 }{ 10 }$$ = $$\frac{234 \div 2}{10 \div 2}$$ = $$\frac { 117 }{ 5 }$$
(ii) 46.301 = $$\frac { 46301 }{ 1000 }$$
Question 6.
Express the following in kilometres using decimals,
(i) 256 m
(ii) 4567 m
Solution:
1 m = $$\frac { 1 }{ 1000 }$$ km = 0.001 Km
(i) 256 m = $$\frac { 256 }{ 1000 }$$ km = 0.256 km
(ii) 4567 m = $$\frac { 4567 }{ 1000 }$$ km = 4.567 km
Question 7.
There are 26 boys and 24 girls in a class. Express the fractions of boys and girls as decimal numbers.
Solution:
Boys = 26; Girls = 24; Total = 50
Fraction of boys = $$\frac { 26 }{ 50 }$$ = $$\frac{26 \times 2}{50 \times 2}$$ = $$\frac { 52 }{ 100 }$$ = 0.52
Fraction of girls = $$\frac { 24 }{ 50 }$$ = $$\frac{24 \times 2}{50 \times 2}$$ = $$\frac { 48 }{ 100 }$$ = 0.48
Challenge Problems
Question 8.
Write the following amount using decimals.
(i) 809 rupees 99 paise
(ii) 147 rupees 70 paise
Solution:
100 paise = 1 rupee; 1 paise = $$\frac { 1 }{ 100 }$$ rupee
(i) 809 rupees 99 paise = 809 rupees + $$\frac { 99 }{ 100 }$$ rupees
= 809 + 0.99 rupees = ₹ 809.99
(ii) 147 rupees 70 paise = 147 rupees + $$\frac { 70 }{ 100 }$$ rupees
= 147 rupees + 0.70 rupees = ₹ 147.70
Question 9.
Express the following in metres using decimals.
(i) 1328 cm
(ii) 419 cm
Solution:
100 cm = 1 m; 1 cm = $$\frac { 1 }{ 100 }$$ m
(i) 1328 cm = $$\frac { 1328 }{ 100 }$$ m = 13.28 m
(ii) 419 cm = $$\frac { 419 }{ 100 }$$ m = 4.19 m
Question 10.
Express the following using decimal notation.
(i) 8 m 30 cm in metres
(ii) 24 km 200 m in kilometres
Solution:
(i) 8 m 30 cm in metres
8 m + $$\frac { 30 }{ 100 }$$ m = 8 m + 0.30 m = 8.30 m
(ii) 24 km 200 m in kilometres
24 km + $$\frac { 200 }{ 1000 }$$ km = 24 km + 0.200 km = 24.200 km
Question 11.
Write the following fractions as decimal numbers.
(i) $$\frac { 23 }{ 10000 }$$
(ii) $$\frac { 421 }{ 100 }$$
(iii) $$\frac { 37 }{ 10 }$$
Solution:
(i) $$\frac { 23 }{ 10000 }$$ = 0.0023
(ii) $$\frac { 421 }{ 100 }$$ = 4.21
(iii) $$\frac { 37 }{ 10 }$$ = 3.7
Question 12.
Convert the following decimals into fractions and reduce them to the lowest form,
(i) 2.125
(ii) 0.0005
Solution:
(i) 2.125 = $$\frac { 2125 }{ 1000 }$$ = $$\frac{2125 \div 25}{1000 \div 25}$$ = $$\frac { 85 }{ 40 }$$ = $$\frac{85 \div 5}{40 \div 5}$$ = $$\frac { 17 }{ 8 }$$
(ii) 0.0005 = $$\frac { 5 }{ 1000 }$$ = $$\frac{5 \div 5}{10000 \div 5}$$ = $$\frac { 1 }{ 2000 }$$
Question 13.
Represent the decimal numbers 0.07 and 0.7 on a number line.
Solution:
0.07 lies between 0.0 and 0.1
The unit space between 0 and 0.1 is divided into 10 equal parts and 7th part is taken. Also 0.7 lies between 0 and 1.
The unit space between 0 and 1 is divided into 10 equal parts, and the 7th part is taken.
Question 14.
Write the following decimal numbers in words.
(i) 4.9
(ii) 220.0
(iii) 0.7
(iv) 86.3
Solution:
(i) 4.9 = Four and nine tenths
(ii) 220.0 = Two hundred and twenty
(iii) 0.7 = Seven tenths
(iv) 86.3 = Eighty six and three tenths.
Question 15.
Between which two whole numbers the given numbers lie?
(i) 0.2
(ii) 3.4
(iii) 3.9
(iv) 2.7
(v) 1.7
(vi) 1.3
Solution:
(i) 0.2 lies between 0 and 1.
(ii) 3.4 lies between 3 and 4.
(iii) 3.9 lies between 3 and 4.
(iv) 2.7 lies between 2 and 3.
(v) 1.7 lies between 1 and 2.
(vi) 1.3 lies between 1 and 2.
Question 16.
By how much is $$\frac { 9 }{ 10 }$$ km less than 1 km. Express the same in decimal form.
Solution:
Given measures are 1 km and $$\frac { 9 }{ 10 }$$ km. i.e., 1 km and 0.9 km.
Difference = 1.0 – 0.9 = 0.1 km.
## Samacheer Kalvi 7th Maths Solutions Term 3 Chapter 1 Number System Ex 1.1
Students can Download Maths Chapter 1 Number System Ex 1.1 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 3 Chapter 1 Number System Ex 1.1
Question 1.
Round each of the following decimals to the nearest whole number.
(i) 8.71
(ii) 26.01
(iii) 69.48
(iv) 103.72
(v) 49.84
(vi) 101.35
(vii) 39.814
(viii) 1.23
Solution.
(i) 8.71
Underlining the digit to be rounded 8.71. Since the digit next to the underlined digit, 7 which is greater than 5, adding 1 to the underlined digit.
Hence the nearest whole number 8.71 rounds to is 9.
(ii) 26.01
Underlining the digit to be rounded 26.01. Since the digit next to the underlined digit, 0 which is less than 5, the underlined digit 6 remains the same.
∴ The nearest whole number 26.01 rounds to is 26.
(iii) 69.48
Underlining the digit to be rounded 69.48. Since the digit next to the underlined digit, 4 which is less than 5, the underlined digit 9 remains the same.
∴ The whole number is 69.48 rounds to is 69.
(iv) 103.72
Underlining the digit to be rounded 103.72 since the digit next to the underlined digit, 7 which is greater than 5, we add 1 to the under lined digit.
Hence the nearest whole number 103.72 rounds to is 104.
(v) 49.84
Underlining the digit to be rounded 49.84. Since the digit next to the underlined digit 8 which is greater than 5, we add 1 to the underlined digit.
Hence the nearest whole number 49.84 rounds to 50.
(vi) 101.35
Underlining the digit to be rounded 101.35. Since the digit next to the underlined digit 3 is less than 5, the underlined digit 1 remains the same.
Hence the nearest whole number 101.35 rounds to is 101.
(vii) 39.814
Underlining the digit to be rounded 39.814. Since the digit next to the underlined digit 8 is greater than 5, we add 1 to the underlined digit.
Hence the nearest whole number 39.814 rounds to is 40.
(viii) 1.23
Underlining the digit to be rounded 1.23. Since the digit next to the underlined digit 2, is less than 5, the underlined digit 1 remains the same.
Hence the nearest whole number 1.23 rounds to is 1.
1/8 as a decimal is 0.125.
Question 2.
Round each decimal number to the given place value.
(i) 5.992; tenths place
(ii) 21.805; hundredth place
(iii) 35.0014; thousandth place
Solution:
(i) 992; tenths place
Underlining the digit to be rounded 5.992. Since the digit next to the underlined digit is 9 greater than 5, we add 1 to the underlined digit.
Hence the rounded number is 6.0.
(ii) 21.805; hundredth place
Underlining the digit to be rounded 21.805 since the digit next to the underlined digit is 5, we add 1 to the underlined digit.
Hence the rounded number is 21.81.
(iii) 35.0014; thousandth place
Underlining the digit to be rounded 35.0014. Since the digit next to the underlined digit is 4 less than 5 the underlined digit remains the same.
Hence the rounded number is 35.001.
One Decimal is equal to 435.56 Square Feet.
Question 3.
Round the following decimal numbers upto 1 places of decimal.
(i) 123.37
(ii) 19.99
(iii) 910.546
Solution:
(i) 123.37
Rounding 123.37 upto one places of decimal means round to the nearest tenths place. Underling the digit in the tenths place of 123.37 gives 123.37. Since the digit next to the tenth place value is 7 which is greater than 5, we add 1 to the underlined digit to get 123.4. Hence the rounded value of 123.37 upto one places of decimal is 123.4.
(ii) 19.99
Rounding 19.99 upto one places of decimal means round to the nearest tenth place. Underling the digit in the tenths place of 19.99 gives 19.99. Since the digit next to the tenth place value is 9 which is greater than 5, we add 1 to the underlined digit to get 20.
Hence the rounded value of 19.99 upto one places of decimal is 20.0.
(iii) 910.546
Rounding 910.546 upto one places of decimal means round to the nearest tenths place underlining the digit in the tenths place of 910.546 gives 910.546. Since the digit next to the tenth place value is 4, which is less than 5 the underlined digit remains the same. Hence the rounded value of 910.546 upto one places of decimal is 910.5.
Rounding to the nearest hundredth is 838.27.
Question 4.
Round the following decimal numbers upto 2 places of decimal.
(i) 87.755
(ii) 301.513
(iii) 79.997
Solution:
(i) 87.755
Rounding 87.755 upto 2 places of decimal means round to the nearest hundredths place. Underlining the digit in the hundredth place of 87.755 gives 87.755. Since the digit next to the hundredth place value is 5, we add 1 to the underlined digit.
Hence the rounded value of 87.755 upto two places of decimal is 87.76.
(ii) 301.513
Rounding 301.51 upto 2 places of decimal means round to the nearest hundredths place. Underlining the digit in the hundredth place of 301.513 gives 301.513. Since the digit next to the underlined digit 3 is less than 5, the underlined digit remains the same.
∴ The rounded value of 301.513 upto 2 places of decimal is 301.51.
(iii) 79.997
Rounding 79.997 upto 2 places of decimal means round to the nearest hundredths place. Underlining the digit in the hundredth place of 79.997 gives 79.997. Since the digit next to the underlined digit 7 is greater than 5, we add 1 to the underlined number.
Hence the rounded value of 79.997 upto 2 places of decimal is 80.00.
Question 5.
Round the following decimal numbers upto 3 place of decimal
(a) 24.4003
(b) 1251.2345
(c) 61.00203
Solution:
(a) 24.4003
Rounding 24.4003 upto 3 places of decimal means rounding to the nearest thousandths place. Underlining the digit in the thousandths place of 24.4003 gives 24.4003. In 24.4003 the digit next to the thousandths value is 3 which is less than 5.
∴ The underlined digit remains the same. So the rounded value of24.4003 upto 3 places of decimal is 24.400.
(b) 1251.2345
Rounding 1251.2345 upto 3 places of decimal means rounding to the nearest thousandths place. Underlining the digit in the thousandths place of 1251.2345 gives 1251.2345, the digit next to the thousandths place value is 5 and so we add 1 to the underlined digit. So the rounded value of 1251.2345 upto 3 places of decimal is 1251.235.
(c) 61.00203
Rounding 61.00203 upto 3 places of decimal means rounding to the nearest thousandths place. Underlining the digit in the thousandth place of 61.00203 gives 61.00203. In 61.00203, the digit next to the thousandths place value is 0, which is less than 5.
Hence the underlined digit remains the same. So the rounded value of 61.00203 upto 3 places of decimal is 61.002.
When we write a decimal number with three places, we are representing the thousandths place.
## Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 1 Number System Ex 1.1
Students can Download Maths Chapter 1 Number System Ex 1.1 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 1 Number System Ex 1.1
Question 1.
Write the decimal numbers for the following pictorial representation of numbers.
Solution:
(i) Tens 2 ones 2 tenths = 12.2
(ii) Tens 1 ones 3 tenths = 21.3
That’s literally all there is to it! 1/32 as a decimal is 0.03125.
Question 2.
Express the following in cm using decimals.
(i) 5 mm
(ii) 9 mm
(iii) 42 mm
(iv) 8 cm 9 mm
(v) 375 mm
Solution:
(i) 5 mm
1 mm = $$\frac { 1 }{ 10 }$$ cm = 0.1 cm
5 mm = $$\frac { 5 }{ 10 }$$ = 0.5 cm
(ii) 9 mm
1 mm = $$\frac { 1 }{ 10 }$$ cm = 0.1 cm
9 mm = $$\frac { 9 }{ 10 }$$ cm = 0.9 cm
(iii) 42 mm
1 mm = $$\frac { 1 }{ 10 }$$ cm = 0.1 cm
42 mm = $$\frac { 42 }{ 10 }$$ cm = 4.2 cm
(iv) 8 cm 9 mm
1 mm = $$\frac { 1 }{ 10 }$$ cm = 0.1 cm
8 cm 9 mm = 8 cm + $$\frac { 9 }{ 10 }$$ cm = 8.9 cm
(v) 375 mm
1 mm = $$\frac { 1 }{ 10 }$$ cm = 0.1 cm
375 mm = $$\frac { 375 }{ 10 }$$ cm = 37.5 cm
Question 3.
Express the following in metres using decimals.
(i) 16 cm
(ii) 7 cm
(iii) 43 cm
(iv) 6 m 6 cm
(v) 2 m 54 cm
Solution:
(i) 16 cm
1 cm = $$\frac { 1 }{ 100 }$$ cm = 0.01 m
16 cm = $$\frac { 16 }{ 100 }$$ m = 0.16 m
(ii) 7 cm
1 cm = $$\frac { 1 }{ 100 }$$ cm = 0.01 m
1 cm = $$\frac { 7 }{ 100 }$$ m = 0.07 m
(iii) 43 cm
1 cm = $$\frac { 1 }{ 100 }$$ cm = 0.01 m
43 cm = $$\frac { 43 }{ 100 }$$ m = 0.43 m
(iv) 6 m 6 cm
1 cm = $$\frac { 1 }{ 10 }$$ m = 0.01 m
6 m 6 cm = 6 m + $$\frac { 6 }{ 100 }$$ m = 6 m + 0.06 m = 6.06 m
(v) 2 mm 54 cm
1 cm = $$\frac { 1 }{ 100 }$$ cm = 0.01 m
2 m 54 cm = 2 m + $$\frac { 54 }{ 100 }$$ m = 2 m + 0.54 m = 2.54 m
Question 4.
Expand the following decimal numbers.
(i) 37.3
(ii) 658.37
(iii) 237.6
(iv) 5678.358
Solution:
(i) 37.3 = 30 + 7 + $$\frac { 3 }{ 10 }$$ = 3 × 101 + 7 × 100 + 3 × 10-1
(ii) 658.37 = 600 + 50 + 8 + $$\frac { 3 }{ 10 }$$ + $$\frac { 7 }{ 100 }$$
= 6 × 102 + 5 × 101 + 8 × 100 + 3 × 10-1 + 7 × 10-2
(iii) 237.6 = 200 + 30 + 7 + $$\frac { 6 }{ 10 }$$
= 2 × 102 + 3 × 101 + 7 × 100 + 6 × 10-1
(iv) 5678.358 = 5000 + 600 + 70 + 8 + $$\frac { 3 }{ 10 }$$ + $$\frac { 5 }{ 100 }$$ + $$\frac { 8 }{ 1000 }$$
= 5 × 103 + 6 × 102 + 7 × 101 + 8 × 100 + 3 × 10-1 + 5 × 10-2 + 8 × 10-3
Question 5.
Express the following decimal numbers in place value grid and write the place value of the underlined digit.
(i) 53.61
(ii) 263.271
(iii) 17.39
(iv) 9.657
(v) 4972.068
Solution:
(i) 53.61
(ii) 263.271
(iii) 17.39
(iv) 9.657
(v) 4972.068
Objective Type Questions
Question 6.
The place value of 3 in 85.073 is _____
(i) tenths
(ii) hundredths
(iii) thousands
(iv) thousandths
(iv) thousandths
Hint: 1000 g = 1 kg; 1 g = $$\frac { 1 }{ 1000 }$$ kg
Question 7.
To convert grams into kilograms, we have to divide it by
(i) 10000
(ii) 1000
(iii) 100
(iv) 10
(ii) 1000
Hint: 85.073 = 8 × 10 + 5 × 1 + 0 × $$\frac { 1 }{ 10 }$$ + 7 × $$\frac { 1 }{ 100 }$$ + 3 × $$\frac { 1 }{ 1000 }$$
Question 8.
The decimal representation of 30 kg and 43 g is ____ kg.
(i) 30.43
(ii) 30.430
(iii) 30.043
(iv) 30.0043
(iii) 30.043
Hint: 30 kg and 43 g = 30 kg + $$\frac { 43 }{ 1000 }$$ kg = 30 + 0.043 = 30.043
Question 9.
A cricket pitch is about 264 cm wide. It is equal to _____ m.
(i) 26.4
(ii) 2.64
(iii) 0.264
(iv) 0.0264
(ii) 2.64
Hint: 264 cm = $$\frac { 264 }{ 100 }$$ m = 2.64 m
## Samacheer Kalvi 7th Maths Solutions Term 1 Chapter 1 Number System Ex 1.3
Students can Download Maths Chapter 1 Number System Ex 1.3 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 1 Chapter 1 Number System Ex 1.3
Question 1.
Fill in the blanks.
(i) -80 × ____ = -80
(ii) (-10) × ____ = 20
(iii) 100 × ___ = -500
(iv) ____ × (-9) = -45
(v) ___ × 75 = 0
Solution:
(i) 1
(ii) -2
(iii) -5
(iv) 5
(v) 0
Question 2.
Say True or False:
(i) (-15) × 5 = 75
(ii) (-100) × 0 × 20 = 0
(iii) 8 × (-4) = 32
Solution:
(i) False
(ii) True
(iii) False
Question 3.
What will be the sign of the product of the following:
(i) 16 times of negative integers.
(ii) 29 times of negative integers.
Solution:
(i) 16 is an even interger.
If negative integers are multiplied even number of times, the product is a positive integer.
∴ 16 times a negative integer is a positive integer.
(ii) 29 times negative integer.
If negative integers are multiplied odd number of times, the product is a negative integer. 29 is odd.
∴ 29 times negative integers is a negative integer.
Question 4.
Find the product of
(i) (-35) × 22
(ii) (-10) × 12 × (-9)
(iii) (-9) × (-8) × (-7) × (-6)
(iv) (-25) × 0 × 45 × 90
(v) (-2) × (+50) × (-25) × 4
Solution:
(i) 35 × 22 = -770
(ii) (-10) × 12 × (-9) = (-120) × (-9) = +1080
(iii) (-9) × (-8) × (-7) × (-6) = (+72) × (-7) × (-6) = (-504) × (-6) = +3024
(iv) (-25) × 0 × 45 × 90 = 0 × 45 × 90 = 0 × 90 = 0
(v) (-2) × (+50) × (-25) × 4 = (-100) × -25 × 4 = 2500 × 4 = 10,000
Question 5.
Check the following for equality and if they are equal, mention the property.
(i) (8 – 13) × 7 and 8 – (13 × 7)
Solution:
Consider (8 – 13) × 7 = (-5) × 7 = -35
Now 8 – (13 × 7) = 8 – 91 = -83
∴ (8 – 13) × 7 ≠ 8 – (13 × 7)
(ii) [(-6) – (+8)] × (-4) and (-6) – [8 × (-4)]
Solution:
[(-6) – (+8)] × (-4) = [(-6) + (-8)] × (-4) = (-14) × (-4) = +56
Now (-6) – [8 × (-4)] = (-6) – (-32)
= (-6) + (+32) = +26
∴ [(-6) – (+8)] × (-4) ≠ (-6) – [8 × (-4)]
(iii) 3 × [(-4) + (-10)] and [3 × (-4) + 3 × (-10)]
Solution:
Consider 3 × [(-4) + (-10)] = 3 × -14 = -42
Now [3 × (-4) + 3 × (-10)] = (-12) + (-30) = -42
Here 3 × [(-4) + (-10)] = [3 × (-4) + 3 × (-10)]
It is the distributive property of multiplication over addition.
Question 6.
During summer, the level of the water in a pond decreases by 2 inches every week due to evaporation. What is the change in the level of the water over a period of 6 weeks?
Solution:
Level of water decreases a week = 2 inches.
Level of water decreases in 6 weeks = 6 × 2 = 12 inches
Question 7.
Find all possible pairs of integers that give a product of -50.
Solution:
Factor of 50 are 1, 2, 5, 10, 25, 50.
Possible pairs of integers that gives product -50:
(-1 × 50), (1 × (-50)), (-2 × 25), (2 × (-25)), (-5 × 10), (5 × (-10))
What is the factor of 50?. Answer: The factors of 50 are 1, 2, 5,10, 25, and 50.
Objective Type Questions
Question 8.
Which of the following expressions is equal to -30.
(i) -20 – (-5 × 2)
(ii) (6 × 10) – (6× 5)
(iii) (2 × 5)+ (4 × 5)
(iv) (-6) × (+5)
Solution:
(iv) (-6) × (+5)
Hint:
(i) -20 + (10) = -10
(ii) 60 – 30 = 30
(iii) 10 + 20 = 30
(iv) (-6) × (+5) = – 30
Question 9.
Which property is illustrated by the equation: (5 × 2) + (5 × 5) = 5 × (2 + 5)
(i) commutative
(ii) closure
(iii) distributive
(iv) associative
Solution:
(iii) distributive
Question 10.
11 × (-1) = _____
(i) -1
(ii) 0
(iii) +1
(iv) -11
Solution:
(iv) -11
Question 11.
(-12) × (-9) =
(i) 108
(ii) -108
(iii) +1
(iv) -1
Solution:
(i) 108
## Samacheer Kalvi 7th Maths Solutions Term 3 Chapter 2 Percentage and Simple Interest Ex 2.5
Students can Download Maths Chapter 2 Percentage and Simple Interest Ex 2.5 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 3 Chapter 2 Percentage and Simple Interest Ex 2.5
To convert CGPA to percentage, all you need to do is multiply your CGPA by 9.5.
Miscellaneous Practice Problems
Question 1.
When Mathi was buying her flat she had to put down a deposit of $$\frac { 1 }{ 10 }$$ of the value of the flat. What percentage was this?
Solution:
Percentage of $$\frac { 1 }{ 10 }$$ = $$\frac { 1 }{ 10 }$$ × 100 % = 10 %
Mathi has to put down a deposit of 10 % of the value of the flat.
Question 2.
Yazhini scored 15 out of 25 in a test. Express the marks scored by her in percentage.
Solution:
Yazhini’s score = 15 out of 25 = $$\frac { 15 }{ 25 }$$
Score in percentage = $$\frac { 15 }{ 25 }$$ × 100% = 60%
Question 3.
Out of total 120 teachers of a school 70 were male. Express the number of male teachers as percentage.
Solution:
Total teachers of the school = 120
Number of male teachers = 70
∴ Percentage of male teacher = $$\frac { 70 }{ 120 }$$ × 100 % = $$\frac { 700 }{ 12 }$$ %
Score in percentage = 58.33%
Percentage of male teachers = 58.33%
The percentage difference calculator is here to help you compare two numbers.
Question 4.
A cricket team won 70 matches during a year and lost 28 matches and no results for two matches. Find the percentage of matches they won.
Solution:
Number of Matches won = 70
Number of Matches lost = 28
“No result” Matches = 2
Total Matches = 70 + 28 + 2 = 100
Percentage of Matches won = $$\frac { 70 }{ 100 }$$ × 100 % = 70 %
The won 70% of the matches
The Percentage Difference Calculator (% difference calculator) will find the percent difference between two positive numbers greater than 0.
Question 5.
There are 500 students in a rural school. If 370 of them can swim, what percentage of them can swim and what percentage cannot?
Solution:
Total number of students = 500
Number of students who can swim = 370
Percentage of students who can swim = $$\frac { 370 }{ 500 }$$ × 100 % = 74 %
Number of students who cannot swim = 500 – 370 = 130
Percentage of students who cannot swim = $$\frac { 130 }{ 500 }$$ × 100 % = 26 %
i.e. 74% can swim and 26% cannot swim
Question 6.
The ratio of Saral’s income to her savings is 4 : 1. What is the percentage of money saved by her?
Solution:
Total parts of money = 4 + 1 = 5
Part of money saved = 1
∴ Percentage of money saved = $$\frac { 1 }{ 5 }$$ × 100% = 20%
∴ 20% of money is saved by Saral
Question 7.
A salesman is on a commission rate of 5%. How much commission does he make on sales worth ₹ 1,500?
Solution:
Total amount on sale = ₹ 1,500
Commission rate = 5 %
Commission received = 5 % of ₹ 1,500 = $$\frac { 5 }{ 100 }$$ × 1500 = ₹ 75
∴ Commission received = ₹ 75
Question 8.
In the year 2015 ticket to the world cup cricket match was ₹ 1,500. This year the price has been increased by 18%. What is the price of a ticket this year?
Solution.
Price of a ticket in 2015 = ₹ 1500
Increased price this year = 18% of price in 2015
= 18 % of ₹ 1500 = $$\frac { 18 }{ 100 }$$ × 1500
= ₹ 270
Price of ticket this year = last year price + increased price
= ₹ 1500 + ₹ 270 = ₹ 1770
Price of ticket this year = ₹ 1770
Question 9.
2 is what percentage of 50?
Solution:
Let the required percentage be x
x% of 50 = 2
$$\frac { x }{ 100 }$$ × 50 = 2
x = $$\frac{2 \times 100}{50}$$ = 4 %
∴ 4 % of 50 is 2
Question 10.
What percentage of 8 is 64?
Solution:
Let the required percentage be x
So x % of 8 = 64
$$\frac { x }{ 100 }$$ × 8 = 64
x = $$\frac{64 \times 100}{8}$$ = 800
∴ 800 % of 8 is 64
Question 11.
Stephen invested ₹ 10,000 in a savings bank account that earned 2% simple interest. Find the interest earned if the amount was kept in the bank for 4 years.
Solution:
Principal (P) = ₹ 10,000
Rate of interest (r) = 2%
Time (n) = 4 years
∴ Simple Interest I = $$\frac { pnr }{ 100 }$$
= $$\frac{10000 \times 4 \times 2}{100}$$
= ₹ 800
Stephen will earn ₹ 800
Question 12.
Riya bought ₹ 15,000 from a bank to buy a car at 10% simple interest. If she paid ₹ 9,000 as interest while clearing the loan, find the time for which the loan was given.
Solution:
Here Principal (P) = ₹ 15,000
Rate of interest (r) = 10 %
Simple Interest (I) = ₹ 9000
I = $$\frac { pnr }{ 100 }$$
9000 = $$\frac{15000 \times n \times 10}{100}$$
n = $$\frac{9000 \times 100}{15000 \times 10}$$
n = 6 years
∴ The loan was given for 6 years
Question 13.
In how much time will the simple interest on ₹ 3,000 at the rate of 8% per annum be the same as simple interest on ?4,000 at 12% per annum for 4 years?
Solution:
Let the required number of years be x
Simple Interest I = $$\frac { pnr }{ 100 }$$
Principal P1 = ₹ 3000
Rate of interest (r) = 8 %
Time (n1) = n1 years
Simple Interest I1 = $$\frac{3000 \times 8 \times n_{1}}{100}$$ = 240 n1
Principal (P2) = ₹ 4000
Rate of interest (r) = 12 %
Time n2 = 4 years
Simple Interest I2 = $$\frac{4000 \times 12 \times 4}{100}$$
I2 = 1920
If I1 = I2
240 n1 = 1920
n1 = $$\frac { 1920 }{ 240 }$$ = 8
∴ The required time = 8 years
Challenge Problems
Question 14.
A man travelled 80 km by car and 320 km by train to reach his destination. Find what percent of total journey did he travel by car and what per cent by train?
Solution:
Distance travelled by car = 80 km.
Distance travelled by train = 320 km
Total distance = 80 + 320 km = 400 km
Percentage of distance travelled by car = $$\frac { 80 }{ 400 }$$ × 100 % = 20 %
Percentage of distance travelled by train = $$\frac { 320 }{ 800 }$$ × 100 % = 40 %
Question 15.
Lalitha took a math test and got 35 correct and 10 incorrect answers. What was the percentage of correct answers?
Solution:
Number of correct answers = 35
Number of incorrect answers = 10
Total number of answers = 35 + 10 = 45
Percentage of correct answers = $$\frac { 35 }{ 45 }$$ × 100 %
= 77.777 % = 77.78 %
Question 17.
The population of a village is 8000. Out of these, 80% are literate and of these literate people, 40% are women. Find the percentage of literate women to the total population?
Solution:
Population of the village = 8000 people
literate people = 80 % of population
= 80 % of 8000 = $$\frac { 80 }{ 100 }$$ × 8000
literate people = 6400
Percentage of women = 40 %
Number of women = 40 % of literate people
= $$\frac { 40 }{ 100 }$$ × 6400 = 2560
∴ literate women : Total population
= 8000 : 2560
= 25 : 8
Question 18.
A student earned a grade of 80% on a math test that had 20 problems. How many problems on this test did the student answer correctly?
Solution:
Total number of problems in the test = 20
Students score = 80 %
Number of problem answered = $$\frac { 80 }{ 100 }$$ × 20 = 16
Question 19.
A metal bar weighs 8.5 kg. 85% of the bar is silver. How many kilograms of silver are in the bar?
Solution:
Total weight of the metal = 8.5 kg
Percentage of silver in the metal = 85%
Weight of silver in the metal = 85% of total weight
= $$\frac { 85 }{ 100 }$$ × 8.5 kg
= 7.225 kg
7.225 kg of silver are in the bar.
Question 20.
Concession card holders pay ₹ 120 for a train ticket. Full fare is ₹ 230. What is the percentage of discount for concession card holders?
Solution:
Train ticket fare = ₹ 230
Ticket fare on concession = ₹ 120
Discount = Ticket fare – concession fare = 230 – 120 = ₹ 110
Percentage of discount = 47.83%
Question 21.
A tank can hold 200 litres of water. At present, it is only 40% full. How many litres of water to fill in the tank, so that it is 75 % full?
Solution:
Capacity of the water tank = 200 litres
Percentage of water in the tank = 40%
Percentage of water to fill = Upto 75%
Difference in percentage = 75 % – 40 % = 35 %
∴ Volume of water to be filled = Percentage of difference × total capacity
= $$\frac { 35 }{ 100 }$$ × 200 = 70 l
70 l of water to be filled
The Percentage Difference Calculator (% difference calculator) will find the percent difference between two positive numbers greater than 0.
Question 22.
Which is greater 16 $$\frac { 2 }{ 3 }$$ or $$\frac { 2 }{ 5 }$$ or 0.17 ?
Solution:
16 $$\frac { 2 }{ 3 }$$ = $$\frac { 50 }{ 30 }$$
= $$\frac { 50 }{ 30 }$$ × 100 % = 1666.67 %
⇒ $$\frac { 2 }{ 5 }$$
= $$\frac { 2 }{ 5 }$$ × 100 = 40 %
0.17 = $$\frac { 17 }{ 100 }$$ = 17 %
∴ 1666.67 is greater
∴ 16 $$\frac { 2 }{ 3 }$$ is greater
Question 23.
The value of a machine depreciates at 10% per year. If the present value is ₹ 1,62,000, what is the worth of the machine after two years.
Solution:
Present value of the machine = ₹ 1,67,000
Rate of depreciation = 10 % Per annum
Time (n) = 2 years
For 1 year depreciation amount = $$\frac{1,62,000 \times 1 \times 10}{100}$$ = ₹ 16,200
Worth of the machine after one year = Worth of Machine – Depreciation
= 1,67,000 – 16,200 = 1,45,800
Depreciation of the machine for 2nd year = 145800 × 1 × $$\frac { 10 }{ 100 }$$ = 14580
Worth of the machine after 2 years = 1,45,800 – 14,580 = 1,31,220
∴ Worth of the machine after 2 years = ₹ 1,31,220
Question 24.
In simple interest, a sum of money amounts to ₹ 6,200 in 2 years and ₹ 6,800 in 3 years. Find the principal and rate of interest.
Solution:
Let the principal P = ₹ 100
If A = 6200
⇒ Principal + Interest for 2 years = 6200
A = ₹ 7400
⇒ Principal + Interest for 3 years = 7400
∴ Difference gives the Interest for 1 year
∴ Interest for 1 year = 7400 – 6200
I = 1200
$$\frac { pnr }{ 100 }$$ = 1200 ⇒ $$\frac{P \times 1 \times r}{100}$$ = 1200
If the Principal = 10,000 then
$$\frac{10,000 \times 1 \times r}{100}$$ = 1200 ⇒ r = 12 %
Rate of interest = 12 % Per month
Question 25.
A sum of ₹ 46,900 was lent out at simple interest and at the end of 2 years, the total amount was ₹ 53,466.Find the rate of interest per year.
Solution:
Here principal P = ₹ 46900
Time n = 2 years
Amount A = ₹ 53466
Let r n be the rate of interest per year p
Intrest I = $$\frac { pnr }{ 100 }$$
A = P + I
53466 = 46900 + $$\frac{46900 \times 2 \times r}{100}$$
53466 – 46900 = $$\frac{46900 \times 2 \times r}{100}$$
6566 = 469 × 2 × r
r = $$\frac{6566}{2 \times 469}$$ % = 7 %
Rate of interest = 7 % Per Year
Question 26.
Arun lent ₹ 5,000 to Balaji for 2 years and ₹ 3,000 to Charles for 4 years on simple interest at the same rate of interest and received ₹ 2,200 in all from both of them as interest. Find the rate of interest per year.
Solution:
Principal lent to Balaji P1 = ₹ 5000
Time n1 = 2 years
Let r be the rate of interest per year
Simple interest got from Balaji = $$\frac { pnr }{ 100 }$$ ⇒ I1 = $$\frac{5000 \times 25 \times r}{100}$$
Again principal let to Charles P2 = ₹ 3000
Time (n2) = 4 years
Simple interest got from Charles (I2) = $$\frac{3000 \times 4 \times r}{100}$$
Altogether Arun got ₹ 2200 as interest.
∴ I1 + I2 = 2200
$$\frac{5000 \times 2 \times r}{100}+\frac{3000 \times 4 \times r}{100}$$ = 2200
100r + 120r = 2200
220r = 2200 = $$\frac { 2200 }{ 220 }$$
r = 10 %
Rate of interest per year = 10 %
Question 27.
If a principal is getting doubled after 4 years, then calculate the rate of interest. (Hint: Let P = ₹ 100)
Solution:
Let the principal P = ₹ 100
Given it is doubled after 4 years
i.e. Time n = 4 years
After 4 years A = ₹ 200
∴ A = P + I
A – P = I
200 – 100 = I
After 4 years interest I = 100
I = $$\frac { pnr }{ 100 }$$ ⇒ 100 = $$\frac{100 \times 4 \times r}{100}$$
4r = 100 ⇒ r = 25 %
Rate of interest r = 25 %
## Samacheer Kalvi 7th Maths Solutions Term 3 Chapter 2 Percentage and Simple Interest Ex 2.2
Students can Download Maths Chapter 2 Percentage and Simple Interest Ex 2.2 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 3 Chapter 2 Percentage and Simple Interest Ex 2.2
The percentage difference calculator is here to help you compare two numbers.
Question 1.
Write each of the following percentage as decimal.
(i) 21 %
(ii) 93.1 %
(iii) 151 %
(iv) 65 %
(v) 0.64 %
Solution:
(i) 21 %
= $$\frac { 21 }{ 100 }$$ = 0.21
(ii) 93.1 %
= $$\frac { 93.1 }{ 100 }$$ = 0.931
(iii) 151 %
= $$\frac { 151 }{ 100 }$$ = 1.51
(iv) 65 %
= $$\frac { 65 }{ 100 }$$ = 0.65
(v) 0.64 %
= $$\frac { 0.64 }{ 100 }$$ = 0.0064
Question 2.
Convert each of the following decimal as percentage
(i) 0.282
(ii) 1.51
(iii) 1.09
(iv) 0.71
(v) 0.858
Solution:
(i) 0.282
= 0.282 × 100% = $$\frac { 282 }{ 1000 }$$ × 100 %
= 28.2 %
(ii) 1.51
= $$\frac { 151 }{ 100 }$$ × 100 %
= 151 %
(iii) 1.09
= $$\frac { 109 }{ 100 }$$ × 100 %
= 109 %
(iv) 0.71
= $$\frac { 71 }{ 100 }$$ × 100 %
= 71 %
(v) 0.858
= $$\frac { 858 }{ 1000 }$$ × 100 %
= 85.8 %
Question 3.
In an examination a student scored 75% of marks. Represent the given the percentage in decimal form?
Solution:
Student’s Score = 75% = $$\frac { 75 }{ 100 }$$ = 0.75
Question 4.
In a village 70.5% people are literate. Express it as a decimal.
Solution:
Percentage of literate people = 70.5%
= $$\frac { 70.5 }{ 100 }$$
= 0.705
Question 5.
Scoring rate of a batsman is 86%. Write his strike rate as decimal.
Solution:
Scoring rate of the batsman = 86%
= $$\frac { 86 }{ 100 }$$
= 0.86
Question 6.
The height of a flag pole in school is 6.75m. Write it as percentage.
Solution:
Height of flag pole = 6.75m
= $$\frac { 675 }{ 100 }$$
= 6.75%
Question 7.
The weights of two chemical substances are 20.34 g and 18.78 g. Write the difference in percentage?
Solution:
Weight of substance 1 = 20.34g
Percentage of substance 1 = $$\frac { 2034 }{ 100 }$$ = 2034 %
Weight of substance 2 = 18.78g
Percentage of substance 2 = $$\frac { 1878 }{ 100 }$$ = 1878 %
Their difference = 2034 – 1878 = 156%
Percentage decrease calculator finds the decrease from one amount to another as a percentage of the first amount.
Question 8.
Find the percentage of shaded region in the following figure.
Solution:
Total region = 4 parts
Fraction of shaded region = $$\frac { 1 }{ 4 }$$
Percentage of shaded region = $$\frac { 1 }{ 4 }$$ × $$\frac { 100 }{ 100 }$$
= $$\frac { 1 }{ 4 }$$ × 100 %
= 25 %
Objective Type Questions
Question 1.
Decimal value of 142.5% is
(i) 1.425
(ii) 0.1425
(iii) 142.5
(iv) 14.25
Hint:
142.5 % = $$\frac { 1425 }{ 10 }$$ %
= $$\frac { 1425 }{ 10 }$$ × $$\frac { 1 }{ 100 }$$
= 1.425
(i) 1.425
Question 2.
The percentage of 0.005 is
(i) 0.005 %
(ii) 5 %
(iii) 0.5 %
(iv) 0.05 %
Hint:
0.005 = $$\frac { 5 }{ 1000 }$$
= $$\frac { 5 }{ 1000 }$$ × $$\frac { 100 }{ 100 }$$
= 0.5 %
(iii) 0.5 %
Question 3.
The percentage of 4.7 is
(i) 0.47 %
(ii) 4.7 %
(iii) 47 %
(iv) 470 %
Hint:
4.7 = $$\frac { 47 }{ 10 }$$
= $$\frac { 47 }{ 10 }$$ × $$\frac { 100 }{ 100 }$$
= 470 %
(iv) 470 %
## Samacheer Kalvi 7th Science Solutions Term 1 Chapter 1 Measurement
Students can Download Science Chapter 1 Measurement Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Science Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Science Solutions Term 1 Chapter 1 Measurement
### Samacheer Kalvi 7th Science Measurement Textual Evaluation
Question 1.
Which of the following is a derived unit?
(a) mass
(b) time
(c) area
(d) length
(c) area
Question 2.
Which of the following is correct?
(a) 1L=lcc
(b) 1L= l0cc
(c) 1L= l00cc
(d) 1L= l000cc
(d) 1L = 1000cc
Question 3.
SI unit of density is
(a) kg/m2
(b) kg/m3
(c) kg/m
(d) g/m3
(b) kg/m3
Question 4.
Two spheres have equal mass and volume in the ratio 2:1. The ratio of their density is
(a) 1:2
(b) 2:1
(c) 4:1
(d) 1:4
(b) 2:1
Question 5.
Light year is the unit of
(a) Distance
(b) time
(c) density
(d) both length and time
(a) Distance
II. Fill in the blanks:
1. Volume of irregularly shaped objects are measured using the law of ___________
2. One cubic metre is equal to ___________ cubic centimetre.
3. Density of mercury is ___________
4. One astronomical unit is equal to ___________
5. The area of a leaf can be measured using a ___________
1. Archimedes
2. 10,00,000 or 1066
3. 13,600 kg/m3
4. 1.496×1011m
5. graph sheet
III. State whether the following statements are true or false.
Question 1.
The region covered by the boundary of the plane figure is called its volume.
(False) Correct statement: The region covered by the boundary of plane figure is called its area.
Question 2.
Volume of liquids can be found using measuring containers.
True
Question 3.
Water is denser than kerosene.
True
Question 4.
A ball of iron floats in mercury.
True
Question 5.
A substance which contains less number of molecules per unit volume is said to be denser.
False. Correct statement: A substance which contains more number of molecules per unit volume is said to be denser.
IV. Match the items in column – I to the items in column – II :
Question 1.
Column -1 Column – II i. Area (a) light year ii. Distance (b) m3 iii. Density (c) m2 iv. Volume (d) kg V. Mass (e) kg/ m3
1. i
2. ii
3. iii
4. iv
5. v
Question 2.
Column -1 Column – II i. Area (a) g / cm3 ii. Length (b) measuring jar iii. Density (c) amount of a substance iv. Volume (d) rope V. Mass (e) plane figures
1. i
2. ii
3. iii
4. iv
5. v
V. Arrange the following in correct sequence :
Question 1.
1L, 100 cc, 10 L, 10 cc
10 cc, 100 cc, 1L, 10L
Question 2.
Copper, Aluminium, Gold, Iron
Aluminium, Iron, Copper, Gold
VI. Use the analogy to fill in the blank:
Question 1.
Area: M2 :: Volume : _________
M3
Question 2.
Liquid : Litre :: Solid : _________
cm3
Question 3.
Water: Kerosene :: ______ : Aluminium
Iron
VII. Assertion and reason type questions:
Mark the correct choice as
(a) Both assertion and reason are true and reason is the correct explanation of assertion
(b) Both assertion and reason are true, but reason is not the correct explanation of assertion
(c) If assertion is true but reason is false
(d) Assertion is false but reason is true.
Question 1.
Assertion (A) : Volume of a stone is found using a measuring cylinder.
Reason (R) : Stone is an irregularly shaped object.
(a) If both assertion and reason are true and reason is the correct explanation of assertion
Question 2.
Assertion (A) : Wood floats in water.
Reason (R) : Water is a transparent liquid.
(b) If both assertion and reason are true, but reason is not the correct explanation of assertion
Correct explanation: Density of water is more than the density of wood.
Question 3.
Assertion (A) : Iron ball sinks in water.
Reason (R) : Water is denser than iron.
(b) If both assertion and reason are true, but reason is not the correct explanation of assertion
Correct explanation : Density of iron is more than that of water.
Question 1.
Name some of the derived quantities.
Area, volume, density.
Question 2.
Give the value of one light year.
One light year = 9.46 x 1015m
Question 3.
Write down the formula used to find the volume of a cylinder.
Volume of a cylinder = πr2 h
Question 4.
Give the formula to find the density of objects.
Question 5.
Name the liquid in which an iron ball sinks.
Iron ball sinks in water. The density of an iron ball is more than that of water so it sinks in water.
Question 6.
Name the unit used to measure the distance between celestial objects.
Astronomical unit and light year are the units used to measure the distance between celestial objects.
Question 7.
What is the density of gold?
Density of gold is 19,300 kg/m3
Question 1.
What are derived quantities?
The physical quantities which can be obtained by multiplying, dividing or by mathematically combining the fundamental quantities are known as derived quantities.
(or)
The physical quantities which are expressed is terms of fundamental quantities are called
Question 2.
Distinguish between the volume of liquid and capacity of a container.
S.No Volume of liquid Capacity of a container 1. Volume is the amount of space taken up by a liquid Capacity is the measure of an objects ability to hold a substance like solid, liquid or gas 2. It is measured in cubic units. It is measured in litres, gallons, pounds, etc. 3. It is calculated by multiplying the length, width and height of an object. It’s measurement is cc or ml.
Question 3.
Define the density of objects.
Density of a substance is defined as the mass of the substance contained in unit volume
Question 4.
What is one light year?
One light year is the distance travelled by light in vacuum during the period of one year.
1 Light year = 9.46 x 1015m.
Question 5.
Define one astronomical unit?
One astronomical unit is defined as the average distance between the earth and the sun.
1 AU = 1.496 5 106km = 1.496 × 1011m.
Question 1.
Describe the graphical method to find the area of an irregularly shaped plane figure.
To find the area of an irregularly shaped plane figure, we have to use graph paper.
1. Place a piece of paper with an irregular shape on a graph paper and draw its outline.
2. To find the area enclosed by the outline, count the number of squares inside it (M).
3. You will find that some squares lie partially inside the outline.
4. Count a square only if half (p) or more of it (N) lies inside the outline.
5. Finally count the number of squares, that are less than half. Let it be
For the shape in figure we have the following:
M = 50
N = 7
P = 4
Q = 4
Now, the approximate area of the can be calculated using the following formula.
Area of the leaf = M+($$\frac { 3 }{ 4 }$$) N + ($$\frac { 1 }{ 2 }$$) P+$$\frac { 1 }{ 4 }$$ Qsq.cm
= 52 + 5.25 = 58.25 sq.mm = 0.5825 sq.cm
Question 2.
How will you determine the density of a stone using a measuring jar?
Determination of density of a stone using a measuring cylinder.
1. In order to determine the density of a solid, we must know the mass and volume of the stone.
2. The mass of the stone is determined by a physical balance very accurately. Let it be ‘m’ grams.
3. In order to find the volume, take a measuring cylinder and pour in it some water.
4. Record the volume of water from the graduations marked on measuring cylinder. Let it be 40 cm3.
5. Now tie the given stone to a fine thread and lower it gently in the measuring cylinder, such that it is completely immersed in water.
6. Record the new level of water. Let it be 60 cm3
∴Volume of the solid = (60-40) cm3
= 20 cm3 = V cm3 (assume)
Knowing the mass and the volume of the stone, the density can be calculate by the formula:
XI. Questions based on Higher er Thinking skills:
Question 1.
There are three spheres A, B, C as shown below :
Sphere A and B are made of the same material. Sphere C is made of a different material. Spheres A and C have equal radii. The radius of sphere B is half that of A. Density of A is double that of C.
Question 2.
1. Find the ratio of masses of spheres A and B.
2. Find the ratio of volumes of spheres A and B.
3. Find the ratio of masses of spheres A and C.
i. Ratio of masses of spheres A and B
MA : MB
D × VA : D × VB
Let the mass of sphere A = MA
Let the mass of sphere B = MB
Mass = Density × Volume
MA = DA × VA
MB = DB × VB (Density is same)
Volume of Sphere A =$$\frac { 4 }{ 3 }$$πr3
ii. Ratio of volumes of spheres A and B
VA : VB
8 : 1 (As mass is directly proportional to volume)
iii. Ratio of masses of spheres A and C.
[∴ Density of A is double that of C ]
XII. Numerical problems:
Question 1.
A circular disc has a radius 10 cm. Find the area of the disc in m2. (Use n = 3.14)
Given radius = 10 cm = 0.1m
π= 3.14
Area of a circular disc A = ?
Formula : Area of a circle A = πr2
= 3.14 × 0.1 × 0.1
Solution : A = 0.0314 m2
Question 2.
The dimension of a school playground is 800 m x 500 m. Find the area of the ground.
Given :The dimension of a school
Playground = l x b = 800 m x 500 m
Formula :Area of the ground A = l x b
= 800 x 500 = 4,00,000
A = 4,00,000 m2
Question 3.
Two spheres of same size are made from copper and iron respectively. Find the ratio between their masses. Density of copper 8,900 kg/m and iron 7,800 kg/m3
Given : Density Copper Dc = 8900 kg/m3
Density of Iron D1 = 7800 kg/m3
Volume of Copper sphere = Volume of Iron sphere
To find : Ratio of Masses of Copper (MC) and Iron (MI)
Solution: Mass = Density x Volume
MC = DC x V, M1 = D1 x V
MC = 8900 V, M1 = 7,800 V
MC = M1
8900 V : 7800 V
= 1.14: 1
Question 4.
A liquid having a mass of 250 g fills a space of lOOOcc. Find the density of the liquid.
Given : Mass of a liquid M = 250 g
Volume V = l000cc
Density of the liquid D = ?
Solution: Density of the liquid = 0.25 g/cc
Question 5.
A sphere of radius 1cm is made from silver. If the mass of the sphere is 33 g, find the density of silver (Take π = 3.14)
Given : radius of a sphere r = 1cm
Volume of the sphere V = ?
Mass of the sphere M = 33 g
Density of silver D = ?
Solution: Density of silver sphere = 7.889 g/cc.
XIII. Cross word puzzle:
Clues – Across
1. SI unit of temperature
2. A derived quantity
3. Mass per unit volume
4. Maximum volume of liquid a container can hold
Clues – Down
a. A derived quantity
b. SI unit of volume
c. A liquid denser than iron
d. A unit of length used to measure very long distances
Clues – Across
1. KELVIN
2. VOLUME
3. DENSITY
4. CAPACITY
Clues – Down
a. VELOCITY
b. CUBIC METRE
c. MERCURY
d. LIGHT YEAR
### Samacheer Kalvi 7th Science Measurement lntext Activities
Students can practice CBSE Class 7 Science MCQs Multiple Choice Questions with Answers to score good marks in the examination.
Activity -1
Take a leaf from any one of trees in your neighborhood.
Place the leaf on a graph sheet and draw the outline of the leaf with a pencil. Remove the leaf. You can see the outline of the leaf on the graph sheet.
1. Now, count the number of whole squares enclosed within the outline of the leaf. Take it to be M.
2. Then, count the number of squares that are more than half. Take it as N.
3. Next, count the number of squares which are half of a whole square. Note it to be P.
4. Finally, count the number of squares that are less than half. Let it be Q.
5. M = _______;N = _______; P = _______; Q = _______
Now, the approximate area of the leaf can be calculated using the following formula:
Approximate area of the leaf = M +($$\frac { 3 }{ 4 }$$) N+($$\frac { 1 }{ 2 }$$) P+($$\frac { 1 }{ 4 }$$) Q square cm
Area of the leaf =________.
This formula can be used to calculate the area of any irregularly shaped plane figures.
M = 50
N = 7
P = 4
Q = 4
Activity – 2
Draw the following regularly shaped figures on a graph sheet and find their area by the graphical method. Also, find their area using appropriate formula. Compare the results obtained in two methods by tabulating them.
(a) A rectangle whose length is 12 cm and breadth is 4 cm.
(b) A square whose side is 6 cm.
(c) A circle whose radius is 7 cm.
(d) A triangle whose base is 6 cm and height is 8 cm.
Activity – 3
Take a measuring cylinder and pour some water into it (Do not fill the cylinder completely). Note down the volume of water from the readings of the measuring cylinder. Take it as V . Now take a small stone and tie it with a thread. Immerse the stone inside the water by holding the thread. This has to be done such that the stone does not touch the walls of the measuring cylinder. Now, the level of water has raised. Note down the volume of water and take it to be V . The volume of the stone is equal to the raise in the volume of water.
V1= _______
V2=_______
Volume of stone = v2 – v1 =_______.
V1 = 30 cc, V2 = 40 cc; Volume of stone =v2 – v1 = 40cc – 30cc = 10cc
Activity – 4
(a) Take an iron block and a wooden block of same mass (say 1 kg each). Measure their volume. Which one of them has more volume and occupies more volume?
(b) Take an iron block and a wooden block of same size. Weigh them and measure their mass. Which one of them has more mass?
(a) Wooden block has more volume and occupies more volume. (As the molecules of wood are loosely packed)
(b) Iron block has more mass. (In iron block, molecules are closely packed).
### Samacheer Kalvi 7th Science Measurement Additional Questions
Question 1.
The unit of volume is _____
(a) m3
(b) m3
(c) cm3
(d) km
(a) m3
Question 2.
Physical quantities are classified into _____ type
(a) three
(b) two
(c) four
(d) none of the above
(b) two
Question 3.
The SI unit of speed is _____
(a) m/s2
(b) m/s
(c) km/h
(d) m2/s
(a) m/s2
Question 4.
1 litre = ______ cc
(a) 100
(b) 1000
(c) 10
(d) 0.1
1000
Question 5.
The formula to calculate area of a rectangle is _____
(b) side × side
(d) none of the above
Question 6.
_____ is a derived quantity.
(a) length
(b) mass
(c) time
(d) area
(d) area
Question 7.
The amount of space occupied by a three dimensional object is known as its _____
(a) density
(b) volume
(c) Area
(d) mass
(b) volume
Question 8.
The maximum volume of liquid that a continer can hold is _____
(a) area
(b) volume
(c) capacity
(d) density
(c) capacity
Question 9.
The shortest distance between the earth and the sun is called as _____ position.
(a) Light year
(b) normal
(c) perihelion
(d) aphelion
(c) Perihelion
Question 10.
The largest distance between the earth and the sun is called as _____ position.
(a) normal
(b) perihelion
(c) aphelion
(d) none of the above
(c) aphelion
II. Fill in the Blanks.
1. The materials with higher density are called ______
2. The materials with lower density are called ______
3. The area of irregularly shaped figures can be calculated with the help of a ______
4. The SI unit of volume is ______
5. The SI unit of density is ______
6. The CGS unit of density is ______
7. If the density of a solid is lower than that of a liquid it ______ is that liquid
8. If the density of a solid is higher than that of a liquid, it ______ is that liquid.
9. The total number of seconds in one year = ______
10. The average distance between the earth and the sun is about ______ million kilometre.
1. denser
2. rarer
3. graph sheet
4. cubic metre or m3
5. kg/m3
6. g/cm3
7. floats
8. sinks
9. 3.15 3 x 107 second
10. 149.6
III. True or False – if false give the correct statement.
Question 1.
One square metre is the area enclosed inside a square of side 2 metre.
(False) Correct Statement: One square metre is the area enclosed inside a square of side 1 metre.
Question 2.
Area is a derived quantity as we obtain by multiplying twice of the fundamental physical quantity length.
True.
Question 3.
Density of water is 100 kg/m3
(False) Correct statement: Density of water is 1000 kg/m3
Question 4.
Density is defined as the mass of the substance contained in unit volume.
True.
Question 5.
The lightness or heaviness of a body is due to volume
(False) Correct statement: The lightness or heaviness of a body is due to density.
Question 6.
Neptune is 30 AU away from sun.
True.
Question 7.
The nearest star to our solar system is proxima centauri.
True.
Question 8.
The volume of a figure is the region covered by the boundary of the figure.
(False) Correct statement: The area of a figure is the region covered by the boundary of the figure.
Question 9.
1 Light year = 9.46 x 105 m.
True.
Question 10.
One light year is defined as the distance travelled by light inW vacuum during the period of one year.
True.
IV. Match the following :
Question 1.
1 Length (a) ampere (A) 2 time (b) kelvin (K) 3 Mass (c) metre (M) 4 Temperature (d) second (S) 5 Electric current (e) kilogram (K)
1. c
2. d
3. e
4. b
5. a
Question 2.
1. c
2. d
3. a
4. b
V. Assertion and Reason.
Mark the correct choice as
(a) Both A and R are true but R is not the correct reason.
(b) Both A and R are true and R is the correct reason.
(c) A is true but R is false.
(d) A is false but R is true
Question 1.
Assertion (A): The distance between two celestial bodies is measured by the unit of light year.
Reason (R) : The distance travelled by the light in one year in vacuum is called one light year.
(a) Both A and R are true but R is not the correct reason.
Question 2.
Assertion (A): It is easier to swim in sea water than in river water.
Reason (R) : Density of sea water is more than that of river water
(a) Both A and R are true but R is not the correct reason.
(b) Both A and R are true and R is the correct reason.
(c) A is true but R is false.
(d) A is false but R is true.
(b) Both A and R are true and R is the correct reason.
Question 1.
Write the SI unit of speed.
m/s
Question 2.
What is the fundamental unit of amount of substance?
mole (mol)
Question 3.
What are the types of physical quantity?
1. Fundamental quantity
2. Derived quantity.
Question 4.
What is the SI unit of electric charge?
Coulomb (C)
Question 5.
Mention the formula to calculate area of a circle?
n × r2 = πr2.
Question 6.
How do you find the area of irregularly shaped figures?
Graphical method.
Question 7.
How will you determine the volume of a liquid?
By using measuring cylinder.
Question 8.
What are the other units used to measure the volume of liquids?
Gallon, ounce and quart.
Question 9.
Which one of the following has more volume. Iron block or a wooden block of same mass.
Wooden block.
Question 10.
Which one of the following has more density. Water or cooking oil.
Water
Question 1.
What is fundamental quantity? Give examples.
A set of physical quantities which cannot be expressed in terms of any other quantities are known as fundamental quantities. Ex: Length, mass, time.
Question 2.
Define mass Mention its unit.
Mass is the amount of matter contained in a body. It’s unit is kilogram (kg).
Question 3.
What are the multiples and sub multiples of mass?
The multiples of mass are quintal and metric tonne.
The sub-multiples of mass are gram and milligrams.
Question 4.
What is physical quantity? give example.
A quantity that can be measured is called a physical quantity. For example, the length of a piece of cloth, the time at which school begins.
Question 5.
What do you mean by ‘unit’?
The known measure of a physical quantity is called the unit of measurement.
Question 6.
What is measurement?
Comparison of an unknown quantity with a standard quantity is called measurement.
Question 7.
What is meant by area?
Area is the measure of the region inside a closed line.
Question 8.
What is capacity of a container?
The volume of liquid which a container can hold is called its capacity.
Question 9.
What is the relation between density, volume and mass?
Question 10.
Define astronomical unit.
One astronomical unit is defined as the average distance between the earth and the sun. 1AU = 1.496 x 1011 m or 149.6 x 106 m
Question 1.
How will you find the volume of an irregularly shaped object (stone) by using measuring cylinder?
1. Take a measuring cylinder and pour some water into it.
2. Note down the volume of water from the readings of the measuring cylinder.
3. Take it as V1
4. Now take Q small stone and tie it with a thread.
5. Immerse the stone inside the water by holding the thread.
6. This has to be done such that the stone does not touch the walls of the measuring cylinder.
7. Now the level of water has raised.
8. Note down the volume of water and take it to be V2
The volume of the stone is equal to the raise in the volume of water.
V1 = 30cc, V2 = 40cc
Volume of stone = V2 – Vj = 40 – 30 = 10cc
IX. Problems for practice:
Question 1.
A piece of iron weighs 230 g and has a volume of 20 cm3. Find the density of iron.
Solution:
Mass of iron (m) = 230 g
Volume of iron (v) = 20 cm3
Question 2.
Find the mass of silver of volume 50 cm3 and density 10.5 g / cm3
Solution:
Mass of silver (M) = ?
Volume of silver (V) =50 cm3
Density of silver D = 10.5 g/cm3
mass (M) = Density × Volume
= 10.5 x 50 = 525 g
X. Creative questions: HOTS
Question 1.
Why does an iron needle sink in water, but not an iron ship?
Iron needle is compact and its density is 7.6 g/cm3 Thus, as the density of iron needle is more than 1 g/cm3 therefore, it sinks in water. However, the iron ship is constructed in such a way that it is mostly hollow from within, thus, the volume of iron ship becomes very large as compared to its mass and hence its density is less than lg/cm3 . As the density of iron ship is less than 1g/cm3, therefore it floats in water.
## Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 4 Geometry Ex 4.1
Students can Download Maths Chapter 4 Geometry Ex 4.1 Questions and Answers, Notes Pdf, Samacheer Kalvi 7th Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.
## Tamilnadu Samacheer Kalvi 7th Maths Solutions Term 2 Chapter 4 Geometry Ex 4.1
Question 1.
Can 30°, 60° and 90° be the angles of a triangle?
Solution:
Given angles 30°, 60° and 90°
Sum of the angles = 30° + 60° + 90° = 180°
∴ The given angles form a triangle.
Question 2.
Can you draw a triangle with 25°, 65° and 80° as angles?
Solution:
Given angle 25°, 65° and 80°.
Sum of the angles = 25° + 65° + 80° = 170° ≠ 180
∴ We cannot draw a triangle with these measures.
Find the value of x in each of the given triangles.
Question 3.
In each of the following triangles, find the value of x.
Solution:
(i) Let ∠G = x
By angle sum property we know that,
∠E + ∠F + ∠G = 180°
80° + 55° + x = 180°
135° + x = 180°
x = 45°
(ii) Let ∠M = x
By angle sum property of triangles we have
∠M + ∠M + ∠O = 180°
x + 96° + 22° = 180°
x + 118° = 180°
X = 180° – 118° = 620
(iii) Let ∠Z = (2x + 1)° and ∠Y = 90°
By the sum property of triangles we have
∠x + ∠y + ∠z = 180°
29° + 90° + (2x + 1)° = 180°
119° + (2x + 1)° = 180°
(2x + 1)° = 180° – 119°
2x + 1° = 61°
2x = 61° – 1°
2x = 60°
x = $$\frac{60^{\circ}}{2}$$
x = 30°
(iv) Let ∠J = x and ∠L – 3x.
By angle sum property of triangles we have
∠J + ∠K + ∠L = 180°
x + 112° + 3x = 180°
4x = 180° – 112°
x = 68°
x = $$\frac{68^{\circ}}{4}$$
x = 17°
(v) Let ∠S = 3x°
Given $$\overline{\mathrm{RS}}$$ = Given $$\overline{\mathrm{RT}}$$ = 4.5 cm
Given ∠S = ∠T = 3x° [∵ Angles opposite to equal sides are equal]
By angle sum property of a triangle we have,
∠R + ∠S + ∠T = 180°
72° + 3x + 3x = 180°
72° + 6x = 180°
x = $$\frac{108^{\circ}}{6}$$
x = 18°
(vi) Given ∠X = 3x; ∠Y = 2x; ∠Z = ∠4x
By angle sum property of a triangle we have
∠X + ∠Y + ∠Z = 180°
3x + 2x + 4x = 180°
∴ 9x = 180°
x = $$\frac{180^{\circ}}{9}$$ = 20°
(vii) Given ∠T = (x – 4)°
∠U = 90°
∠V = (3x – 2)°
By angle sum property of a triang we have
∠T + ∠U + ∠V = 180°
(x – 4)° + 90° + (3x – 2)° = 180°
x – 4° + 90° + 3x – 2° = 180°
x + 3x + 90° – 4° – 2° = 180°
4x + 84° = 180°
4x = 180° – 84°
4x = 96°
x = $$\frac{96^{\circ}}{4}$$ = 24°
x = 24°
(viii) Given ∠N = (x + 31)°
∠O = (3x – 10)°
∠P = (2x – 3)°
By angle sum property of a triangle we have
∠N + ∠O + ∠P = O
(x + 31)° + (3x – 10)° + (2x – 3)° = 180°
x + 31°+ 3x – 10° + 2x – 3° = 180°
x + 3x + 2x + 31° – 10° – 3° = 180°
6x + 18° = 180°
6x = 180° + 18°
6x = 162°
x = $$\frac{162^{\circ}}{6}$$ = 27°
x = 27°
Question 4.
Two line segments $$\overline{A D}$$ and $$\overline{B C}$$ intersect at O. Joining $$\overline{A B}$$ and $$\overline{D C}$$ we get two triangles, ∆AOB and ∆DOC as shown in the figure. Find the ∠A and ∠B.
Solution:
In ∆AOB and ∆DOC,
∠AOB = ∠DOC [∵ Vertically opposite angles are equal]
Let ∠AOB = ∠DOC = y
By angle sum property of a triangle we have
∠A + ∠B + ∠AOB = ∠D + ∠C + ∠DOC = 180°
3x + 2x + y = 70° + 30° + y = 180°
5x + y = 100° + y = 180°
Here 5x + y = 100° + y
5x = 100° + y – y
5x = 100°
x = $$\frac{100^{\circ}}{5}$$ = 20°
∠A = 3x = 3 × 20 = 60°
∠B = 2x = 2 × 20 = 40°
∠A = 60°
∠B = 40°
Question 5.
Observe the figure and find the value of
∠A + ∠N + ∠G + ∠L + ∠E + ∠S.
Solution:
In the figure we have two triangles namely ∆AGE and ∆NLS.
By angle sum property of triangles,
Sum of angles of ∆AGE = ∠A + ∠G + ∠E = 180° …(1)
Also sum of angles of ∆NLS = ∠N + ∠L + ∠S = 180° … (2)
(1) + (2) ∠A + ∠G + ∠E + ∠N + ∠L + ∠S = 180° + 180°
i.e., ∠A + ∠N + ∠G + ∠L + ∠E + ∠S = 360°
Question 6.
If the three angles of a triangle are in the ratio 3 : 5 : 4, then find them.
Solution:
Given three angles of the triangles are in the ratio 3 : 5 : 4.
Let the three angle be 3x, 5x and 4x.
By angle sum property of a triangle, we have
3x + 5x + 4x = 180°
12x = 180°
x = $$\frac{180^{\circ}}{12}$$
x = 15°
∴ The angle are 3x = 3 × 15° = 45°
5x = 5 × 15° = 75°
4x = 4 × 15° = 60°
Three angles of the triangle are 45°, 75°, 60°
Question 7.
In ∆RST, ∠S is 10° greater than ∠R and ∠T is 5° less than ∠S , find the three angles of the triangle.
Solution:
In ∆RST. Let ∠R = x.
Then given S is ∠10° greater than ∠R
∴ ∠S = x + 10°
Also given ∠T is 5° less then ∠S.
So ∠T = ∠S – 5° = (x + 10)° – 5° = x + 10° – 5°
By angle sum property of triangles, sum of three angles = 180°.
∠R + ∠S + ∠T = 180°
x + x + 10° + x + 5° = 180°
3x + 15° = 180°
3x = 180° – 15°
x = $$\frac{165^{\circ}}{3}$$ = 55°
∠R = x = 55°
∠S = x + 10° = 55° + 10° = 65°
∠T = x + 5° = 55° + 5° = 60°
∴ ∠R = 55°
∠S = 65°
∠T = 60°
Question 8.
In ∆ABC , if ∠B is 3 times ∠A and ∠C is 2 times ∠A, then find the angles.
Solution:
In ABC, Let ∠A = x,
then ∠B = 3 times ∠A = 3x
∠C = 2 times ∠A = 2x
By angle sum property of a triangles,
Sum of three angles of ∆ABC =180°.
∠A + ∠B + ∠C = 180
x + 3x + 2x = 180°
x (1 + 3 + 2) = 180°
6x = 180°
x = $$\frac{180^{\circ}}{6}$$ = 30°
∠A = x = 30°
∠B = 3x = 3 × 30° = 90°
∠C = 2x = 2 × 30° = 60°
∴ ∠A = 30°
∠B = 90°
∠C = 60°
Question 9.
In ∆XYZ, if ∠X : ∠Z is 5 : 4 and ∠Y = 72°. Find ∠X and ∠Z.
Solution:
Given in ∆XYZ, ∠X : ∠Z = 5 : 4
Let ∠X = 5x; and ∠Z = 4x given ∠Y = 72°
By the angle sum property of triangles sum of three angles of a triangles is 180°.
∠X + ∠Y + ∠Z = 180°
5x + 72 + 4x = 180°
5x + 4x = 180° – 72°
9x = 108°
x = $$\frac{108^{\circ}}{9}$$ = 12°
∠X = 5x = 5 × 12° = 60°
∠Z = 4x = 4 × 12° = 48°
∴ ∠X = 60°
∠Z = 48°
Question 10.
In a right angled triangle ABC, ∠B is right angle, ∠A is x + 1 and ∠C is 2x + 5. Find ∠A and ∠C.
Solution:
Given in ∆ABC ∠B = 90°
∠A = x + 1
∠B = 2x + 5
By angle sum property of triangles
Sum of three angles of ∆ABC = 180°
∠A + ∠B + ∠C = 180°
(x + 1) + 90° + (2x + 5) = 180°
x + 2x + 1° + 90° + 5° = 180°
3x + 96° = 180°
3x = 180° – 96° = 84°
x = $$\frac{84^{\circ}}{3}$$ = 28°
∠A = x + 1 = 28 + 1 = 29
∠C = 2x + 5 = 2 (28) + 5 = 56 + 5 = 61
∴ ∠A = 29°
∠C = 61°
Question 11.
In a right angled triangle MNO, ∠N = 90°, MO is extended to P. If ∠NOP = 128°, find the other two angles of ∆MNO.
Solution:
Given ∠N = 90°
MO is extended to P, the exterior angle ∠NOP = 128°
Exterior angle is equal to the sum of interior opposite angles.
∴ ∠M + ∠N = 128°
∠M + 90° = 128°
∠M = 128° – 90°
∠M = 38°
By angle sum property of triangles,
∴ ∠M + ∠N + ∠O = 180°
38° + 90° + ∠O = 180°
∠O = 180° – 128°
∠O = 52°
∴ ∠M = 38° and ∠O = 52°
Question 12.
Find the value of x in each of the given triangles.
Solution:
(i) In ∆ABC, given B = 65°,
AC is extended to L, the exterior angle at C, ∠BCL = 135°
Exterior angle is equal to the sum of opposite interior angles.
∠A + ∠B = ∠BCL
∠A + 65° = 135°
∠A = 135° – 65°
∴ ∠A = 70°
x + ∠A = 180° [∵ linear pair]
x + 70° = 180° [∵ ∠A = 70°]
x = 180° – 70°
∴ x = 110°
(ii) In ∆ABC, given B = 3x – 8°
∠XAZ = ∠BAC [∵ vertically opposite angles]
8x + 7 + ∠BAC
i.e., In ∆ABC, ∠A = 8x + 7
Exterior angle ∠XCY = 120°
Exterior angle is equal to the sum of the interior opposite angles.
∠A + ∠B = 120°
8x + 7 + 3x – 8 = 120°
8x + 3x = 120° + 8 – 7
11x = 121°
x = $$\frac{121^{\circ}}{11}$$ = 11°
Question 13.
In ∆LMN, MN is extended to O. If ∠MLN = 100 – x, ∠LMN = 2x and ∠LNO = 6x – 5, find the value of x.
Solution:
Exterior angle is equal to the sum of the opposite interior angles.
∠LNO = ∠MLN + ∠LMN
6x – 5 = 100° – x + 2x
6x – 5 + x – 2x = 100°
6x + x – 2x = 100° + 5°
5x = 105°
x = $$\frac{105^{\circ}}{5}$$ = 21°
x = 21°
Question 14.
Using the given figure find the value of x.
Solution:
In ∆EDC, side DE is extended to B, to form the exterior angle ∠CEB = x.
We know that the exterior angle is equal to the sum of the opposite interior angles
∠CEB = ∠CDE + ∠ECD
x = 50° + 60°
x = 110°
Question 15.
Using the diagram find the value of x.
Solution:
Given triangle is an equilateral triangle as the three sides are equal. For an equilateral triangle all three angles are equal and is equal to 60° Also exterior angle is equal to sum of opposite interior angles.
x = 60° + 60°.
x = 120°
Objective Type Questions
Question 16.
The angles of a triangle are in the ratio 2:3:4. Then the angles are
(i) 20,30,40
(ii) 40, 60, 80
(iii) 80, 20, 80
(iv) 10, 15, 20
(ii) 40, 60, 80
Question 17.
One of the angles of a triangle is 65°. If the difference of the other two angles is 45°, then the two angles are
(i) 85°, 40°
(ii) 70°, 25°
(iii) 80°, 35°
(iv) 80° , 135°
(iii) 80°,35°
Question 18.
In the given figure, AB is parallel to CD. Then the value of b is
(i) 112°
(ii) 68°
(iii) 102°
(iv) 62° A
(ii) 68°
Question 19.
In the given figure, which of the following statement is true?
(i) x + y + z = 180°
(ii) x + y + z = a + b + c
(iii) x + y + z = 2(a + b + c)
(iv) x + y + z = 3(a + b + c)
Ans :
(iii) x + y + z = 2(a + b + c)]
Question 20.
An exterior angle of a triangle is 70° and two interior opposite angles are equal. Then measure of each of these angle will be
(i) 110°
(ii) 120°
(iii) 35°
(iv) 60°
(iii) 35°
Question 21.
In a ∆ABC, AB = AC. The value of x is _____.
(i) 80°
(ii) 100°
(iii) 130°
(iv) 120° | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7792416214942932, "perplexity": 3248.3243190484645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104672585.89/warc/CC-MAIN-20220706121103-20220706151103-00512.warc.gz"} |
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHBB_2005_v42n1_111 | COMPOSITION OPERATORS ON THE PRIVALOV SPACES OF THE UNIT BALL OF ℂn
Title & Authors
COMPOSITION OPERATORS ON THE PRIVALOV SPACES OF THE UNIT BALL OF ℂn
UEKI SEI-ICHIRO;
Abstract
Let B and S be the unit ball and the unit sphere in $\small{\mathbb{C}^n}$, respectively. Let $\small{{\sigma}}$ be the normalized Lebesgue measure on S. Define the Privalov spaces $N^P(B)\;(1\;<\;p\;<\;{\infty})$ by N^P(B)\;=\;\{\;f\;{\in}\;H(B) : \sup_{0 be a holomorphic self-map of B. Let $\small{{\mu}}$ denote the pull-back measure $\small{{\sigma}o({\varphi}^{\ast})^{-1}}$. In this paper, we prove that the composition operator $\small{C_{\varphi}}$ is metrically bounded on $\small{N^P}$(B) if and only if $\small{{\mu}(S(\zeta,\delta)){\le}C{\delta}^n}$ for some constant C and $\small{C_{\varphi}}$ is metrically compact on $\small{N^P(B)}$ if and only if $\small{{\mu}(S(\zeta,\delta))=o({\delta}^n)}$ as $\small{{\delta}\;{\downarrow}\;0}$ uniformly in $\small{{\zeta}\;\in\;S}$. Our results are an analogous results for Mac Cluer's Carleson-measure criterion for the boundedness or compactness of $\small{C_{\varphi}}$ on the Hardy spaces $\small{H^P(B)}$.
Keywords
Hardy spaces;Privalov spaces;composition operators;unit ball of $\small{\mathbb{C}^n}$;
Language
English
Cited by
1.
On a product-type operator from Bloch spaces to weighted-type spaces on the unit ball, Applied Mathematics and Computation, 2011, 217, 12, 5930
2.
Weighted composition operators from Bergman–Privalov-type spaces to weighted-type spaces on the unit ball, Applied Mathematics and Computation, 2010, 217, 5, 1939
3.
Weighted Composition Operators and Integral-Type Operators between Weighted Hardy Spaces on the Unit Ball, Discrete Dynamics in Nature and Society, 2009, 2009, 1
4.
Composition Operators from the Weighted Bergman Space to the th Weighted Spaces on the Unit Disc, Discrete Dynamics in Nature and Society, 2009, 2009, 1
5.
Composition followed by differentiation from H∞ and the Bloch space to nth weighted-type spaces on the unit disk, Applied Mathematics and Computation, 2010, 216, 12, 3450
6.
On the Generalized Hardy Spaces, Abstract and Applied Analysis, 2010, 2010, 1
7.
On operator from the logarithmic Bloch-type space to the mixed-norm space on the unit ball, Applied Mathematics and Computation, 2010, 215, 12, 4248
8.
Norms of multiplication operators on Hardy spaces and weighted composition operators from Hardy spaces to weighted-type spaces on bounded symmetric domains, Applied Mathematics and Computation, 2010, 217, 6, 2870
9.
Products of composition and differentiation operators from Zygmund spaces to Bloch spaces and Bers spaces, Applied Mathematics and Computation, 2010, 217, 7, 3144
10.
Norms of some operators on bounded symmetric domains, Applied Mathematics and Computation, 2010, 216, 1, 187
11.
Weighted differentiation composition operators from H∞ and Bloch spaces to nth weighted-type spaces on the unit disk, Applied Mathematics and Computation, 2010, 216, 12, 3634
References
1.
J. S. Choa and H. O. Kim, Composition operators on some F-algebras of holo- morphic functions, Nihonkai Math. J. 7 (1996), 29-39
2.
J. S. Choa and H. O. Kim, Composition operators between Nevanlinna-type spaces, J. Math. Anal. Appl. 257 (2001), 378-402
3.
C. C. Cowen and B. D. MacCluer, Composition Operators on Spaces of Analytic Functions, CRC Press, 1994
4.
B. D. MacCluer, Spectra of compact composition operators on $H^p(B_N)$, Analysis 4 (1984), 87-103
5.
B. D. MacCluer, Compact composition operators on $H^p(B_N)$, Michigan Math. J. 32 (1985), 237-248
6.
B. D. MacCluer and J. H. Shapiro, Angular derivatives and compact composition operators on the Hardy and Bergman spaces, Canad. J. Math. 38 (1986), 878-906
7.
Y. Matsugu and S. Ueki, Isometries of weighted Bergman-Privalov spaces on the unit ball of $C^n$, J. Math. Soc. Japan 54 (2002), 341-347
8.
S. C. Power, Hormander's Carleson theorem for the ball, Glasg. Math. J. 26 (1985), 13-17
9.
W. Rudin, Function Theory on the Unit Ball of $C^n$, Springer-Verlag, Berlin, New York, Heiderberg, 1980
10.
M. Stoll, Mean growth and Taylor coefficients of some topological algebras of analytic functions, Ann. Polon. Math. 35 (1977), 139-158
11.
M. Stoll, Invariant Potential Theory in the Unit Ball of $C^n$, Cambridge Univ. Press, 1994
12.
A. V. Subbotin, Functional properties of Privalov spaces of holomorphic functions in several variables, Math. Notes 65 (1999), 230-237
13.
J. Xiao, Compact composition operators on the Area-Nevanlinna class, Expo. 17 (1999), 255-264 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 15, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9785030484199524, "perplexity": 1133.4381173634592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423681.33/warc/CC-MAIN-20170721022216-20170721042216-00489.warc.gz"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=131&t=42456&p=216404 | ## 5/2R vs 3/2R
$\Delta S = \frac{q_{rev}}{T}$
Briana Yik 1H
Posts: 30
Joined: Fri Sep 28, 2018 12:20 am
### 5/2R vs 3/2R
how do we know when to use 5/2R versus 3/2R?
lizettelopez1F
Posts: 29
Joined: Fri Sep 28, 2018 12:19 am
### Re: 5/2R vs 3/2R
Cp=5/2r is used when you need to find the heat capacity of an ideal gas with a constant pressure. Cv=3/2R is used when you need to find the heat capacity of an ideal gas with a constant volume
KatrinaPho_2I
Posts: 60
Joined: Fri Sep 28, 2018 12:28 am
### Re: 5/2R vs 3/2R
These values only work if you are dealing with monoatomic ideal gases.
abbydouglas1K
Posts: 65
Joined: Fri Sep 28, 2018 12:26 am
### Re: 5/2R vs 3/2R
If the question says that the pressure is being kept constant(Cp) or that the volume is being kept constant (Cv) you will use 5/2R and 3/2R respectively if the molecule in question is a monatomic gas.
Christopher Anisi 2K
Posts: 30
Joined: Fri Sep 28, 2018 12:21 am
### Re: 5/2R vs 3/2R
For a monoatomic gas under a constant pressure the Cp value is equal to 5/2R however if it is under a constant volume the Cv value is equal to 3/2R.
Morgan Carrington 2H
Posts: 54
Joined: Wed Nov 14, 2018 12:22 am
### Re: 5/2R vs 3/2R
KatrinaPho_2I wrote:These values only work if you are dealing with monoatomic ideal gases.
What does you mean by a monoatomic gas?
Claire_Kim_2F
Posts: 67
Joined: Wed Sep 30, 2020 10:02 pm
### Re: 5/2R vs 3/2R
Monoatomic gases are gases that are noble gases like halogen and other substances. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5402328372001648, "perplexity": 6158.004398152943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703587074.70/warc/CC-MAIN-20210125154534-20210125184534-00333.warc.gz"} |
http://mathhelpforum.com/pre-calculus/3638-factor-theorem.html | # Math Help - factor theorem.
1. ## factor theorem.
Let f(x)=x^3-8x^2+17x-9. Use the factor theorem to find other solutions to f(x)-f(1)=0, besides x=1. My answer is 2,5 could that be right. thanks for looking.
2. Originally Posted by kwtolley
Let f(x)=x^3-8x^2+17x-9. Use the factor theorem to find other solutions to f(x)-f(1)=0, besides x=1. My answer is 2,5 could that be right. thanks for looking.
$f(x)=x^3-8x^2+17x-9$
$f(x)=f(1)=1-8+17-9=1$
$f(x)=x^3-8x^2+17x-9=1$
$f(x)=x^3-8x^2+17x-10=0$
Yes, 2 and 5 are solutions of this equation besides 1.
KeepSmiling
Malay
3. Hello, kwtolley!
Another approach . . .
Let $f(x) \:=\:x^3 - 8x^2 + 17x - 9$
Use the factor theorem to find other solutions to $f(x)-f(1)\,=\,0$, besides $x=1.$
My answers are: $2,\;5.$
We are given: . $f(x) - f(1) \:= \:0$
]. . . . . $(x^3 - 8x^2 + 17x - 9) - (1^3 - 8\cdot1^2 + 17\cdot1 - 9) \;= \;0$
. . . . . . . . . $(x^3 - 1^3) - 8(x^2 - 1^2) + 17(x - 1) - 9 + 9 \;= \;0$
. . . $(x - 1)(x^2 + x + 1) - 8(x - 1)(x + 1) + 17(x - 1) \;= \;0$
. . . . . . . . . . . . . . . $(x - 1)(x^2 + x + 1 - 8[x+1] + 17) \;= \;0$
. . . . . . . . . . . . . . . . . . . . . . . . . $(x - 1)(x^2 - 7x + 10) \;= \;0$
. . . . . . . . . . . . . . . . . . . . . . . . . $(x - 1)(x - 2)(x - 5) \;= \;0$
Therefore, the solutions are: . $x \:= \:1,\;2,\;5$
4. Originally Posted by Soroban
Hello, kwtolley!
Another approach . . .
We are given: . $f(x) - f(1) \:= \:0$
]. . . . . $(x^3 - 8x^2 + 17x - 9) - (1^3 - 8\cdot1^2 + 17\cdot1 - 9) \;= \;0$
. . . . . . . . . $(x^3 - 1^3) - 8(x^2 - 1^2) + 17(x - 1) - 9 + 9 \;= \;0$
. . . $(x - 1)(x^2 + x + 1) - 8(x - 1)(x + 1) + 17(x - 1) \;= \;0$
. . . . . . . . . . . . . . . $(x - 1)(x^2 + x + 1 - 8[x+1] + 17) \;= \;0$
. . . . . . . . . . . . . . . . . . . . . . . . . $(x - 1)(x^2 - 7x + 10) \;= \;0$
. . . . . . . . . . . . . . . . . . . . . . . . . $(x - 1)(x - 2)(x - 5) \;= \;0$
Therefore, the solutions are: . $x \:= \:1,\;2,\;5$
Great.
Alternative approach are always interesting.
KeepSmiling
Malay
5. ## Factor theorem
Thanks to everyone for looking it over with me. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9866786003112793, "perplexity": 66.95012432627594}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442900.2/warc/CC-MAIN-20141017005722-00164-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://hal.in2p3.fr/in2p3-00149009 | # Correlations of neutral and charged particles in $^{40}$Ar- $^{58}$Ni reaction at 77 MeV/u
Abstract : The measurement of the two-particle correlation function for different particle species allows to obtain information about the development of the particle emission process: the space-time properties of emitting sources and the emission time sequence of different particles. The single-particle characteristics and two-particle correlation functions for neutral and charged particles registered in forward direction are used to determine that the heavy fragments (deuterons and tritons) are emitted in the first stage of the reaction (pre-equilibrium source) while the majority of neutrons and protons originates from the long-lived quasi-projectile. The emission time sequence of protons, neutrons and deuterons has been obtained from the analysis of non-identical particle correlation functions.
Keywords :
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00149009
Contributor : Lpsc Bibliotheque <>
Submitted on : Thursday, May 24, 2007 - 9:40:09 AM
Last modification on : Thursday, November 19, 2020 - 12:58:48 PM
### Citation
K. Wosinska, J. Pluta, F. Hanappe, L. Stuttge, J.C. Angélique, et al.. Correlations of neutral and charged particles in $^{40}$Ar- $^{58}$Ni reaction at 77 MeV/u. European Physical Journal A, EDP Sciences, 2007, 32, pp.55-59. ⟨10.1140/epja/i2006-10279-1⟩. ⟨in2p3-00149009⟩
Record views | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6578194499015808, "perplexity": 3750.519513200186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195745.90/warc/CC-MAIN-20201128184858-20201128214858-00233.warc.gz"} |
Subsets and Splits